
“I, Robot” – the 3 laws considered harmful - edent
https://shkspr.mobi/blog/2019/01/i-robot-the-3-laws-considered-harmful/
======
one-more-minute
I think Asimov knew all this – his stories are about how the laws can be
twisted into having surprising consequences. It's meant as a Sci-Fi version of
the "literal genie" [0].

Has anyone actually taken them as a serious suggestion in AI ethics?

[0]
[https://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie](https://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie)

~~~
TeMPOraL
Seconded. Somehow, the fact that the Three Laws _are_ broken, and that Asimov's
works are all about showing how and why they're broken, is something most
people missed. Popular culture ran off with the Laws, forgetting the
context.

~~~
PurpleRamen
I think most people haven't read Asimov. They only know the laws because of
their hype, but don't know their origin or meaning.

Thus, we need more Asimov on TV.

~~~
BerislavLopac
Oh, I would die for an R. Daneel Olivaw TV procedural!

(Hm, apparently BBC had made adaptations of The Caves of Steel and The Naked
Sun in the sixties...
[https://en.wikipedia.org/wiki/R._Daneel_Olivaw#Appearances_i...](https://en.wikipedia.org/wiki/R._Daneel_Olivaw#Appearances_in_other_media))

------
jawns
This is silly.

The Three Laws of Robotics are not harmful because _they can't actually be
implemented_.

Asimov never details how the three laws are baked into positronic brains in so
fundamental a way that they can't be disabled without destroying the robot.
Heck, we don't even know how to build a positronic brain -- because it's a
fictional technology!

So even if you _wanted_ to build some sort of AI that is guided by the Three
Laws, you couldn't.

You might be able to build AI that is guided by certain ethical principles,
but right now, in the current state of technology, it's all rules-based stuff
-- like a self-driving car with a rule about slowing down to avoid an impact
that could injure someone.
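
The rules-based approach described above can be sketched roughly as follows. This is a hypothetical illustration of the kind of hard-coded safety rule mentioned (the function name, scenario, and braking figures are made up for the example, not taken from any real self-driving stack):

```python
def safe_speed(current_speed_mps: float, distance_to_obstacle_m: float,
               max_decel_mps2: float = 6.0) -> float:
    """Return the highest speed (m/s) from which the vehicle can still
    stop before reaching the obstacle, assuming constant braking
    deceleration, and cap the current speed at that value."""
    # Stopping kinematics: v^2 = 2 * a * d  =>  v = sqrt(2 * a * d)
    stoppable_speed = (2 * max_decel_mps2 * distance_to_obstacle_m) ** 0.5
    # The "rule": never travel faster than the speed you can stop from.
    return min(current_speed_mps, stoppable_speed)
```

The point being: this is an explicit, human-authored rule about one narrow situation, not an ethical intuition. The car "avoids harm" only in the sense that an engineer wrote down a specific condition and a specific response.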

Nobody knows how to give AI an ethical intuition, much less tweak that
intuition. (I'm an objectivist, so I actually prefer saying, "Nobody knows how
to give AI the ability to perceive metaphysical principles.")

This whole post is like saying that Star Trek's Warp Drive is harmful because
we shouldn't be trying to travel faster than light.

~~~
Topolomancer
Correct. By the author's logic, the article should _also_ be considered
harmful, because it is itself quite vague. In fact, it doesn't even go into
any detail about the vagueness it complains of. Moreover, it omits the fact
that Asimov deliberately plays with these laws in all his books (I think this
has been mentioned already in another comment), showing exactly the
interesting consequences of the wording...

While I like AI policy discussions a lot, this article does not help
unfortunately. :-/

------
inherentFloyd
The point of "I, Robot" was to show how these laws can lead to dangerous
results if followed to the letter, and their inherent contradictions. I agree
with the author: applying the Three Laws to actual AI research and engineering
is dangerous. Thankfully, I haven't seen too many people in AI research that
actually advocate for their use.

------
dhruvmittal
I get the impression that most people arguing in favor of the 3 laws have not
actually read I, Robot. There are many places in pop culture where the laws
appear, so it's not impossible that someone might quote them devoid of
context.

To catch people up: I, Robot is a collection of short stories that largely
follow either (a) robopsychologist Susan Calvin or (b) robotics engineers
Powell and Donovan as they attempt to diagnose anomalous robot behavior
through examination of the 3 laws of robotics, highlighting the logical flaws
and traps of these simple laws. A secondary theme is the use of robots as a
mirror or lens through which particular human behaviors can be blown up and
examined, largely through the absence or magnification of particular traits.

In universe, the 3 laws are considered practical engineering safeguards (and
eventually, over hundreds of years, a fundamental building block of the
positronic brain's function) -- even as they are shown to the reader to be
the origin of many conflicts.

------
dpark
"Considered harmful" has become a lazy writing device. There's nothing valid
here that even says why these are harmful except possibly that they are
"vague". The only thing that looks even remotely like a real concern (that
people, even scientists, believe these are somehow magically hard-wired laws)
is not cited and appears to be entirely manufactured.

~~~
Bubass
I think you may have missed the joke. The article explicitly calls out the
word "harm", as it's used in the three laws, as being a source of vagueness,
which the article then goes on to say becomes fodder for some of the conflicts
in the stories.

------
Bubass
The article specifically calls out the word "harm" in the three laws as a
source of ambiguity. This, as the author also points out, becomes a source of
conflict in the I, Robot stories.

The author is making a joke using the "considered harmful" trope/meme in
conjunction with the assertion that the word "harm" is ambiguous, thereby
rendering the phrase "considered harmful" relatively meaningless.

Unfortunately, this joke seems to be lost on many of the commenters here.
Technically minded folks seem to see the phrase "considered harmful" and
proceed to lose their minds. Never mind that the phrase, in my experience, is
almost exclusively used in jest. But based on people's reactions to it,
"considered harmful" in a title might as well be flame bait. In fact, I seem
to remember a serious article a while back called "'Considered Harmful'
Considered Harmful" that should have put this whole thing to bed.

------
ergothus
Leaving aside the literary purpose for the moment -- when I go back and look
at them now, multiple decades after I first read them, I see a massive
human-centric focus that I never did before. WHY is a human life more valuable
than a robotic one? WHY is human "harm" so bad? WHY are human commands so
powerful?

I mean, for story purposes it all makes sense, but I'm fascinated that I never
raised these questions myself before. I accepted human divine right
unquestioningly.

~~~
dpark
> _WHY is a human life more valuable than a robotic one?_

This "but why humans" reaction is bizarre to me every time I encounter it. We
care about humans because we are humans. There's nothing deeper to uncover. We
care about ourselves.

Humans do not care about the value of a robot "life", because we generally
feel it has none.

~~~
ergothus
> There's nothing deeper to uncover.

For me, at least, you're coming at it from the other direction.

I'm not really asking "why" in the sense of "how did this come to be" -- the
answer to that is rather self-evident. I'm asking "why" in the sense of "is
this objectively correct? Is this how I want things to be?" -- questions
which have no answer, but the effort of trying to find one can uncover plenty
of deeper ideas.

~~~
dpark
There is no meaningful answer to your question. For religious individuals, the
answer is dogmatic (and generally _yes_ , humans have value above all else
except God(s)). For atheists, the answer is logically a vacuous _no_ , because
all of existence is without meaning.

~~~
ergothus
>> which have no answer

> There is no meaningful answer to your question

Didn't I say that? Didn't I point out that the ATTEMPT to answer was valuable?

>> Is this how I want things to be?

> because all of existence is without meaning

Atheists don't have to be nihilists.

~~~
dpark
I’d love to see a meaningful argument to the contrary. As near as I can tell,
atheism implies nihilism, and the extent to which an atheist disagrees is
determined entirely by how convincingly they can lie to themselves.

~~~
mfoy_
Atheism only implies nihilism if you subscribe to the concept that values are
objective and morality is universal, which begs the question of who determines
values and what is morally "good" and how.

~~~
dpark
So you assert that an atheist can find meaning by following their _subjective_
values and morality, correct? This seems entirely circular and equivalent to
saying that atheists can find meaning by valuing the things that they value.

How does an atheist hold subjective morals and values without those morals and
values essentially being arbitrary?

~~~
mfoy_
What's wrong with them being arbitrary?

~~~
dpark
In what way are arbitrary values _meaningful_?

~~~
mfoy_
Define "meaningful".

You seem to have a really big chicken-and-egg problem here... you want
morality to be universal. The problem is, "ethics", "morality", and "values"
are all human constructs. We created them, we define them, we discuss them ad
nauseam. They are as varied as we are. I don't think that makes them
"meaningless".

~~~
dpark
I don’t have a chicken and egg problem. The chicken and egg problem arises
from an attempt to pull meaning out of an arbitrary universe.

You say in different words that things are meaningful only because we believe
they are. That’s fine if it helps you but it’s also circular and rather
solipsistic. Your stance is also literally nihilistic: “a doctrine that denies
any objective ground of truth and especially of moral truth”.

~~~
mfoy_
See:
[https://news.ycombinator.com/item?id=19048897](https://news.ycombinator.com/item?id=19048897)

------
jrace
The whole point of the 3 laws is not to define the exact laws that need to be
followed; instead, it is a comment on the fact that we must have some laws
governing the operation of devices capable of autonomous action.

The fact that they are still discussed in regard to robotics means the 3 laws
are not harmful; in actuality, they have sparked discussion and thought. That
is powerful, not harmful.

------
dekhn
From the day I learned about them, I considered them an unreasonable bar for
robots; to implement them requires effectively far-beyond human capabilities.
I figure, if you have the tech to implement the 3 laws, you have the tech to
go sublime.

~~~
AtlasBarfed
And they are computationally impossible to satisfy with any degree of
certainty, based on even a cursory examination of quantum physics, chaos
theory, the theory of computation, and complexity theory.

------
Isamu
This actually comes up a lot - people writing about problems inherent in
Asimov's laws.

What I never see is what you would replace them with, and the problems
inherent in that. That omission to me is scarier than these 3 laws.

~~~
mcguire
"An it hurts no one, do what thou wilt shall be the whole of the law."

