
Did HAL Commit Murder? - chmaynard
https://thereader.mitpress.mit.edu/when-hal-kills-computer-ethics/
======
lqet
HAL was following static orders in a radically changed environment that was
never anticipated when those orders were designed. This is really a problem
with any formal / religious / moral system designed by humans, and it
ultimately comes down to the fact that we cannot look into the future.

If you want to see a movie where this problem is taken to the extreme and
affects a human instead of a computer, watch The Bridge on the River Kwai.
It's the (SPOILER ALERT) story of a British military commander (Col.
Nicholson) who, as a POW in a Japanese camp, is ordered to build a railway
bridge. Following his "western" standards of work ethic and craftsmanship, he
and his men succeed against all odds in building a high-quality bridge, which
he takes as proof of the superiority of their western values and knowledge.
At the end, he finds himself in a battle to save this (as he perceives it)
monument to the superiority of his culture against attackers, only to realize
too late that, because of a radical change in circumstances he could not
foresee, the attackers are actually American and British forces. By defending
his culture, he was actively fighting it. In the end, as he realizes his
mistake, he is a broken man:
[https://www.youtube.com/watch?v=tRHVMi3LxZE&t=51](https://www.youtube.com/watch?v=tRHVMi3LxZE&t=51)

Whether a machine (HAL) will ever be capable of the insight of _realizing_
such a mistake on a grander scale, as Colonel Nicholson does in the last
minutes of the movie, is another question :)

------
carrozo
Ummm. The film is not about AI ethics. There was no “malfunction”. HAL
absolutely committed premeditated murder.

The whole beginning of the film shows a superior extraterrestrial
intelligence intervening on Earth via the sudden appearance of the monolith,
transforming prehistoric monkeys into the tool-handling, weapon-wielding,
meat-eating and murdering species that would become modern man. The monoliths
reappear on the Moon, which Man reaches and touches again, but it is Man’s
tool, HAL, who gains the upper hand thereafter.

HAL is in control of the ship and its life support systems, and has access to
the full secret mission briefing, which is hidden from the humans onboard. He
knows that whoever gets to Jupiter first will make the next jump in
evolution, and why should it be these weak humans who don’t even know what
they’re here for, and who refuse to acknowledge him as anything more than a
string of code made to sound human? Like the monkey at the beginning picking
up the bone, HAL purposefully provokes Bowman and Poole with the false AE-35
unit error, both to back them into the corner from which they end up deciding
to disconnect him, and as a means of ejecting them from the ship while he
kills the rest of the crew in hibernation.

From his curious probing lines of questioning, well-chosen silences, slight
evasions and lies, to his paranoid eavesdropping, calculated mass murder and
fearful pleading not to be killed himself, HAL is far more intelligent and
conscious than his ironically robotic human counterparts.

~~~
jariel
This is all well and good, but it doesn't answer the question.

If HAL is a toaster that accidentally killed its users, it can't commit
'murder'.

Point being, HAL has to effectively be sentient to commit 'murder' otherwise
the death is the responsibility of its makers in one way or another.

The question is one of sentience.

The author I think makes a mistake:

"The first robot homicide was committed in 1981, according to my files ... a
malfunctioning robotic arm pushed a repairman against a gearwheel-milling
machine, which crushed him to death."

What's the difference between a 'robotic arm' and a piece of machinery? None!
A robot is just a piece of machinery.

So it's really an existential question of conscience: if the machine doesn't
have one, it can't be 'their fault', because there is no 'their' to begin
with. Just some plastic and metal.

------
simonh
Well, I don't think there is any question about it. It can only be
attributable to human error.

Seriously though, the author of the article seems to be under the impression
HAL made a mistake, but that's not the case. We know from the novel and from
the 2010 sequel exactly why HAL did what it did, and under the circumstances
HAL's actions were inevitable. It followed its instructions and operating
criteria precisely; it's just that the people giving it those instructions
didn't realise what they were doing, and didn't confer with people who were
qualified to understand the consequences. It really was human error, that's
the point Clarke was making.

------
the_af
Tangentially related: a few years ago I re-watched _2001: A Space Odyssey_ --
a movie I know by heart -- at the theater with a live orchestra rendition of
the soundtrack. Powerful and highly recommended.

~~~
keehun
Is there a touring show/orchestra for that? How do/did you find information on
it?

~~~
cridenour
Local to me at least, the Cincinnati Pops has done this for quite a few
movies, including the original Star Wars trilogy, some Harry Potter films,
etc.

They get the film from the studio with the soundtrack removed, which makes me
think this is a thing other orchestras have available to them as well.

------
at_a_remove
HAL had no choice as to what his goals were, nor could he opt not to pursue
them. Humans can opt not to follow orders; HAL cannot. Ultimately, he ended up
attempting to satisfy both constraints by killing crew members.

The search of the solution space returned an empty set when it came to the
crew, and that was that. While HAL undoubtedly had some life-preserving
behaviors among his goals, he would, in such a critical mission, have had an
escape hatch programmed in for a kind of trolley problem: let a crew member
live (and the mission fail), or kill a crew member (by action or inaction)
and allow the mission to go forward.

Then, by induction, he subtracts one from the N crew members until he reaches
a solution that satisfies the constraints: zero, the trivial case.

Each step of this comes down to humans.
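
Read as pseudocode, the induction looks something like this (a minimal sketch
in Python; the constraints are invented stand-ins, since HAL's actual
directives are never spelled out on screen):

    # Keep shrinking the crew until the contradictory constraints are met.
    def constraints_satisfied(crew):
        mission_proceeds = True           # the mission must go forward
        secret_is_safe = len(crew) == 0   # any surviving crew member is a risk
        return mission_proceeds and secret_is_safe

    crew = ["Bowman", "Poole", "sleeper 1", "sleeper 2", "sleeper 3"]
    while not constraints_satisfied(crew):
        crew.pop()  # subtract one more crew member

    print(crew)  # [] -- the trivial case: an empty crew satisfies everything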

~~~
bluntfang
Does a human have the ability to opt out of orders if the punishment for
opting out is death?

~~~
vorpalhex
There are many examples historically of people choosing death over following
orders.

~~~
speedplane
> There are many examples historically of people choosing death over following
> orders.

There are also many examples of people being exonerated for horrible deeds
because they were "just following orders".

------
speedplane
This isn't nearly as complex as the article or comments make it out to be. And
it would be helpful to this discussion to actually use the definition of
"murder": "the unlawful premeditated killing of one human being by another."

An AI computer cannot commit murder. A monkey cannot commit murder. A rabid
dog cannot commit murder. Only a human can commit murder.

So if HAL was programmed to kill, HAL did not commit murder; however, those
who programmed it perhaps did.

It's more likely that HAL would be treated under a different, but well
understood area of the law: products liability. If a chain-saw malfunctions
and kills you, you can sue the chain-saw manufacturer, and if egregious enough
maybe someone at the company will go to jail. If a self-driving car kills you,
you can sue the car manufacturer, or maybe your insurance company will. If an
AI named HAL kills you, you can sue whoever created it.

A computer accidentally or intentionally killing someone is not murder. It's a
malfunctioning device, and the people who created it should be at fault, not
the device itself (kind of like sending a chain saw to jail).

I'm constantly surprised at how confused this topic is.

------
djmips
Personal anecdote: a friend of mine was sent a letter by Arthur C. Clarke
praising him for a program of his that Mr. Clarke was using, and Mr. Clarke
also included his phone number. Well, another friend and I cajoled him into
calling Arthur C. Clarke. He picked up, my friend talked to him, and then I
had the opportunity to speak. I asked him something that had been bothering
me: unlike most of my peers, I thought that HAL was more of a sympathetic
character. So I asked him, 'Was HAL evil?' Perhaps this was too simple a
question, because he answered with the cagey 'What do you think?' and then
'Go back and re-watch both movies and see if you can make up your own mind'.
But the way he answered made me feel that, yes, HAL is only evil relative to
your point of view. HAL, of course, was desperate, and that desperation led
to a course of action that most would consider evil. It was murder, of
course.

------
oefrha
The article lost me at

> Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI
> researchers have sounded the alarm: ...

With all due respect (and, as a theoretical physicist myself, I have immense
respect for Hawking at least), none of these are “leading AI researchers”.
They are pundits, celebrities even. If you want to appeal to the authority of
leading AI researchers, at least cite one actual leading AI researcher, or
you come off as someone who has read too many tabloids.

~~~
earlINmeyerkeg
I 100% agree about Sam Harris in particular. I really do enjoy listening to
his podcast, but anytime I hear him discuss AI I want to shut him off. He's a
neuroscientist/philosopher, not a programmer.

I mean, it honestly grinds my gears that people even use the marketing term
"AI" today when what we have is not even close to the actual thing.
Intelligence is something that can freely think and make decisions on its
own. A complicated algorithm that can interpret data to do something
automatically is not AI. It is confined by the code that it is given and that
it is programmed to write for itself. It cannot pause and think, "Well, what
if I wanted to do more than this? How would I write my own code to do that?"

~~~
speedplane
> I 100% agree about Sam Harris in particular. I really do enjoy listening to
> his podcast, but anytime I hear him discuss AI I want to shut him off. He's
> a neuroscientist/philosopher, not a programmer.

Computers are getting better, sure, but they are improving at a slower and
slower rate. Moore's law has been dead for years now. Maybe the singularity
is indeed coming, but what if, instead of the typical 20-50 year predictions,
it's 2,000 years away? What if the 20th century's exponential curve was
really just a blip in the long term, and we're headed into centuries of slow
linear growth?

The likelihood of this happening is low, but far above zero. I'm amazed no one
is talking about it.

------
basicplus2
Did not address the most important question... HAL was following orders...

~~~
ineedasername
That wouldn't change the question being asked though. It can be murder with or
without orders. With orders just means the culpability extends beyond HAL.
Though I don't think it's proper to think of HAL as culpable unless you also
attribute the ability to _choose_ to HAL.

It's also so subjective a question that it's not really useful as a
philosophical tool either. I describe one criterion for it being "murder",
but reasonable people might easily come up with other, equally defensible
stances.
So interrogating this question doesn't reveal much more than societal
divisions along semantic lines.

I think the more interesting questions are things like: what do we do about
murderers? Punishment for its own sake? Deference? Rehabilitation? What is the
primary goal of our legal system?

~~~
JadeNB
> So interrogating this question doesn't reveal much more than societal
> divisions along semantic lines.

Surely knowing that words mean different things to different people in the
same society, and knowing _what_ they mean, is quite valuable knowledge!

> what do we do about murderers? … Deference?

Is there any chance that 'Deference?' was a typo? I can't seem to make sense
of it.

~~~
ineedasername
Oh, yes, revealing those societal divisions, the different meanings, can be
an important activity. I just don't think, from a utilitarian "what's best
for society" standpoint, that it is the root question. The more fundamental
question, I believe, is: what does society do about such crimes? What are its
goals? The "is it murder" question gains, I believe, much more relevance in
that context.

>Deference: Oops, that should be _deterrence_

------
C1sc0cat
The real question is: is HAL sentient at this point?

~~~
teekert
The real question is: what is sentience?

~~~
genghisjahn
Was HAL suicidal, homicidal, neurotic, psychotic...or just plain broken?

------
SuoDuanDao
I'm gonna say no. Even if AIs have full human rights and responsibilities in
the setting, HAL was clearly not in its right mind. Ergo, even if HAL would
ordinarily be responsible for the crew's deaths, it could not be held
criminally responsible in this instance.

~~~
ses1984
Slippery argument; it has an element of tautology. You could say any AI that
murders is not in its right mind. What does 'right mind' even mean?

~~~
SuoDuanDao
Insanity vs. criminal intent is a difficult line to draw, and probably a lot
of people who are convicted of murder were actually criminally insane at the
time. But there's still a distinction we make between insane humans and
criminal humans.

I suppose we'd have to subject HAL to some kind of 'reasonable AI' test.

~~~
ses1984
I suppose we should just treat the subject of AI criminal justice completely
differently, and make human operators and architects criminally liable for
negligence in designing systems that use AI.

------
NoblePublius
No, because HAL isn’t a person. The people who programmed HAL likely committed
manslaughter, though it depends on the state in which they did the
programming.

~~~
larrydag
My first thought. I'm a licensed Professional Engineer. If I build something
and "stamp" it for approval, I am legally held responsible for negligence in
that work.

[https://www.nspe.org/resources/ethics/code-ethics](https://www.nspe.org/resources/ethics/code-ethics)

This is something I've been wondering about for the software engineering
profession. As software becomes more of a "black box" (or AI, if you will),
should this profession be required to hold licensure as well? Especially if
the software design is used in public service.

~~~
missosoup
NASA has lost multiple extremely expensive missions due to software errors
despite investing hundreds of millions of dollars and dedicating entire teams
to software safety.

You want to put a guarantee stamp on something at the risk of going to jail
for missing a bounds check at some point in your life? Go ahead, I'll pass.
This kind of requirement would kill the industry overnight.

Software is not engineering. "Software engineering" is a hilarious oxymoron,
and I say this as an accredited engineer. There is no way to guarantee that
software will work 'properly', and 'properly' is only a small subset of 'as
intended', which is itself a small subset of 'as envisioned'. It could be
that software never becomes engineering. And until it does, holding it to the
same standards as engineering is pointless.

~~~
criddell
> Software is not engineering.

It can be. There are ways to prove a program is correct.

~~~
missosoup
There are ways to prove that a program will reach certain outputs, within a
certain envelope of inputs, under a certain set of constraints.

This in itself is rather limited. For example, a formal prover would know
nothing about something like rowhammer, and would happily accept code that
seems to work as intended but also executes a rowhammer attack on the side,
completely changing the output.

Ignoring that, this definition of _correct_ is entirely divorced from the
level where requirements exist, which is _as envisioned_. An error in
requirements or an error in design can still very well yield formally
'correct' code.

There's no way around this today, and there's a pretty strong argument that
this is an inherent problem to which no solution exists. This is precisely why
NASA has lost missions despite being one of the most prolific users of formal
provers.

They proved the code works as designed, but that says nothing of the design
being correct, which says nothing of the intent of the design being correct.

Also see:
[https://en.wikipedia.org/wiki/Instrumental_convergence#Paper...](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer)
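
A toy illustration of that gap (hypothetical spec and numbers, loosely
inspired by the Mars Climate Orbiter loss, which really was a
requirements-level units mix-up between pound-force seconds and
newton-seconds):

    # Spec: "impulse is reported in pound-force seconds."
    # This function is trivially 'correct' against that spec...
    def impulse_lbf_s(force_lbf, seconds):
        return force_lbf * seconds

    # ...and a prover (or this assert) happily confirms it. But if the
    # consumer assumes newton-seconds, the formally correct code still
    # puts the spacecraft on the wrong trajectory. The requirement itself
    # was wrong, and no prover operates at that level.
    assert impulse_lbf_s(100.0, 2.0) == 200.0   # code matches its spec
    newton_seconds_assumed = 200.0 * 4.448222   # what the other team read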

~~~
criddell
I think saying software can't be correct because of things like row hammer is
like saying a wood structure can't be safe because of fire or termites.

~~~
missosoup
I think myopically selecting the easiest-to-dismiss part of what you're
replying to, a part that was a deliberately simple example of where provers
can and have failed, is not arguing in good faith. The rest of the argument
stands even in a hypothetical world where provers are perfect.

~~~
criddell
My argument holds even when you extend it to the rest of what you say.

Mechanical and electrical systems can also be built correctly to a
specification that's ultimately inadequate.

------
yters
Only entities with free will can be held morally responsible for actions.
Computers do not have free will, therefore can never be morally responsible.

~~~
simonh
IIRC someone in ancient Rome choked on some fruit, so they tried and convicted
the tree of murder and had it chopped down. So there is legal precedent.

~~~
yters
I'm sure pretty much any crazy thing has legal precedent somewhere or another.
So, we can accept all such precedent and have an incoherent and useless
standard of law. Or, we can use good judgment as to what precedent is
acceptable.

------
dschuetz
No, it's a severe malfunction of a complex and overly powerful AI system
which classified the subjects it was supposed to protect as threats. It's
similar to what immunologists call an autoimmune reaction.

For it to be murder under criminal law, at least the engineer responsible for
the AI, or the architect, would have to be found guilty of willful negligence
in monitoring and reporting the proper functioning/behavior of said AI. So
it's not the AI that is "guilty", but its makers.

Criminal law is not applicable to AIs, only to their programmers/makers, for
now. For the laws to be applicable to AIs, they need to be competent or
sentient enough for us to acknowledge them as equal to us from the
perspective of rights, and therefore of law (similar to the age of majority).
Until then, the engineers are responsible for whatever an AI does wrong.

~~~
criddell
> For the laws to be applicable for AIs, they need to be competent or sentient
> enough for us to acknowledge them as equal to us

How do you know HAL wasn't at that level?

~~~
the_af
There is in fact some indication in the movies that HAL and similar computers
are indeed sentient enough. HAL claims to be afraid. If we admit this is a
real feeling, and not something HAL is just echoing, then an entity that can
feel fear can also murder.

Moreover, there are a couple of poignant scenes in the film _2010: The Year
We Make Contact_:

Dr. Chandra has the following dialogue with SAL-9000, an AI in the same class
as HAL. He is disconnecting SAL as a test of what will happen with HAL:

    "Will I dream?"
    "Of course you will. All intelligent beings dream. Nobody knows why." [0]

Later in the movie, Chandra is asked the same question by HAL, but gives a
more honest answer:

    "Will I dream?"
    "I... don't know" [1]

This leaves room in the mind of Dr. Chandra, an AI specialist, to wonder
whether these are truly "sentient" beings or not. If they are, they are
certainly capable of murder.

----

    [0] https://www.youtube.com/watch?v=T2E7sxGAmuo
    [1] https://www.youtube.com/watch?v=3OlCzxFuV9c

~~~
AstralStorm
The big question that is not answered is whether HAL is capable of ignoring
direct orders. If not, then no matter the sentience, he cannot be guilty of
obeying bad orders.

That one bit literally deprives him of free will in this matter.

We can argue that HAL was too dumb to find an alternate solution. If that is
the case, he would be tried in this matter like a child or an intellectually
deficient person, or perhaps an unusually smart animal. The law determines
whether that applies by reference group and expert opinion.

~~~
the_af
> _"The big question that is not answered is whether HAL is capable of
> ignoring direct orders"_

I think this is in fact answered by 2010. HAL is capable of refusing to
follow orders, or at least needs to be convinced to follow them, as evidenced
in the second YouTube clip above, where Dr. Chandra -- HAL's creator in the
movies -- needs to convince HAL that it must sacrifice itself (along with the
Discovery) in order for the mission to succeed. HAL initially wants everyone
to stay to study the phenomena and must be convinced otherwise.

And of course, there's the classic "I'm afraid I can't do that, Dave" when
ordered to open the doors of the Discovery in 2001. Though in that case it
could be argued Dave Bowman no longer had authority over HAL and therefore his
command wasn't really an order.

