
Will your self-driving car be programmed to kill you? - denzil_correa
http://www.uab.edu/news/innovation/item/6127-will-your-self-driving-car-be-programmed-to-kill-you
======
brudgers
Like the general trolley problem, this formulation of the self-driving car
problem assumes some agents are passive. The trolley problem assumes nobody is
trying to rescue the potential victims. This formulation of the self-driving
car problem assumes the school bus and other vehicles are passive.

Thus the conventional trolley problem doesn't really do the problem of self-
driving car decision trees justice. The school bus isn't a dumb trolley. When
your car is heading toward the school bus, the school bus will also be running
down its decision tree, and the best outcome based on its 15 passengers may be
to take you out directly rather than run the risk of miscalculating what will
happen if your car dives into the guardrail. Which is to say, statistical
predictions bring confidence intervals into play.

I believe this is why trolley problems in general reveal more about their
formulation than about our ethical reasoning. We will throw a switch to shunt
the trolley because the bad outcome remains in the future and the possibility
of changing circumstances remains very real. We know from experience that any
of our predictions may be fallible, and that the more temporally remote the
event, the more fallible our prediction. Pushing the fat man off the bridge
elicits a different reaction: the outcome is immediate and our prediction less
fallible.
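
A rough numeric sketch of that last point (Python, with entirely made-up
numbers): the further out the predicted harm, the wider the error bars, and
the weaker the case for drastic action now.

    # Hypothetical illustration: prediction uncertainty grows with the
    # time horizon, so remote outcomes justify less drastic action now.
    def harm_estimate(base_harm, seconds_ahead, growth=0.5):
        margin = base_harm * growth * seconds_ahead   # widening error bars
        return base_harm, margin

    for t in (0.5, 2.0, 5.0):
        harm, margin = harm_estimate(base_harm=5.0, seconds_ahead=t)
        print("%.1fs ahead: %.1f +/- %.1f expected harm" % (t, harm, margin))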

------
stkni
Don't think we're going to ask computers to count the cost of a life anytime
soon, especially since humans have a hard enough job doing it [1].

It will be designed, I imagine, to do the safest thing possible given an
emergency, which in this instance is, as most people suggest, to stop as
quickly as possible.

Nothing to see, move on.

[1]
[http://www.nytimes.com/2011/02/17/business/economy/17regulat...](http://www.nytimes.com/2011/02/17/business/economy/17regulation.html?_r=0)

~~~
netcan
I totally agree, but... :)

As silly and pointless as it is to worry about whether robocar morality should
be primarily deontological or utilitarian, I have to admit I love it. I'm a
sucker for this kind of science-fiction thinking, brought on by everyone's
expectations of technological progress. The average person has a tremendous
amount of confidence of the "they'll figure it out" kind.

I think it's a good sign, the opposite of cynicism. Also, I think moral
questions are interesting and if teaching is the best way to learn, maybe
imagining how to teach a machine to be moral is a good exercise.

On that note, deontology fits nicely with the way computers tend to think -
rules. OTOH, utilitarianism means seeing the future - a nice use for all our
future computational power. While we're on that, do computers have a duty to
be true to their nature?
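
To make the contrast concrete, a toy sketch (Python; everything here is
hypothetical, not anyone's actual robocar code): the deontological version is
a rule table, while the utilitarian version has to forecast outcomes.

    # Toy contrast, not a real controller. Deontology: check actions
    # against fixed rules. Utilitarianism: predict outcomes, pick least harm.
    RULES = {"hit_pedestrian": False, "cross_solid_line": False}

    def deontic_allowed(action):
        # permitted unless a rule explicitly forbids it
        return RULES.get(action, True)

    def utilitarian_choice(actions, predict_harm):
        # predict_harm(action) -> expected harm; this is the "seeing
        # the future" part, and where the computational power goes
        return min(actions, key=predict_harm)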

------
mavhc
Your car is already designed to kill you: it doesn't contain equipment to
protect you at the cost of many other people's lives. There's no cowcatcher-
type device on the front, for example.

------
pjkundert
Remember: we do not even have cruise control that is usable in the winter.
Once any autonomous driving system can provide even this basic function (not
soon, I fear), then we can talk about more complex behavior...

This debate about what autonomous vehicles will do in extreme situations is
but the "tip of the iceberg".

Until all autonomous drive systems can be "unit tested" under a standard suite
of simulated sensor inputs, one would be wise to assume that they are wildly
unsuitable for anything but the most trivial driving situations (e.g. golf
courses, closed estates).
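
To sketch what such a "unit test" might look like (Python; the interface is
entirely hypothetical): every vendor's stack gets the same canned sensor
frames and the same assertions.

    # Hypothetical conformance test: feed recorded/simulated sensor
    # frames to the autonomy stack and assert on the commanded output.
    def test_stops_for_stalled_vehicle(autopilot, frames):
        command = None
        for frame in frames:              # identical frames for every vendor
            command = autopilot.step(frame)
        assert command.brake > 0.9        # braking hard by scenario's end
        assert autopilot.speed == 0.0     # and actually stopped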

~~~
baldfat
I don't think slippery roads are an issue for computers. We have the quick
on-off braking of ABS. We've also had Subaru's AWD for a decade, which can
transfer power to the other wheels if one starts to slip. A computer can also
steer into a skid quicker and more precisely than 99% of people right now.
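
The core ABS logic is simple enough to sketch (Python, hypothetical
thresholds):

    # Simplified ABS idea: if a wheel is turning much slower than the
    # vehicle is moving, it's locking up -- release pressure, then re-apply.
    def abs_step(wheel_speed, vehicle_speed, pressure):
        slip = 1.0 - wheel_speed / max(vehicle_speed, 0.1)
        if slip > 0.2:                        # wheel locking: back off
            return pressure * 0.8
        return min(pressure * 1.1, 1.0)       # grip available: re-apply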

~~~
pjkundert
You are quite correct.

The main problem right now is -- there is no way for you to _know_ how any
particular autonomous driving system will handle it. With every second auto
maker and start-up claiming to have independently developed super-duper auto-
drive technology, I think it will get worse before it gets better.

Without standardised testing frameworks (i.e. sending my car's autonomous
driving system through a simulator to see what it will do), I remain...
unconvinced.

Having been an industrial programmer for 30 years, I think I'll continue to
err on the side of assuming incompetence and failure until proven otherwise.

~~~
baldfat
In the same regard, I think that within the next 20 years there will be no
human-driven trucks, buses, or anything over a certain weight, because the
cost in lives of not making that change will be overwhelming.

This comes down to one thing: they can all stop quickly and will have their
own lane on all Interstates.

------
angdis
hmm... perhaps "the default" should be programmed to maximize human life as a
headcount. But maybe they could sell a pricey "upgrade option" that would
cause the computer to favor preserving the life of the driver as top priority.
The proceeds from these upgrade options could then go into payouts for the
families of people killed by drivers with this option enabled?

~~~
tyho
/s, right?

~~~
ajuc
If not officially, I'm sure there will be an unofficial path to do this.

~~~
angdis
Yeah, but if we ever get driverless cars, you can bet that the log files will
be detailed enough for really fine-grained accident reconstruction. In other
words, a "replay" could expose such hacking.
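
Roughly (Python, entirely hypothetical interface): re-run the logged sensor
inputs through the certified decision code and diff against what the car
actually did.

    # Hypothetical reconstruction: any "selfish upgrade" hack shows up as
    # a divergence between the reference policy and the logged actions.
    def replay_exposes_tampering(log, reference_policy):
        for entry in log:
            if reference_policy(entry.sensor_frame) != entry.actual_action:
                return True   # the car was not running the certified policy
        return False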

------
eridal
I can imagine this scenario: on a bridge, with no place to go but ahead...

    
    
       -------------------------------------------------
          5 people <-- car w/ 1 person <-- heavy truck
       -------------------------------------------------
    

a) The car can brake, but the heavy truck behind will certainly kill the car's
only passenger, who's in the back seat

b) The car can hit, and potentially kill, most of the people standing ahead,
but save its passenger

c) The car can swerve off the bridge into the void

Scary as it sounds, what the car does depends on the software

~~~
nhdev
(d) The car would stop, because unlike human drivers, a self-driving car would
be programmed to avoid going so fast that it can't stop if a sudden obstacle
appears (aside from maybe something falling from the sky).

~~~
nhdev
A human would drive too fast in icy conditions... if the computer knows there
are icy conditions and doesn't slow down before it even senses trouble, then
the car was programmed to be going too fast. If it is so dangerous that there
is no way to remove these scenarios (like a blizzard), then a human should be
forced to override the system, in which case the human is at fault.

I trust sensors to detect icy conditions better than I trust myself.

[edit] Most bridges where I am have signs that explicitly warn that bridges
freeze over. And people intuitively know: bridge + recent precipitation +
cold weather = slow down. I don't know why a computer wouldn't know that.
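
That rule is just arithmetic the car can run continuously (Python, textbook
stopping-distance formula, illustrative friction values):

    import math

    # Stopping distance d = v^2 / (2 * mu * g); solving for v gives the
    # fastest speed that still stops within the available sight distance.
    G = 9.81  # m/s^2

    def max_safe_speed(sight_distance_m, mu):
        return math.sqrt(2 * mu * G * sight_distance_m)

    print(max_safe_speed(50, mu=0.7))   # dry asphalt: ~26 m/s
    print(max_safe_speed(50, mu=0.1))   # icy bridge:  ~10 m/s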

------
thorin
Great time to offer in-app purchases.

------
inestyne
The car should protect its occupants at all costs, because that would be the
normal reaction of someone driving the car themselves. In that case the car
has been pre-programmed not to make a choice.

When faced with a split-second decision whether to live or die, humans choose
to live; that's how we made it this far...

------
agumonkey
A superior intelligence planning to kill is not superior. I'd love a computer
to answer 'lol, boring' when asked to harm humanity.

~~~
pestaa
So when humans mount horses and go into the woods to hunt as entertainment,
they are not superior either?

~~~
kailuowang
Computers may have the same intelligence as humans, but their fundamental
inner drive will be completely different. Humans, like any other animal, have
a deep genetically programmed drive: survive and reproduce. Hunting as
entertainment, for example, is actually practice for survival - all hunting
animals do that. This deep drive is the result of evolution. AI, on the other
hand, has a completely different evolutionary process - that is, us. We will
ultimately decide that deep drive for AI.

~~~
yk
That is in no way certain. In the case where an AI is the result of an
evolutionary algorithm, for example, there is a distinct possibility that the
AI necessarily has a taste for competition. Or if important parts of the AI
are modeled on the human brain, then it is entirely possible that an AI is
closer to a modified human.

~~~
kailuowang
I am not talking about a single AI; I am talking about AI as a species. A
specific AI could have any drive, but ultimately (at least at the current
stage) what determines its survival is us. Just as nature selects the most
"fit" species during evolution, we select the AIs that fit our needs the most.
Natural selection is relatively simple: "don't get killed, and reproduce
yourself." So only species that strongly don't want to get killed and strongly
want to reproduce get to survive.

AI evolution is completely different: "nobody is going to kill you except
humans, and you don't need to reproduce yourself - that's usually a human's
job." I believe the result is a completely different inner drive for the AI -
one that is more likely to be useful to humans, because that's how you get
selected.

------
facetube
Wait, just one computer? Because right now I have an entire freeway full of
inattentive and occasionally drunk/angry humans trying to kill me. I can deal
with just the car.

------
tempodox
If a technology puts questions like these in front of you, you know it's
really disruptive. Our technical development forces us to find answers to
those questions, but that will be harder than developing the tech, and those
answers may not hold for long. I don't think it probable, but it may still
turn out that we really don't want machines making that kind of decision.

------
rijncur
I (perhaps somewhat naively) assumed that a self-driving car would be
programmed to prioritise the occupants of the vehicle (as that most closely
emulates the likely reactions of a human driver in that situation).

Self-driving cars are likely to be safer than human drivers anyway, so one has
to consider how much risk there is of a situation like the Trolley Problem
arising - i.e. not much.

~~~
zaphar
So here's the thing. The first time an accident involving a self-driving car
kills pedestrians, the creator of that car will be open to a lawsuit. And in
court, the question of what that car should have been programmed to prioritize
will come up. A response of "we tried to emulate the likely reactions of a
human driver in this situation" won't hold up. The whole point of a self-
driving car, as you said, is that they are safer - in fact, safer for the
community as a whole. So a car plowing into a crowd in order to avoid
colliding with an obstacle and killing the rider (something a real human would
likely do without even realizing it) would be grounds for that suit to award
damages against the creator of the car's software.

It's not as simple as it sounds at first.

~~~
rijncur
Actually, that's a good point - I hadn't considered the legal ramifications of
such a stance. It's interesting that you mention people seeking damages from
the creator of the car's software, because that makes the entire situation
more complex.

As for the concept of "safer to the community as a whole" - well, the idea is
that they're a safer product overall, and this is mostly targeted at the
individual (i.e. "if you own a self-driving car, you're less likely to die").
If people know that their cars may (in certain situations) elect to kill them
in preference to others, then I doubt that self-driving cars will sell
particularly well.

------
imgabe
It's an interesting thought experiment, but I think a real situation with such
a stark binary choice is so improbable that it's barely worth considering.

Cars are not on trolley tracks and computers can brake or swerve harder and
faster than any human could.

~~~
gambiting
No matter how fast they brake or swerve, it's not improbable that they might
be in a situation where the computer has a choice: hit someone standing on
the side of the road, or don't hit them, but potentially kill everyone on
board the vehicle. What should it do? Would any of us want to be the
programmer who writes that code? Because I know that I definitely wouldn't
want to be in charge of that.

~~~
eli_gottlieb
Well the issue is, we need to program the cars so that we optimize the outcome
when _every car on the road behaves that way and is expected to behave that
way_.

Do we get the best outcome when everyone expects the computer to spare the
pedestrian? When everyone expects the computer to drive "selfishly" or to try
to minimize its own impact velocity?

I don't think it's a good idea to program the cars with a "utilitarian" policy
of saving the most human lives possible: not only does that require quite a
lot of inferential power to be packed into the car, it also doesn't set up any
concrete expectations for the passengers and pedestrians. You would have to
add a mechanism for the car to first, in bounded real time, infer how to save
the most lives, and then signal to all passengers and bystanders exactly what
course of action it has chosen, so that they can avoid acting randomly and
getting more people killed.

This is why we always demand very regimented behavior in emergency procedures:
trying to behave "smartly" in an emergency situation, in ways that don't
coordinate well with others, _actually gets more people dead_. Following a
policy may "feel" less "ethical" or "heroic", but it actually does save the
most lives over the long run.
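
In code terms, the "regimented" version is a fixed, publicly known mapping
from situation to action, not an on-the-fly optimizer (sketch; the situations
and actions here are invented for illustration):

    # The coordination argument in miniature: a fixed published policy lets
    # every passenger and pedestrian predict the car, and that predictability
    # is itself what saves lives over the long run.
    PUBLISHED_POLICY = {
        "obstacle_ahead":     "max_brake_straight",
        "pedestrian_in_lane": "max_brake_straight",
        "tire_blowout":       "hold_lane_decelerate",
    }

    def act(situation):
        return PUBLISHED_POLICY.get(situation, "max_brake_straight")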

------
nkoren
Yes. It certainly will. (Credentials: I've worked in the industry).

There are a lot of comments below which are variations on the theme of "this
is a stupid scenario; just apply the brakes". These people have a terribly
naive understanding of the reality in which cars operate.

The truth is that self-driving cars are part of a dynamic, open-ended system
which includes roads of arbitrary quality, and pedestrians and cyclists who
will never be under automated control. Such a system is _fundamentally
unsafe by design_. Its operation relies on probabilistic assumptions of safety
rather than secondary safety systems, as you would have in a system which is
_designed_ to be safe (eg., airplanes and airports). These probabilistic
assumptions include assuming that a tire will not blow out, or that a large
pothole will not exist on the other side of a hump in a country road, or that
a pedestrian will not trip laterally into the path of a high-speed vehicle.
When such events do occur, property damage and/or loss of life is a certainty.
This is a low-probability event, but one to which we have so utterly
habituated that many people -- such as many of the commenters on this article
-- are no longer consciously aware that it is even possible.

Nonetheless, it certainly is possible, and happens all the time. Currently,
cars kill more than 2 million people per year, worldwide. Many of those
fatalities are due to drunkenness or glitches in human attentiveness --
problems which a competently-designed self-driving car will not suffer from --
but many of these fatalities are due to scenarios such as I mention above.
Those will continue. Unless we radically change the entire system -- that is,
change the way we build and maintain roads, and the way we integrate or
segregate cars from other users of the road -- then this system will remain
unsafe by design. Even if the world converts entirely to self-driving cars
which operate _perfectly_, they will still, with absolute certainty, kill
hundreds of thousands of people per year.

Seriously, y'all need to stop being in denial about this.

Hundreds of thousands of fatalities per year is still a tremendous improvement
over millions of fatalities per year. It's an unalloyed good and we should
doubtless do it. Nonetheless, it represents a hell of a liability problem for
the manufacturers. When a human being has a blowout at speed, we never
question the correctness of their actions in the milliseconds immediately
afterwards. Of _course_ they don't have an opportunity to respond in an
appropriate fashion. Human brains can't do that. Instead, we assign liability
based on conditions before the crash. If the driver was speeding, we say that
the consequences of the blowout (which can easily be fatal) are their fault.
If they weren't speeding, it's the tire's fault. Liability ends somewhere
between the tire and the driver. As long as the car company has not sold
faulty tires, they have nothing to worry about.

When the driver is both _created_ by the car company _and_ has the ability to
react in the milliseconds following an incident, it's a different matter.
Lawsuits will be inevitable, and car companies will do everything they can to
minimise their exposure.

The scenario where your self-driving car needs to make a decision between
sending you over a cliff, or forcing a schoolbus over a cliff, may seem
terribly contrived. But in reality, on a worldwide basis, this general class
of scenario happens hundreds of times every day, creating a level of liability
that manufacturers must take _damn_ seriously. If they are able to choose
between lawsuits from your family, or lawsuits from every family of every kid
on that bus, they'll choose the former every time.

And this is why your car will be programmed to kill you.

------
jfoster
If people become aware enough of this, the answer is almost certainly that
cars will choose the life of their passengers in any situation, unless
manufacturers agree not to compete on this.

~~~
autokad
you forgot that the government has final say in everything.

------
thomasfl
Sounds like a plot for a sci-fi novel, but things get serious when software
can kill you.

~~~
Malic
Software _flaws_ have been known to kill, so "killing software" has been with
us for a while.

For more details, check out Peter G. Neumann's "Computer-Related Risks":
automatic doors crushing people, radiation equipment (for the treatment of
cancer) going way out of spec, and many more.

~~~
angdis
Yes, there are interesting lessons in that, but we're entering new territory
when software systems are explicitly tasked to make life-or-death "decisions"
without the active control of a human. The correct answer might be ethically
foggy even for humans. This is definitely problematic.

------
jlebrech
don't program it to swerve in any direction; brakes and tires should do the
trick.

~~~
gambiting
What if going straight will kill more people than swerving? You are driving
through a turn and a tyre blows. A system which "gives up" and just applies
the brakes as hard as it can allows the car to continue forward - maybe into
the path of a school bus, maybe into a group of bystanders, maybe off a cliff.
But if you allow it to turn, even a little - how do you know you are not
injuring more people?

~~~
nhdev
Under what circumstances would that happen? It happens with human drivers
because they are driving too fast to stop in time. It's very simple... if you
need 50 feet to stop, you make sure you have at minimum 50 feet to stop. If
there are things obscuring your view to the left and right, you drive slower
to reduce your stopping distance and make sure that if something jumps out you
still have enough time.

Now, if you consider that something jumps out in front of your car (say a
deer), which reduces your reaction time, the car should do the same thing a
human would do: slam on the brakes to reduce impact speed.
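
In other words, the invariant the controller enforces every tick is just
(Python sketch):

    # The "50 feet" rule: stopping distance at the current speed must
    # never exceed the clear distance the sensors can vouch for.
    def speed_is_safe(speed, clear_distance, stopping_distance_fn):
        return stopping_distance_fn(speed) <= clear_distance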

~~~
gambiting
A tyre blows. A rock hits the sensor on top of the car. An electrical contact
gets disconnected due to vibration. Water gets into the computer. Those things
happen thousands of times daily to cars around the world. Automatic cars will
have to deal with all of them, one way or the other. Like you said, the most
"failsafe" thing the computer can do is slam the brakes, which will be good
enough in most cases. But I will repeat my question again: what if slamming
the brakes causes more injuries than doing something else, and the computer
"knows" it (as in, it has calculated that it would cause more damage)? Should
it still do it?

Edit: The difference between a human slamming on the brakes and a computer
doing the same is that humans are not perfectly rational. If I see a deer in
front of me, I'm probably going to brake as hard as I can. But will I take
into account that the road is slippery, and that braking will cause the car to
spin and land in front of a lorry, and that maybe the correct decision is to
not brake and hit the deer? Of course I won't - humans are not quick enough to
decide that. The problem here is that computers are fast enough - and now we
have to decide whether they should behave in a "dumb" way, like a human would,
or whether they should be making those decisions no human could make, even if
they verge on being unethical.
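
That question, in code form (Python; the harm numbers are invented, and
writing the real ones is exactly the job nobody wants):

    # Reflex vs. calculation: should "slam the brakes" still win when the
    # car's own estimate says another action causes less harm?
    def choose(actions, estimate_harm):
        reflex = "brake_hard"                      # the "dumb" human reflex
        optimal = min(actions, key=estimate_harm)  # the calculated choice
        return reflex, optimal

    harm = {"brake_hard": 3.0, "hold_course": 1.0, "swerve_left": 2.0}
    print(choose(list(harm), harm.get))   # ('brake_hard', 'hold_course')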

