
Should your robot driver kill you to save a child’s life? - ceejayoz
http://theconversation.com/should-your-robot-driver-kill-you-to-save-a-childs-life-29926
======
jimmcslim
This debate has come up a few times now, and I doubt it reflects what can
practically be expressed in code using today's computers and languages anyway
(in the absence of some AI that can make 'ethical' decisions).

I suspect the best collision-avoidance algorithm in such scenarios will simply
be to decelerate as quickly as possible whilst maintaining course. Then a
'best effort' at avoiding a collision can be seen to have been made, and
perhaps the pedestrian will survive their injuries. This is what a human
driver would most likely have done anyway?
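
Roughly, a sketch of that policy (all names and thresholds here are invented
for illustration, not from any real system):

    # Minimal 'brake and hold course' policy: shed speed as fast as
    # physics allows rather than weighing lives. (Hypothetical names.)
    def collision_response(obstacle_distance_m, speed_mps, max_decel_mps2=8.0):
        """Return (steering_delta, brake_fraction) for an imminent obstacle."""
        # Distance needed to stop from current speed: v^2 / (2a)
        stopping_distance_m = speed_mps ** 2 / (2 * max_decel_mps2)
        if obstacle_distance_m <= stopping_distance_m:
            return 0.0, 1.0  # hold course, full braking
        # Not yet critical: brake in proportion to how close we are
        return 0.0, min(1.0, stopping_distance_m / obstacle_distance_m)

Note there's no 'ethics' anywhere in it, just kinematics.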

What if the child/small adult (can computer vision practically recognise the
difference?) is themselves quite 'evil' (again, another value judgement) and
is running in front of robot cars in such situations on purpose?

~~~
ceejayoz
I think it's best we start considering these things _before_ we get computers
good enough to make it an issue, though.

------
informatimago
It is a stupid question, and not even because people, even children, should
be responsible for their acts, but because a driver should expect such
occurrences.

This is the reason why speed limits are very low in residential and school
areas: because it is expected that there are people on the sidewalks who
_could_ possibly change direction, move onto the road, and even perhaps stop
(fall) there.

Similarly, when driving in the mountains or in the Arizona desert, you might
go fast when there is nobody around, but if you see somebody walking on the
side, you must take into account their possible changes of direction, and
therefore slow down. Around highways, there are often fences to prevent
people from coming close, so that cars may go fast. Also, you don't go as
fast when there is fog as when the sky is clear: you have to expect more
unknowns in fog than in clear weather.

Now, I'm not saying that human drivers apply speed limits (either legal or
moral (i.e. "Bayesian")) often or at all, but they should.

I'm saying that a robotic car will have much less difficulty in controlling
its speed scientifically, taking into account the presence of people, animals
and other mobile objects surrounding it, their velocities and their possible
direction changes. Therefore a "tunnel" situation as described in that
article won't occur (unless there's a bug in the software).
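
As a toy illustration of that kind of "scientific" speed control (the
numbers, names, and two-second horizon are all my own assumptions, not from
any real system): pick the highest speed from which the car can still stop
before the closest point a nearby pedestrian could plausibly reach.

    import math

    # Toy "Bayesian speed limit": never drive faster than you can stop
    # within the worst-case gap to any nearby pedestrian. (Hypothetical.)
    def safe_speed_mps(pedestrians, max_decel=8.0, horizon_s=2.0, margin_m=2.0):
        """pedestrians: list of (distance_m, max_walk_speed_mps) tuples."""
        v = 36.0  # open-road ceiling in m/s (~130 km/h): "nobody around"
        for dist_m, walk_mps in pedestrians:
            # Worst case: they head toward our path for `horizon_s` seconds.
            gap_m = max(0.0, dist_m - walk_mps * horizon_s - margin_m)
            # Highest speed stoppable within the gap: v = sqrt(2 * a * d)
            v = min(v, math.sqrt(2 * max_decel * gap_m))
        return v

    print(safe_speed_mps([]))            # nobody around: 36.0
    print(safe_speed_mps([(30, 1.5)]))   # pedestrian 30 m away: 20.0

With a rule like that in force, by the time a child _can_ jump in front of
the car, the car is already slow enough to stop.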

------
sixQuarks
This is easy to answer and quite frankly a stupid question.

Why should you have to pay for someone else's mistake? If a child runs out in
front of a car, it is either their fault or their guardian's fault for
allowing them to run into traffic. It's tragic, but to suggest the driver
should give up their life for someone else's mistake is ridiculous.

Who in their right mind would get inside a car that is programmed to commit
suicide in certain "ethical" circumstances?

~~~
riquito
> Who in their right mind would get inside a car that is programmed to commit
> suicide in certain "ethical" circumstances?

I suspect there are various classes of people who would choose this option:
believers in certain religions, old or ill people, people with high morals or
a strictly pragmatic outlook, aspiring suicides, risk lovers, etc...

Not for me, though :-)

~~~
echaozh
The point is, even if people do buy such a car, they don't often come back for
a second car. The business will run out of customers very quickly. You do the
math.

~~~
dragonwriter
> The point is, even if people do buy such a car, they don't often come back
> for a second car. The business will run out of customers very quickly.

This seems to assume that the frequency with which drivers (automated or
otherwise) are faced with "someone is going to die, you or the other person,
and which one depends on your driving choices" situations is very high.

This is not consistent with the evidence.

------
zeteo
> Allowing designers to pick the outcome of tunnel-like problems treats those
> dilemmas as if they must have a “right” answer that can be selected and
> applied in all similar situations [...] it becomes clear that there are
> certain deeply personal moral questions that will arise with autonomous cars
> that ought to be answered by drivers.

The answer is not as clear-cut as the example would suggest. First, it's not
zero-sum since computer control results in more safety overall. The computer
has more information available (e.g. sensor readings on all sides) and can
react much faster (eliminating the ~1.5 s typical reaction time [1]). The
driver and the kid in the road will both survive more often this way.
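
To put rough numbers on that (my arithmetic, using the ~1.5 s figure from
[1], an assumed ~0.1 s machine latency, and a typical ~7 m/s² braking
deceleration):

    # Stopping distance = reaction distance + braking distance:
    # d = v * t_react + v^2 / (2 * a), at v = 13.9 m/s (~50 km/h)
    v, a = 13.9, 7.0
    for who, t_react in (("human", 1.5), ("computer", 0.1)):
        d = v * t_react + v ** 2 / (2 * a)
        print(f"{who}: stops in {d:.1f} m")
    # human: stops in 34.7 m / computer: stops in 15.2 m

Those ~19 m are often the whole difference between hitting the kid and
stopping short.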

Second, drivers can be pretty selfish. E.g. everyday behaviors like cutting
people off or running red lights offer little benefit for the cost of
endangering everyone on the road. How many would simply set it to "always
drive over the kid in the tunnel"?

[1]
[http://www.tandfonline.com/doi/abs/10.1207/STHF0203_1#.U9wU2...](http://www.tandfonline.com/doi/abs/10.1207/STHF0203_1#.U9wU2fldXzg)

~~~
falcolas
Yeah, drivers are selfish; humans are frequently selfish.

For example, if a truck driver has the option of hitting another car head-on,
or running off the road and just wrecking the truck, they will frequently hit
the car.

The reason being that if there's no evidence of a second vehicle in the
accident, they will end up being held responsible for the loss of the cargo,
and lose their job.

Plus, the truck driver will typically survive a head-on with a car.

------
nobodysfool
I'd say that in the example given in the article, the best decision would be
to slam into the side of the tunnel instead of hitting the child. The age of
the child makes no difference: if they are lying in the road, there's a high
likelihood that they will be killed, and a moderate likelihood that you'd be
deflected enough to be seriously injured or killed as well. Since the
'through the living being' route is fraught with uncertainty, the most
logical choice is a controlled crash. Cars these days have airbags and
crumple zones and such things, so it's hard to hit an inanimate object and
get killed in such a vehicle. And with car insurance and wrongful death
lawsuits in practice, the controlled crash also seems the financially
responsible choice. So to all those saying there is no 'right' answer: you
are generalizing the problem too much. The question as stated in the article,
not generalized to all situations, seems clear.

------
transfire
It's also a bit of a red herring. While it could happen in very rare
instances, the whole benefit of the automated car is that the number of
accidents will drop off a cliff. When nearly every car on the road is
automated, there simply won't be many accidents any more.

~~~
krapp
It's not necessarily that simple.

You're assuming the best possible implementation of autonomous cars will be
the one which actually arrives on the roads. Given the daunting complexity
necessary to even make that feasible, it would surely be the first system of
its kind and scale to arrive without a considerable number of flaws due to
bad programming, exploits, cost cutting, unforeseen consequences, etc.

The more 'intelligent' and 'autonomous' the cars are, and the more of them
there are on the road, the more likely it is systemic problems will arise.
This is not a criticism of the idea of autonomous cars but just an inevitable
result of complex systems interacting. Just because they're computers, doesn't
mean they won't fail, or that accidents will be _so_ rare as to be not worth
considering. Airplanes almost never crash now compared to a century ago, but
that doesn't make worrying about air safety, or the effect of automation on
pilot competence in the event that manual control becomes necessary, a red
herring.

~~~
informatimago
There are only two systemic problems in the current road system that I know of:
traffic jams, and the fact that increasing road capacity increases road
traffic (and therefore doesn't reduce the potential for traffic jams). They're
entirely due to the behavior of human drivers (given the surrounding
circumstances in part too, of course).

Are other systemic problematic effects known in the current road
transportation system? Notably, are there any systemic effects that lead to
increasing speed and accidents, rather than traffic jams? (Not that traffic
jams don't cause accidents, since human drivers drive too fast, not expecting
jams.)

It seems obvious that the two systemic problems above won't occur with
intercommunicating robotic cars.

It would be "fun" to discover new systemic epiphenomenon with robotic cars,
but I would expect them to be more discreet and even less "catastrophic".

~~~
krapp
You also have to consider potential problems with the cars themselves, in
terms of engineering and software, as well as potential issues with
networking, weather, inaccurate satellite data, etc. It would be a vastly more
interconnected and complex system than the one which already exists.

~~~
nobodysfool
But you're also seemingly forgetting that cars these days are very safe, with
airbags, seat belts, crumple zones, roll bars... this whole 'should it kill
you' question is not likely an issue these days. Look at all those idiots who
crashed their cars going 80 mph and lived to tell about it (Tesla, for
example). The point being that if the car controls the crash, your chances of
survival are far better. So the choice is between killing someone and having
a controlled crash, not 'kill or be killed'.

~~~
krapp
Ok, that's fair. The chances of autonomous cars ever being built with fewer
safety features than currently exist are pretty much nil.

------
guidedlight
Interesting question:

Looking at Isaac Asimov's laws, I believe that the 0th law would cause the
child to die, because that outcome reasonably reduces the number of humans
potentially harmed in the situation (as driving into the wall may cause
unpredictable results, including additional harm to humans).

Looking at the EPSRC / AHRC principles of robotics, I believe the child would
die because robots are artifacts and should not evoke an emotional response.

See:
[http://en.m.wikipedia.org/wiki/Laws_of_robotics](http://en.m.wikipedia.org/wiki/Laws_of_robotics)

~~~
krapp
It might be worth mentioning that Asimov's Three Laws were plot devices meant
to be broken by robot antagonists for the sake of drama, and that actually
taking them seriously as a framework for AI ethics is probably a fool's
errand.

------
guelo
Maybe we'll get ethical settings in our robots so, like a human driver today,
we get to decide if we'll kill the kid or kill ourselves. But the problem with
letting users have too many options is that we might end up at a less than
optimal global outcome due to prisoner's dilemma-type choices. There could be
more people killed, worse traffic, worse energy efficiency, etc. because users
choose the selfishly beneficial options.

------
jack-r-abbit
Unless instructed to by me, _my_ robot driver should never leave the road
unless it is to _save_ me, not kill me. The decision should be to drive the
car into the area that causes the occupants the least harm.

~~~
Bakkot
I've never understood this logic, especially since you're deciding on the rule
for _all cars_, not just your car. Shouldn't you prefer that the rule be to
minimize the number of people harmed, since this is the option which (when
taken over all cars) will minimize the likelihood of _you_ being harmed?

~~~
jack-r-abbit
I actually put emphasis on "my" for a reason. I was talking about _my_ robot
driver. You should be free to instruct _your_ robot driver how ever you wish.

~~~
Bakkot
Yeah, sure, but now suppose that you're going to decide the rule that
_everyone's_ car will follow. Are you still going to choose what amounts to
defecting in the prisoner's dilemma?

Put another way: this is one of those very rare occasions where collective
cooperation (a la prisoner's dilemma) is possible. Shouldn't that... be the
thing we want?

~~~
jack-r-abbit
I would not be deciding that rule for everyone's car. I would provide a
checkbox (or whatever way there would be to customize a car's options). I'm a
pretty big believer in user settings. I wish more software gave users more
options on some of the stuff some random developer thought was best.

~~~
Bakkot
The thing is, in the absence of collective action people choose to defect in
prisoner's dilemmas. Giving people the option results in everyone defecting.
Having it be preset results in everyone cooperating. Everyone cooperating is a
better outcome for _everyone_.
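
To make that structure concrete (payoff numbers invented; higher is better
for you):

    # Toy payoff matrix: "minimize total harm" = cooperate,
    # "always protect me" = defect. Entries: (your payoff, theirs).
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),  # everyone safest overall
        ("cooperate", "defect"):    (0, 5),  # you absorb their risk
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),  # everyone worse off
    }
    # Whatever the other driver picks, defecting pays you more
    # (5 > 3 and 1 > 0), so per-car checkboxes converge on
    # defect/defect, the worst collective outcome. A preset rule
    # pins everyone at cooperate/cooperate instead.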

------
transfire
Simple. Whatever will cost the insurance company less!

