
Will your driverless car kill you so others may live? - walterbell
http://www.latimes.com/ct-driverless-cars-safety-google-perspec-20151209-story.html
======
Sir_Substance
>The autonomous vehicle rounds a corner and detects a crosswalk full of
children. It brakes, but your lane is unexpectedly full of sand from a recent
rock slide. It can't get traction.

Why did an autonomous car with a map that tells it there's a crosswalk around
a blind corner round the blind corner so fast it couldn't stop if there was
unexpected debris on the road?

What a stupid scenario. Autonomous cars have infinite patience, and when the
people inside them can watch Netflix all the way to work, they won't care much
about a few extra seconds either.

The obvious solution is to program the car to never take any blind corner,
mapped crosswalk or not, so fast it can't stop on a dime. These things are
going to have to work in winter in Canada. Low traction is a regular problem
they'll have to solve. That'll mean every auto that takes this corner will do
it very slowly, but that's fine because it can report back to the citywide
network that this corner is unsafe, meaning all other autos will avoid it if
they can (so no congestion), and as a plus it can be registered for
modification with the local council. Maybe a networkable sensor can be
erected, allowing the cars to see around the corner.

Taking corners faster than you can stop if there's a problem is a very human
thing to do. One of the key benefits of autos is they won't do that.
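The "never outrun your sight distance" rule above is just the standard braking
formula. A rough sketch (the friction coefficients and the 20 m sight distance
are illustrative assumptions, not real vehicle parameters):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_safe_speed(sight_distance_m: float, friction_coeff: float) -> float:
    """Highest speed (m/s) from which the car can stop within its sight
    distance, using the standard braking model v^2 = 2 * mu * g * d."""
    return math.sqrt(2 * friction_coeff * G * sight_distance_m)

# A blind corner with 20 m of visible road:
dry = max_safe_speed(20, 0.7)   # dry asphalt
sand = max_safe_speed(20, 0.3)  # sand or loose gravel

print(f"dry: {dry * 3.6:.0f} km/h, sand: {sand * 3.6:.0f} km/h")
```

The point is that the safe speed is a simple function of what the car can see
and how much grip it assumes; a computer can re-evaluate it continuously,
where a human just guesses.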

~~~
crististm
The AI used in these cars is probability-based. The cars can't be held
accountable in the sense that they can't explain their decisions rationally.
You can't actually predict what you'll get from slight modifications of the
probabilities when adding new test cases.

Gerry Sussman explained this recently way better than I can.

~~~
adrianN
Your brain is also probability based and you don't have an audit trail for
your thoughts written to a black box.

~~~
dasil003
So what is really needed is to add post-hoc rationalization engines to these
self-driving cars to facilitate satisfying our sense of justice.

~~~
TeMPOraL
That was actually my first reaction to reading about Sussman's AI ideas in an
article[0] that was on HN recently[1]. Propagators seem neat, but the quote:

"Sure, when you're running the program, that whole thing is a black box. So is
your brain. But you can explain to me the reasoning of why you did something.
At that point, being able to inspect the symbolic reasoning of the system is
all you have."

made me immediately notice that there's a reason we have negative connotations
with the word "rationalization": most of the time, post-fact rationalizations
are bullshit and do not reveal one's actual thoughts at the time of the event,
only what one wishes one had thought at that time. Just bolting a "reasoning
explainer" on top of a Bayesian inference system won't fare any better than
human rationalization does.

[0] - [http://dustycloud.org/blog/sussman-on-ai/](http://dustycloud.org/blog/sussman-on-ai/)

[1] -
[https://news.ycombinator.com/item?id=10388795](https://news.ycombinator.com/item?id=10388795)

------
chrisfosterelli
Every time I read one of these articles I feel like the author doesn't have a
very good grasp of exactly how these work. The car doesn't have to "choose".
Nobody at Google is programming these things to calculate the life value of
humans, the percent chance of injury to occupants, or the benefits of
sacrificing pedestrians.

They're programming them to try and avoid accidents _in general_. There will
be times when the computer fails to do that, but that doesn't mean it decided
to kill someone by making a moral choice about human life priority. Maybe in
the future this will be a relevant argument but the sort of algorithms
mentioned that decide which crash scenario is 'best' simply don't exist.

The best way to deal with these moral issues is to just... not program that
behaviour.

~~~
m0nty
I think people are excited about this because it seems to mesh so closely to
the Trolley Problem, which has been an important thought experiment in the
study of ethics.

[https://en.wikipedia.org/wiki/Trolley_problem](https://en.wikipedia.org/wiki/Trolley_problem)
(They even have an "Implications for autonomous vehicles" section now.)

So, who knew? Suddenly it's not so theoretical, it apparently has a real-world
application. Except, as you seem to be getting at, probably the only thing
engineers will be considering in any depth is how and when to brake, not
trying to swerve the car elaborately in one direction or the other. The number
of occasions when a serious accident occurs because a driver reflexively
swerved to avoid an animal or other obstruction (sometimes literally dozens
dead in one accident) emphasises that swerving is not so good a strategy as
braking. Or, as my driving instructor once told me, "There is only one cause
of accidents on our roads: failing to stop in time."

~~~
Already__Taken
A car has a fixed contact patch with the road, and 100% of that grip is shared
between accelerating, braking, and turning. Any combination of those must stay
within that 100%.

Why would a computer choose to swerve rather than slam the brakes? With
reactions as fast as a computer's, what kind of situation even calls for a
swerve rather than a course correction?
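The "fixed grip budget" described above is the classic friction-circle
inequality: the vector sum of braking and cornering demand can't exceed the
tyres' total grip. A minimal sketch (the friction values are illustrative
assumptions):

```python
import math

def within_traction_circle(brake_g: float, lateral_g: float,
                           friction_coeff: float) -> bool:
    """Total grip is shared between braking and turning: the vector sum
    of the two demands (in g) must stay inside the friction coefficient."""
    return math.hypot(brake_g, lateral_g) <= friction_coeff

# Full braking alone stays within dry-asphalt grip (mu ~= 0.7)...
print(within_traction_circle(0.7, 0.0, 0.7))  # True
# ...but braking hard *and* swerving asks for more grip than exists.
print(within_traction_circle(0.6, 0.5, 0.7))  # False
```

Which is one reason straight-line braking is usually the better choice: a
swerve spends grip on turning that could have gone into shedding speed.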

------
CM30
How about the car isn't designed with utilitarian ethics in mind and it
instead automatically values anyone in it more than other people?

Okay, that's probably a very unpopular opinion, at least when expressed in
that way. But to be perfectly honest, it's probably what a lot of people
riding in said cars want to happen, and perhaps a bit closer to how cars work
in real life when not driven by a computer.

Good luck trying to sell selfish AI to the public though...

~~~
bsder
> How about the car isn't designed with utilitarian ethics in mind and it
> instead automatically values anyone in it more than other people?

I don't consider that an unpopular opinion, at all, thank you very much.

In reality, these questions don't take into account the fact that _humans_
don't actually even make decisions in these situations. People don't choose
whether to kill that child or swerve into oncoming traffic--they either don't
react in time for there to be an option, or they reflexively jerk away without
thought. Truckers don't get time to consider whether they're going to kill
other people on the road with a jackknife or a rollover when they have to make
a sudden avoidance maneuver.

Autonomous cars are going to be _so_ much better at avoiding situations
preemptively that you will have to be pathologically stupid to be able to put
an autonomous car into such a situation. And, if you are that stupid, _YOU_
deserve to die.

~~~
CM30
But you're never going to avoid situations like this completely, regardless of
how good the car itself is. Imagine one breaks down. Or gets hit by something
(whether due to a natural disaster, the local landscape, human actions,
animals, etc.). Or it gets hacked. Or it just plain runs out of power.

Now imagine the other autonomous cars following it have to make an urgent
decision: go straight, or swerve out of the way. Imagine the only somewhat
clear path has some innocent people on it, and the alternative is basically to
let the car end up as a giant fireball after it hits whatever's in front.

You can try to avoid such situations as much as possible by having the car
figure out what's going on ahead and act on it (change lane, slow down, etc.).
But the situation isn't always avoidable. If something happens close to you
while you're going down a motorway at 70mph, physics alone says the car cannot
stop in time. It will then have to decide either to take the risk or get
obliterated.

It's not pathological stupidity, it's often bad luck. And in cases like that,
the driver is right to assume the vehicle won't sacrifice them for others.

------
unabridged
I'm worried about more mundane situations. What happens at a 4-way stop if
they all get there at the same time? Will the cars drive so passively and
defensively that none of them has the "balls" to break symmetry?

~~~
tomp
Exactly. This is an actual issue in robotics/AI: what's "best" in each
individual scenario can turn out worse overall.

A funny story I read about once:

 _> I've got a good story about the mines using this tech. A mate works for
them and was telling me after changing over to driverless trucks they started
getting huge divots in the ramps in the mines._

 _> They went to site and watched the trucks to see what was going on, and the
trucks were (with centimetre precision) changing gears in the exact same spot
causing huge divots in the ground. They ended up having to program in a random
number generator in that algorithm to avoid them._

Source:
[https://www.reddit.com/r/Futurology/comments/3p7qg6/driverle...](https://www.reddit.com/r/Futurology/comments/3p7qg6/driverless_trucks_move_iron_ore_at_automated_rio/cw4ejvq)

------
solidsnack9000
If we replace autonomous cars with taxi drivers, we have the same agency
problem...but it's not a question people consider troubling.

> A taxi rounds a blind corner on to an inner-city railway crossing. Half a
> car's length across the rails is a small child who's fallen off their
> tricycle. A steaming locomotive is barreling down the track. If the driver
> brakes in time, your vehicle will stop short of the child but be torn to
> bits as it shuttles down the tracks in front of the train. If they maintain
> speed, you'll just make it past the train -- but the child is not likely to
> survive.

~~~
crististm
I was very surprised to find that Google's car, at least, is programmed to
_not enter_ a railway crossing unless it has clearance all the way to the
other side.

~~~
a_e_k
Isn't that just standard rules of the road/defensive driving practice, though?
Certainly that's how I handled it back when I lived near a train crossing.

~~~
crististm
I did not mention that the demo video was of a crossing with a raised barrier.
It seemed strange to me because I allow for less clearance when the railway
crossing has a raised barrier (I treat it as a normal crossing with a green
light). Google's AI was more conservative than I would have been in the same
situation.

When there are no barriers installed we're already talking about a different
situation and all cars should stop and confirm there is no train approaching.

~~~
foota
Even with barriers, you don't really have all that much time between when the
barriers come down and when the train crosses the road. You don't want to be
stuck waiting for the person ahead of you to go.

~~~
crististm
I don't know about other places, but where I live you can have up to several
minutes between the barriers lowering and the train arriving. It's true that
even this may not be sufficient in some circumstances.

------
danr4
Can anybody who understands the market give me a reason why smart roads aren't
being built alongside these driverless cars? I mean, why focus on making a car
able to 'see' rather than able to communicate with its infrastructure?

The reasons I thought of:

1. As mentioned, building/upgrading infrastructure is probably much more
expensive and logistically elaborate.

2. Market penetration: the current batch of driverless cars can drive on
existing infrastructure, so there's no need to wait.

3. If smart cars won't be usable on dumb roads, they will also be hard to
sell.

4. Smart, durable, cost-effective infrastructure hasn't been solved.

~~~
HeyLaughingBoy
Businesses build cars; governments build roads.

It's hard enough to get a government to keep roads pothole-free; how much
harder is it going to be to get them to build "smart roads" when they have
little or no incentive to do so?

------
datashovel
I think the answer is obvious, but not because I have some moral reason behind
it. I think it really comes down to the actors involved and what their current
responsibilities are.

I can't imagine today's pedestrians assume a driver will attempt to preserve
their life over the lives of the people in the car. So as a pedestrian you
take measures to avoid putting the driver of the car into those situations.

I think it's reasonable to assume a driver's objective is self preservation
and preservation of his/her passengers before preservation of people outside
of the car.

I can certainly imagine a scenario where a person driving by themselves might
make different decisions because they can't imagine living through an event
where they were responsible for killing others, and didn't do everything they
could to prevent harming others. But then what if there are passengers? Now
there's a dilemma between preserving lives inside the car and lives outside
it. Are we honestly going to ask our drivers to "rate" each passenger? I.e.,
if Joe is in the car with me, I couldn't care less whether he survives a
horrible crash because he slept with my wife, so feel free to kill us both in
extreme circumstances; if Mary is in the car with me, by all means do
everything you can to preserve her life. She has kids and a husband and is
happily married.

The objective of the driver (and thus driverless vehicle) is to transport the
driver and passengers safely. They shouldn't attempt to actively reduce
likelihood of achieving that objective in order to preserve non-passengers.

------
jasonjei
How about a simpler solution? Designate certain roads as autonomous-only to
assure safe operation (for example, highways do not allow pedestrians,
bicycles, etc.; we could require certain roads to be autonomous-only as
self-driving vehicles become more commonplace), and update maps to identify
all possible pedestrian crossings so cars can drive pessimistically there.

~~~
CM30
That's a decent solution for some roads, though it still doesn't solve the
issue of how these cars will work in towns and other highly populated areas,
where pedestrians and cyclists are everywhere and can't reasonably be expected
to stay on the pavement or only cross at pedestrian crossings.

It also doesn't really work outside the US, since a lot of places are quite
fine with 'jaywalking'. For example, in the UK you can cross the road just
about anywhere. These autonomous vehicles will have to be programmed with this
in mind.

~~~
erdojo
Or, as with other automated equipment, people learn not to do stupid things
that will get them run over.

For example, we all just know not to step in front of a train. We have NO
EXPECTATION that it can stop. Do accidents happen? Yes, but only through the
fault of the pedestrian (intentional or not).

If we all know that no car will stop for us between crosswalks, we may still
jaywalk, but we'll be damned sure a self-driving car isn't coming. Right now a
lot of people jaywalk and are overconfident that if a car is coming, it'll
stop.

~~~
tedunangst
Legislation shifting blame exclusively to the pedestrian outside the crosswalk
seems unlikely to pass.

------
brianmcconnell
While we are exploring hypothetical scenarios, here's one.

A schoolbus full of young children has lost its brakes and is careening down a
hill toward an almost certainly lethal high speed impact with a building at
the bottom of the hill.

Two Google cars are idling at an intersection halfway down the hill. Both
determine that if one of them pulls in front of the bus just so, it will
result in a survivable crash (for the kids on the school bus). But which car
should go?

Both cars run a PANN (psychometric artificial neural network) analysis of
their owner's email. Car A determines that its owner is an abusive sociopath.
Car B determines that its owner is a nice person, newly wed and possibly
pregnant. They each compare their owners' PANN scores.

Car A signals "CARMA program activated. Taking one for the team!"

The schoolbus plows into Car A, which absorbs most of the impact. The kids are
fine, minor injuries only, but the owner of Car A expires before the fire
department can extract him with the jaws of life.

~~~
bewatson
Very interesting hypo. "CARMA" made me laugh. Another element to consider is
fleets of empty, unmanned cars on the road (e.g. an Uber): an empty car could
take the hit and avoid the single unnecessary death.

------
Eerie
Nonsense. The car should behave in a way the law requires a human driver to
behave. That's it, end of moral dilemma.

~~~
tedunangst
That's a dodge. Does the law require running over children or driving off a
cliff?

The law distinguishes between deliberate actions and accidents in which there
isn't time to decide. But a car will not have the luxury of a jury
understanding that it panicked.

~~~
GrumpyBen
The law says you should only leave your lane if it's safe to do so; no law
requires you to sacrifice yourself.

"I panicked" is not an excuse; you need to be in control of your vehicle at
all times. You should never be in a situation where there isn't time to
decide; you need to adjust your speed and safety distance for that.

~~~
loco5niner
Because brakes never fail when going down a steep hill.

------
nmc
The thesis is disturbingly similar to the one detailed in:
[http://www.iflscience.com/technology/should-self-driving-car-be-programmed-kill-its-passengers-greater-good-scenario](http://www.iflscience.com/technology/should-self-driving-car-be-programmed-kill-its-passengers-greater-good-scenario)

Almost feels like plagiarism.

------
matobago
The main problem here is that we are thinking of autonomous cars as though
they were superhuman. A human with an endless amount of time would have to
rely on ethics to make a decision, but not a driverless car.

A computer will rely on an algorithm and, more importantly, on multiple
sensors: not just its own sensors but also those of other autonomous cars.

And there is a huge opportunity here to create even more sensors around
crossings, or to sense people's cellphones or bikes, or static speed sensors
to warn other cars, creating tons of information so the car knows where cars,
bikes, and people are.

I see this as a remote scenario if we start building more sensors on top of
the current highway infrastructure.

------
huuu
It all depends on the motives of the owner.

I think in general the cars will be programmed to avoid any damage.

But what if a company starts providing rides that promise no injury will
happen to the passenger?

That's why laws are needed to prevent this from happening in the future.

Side note: in the Netherlands we have an automated sea barrier [1]. The system
is made automatic so nobody has to make the decision to close it. So yeah,
some systems are programmed to save people's lives even if this means the
lives of others can't be saved.

[1]
[https://en.wikipedia.org/wiki/Maeslantkering](https://en.wikipedia.org/wiki/Maeslantkering)

------
taoufix
TED-ED video about the subject:
[https://www.youtube.com/watch?v=ixIoDYVfKA0](https://www.youtube.com/watch?v=ixIoDYVfKA0)

------
kubiiii
There is also an issue with making different cars from different car makers
cooperate efficiently. When an accident can't be avoided, the cars involved
would need to share information and processing power to behave optimally,
acting as master and slaves.

------
matobago
The car has to have the power to avoid being in a worst-case scenario; it has
more power to calculate the odds before it even gets there. That is why this
should be a DMV call, not the programmers'.

------
pvaldes
The problem here is that we are applying a philosophical and humanist lens to
a technological problem. Machines do not need to be human.

------
fleitz
The driver is presumably the owner of the vehicle, so it should act in the
interests of whom it is serving.

------
rdancer
tl;dr: No.

