
The Robot Car of Tomorrow May Just Be Programmed to Hit You - arikrak
http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/
======
Schwolop
I have personally written algorithms that select paths for 800kg autonomous
vehicles operating around humans and livestock by minimising some cost
function. My cost functions always treated crashes as (effectively) infinite
cost. In the limit case where all possible paths had such an infinite cost,
the failure mode was to revoke any electrical system that could increase the
kinetic energy of the vehicle, apply maximum braking power (and/or any other
subsystem that would serve to reduce the kinetic energy of the vehicle),
disconnect any chemical stores of energy from any activation system (i.e.
stall the engine), and then finally - and as a direct consequence of these
actions - maintain heading/steering angle.

Since we elected to revoke all power systems, we lost the ability to steer.
We were attempting to reduce the kinetic energy of the system to zero as
rapidly as possible (having computed that there were no viable controlled
manoeuvres), and we concluded that the best (and in fact only) option was to
maintain trajectory whilst trying to slow down.
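
To give a feel for the shape of that logic, here's a minimal sketch in Python
(hypothetical names, not our actual code):

    import math

    CRASH_COST = math.inf  # every collision is treated as infinitely bad

    def plan(candidate_paths, cost_of):
        """Pick the cheapest collision-free path, else command an emergency stop."""
        viable = [p for p in candidate_paths if cost_of(p) < CRASH_COST]
        if viable:
            return min(viable, key=cost_of)
        # No controlled manoeuvre is acceptable: shed kinetic energy, don't steer.
        return {
            "drive_power": "revoked",    # cut anything that could add kinetic energy
            "brakes": "maximum",
            "engine": "stalled",         # disconnect chemical stores of energy
            "steering": "hold current",  # heading maintained as a consequence
        }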

As a result, I don't think this argument has merit. There is no concept of
"who should we crash into" because both are unacceptable, and we'd let physics
take over instead.

~~~
stickfigure
Opting to hold direction is a technical limitation of your system; it's not
obvious that disconnecting all power systems is inherently the safest choice -
especially as other vehicles react to the situation. Or at the very least,
there is a long ethical decision tree of probabilities that must be evaluated
before a crash is 100% certain. Do you pick a 48% chance of hitting the Volvo
or a 26% chance of hitting the motorcycle?
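
A quick sketch of what such a probability-weighted choice might look like
(probabilities from the example above; severity weights invented purely for
illustration):

    # Hypothetical expected-harm comparison; severity weights are made up.
    options = {
        "volvo":      {"p_hit": 0.48, "severity": 1.0},  # occupants well protected
        "motorcycle": {"p_hit": 0.26, "severity": 5.0},  # rider far more exposed
    }

    expected_harm = {k: v["p_hit"] * v["severity"] for k, v in options.items()}
    print(expected_harm)  # {'volvo': 0.48, 'motorcycle': 1.3}
    # The lower-probability option is worse once severity is weighted in.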

I predict that the ultimate solution will be to pick the path that optimizes
for survival of the occupants of the automated vehicle. After all, they paid
for it.

~~~
Schwolop
In a manner of speaking, what you say is true, because we classed all forms
of crash as equally bad (infinitely so). And our vehicles were empty of
humans, so we deemed their destruction insignificant compared to harming
anything else. I do still think the approach of treating it as an energy
minimisation problem was appropriate for our situation, but the differences in
the scenarios certainly alter the approach.

------
NPC82
I think this problem is nonexistent for an autonomous car. A situation where
the car has a choice about what to hit would never happen. The only times
humans have that choice are when we don't look far enough ahead or our field
of view is impaired.

Sure, autonomous cars may eventually hit something. But if they do it will be
totally out of their control, by design. We can make them see further, stay
focused, and be cautious. Humans can't always do that, and that's why we have
to constantly choose our lesser evils.

~~~
rando289
It is a real problem, and it should be fairly obvious that it is. More
realistically, it won't be a real-time decision to hit this or that; it will
be that the car is programmed to do behavior X instead of Y, where X is known
to be 30% more likely to hit a pedestrian but 10% less likely overall to be in
any accident. Then you dramatize it by saying the car decided to hit a
pedestrian instead of a car. Just substitute the behavior described in the
story or some variation.
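
A toy numeric version of that tradeoff (all rates invented for illustration):

    # Invented baseline rates per million miles, for illustration only.
    baseline = {"any_accident": 10.0, "pedestrian_hit": 1.0}

    behavior_x = {
        "any_accident": baseline["any_accident"] * 0.9,      # 10% fewer accidents overall
        "pedestrian_hit": baseline["pedestrian_hit"] * 1.3,  # 30% more pedestrian hits
    }
    print(behavior_x)  # {'any_accident': 9.0, 'pedestrian_hit': 1.3}
    # Same data, dramatized: "the car chose to hit more pedestrians."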

~~~
thenmar
I think it will probably be much more complicated than that. Suppose the car
is controlled by an artificial neural network that has been trained with a
combination of real world experience and human-generated driving data. There
is no giant switch statement that someone wrote for the outcome. Who is to
blame then? Would using that data to train the model further be sufficient
accountability?

------
DigitalSea
The kind of scenario Wired proposes, choosing which of two cars to hit based
on which would better absorb the impact or suffer the least harm, isn't the
kind of decision humans make during a crash. I've been in a couple of car
accidents (caused by other people, luckily), and not once during any of those
crashes did I weigh my options between which car to hit or which direction to
swerve. Maybe we will expect more from an autonomous car for its ability to
make these kinds of decisions, but I think it's an argument with a few holes.

A lot of cars currently come standard with sensors that allow them to
automatically parallel park between two other cars on a busy street, smart
cruise control that knows when to brake if the distance between your car and
the car in front gets smaller, and cameras that follow the lines on a road.
It's all there; car technology is already insanely smart.
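
The cruise-control piece, for instance, boils down to something like a
time-to-collision check (a simplified sketch, not any manufacturer's actual
logic):

    def should_brake(gap_m, closing_speed_mps, min_ttc_s=2.0):
        """Brake when time-to-collision drops below a safety threshold."""
        if closing_speed_mps <= 0:  # the gap is steady or growing
            return False
        return gap_m / closing_speed_mps < min_ttc_s

    print(should_brake(gap_m=30.0, closing_speed_mps=20.0))  # True: TTC is 1.5 s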

The question people aren't asking here about autonomous cars is not what
happens if it has to choose between two targets to hit, but rather what
happens when a bug in the software controlling the vehicle rears its head, and
the car gets confused and reacts to something that isn't really happening. A
faulty sensor thinks you're about to hit something when you're driving on a
straight road, and the car swerves into a tree trying to avoid an accident
that wasn't even about to take place. Now that's a scary thought.

That would be my biggest worry. I am aware that modern jetliners are
essentially fly-by-wire except during takeoff and landing, but if the computer
fails, another takes its place, and on the rare occasion redundancy fails, the
pilot and crew are there to manually guide the plane.
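
The usual aviation answer to one faulty sensor is redundancy plus voting; a
minimal sketch of the idea (hypothetical, not any avionics standard):

    from collections import Counter

    def vote(readings):
        """Accept a reading only when a majority of redundant sensors agree."""
        value, count = Counter(readings).most_common(1)[0]
        if count > len(readings) // 2:
            return value
        raise RuntimeError("sensors disagree: fall back to a safe mode")

    print(vote(["clear", "obstacle", "clear"]))  # 'clear': the faulty sensor is outvoted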

The arguments presented here aren't even realistic. Why should autonomous
computers be expected to make such decisions based on environmental factors
like whether a car can handle the impact or not? For accidents where the
outcome will be the same (choosing between two vehicles, for example), does it
matter what the thinking behind the decision is? There is going to be an
accident, and no matter how in-depth the analysis is, depending on the speed
and location, someone is bound to get hurt or killed.

I have a much better idea, and it's far cheaper and more realistic: if an
accident is about to take place, the car slams on the brakes and slides to a
stop. Technology like that already exists in some cars. Swerving is most of
the time the wrong choice as well; you're more likely to cause harm by
swerving in an unpredictable direction than by continuing straight on, unless
of course you can see someone ahead or something on the road, or are about to
have a head-on collision with another car.
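
For a sense of scale, straight-line stopping distance is roughly
v^2 / (2 * mu * g), a back-of-envelope figure where mu depends on tires and
road surface:

    G = 9.81  # gravity, m/s^2

    def stopping_distance_m(speed_kmh, mu=0.7):
        """Approximate braking distance on a flat road, ignoring reaction time."""
        v = speed_kmh / 3.6  # km/h to m/s
        return v ** 2 / (2 * mu * G)

    print(round(stopping_distance_m(50), 1))   # ~14.0 m at city speeds
    print(round(stopping_distance_m(100), 1))  # ~56.2 m at highway speeds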

~~~
sounds
Thank you!

The article asks the question of which way should the car swerve, when physics
indicates that attempting to alter course takes available tire frictional
force away from the braking maneuver.
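
That's the "friction circle": braking and steering share one grip budget, so
force spent turning is force unavailable for slowing down (a simplified model;
real tire dynamics are messier):

    import math

    MU, G, MASS = 0.7, 9.81, 1500.0  # friction coeff, gravity, vehicle mass (kg)
    MAX_FORCE = MU * MASS * G        # total grip available, ~10300 N here

    def braking_capacity_n(steering_force_n):
        """Grip left over for braking once some is spent on steering."""
        return math.sqrt(max(MAX_FORCE ** 2 - steering_force_n ** 2, 0.0))

    print(round(braking_capacity_n(0)))     # ~10300 N: all grip goes to braking
    print(round(braking_capacity_n(8000)))  # ~6488 N: a hard swerve leaves far less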

Most modern roads are designed so that even if an oncoming car is suicidal,
you can make some kind of defensive maneuver to prevent an accident. Not that
it's impossible to have a no-win scenario, but the article makes it sound like
autonomous vehicles will be getting in these kinds of accidents all the time.

The National Safety Council estimates the two leading causes of motor vehicle
accidents in the US are:

      ~40% alcohol-related [1]
       23% distracted driving [2]

There were 34,080 fatalities from motor vehicle accidents in 2012 [3]. If
someone heading home from the bar could just get in the back seat and let the
car take them home, late at night, wouldn't that be worth something?

[1] http://en.wikipedia.org/wiki/Alcohol-related_traffic_crashes_in_the_United_States

[2] http://www.nsc.org/safety_home/MotorVehicleSafety/Pages/MotorVehicleSafety.aspx#distracted%20driving

[3] http://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year

~~~
stickfigure
Perhaps a minor point, but nevertheless: [1] explicitly points out that the
40% number is misleading and that alcohol- _involved_ accidents should not be
construed as alcohol- _caused_ accidents.

~~~
sounds
True. It's always going to be difficult to identify all the causes of an
accident, given the conflicts of interest. Just getting the 40% number was
very difficult: the number of fatalities with non-zero blood alcohol is easy
to find, but the number of accidents with alcohol present gets political way
too fast (which I find informative as the US starts to debate autonomous
vehicles).

------
NAFV_P
> _In the name of crash-optimization, you should program the car to crash into
> whatever can best survive the collision. In the last scenario, that meant
> smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s
> wearing a helmet. A good algorithm would account for the much-higher
> statistical odds that the biker without a helmet would die, and surely
> killing someone is one of the worst things auto manufacturers desperately
> want to avoid._

This could encourage bikers to refrain from donning protective gear.

This reminds me of "Fight Club", where the central (anonymous) character
describes his job in crash investigations. He has to decide whether recalling
all instances of a make of car is less expensive than getting sued by the
drivers in court.

------
dm2
What if an autonomous vehicle sees a pedestrian walking towards the road in
front of the car (X meters ahead, but on a collision course with the car)?

It calculates that if the human continues on this path, they will collide.
Should the car slow down so that it can stop no matter what the human does, or
should it assume that the human will stop?

What if you change human to deer?

Autonomous vehicles should never be in the situation of hitting something.
They should always calculate ahead of time what could possibly move into their
path. If there is an obstacle, then the car should slow down enough to prevent
an accident no matter what happens.
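
Put differently, the car would cap its speed at whatever it can fully shed
within the distance to the potential hazard (a sketch with simplified
physics):

    import math

    MU, G = 0.7, 9.81  # friction coefficient, gravity

    def max_safe_speed_kmh(distance_m):
        """Highest speed from which the car can stop within distance_m."""
        return math.sqrt(2 * MU * G * distance_m) * 3.6

    print(round(max_safe_speed_kmh(20)))  # ~60 km/h if the pedestrian is 20 m away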

I realize that there is a near-unlimited number of use cases that will have to
be accounted for. How would it handle a squirrel that is just sitting in the
road? Stop and honk? Would it ever go around the obstacle by moving into an
oncoming lane if that's deemed safe?

Should they program in the ability to recognize the difference between a piece
of carpet that has fallen onto the road and a board with nails, or should the
car stop in the middle of the street for both?

------
tunesmith
This reminds me of the Trolley Problem [1].

It could be argued it is the difference between utilitarianism and value
ethics. Or, around a recent dinner table, the Star Trek metaphor was "the
needs of the many outweigh the needs of the few" versus the "Prime Directive".
The former would actively choose whatever crash would do the least damage by
some utility function, while the latter would see that a crash would happen
either way and thereby _turn off_ any guidance technology, since it could then
be argued not to be responsible for an active choice in either direction.

[1]
[http://en.wikipedia.org/wiki/Trolley_problem](http://en.wikipedia.org/wiki/Trolley_problem)

~~~
Schwolop
The second option (turning off all guidance) is effectively what I did (see
[https://news.ycombinator.com/item?id=7708105](https://news.ycombinator.com/item?id=7708105)).
Upon recognising that we had no controlled trajectories that were viable, we
just tried to reduce kinetic energy as fast as possible without making any
trajectory changes at all.

------
ArchD
In an unexpected, complex situation for which there is very little training
data, and where the behavioral dynamics are not merely about physics (an
impending accident involves other cars controlled by other entities that can
also respond to this car), any kind of simplistic reasoning like "hit the SUV
because it can survive better" has no scientific basis. It's not clear what is
being optimized for. For example, how do you know that SUVs are not more
likely to be carrying babies, who are more fragile? Hitting the SUV is not
necessarily better in terms of reducing fatalities. Also, the other cars are
responding to this car; this is not about shooting stationary targets.

------
jerf
Another edge case I've wondered about is whether there will be driving
circumstances during inclement weather in which a sensibly-programmed
autonomous car will simply pull over and refuse to operate, even as human
drivers happily drive forth. Autonomous cars will be capable of producing
reasonably objective measurements of driving-condition safety... I wouldn't be
surprised if winter storms routinely produce conditions that are worse than we
humans realize. Or, alternatively, conditions that _we_ realize are dangerous
but choose to drive in anyhow, and that no autonomous car programmer would
ever choose to accept.

------
sdfjkl
This headline got me thinking, what if such a car gets infected with malware?

Let's see. I don't know much about self-driving cars, but I imagine they use
plenty of cameras and image-processing software. That might make them pretty
suitable for running facial recognition software. Now combine that with the
botnets we already have today. The result might be assassination by malware:
upload a set of facial features, distribute it by botnet to all infected cars,
wait until the target gets in range, accelerate.

Optionally, report success and uninstall the malware in the milliseconds
between the inevitable outcome and the actual impact, to destroy the evidence.

~~~
arikrak
Or what if people take advantage of knowing what it will do...

------
bagels
Why program the car to choose to crash into things? This makes no sense. What
if the sensors are wrong and there was no second car?

It seems more prudent to just program it to avoid crashing into things, even
if that is an impossible goal in some circumstances. In those cases, a better
fallback would be to reduce velocity and keep the car on the road or road
shoulder, perhaps?

------
jfoster
"As a matter of physics, you should choose a collision with a heavier vehicle
that can better absorb the impact of a crash, which means programming the car
to crash into the Volvo."

Better for the car that is being hit, but not the car that we have control of.
I'd like my robotic car to hit the car with more crumple or smaller relative
momentum.
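
Momentum conservation backs this up: in a head-on crash, your velocity change
scales with the other vehicle's share of the total mass (an idealized,
perfectly inelastic sketch):

    def my_delta_v_kmh(my_mass_kg, other_mass_kg, closing_speed_kmh):
        """My car's velocity change in an idealized perfectly inelastic head-on."""
        return closing_speed_kmh * other_mass_kg / (my_mass_kg + other_mass_kg)

    print(round(my_delta_v_kmh(1500, 2500, 100)))  # ~62 km/h hitting the heavy SUV
    print(round(my_delta_v_kmh(1500, 1000, 100)))  # 40 km/h hitting a lighter car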

~~~
couchand
You raise an interesting point: does the software controlling an automated car
optimize for the owner of that car (as capitalism would suggest), or does it
optimize for universal utility (as this article and many write-ups assume)?

I'd guess we wouldn't see as significant gains if the cars optimize for their
individual drivers; however, it won't be long before some plucky manufacturer
sees an untapped market and builds a selfish car.
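
The difference can be framed as a single weighting parameter in the crash cost
function (a toy formulation, not anyone's real objective):

    def crash_cost(harm_to_occupants, harm_to_others, selfishness=0.5):
        """Blend occupant harm and third-party harm into one objective.
        selfishness=1.0 is the fully selfish car; 0.5 weighs everyone equally."""
        return selfishness * harm_to_occupants + (1 - selfishness) * harm_to_others

    # The selfish car prefers whatever spares its own occupants:
    print(crash_cost(harm_to_occupants=2, harm_to_others=8, selfishness=1.0))  # 2.0
    print(crash_cost(harm_to_occupants=2, harm_to_others=8, selfishness=0.5))  # 5.0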

------
glomph
The other day I was thinking how easy it would be to DoS a robotic car. You
could just walk in front of it and then out of the way again, over and over.
How would they deal with malicious pedestrians?

~~~
IvyMike
Of course you could already do this to a human-driven car. In which case you'd
call a human cop.

And thus the answer becomes obvious. Robocar is DoS'd? Call Robocop.

------
angersock
The basic thought experiment is this: "If the robot car that hit me on my bike
did so with provably the best intentions, should I still be angry?"

At least with a person driving, I can criticize their thought patterns, their
reactions, their choices--I can laud them for dodging the stroller, or curse
them for drinking and swerving into me (provided I can still talk).

With a machine, though, we lose any chance at morality. They act exactly as
programmed, and we all know how reliable programs are.

If, following such an accident, I were to be in a wheelchair for the rest of
my life, I would want the entire stack trace that led up to my collision. I
would want the unabridged source code and issue trackers. I would want the
core dump. I would want every scrap of information that went into the
vehicle's decision to zig instead of zag.

I'm not sure that we want to hand over so much blanket responsibility to
black-boxes designed by committee and implemented by antisocial 20-something
engineers working weird hours.

EDIT:

Reflecting on this a little suggests, perhaps, that we should have one
standard algorithm and package for doing this sort of thing, open-source and
in the public domain.

There shouldn't even be a question about allowing closed-source autonomous
vehicles on the road.

~~~
jotux
>implemented by antisocial 20-something engineers working weird hours.

I feel like this is a myth we should actively work to squash. The vast
majority of engineers and coders are 30- and 40-somethings with normal social
lives and families, working 9-5 on weekdays.

~~~
FatalError
And even if they were, surely this is only a bad thing in that it might
indicate that some potentially good engineers had been excluded from the
workforce.

Is reverse-bigotry really so far gone that some people genuinely think that
being an "antisocial 20-something" makes a person a worse engineer?

~~~
dagw
One of the key factors in becoming a good engineer is experience. A
20-something, antisocial or otherwise, is basically by definition lacking in
this experience. Sure, they may be really, really smart and write incredibly
clever code, but it takes more than that to be a good engineer.

