
A self-driving car might decide you should die - hmsimha
https://medium.com/backchannel/reinventing-the-trolley-problem-85f3d1730756
======
_benedict
There's actually a much more exciting corollary to this though: the car can
choose to crash in whatever way minimizes the risk of injury to you and
others. This may entail deliberately crashing into the known best crumple
zones, even at the optimal angle to ensure there is reduced risk of
impingement of your limbs, say - perhaps even tailored to knowledge of your
shape. It would know you have no passengers, perhaps, and so choose to
cannibalize your rear by either spinning sharply (if this is safe, which
admittedly it's probably not, unless perhaps the road surface has lost grip
and you cannot decelerate effectively using brakes) or - if a pile-up is
likely - braking more sharply to begin the crash with the vehicle behind
ahead of schedule. There could be cooperative crashing, with all vehicles
minimizing the occurrence of collisions and maximizing crumple-zone
utilization. I have no idea whatsoever what is or isn't safe or possible, but
the cars can be made to know.

These things are all a long time away, but it's actually far more exciting
than terrifying.

~~~
brador
Scenario: two passengers in your car, four in the other. Head-on crash
incoming. Let's say the AI has two options:

A - You both die, one passenger in the other car dies. B - You survive, all
four in the other car die.

What should the AI do?

Should the AI be set to value the driver's and occupants' lives above all
others, no matter what? Or do we set an equation? 1 driver = 2 others? 3
others? 4? Maybe the AI should ascertain fault and make a decision on that?
("if you're to blame you get a x2 weighting to die")
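A minimal sketch of what such a weighting equation could look like in code. The cost function, weights, and scenario numbers are all hypothetical illustrations of the idea above, not anything any vendor actually implements:

```python
# Hypothetical "weighting equation": pick the option with the lowest
# weighted death count. Weights and the scenario are invented examples.

def choose_option(options, occupant_weight=1.0, other_weight=1.0):
    """Return the option whose weighted death count is lowest."""
    def cost(opt):
        return (opt["occupant_deaths"] * occupant_weight
                + opt["other_deaths"] * other_weight)
    return min(options, key=cost)

# The scenario above: A kills both occupants plus one person in the
# other car; B kills all four people in the other car.
options = [
    {"name": "A", "occupant_deaths": 2, "other_deaths": 1},
    {"name": "B", "occupant_deaths": 0, "other_deaths": 4},
]

print(choose_option(options)["name"])                       # equal weights: A
print(choose_option(options, occupant_weight=2.0)["name"])  # "1 driver = 2 others": B
```

Note how the answer flips with the weights, which is exactly why choosing the equation is the hard part, not writing the code.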

My guess is Google is ignoring this possibility and working toward the car
never crashing. But at 55mph on ice, a loss of traction and control is always
a possibility.

See, once we have AI making decisions, then we must also implement
responsibility chains. Who takes the fall when decision-making AI goes bad?
The code writer? The code-owning corp? The vehicle owner? The CEO?

In general: AI = good to go, decision-making AI = we are not ready.

~~~
dragonwriter
We already have rather detailed rules (product liability law) for
responsibility for products; and I've yet to see anything that points to any
specific shortcoming of them when the product is an "AI making decisions".

I don't really understand why people act like "this product caused harm when
used for its intended purpose, should someone be accountable and if so who?"
is a question that society hasn't thoroughly considered.

~~~
brador
Dumb products (for which the established laws apply) don't make decisions. AI
does. That's what makes it AI, and it's gonna need new rules and regulations.

~~~
dragonwriter
Even to the extent that the ways putative future AIs might "make decisions"
might be more like the way humans do than the way existing products respond to
stimulus and produce results, our body of liability law, thanks in part to the
legacy of less egalitarian times, already has ample precedent for the
treatment of responsibility for the results of human decision making by
entities that are legally property rather than legal persons.

To the extent that more enlightened times might see actors with that kind of
independence as legal persons even if they are manufactured, well, we have
personal liability law already, which also addresses agency relations, etc.
where one legal person might be responsible for the acts of another.

So, again, rather than vague handwaves at poorly defined distinctions, I'd
like to hear those arguing that liability law is a real imminent problem with
AI that will require major and fundamental change to do something to specify
the specific ways in which existing law is inadequate.

------
warmfuzzykitten
This is a clickbait meme that has been done to death.

There is no reason to assume that self-driving cars won't behave as selfishly
as human drivers. That is, act in the self-interest of the passengers in all
but the most extreme cases, most of which will be beyond the capability of the
AI (or human driver) to understand. Like a human, an AI might be expected to
stop suddenly if a child runs into the street, even at the risk of being rear-
ended, because the risk to the passengers is minor. In the same circumstance,
one would not expect an AI to drive off a cliff to avoid the child.

------
gus_massa
Previous discussion (it doesn't have many comments, but the ones there are
interesting):
[https://news.ycombinator.com/item?id=9584026](https://news.ycombinator.com/item?id=9584026)
(9 points, 60 days ago, 5 comments)

------
headstrongboy
The problem with this article is that it does not take into account
consequences. With human drivers, the consequences of hitting another driver
or pedestrian fall on the driver, while for automated vehicles the
consequences will fall on a few companies. Companies are rarely held
accountable for their actions, as seen with the GM ignition switch deaths,
except for a few fines and lawsuits, while a driver can be arrested and tried
under criminal law for their actions. A person also has to live with the fact
that they killed someone, while for companies it will be a statistic.

------
jmnicolas
Ah, the endless protectionist argument: let's take all control away from your
life so that we can save lives.

The road to hell is paved with good intentions.

I'm not saying people dying is a good thing and nothing should be done to
prevent it, but we have to be reasonable about it.

No matter what we do, people will eventually die, so let's not transform our
lives into something so boring that the only non-natural cause of death will
be suicide.

------
kardos
Banks get hacked, governments get hacked, purveyors of malware get hacked,
sleazy websites get hacked.... any reason to expect that self-driving cars
won't get hacked? Something as complex as autonomous pilot software surely has
plenty of attack surface, which will be promptly exploited by scum like
HackingTeam to provide remote control/monitoring abilities to shady customers.

------
malandrew
I honestly don't think this will ever be the case because it relies on two
major assumptions: (1) there is some parent process in control of both cars
involved in an accident. (2) all cars implement the same AI.

What is far more likely is a future in which there are many different types of
self-driving AIs on the road from many different manufacturers, each of which
will have different algorithms for all aspects of autonomous driving.
Ultimately, this diversity will lead to carmakers designing algorithms that
speculate on what another car might do. This is likely because the
coordination costs between n different automakers' autonomous systems will
rise with the square of n. In other words, unless we end up with a monoculture
or only two or three different systems on the road, coordination just isn't
going to happen. The second issue is that even if the automakers could
coordinate, such coordination would rely on vehicle-to-vehicle communication
of the actions each car wants to take. Coding the decision making of other
cars into each car, so it can anticipate them onboard, makes no sense because
your vehicle would not have access to the sensor data needed to resolve the
other car's decision. Lastly, with V2V communication, negotiation, and
decision making, the impact of latency is likely to be so costly to making the
best decision that any attempt at V2V negotiation will either be infeasible or
limited to an extremely simple model, such as each car proposing n strategies
(where n is 2-3) and each car deciding on the best compatible
option. This could keep negotiation fast enough to make a good decision.
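For what it's worth, that "extremely simple model" could look something like the following sketch: each car broadcasts a few candidate maneuvers with a self-estimated risk score, and both deterministically pick the compatible pair with the lowest combined risk. The maneuver names, risk scores, and compatibility table are all made-up illustrations, not any real V2V protocol:

```python
# Hypothetical single-round V2V negotiation: exchange 2-3 proposals,
# then both cars run the same deterministic selection over them.
from itertools import product

# (maneuver, self-estimated risk to own passengers) proposed by each car
car_a = [("brake_hard", 0.3), ("swerve_left", 0.5)]
car_b = [("brake_hard", 0.4), ("swerve_right", 0.2)]

# Maneuver pairs that would put the cars on a collision course
incompatible = {("swerve_left", "swerve_right")}

def negotiate(proposals_a, proposals_b):
    """Return the compatible pair of maneuvers with the lowest combined risk."""
    candidates = [
        (ma, mb, ra + rb)
        for (ma, ra), (mb, rb) in product(proposals_a, proposals_b)
        if (ma, mb) not in incompatible and (mb, ma) not in incompatible
    ]
    return min(candidates, key=lambda c: c[2])

print(negotiate(car_a, car_b))  # -> ('brake_hard', 'swerve_right', 0.5)
```

Because both cars run the same deterministic selection over the exchanged proposals, they agree on the outcome without further round trips, which is what keeps latency bounded.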

Regardless of whether there is negotiation or merely real-time guessing about
what the other car might do, the only thing each autonomous system designer
can consider is the well-being of their passengers. All vehicles involved in
an imminent accident acting in the self-interest of their passengers is likely
to be the optimal solution. The only improvement I can see is some negotiation
where some idea of what decisions the other car might make allows each vehicle
to make slightly better decisions to preserve the well-being of its
passengers.

People just like bringing this up because it lets them discuss an interesting
philosophical question about morality. Once these discussions encounter the
realities of how autonomous cars are likely to evolve and penetrate the
market, it will be painfully obvious that this thought exercise is futile.

------
cabirum
I'll go with a custom firmware which saves driver's life unconditionally.

~~~
MatthewWilkes
How does the firmware save you from the prison shank?

------
colin_jack
Pity the title refers to an interesting discussion point that isn't discussed
till quite late in the article, and then only fleetingly.

