
An Autonomous Car Might Decide You Should Die - rahmaniacc
https://medium.com/backchannel/reinventing-the-trolley-problem-85f3d1730756
======
Nadya
I've found that the problem is that too few people enjoy reasoning and
critical thinking. They prefer to listen to the fear-mongering and let their
"leaders" think for them.

Let's give humans a hypothetical accident rate. Let's say... 10%. For every 10
drivers that day - one will get into an accident. Once a week, that accident
will be fatal, resulting in 52 deaths/year. An autonomous vehicle has a 0.1%
chance of an accident. For every 1,000 vehicles driving that day - one will
get into an accident. Once a week, that accident will be fatal, resulting in
52 deaths/year.

Both scenarios result in 52 deaths/year, so neither looks better than the
other - or so people are being led to believe. The framing hides the 10 vs.
1,000 difference in the number of drivers. Scale it up to 1,000 humans
driving, and 100 will get into an accident each day. Each week, 100 of those
accidents will be fatal. There are now 5,200 deaths/year.

52 deaths/year should be something everyone desires over 5,200 deaths/year.
Even better - autonomous vehicles can improve faster than humans. Eventually
there may only be 5.2 deaths/year. Then 0.52 deaths/year. Then 0.052
deaths/year. Meanwhile, humans are a bit more consistent... there will likely
be 5,200 deaths/year _every_ year without seeing any improvement.
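The arithmetic above can be checked with a quick back-of-the-envelope sketch. The rates, fleet sizes, and one-fatal-accident-per-week ratio are the hypothetical numbers from this comment, not real-world statistics:

```python
# Hypothetical numbers from the comment above, not real statistics.
# One accident per day per (1/rate) drivers; one of each week's
# accidents is fatal; 52 weeks per year.

def deaths_per_year(drivers, daily_accident_rate):
    accidents_per_day = drivers * daily_accident_rate
    fatal_per_week = accidents_per_day  # one fatal accident per week per daily accident
    return fatal_per_week * 52

# Mismatched fleet sizes make the two look equal:
assert deaths_per_year(10, 0.10) == 52      # 10 human drivers
assert deaths_per_year(1_000, 0.001) == 52  # 1,000 autonomous vehicles

# Same fleet size, and the gap appears:
assert deaths_per_year(1_000, 0.10) == 5_200  # 1,000 human drivers
assert deaths_per_year(1_000, 0.001) == 52    # 1,000 autonomous vehicles
```

Holding the fleet size constant is the whole trick: at equal scale, the two scenarios differ by a factor of 100.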

The faster we make _any_ improvement over human drivers - the more lives are
saved.

If your state senators are trying to ban autonomous vehicles, remind them that
they are contributing to people's deaths. Any improvement over humans that
reduces these deaths should be desired - not banned.

------
thoman23
I think many people assess risk differently when they are passive vs. active
participants. When you are the driver of a car, you tell yourself that you can
avoid any possible accident and it's only "other people" that will be part of
those unfortunate 30,000 deaths a year. When you are a passive participant you
lose that sense of control and all of a sudden feel yourself to be more at
risk. That's a big reason why so many people fear air travel more than driving
their car.

That said, I hope we find a way past this irrationality. I look forward to a
future of fully autonomous vehicles.

------
shopinterest
The other mental block is that people somehow trust/prefer 'randomness' vs.
determined outcomes from a machine. But it shouldn't be so.

E.g. with people-driven cars, when an accident happens (let's say 1 passenger,
1 driver in each vehicle), the result is random - anywhere from all four dead
to all four alive. If a self-driven, intelligent vehicle determines that a
sudden shift to the right as the crash happens will save 3 people while one
dies (and this was the best outcome under all the scenarios/simulations the
car ran given the circumstances of the crash), then the car determined who
would be the casualty, as opposed to letting the accident happen at random,
which could result in all dead, all alive, or anything in between.
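The "pick the least-bad outcome" logic described here boils down to choosing the maneuver with the fewest simulated casualties. A minimal sketch - the maneuver names and casualty counts are invented for illustration, and a real system would score many thousands of simulated trajectories:

```python
# Illustrative only: hypothetical maneuvers and casualty estimates,
# not a real autonomous-driving API.

def choose_maneuver(simulated_outcomes):
    """Pick the maneuver whose simulated outcome has the fewest casualties."""
    return min(simulated_outcomes, key=simulated_outcomes.get)

outcomes = {
    "brake_straight": 4,
    "swerve_left": 2,
    "swerve_right": 1,  # the comment's "sudden shift to the right"
}
best = choose_maneuver(outcomes)  # "swerve_right"
```

The point of the comment is exactly this: a deterministic minimum over simulated outcomes replaces the random draw of a human crash, and that's what people find unsettling.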

As humans with subjective POVs... we somehow prefer randomness and see it as
the 'natural way', 'God's will', 'Destiny', 'Fate', etc.

In the near future, algorithms will make a large number of life-and-death
decisions. We just have to get comfortable with the statistics and results.
Machine results > human results > random results - so autonomous cars and
other upcoming vehicles will have to make these calculations every day; we'll
just have a hard time letting go of randomness, or of the overrated value of
human decision-making during a car crash.

------
topkai22
Interesting points. The problem with discussing the trolley problem as it
applies to autonomous cars is that it conflates possibilities of the future
with decisions that need to be made today. Today's autonomous cars are not
networked and not particularly smart. They are unlikely to know much about
their own occupants, let alone anything about the occupants of other vehicles.
Their behavior in emergency situations is likely to be "avoid hitting/being
hit by a solid object" and "if I have to hit a solid object, let's try not to
make it a person or vehicle", although that second one may be a future safety
improvement. We can largely avoid the question of what the moral behavior here
is (at least in the short term) by pointing out that the machines aren't even
capable of understanding the world well enough to adapt their behavior, but
are still (as the author points out) far safer in aggregate than
non-autonomous systems.
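The two emergency rules described above - avoid solid objects, and failing that prefer not to hit a person or vehicle - amount to a simple priority ordering. A sketch, with categories and costs invented for illustration:

```python
# Hypothetical cost ordering, illustrating the two rules above:
# prefer an empty path; failing that, prefer an inanimate object
# over a vehicle, and a vehicle over a person.
IMPACT_COST = {
    "nothing": 0,  # clear escape path
    "barrier": 1,  # inanimate solid object
    "vehicle": 2,
    "person": 3,
}

def pick_path(path_obstacles):
    """Choose the path whose obstacle carries the lowest impact cost."""
    return min(path_obstacles, key=lambda obstacle: IMPACT_COST[obstacle])

# Even with every path blocked, the rule avoids people and vehicles:
assert pick_path(["person", "vehicle", "barrier"]) == "barrier"
```

Note this ordering needs no knowledge of occupants or networking - which is the comment's point about what today's cars can actually do.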

~~~
rahmaniacc
Do you think it is somehow going to be better when there are more autonomous
cars on the road (all networked) and the behavior of the other cars on the
road is easily predictable? Then machines wouldn't have to adapt to the
behavior of human beings, only to other machines (which seems a lot easier).
Seems rather utopian to me, though.

