
Trolley problem - networked
http://en.wikipedia.org/wiki/Trolley_problem
======
zo1
Has anyone considered the first option, i.e. not actually touching the lever?
At least in the human version of the problem, not the one related to
autonomous vehicles. If so, what do you think of it?

I'm of the opinion that touching the lever makes you liable for the
death/harm of the single individual. Sure, you "saved" five people and
prevented them from coming to harm, but you knowingly decided to harm someone
completely innocent in all of this. In my book, that's as bad as setting the
trolley loose in the first place: it's no longer an accident, but a
deliberate (and arguably malicious) act.

~~~
eloisant
This is precisely what the "trolley problem" aims to highlight, and why the
question is interesting rather than just being answered "obviously you'll
choose one death rather than five" by everyone.

------
kens
This parody of the trolley problem is amusing:
[http://www.mindspring.com/~mfpatton/Tissues.htm](http://www.mindspring.com/~mfpatton/Tissues.htm)

------
brudgers
The problem with the trolley problem is that it discounts time. In the time
between throwing (or not throwing) the switch and the trolley reaching the
potential victim(s), our experience gives us some small hope that the
victim(s) might be rescued by others, save themselves, or be spared by a
trolley malfunction. There is still time for good fortune to replace the bad.
The problem does not account for our ordinary intuition in this regard.

This explains the apparent divergence in the Fat Man version: if we push him
off the bridge, there is no time for good fortune to save him, but there is
still time for the group to escape, preserve themselves, or get lucky.

------
plg
I wonder how Tesla software will handle this problem.

Also who will get sued when it actually happens.

~~~
ghaff
We're far, far away from autonomous driving software being at a state where
this is an interesting question. And, PR aside, Tesla's latest update is the
same sort of assistive driving tech that's already in a number of high-end
cars. It's not self-driving, in that the driver is supposed to remain alert
(and remains fully responsible for being in control of the vehicle).

We do seem to be rapidly heading toward an "interesting" place where assistive
driving technologies are good enough that a lot of people realistically won't
pay attention while driving (given that plenty of people text and use their
phones even without such technologies).

~~~
cjbprime
You seem overly pessimistic about autonomous driving software; it seems to me
like Google's driving software could easily already be at a point where it
might be able to consider the choice between hitting pedestrians crossing the
street, or purposefully crashing the car.

~~~
ghaff
As a _general_ approach, I can certainly see assistive driving algorithms
making decisions about whether to veer off the road vs. possibly hitting
something ahead, i.e. how do you prioritize staying in the lane vs. avoiding a
known/likely collision. One implication is that you'd sort of like to know
what "off the road" means in a given location.

~~~
walshemj
And what happens if you have an out-of-date map and the car thinks it's
swerving into an empty lot, but it's now a kids' playground?
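
The trade-off described above (stay in the lane vs. swerve, with the swerve
cost depending on what the map says is beside the road) can be sketched as a
toy cost comparison. Everything here is invented for illustration: the names,
the `Situation` fields, and the harm costs are assumptions, not anyone's
actual driving software.

```python
# Hypothetical sketch of a cost-based maneuver chooser. Real systems would
# use calibrated risk models, not hand-picked constants.

from dataclasses import dataclass

# Assumed expected-harm cost of leaving the road, by mapped terrain type.
# An out-of-date map is exactly the failure mode raised above: the map says
# "empty_lot" while reality is "playground".
OFFROAD_COST = {
    "empty_lot": 1.0,      # low expected harm
    "guardrail": 5.0,      # damage to vehicle/occupants
    "playground": 1000.0,  # unacceptable pedestrian risk
    "unknown": 1000.0,     # treat unmapped terrain as worst case
}


@dataclass
class Situation:
    collision_probability: float  # chance of hitting the obstacle ahead
    collision_cost: float         # expected harm if the collision happens
    offroad_terrain: str          # what the map claims is beside the lane


def choose_maneuver(s: Situation) -> str:
    """Pick whichever action has the lower expected harm."""
    stay_cost = s.collision_probability * s.collision_cost
    swerve_cost = OFFROAD_COST.get(s.offroad_terrain, OFFROAD_COST["unknown"])
    return "stay_in_lane" if stay_cost <= swerve_cost else "swerve"
```

For example, with a likely collision ahead and a mapped empty lot beside the
lane, `choose_maneuver(Situation(0.9, 100.0, "empty_lot"))` picks `"swerve"`;
swap the terrain for `"playground"` and it stays in the lane, which is why
knowing what "off the road" means at a given location matters so much.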

------
nakedrobot2
KILL FEWER PEOPLE.

What is the problem, here?

~~~
gweinberg
Overpopulation?

------
petersouth
...only a real question if the identities of the people are known.

------
simonh
Who needs karma anyway:

[http://xkcd.com/1455/](http://xkcd.com/1455/)

------
hurin
A circle jerk of academics salaried for writing useless things about a
hypothetical situation.

~~~
m_y-n_a_m_e
"Sidetracked by trolleys: why sacrificial moral dilemmas tell us little (or
nothing) about utilitarian judgment". Soc Neurosci. 2015 Mar 20:1-10. [Epub
ahead of print] -
[http://www.ncbi.nlm.nih.gov/pubmed/25791902?dopt=Abstract](http://www.ncbi.nlm.nih.gov/pubmed/25791902?dopt=Abstract)

tldr; ".... Sacrificial dilemmas therefore tell us little about utilitarian
decision-making". Paper proposes to "studying proto-utilitarian tendencies in
everyday moral thinking" \- seems a good option to choose.

