
The moral dilemma of self-driving cars that choose who to sacrifice in accidents - gnicholas
http://www.cbsnews.com/news/moral-dilemma-of-self-driving-cars-which-lives-to-save-in-a-crash/
======
gnicholas
One big problem with the utilitarian approach is that it could enable human
bad actors to cause a vehicle to choose to kill its passengers. For example,
if an autonomous vehicle is crossing a bridge and two people dart out in front
of it, the vehicle might decide that instead of having two pedestrians die in
a collision, it's better to have just one person (the passenger) perish, and
drive off the side of the bridge.
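
Purely as a sketch (the choose_maneuver function and all numbers here are
hypothetical, not from any real autonomous-vehicle stack): a planner that
minimizes nothing but expected deaths will always trade its one passenger for
two pedestrians, no matter how those pedestrians came to be in the road,
which is exactly the opening a bad actor needs.

    # Hypothetical utilitarian planner: pick whichever maneuver has the
    # fewest expected deaths. Names and numbers are illustrative only.
    def choose_maneuver(maneuvers):
        # maneuvers: list of (name, expected_deaths) pairs
        return min(maneuvers, key=lambda m: m[1])

    bridge_scenario = [
        ("brake_straight", 2),     # hits the two people who darted out
        ("swerve_off_bridge", 1),  # kills the single passenger
    ]
    print(choose_maneuver(bridge_scenario))  # ('swerve_off_bridge', 1)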

While there might be some people who are OK with a utilitarian approach to
deciding who lives and who dies, this concept is much less appealing when
relative negligence (or malfeasance, as above) comes into play. If a drunk
adult jumps into the street, should autonomous vehicles swerve and cause tens
of thousands of dollars in property damage (compensated by whose insurance?)
just to avoid the negligent drunk? This is a closer question than the first
scenario, but it reveals that pure utilitarianism, with no sense of fault or
intentionality, is insufficient.
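
To make the objection concrete (weighted_cost, the dollar figures, and the
fault percentages are all invented for illustration, not a proposed policy):
once each harm is discounted by the harmed party's fault, the "right"
maneuver can flip, and the whole decision turns on a weight that nobody knows
how to set.

    # Hypothetical extension: discount each harm by how much the harmed
    # party was at fault (0 = blameless, 100 = fully at fault). All
    # dollar figures and fault percentages below are invented.
    def weighted_cost(parties):
        # parties: list of (harm_in_dollars, fault_pct) pairs per maneuver
        return sum(harm * (100 - fault_pct) // 100 for harm, fault_pct in parties)

    hit_pedestrian = [(1_000_000, 96)]  # grave harm, pedestrian at fault
    swerve_into_cars = [(50_000, 0)]    # property damage, owners blameless

    # Pure utilitarianism (fault ignored) says swerve: 50,000 < 1,000,000.
    # The fault-weighted version says the opposite: 40,000 < 50,000.
    print(weighted_cost(hit_pedestrian))    # 40000
    print(weighted_cost(swerve_into_cars))  # 50000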

