
Should driverless cars kill their own passengers to save a pedestrian? - metasean
http://qz.com/536738/should-driverless-cars-kill-their-own-passengers-to-save-a-pedestrian/
======
informatimago
Obviously not. The pedestrian is responsible for his own movements and can act
to avoid the car. The passenger, on the other hand, is prevented from taking
any action. Therefore he should be protected at all costs by the car.

Now if the driverless car is empty, then of course, it should gladly destroy
itself to avoid hurting human pedestrians, and even any animal or robot.

If the driverless car transports a human or animal passenger, then it should
avoid hurting them, even if that means hurting external humans, animals or
robots.

But if the driverless car transports a robot, then it should avoid hurting it,
even if that means hurting an external robot. But it may hurt its passenger
robot to avoid hurting an external human or animal.
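The hierarchy above can be sketched as a small decision function. This is only an illustration of the rules as stated; the `Party` enum and `party_to_protect` names are hypothetical, not anything a real car runs.

```python
from enum import Enum, auto

class Party(Enum):
    """Kinds of parties involved (hypothetical names for illustration)."""
    HUMAN = auto()
    ANIMAL = auto()
    ROBOT = auto()

def party_to_protect(passenger, pedestrian):
    """Return which party the car protects when it cannot spare both.

    `passenger` is None for an empty car. Encodes the hierarchy above:
    a human or animal passenger is protected at all costs; a robot
    passenger yields to an external human or animal but not to an
    external robot; an empty car always sacrifices itself.
    """
    if passenger is None:
        # Empty car: gladly destroy itself rather than hurt anyone.
        return "pedestrian"
    if passenger in (Party.HUMAN, Party.ANIMAL):
        # Human or animal passenger: protected even over external humans.
        return "passenger"
    # Passenger is a robot: it may be hurt to spare an external
    # human or animal, but not to spare an external robot.
    if pedestrian in (Party.HUMAN, Party.ANIMAL):
        return "pedestrian"
    return "passenger"
```

For example, `party_to_protect(Party.ROBOT, Party.HUMAN)` yields `"pedestrian"`, while `party_to_protect(Party.ROBOT, Party.ROBOT)` yields `"passenger"`.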

Now, assuming some means of communication between the driverless car, the
external robot, and the passenger robot: if it is absolutely impossible to
find any combined action of the external robot and the driverless car that
avoids hurting both of them, then a solution that hurts one or the other
might be chosen, depending on some criterion of "importance" for each robot.
For example, if one or both of them carry information or transport materials
critical to saving the lives of humans or animals in an immediate situation
(the reason why one, the other, or both robots are moving), then the choice
should be to keep moving the one with the greater impact.

For example, suppose the robocar carries a robot transporting the antitoxin
that will neutralize a biobomb that could kill 100,000 humans in ten minutes,
while the external robot transports the bronchodilator that will save an
asthmatic child, and there is no way the robocar and the external robot can
avoid each other without one of them being hurt. Then the passenger robot
should take priority and the robocar should optimize its own safety.
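The robot-versus-robot tie-break reduces to comparing the human impact of each cargo. A minimal sketch, assuming the two cars can exchange an agreed impact score (the `negotiate` function and the scores are hypothetical; the numbers are the scenario's own):

```python
def negotiate(passenger_impact, external_impact):
    """Decide which robot to prioritize when harm is unavoidable.

    Each impact score is the number of human lives the robot's cargo
    can save in the immediate situation. Ties favor the passenger,
    consistent with the hierarchy above.
    """
    return "passenger" if passenger_impact >= external_impact else "external"

# Antitoxin for 100,000 people vs. bronchodilator for one child:
# the passenger robot takes priority.
choice = negotiate(100_000, 1)
```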

------
mobiuscog
The problem with this whole argument is that it supposes the self-driving car
has allowed itself to get into the situation in the first place.

For example, the cliff / people scenario suggests that the car was unable to
_see_ the problem until it was too late.

Obviously, the answer is to make sure that the sensor range makes these
situations almost impossible. There will still be an inevitable situation of
some sort, but I suggest it won't be quite the simplistic 'moral' problem that
is constantly raised.

~~~
metasean
From the article:

> Of course, cars will very rarely be in a situation where it [sic] there are
only two courses of action, and the car can compute, with 100% certainty,
that either decision will lead to death. But with enough driverless cars on
the road, it’s far from implausible that software will someday have to make
such a choice between causing harm to a pedestrian or passenger. Any safe
driverless car should be able to recognize and balance these risks.

An interesting example would be a fast-developing natural disaster, such as an
earthquake, that causes falling debris [1] and road damage [2] and as a result
quickly and drastically changes available options for many different vehicles
and pedestrians in a short amount of time.

[1]
[https://duckduckgo.com/?q=earthquake+falling+debris&t=ffsb&i...](https://duckduckgo.com/?q=earthquake+falling+debris&t=ffsb&iax=1&ia=images)
[2]
[https://duckduckgo.com/?q=earthquake+road+damage&t=ffsb&iax=...](https://duckduckgo.com/?q=earthquake+road+damage&t=ffsb&iax=1&ia=images)

------
mariodiana
Whatever the course of action, whoever is ultimately behind it is going to get
sued by the survivors of the party harmed by the programming.

