
Self-Driving Mercedes Will Prioritize Occupant Safety Over Pedestrians - Turing_Machine
http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/
======
smallnamespace
Relevant comic:

[http://www.smbc-comics.com/comic/self-driving-car-ethics](http://www.smbc-comics.com/comic/self-driving-car-ethics)

And an actual extended trolley dilemma, wrapped in a joke:

[http://www.smbc-comics.com/?id=3556](http://www.smbc-comics.com/?id=3556)

Ethically, are all people's lives equal? If not, is it wise to encode it into
our robots?

From the article, the fact that people won't use a car that favors the children
over the occupants is interesting, and perhaps justifiable. After all, humans
favor their own families and friends -- maybe it's equally reasonable for
robots to prioritize the safety of their owners first.

~~~
eyelidlessness
Ethically, empowering machines to make ethical decisions is not worth a
discussion. The reason ethics remains a philosophical pursuit is that it's
unsolved. Humans can debate the merits of a given ethical philosophy, but
machines cannot stop and contemplate the consequences.

We have never stopped engaging in this discussion. Encoding it into circuits
and setting it loose is not participation, it's profit.

For whatever is wrong with modern politics, and most of it is wrong, it isn't
a closed book. We deserve better than that.

~~~
smallnamespace
If we continue to pursue autonomous technologies, machines will
inevitably be put in situations where they make life and death decisions for
us, because there isn't a human around to make those split-second decisions in
real time.

So encoding our ethics is going to happen whether we like it or not. It's just
a question of whether we as a society talk it out and make these ethical
choices explicitly, or wait for corporations to make those choices for us.

~~~
eyelidlessness
That's my point. It's unsolved. We are allowing corporations to make choices
for profit that humans have not resolved for ethics. We should categorically
reject these sorts of automations because they present a danger we can't
determine, according to the ways we evaluate danger.

~~~
Turing_Machine
"they present a danger we can't determine"

Why not? Counting the number of deaths per passenger-kilometer seems pretty
straightforward.

What if self-driving cars wind up killing fewer pedestrians than manual cars,
even if occupant safety is prioritized?

------
wmf
I wonder what this actually amounts to at the code level. Is it even possible
to predict accident outcomes reliably? If not, the trolley problem isn't even
relevant and there is no need to decide.

Given that they are planning to reduce accidents by 99%, it seems like
political suicide to even take a position about the few remaining cases. I
know this is counterintuitive, but I wonder if it would appear more fair for
the car to decide randomly in those cases.

~~~
smallnamespace
> it seems like political suicide to even take a position about the few
> remaining cases

But these cases eventually _will_ happen in real life, at which point whatever
the car chose effectively becomes their 'position'.

> Is it even possible to predict accident outcomes reliably? If not, the
> trolley problem isn't even relevant and there is no need to decide.

That's just kicking the can down the road. As we collect more data and
technology improves, we will know the statistical distribution of outcomes
under various scenarios.

> if it would appear more fair for the car to decide randomly in those cases

Random choice is still a decision. What if there are 4 occupants? Should you
change the weighting? If not, then you are implicitly saying 1 life is equal
to 4, which doesn't make sense.
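
To make the parent's point concrete, here's a hypothetical sketch (the function name, labels, and policy are entirely invented, not drawn from any real system): even a "random" tiebreaker is an encoded valuation of lives, because whatever probability you pick is an implicit exchange rate between the two groups.

```python
import random

def choose_maneuver(n_occupants, n_pedestrians, weight_by_headcount=False):
    """Return which group bears the risk in an unavoidable collision.

    A uniform coin flip looks 'neutral', but with 1 occupant and
    4 pedestrians it implicitly values 1 life as equal to 4 -- the
    weighting itself is an encoded ethical position.
    """
    if weight_by_headcount:
        # The larger group is proportionally less likely to be put at risk.
        p_risk_occupants = n_pedestrians / (n_occupants + n_pedestrians)
    else:
        p_risk_occupants = 0.5  # the "random" policy -- still a choice
    if random.random() < p_risk_occupants:
        return "occupants"
    return "pedestrians"
```

Either branch bakes a moral position into the code; changing `weight_by_headcount` just swaps one position for another.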

~~~
wmf
My point is that the car (really the programmers) should avoid choosing at
all. The difference between "the car is programmed to kill pedestrians" and
"pedestrians are statistically more likely to be killed in accidents" is
intent, which has a lot of ethical and legal weight.

Also, if your code has a decide_who_to_kill() function, Dieselgate starts to
look like a faux pas in comparison.

~~~
ehnto
What would you propose the software do in a scenario where it does have to
decide?

Throw a fatal exception, appropriately, and stop operating?

I do agree with you, but I'm not certain you can avoid making a decision when
your software will inevitably be provided a scenario where it has to do
something, but no option is good.

All of that said, I would be surprised if we expect our cars to swerve and
dart around the road to avoid obstacles; I figured it was going to come down
to slamming on the brakes in a fraction of a second in most cases. As human
drivers, swerving makes things significantly more dangerous in most cases.

~~~
lobotryas
I would never want someone to explain to my family that the reason my self-
driving car plunged over a cliff is because STATISTICALLY I had a better
chance of surviving the fall than a child had a chance to survive being run
over.

In other words, are you or anyone else reading this ready to die RIGHT NOW if
it somehow saved the life of other humans? Probably not.

I applaud MB for what they are doing. If I am to buy a self-driving car then I
would only take one that protects my life and the lives of my occupants (my
children, family, friends) above the lives of all other people.

------
kbart
_" A self-driving car identifies a group of children running into the road.
There is no time to stop. To swerve around them would drive the car into a
speeding truck on one side or over a cliff on the other, bringing certain
death to anybody inside."_

Call me cynical, but such behavior seems logical. I would not drive myself
manually to a certain death to save pedestrians that violated traffic rules,
so why should autopilot do that to me? That raises an interesting possibility
of future cars marketed by their "ethics".

~~~
hoppa_liza
> pedestrians that violated traffic rules

You might be able to make this call as a human, but can we assume the car is
capable of deciding on that?

~~~
kbart
I assume autopilot would always drive according to rules (I doubt they would
get licensed otherwise), so if pedestrians pop up in front of a car, it's
likely their fault.

~~~
dbg31415
I do find it hard to imagine a situation where the autopilot has to choose
one life over another -- I've never had to choose that and I've been driving
20+ years...

Someone runs out into the street, likely the car can brake or swerve. I've
stopped for my share of kids chasing soccer balls without so much as a spilled
latte.

Deer, moose, cattle on the road maybe it's different... The car will have to
determine where to place the impact. But hopefully cars with more sensors
avoid these things better than people do. We are pretty bad at spotting
deer...

------
buro9
This is the opposite of what it should do, but who will buy a self-driving car
that may prioritise a person you don't know over yourself?

Ultimately, the software is good enough when the chance of death (to the
driver and car occupants) is significantly lower than with a human driver.

Externally, self-driving cars should always be safer to 3rd parties.

As someone who cycles, and who has over 300 cycling websites including a
50k-user London one... I will happily promote any campaign against
self-driving vehicles that externalise their danger -- to effectively ban them
from major cities until they can prove they are safer for all road users. I'm
sure many other groups will feel the same.

~~~
kahrkunne
So just because you don't drive means they should kill themselves to save you?
Ridiculous. Clearly the morally right thing to do is to prioritize the driver,
which the car exists to serve (within reason).

I sure as hell wouldn't drive a car that makes the exact opposite moral
decision from what I would make, killing me in the process.

~~~
buro9
> So just because you don't drive

I do drive too. Most cyclists are also drivers.

Feel free to purchase such a car, just don't have the expectation that crowded
urban areas will permit vehicles that externalise their danger. There are more
pedestrians and cyclists in London, with a much stronger lobby, than there are
those who drive here, and even the pro-car press will delight at the chance to
lay into self-driving cars.

------
empressplay
It will be interesting to see the first time a car manufacturer gets sued for
negligent homicide. The occupants of the car _can_ agree to a TOS that says
the car will do its best to avoid killing pedestrians -- the pedestrians of
course cannot (and why would they?) agree to said TOS. So from a liability
perspective, I can't see this policy lasting long.

~~~
vacri
Yep - I don't want anything on the road that will prefer to kill me over its
user who put it there.

Perhaps a different way to spin this to the selfish: would you prefer to have
only one vehicle on the road that would prefer killing you, or would you
prefer _every other vehicle_ on the road to prefer killing you?

~~~
JumpCrisscross
> _I don't want anything on the road that will prefer to kill me over its
> user_

Replace "user" with "self" and you have an expression against every other
human with self-preservation instincts.

~~~
vacri
Believe it or not, people generally don't like killing other people, game
theory be damned.

Witness, for example, the US army having to develop particular training to
reduce the 20%-odd soldiers who ended up just not firing their weapon at all
when in action (no data on how many fired but intentionally mis-aimed).

------
camillomiller
Of course! The occupant will usually be richer and much better off in society
than some peasant who still uses their feet to move from one place to
another.

It's not classism, it's probability!

~~~
manyxcxi
As a human being I would choose my safety over that of someone I don't know
and I would choose my family's over mine in all circumstances. If I'm ceding
control of my car to an AI wouldn't I want it to do the same?

Especially when the probability of the issue being the pedestrian's fault
continues to rise as the driving systems get better -- I wouldn't want my car to
put me in danger because someone else screwed up.

That being said, if it got to the point where it could deduce the likelihood
of injuring the pedestrian vs. the probability of risk to the occupants while
maneuvering to not injure, that would be best.

~~~
AstralStorm
The latter is why all cars should have black boxes. Including ones driven by
humans. We can evaluate fault that way.

------
ScottBurson
I think situations where one has to choose between someone outside the vehicle
and its occupants are not very common.

They do happen. In fact, I know someone who almost had to make such a choice.
It was in France (he's French). He was doing somewhere around 140km/h (call it
90mph) on a rural limited-access road, and came around a curve to find a car
stopped in the left lane of a two-lane roadway -- apparently on account of a
flat tire. One of the occupants was out of the car and walking toward the
right lane, as if about to cross to the right shoulder. The timing was such
that my friend was able to pass by in the right lane before the pedestrian got
there, but had he been two seconds later, as my friend said to me, "I would
have had to hit him".

All that said -- it was 20 years ago that I heard this, and I don't recall
hearing any similar tales since. This one obviously stuck in my head, so I
think I would recall. I've certainly never been in such a situation myself.

So the argument that self-driving cars will reduce fatalities so substantially
that we don't need to worry too much about these edge cases makes a lot of
sense to me.

~~~
ddeck
_> So the argument that self-driving cars will reduce fatalities so
substantially that we don't need to worry too much about these edge cases
makes a lot of sense to me._

Even more so considering this case likely wouldn't form part of the "edge
case" pool, since a self-driving car presumably wouldn't be driving at 140
km/h down a non-divided rural road. The speed limit on such roads in good
weather is 90 km/h.

Perhaps the limits were higher in the past, but I can't imagine they were ever
that high.

~~~
ScottBurson
Sorry, I wasn't clear. It was a divided highway. Yes, there really was someone
stopped in the left lane of a divided highway changing a flat.

------
nercht12
They really should make it the option of the passenger. It removes
manufacturer liability. You can always force the passenger to decide before
driving, just like forcing Windows upgrades.

------
dbg31415
Say whatever you want about this, but they know their audience.

~~~
justinlardinois
This. The reality is that no one will buy a self-driving car that leans
towards killing them over pedestrians.

~~~
micaksica
Especially because people, when driving, blame someone else for all of their
problems; this plays into the whole "Why do I have to suffer because that
asshole ran out in front of me in traffic?" concept most drivers have and want
reinforced by their vehicle AI.

_Never mind the fact that you were turning right on the no turn on red, and
they had the crosswalk..._

~~~
BoorishBears
The hope being that AI-driven cars won't turn right on the no turn on red, so
the remainder of cases will really come down to pedestrian "negligence".

~~~
micaksica
Yes, but my point is that the consumers vilify the pedestrian. Let's not even
get into the cyclist.

~~~
justinlardinois
Don't remind me; yesterday's discussion[0] about dooring was among the biggest
shitshows I've seen on Hacker News. There's something Godwin-esque about
online discussions concerning the relationship between cars and bicycles.

[0]
[https://news.ycombinator.com/item?id=12674533](https://news.ycombinator.com/item?id=12674533)

------
euske
This is arguably better than Microsoft/Apple/Google's strategy: send all the
diagnostic data and then crash the system.

------
xbmcuser
I feel self-driving cars will have a larger field of view, so they probably
won't get into the same situations as a human. Plus, in the future when all
vehicles are interconnected, the truck will already be aware of what the car
is going to do and will avoid it as well.

~~~
oh_sigh
Yup... and you really don't want to be in a situation where other people can
trick a self-driving car into killing the occupants (for example, stepping out
in front of it from behind an obstacle). By always prioritizing occupant
safety, those kinds of "tricks" become impossible.

------
inimino
It's odd that Mercedes would speculate on this when the level 4 and 5 systems
mentioned don't even exist yet.

The first time a self-driving Mercedes kills a pedestrian these remarks are
going to come back to bite them.

------
Mao_Zedang
In what situation does a self-driving car run into a pedestrian, other than
them jumping or running onto the road within the car's legal stopping
distance?

~~~
vkou
Swerving to avoid another car, software bug, sensor malfunction, a plastic bag
getting blown into its field of view, and being interpreted as a dangerous
obstacle...

------
ClayFerguson
This is good, because I'd hate to be a passenger when the AI notices an
opportunity to save a drunk homeless person passed out on the railroad tracks
by parking the car I'm in on the tracks to block a train. We all have to admit
that even if AI functioned as well as the human mind, a lot of this stuff will
be a tossup, like the "trolley problem" mentioned by wmf in this thread.

