
Self-Driving Mercedes Will Sacrifice Pedestrians to Save the Driver - braythwayt
https://www.fastcompany.com/3064539/self-driving-mercedes-will-be-programmed-to-sacrifice-pedestrians-to-save-the-driver
======
jdashg
The trolley problem is so overblown. Just brake. Just braking is a simple and
really effective solution to the vast, vast majority of problems. Instead of
trying to program complex and fraught harm minimization systems, focus on
braking distance, collision safety, and enforcing a safety envelope for self-
driving: Never go faster than your brakes, even if humans often do.
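That safety envelope can be made concrete with some back-of-the-envelope kinematics. The sketch below is illustrative only: `max_safe_speed`, the 7 m/s² deceleration, and the 0.2 s system reaction time are all assumed figures, not anything a real vendor publishes. It solves reaction distance plus braking distance equals clear sensor range for speed.

```python
import math

def max_safe_speed(sensor_range_m: float,
                   decel_mps2: float = 7.0,   # assumed full-braking deceleration, dry asphalt
                   reaction_s: float = 0.2) -> float:
    """Largest speed (m/s) at which the car can come to a full stop
    within its clear sensor range:

        v * t_r + v^2 / (2a) <= d

    Solving v^2/(2a) + v*t_r - d = 0 for the positive root gives
    v = a * (-t_r + sqrt(t_r^2 + 2d/a)).
    """
    a, t, d = decel_mps2, reaction_s, sensor_range_m
    return a * (-t + math.sqrt(t * t + 2.0 * d / a))

# With 60 m of clear sensor range, the envelope is about 27.6 m/s (~99 km/h).
print(round(max_safe_speed(60.0), 1))  # → 27.6
```

The point of the exercise: "never go faster than your brakes" is a computable constraint, not a vague aspiration.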

~~~
braythwayt
The trolley problem as usually posed is contrived, but the "swerve instead of
braking" problem happens more frequently than many people think.

Consider driving along an undivided highway. An approaching car veers into
your lane. You definitely want to swerve, rather than brake and risk being hit
head-on.

Of course, we can contrive a situation on top of that, like a group ride of
cyclists to your right, so swerving will take them out...

But that is beside the point. I am just saying that there are cases where
braking is not the safest thing for the occupants of the car.

~~~
JoeAltmaier
But that is exactly the trolley problem! Brake and 1 occupant is at risk.
Swerve and the cyclists at the side of the road are at risk.

So the decision to swerve is the trolley problem either way. Either you know
the cyclists are there and swerve anyway because the occupant is valued more,
or the car doesn't know whether cyclists are there, swerves, and those
possible/virtual cyclists are sacrificed for the occupant of the vehicle.

~~~
braythwayt
I was comparing swerve instead of braking to the trolley problem as usually
contrived, i.e. you can direct the trolley to one of two tracks.

The person I was replying to said this is nonsense, as in a car you can just
brake and nobody gets killed. I explained that there is a real-world situation
where braking is not an escape, it is—as you say—equivalent to directing the
trolley to the line where there is one victim.

We agree that the trolley problem in the abstract is about making a choice as
to who is harmed. I was just pointing out that car brakes do not invalidate
the concern.

~~~
JoeAltmaier
Of course, sorry.

I've predicted for years that automated driving will have to solve the trolley
car problem from day one. I can only see one way out: we make a rule, e.g.
that automated cars cannot leave the roadway.

~~~
gricardo99
I think you’re specifying the solution you prefer to the trolley car problem,
rather than finding a way out of the problem. I think that’s the point.
Decisions need to be made about how to deal with these ethical dilemmas. It
shouldn’t be left up to individual companies or software implementations.

~~~
JoeAltmaier
Nobody's solved the trolley car problem to everyone's satisfaction in a
century. Not gonna happen tomorrow.

Agreed it shouldn't be left to the companies - that's why the 'make a rule'
idea. Otherwise the companies end up in an escalating war of ad-hoc behaviors,
each preserving its own customers at the expense of everyone else.

------
russellbeattie
I wrote this as a comment a few months ago
([https://news.ycombinator.com/item?id=21250661](https://news.ycombinator.com/item?id=21250661)),
and was downvoted, but I'll post it again because I feel it's right:

Artificial Intelligence is a tool. Just like anyone who uses a hammer, backhoe
or dump truck is responsible for what happens when they are using it, the same
is true for AI.

As a tool, all AI systems should be designed to always do the most harm to the
user of the AI first. It should be embedded into every autopilot-like system,
and users should be aware of that choice. It's the only moral and ethically
correct solution.

Let's say your AI-controlled car is driving at speed and, as it rounds a
corner, a little girl suddenly appears in the road in front of it. It can
either swerve into a wall or run down the girl. A human might not be able to
make the decision in time, but an AI system would. It needs to be programmed
to always hit the wall.

Though this might seem extreme, the opposite is completely immoral. If a human
were driving, one could assume it was bad luck to be put in that situation:
innocent until proven guilty, so to speak. But for an AI system, we have to
assume the opposite: the AI should always be presumed to be faulty. Therefore
the user of that AI is culpable for putting it in that situation.

And just like any other tool, the manufacturer of that tool is legally
responsible for its quality and reliability.

The opposite of the above means we'll all be riding around in autonomous
tanks, aggressively maneuvering around each other at higher and higher speeds,
pedestrians and others be damned.

------
Schwolop
This guy needs some PR advice, but the approach is sound. To be ethically
clear, the approach must be hard-coded: the system doesn't evaluate
alternatives and choose a control output; it is forced to pick the one control
output that was predetermined to be safe enough. Reducing kinetic energy as
fast as possible by braking meets that bar. Attempting to swerve does not,
because you must _decide_ which way to swerve, and that decision is fraught
with ethical issues.
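The architectural point above can be sketched in a few lines. This is a toy illustration, not a real vehicle controller: `Maneuver`, `Perception`, and `control` are hypothetical names, and the policy is deliberately a lookup rather than an optimization over outcomes.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Maneuver(Enum):
    CONTINUE = auto()
    FULL_BRAKE = auto()   # the single predetermined emergency output

@dataclass
class Perception:
    obstacle_ahead: bool  # anything in-lane within stopping distance

def control(p: Perception) -> Maneuver:
    """Hard-coded policy: the controller never ranks outcomes or picks a
    swerve direction. The only emergency output is the predetermined one,
    full braking in-lane, which sheds kinetic energy as fast as possible."""
    return Maneuver.FULL_BRAKE if p.obstacle_ahead else Maneuver.CONTINUE
```

The design choice is that no "which way to swerve" decision exists anywhere in the code path, so there is nothing for an ethics debate to attach to.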

------
maerF0x0
People may be appalled by the prior decision to save the driver over others. I
think the onus is, at least in part, on others to prove that a human driver
wouldn't do the same thing. If something is coming at me, I'm likely to swerve
(by quick reaction), even if it means hitting someone else.

~~~
jdashg
We have laws that say it's not ok to hit pedestrians to avoid obstacles, which
is a strong prior that society doesn't want, and enforces against, a greedy
self-trolley solution. Expect legislation prohibiting this behavior.

~~~
Arnt
At the very least it'll tilt the balance further against car drivers, in
favour of people who don't want traffic where they live.

[https://www.bussgeldkatalog-mpu.de/bussgeld-wAssets/img/verk...](https://www.bussgeldkatalog-mpu.de/bussgeld-wAssets/img/verkehrszeichen/durchfahrt-verboten/weblication/wThumbnails/verkehrszeichen-250-durchfahrt-verboten-3570d73ddff018agf3d7b07fe1a9d902.jpg)

------
neilalexander
Assuming we are talking about vehicles that still have manual controls as a
last resort, is saving the driver not an obvious course of action? It’s the
driver who needs to be given as much time and every possible opportunity to
take over and react, to attempt to correct the situation or to move the
vehicle (and its occupants) out of further harm's way after an accident.

------
m463
two cars:

\- brand A saves who it decides.

\- brand B saves the occupant.

which would you buy?

Also, I can't help but think of the brand A airplane whose automation knew
better than the pilot and crashed [1], and the recent brand B airplane whose
automation thought it knew better too and crashed [2].

1:
[https://en.wikipedia.org/wiki/Air_France_Flight_296](https://en.wikipedia.org/wiki/Air_France_Flight_296)

2:
[https://en.wikipedia.org/wiki/Lion_Air_Flight_610](https://en.wikipedia.org/wiki/Lion_Air_Flight_610)

------
Jeff_Brown
> "... Save the one in the car,” von Hugo told Car and Driver in an interview.
> “If all you know for sure is that one death can be prevented, then that’s
> your first priority.”

How is this quote relevant? The point of the trolley problem is that you face
a clear choice between who dies. Is Mercedes willfully programming its cars to
understand the mortality of the car's occupants and nobody else's?

~~~
Gibbon1
Mercedes has a duty towards the people who buy their cars, not random
jaywalkers.

~~~
Jeff_Brown
If profit motive were the only duty anybody felt, civilization would quickly
collapse.

That said, I don't expect business to do anything that's not in its own
interest. This sort of problem, if in fact it's a problem[1], would seem to
require a legislative solution.

[1]
[https://news.ycombinator.com/item?id=21818086](https://news.ycombinator.com/item?id=21818086)

------
thatfrenchguy
Waiting for the lawsuit?

