
Self-Driving Mercedes Will Be Programmed to Sacrifice Pedestrians - Tomte
https://www.fastcompany.com/3064539/self-driving-mercedes-will-be-programmed-to-sacrifice-pedestrians-to-save-the-driver
======
robomartin
Programming language designers have been meeting in secret in order to make
critical decisions related to the advent of ubiquitous self driving
technologies. A decision on how to handle this is imminent. Participants
include designers of all mainstream languages: C, C++, Python, etc.

The currently winning proposal is to extend the standard "if" conditional
statement (or the closely related "switch" construct) to include another
option designed for self driving cars.

Programmers today can opt to use "if", "else if" and "else" or "case" and
"default". The proposal would be to add a "fuck!" option to these statements.

While usage isn't fully defined yet, the idea is that there are instances when
no reasonable decision can be made. In these cases "else" or "default" don't
quite capture or convey the critical nature of having to make what is,
effectively, an impossible decision. Hence the introduction of "fuck!". Here's
example pseudo-code:

        if(some_condition):
        {...}
        else if(some_other_condition):
        {...}
        else: // Unknown but not critical condition
        {...}
        fuck!:  // Critical condition for which there are no good options
        {...}

Language designers expect to issue an official announcement within the next
five years. As is to be expected, there are no good options here and they
hesitate to even suggest that "fuck!" should be integrated into these
languages but they seem to have no options left at this point.

------
gamblor956
1) This is from 2016.

2) Mercedes will face huge, possibly ruinous lawsuits from the families of
anyone killed by a self-driving Mercedes. And investigations from legislators.

In the example they give, the Mercedes would choose to kill multiple innocent
bystanders to save one occupant in the vehicle...even though it is a
mechanical failure of the vehicle that put anyone at risk in the first place.

Technologists have this absurd view of the future where they think that
"computers" or "AI" are magical fantasy words that make real-world
consequences go away. But if that horrific event were actually to come to
pass, the programmers of the vehicle would likely be tried for manslaughter,
and I'd give _at worst_ even odds on a conviction. I'd also put money down on
the NHTSA banning self-driving Mercedes (and possibly even all self-driving
vehicles) until their programming is corrected and audited. And on vigilantes
everywhere vandalizing every Mercedes they see.

------
natch
> So far, the highest-profile death in a self-driving car was when a Tesla
> crashed on May 7, 2016, while in Autopilot mode.

Ugh why does the media keep repeating incorrect information?

That was not a self driving car.

It was a car with a driver who was not paying attention.

The reasons for his not paying attention are up for debate (ranging from his
having _perceived_ the car, incorrectly, as self-driving, to simple
carelessness, to the utterly crazy assertion that Tesla or Elon intentionally
tricked him into thinking the car was self-driving...).

Regardless of where the truth about the driver’s reasons lies, the underlying fact
is that the car did not have self driving software installed on it yet.

~~~
credit_guy
I think we've had this discussion a number of times on HN ( * ). Calling a
certain driving mode "Autopilot" rather than "cruise control" sends a message.
A lot of people, if not most, perceive the message as being "self-driving car".
The fact that Elon deemed fully autonomous self driving cars just around the
corner a number of times certainly doesn't help. I personally lost track. Are
we at full self-driving capability yet? Or is it in 2020? Should be any day
now, right?

Just for entertainment, here's a headline from the Tesla website from 2016:
"All Tesla Cars Being Produced Now Have Full Self-Driving Hardware" [1]. I
know, in the post they make clear that the software is not there yet, but how
many on HN (and on the internet in general) are not a little bit guilty of
reading only the headlines?

[1] https://www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware

( * ) by "we" I don't mean the two of us, just the HN community in general.

~~~
natch
>A lot, if not most people, perceive the message as being "self-driving car"

But it’s reasonable to expect better than this from the media.

------
yazan94
This is quite a useless article IMO. No one drives in a selflessly, suicidally
altruistic way anyway. The article's author is shaming and criticizing Mercedes
for making this design decision, while ignoring all the practical aspects of
the technology. These sorts of decisions will have to be made by self-driving
cars at some point, and it's not like self-driving cars will be murderous and
seeking out people to splat. I'm sure that if a life has to be taken and a
machine is going to be judge, jury, and executioner - most people would
generally prefer that the machine spare them. Considering that this tech is on
its way in the near future regardless of whether ethical philosophers are ready
or not, and barring some legislation forcing a set of priorities for the machine
to take into account, I would prefer that the car I spent tens of thousands of
dollars on prioritize my life over someone else's. It would be a sales and PR
embarrassment/nightmare to market that the car might sacrifice its occupants
if the car deems that it's for 'the greater good'.

~~~
homonculus1
>I would prefer that the car I spent tens of thousands of dollars on
prioritize my life over someone else's.

The risks of travel should fall on the traveler; externalizing them onto
bystanders is wrong, and invoking the cost of the machine is monstrous.

~~~
Ill_ban_myself
Considering the cost of the machine is inevitable, but that cost calculus will
ultimately include a lot more variables than the MSRP.

Ultimately these machines need to be insured. Because these decisions can, and
will need to, be pre-programmed, the market for insurance will end up dictating
the programming: whatever minimizes cost to the insurer gets the final say.

What we have in the interim is just a chicken-and-egg problem while insurers
gather enough data to determine the real "costs", including lawsuits.

Google SawStop.
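The "least cost to the insurer" idea can be made concrete with a toy expected-cost calculation. This is purely my own illustration; the maneuvers, probabilities, and payout figures are invented numbers, not anything an insurer actually publishes:

```python
# Toy expected-liability minimizer. All names and figures are hypothetical.

# maneuver -> (probability of a harmful outcome, estimated payout if it occurs)
OUTCOMES = {
    "brake_straight": (0.30, 500_000),    # may strike a pedestrian
    "swerve_left":    (0.10, 2_000_000),  # may hit an oncoming car
    "swerve_right":   (0.05, 50_000),     # may clip a parked car
}

def cheapest_maneuver(outcomes):
    """Pick the maneuver minimizing expected cost: probability * payout."""
    return min(outcomes, key=lambda m: outcomes[m][0] * outcomes[m][1])

print(cheapest_maneuver(OUTCOMES))  # -> swerve_right
```

Note that the maneuver chosen this way need not be the one that minimizes harm to people; it minimizes payouts, which is exactly the worry in the comment above.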

------
cr0sh
I know this was from 2016, but I'd think that a self-driving vehicle would be
programmed to recognize, at a minimum, all humans in the direction of travel,
and would command the vehicle to avoid such obstacles - as much as possible
and within the limits of physics and its mechanical systems.

In an accident, it should continue to do this to the best of its ability.
Let's say something weird like a ball joint fails while driving at speed on
the freeway. Now, for those of you that don't know, in a regular car this is a
bad situation - essentially, your wheel is about to fall off and you have
limited to no steering control.

But if the car hit the brakes, turned the wheel (however ineffective that may
be), and/or did other inputs to try to avoid a pedestrian on the sidewalk, or
another car, or whatever - in an attempt to come to as safe a stop as
possible...

...but it still injured or killed somebody - then I'd say it did the best it
could, same as a good-to-great driver would. Assuming it logged everything
about everything; what went wrong, when it sensed it, what it did in response,
what it saw or sensed, etc - then I'd say it should be exonerated, or at worst
"charged" with involuntary manslaughter or something of that nature.

If it was programmed (whatever that means, ultimately - I'm just an amateur
self-driving vehicle enthusiast; I've taken a few MOOC courses on self-driving
vehicle technology, including the Udacity nanodegree for Self-driving Car
Engineers - so I know some things about how this stuff really works from a
code and mechanical and sensor perspective, but I am no expert) to do the best
it could to avoid the situation, but it still failed - and it could be shown
it tried - then can we hold the vehicle, the owner, or the manufacturer
responsible?

I suppose if maybe the driver (owner?) knew they had a failing ball joint (or
whatever) that led to the accident (how they would know this? especially on a
fairly new vehicle) - maybe fault or blame could be laid there. Or if road
conditions caused the problem (chuckhole causing a blowout at speed?) - maybe
the city/state?
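The "do your best and log everything" behavior described above can be sketched in a few lines. Everything here (function names, the detection format, the command dict) is my own hypothetical illustration, not any real autonomy stack:

```python
# Best-effort response to a mechanical failure: brake hard, steer away from
# the nearest detected person, and record every decision for a later audit.
import time

def respond_to_failure(fault, detections, log):
    """Brake fully, steer away from the nearest person, log each step."""
    log.append({"t": time.time(), "event": "fault_detected", "fault": fault})
    commands = {"brake": 1.0}  # full braking, unconditionally
    people = [d for d in detections if d["kind"] == "person"]
    if people:
        nearest = min(people, key=lambda d: d["distance_m"])
        # Steer opposite the person's bearing, clamped to actuator limits.
        commands["steer"] = max(-1.0, min(1.0, -nearest["bearing"]))
        log.append({"t": time.time(), "event": "avoiding", "target": nearest})
    log.append({"t": time.time(), "event": "commands", "commands": commands})
    return commands

log = []
cmds = respond_to_failure(
    "ball_joint_failure",
    [{"kind": "person", "bearing": 0.4, "distance_m": 12.0}],
    log,
)
print(cmds)  # brakes fully and steers away, with a three-entry audit trail
```

The audit log is the interesting part for the liability question: it is the evidence that the vehicle "tried", in the sense the comment describes.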

~~~
blackflame
Yea you would think the best decision is the one that leads to the least
damage to the car and by extension the least number of objects hit. Even if
the decision were people vs concrete pillar, it's perfectly reasonable that a
human driver would also choose to save their own life at the cost of others.

------
jnurmine
I find these kinds of discussions about AI cars odd...

Do AI cars really have to choose who to potentially wound or kill in a
collision situation? Should the AI analyze absolutely everything before making
any control choices whatsoever? Why? What if the time spent on this analysis
will lead to more casualties?

Humans act in a random way when faced with a vehicle collision situation. Some
freeze and fail to act, some divert to either left/right, some hit the brakes
too softly, some hit the brakes hard, some accelerate by mistake (!) and so
on.

Fully autonomous computers driving cars, if that ever happens outside of
more or less ideal road and weather situations [x], will be like humans,
except with more deterministic (and limited) choices of action. I have a hard
time believing there will be rationalizing over the perceived value of
potential victims or anything of that kind.

[x] I mean conditions of high winds with watery sleet and darkness sprinkled
with water glares, near zero C, very bad visibility, in a situation when my car
might say "sensors blocked".

Edit: footnote. I still don't know how to do footnotes properly.

------
tynpeddler
At some point we're going to have an entirely new field of study specifically
for this situation. I propose the name "Quantitative Philosophy". The goal is
to merge AI research, philosophy and law to determine the "correct" choice in
complicated situations when the agent of action has time to consider all
possibilities.

Without buy-in from academia, law, legislatures and society as a whole, any
action Mercedes takes in this situation will get them in trouble. In addition,
we could see either premature or delayed adoption of AI.

~~~
eindiran
Ethics already has a lot of very "quantitative" problems it deals with: for
example, see all the variants of the trolley problem where there is a concrete
number of individuals to save/kill on each track [0]. Or Hardin's lifeboat
[1].

Further, CS is not the first quantitative discipline that needs to deal with
ethical problems attached to hard numbers of human lives. For example, medical
ethics deals with this problem in, e.g., triage during emergencies, and
engineering ethics has had to deal with it for as long as people have been
building things. So there is a lot of prior art to build on.

[0] https://en.wikipedia.org/wiki/Trolley_problem

[1] http://www.garretthardinsociety.org/articles/art_lifeboat_ethics_case_against_helping_poor.html
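The trolley-style variants with concrete headcounts reduce to a one-line computation, which is part of what makes them feel "quantitative". A minimal sketch (the function name and framing are mine):

```python
# Strict-utilitarian toy: given headcounts per track, divert to the track
# with the fewest people. Ties keep the lowest-numbered track.

def utilitarian_switch(people_on_track):
    """Return the index of the track that minimizes casualties."""
    return min(range(len(people_on_track)), key=lambda i: people_on_track[i])

print(utilitarian_switch([5, 1]))  # -> 1 (divert onto the one-person track)
```

Of course, the hard part the literature wrestles with is everything this sketch assumes away: whether lives are commensurable, who counts, and who is responsible for throwing the switch.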

------
BrainInAJar
Car manufacturers have absolutely no incentive to care about anyone outside
the vehicle until they start being held liable for collisions, and that's why
any talk about self-driving cars being safer is naive at best or disingenuous
at worst.

~~~
uhtred
Wouldn't a human driver also choose to hit the pedestrian if it was either
that or smash head first into another car?

~~~
dkersten
Depends on the driver.

------
jdkee
Those ML and AI engineers better rethink that logic, as it is equivalent to
the reckless taking of human lives, i.e. criminally negligent homicide.

------
musicale
But only if they are perceived as a potential threat to the car's shiny paint
finish.

------
076ae80a-3c97-4
This article was published in 2016.

