
The complicated ethics of Tesla’s Autopilot - NN88
https://www.bloomberg.com/news/features/2019-10-09/tesla-s-autopilot-could-save-the-lives-of-millions-but-it-will-kill-some-people-first
======
tannerc
In most complex systems, the reality is that people will encounter harm. The
commercial airline industry is a good example of progress through crashing.

In 2017 there were zero reported accidental deaths on commercial flights [1].
Compare that to 1972, one of the most dangerous years to fly, when more than
2,300 accidental deaths occurred.

How did we go from thousands of deaths to zero? Primarily through crashing.
Then working to evaluate what went wrong and improve the design, processes,
and programs around commercial aviation.

As one pilot explains [2]:

"One of the ways we’ve become so safe was to realize that our efforts were
never going to be good enough."

So yes, I think technologies like autopilot will save lives. But they won't be
able to do it magically; unfortunately, some harm will occur because we, in all
our human wisdom, can never accurately predict all possibilities. It's the
same problem most self-driving companies are facing today, and the reason
promises of self-driving being here today have fallen short. The world is far,
far more complex than we realize... especially when we put a computer out into
it and tell it to "go forth."

[1] [https://www.reuters.com/article/us-aviation-safety/2017-safest-year-on-record-for-commercial-passenger-air-travel-groups-idUSKBN1EQ17L](https://www.reuters.com/article/us-aviation-safety/2017-safest-year-on-record-for-commercial-passenger-air-travel-groups-idUSKBN1EQ17L)

[2] [https://www.usatoday.com/story/travel/columnist/cox/2018/01/07/ask-captain-why-aviation-so-safe-2017/1005183001/](https://www.usatoday.com/story/travel/columnist/cox/2018/01/07/ask-captain-why-aviation-so-safe-2017/1005183001/)

~~~
Someone1234
If Tesla's AutoPilot was regulated like the airline industry with post-crash
reviews, mandatory improvements, and open sharing of data we likely would see
that. At the current time Tesla's system is proprietary/secret and their
oversight is lax.

If we look at the history of airline safety, going back further, we can see
that it took many years and numerous crashes to get to the point where it
became well regulated and crashes were independently investigated. Before
government stepped in, private companies weren't doing a great job and their
biases (inc. bad headlines/liability/cost) got in the way of improving
passenger safety.

So let's absolutely keep with the airline analogy and regulate Tesla's
AutoPilot like the airline industry.

~~~
mywacaday
I see a cost issue: it's easy to justify a full investigation every time a
commercial plane crashes, but how do you do a full investigation every time
somebody crashes a car or is crashed into? How do you get the improvement loop
that the airline industry had?

~~~
Someone1234
I thought the topic was AutoPilot crashes, not all car crashes. Is AutoPilot
crashing enough to be cost prohibitive?

~~~
netsharc
At least plane crashes are usually mechanical and not software, or it's
straightforward software (hello MCAS); investigating a machine learning
system's behavior ("what input led it to produce this output?") sounds like a
more expensive problem.

~~~
mywacaday
Until you look, you don't know if it was hardware, software, road conditions,
or the user; unless you look, it's just a pile of twisted metal.

------
tcskeptic
My general expectation is that Autonomous vehicles will at some point become
safer than human drivers. However, even when that happens, when they make
errors they will be errors that seem stupid and incomprehensible to humans.
This will be because the failure mode of an ML driving model will be utterly
different to the failure mode of human beings. This will lead to a lot of fear
and uncertainty; humans want to comprehend what might kill them, and
autonomous driving, while safer, will kill them in bizarre ways.

~~~
wruza
Communication plays a substantial role in traffic situations. If self-driving
cars could signal whether they "see" you or not, it would be safer for
pedestrians to cooperate. E.g. when you step onto a crosswalk, the car could
shine green if it is braking for you, or red/yellow if it is accelerating
without having seen any obstacle. It would also help if a green laser
projected the car's suggested path, with a bold stop line, right onto the road
(or a fading red dot pattern if it doesn't).

~~~
leggomylibro
Pedestrians have the right of way, though; they don't need to cooperate. A
self-driving car shouldn't be allowed to run someone down just because they
were jaywalking and didn't correlate the red light speeding towards them with
an autonomous death machine. We generally don't let human drivers get away
with mowing down pedestrians or hit-and-runs if they get caught, so why would
it be an acceptable failure mode for a machine? Who is responsible for that
failure and that death, in the eyes of the court? If the answer is 'nobody',
then self-driving cars will have a lot of difficulty finding acceptance.

Also, many states have regulations surrounding which colors of lights can go
where on a car for these sorts of reasons - unambiguous and predictable
communication is key to driving, and putting what looks like a brake light on
the front of your car is confusing.

~~~
Sohcahtoa82
> Pedestrians have the right of way, though; they don't need to cooperate.

I'd change "don't need to" to "shouldn't have to".

There are a lot of dead pedestrians who didn't look before crossing a street
because they knew they had the right of way.

It's just another form of "trust but verify". I know when I'm crossing a busy
intersection, even if the cross-traffic has a red light and I have a walk
sign, I'm closely watching the on-coming cars to make sure none of them are
running the red light. I'm extra vigilant for the people making a right turn
on red.

------
claudeganon
I’m very skeptical of this authoritarian market logic whereby we are told we
must sacrifice others' lives for the sake of “progress,” which is almost
exclusively tied up with one person's or business's individual profit.

While I don’t think autonomous cars are a particularly good investment for the
future of transportation, does anyone doubt that with massive, publicly funded
investment we could achieve them? And what’s more, on terms where whatever
“costs” were associated with them were subject to democratic checks and
debate?

~~~
Bud
1. We have democratic checks and debate now, regarding this issue.

2. I see no evidence that "massive, publicly-funded investment" would achieve
this technological goal faster or better than via capitalistic means. Do you
have any?

3. Lives will be lost regardless of how the tech is developed. Surely this is
obvious. So what are you even talking about?

~~~
sverige
Re: 1. above, who do we sue when the autopilot kills someone? The owner of the
vehicle? The manufacturer? The driver behind the wheel (if there is one and
it's not the same as the owner)?

~~~
fizx
Ideally, every self-driving car would have secondary insurance baked into its
price. If someone dies, it's an insurance payout of a standardized amount.

~~~
sverige
I don't think the plaintiff's bar is going to allow standardized payouts on
deaths or injuries. That would ruin their business model, and they have a lot
of influence. The question will be whether the inevitable losses due to jury
awards can be sustained by any of the manufacturers.

------
mantap
What kinds of accidents is autopilot going to prevent that automated accident
avoidance cannot? Why does it have to drive the car to achieve the safety
benefits?

If you watch videos of autopilot preventing accidents on YouTube, it involves
two stages: first the car detects a situation is likely to lead to a
collision, then it brakes and steers the car to avoid it. There is no reason
for autopilot to be controlling the vehicle before anticipating a collision.

By contrast the deaths involving autopilot have all revolved around autopilot
failing to perceive some object, or swerving the car into it. I don't know of
any deaths that were caused by Tesla's accident avoidance feature.

~~~
notus
If you have all the cars driving themselves they can function more like a
swarm/grid of vehicles managed externally vs individual agents working against
each other. This means you can have cars going like 80-100 mph but driving
bumper to bumper.

~~~
beojan
You mean like a train? Why not have trains instead?

~~~
smachiz
Because trains can't take exits... unless the whole train wants to.

------
zaroth
NHTSA found that the Economic and Societal impact of Motor Vehicle Crashes,
when taking into account harm from pain & suffering and lost wages, is about
$877 billion per year. [1]

That means we have a tremendous incentive to make investments which make the
roads safer, and lower the crash rate. The single biggest technological
advancement to make this possible ( _without eliminating cars_ ) will be
autonomous driving.

Today the lowest hanging fruit for self-driving is algorithmic work which
allows the car to navigate safely based on its own sensor suite, and _mostly_
by ignoring map data which could be unreliable. Reliance on highly accurate
maps and roadway infrastructure is not really a pressing concern, because the
cost/benefit of investing in improving the internal algorithm is much higher
than the cost/benefit of adding roadway infrastructure that such a small
percentage of vehicles could leverage.

I believe there will be a cross-over point, where safety for a large number of
vehicles can be significantly improved by making major changes to road
infrastructure to support self-driving. When that day comes, I hope
that the USG will be ready to invest on the order of $1 trillion to deploy
that infrastructure and ultimately, mandate all new vehicles on public roads
are self-driving.

I've said it in many comments before - it is a moral imperative to get
computers and not humans "behind the wheel". We can afford to spend trillions
of dollars to get there, but that's not to say we get there faster by spending
trillions of dollars _today_.

[1] - [https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/2015sae-blincoe-costs_of_crashes2010.pdf](https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/2015sae-blincoe-costs_of_crashes2010.pdf)

------
zarro
The title makes it appear as if "sacrifices" must be made, which is either
disingenuous click-bait or misguided, as it's categorically untrue.

The whole process of implementing autopilot is based on the logic that its
rollout is scaled as a function of the number of lives it SAVES through its
utility. Meaning that as the percent automation increases, you should be able
to see a net DECREASE in casualties in automated vehicles compared to
non-automated vehicles. If we are not seeing this, then the rate of
implementation may be too high, and the software needs to be corrected until
it is.
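That scaling rule can be sketched as a simple gate; a toy illustration only, with a made-up function name and made-up numbers (nothing here reflects any manufacturer's actual process):

```python
def may_widen_rollout(auto_casualties, auto_miles,
                      human_casualties, human_miles):
    """Allow the automated share to grow only while the automated
    fleet's casualty rate per mile stays below the human baseline.
    All names and figures are hypothetical."""
    return (auto_casualties / auto_miles) < (human_casualties / human_miles)

# Toy aggregates: 2 casualties over 500M automated miles vs.
# 360 casualties over 50,000M human-driven miles.
print(may_widen_rollout(2, 500e6, 360, 50_000e6))  # True
```

If the automated fleet's rate ever crosses the human baseline, the gate returns False and, per the comment's logic, the rollout should pause until the software is corrected.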

Nothing in life is perfect and there will always be accidents; the only
question is whether we can create tools that decrease the chance of them
happening.

------
simion314
The problem is the "it could": there is no proof that Tesla, after testing on
public roads, won't fail or go bankrupt.

I am not aware of any proof that neural networks combined with current tech
are enough.

I know there are some misleading statistics around, but those are biased
rather than correct, and not proof.

~~~
bob_theslob646
>The problem is the "it could": there is no proof that Tesla, after testing on
public roads, won't fail or go bankrupt.

Tesla will just raise more capital. It's unlikely that they will go bankrupt
anytime soon.

Fortunately, they are nowhere near the misfortune of Faraday Future (which
just announced bankruptcy today).

~~~
simion314
Assuming Tesla does excellently, it could be Uber, Waymo, or company X in
country Y, where some politicians will sign a piece of paper letting them kill
people on the streets, and later X fails for some bizarre reason, or maybe
they wanted to use n cameras but had no idea you needed double that, so all
that testing and all the people killed were for nothing.

Then you have 12 companies testing with real people and not sharing data,
killing about 12 times more people than needed.

------
krm01
People were having the same discussions about airbags. They ended up saving
quite a number of lives. The media should do more to contribute to a better
world vs. writing clickbaity articles that make ad revenue. It’s astonishing
that something like drunk driving, for example, is portrayed as less scary
than autonomous vehicles.

------
ryanmarsh
I’ve posted this several times on HN but I’ll post it again.

I once had the opportunity to interview a renowned cancer doctor and
researcher at MD Anderson hospital. He told me that we would never again see
the kinds of advances we saw in the early days (70’s and 80’s) of cancer
research because those days were the Wild West. He basically said they did
many reckless things, but that’s how they stumbled upon treatments such as the
protocol for using steroids with chemo for acute lymphoblastic leukemia.

Ok, so high-risk experimentation led to exaptation and innovation. I'm not
sure I want to see that happen with self-driving cars, but the other part of
me knows that’s how complex systems “evolve”.

------
6gvONxR4sf7o
Think how many lives we could save by loosening regulations on medical
experimentation. If we killed only a few people along the way, we could save
billions.

Good thing we already have decades (even centuries) of research ethics we’ve
developed to deal with situations like this. Too bad Tesla is ignoring them.
The proposal in the article to treat these like (potential) medical
breakthroughs is spot on.

I think we all agree that autonomous cars will eventually save lives, but that
was true 50 years ago too. The question is whether _these_ autonomous cars
will save lives. We’ve developed the ethics. We’ve developed the standards of
evidence. It seems like a no-brainer to require them here.

------
sorokod
Wouldn't public transport "save millions" much more efficiently?

~~~
twic
Yes, but we haven't worked out how to use public transport to divert billions
of dollars of investment into paying machine learning specialists
telephone-number salaries, so it's not actually plausible.

~~~
sorokod
Sarcasm aside, the discussion seems to be driven by technology fetish and a
promise to buy / sell more stuff.

~~~
twic
Looks like HN has a new strapline!

------
duaoebg
Like many people I accept that there are unavoidable deaths in the name of
progress.

What I don't like is that I cannot opt out of this experiment. I believe many
of these deaths are needless and due to cost savings and sloppiness.

Tesla to me seems less like a center of excellence and more like a group of
people who can put up with Elon. Presumably made easier by telling him what he
wants to hear.

While the average driver may be bad, the average hour driven is by good
drivers.

Plus I don't like the idea of a Tesla crashing into me. Seems like something
that could be weaponized.

------
shrubble
We could fix autopilot problems across the entire USA in approximately one
year.

However, it would not have a winner-take-all, Wall Street pump-and-dump
frothiness to it.

You put rfid tags in the asphalt, on the curbs and some kind of beacon on the
telephone poles, or a 'wire in the road' (like a smarter version of the
working system that GM built in the 70s) and then, because it is publicly
owned infrastructure, everyone can use it.

That goes against the current hypocritical faux-libertarianism espoused by
some of our tech elites, but it would in fact work.

I said a year, because that is how long it would take to install it.

Edit to add: for fun, you can go on Mouser or Digikey and determine how to put
the system together with off the shelf parts. Just add weather proofing...

~~~
rasz
The US is known for its world-class infrastructure and its regular maintenance.
[https://www.businessinsider.com/asce-gives-us-infrastructure-a-d-2017-3?IR=T](https://www.businessinsider.com/asce-gives-us-infrastructure-a-d-2017-3?IR=T)

------
andrewla
Systems created by humans adapt to humans, especially where they carry
significant downside risks. The risk of death or injury is high enough in
driving motor vehicles that we've taken a ton of precautions (licensing
drivers, criminalizing reckless behavior, road signs, stop lights,
intersections design, freeway design) to try to minimize those risks, and the
designs have adapted to how humans drive.

Because there is skin in the game, people drive carefully -- drivers who are
consistently reckless are removed from the system. Pedestrians and other non-
drivers have adapted their behavior as well to reduce the risks.

Autonomous drivers have the problem that they do not carry skin in the game --
they are not removed from the system for bad behavior. In the short term that
just gives us an ethical quandary about who is responsible for deaths caused
by autonomous systems.

In the long term it presents much higher risks as non-autonomous-drivers (both
drivers who are not autonomous and humans who are not driving) will adapt
their behavior to their mental model of the behavior of autonomous vehicles.
Pedestrians will adapt their behavior correspondingly, creating even more
dangerous scenarios because the interaction with sufficiently complex system
is inherently unpredictable. We rely on the fact that we can build an
approximate mental model of human behavior as it pertains to other drivers to
predict what their behavior will be, and we constrain that behavior via rules
and legal deterrents so that they are more predictable.

When we are 50% autonomous, what happens to someone who deliberately tries to
cross a busy highway? Other systems, like trains, we have warnings for --
"implacable vehicle coming that will destroy anything left in its path", but
for autonomous vehicles? The priority on keeping humans alive will cause
situations like this to require building stricter behavioral systems to try to
limit people's behavior, but the complexity of the underlying behavior is real
-- that is, most of the time, crossing an autonomous freeway is actually a
safe operation because of the way that the cars are designed -- so there's no
feedback that will cause pedestrians to modulate their behavior. Similarly,
the driving systems themselves are designed with some elements of human
behavior implicitly included -- as these behaviors drift due to the coupling
of complex systems, it's not clear how we can directly adapt the behavior of
the autonomous drivers.

------
russellbeattie
Caution: Extreme opinion below.

Artificial Intelligence is a tool. Just like anyone who uses a hammer, backhoe
or dump truck is responsible for what happens when they are using it, the same
is true for AI.

As a tool, all AI systems should be designed to _always do the most harm to
the user of the AI first_. It should be embedded into every autopilot-like
system, and users should be aware of that choice. It's the only moral and
ethically correct solution.

Let's say your AI-controlled car is driving at speed and turning a corner when
a little girl suddenly appears in the road in front of you. It can either
swerve into a wall or run down the girl. A human might not be able to make the
decision in time, but an AI system would. _It needs to be programmed to always
hit the wall_.

Though this might seem extreme, the opposite is completely immoral. If a human
were driving, one could assume it was bad luck that put them in that
situation: aka innocent until proven guilty. But for an AI system, we _have_
to assume the
opposite: The AI should always be presumed to be faulty. Therefore the user of
that AI is culpable of putting it in that situation.

And just like any other tool, the manufacturer of that tool is legally
responsible for its quality and reliability.

The opposite of the above means we'll all be riding around in autonomous
tanks, aggressively maneuvering around each other at higher and higher speeds,
pedestrians and others be damned.
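The rule proposed above amounts to a lexicographic preference in the planner's action selection: minimize harm to others first, and only then consider harm to the user. A toy sketch, entirely hypothetical (the function, action names, and harm scores are illustrative, not any real autopilot API):

```python
def choose_action(actions):
    """Pick an action under a 'user bears the harm first' policy:
    minimize harm to others, breaking ties by harm to the user.
    Each action is (name, harm_to_user, harm_to_others), where the
    harm values are abstract scores. Purely illustrative."""
    return min(actions, key=lambda a: (a[2], a[1]))

# The wall-vs-girl scenario from the comment, with made-up scores:
actions = [
    ("hit the wall", 9, 0),       # serious harm to the user, none to the girl
    ("run down the girl", 0, 9),  # no harm to the user, fatal to the pedestrian
]
print(choose_action(actions)[0])  # hit the wall
```

Sorting on the tuple `(harm_to_others, harm_to_user)` is what makes the policy non-negotiable: no amount of harm to the user can outweigh any harm to a bystander.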

------
jwr
What bothers me about autonomous-driving software is that while you might
argue that drivers are free to take the risk, I as a pedestrian (or another
driver) did not sign up for the risk.

There is an entirely new set of failure modes that we are not familiar with,
and particularly as a pedestrian, it worries me greatly.

------
loceng
It reminds me of how the violence and harm that exist in a democracy vs. a
tyranny aren't evenly distributed: there may be very little violence (due to
control mechanisms) in a tyrant-controlled state - at least until you're their
target, and then slaughter and mass murder can occur.

------
Beltiras
If anyone else balked at the pilot story, here it is (and it's a bit juicy):
[https://en.wikipedia.org/wiki/Northwest_Airlines_Flight_188](https://en.wikipedia.org/wiki/Northwest_Airlines_Flight_188)

------
Piskvorrr
The ending is telling: [I trust the AV, but not enough to risk hitting a
cyclist right during the interview. More safety...at the expense of people
sans Teslas] (paraphrasing mine)

------
mindfulplay
All of these efforts will eventually result in an amazing world where there
are a lot fewer or zero crashes. Perhaps even change how cities or homes are
built/organized. No doubt.

But the really troubling thing is the attitude and approach: this is not an
"experimental software rollout" that somehow Tesla seems to advocate. You
cannot use live human beings in real streets with very little safety net to
"roll out" your self driving torpedoes. That just doesn't make sense and seems
very irresponsible.

I would rather not have silicon valley mentality when it comes to cars or
healthcare or even banking. Real people's lives are at stake.

------
neonate
[http://archive.is/N3bF3](http://archive.is/N3bF3)

------
Bud
Anyone have a link to get past the paywall for this article?

~~~
neonate
[http://archive.is/N3bF3](http://archive.is/N3bF3)

------
omarforgotpwd
Cool! That's me.

------
omarforgotpwd
Hey, cool! That's me in the story

------
magwa101
I honestly hate this "false baseline" story line that applies to almost all
anti-AI/automation comments. A person who is focused may be better than a
self-driving vehicle, but even that is debatable. Regardless, real-life
self-driving cars are already much safer than real-life human drivers. Think
about pedestrian and bike rider safety too!!

~~~
mondoshawan
Bunk. Autopilot makes very strange decisions during normal operation --
decisions that normal human drivers would NOT make, such as merging into an
entrance lane on the freeway. This in and of itself is enough to confuse and
befuddle other drivers on the road, which makes it more surprising and
dangerous than a human driver. Worse, it will actively fight the driver at the
helm to make these decisions. Source: test drove a model 3 and it did just
this and nearly caused an accident during the test drive and I could not
instruct it to stop.

~~~
gwbas1c
> I could not instruct it to stop

Bunk. You can take over at any time by turning the wheel, pushing the brake,
or pushing the accelerator.

Source: I take over from autopilot all the time.

~~~
mondoshawan
Bunk. I was not made aware that was even possible.

Source: I... was there? :D

~~~
gwbas1c
Everyone knows that you can step on the brake to stop a car, even with the
cruise control on.

At this point, you're not saving face by arguing with fact.

------
kwhitefoot
This is a really misleading and inflammatory article.

" reading books, napping, strumming a ukulele, or having sex."

Perhaps it was like that years ago, long before the Model 3 but it isn't like
that now. My 2015 Model S would complain long before I could complete a sex
act. Or perhaps Bloomberg's hack suffers from premature ejaculation.

~~~
CobrastanJorji
The author is alluding to a specific incident, which was posted to porn sites.
Elon Musk commented about it on Twitter (because of course he did), saying
"Turns out there’s more ways to use Autopilot than we imagined."

------
zaroth
I take issue with the headline. The fact that crashes still do and will occur
even while AutoPilot is on does not mean that AutoPilot is legally or
ethically responsible for the crash, until the system is advertised as a Level
4 system. Only at that point can we look at the accident rate of the L4 system
and decide whether those accidents are the fault of the autonomy, or whether
the algorithms could be improved to have avoided an accident that was even
another driver's fault.

Distracted driving is a leading cause of crashes today, and particularly if
you use AutoPilot, you really see the _ridiculous_ numbers of distracted
drivers all around you. Many drivers choose to use their phones instead of
looking at the road, whether they have AutoPilot or not. It's dangerous in
both cases, but less so if you have AutoPilot on. In either case, in an L2
system, if the car crashes while the driver is on their cell phone, it’s the
driver’s fault regardless of whether there was AutoPilot enabled or not.

------
xyzzy2020
I don't get why people are up in arms over this: the average person drives
like an I D I O T. A Tesla on auto-pilot drives above average. And right now
they are driving the worst they will ever drive.

Therefore: more auto-pilot, less human death.

~~~
simion314
I don't think this is true; it may be that many young drivers and some drunk
drivers cause a lot of the accidents.

But consider this: you are a decent driver and you need to send your child
somewhere. Do you drive them yourself, because you know that you will not
speed or text or be drunk, or do you send them with a robot that is better
than an idiot but worse than you?

Sure, if you were drunk or tired it would be safer to send them with the
robot.

~~~
xyzzy2020
Yes, that is the psychological barrier you describe that will be difficult to
overcome.

The flaw is everyone thinks they will be a better driver than an AI, even
though very few actually will be.

So if I had to choose between my children driving with an "average joe" /
friend / etc. vs. an AI, I would say: the data says the AI will crash 25%
less, therefore it is safer.

~~~
simion314
I assume you are better than a drunk teen who is coming from a party and has 3
months of experience. Most of the road deaths here in Romania are caused by
young drivers coming from parties late at night, maybe drunk, with the car
full of people, so one crash causes a lot of deaths. So the "average driver"
from the stats is a terrible driver, because the stats are skewed hard by
inexperienced, drunk, or tired drivers. You can have both of these next two
facts true at the same time:

1. Replacing all drivers with AI is x% safer.

2. Replacing you, an experienced, responsible driver, with an AI is less safe.

Does it make sense?
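The skewed-average point is easy to check with arithmetic. A quick sketch with entirely made-up numbers (the group shares and crash rates are illustrative, not real statistics):

```python
# Made-up figures: crash rates are fatal crashes per 100M miles.
groups = {
    # name: (share of miles driven, crash rate)
    "drunk/tired/novice": (0.10, 20.0),
    "typical sober adult": (0.90, 0.5),
}

# The fleet-wide "average driver" rate is a mileage-weighted mean.
avg_rate = sum(share * rate for share, rate in groups.values())  # 2.45

# An AI that crashes 25% less than the *average* driver:
ai_rate = 0.75 * avg_rate  # ~1.84

# Both facts hold at once: the AI beats the average driver...
assert ai_rate < avg_rate
# ...yet is still several times worse than a typical sober adult.
assert ai_rate > groups["typical sober adult"][1]
print(f"average: {avg_rate:.2f}, AI: {ai_rate:.2f}, typical adult: 0.50")
```

With these toy numbers, "25% safer than average" still leaves the AI roughly 3.7x riskier than the typical sober adult, which is exactly the gap between fact 1 and fact 2.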

