
Tesla Autopilot Crash: Why We Should Worry About a Single Death - mcspecter
http://spectrum.ieee.org/cars-that-think/transportation/self-driving/tesla-autopilot-crash-why-we-should-worry-about-a-single-death
======
fennecfoxen
Article argues, "autopilot could let people who are blind and disabled drive
now, where they couldn't drive before. But they could never get killed in a
car accident _while driving_ before, so this is ethically questionable."

Article's logic is dumber than a sack of hair.

~~~
SomeCallMeTim
The article is arguing for inaction in the Trolley Problem [1]. In short: You
have a choice between saving five people and letting one other person die, or
doing nothing and letting five people die.

Which _is_ a controversial position, but not exactly stupid. I would call it
misguided, and certainly not a very utilitarian position. And given the
options, I personally would vote for the utilitarian answer: save five people
even if the one who dies wasn't otherwise threatened.

[1]
[https://en.wikipedia.org/wiki/Trolley_problem](https://en.wikipedia.org/wiki/Trolley_problem)

~~~
aetherson
Do you similarly advocate that we should, right now, today, in real life
policy, set up a program where we check people for matches to multiple people
needing organs, and if one healthy person's organs can save the lives of 2+
people, murder that healthy person and take his/her organs?

~~~
dalore
No, because the risk of surgery and transplant, plus the poor initial health
of the person needing the organ, means it's not 100% certain the transplant
will succeed. But leaving the organs in the healthy person is certain to keep
that person alive.

~~~
aetherson
That's obviously something we could control for. Do you support the program
if we kill only people whose deaths we believe are statistically likely to
save more than one life?

------
thedevil
One day my son will drive.

Do I want him to face a 1/10000 chance of being killed each year by a human
driver?

Or do I want him to face a 1/20000 chance of being killed each year by an
autonomous vehicle?

I'll take option two, for him and for me. I don't give a damn about
philosophical questions. And I don't give a damn if it's a different die I'm
rolling.

The only reason I care much about this one death is because it risks slowing
down progress. Lives are at stake here. Yours. Mine. Your family's. My
family's.

The statistical/utilitarian argument for self-driving vehicles is less
compelling than the personal-story argument against them. So I want to make
the argument for them as personal and compelling as we can.

~~~
turtlebits
The only problem is that the statistics can vary greatly as there are so many
factors to consider. Car age/model, geographic area, roads driven, etc.

~~~
underwires
How about this factor - 31% of all traffic-related deaths involve alcohol-
impaired driving.

~~~
JoeAltmaier
So, a car breathalyzer that allowed only automatic driving if your blood
alcohol was too high would definitely be an improvement we could all get on
board with?

~~~
underwires
wow, that was a leap. I think a lot of people would be thrilled to have the
car drive when they've been drinking.

------
madmax96
>We can’t cut corners just because we want to rush a life-saving product to
market.

But there's a huge difference between a self-driving car and a novel drug.
These cars are using statistical learning mechanisms -- it is _impossible_ to
prove that the car will always function correctly.

>Yes, it could be that autonomous cars will save many more lives than they
take. But “the ends justify the means” is a dangerous approach in ethics,
capable of justifying any evil as long as the math worked out.

I mean, yeah, "the ends justify the means" is a dangerous approach. But this
is a strawman: allowing less-than-perfect algorithms to transport us a little
less than perfectly safely (though more safely than human drivers do) is not
an instance of "the ends justify the means." It's an instance of pragmatic
rationality in response to a very real and very common danger.

>If torturing innocent people or infringing on other rights can deliver some
greater good, we would (or should) be deeply troubled by the choice.

The author is using torture and general rights violations as an analogy to
argue against driverless technology! How does this seem reasonable?

There's one question I have for the author: whose rights are violated by
driverless technology that are not already violated by human driving?

~~~
uola
Computer software can also be updated, yet almost everything is broken.
Rushing self-driving cars is not pragmatic; it perpetuates the lack of safety
culture that driving already has. Whatever lives you save in the short run are
likely to be lost many times over on the margin of the standard we expect from
self-driving cars. Pragmatic safety would be things like speed and distance
enforcement. That's how self-driving is really going to save lives: by not
making those one-in-a-hundred-thousand decisions that kill people. Which you
could have today, if it weren't for the fact that watching movies on Autopilot
is so much cooler than being saved from yourself.

~~~
branchan
Can you elaborate on 'everything is broken'?

~~~
uola
There's a supreme mismatch between the state of software (and infrastructure)
and how we use it today. If nothing else, people should be concerned about
this because it makes the Internet de facto not open, since most people can't
afford the resources (like security) to run many services.

You would think that sending something to a machine on the Internet would
imply that it would actually reach the recipient and that no one else is
listening. Not only is that not true, it was essentially never intended to be
true. We work very hard to try to make it so, but we will probably never reach
a point where it is equal to what it could have been if that had been the
intention from the beginning.

[https://medium.com/message/everything-is-broken-81e5f33a24e1](https://medium.com/message/everything-is-broken-81e5f33a24e1)
[http://swiftonsecurity.tumblr.com/post/98675308034/a-story-a...](http://swiftonsecurity.tumblr.com/post/98675308034/a-story-about-jessica)

------
alexandros
Oh god, a deontic argument for slowing down self-driving cars. Yes, we choose
to let several people die every day by not spending every available dollar on
healthcare. Even then, we would have to decide where the large but still
limited budget would be invested. There's no pure white way to get out of
consequentialism. Might as well accept it and work with it rather than
shutting your ears and singing la-la-la.

Sorry for the rant. For those who care for a reasoned argument along the
lines of the above, Scott says it better than I ever could:
[http://raikoth.net/consequentialism.html](http://raikoth.net/consequentialism.html)

------
whack
Wow. After the introduction, where the author thoroughly undercuts his own
thesis ( _approximately half a million people would have been saved if the
Tesla autopilot was universally available_ ), the three reasons given are
astoundingly poor.

 _Reason 1: Different people will die. Example: blind people who are currently
not at risk (because they don't drive) will die in greater numbers._

And this is an argument against saving 500,000 lives every year? Really?

 _Reason 2: Humans are irrationally afraid of situations where they lose
control. Example: flying in airplanes vs driving a car_

And we're in favor of 500,000 additional people dying every year, because of
people's irrational fears?

 _Reason 3: The ends don't justify the means. Example: Torture_

Even if you accept the underlying premise that the ends don't justify the
means, this argument only carries weight in situations where the "means" are
considered evil. Like torture. Or robbery. What exactly is "evil" about
allowing consumers to voluntarily purchase and use self-driving cars, and
saving thousands of lives every year in the process?

I was surprised to see Musk's candid and blunt defense of self-driving cars.
It's not PR savvy, but honestly, it's exactly what we need. The reasons being
marshaled by people like this author, against self-driving cars, are
horrendously bad. And while we sit here twiddling our thumbs, discussing human
irrationality while sipping our lattes, hundreds of thousands of people are
dying. Maybe it's time we stopped splitting hairs and started doing something.

~~~
krschultz
I'm not seeing where the author concedes that 500k lives a year would be
saved. That number is in an Elon Musk quote, not the author's argument.

Keep in mind that total road fatalities in the US are ~32k/year. I'm not sure
how Elon got to 500k, but even if we handed out a free Tesla to every single
American, and not a single person was ever killed in a Tesla, the number is
off by an order of magnitude. Of course, Elon is quoting global fatalities,
but road fatalities in 2nd- and 3rd-world countries could be reduced with
lots of basic infrastructure and existing car technology. It's simply an
apples-to-oranges comparison, before you even start considering the
unrealistic cost of a Tesla for those markets.

tl;dr: the 500k number is bullshit and any argument based on it is asinine.

~~~
whack
That quote was planted in the middle of a section where the author himself
was discussing the vast prevalence of car fatalities caused by human error,
and how automated driving can help minimize it. Given the context around the
quote, the author's decision to include it, and the fact that the author made
no attempt whatsoever to contest or dispute it, I see no reason to believe
that the author doubts the quote.

Regarding the numbers behind the 500k:

- there were 1.25 million road deaths globally in 2013

- yes, this can be improved through seatbelts and safety standards, but it
can also be improved by self-driving technology. This isn't an either-or
problem. Having one doesn't preclude the other.

- self-driving cars aren't limited to Teslas. Any car manufacturer can
integrate self-driving into their cars.

- there's no reason why self-driving technology won't penetrate the market
and become mainstream at some point in this century. This is obviously not
going to happen overnight. The more we delay the initial roll-out, the longer
mainstream penetration will take.

------
thinkmoore
While I agree some of the early arguments are a bit weak, I liked the
author's discussion of the parallels to medical research. I think it gets
across why some (including myself) think it's worth approaching this exciting
new technology with a slightly less cavalier attitude:

 _In developing cancer drugs that could save millions of lives—just like robot
cars are promised to do—we understand we can’t ignore problems in clinical and
human trials. We can’t cut corners just because we want to rush a life-saving
product to market.

“The perfect is the enemy of the good,” as famously declared by Voltaire, is
also a common reaction to ethical critiques of autonomous cars. But this is a
straw-man argument: no one is demanding perfection, just due diligence,
especially if death is on the line. Just as with cancer drugs or anything else
on the market, a product doesn’t have to be perfect, though that’s not an
excuse to not be more careful.

Look at seatbelts as an iconic safety device: even they aren’t absolved of all
sins, just because they save a lot of lives overall. Unlatch buttons that are
too large (and can be accidentally bumped open) or too easily opened have
sparked lawsuits and massive recalls. These aren’t really malfunctions but
only bad designs, and bad designs can kill.

The extra care needed to avoid these problems doesn’t have to take a Herculean
effort or stall research and development. It just means investing some time to
think it through and properly set expectations. This could save lives, and
every one counts. (Just ask their families.)_

~~~
ergothus
> it's worth approaching this exciting new technology with a slightly less
> cavalier attitude

While I'm a huge fan of vehicle automation, I have plenty of concern that it
not be commonly used until it is in fact safer overall (I allow that some
circumstances become less safe while others become more safe). I'm not sure
what "cavalier" attitude you refer to. Where, exactly, are people claiming
there is no concern for safety? What legislation has been passed through
rose-colored glasses?

Heck, the death referenced in the headline got lots of attention, both in
criticism and in praise of automation, so I can see drawing a lot of
conclusions from that, but "cavalier" isn't one of them. It looks to me like
all sorts of caution are being considered, proclaimed, and hammered home.

So what cavalier attitude is there that this article, with its admittedly
weak arguments, is right to be fighting?

~~~
thinkmoore
I think the idea of releasing a vehicle safety system that you apparently
have little enough faith in to call a "beta," marketing said system under the
name "Autopilot," and then acting surprised when people treat it as a
reliable autonomous control system is pretty cavalier. I was also referring
to the attitude (evident in this thread) that since there was a warning, it
is completely the driver's fault. I certainly think this technology will
improve safety. But we should also consider what role the technology itself
played in accidents, whether because drivers aren't using it as intended or
because it has a design defect (no sensing at windscreen height).

I get the sense that a lot of people feel that since the intent is good and
drivers are opting in, basically anything goes, which seems pretty cavalier
to me.

------
andrei_says_
What a tragic misnaming of a feature. I see this over and over again: people
expect the "autopilot" to drive the car for them.

Naming it something like "cruise-assist" would remove so much confusion and
misplaced expectation.

It's a wonderful car with great features and one terrible UX flaw: the
labeling of a core feature.

~~~
stannol
I wouldn't say it's tragic; I'd say it's negligent misnaming. The people in
charge surely were aware that some users would use it in ways it shouldn't be
used, because of the expectations created by its name. Some of the blame for
these accidents is on them.

------
iamleppert
Is it really accurate to take the very limited and incomplete data of this
self-driving feature and compare that to all of the world's accident rates?
Think about the long tail here...

Have they even released this data for examination? I'm thinking that many
people have approached this feature with caution: it's a novelty, and they
aren't "trusting" it or relying on it the way you would rely on a human
driver to safely get you somewhere. It would also be interesting to see the
types of roads and conditions where the feature is enabled. In the rain?
Snow? At night? Poor visibility? Heavy traffic?

Just a single number paints an incomplete picture of the data. Why has no
one mentioned this? People are sheep, quick to declare victory after seeing a
single number without questioning where or how that number came to be.

My thoughts are this: give this feature to your average Joe driving a Honda
without much guidance and we'd have a lot more mechanized death.

YouTube is full of Tesla Autopilot fail videos where the driver had to take
quick action to avert disaster. What if we let this into the hands of a lot
more irresponsible people who think they can tool down the highway watching a
Harry Potter movie, with blind faith and trust in a "feature"? How many Tesla
owners here would honestly trust the feature that much with their life at
this point?

------
tobyjsullivan
I couldn't get through most of the article because the premise was so weak.
Specifically, I couldn't get over the "different people will die" argument,
the implication being that the status quo is somehow more fair. This ignores
the fact that many people killed in collisions are not the drivers and are
not in any way at fault.

2014 stats[0] show that 38% of fatalities in car accidents were not drivers.
It's reasonable to assume that many of those who were driving were also not
at fault. _Those_ deaths are arguably near-random in most cases. The claim
that hundreds of thousands of random deaths are somehow more "fair" is an
exceptionally poor argument, in my opinion.

[0] [http://www.iihs.org/iihs/topics/t/general-statistics/fatalit...](http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/gender)

~~~
SomeCallMeTim
The "different people will die" argument has been debated for decades in the
philosophical question called the "Trolley Problem." [1]

The short answer is that yes, it's generally agreed that one should make the
active choice to save more people, even if it means different people die.

Air bags sometimes kill people in accidents where they likely wouldn't
otherwise have died, but we put up with the risk because overall we're more
likely to survive a serious accident. Air bags fire because a computer
reading a sensor decides that they're needed. It's a very close analogue, and
yet the people making the moral argument against them have been downvoted to
oblivion, and air bags are now mandatory.

[1]
[https://en.wikipedia.org/wiki/Trolley_problem](https://en.wikipedia.org/wiki/Trolley_problem)

~~~
tynpeddler
I'm not sure if the trolley problem applies here. The trolley problem relies
on one of the choices being passive. In the case of cars however, both choices
require active effort. Either we actively build cars that cause X deaths every
year, or we actively build cars that cause Y deaths every year. It seems
pretty obvious that we should build the car that causes the fewest deaths.

Even if this is a trolley problem, if we look at the problem from the point of
the regulators, then the situation could be considered the trolley problem
turned on its head. Regulators can do nothing, and let Tesla develop their
autopilot and save lots of lives. Or regulators can actively intervene, stop
Tesla, and thousands more will die. From this point of view, the trolley is
headed towards one man and you have a switch in hand to kill 5 people instead.
It's hard to imagine that anyone would choose to flip the switch.

~~~
SomeCallMeTim
> From this point of view, the trolley is headed towards one man and you have
> a switch in hand to kill 5 people instead.

Fair enough. I like your analogy.

------
espadrine
> _According to research, we generally believe that we’re above-average in
> intelligence, and that we’re above-average in driving skills. (By definition
> of “average,” obviously that’s impossible.)_

It actually is possible. It is a common mistake to confuse the statistical
average (the mean) with the median. For instance, 75% of the numbers (1, 1,
1, 0) are above their average of 0.75. More than 90% of humans have an
above-average number of legs.
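
The arithmetic behind that example is easy to verify; here is a quick check in
Python, using the same toy numbers as the comment above:

```python
# Skewed data: most values can exceed the mean (though never the median).
values = [1, 1, 1, 0]
mean = sum(values) / len(values)            # 0.75
above = sum(1 for v in values if v > mean)  # the three 1s
print(f"mean = {mean}; {above / len(values):.0%} of values are above it")
# prints: mean = 0.75; 75% of values are above it
```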

Sure, it is difficult for the general public to understand academic results.
Still, level zero of academic reporting is to link to the original paper,
instead of playing a game of Chinese whispers by linking to an article that
links to an article that links to the paper.

What the paper really points to is that people assume by default that
strangers are worse at driving than they themselves are, which, ironically,
is exactly what is happening with Autopilot in place of strangers.

------
ubercore
Admittedly I didn't finish the entire article, but the point regarding blind
and disabled people being killed seems like a strawman to me? Tesla never
intended people who couldn't otherwise drive to use autopilot, so I don't know
what lessons we can draw from this in that regard.

------
coolspot
Tesla should not have advertised Autopilot as something magical and close to
an autonomous car, because it is just another advanced cruise control. Volvo,
Mercedes, and others have it too. Now Tesla has made it harder for everyone
to convince the general public that robot cars are a good thing.

------
alex_hirner
The argument is reversed entirely if you take human desires as given, i.e. if
you acknowledge that people do text on the road.

The metric for comparing the two alternatives then becomes how many miles of
texting while driving, versus texting while being driven by an autopilot, it
takes to kill one person. I don't have the numbers, but I'm pretty sure the
latter is lower. Maybe autopilots encourage "bad" behavior, but convenience
also spurred the construction of coal power plants at the cost of shortened
lives. Much like fines for neglected pollution abatement, I would favor
mandatory life insurance for each mile on autopilot, to internalize the
risks.

------
throw7
I will never willingly get into a car that's being driven solely by a
computer. Full stop. That Tesla driver was presumably warned to be alert and
act as if he were driving... obviously he didn't, and he paid a high price.

I actually would welcome the day we have autonomous "self-driving cars" so I
could tell it to go to the shop for service/inspection and drive itself back,
but if I'm in the car I will always be the driver.

~~~
avar
One day we'll treat your insistence on driving your own car the way we would
treat your insisting on celebratory gunfire today.

At some point computers will be safer drivers than humans in every way. Are
you going to insist on irrationally putting your fellow man in harm's way at
that point?

------
banku_brougham
The responses in this thread all seem to assume that these flotillas of
self-driving cars will have a low failure rate.

The Tesla crash makes it easy to imagine 100-car freeway pile-ups caused by
AI drivers discovering novel ways to crash. Driving headlong into the side of
a trailer because its color looked like the sky?

The possibilities are endless, and the subsequent media attention and
reactionary legislation could set the whole project back a generation.

~~~
greglindahl
Are you saying there hasn't already been a 100-car freeway pileup caused by
(now-standard) cruise control? Or that human-driven cars don't drive into the
side of a trailer because the driver didn't see it?

~~~
cmotzakut6
>Or that human-driven cars don't drive into the side of a trailer because the
driver didn't see it?

I won't claim that that never happens, but I doubt it's very common. If you
were to take 1000 random car crash deaths I do not think a very large
proportion of them are caused by people running head-on into a sideways
tractor trailer that they had a long distance to notice and react to.

Tesla says this is the only fatality in 130 million miles of Autopilot
driving, and that in the US there is a fatality every 94 million miles of
driving overall. But it's known that fatalities are more common in risky
driving scenarios where Autopilot is not used, so this is not an apples-to-
apples comparison
([https://www.progressive.com/newsroom/article/2002/may/fivemi...](https://www.progressive.com/newsroom/article/2002/may/fivemiles/),
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1449863/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1449863/)).
If the appropriate adjustment is 40% or more, then the Tesla is riskier than
human driving in the "non-risky" scenarios. The fact that the accident here
was an uncommon failure mode for humans, and the Tesla was at fault, should
give a hint as to which direction the numbers need to be adjusted.
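
For what it's worth, the break-even point implied by those numbers is easy to
make explicit. A small sketch (all figures taken from the comment above; the
function name is mine), showing that a reduction of roughly 28% already makes
comparable human driving match the Tesla's quoted rate, so 40% puts humans
clearly ahead:

```python
# Figures quoted in the comment: one fatality per 130M Autopilot miles,
# versus one per 94M miles for US driving overall.
autopilot_miles_per_fatality = 130e6
overall_miles_per_fatality = 94e6

def human_baseline(adjustment):
    """Miles per fatality for human drivers, if the Autopilot-comparable
    (low-risk) scenarios have their fatality rate reduced by `adjustment`
    relative to the overall average."""
    return overall_miles_per_fatality / (1 - adjustment)

# Break-even: the reduction at which comparable human driving matches Autopilot.
break_even = 1 - overall_miles_per_fatality / autopilot_miles_per_fatality
print(f"break-even adjustment: {break_even:.1%}")           # 27.7%

# At the commenter's 40% figure, comparable human driving comes out ahead:
print(human_baseline(0.40) > autopilot_miles_per_fatality)  # True (~157M vs 130M)
```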

------
hartator
I am bullish on both Autopilot and Tesla. That said, I don't think lying
about numbers is a good thing. There hadn't been any deaths in a Tesla Model
S until now, so saying that using Autopilot is still safer than driving
'manually' is a lie. Tesla was comparing itself to the not-so-good average US
car, which I think is a bit of a stretch. I will still use it, though.

~~~
sevenless
We also don't know yet how safe Tesla is. There's hardly any data, and you
can't do any kind of utility calculus with comically huge error bars.

If we assume deaths are binomially distributed, 1 death in 100 million
kilometers is consistent with a true rate of anywhere between 0.03 and 3.7
deaths per 100 million kilometers. That's a big bracket.
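
The width of such a bracket can be reproduced with an exact Poisson confidence
interval for a single observed event (a standard stand-in for rare-event
binomials). The sketch below, which assumes nothing beyond "one fatality
observed in one block of exposure," gives a 95% interval of roughly 0.025 to
5.6 events per the same exposure; the commenter's 0.03 to 3.7 bracket
presumably uses a different interval construction or confidence level, but the
qualitative point about comically huge error bars holds either way:

```python
import math

def pois_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def bisect(f, lo, hi, iters=100):
    """Simple bisection for a root of f in [lo, hi] (f(lo), f(hi) must differ in sign)."""
    f_lo = f(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid) > 0) == (f_lo > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def poisson_ci(k, conf=0.95):
    """Exact two-sided CI for the Poisson mean after observing k events."""
    a = (1 - conf) / 2
    # Lower bound: lam such that P(X >= k | lam) = a (0 when k == 0).
    lower = 0.0 if k == 0 else bisect(lambda lam: (1 - pois_cdf(k - 1, lam)) - a,
                                      1e-9, k + 1.0)
    # Upper bound: lam such that P(X <= k | lam) = a.
    upper = bisect(lambda lam: pois_cdf(k, lam) - a, float(k), 10.0 * k + 20.0)
    return lower, upper

low, high = poisson_ci(1)  # one fatality observed in one block of exposure
print(f"95% CI: {low:.3f} to {high:.2f} fatalities per unit of exposure")
# prints: 95% CI: 0.025 to 5.57 fatalities per unit of exposure
```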

------
tfb
These self-driving cars are a good start, though. It makes sense - to me at
least - that more self-driving cars on the road means fewer accidents.
Imagine if nearly every vehicle were autonomous and wirelessly networked with
other nearby vehicles. We'd see few to no accidents, assuming everything was
secure and working correctly.

------
nyxtom
This was going to happen eventually. My whole feeling on this situation is
that it will continue to put pressure on the industry to get much, much
better at this. A tragedy, yes, but at least that person's death can be used
to help improve safety, which is a huge shift from throwing up our hands and
blaming driver error.

------
whamlastxmas
Maybe we can worry about self-driving car deaths once they surpass
train-related deaths. So far they're safer than multi-million-dollar machines
operated by highly trained professionals, directed by other highly trained
professionals, and limited to moving in only two directions on a set rail.

------
klakier
We should worry because of the blurred responsibility. Have you ever seen a
large company held responsible for a death? This is how it works: you will
die, and there will be nobody responsible for it.

The Tesla is not an especially safe car; it is about as safe as a Honda
Civic, with ~40 deaths per million cars. Do the math if you don't believe me.

~~~
mikeash
Which is how it should be. If I die in my car, it's almost certainly going to
be my fault or the fault of another driver, _not_ the fault of the car maker.
Of the ~30,000 traffic deaths/year in the US, how many are due to faulty
equipment?

As far as doing the math, are we counting deaths like the guy who stole one
and crashed it going 120MPH without a seat belt?

