
Driver who died in a Tesla crash using Autopilot ignored 7 safety warnings
https://www.washingtonpost.com/news/the-switch/wp/2017/06/20/the-driver-who-died-in-a-tesla-crash-using-autopilot-ignored-7-safety-warnings
======
basseq
From a UX (and potentially legal) standpoint, it matters less how many
warnings he ignored in the few minutes prior to the crash than how many
warnings he, and others, ignored on other drives.

By way of extreme example, if every time anyone engaged autopilot, it issued a
warning every minute, you would quickly ignore them. Information fatigue.

~~~
RickHull
> _Since the crash, those reports have said, Tesla has updated its Autopilot
> feature to include a strikeout system, whereby drivers who repeatedly ignore
> safety warnings risk having their Autopilot disabled until the next time
> they start the car._

Looks like this concern has been addressed already.

~~~
thinkfurther
_One_ concern is also whether this is fair criticism or talking shit about a
dead person. Any changes that came after the crash are completely irrelevant
to that concern.

~~~
Sohcahtoa82
> talking shit about a dead person

Being dead does not automatically command more respect. If someone's fuck-up
caused them to die, there's nothing wrong with criticizing them for their
fuck-up.

~~~
thinkfurther
Did I say it is? I said that _one_ concern is "whether this is fair criticism
or talking shit". And for that question the state of the autopilot _then_ is
relevant, not any improvements afterwards. Speaking of autopilots, I'm really
curious what thought processes, if any at all, lead to such downvotes and
responses. It doesn't get simpler than basic chronology, after all.

~~~
Sohcahtoa82
> Did I say it is? I said that one concern is "whether this is fair criticism
> or talking shit"

You said "whether this is fair criticism or talking shit _about a dead person_
", which can easily be interpreted as saying you think people that are dead
deserve an extra level of respect simply for being dead.

Obviously your message has been misinterpreted by multiple people.

------
WhatIsDukkha
I'm not sure I like the focus on the driver here. I do think he was a fool for
trusting the device.

It genuinely sounds like the autopilot failed to prevent a collision that,
whether right or wrong, we would expect it to prevent before broadly deploying
these devices.

A semi is a very large and pretty reasonably defined visual mass that
intersected with the path of the vehicle.

What the autopilot did, and when it did it, seems to be missing from this
round of reports on the incident, and that's pretty lame.

Edit- Notice there is no mention of a COLLISION warning being issued? Only
nagging "hands on wheel".

~~~
MBCook
Despite the terrible name, Tesla's system is supposed to be _assistive_; the
driver is still supposed to be in the loop.

If the driver totally abdicated all involvement, the accident is his fault.
He's the final safety system and it sounds like he didn't even try to do his
job.

~~~
CydeWeys
Names are important. Calling it "autopilot" implies that it does the driving
for you, and people will take that at face value. It should be named in a way
that accurately describes its abilities.

If it is actually just assistive, and isn't an autopilot, then calling it one
is incredibly dangerous. It'd be like calling homeopathic junk a "cancer
medicine", which is illegal, mind you.

~~~
Robotbeat
"Autopilot" in an aircraft doesn't imply it flies for you. And in an aircraft,
there are multiple levels.

"Autopilot" is no worse than "cruise control" in this regard.

~~~
dragonwriter
“Autopilot” is worse, not because of what “autopilot” actually means in
aviation, but because of what “autopilot” _suggests_ to the car-buying public,
who are, generally, not aviators.

~~~
hsod
What does the car-buying public think those folks in blue suits are doing at
the front of the plane?

~~~
neumann
Giving each other handskys!

------
dsr_
Sounds like he either committed suicide or was unconscious/disabled before the
accident.

An autopilot shouldn't just hand over control to the driver and pray when it
needs help and the driver is unresponsive; it should try to come to a stop on
the side of the road, then shout for help. Shouting via a cell call to 911 (or
country-specific equivalent) wouldn't be a bad idea.

"This is a Tesla P85. My driver is unresponsive. The car is stopped on I-95
southbound, 2.1 miles south of exit 4 past I-295. The car is red. Please send
help. This message will repeat three times."
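
A rough sketch of the kind of escalation I have in mind, with hypothetical
vehicle hooks (none of this is any manufacturer's actual API):

    import time

    # Hypothetical fallback loop for an unresponsive driver. Every hook on
    # `car` (issue_warning, driver_responsive, ...) is an assumed interface.
    MAX_WARNINGS = 3

    def unresponsive_driver_fallback(car):
        """Escalate from warnings to a controlled stop and an emergency call."""
        for _ in range(MAX_WARNINGS):
            car.issue_warning("Take the wheel now")  # audible + visual alert
            time.sleep(5)                            # give the driver time to react
            if car.driver_responsive():              # e.g. torque sensed on the wheel
                return                               # driver is back in the loop
        car.enable_hazard_lights()
        car.pull_over_and_stop()                     # decelerate onto the shoulder
        car.call_emergency_services(
            "Driver unresponsive. Vehicle stopped at %s." % car.gps_position())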

~~~
revelation
But this wasn't a _disengagement_ of the autopilot; he just didn't have his
hands on the steering wheel.

Not to mention that you don't actually need to have autopilot running to get
the emergency braking feature. Autopilot or not, the car should brake before
going full speed into the back of a semi.

For example:
[https://www.youtube.com/watch?v=eD89Fc_ofXc](https://www.youtube.com/watch?v=eD89Fc_ofXc)

~~~
pbhjpbhj
It's not a terribly informative video as one can't tell what the car did and
what the driver did. Did the car or driver steer around the car in front,
who/what operated the break first, did the driver have to specify the degree
of breaking or was that all down to the vehicle. Was the driver watching the
car in front of the vehicle he was following; did the vehicle do that, what -
if anything - did the vehicle react to. At what point does the driver receive
control back from the vehicle if it did take over control, was it full or
partial control and up to what point.

None of that appears in the video AFAICT?

Extra credit: if the vehicles in front were attempting a carjacking is it
possible for the Tesla driver to floor it and push the car in front out of the
way?

~~~
revelation
Description says autopilot was disabled, and the quick successive beeps you
hear are the emergency braking system detecting an impending collision and
braking. The driver is steering (and was before; no autopilot).

This is how emergency braking works in every car that has it.
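
Roughly, emergency braking keys off a time-to-collision estimate from the
radar/camera; a simplified sketch of the idea, with purely illustrative
thresholds (not any manufacturer's real values):

    # Simplified time-to-collision (TTC) trigger, the usual basis for
    # automatic emergency braking. Thresholds are illustrative only.
    WARN_TTC_S = 2.5    # below this, beep rapidly at the driver
    BRAKE_TTC_S = 1.2   # below this, brake hard autonomously

    def aeb_decision(gap_m, closing_speed_mps):
        """Return 'none', 'warn', or 'brake' given the gap to the obstacle."""
        if closing_speed_mps <= 0:
            return "none"                  # not closing on anything
        ttc = gap_m / closing_speed_mps    # seconds until impact at current rates
        if ttc < BRAKE_TTC_S:
            return "brake"
        if ttc < WARN_TTC_S:
            return "warn"                  # the quick successive beeps
        return "none"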

------
mrbabbage
The article mentioned how newer Tesla vehicles have a "strikeout" system where
the autopilot software disables itself until the next car startup if the
driver repeatedly ignores warnings.

While this is definitely an improvement, how does that actually work? If the
driver isn't paying attention, how do the newer vehicles force the driver back
into paying attention and back to taking full control of the car? This seems
like a really hard problem, and the article doesn't really dig into it.

Related:
[https://en.wikipedia.org/wiki/Dead_man%27s_switch](https://en.wikipedia.org/wiki/Dead_man%27s_switch)
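
For illustration, the strikeout behavior the article describes amounts to a
counter layered on a dead man's switch; a minimal sketch, assuming a
hypothetical hands-on-wheel signal (not Tesla's actual code):

    # "Strikeout" dead man's switch: repeatedly ignored warnings disable
    # the assist feature until the next ignition cycle.
    STRIKE_LIMIT = 3

    class AutopilotStrikeout:
        def __init__(self):
            self.strikes = 0
            self.enabled = True            # reset on every car startup

        def tick(self, hands_on_wheel):
            """Return True if Autopilot may stay engaged for this cycle."""
            if not self.enabled:
                return False
            if hands_on_wheel:
                self.strikes = 0           # attention confirmed; reset the count
                return True
            self.strikes += 1              # one more ignored warning
            if self.strikes >= STRIKE_LIMIT:
                self.enabled = False       # locked out until restart
            return self.enabled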

~~~
chrisfosterelli
The vehicle starts to slow down (eventually coming to a stop) and turns the
hazard lights on.

Here is a video:
[https://youtu.be/UiNDVdmF9ws?t=47s](https://youtu.be/UiNDVdmF9ws?t=47s)

------
lancewiggs
While some (elsewhere) are arguing the system could have gracefully handed
back control by slowing down to an eventual stop, it's pretty clear the driver
was at fault. It's also not clear how the autopilot could have done better:
is slowing down on a high-speed road any less dangerous? It needed a human in
the loop.

The evidence was that the driver was wilfully ignoring the safety warnings,
just like ignoring a fence or sign above a steep cliff. That increased the
fatal risk to him and other road users. Any other (less litigious) country
would immediately place the fault where it clearly lies.

~~~
Steko
> it's pretty clear the driver was at fault

It's pretty clear both that the driver is primarily at fault and that Tesla's
poorly designed system (as it existed at the time) compounded the driver
error.

> is slowing down on a high speed road any less dangerous

Yes, slowing down gradually and pulling over is safer than careening down the
road with zero input from the human driver who, as you admit, "needs to be in
the loop".

------
MBCook
I know they changed the software somehow (although I don't know exactly how).

It seems like if the driver isn't interacting and following the rules (like
keeping their hands on the wheel), why not slow the car and have it pull over
with the hazard lights on?

My car's systems (though not as advanced) will simply turn off if you're not
doing your part.

~~~
basseq
They did. From the article: "Since the crash, those reports have said, Tesla
has updated its Autopilot feature to include a strikeout system, whereby
drivers who repeatedly ignore safety warnings risk having their Autopilot
disabled until the next time they start the car."

~~~
MBCook
That's good. Like I said I didn't know the exact change they made (WaPo is
paywalled, I read the Ars article).

Thanks.

As long as they're so advanced, having the system eventually pull over seems
like a good idea. Turning the system off won't work great if the driver falls
asleep and the beeps don't wake them. At least if it pulled over they'd be
safe. Probably a really hard thing to do, though.

~~~
skykooler
It doesn't pull over yet, but if you repeatedly ignore warnings it will put on
flashers and slowly pull to a stop.

~~~
MBCook
Really? Excellent. That's most of the way there.

------
dmix
I'd love to compare the results of this final analysis with the FUD that I
remember coming out following this crash on Reddit et al with a series of
people saying 'I told you so'.

Complex systems of risk mitigation tend to get pushed in the media as an
ideal solution to these accidents well before the complete facts come in.
Common-sense precaution is typically ignored in the culture of total
risk-averse superiority that floats to the surface after these events, which
would ultimately result in very few of these experimental products, which
drive innovation, being tested (as is clearly the case in many risk-averse
cultures such as Japan and Switzerland).

With any early adoption of automation whose success is partially dependent on
human intervention (e.g., flying planes), stories of failures to know when to
listen to the warnings provided by the interface over your own intuition, and
when not to, are helpful not only in training but also in the design of these
systems.

To not expect a certain degree of these failure situations, where the system
largely acted as intended, is naive and ultimately defeatist.

I hope the drivers realize the risk they are engaging in here and change their
behaviour accordingly, but I can't imagine many formal systems that would
practically be effective at deterring these situations, other than maybe
adapting educational training and the content of the warning systems.

~~~
CydeWeys
As someone who doesn't own a car but is often a cyclist or pedestrian, I don't
want to be reliant on drivers' (potentially lacking) common sense for my
safety.

~~~
dmix
That's why, when I bike daily in Toronto, I take the full lane where it's
obvious there isn't much room for bikes on the side or cars are turning right.

I also pass on the left when cars will clearly be turning right on a green or
red light.

I'm amazed that this is a rare thing for a biker to do when it should be
common sense, but it's always best to design systems for the lowest common
denominator. Still, I'm amazed so many bikers put so much trust in other cars
and assume drivers are looking into their mirrors when they turn or merge.

The best advice I've heard as a cyclist is that it's safest to assume no
driver is paying attention and act accordingly. This is common sense
precaution.

When it comes to designing automated driving systems, it's also safe to
assume a certain lack of common sense from other people, so prepare for edge
cases.

But in this particular case I'm not sure what you can do beyond issuing those
7 warnings before the crash. All I said is that the language used and the
severity of the warnings could be tested in experiments, as could the
marketing of the efficacy of these systems to early adopters of alpha systems:
basically, eliminating overconfidence in existing oversight mechanisms.

------
revelation
This was the crash where the Tesla drove at full speed into the back of a
trailer, yes?

I don't see how the "7 safety warnings" are relevant here; that was just the
car's reminder that he should put his hands back on the steering wheel. It
does not mean the car detected it was losing vision and would stop steering
the vehicle shortly. The safety warnings had nothing to do with the Tesla
failing to recognize the obstacle ahead and brake.

I don't think the future looks good for autonomous driving if the NTSB is
going to accept an explanation that there is no vehicle failure here because
it gave some regular reminders to put your hands on the steering wheel.

~~~
fiter
Hands on the steering wheel are a proxy for paying attention. That's really
the key point of these "assist" features: you need to be paying attention just
as much as you would be otherwise. Unfortunately, this reduces their
effectiveness to being a backup.

~~~
revelation
That's great if you're looking to take someone's license away. That is not
what the NTSB does. The assist feature failed to recognize the obstacle ahead
and, worse yet, it failed to recognize its own failure. This is what the NTSB
is investigating.

The autopilot did not disengage. If it disengages and tells the driver to take
over immediately, hey, maybe (big maybe) we can accept that this was not a
technical failure and was solely the driver's fault. But that didn't happen.

~~~
fiter
At no point should the system have to "[tell] the driver to take over
immediately", because the driver should always be in control. That is the
point of the assist technology: to act as a backup for driver error. The
driver is not acting as backup for the autopilot error.

------
pwthornton
Until we get to real autonomous cars, we'll probably have more issues with
people and their hubris. My car has a bunch of these features, and I have them
only in case I mess up. They are redundancy. Nothing more.

Some people are way overestimating what these systems can do, or are paying
less attention because they think the car will correct for their mistakes.
We're going to be in for a rocky few years where these advanced systems may
lead to more crashes from inattentive drivers.

Having used these various technologies, I can firmly say that I can't wait for
fully autonomous cars.

~~~
MBCook
Mine does too, but instead of calling it 'Autopilot' it's 'adaptive cruise
control' and 'lane keep assist'. The names make it clear the car is not
driving itself (although it's clear to me it's close to capable for the
simple case).

The car makes it VERY clear you MUST steer. The car mostly keeps itself in the
lane but is a little unsmooth about it (I assume to encourage drivers to do it
themselves) and will yell at you loudly, then disable itself, if you go more
than 10-15s without steering yourself.

It's a great feature and helps a ton with high crosswinds... but it's very
clear it's not meant to be relied upon to drive the car.

------
samfisher83
Maybe the biggest problem is Tesla calling it Autopilot. Just call it lane
assist or adaptive cruise control. I know many cars' systems will stop working
if you keep your hands off the wheel too long.

~~~
artursapek
But their plan is to slowly evolve it into a full autopilot feature. I think
Elon says they're demoing a CA->NYC no-hands trip sometime in the next year.

Maybe they could have branded it differently early on?

~~~
samfisher83
Call it autopilot when it works like an autopilot, but right now it really
doesn't do all of that.

------
quantumwannabe
The crash had nothing to do with the driver ignoring autopilot warnings. For
some reason, the automatic emergency braking feature of the Tesla did not
work. This is a serious problem, and AEB should be completely separate from
autopilot. What is the point of a safety feature that is supposed to stop you
from crashing if it doesn't activate? In this case the driver appears to be an
idiot who wasn't paying attention, but imagine if he had had a seizure, fallen
asleep, been cut off and brake-checked, or something else.

~~~
pbhjpbhj
To be fair, a vehicle coming across you at a crossroads is hard to judge the
speed of, hard to see in time, and hard to react to. If you brake, will it
slam into the side of you? Do you need to floor it, or maintain speed? What if
you're in amongst traffic and can't see the vehicle (visually or via tech
assist)?

Is there other info you're using to come to your conclusions here? The
article doesn't seem particularly informative about the crash (location,
circumstances, etc.).

~~~
quantumwannabe
From the NTSB Report
([https://dms.ntsb.gov/public/59500-59999/59989/604694.pdf](https://dms.ntsb.gov/public/59500-59999/59989/604694.pdf)):

> _In concluding the interview, the witness offered the opinion that both
> drivers had sufficient time to have seen the other vehicle and should have
> been able to slow or stop sufficiently to have avoided the crash._

The full witness statement is on page 19. It sounds like the truck was making
a left turn across the Tesla's path, and according to the witness both drivers
had enough time to see the other vehicle: the truck was moving slowly, and the
Tesla took around three seconds to reach the intersection from the time the
witness first saw it.

------
danso
Worth noting that the driver seemed particularly enthusiastic about autopilot,
to the point of frequently recording his experiences and publishing them on
YouTube:

[https://www.nytimes.com/2016/07/02/business/joshua-brown-
tec...](https://www.nytimes.com/2016/07/02/business/joshua-brown-technology-
enthusiast-tested-the-limits-of-his-tesla.html)

> _CANTON, Ohio — Joshua Brown loved his all-electric Tesla Model S so much he
> nicknamed it Tessy._

> _And he celebrated the Autopilot feature that made it possible for him to
> cruise the highways, making YouTube videos of himself driving hands-free. In
> the first nine months he owned it, Mr. Brown put more than 45,000 miles on
> the car._

> _“I do drive it a LOT,” he wrote in response to one of the hundreds of
> viewer comments on one of his two dozen Tesla-themed videos. His postings
> attracted countless other Tesla enthusiasts, who tend to embrace the cars
> with an almost cultish devotion._

That he was a Navy SEAL who specialized in electronics and bomb disposal
probably also gave him additional confidence/hubris when driving.

------
ProfessorLayton
Why does autopilot continue to drive the car without someone holding the
steering wheel in the first place? If the car can't drive itself without
driver input, then why isn't letting go of the steering wheel the equivalent
of disengaging autopilot?

Tesla's technical hubris could have killed more than just the driver.

------
jkchu
[https://www.tesla.com/blog/tragic-loss](https://www.tesla.com/blog/tragic-
loss)

I think Tesla's original statement is relevant to post here. Although likely
somewhat biased, it provides a succinct explanation of why the Tesla did not
brake automatically to avoid the collision.

From the statement: "Neither Autopilot nor the driver noticed the white side
of the tractor trailer against a brightly lit sky, so the brake was not
applied. The high ride height of the trailer combined with its positioning
across the road and the extremely rare circumstances of the impact caused the
Model S to pass under the trailer, with the bottom of the trailer impacting
the windshield of the Model S."

------
YCode
Has anyone driven one of these using Autopilot?

It strikes me as an eerie feature, in that semi-autonomous means you're not
controlling the car, but at any moment you may have to jump in, generally
unaware of the forces the car and tires have been experiencing.

It just seems like a major mindset shift from mostly idle passenger to active
driver and offhand I feel like until the car can reliably drive itself I'd
rather just drive the whole way.

------
kuldeep_kap
1\. I think any autopilot system that makes users keep their hands on the
steering wheel and pay equal attention is totally useless. In fact it's
detrimental, since it gives a false sense of control and naturally leads users
to lapse.

2\. Anyone who has read Don Norman knows this is usually the fault of the
design of the system, not the user. The system needs to do more to assist
users in these edge cases.

~~~
MBCook
1\. Yep, which is why they never should have named it that. That was a very
dangerous decision.

2\. No, it's the fault of the marketing and system designers for giving users
the impression it was a 'true' autopilot (see #1)

------
glitcher
For a high tech vehicle with so many safety features built in, I don't
understand why an automatic collision avoidance system didn't override the
autopilot system and start braking well ahead of this high speed impact. Maybe
this model doesn't have such a system, or the traffic conditions changed so
rapidly there was no time left to brake?

------
kamaal
This again brings us back to the fundamentals.

Do you trust your self-driving car's AI algorithm? Yes? Sure, let's remove the
car controls (accelerator, brake, and steering wheel). Then safety and other
liability is on the car manufacturer.

If you don't have this level of trust in your algorithm, you don't really have
a self-driving car.

------
joeblau
Our family is looking at getting a Tesla, and my wife brought up this
incident. I told her that Teslas are not fully capable of driving themselves
and that the guy was probably not paying attention to what he was doing. I'm
sad this guy lost his life, but ignoring safety warnings for whatever reason
earned him the Darwin Award.

~~~
legolas2412
I am going to disagree there. Tesla names its system Autopilot, uses the fact
that an autopilot that includes collision avoidance reduces accidents by 40%,
and then claims that their system is safer than a human (an assisted human is
safer than a human, genius!). Elon Musk has a statement every month about
self-driving autonomous driverless cars. They even have the gall to say that
new cars have "full self driving hardware"; is proof of outrageous statements
even required? And all this hype/misleading advertising is fine because Tesla
showed a few warnings? And the driver didn't have sufficient reason to believe
the car was capable of complete autonomy? A simple Google search will tell you
that this driver wasn't the only one.

------
MBCook
Ars Technica has an article that's not paywalled:

[https://arstechnica.com/tech-policy/2017/06/tesla-model-s-
wa...](https://arstechnica.com/tech-policy/2017/06/tesla-model-s-warned-
driver-in-fatal-crash-to-put-hands-on-steering-wheel/)

~~~
ktamura
Yes, and you pay in the form of giving away your data to advertisers.

To be sure, WaPo also sells data to their advertisers. At the same time, WaPo
has broader coverage and probably needs a larger newsroom as a result. Ars
Technica is a subsidiary of Conde Nast, and Conde Nast as a whole has
paywalled publications (like New Yorker).

In general, I think it's perfectly fine to promote paywalled content on HN so
long as the content itself is good. We have to fund quality reporting/writing,
and HN can serve as a curation mechanism to surface "content worth paying
for".

~~~
MBCook
I tend to see 1-2 articles a month I want to read from WaPo, but it's about
$50/year for a subscription. And many of those articles aren't much more than
what other sites have (because they're not deep investigative features).

I'm not interested in paying about $3.33/article.

When they do big investigations I'd be happy to pay $1 or $2 to read it.

This? No.

~~~
ktamura
You might want to try subscribing to WaPo through Apple News or Google News
Stand. They have monthly quotas on the number of articles you can read.

While the principle of "per article" pricing is appealing, it probably won't
work in practice: generating content worth paying for is not like spinning up
servers, because the cost of production occurs upfront _and_ articles have
non-zero marginal cost.

~~~
MBCook
I'm not surprised the pay-per-article thing has never taken off.

I've never used Apple News; I'm kind of hesitant to. I already track the sites
I care about through RSS and see many of the same things through Twitter
first. I don't really want to start using a third thing that overlaps with the
other two.

