
NTSB: Autopilot steered Tesla car toward traffic barrier before deadly crash - nwrk
https://arstechnica.com/cars/2018/06/ntsb-autopilot-steered-tesla-car-toward-traffic-barrier-before-deadly-crash/
======
Animats
NTSB:

 _• At 8 seconds prior to the crash, the Tesla was following a lead vehicle
and was traveling about 65 mph._

 _• At 7 seconds prior to the crash, the Tesla began a left steering movement
while following a lead vehicle._

 _• At 4 seconds prior to the crash, the Tesla was no longer following a lead
vehicle._

 _• At 3 seconds prior to the crash and up to the time of impact with the
crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no
precrash braking or evasive steering movement detected._

This is the Tesla self-crashing car in action. Remember how it works. It
visually recognizes the rear ends of cars using a black-and-white camera and
Mobileye (at least in early models) vision software. It also recognizes lane
lines and tries to center between them. It has a low-resolution radar system
which ranges moving metallic objects like cars but ignores stationary
obstacles. And there are some side-mounted sonars for detecting vehicles a few
meters away to the side, which are not relevant here.

The system performed as designed. The white lines of the gore (the painted
wedge) leading to this very shallow off-ramp become far enough apart that they
look like a lane.[1] If the vehicle ever got into the gore area, it would
track as if in a lane, right into the crash barrier. It won't stop for the
crash barrier, because _it doesn't detect stationary obstacles._ Here, it sped
up, because there was no longer a car ahead. Then it lane-followed right into
the crash barrier.
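
To make "by design" concrete, here is a minimal sketch (my own illustration,
with invented names and thresholds, not Tesla's code) of why this class of
radar ignores stationary obstacles: a return that closes at exactly the ego
speed has a ground speed of zero, which is indistinguishable from overhead
signs, bridges, and roadside clutter, so it is filtered out.

    # Toy radar target filter. Hypothetical thresholds, not Tesla's code.
    EGO_SPEED_MPS = 29.0        # ~65 mph
    STATIONARY_TOL_MPS = 1.0    # ground speed below this is treated as clutter

    def is_trackable(range_rate_mps: float) -> bool:
        """Keep only returns that are themselves moving.

        range_rate is the closing speed seen by the radar (negative =
        approaching). A stationary object closes at exactly the ego
        speed, so its inferred ground speed is ~0 and it gets dropped.
        """
        ground_speed = EGO_SPEED_MPS + range_rate_mps
        return abs(ground_speed) > STATIONARY_TOL_MPS

    print(is_trackable(-4.0))    # lead car doing ~25 m/s: tracked
    print(is_trackable(-29.0))   # crash attenuator, ground speed 0: ignored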

That's the fundamental problem here. These vehicles will run into stationary
obstacles at full speed with no warning or emergency braking at all. _That is
by design._ This is not an implementation bug or sensor failure. It follows
directly from the decision to ship "Autopilot" with that sensor suite and set
of capabilities.

This behavior is alien to human expectations. Humans intuitively expect an
anti-collision system to avoid collisions with obstacles. This system does not
do that. It only avoids rear-end collisions with other cars. The normal
vehicle behavior of slowing down when it approaches the rear of another car
trains users to expect that it will do that consistently. But it doesn't
really work that way. Cars are special to the vision system.

How did the vehicle get into the gore area? We can only speculate at this
point. The paint on the right edge of the gore marking, as seen in Google
Maps, is worn near the point of the gore. That may have led the vehicle to
track on the left edge of the gore marking, instead of the right. Then it
would start centering normally on the wide gore area as if it were a lane. I expect
that the NTSB will have more to say about that later. They may re-drive that
area in another similarly equipped Tesla, or run tests on a track.

[1] [https://goo.gl/maps/bWs6DGsoFmD2](https://goo.gl/maps/bWs6DGsoFmD2)
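
To picture the centering failure, here is a toy sketch (invented geometry, not
anything from the NTSB report): a controller that steers toward the midpoint
of the two lines it tracks will drift toward the barrier as the gore widens.

    # Toy lane-centering: steer toward the midpoint of the tracked lines.
    def lane_center(left_x_m: float, right_x_m: float) -> float:
        return (left_x_m + right_x_m) / 2.0

    print(lane_center(0.0, 3.7))   # normal 3.7 m lane: aim at 1.85 m

    # If the worn right gore line is lost and the tracker latches onto the
    # exit lane's line instead, the apparent lane widens and the commanded
    # center walks steadily toward the point of the gore.
    for right_x in (3.7, 5.0, 6.5, 8.0):
        print(lane_center(0.0, right_x))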

~~~
falcolas
One more thing to note - anecdotal evidence indicates that Tesla cars did not
attempt to center within a lane prior to an OTA update, after which multiple
cars exhibited this "centering" action into gore sections (and thus required
manual input to avoid an incident) on video.

To me, the fact that this behavior was added via an update makes it even
harder to predict: your car can pass a particular section of road without
incident a thousand times, and then an OTA update makes the thousand-and-first
time deadly.

Humans are generally quite poor at responding to unexpected behavior changes
such as this.

~~~
txcwpalpha
And this is exactly why all of these recent articles about how "great" it is
that Tesla sends out frequent OTA updates are ridiculous. Frequent,
unpredictable updates with changelogs that just read "Improvements and bug
fixes" are fine when we're talking about a social media app, but entirely
unacceptable when we're talking about the software that controls a 2-ton hunk
of metal flying at 70mph with humans inside of it.

The saying has been beaten to death, but it bears repeating: Tesla is a prime
case where the SV mindset of "move fast and break things" has resulted in
"move fast and kill people". There's a reason that other vehicle manufacturers
don't send out vehicle software updates willy-nilly, and it's _not_ because
they're technologically inferior.

~~~
slg
This isn't an issue specific to Tesla as all automakers are now making cars
that are more and more dependent on software. So what is the _right_ way to
handle these updates? You mentioned a clear flaw with OTA updates, but there
are also numerous advantages. For example, the recent Tesla brake software
issue was fixed with an OTA update. That immediately made cars safer. Toyota
had a similar problem a few years ago and did a voluntary recall. That means
many of those cars with buggy brake systems were on the road for years after a
potential fix was available and were driven for billions of potentially unsafe
miles.

~~~
txcwpalpha
>This isn't an issue specific to Tesla as all automakers are now making cars
that are more and more dependent on software.

Cars have been dependent on software for a _long_ time (literally decades).
This isn't something new. Even combustion engine cars have had software inside
of them that controls the operation of the engine, and this software is
_vigorously_ tested for safety issues (because most car manufacturers
understand a fault with such software could result in someone's death). Tesla
seems to be the only major car manufacturer that has a problem with this.

>So what is the right way to handle these updates?

The way that other vehicle manufacturers (car, airplane, etc) have been doing
it for _decades_ is a pretty good way.

>You mentioned a clear flaw with OTA updates, but there are also numerous
advantages. For example, the recent Tesla brake software issue was fixed with
an OTA update. That immediately made cars safer.

There is no evidence that said OTA update made Tesla cars any safer. There
_is_ evidence that similar OTA updates have made Tesla cars less safe.

The brake OTA that you mentioned has actually potentially done more harm than
good. Tesla owners have been reporting that the same update made unexpected
changes to the way their cars handle/accelerate in addition to the change in
braking distance. These were forced, unpredictable changes that were
introduced without warning. When you're driving a 2 ton vehicle at 70mph,
being able to know exactly how your car will react in all situations,
including how fast it accelerates, how well it handles, how fast it brakes,
and how the autopilot will act is _crucial_ to maintaining safety. Tesla
messing with those parameters without warning is a detriment to safety, not an
advantage.

~~~
omgwtfbyobbq
>Cars have been dependent on software for a long time (literally decades).
This isn't something new. Even combustion engine cars have had software inside
of them that controls the operation of the engine, and this software is
vigorously tested for safety issues (because most car manufacturers understand
a fault with such software could result in someone's death). Tesla seems to be
the only major car manufacturer that has a problem with this.

The traffic-aware cruise control (TACC) offered by most (if not all)
manufacturers can't differentiate stopped vehicles from the surroundings. I
wouldn't be surprised if their Lane Keeping Assist (LKA) systems have similar
problems.

[https://support.volvocars.com/en-CA/cars/Pages/owners-
manual...](https://support.volvocars.com/en-CA/cars/Pages/owners-
manual.aspx?mc=l541&my=2018&sw=17w46&article=96ad4d2183b62a59c0a8015176bd1b81)

>WARNING When Pilot Assist follows another vehicle at speeds over approx. 30
km/h (20 mph) and changes target vehicle – from a moving vehicle to a
stationary one – Pilot Assist will ignore the stationary vehicle and instead
accelerate to the stored speed.

>The driver must then intervene and apply the brakes.
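
In code, the behavior that warning describes amounts to something like this
(a sketch only; the names are mine, not Volvo's):

    # Sketch of the documented ACC target-switch behavior.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LeadVehicle:
        ground_speed_mps: float

    def speed_command(set_speed_mps: float,
                      lead: Optional[LeadVehicle]) -> float:
        # A stationary target is dropped, so the controller acts as if the
        # road were clear and returns to the stored speed -- the case where
        # "the driver must then intervene and apply the brakes."
        if lead is None or lead.ground_speed_mps < 1.0:
            return set_speed_mps
        return min(set_speed_mps, lead.ground_speed_mps)

    print(speed_command(31.0, LeadVehicle(25.0)))  # following at 25 m/s
    print(speed_command(31.0, LeadVehicle(0.0)))   # stopped car: back to 31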

~~~
darkerside
This comparison just sold me on how morally wrong what Tesla is doing is:
intentionally misleading customers by marketing a feature called Autopilot
that is only a marginal improvement on what other cars already offer. What if
Volvo started calling their (clearly not independent) feature Autopilot and
saying it was the future of hands-free driving? Seems inexcusable.

~~~
slg
Which is also exactly what GM is doing with Super Cruise. Here is just one of
their commercials.

[https://www.youtube.com/watch?v=u__51kTl4j8](https://www.youtube.com/watch?v=u__51kTl4j8)

Despite this warning in the manual[1]:

>Super Cruise is not a crash avoidance system and will not steer or brake to
avoid a crash. Super Cruise does not steer to prevent a crash with stopped or
slow-moving vehicles. You must supervise the driving task and may need to
steer and brake to prevent a crash, especially in stop-and-go traffic or when
a vehicle suddenly enters your lane. Always pay attention when using Super
Cruise. Failure to do so could result in a crash involving serious injury or
death.

[1] -
[https://www.cadillac.com/content/dam/cadillac/na/us/english/...](https://www.cadillac.com/content/dam/cadillac/na/us/english/index/ownership/technology/supercruise/pdfs/2018-cad-
ct6-owners-manual.pdf)

~~~
TeMPOraL
Riffing off the parallel thread about Google AI and how "corporations are
controlled by humans" and can have moral values - no, corporations are
controlled primarily by market forces. When Tesla started branding lane
assist as Autopilot, it put market pressure on others to follow suit. Hence,
I'm absolutely not surprised about this ad and the associated warning in the
manual.

~~~
darkerside
TBF, corporations are controlled by humans and overwhelmingly influenced by
market forces, which are also controlled by (other) humans.

That's a nitpick. Your broader point about Tesla pressuring the market down an
unfortunate path is spot on.

------
MBCook
“[Driver’s] hands were not detected on the steering wheel for the final six
seconds prior to the crash. Tesla has said that Huang received warnings to put
his hands on the wheel, but according to the NTSB, these warnings came more
than 15 minutes before the crash.”

This kind of stuff is why I’ve lost all faith in Tesla’s public statements.
What they said here was, for all intents and purposes, a flat out lie.

Clearly something went wrong here, but they leapt to blaming everyone else
instead of working to find the flaw.

~~~
jakobegger
Also, "hands were not detected" does not mean that they really weren't on the
wheel. Maybe someone who drives a Tesla can comment on how reliable the hand
detection is.

~~~
martin_bech
Hand detection is pretty good. You only need to rest your hands on the wheel.
I have a Model S as my daily driver.

~~~
TheForumTroll
Lots of drivers have problems where they have to "shake" the wheel to make the
car understand that they do indeed have their hands on the wheel. The system
is pretty rubbish.

~~~
kwhitefoot
It seems to me that my 2015 S70D expects me to be holding the wheel firmly
enough to actually take action.

Mostly I just keep one hand lightly on the wheel when driving down a motorway
with autosteer on and just give it a gentle squeeze or wiggle once in a while.
If I use it on less good roads I keep both hands on the wheel in the proper
location for taking control.

It seems pretty good to me.

------
abalone
_> During the 18-minute 55-second segment, the vehicle provided two visual
alerts and one auditory alert for the driver to place his hands on the
steering wheel. These alerts were made more than 15 minutes prior to the
crash._

Whoah. So there were NO alerts for 15 minutes prior to the crash. Compare this
with Tesla's earlier statement:

 _> The driver had received several visual and one audible hands-on warning
earlier in the drive and the driver’s hands were not detected on the wheel for
six seconds prior to the collision._[1]

This gives a very different impression. They omitted the fact that there were
no warnings for 15 minutes. Frankly that appears to be an intentionally
misleading omission.

So basically the driver was distracted for 6 seconds while believing that the
car was auto-following the car in front of it.

[1] [https://www.tesla.com/blog/update-last-
week’s-accident](https://www.tesla.com/blog/update-last-week’s-accident)

~~~
IkmoIkmo
It's a blatant lie because they knew exactly what they were implying and it
wasn't true, i.e., it was a lie.

------
mymacbook
Reading that initial report is terrifying. I am so glad the NTSB set the
record straight that the driver had his hands on the wheel for the majority of
the final minute of travel. Really makes me feel like Tesla was out to blame
the driver from the get-go. To be clear, the driver is absolutely partially at
fault, but my goodness, autopilot sped up into the barrier in the final seconds
— totally unexpected when the car has automatic emergency braking.

Emergency braking feels not ready for prime time. I hope there are
improvements there. Don’t want to see autopilot disabled as a result of this,
would rather Tesla use this to double down and apply new learnings.

Just so sad to hear about this guy's death on his way to work - not the way I
want to go. :(

~~~
falcolas
> To be clear the driver is absolutely partially at fault

I'm... not so certain. Why? The autopilot had likely exhibited proper behavior
every time the vehicle had passed that particular section of road before,
and if the driver was paying full attention to the behavior of the vehicle, he
would only notice the problem around the 5 second mark.

Five seconds, if you have no reason to be concerned about the vehicle's
behavior, is not much time - especially if you consider that alert drivers are
recommended to give themselves a minimum of 4 seconds of reaction time (i.e.
follow a vehicle by at least 4 seconds).
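
Some rough arithmetic on that time budget (my numbers, purely illustrative):

    # Distance covered during the window in which the bad steering was
    # observable, at highway speed.
    MPH_TO_MPS = 0.44704

    speed_mps = 70 * MPH_TO_MPS   # ~31.3 m/s
    window_s = 5                  # when the wrong steering became apparent
    reaction_s = 1.5              # a commonly cited driver reaction time

    print(f"covered in window: {speed_mps * window_s:.0f} m")    # ~156 m
    print(f"left after reacting: {speed_mps * (window_s - reaction_s):.0f} m")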

~~~
ericpauley
My vehicle (a Honda Civic) has this exact same functionality and behavior
(lane keeping, ACC, emergency braking for cars). They make the limitations of
these systems very clear. I'd say that in the 5 months I've owned it, it's had
this exact behavior (veering off exit ramps) 10 times. A simple jerk of the
wheel puts it back on track; it's such a natural motion if you're paying even
the slightest bit of attention. That being said, Tesla fails to make their
drivers aware of the limitations of Autopilot, so I agree that this may not be
on the driver.

~~~
freerobby
Tesla reminds you to pay attention and keep your hands on the wheel every time
you engage Autopilot. It's one of very few legal disclaimers they show you all
the time on the screen, and they don't give you any way to turn it off.

~~~
jonhendry18
And we all know how assiduously people pay attention to messages that flash up
on screen.

~~~
freerobby
How about audible nags? How about flashing white lights on the display? How
about gradually slowing the car down until you give tactile feedback proving
you are in control? Tesla does a lot of things to coerce drivers to pay
attention. If you check out TMC, you'll see lots of people complaining about
how paternalistic and "naggy" the system is, even for those who use it
properly.

I am continually surprised by how little emphasis there is on personal
responsibility when this community discusses an L2 system such as Autopilot.
According to both the law and the operating manual, the driver is in control
at all times. Tesla warns you of this every time you turn it on. Yes, there
are enough bad drivers out there that Tesla is wise to implement habit-forming
nags; but drivers also need to take responsibility for how they use (and
abuse) these systems. Nobody would pass the buck to cruise control for a
driver who set it to 65 and then plowed into something in a moment of
distraction. All due respect to the victim here -- and I feel absolutely
terrible for him and his family -- but if you are paying attention and looking
at the road ahead of you, there is no situation where you accelerate for three
full seconds into a concrete barrier at 70MPH -- not with Autopilot, and not
without it.

------
ckastner
> _His hands were not detected on the steering wheel for the final six seconds
> prior to the crash._

> _Tesla has said that Huang received warnings to put his hands on the wheel,
> but according to the NTSB, these warnings came more than 15 minutes before
> the crash._

> _Tesla has emphasized that a damaged crash attenuator had contributed to the
> severity of the crash._

These may or may not have been factors contributing to the death of the
driver, and ultimately may or may not absolve Tesla from a legal liability.

However, the key point here is that without question, _the autopilot failed_.

It is understandable why Tesla is focusing on the liability issue. This is
something that _they can dispute_. The fact that the autopilot failed is
_indisputable_, and it is unsurprising that Tesla is trying to steer the
conversation away from that.

The discussion shouldn't be _either_ the driver is at fault _or_ Tesla screwed
up, but two separate discussions: whether the driver is at fault, _and_ how
Tesla screwed up.

~~~
ebikelaw
The deficiency of Tesla AP has been abundantly clear to anyone with eyes, for
a long time. Only fingers-in-ears Musk fans cannot see that.

~~~
d0lph
Or perhaps we fingers-in-ears Musk fans don't expect the AP to be perfect,
especially considering how new the field is.

Autopilot doesn't need to be perfect, just better than humans.

~~~
jumelles
So because the field is new, it's okay that a human being is dead?

~~~
Robotbeat
It's not just that the field is new, it's that 1) SO MANY people die
constantly in cars, and this is the start of trying to change that
permanently, but it cannot be perfect out of the gate, and 2) we should not
wait until it's perfect if waiting that long means more people die.

Ultimately, we cannot rely on mere anecdotes. We need statistics that show it
is worse. If it's as bad as the media coverage claims it is, it should be easy
to demonstrate such statistics.

------
nwrk
The report itself - worth reading

[https://www.ntsb.gov/investigations/AccidentReports/Reports/...](https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18FH011-preliminary.pdf)

~~~
gwern
No surprise there about it steering into the barrier or Tesla not-quite-lying
about him getting warnings, but I'm surprised that he apparently wasn't dead
on impact but survived all the way to the hospital?

So even a three-car pileup with the Tesla steering straight into a barrier at
71MPH & accelerating, with the car catching on fire & being reduced to a husk,
still isn't enough to kill a driver immediately. In a way, you could interpret
that as demonstrating how safe Teslas are (at least, without Autopilot).

~~~
codeonfire
A doctor has to declare a person dead, paramedics can't do it. In all cases it
makes sense to transport the body to the hospital where the person is
pronounced dead by the doctor. Otherwise the family may argue in court the
paramedics did not try enough to save the person.

~~~
gwern
Usually in reports, if someone dies at the scene but is only officially
declared dead at the hospital, they'll write a phrase like 'was declared dead
at the hospital'. This report says instead that he was transported to the
hospital 'where he died from his injuries'. Double-checking, media articles at
the time uniformly describe him as dying at the hospital, and at least one
says that he died in the afternoon (
[http://sanfrancisco.cbslocal.com/2018/03/27/tesla-crash-
inve...](http://sanfrancisco.cbslocal.com/2018/03/27/tesla-crash-
investigation-highway-101-wei-huang/) ) while the accident was at 9:27AM, so
either traffic in the Bay Area is even worse than I remember, they like to
spend hours doing CPR on a corpse, or he did in fact survive the crash and
died at the hospital in a literal and not legalistic sense.

------
jackson1way
Despite the autopilot failure, I find the battery failure quite remarkable
too:

> The car was towed to an impound lot, but the vehicle's batteries weren't
> finished burning. A few hours after the crash, "the Tesla battery emanated
> smoke and audible venting." Five days later, the smoldering battery
> reignited, requiring another visit from the fire department.

Where is your LiPo god now? Batteries have more energy density than 20 years
ago, ok. But they are also much more dangerous. Now imagine the same situation
with Tesla's huge semi batteries. They'll have to bury them 6ft under, like
Chernobyl's smoldering fuel rods. Minus the radiation.

~~~
curiousgal
Battery failure?

You're expecting their engineers to design a battery that remains safe after a
70mph crash into a barrier?

~~~
jumelles
Yes? Welcome to the automotive industry. Lives are at stake.

~~~
thatswrong0
Is there an equivalent safety standard for ICE vehicles? I don't think a gas
tank would do particularly well in the exact same circumstance either.

~~~
Alupis
ICE vehicles don't typically spontaneously combust after a crash...

~~~
cfadvan
If the combustion occurs as the result of a crash, can that really be said to
be “spontaneous?”

------
netsharc
Dear Elon, want to start a website that rates how fake-newsy government-
produced accident reports are? /S

"FDA said my farm is producing salmonella-infected chicken. Downvote their
report on this URL!"

~~~
unethical_ban
Without commenting on the rest of the issues with Musk/Tesla, his tweet about
a news verification agency was a joke. Verifiably. It was nicknamed Pravda,
and a corporation related to his tweet was registered to Musk on Oct 17 2017,
the anniversary of the October Revolution.

~~~
aerovistae
A joke? It's verifiable because he said he might name it Pravda, and the fact
that he actually registered it is further evidence that it's a joke?

I'm not saying he's really going to do it, but I question your certainty here
with a total lack of actual evidence.

~~~
unethical_ban
Verifiably is a strong word on this site. Fair enough. I'm still right though.

Options:

* In 2018, a foreign-born capitalist billionaire with sometimes negative press coverage suggests creating a Ministry of Truth named after a Russian propaganda newspaper to call out journalists who make false claims in their reporting. He registers a corporation for that purpose on the 100th anniversary of a bloody revolution bringing about the Soviet Union.

* A man known for using puns and showmanship in his communications trolled a bunch of people who latch onto his every word, especially when they don't like the man.

And to ComradeTaco, this isn't the presidency, and he didn't spout hate
speech. It was a joke.

~~~
unethical_ban
Never been brigaded on HN before. What a feeling.

------
gburt
I am generally against what is often called "excessive regulation," but the
regulator -- perhaps the FTC -- should aggressively prohibit the misleading
marketing message here.

The entire problem stems from calling this lane-keeping mechanism "Autopilot."
Tesla should be prohibited from using that language until they have achieved
provably safer level 3+ self-driving.

The problem is exacerbated by Musk's aggressive marketing-driven language.
Saying things like _we're two years out from full self-driving_ (first said in
2015) and _the driver was warned to put his hands on the steering wheel_ (15
minutes prior to the crash) makes Musk look like he is plainly the bad guy,
attempting to mislead.

"Provably safe" probably means some sort of acceptance testing -- a blend of
NTSB-operated obstacle course (with regression tests and the like) and real
world exposure.

------
dcposch
Tesla Autopilot makes it to HN pretty much every week now, almost never in a
good way.

Every time, we have a big discussion about autopilot safety, AI ethics, etc.

What about _lack of focus_?

Tesla has already reinvented the car in a big way--all-electric, long range,
fast charge, with a huge network of "superchargers". It's taken EV from a
niche environmentalist pursuit to something widely seen as the future of
automotive.

Why are they trying to tackle self-driving cars at the same time?

This feels like a classic mistake and case of scope creep.

Becoming the Toyota of electric is a vast engineering challenge. Level 5
autonomous driving is an equally vast engineering challenge. Both represent
once-in-a-generation technological leaps. Trying to tackle both at the same
time feels like hubris.

If they just made great human-piloted electric cars and focused on cost,
production efficiency, volume, and quality, I think they'd be in a better
place as a business. Autopilot seems like an expensive distraction.

~~~
jchb
The interior design of the Model 3 is very simple - there are few physical
controls - with the assumption that the car usually does not need to be driven
by a human. As Elon presented it:
[https://youtu.be/GZm8ckvsu9I?t=2m2s](https://youtu.be/GZm8ckvsu9I?t=2m2s).
Pure speculation: perhaps the simplified interior is necessary to bring the
costs down for mass-production, and here is where the "synergy" with the
autopilot comes in.

------
menacingly
Tesla has to realize these "shame the dead dude" posts are PR nightmares,
right?

They are reason alone for me to never consider one, that a private moment for
my family might end up a pawn in some "convince the public we're safe using
any weasel stretch of the facts we can" effort.

If this is disruption, I'll wait for the old guard to catch up, lest I be
disrupted into a concrete barrier and my grieving widow fed misleading facts
about how it happened.

~~~
justwalt
>shame the deadman posts

If that were actually the case, then what are they supposed to say?

>lest I be disrupted into a barrier

This made me audibly chuckle.

------
RcouF1uZ4gsC
After this incident and Tesla's response to it, I hope Tesla is sued and/or
fined into bankruptcy. Tesla is normalizing the release of not-fully-tested
software to do safety-critical things, and literally killing people as a
result. A message needs to be sent that this is unacceptable. In addition,
their first response was a PR-driven one that sought to blame the driver, and
violated NTSB procedures. Safety is probably the most important thing to get
right with this type of software, and Tesla is nonchalantly sacrificing safety
for marketing.

~~~
dboreham
And a helpful lesson for all the "excessive government regulation" people.

~~~
JackCh
Whether regulation is insufficient or excessive should be determined on a
case-by-case basis. Anybody making blanket statements about all regulation as
a whole is an ideologue who probably should not be taken seriously.

------
kevinchen
Tesla Autopilot should be recalled via the next OTA update.

The “Autopilot” branding implies that users need not pay attention, when in
reality, the system needs interventions at infrequent but hard-to-predict
times. If an engineer at Apple can’t figure it out, then the average person
has no chance. Their software sets users up to fail. (Where failure means
permanent disability or death.)

Inevitably, Musk fans will claim that recalling Autopilot actually makes Tesla
drivers less safe. But here's the problem with Musk’s framing of Autopilot.

Sure, maybe it fails less often than humans. (We don't know whether we can
trust his numbers.) But we do know that when it fails, it fails in different
ways — Autopilot crashes are noteworthy because they happen in situations
where human drivers would have no problem. That’s what people can’t get over.
And it is why Autopilot is such a dangerous feature.

An automaker with more humility would’ve disabled this feature years ago.
(Even Uber suspended testing after the Arizona crash!) With Musk, my fear is
that more people will have to die before there is enough pressure from
regulators / the public to pull the plug.

~~~
Traubenfuchs
> But we do know that when it fails, it fails in different ways

Oh, the hubris of man. I am not even a Tesla fan, but still. "I'd rather have
2 of 100 people drive into a wall than 1 of 100 automatic cars drive into a
wall" - that's what you are essentially saying, no?

------
MBCook
So people are asking why the barrier wasn’t detected, and that’s fair.

Here’s another question: why wasn’t the ‘gore’ zone detected?

Why did the car think it was safe to drive over an area with striped white
lines covering the pavement?

It saw the white line on the _side_ of that area and decided that was a lane
marker but ignored the striped area you’re not supposed to drive on?

If you’re reading the lines on the pavement you have to try to look at all of
them.

I don’t know if other cars, like those with Mobileye systems, do that, but
given Tesla’s safety claims they’d better be trying.

~~~
stetrain
This gore zone did not have a striped area, just solid lines on each side.

[http://www.dailymail.co.uk/sciencetech/article-5582461/Tesla...](http://www.dailymail.co.uk/sciencetech/article-5582461/Tesla-
autopilot-nearly-crashes-exact-spot-Apple-engineer.html)

Edit: Google street view of the location:
[https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,2...](https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,205.08h,52.79t/data=!3m6!1e1!3m4!1sjXdU2LMnTz4QQshWHKU0Tg!2e0!7i16384!8i8192)

~~~
MBCook
Ah. I wonder if they do recognize the stripes then.

BTW does anyone know why it’s called a ‘gore’ zone? I can see it being a
(brutal) nickname but I’m hoping there is some better reason.

~~~
roywiggins
"A gore (British English: nose),[1] refers to a triangular piece of land.
Etymologically it is derived from gār, meaning spear."

[https://en.wikipedia.org/wiki/Gore_(road)](https://en.wikipedia.org/wiki/Gore_\(road\))

~~~
MBCook
Thanks.

------
mcguire
Here's the most interesting quote to me:

" _The crash created a big battery fire that destroyed the front of Huang 's
vehicle. "The Mountain View Fire Department applied approximately 200 gallons
of water and foam" over a 10-minute period to put out the fire, the NTSB
reported._

" _The car was towed to an impound lot, but the vehicle 's batteries weren't
finished burning. A few hours after the crash, "the Tesla battery emanated
smoke and audible venting." Five days later, the smoldering battery reignited,
requiring another visit from the fire department._"

Shouldn't it be possible to make the battery safe?

~~~
Robotbeat
It's a lithium-ion battery, so it needs to be fully discharged.

------
userbinator
This just reconfirms my belief about Tesla's "autopilot" --- most of the time
it behaves like an OK driver, but occasionally makes a fatal mistake if you
don't pay attention and correct it. In other words, you have to be _more_
attentive to drive safely with it than without, since a normal car (with
suspension and tires in good condition, on a flat road surface) will not
decide to change direction unless explicitly directed to --- it will continue
in a straight line even if you take your hands off the wheel.

Given that, the value of autopilot seems dubious...

~~~
Robotbeat
You can't make that kind of conclusion from a single, extremely highly
publicized occurrence, even a fatal one. Given how often human drivers kill
people, you need statistics to show that autopilot is worse.

~~~
jonhendry18
I'm going to need statistics from an independent source, not from Tesla, that
autopilot is safer.

I'm not inclined to trust Tesla's statistics given how cavalier they have been
about putting their shoddy autopilot product on the road and marketing it as
being better than it actually is.

~~~
Robotbeat
Of course. How about the National Highway Traffic Safety Administration?

[https://techcrunch.com/2017/01/19/nhtsas-full-final-
investig...](https://techcrunch.com/2017/01/19/nhtsas-full-final-
investigation-into-teslas-autopilot-shows-40-crash-rate-reduction/)

A 40% reduction still means plenty of anecdotes for media fodder.

There may come a time when autonomous vehicles make driving so safe that the
only data we can and should rely on is the rare anecdote like this and the
subsequent NTSB report, like airlines. But while it’s always good to take
seriously the lessons that NTSB’s targeted investigations provide, a lot more
people will die tragic but less-publicized deaths if we stop any company from
deploying an imperfect but still overall beneficial technology.

------
walrus01
This guy tested it at the EXACT same location with Tesla Autopilot. The Tesla
starts steering directly into the barrier before he corrects it.

[https://www.youtube.com/watch?v=VVJSjeHDvfY](https://www.youtube.com/watch?v=VVJSjeHDvfY)

~~~
ninkendo
Hmm, totally off-topic but that line needs to be re-painted. The _actual_ lane
marker is almost entirely faded away, while the line that follows the exit
lane is bright white. If the glare is bad enough it's possible for a human to
end up thinking the more solid line is the lane marker and follow it left.

(Not to excuse tesla here though, if that wasn't clear.)

------
LinuxBender
Disclaimer: Taboo comment ahead.

Subtle bugs in self driving cars would be a simple way to assassinate people
with low cost overhead. One OTA update to a target and you could probably even
get video footage of the job being completed, sent to the client all in one
API call.

Surely by now someone must have completed a cost analysis of traditional
contractors vs. having a plant at a car manufacturer.

Am I the only one thinking about this?

~~~
detaro
Probably doesn't need self-driving cars: many higher-end cars nowadays have
some kind of wireless interface and software control over speed and steering.

~~~
LinuxBender
Good point, I should clarify and add in those vehicles, such as the infamous
Jeep Grand Cherokee that was hacked on a live highway.

------
jakelarkin
Self-driving systems can't reason well about untrained scenarios or the
intent of other humans on the road. I think people have grossly underestimated
how driving in an uncontrolled environment is really a general AI problem,
which we're not even close to solving.

~~~
CydeWeys
Disagree. I think that the likes of Waymo and Cruise understand exactly how
hard of a problem this is. It's only companies like Uber and Tesla that are
underestimating it, and doing stupid things like disabling automatic braking
(Uber) or not using LIDAR (Tesla).

~~~
jakelarkin
The best software engineering company in the world has spent over a decade
and hundreds of millions of dollars on this problem, and they are _still_
stuck in a private beta in the easiest road environment they could find. What
does that tell you about the feasibility of the solution?

There's no evidence that Cruise is in the same class as Waymo.

~~~
CydeWeys
Waymo is about to start ferrying around paying customers later this _year_.
Nobody said the problem isn't hard, but it definitely does not seem
infeasible.

------
cmurf
_Involuntary manslaughter usually refers to an unintentional killing that
results from recklessness or criminal negligence, or from an unlawful act that
is a misdemeanor or low-level felony (such as a DUI)._ (Wikipedia)

It's rather uncontroversial that this kind of accident falls under civil law,
because there is some degree of liability involved in marketing a product as
safer than a human driver when it then fails in an instance where a human
driver flat out would not fail: apples to apples. A human driver who is paying
attention, which the autonomous system is always doing, would never make this
mistake. It could only be intentional.

But more controversial, and therefore more interesting to me, is to what degree
the system is acting criminally, even if it's unintended, let alone if it is
intended. Now imagine the insurance implications of such a finding of
unintended killing. And even worse, imagine the total lack of even trying to
make this argument.

I think a prosecutor must criminally prosecute Tesla. If not for this
incident, then in the near future. It's an area of law that needs to be
aggressively pursued, and voters need to be extremely mindful of treating AI
of any kind with kid gloves compared to how we've treated humans in the same
circumstances.

------
crazygringo
Wow. I will say that, when you look straight-on in Street View, it does look
disturbingly like a valid lane to drive in -- same width, same markings at one
point [1]:

[https://www.google.com/maps/@37.4106804,-122.075111,3a,75y,1...](https://www.google.com/maps/@37.4106804,-122.075111,3a,75y,117.92h,81.35t/data=!3m6!1e1!3m4!1snAoBJlvBLm0NQWYBWKxWGw!2e0!7i16384!8i8192)

If it were night, with a car in front blocking the view of the concrete lane
divider, it doesn't seem too difficult for a human to change lanes at the last
second and collide as well. (And indeed, there was a collision the previous
week.)

There's no excuse for not having an emergency collision detection system...
but it also reminds me how dangerous driving can be period, and how we need to
hold autonomous cars to a higher standard.

[1] Thanks to comments by Animats and raldi for the location from other angles

------
beenBoutIT
Anyone here actually think Elon uses autopilot?

~~~
jungturk
Not sure about Elon, but I suspect (hope?) most of us Tesla drivers have
familiarized ourselves with the limitations of the system.

It's got a couple of scenarios in my experience where it shines - chiefly long
drives on clean, clear interstates and stop-and-go traffic - but it's got
suicidal tendencies in others.

It's not unlike being driven by a tipsy friend or a teenager - there are
enough indications that it can't be trusted.

Unlike those situations, it's easy enough to take control back once you
realize that.

Marketing it above its capabilities surely isn't helping, though.

~~~
Analemma_
To be blunt, I'm kind of stunned that you are defending a system that you
yourself describe as "like being driven by a tipsy friend". What?! Sure, I
_could_ sit in the front passenger seat while being driven by my tipsy friend
and grab the wheel from him if he was about to steer into a wall, but that is
clearly much less safe than just driving myself. And I definitely wouldn't pay
for the privilege nor endorse it as a good way to travel!

This is what Elon's personality cult, and the psychological need to defend a
large purchase, does to people. You are making excuses for a clearly defective
product that wouldn't be acceptable if this was anyone other than Tesla.

~~~
jungturk
It's curious to see you appeal to ad hominems, outrage, and assumptions of bad
intent, but I'll give it a go anyhow.

Perhaps I wasn't clear enough about the two use cases I cited, but the car
performs excellently (and entirely soberly) in those situations.

It's a tool that has significant utility and significant footguns. I'm not
advocating whether you should use it or not - I'm sharing my experience with
where the system delivers and hoping users are looking past the hyperbolic
marketing.

It's abundantly clear (even to us in the Elon personality cult?) that Tesla
and Elon have done themselves a disservice with how they handled this tragedy.

~~~
freerobby
Fellow Tesla owner here. Agree with everything you wrote.

Autopilot is a wonderful system when used properly. It's lousy when it's
abused. Applied responsibly, it is very obviously a safety improvement to
anybody who uses it. Used as a substitute for human attention, it's very easy
to see how it can turn tragic (as we are discussing/witnessing).

The way Elon talks about these situations frustrates me to no end. The data
are on his side, yet he continually resorts to half-truths because they are
simple to present. Pretty obnoxious for a dude who is on a tear about media
dishonesty. But: that is a knock against Elon, not against Autopilot. At the
end of the day, I believe Autopilot does much more good than bad, and results
in more people staying alive than would otherwise. I'm glad we have NTSB
and NHTSA to keep Tesla honest, and I'm also glad that they are more thorough
and tempered than a lot of folks here on HN would like them to be.

------
27182818284

> The NTSB report confirms that. The crash attenuator—an accordion-like
> barrier that's supposed to cushion a vehicle when it crashes into the lane
> separator—had been damaged the previous week when a Toyota Prius crashed at
> the same location. The resulting damage made the attenuator ineffective and
> likely contributed to Huang's death.

Kinda sounds like maybe that part of the road isn't well designed or marked,
either.

~~~
twblalock
Any kind of self-driving system that is going to be usable and safe in the
real world needs to be able to deal with the kind of roads we have in the real
world.

Maybe one day, when the majority of cars are self-driving, road design will
change. To get there, self-driving cars will need to prove themselves on
today's roads so people will buy them.

~~~
stephengillie
Can all driving situations on all roadways today be computed in polynomial
time? Or are some NP? Some intersections are known to be complex for humans to
navigate.

------
tqi
> During the 18-minute 55-second segment, the vehicle provided two visual
> alerts and one auditory alert for the driver to place his hands on the
> steering wheel. These alerts were made more than 15 minutes prior to the
> crash.

If your hand is always supposed to be on the wheel, why does the car not
constantly alert you when it detects that your hands are off (similar to how
cars beep at you if your seatbelt is unbuckled while driving)?
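
What this would look like is roughly the seatbelt-chime policy below (a
hypothetical sketch; I'm not claiming any shipping system works this way):

    # Hypothetical continuous nag loop: alert whenever hands-off persists,
    # instead of warning once and going quiet for 15 minutes.
    import time

    GRACE_S = 5.0       # tolerated hands-off time before chiming
    ESCALATE_S = 15.0   # hands-off time before degrading gracefully

    def chime() -> None:
        print("chime: hands on wheel")   # stand-in for an audible alert

    def slow_vehicle() -> None:
        print("reducing speed")          # stand-in for graceful degradation

    def monitor(hands_detected) -> None:
        """Poll the wheel sensor and keep nagging until hands return."""
        hands_off_since = None
        while True:
            now = time.monotonic()
            if hands_detected():
                hands_off_since = None
            else:
                hands_off_since = hands_off_since or now
                if now - hands_off_since > GRACE_S:
                    chime()
                if now - hands_off_since > ESCALATE_S:
                    slow_vehicle()
            time.sleep(0.5)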

------
deaps
I think one of my main concerns with "autopilot" is that for _a lot_ of
drivers, it will absolutely make the roads safer for them and those that use
the roads around them. Conversely, for some safer and more-alert drivers, it
has the potential to make driving less safe.

------
jhanschoo
Here's a relevant video that shows Autopilot directing a Tesla into a lane
split.

[https://www.youtube.com/watch?v=6QCF8tVqM3I](https://www.youtube.com/watch?v=6QCF8tVqM3I)

------
dre85
I wonder what percentage of owners actually have the courage to turn on
autopilot? How many people here would/do?

------
ericb
If I were building this, I would upload millions of hours of data from actual
Tesla drivers, and I would have autopilot releases step through that data and
flag the variances from the behavior of the actual drivers. I'd run this in a
massively parallel fashion.

For every release, I'd expect the score to improve. With a system like this, I
would think you'd detect the "drive towards traffic barrier" behavior.
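
As a sketch (hypothetical pipeline; the names and data layout are made up):

    # Replay scorer: run a candidate autopilot build over logged human
    # drives and flag frames where its plan diverges from the human's.
    DIVERGENCE_RAD = 0.1  # illustrative steering-angle threshold

    def score_release(plan_fn, logged_drives):
        """Fraction of frames where the planned steering deviates sharply."""
        flagged = total = 0
        for drive in logged_drives:        # each drive is a list of frames
            for frame in drive:
                planned = plan_fn(frame["sensors"])
                if abs(planned - frame["human_steering"]) > DIVERGENCE_RAD:
                    flagged += 1           # e.g. steering toward a gore point
                total += 1
        return flagged / total if total else 0.0

Gate each release on this score not regressing; a new "centering" behavior
that steers toward barriers would show up as a spike before shipping.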

------
stretchwithme
Job 1: Don't run into things.

------
newnewpdro
Tesla Autocrash, how much does this option cost again?

------
myth_drannon
I was listening to a Software Engineering Daily podcast with Lex Fridman
about self-driving deep learning. Very interesting discussion of the ethics of
self-driving cars. What he was saying is that we need to accept the fact that
people are going to die in incidents involving autonomous vehicles. In order
for these systems to learn how to drive, people will have to die. It's more of
a societal change that is needed: 30,000 people die on the roads in the US
every year, and in order to decrease that number we need self-driving cars,
even at a price that society as of now can't accept.

~~~
wwwdonohue
> we need to accept the fact that people are going to die following incidents
> with autonomous vehicles involved.

Car accidents will kill people at least as long as there are any human-piloted
vehicles on the road and probably long after (pedestrians, etc.)

What doesn't need to happen is companies in the self-driving space recklessly
exaggerating their cars' ability to self-drive.

------
manicdee
Short version: due to poor lane markings, Autopilot made the same mistake as
many humans in the same situation and collided with the divider. Due to the
frequency of this kind of accident, the crash attenuator had been collapsed
and not reset, meaning the Tesla hit the concrete divider at full speed, as has
happened in the past with humans in control.

But please continue to blame Autopilot for not being smarter than the human
operating the vehicle.

