
Tesla driver reproduces fatal autopilot accident [video] - rgbrenner
https://www.youtube.com/watch?v=6QCF8tVqM3I
======
marcell
I really dislike the marketing around "autopilot." The common defense of
Tesla's autopilot is that there are disclaimers, and the driver should always
be at 100% attention.

Well, look at this, copied from Tesla's website:

> _Full Self-Driving Hardware on All Cars_

> All Tesla vehicles produced in our factory, including Model 3, have the
> hardware needed for full self-driving capability at a safety level
> substantially greater than that of a human driver. [1]

An average person is going to read this and think, "This car drives itself,
and it's safer than me!" The rest of the page describes space-age features
like switching lanes to get to an exit faster and self parking:

> All you will need to do is get in and tell your car where to go ... Your
> Tesla will figure out the optimal route, navigate urban streets (even
> without lane markings), manage complex intersections with traffic lights,
> stop signs and roundabouts, and handle densely packed freeways with cars
> moving at high speed ... and park itself.

They're selling a car that full-on drives itself. This is not being sold as
adaptive cruise control.

FWIW, they included a weasel phrase by saying "hardware" is there, as opposed
to software, but come on! An average person is not going to understand that
distinction. I think they need to dramatically scale back the hype around this
feature until it actually delivers what it promises.

Edit: I'm also confused about why they need to hype up autopilot so much. They
are already selling more cars than they can make, and a sexy electric car
appeals to lots of people. I think it would be enough to stick with that.

[1] [https://www.tesla.com/autopilot](https://www.tesla.com/autopilot)

~~~
phobius
Could be a well-meaning tech culture screw-up; most tech people know that
airplane autopilots weren't end-to-end automated for the majority of their
existence.

~~~
Barrin92
>Most tech people know

Most tech people also know how "better than human" sounds to the public and
the average driver. This isn't well-meaning tech culture; it's a PR statement
that is misleading and formulated the way it is precisely because it is a
sales pitch.

It's irresponsible and unethical. It should be called a 'driver assistant' and
have a warning label in font size 100 that tells drivers to not treat it as
autonomous and infallible.

~~~
rohit2412
I wouldn't even ask for the size 100 font, just for Elon Musk and Tesla to
stop hyping their amazing autonomous capabilities (which are all nonexistent).

------
jijojv
Really sad to see that Autopilot is a joke and nothing more than the adaptive
cruise control found in common cars, despite Musk claiming otherwise a year
ago [1].

Anyone who owns Autopilot knows the warning at :20 is NOT a crash warning at
all; it's just the usual warning, every minute, to jiggle the wheel when you
don't hold it firmly enough for the car to detect your hands.

[1] [https://twitter.com/elonmusk/status/823632597284691969](https://twitter.com/elonmusk/status/823632597284691969):
"Yes, safety should improve significantly due to autonomy features, even if
regs disallow no driver present" (23 Jan 2017)

~~~
Tepix
The bad state of the road plays a huge role. There should be a "report unsafe
road conditions" function in these cars to get US roads up to Belgian
standards. Doesn't Trump want to spend money on infrastructure anyway?

~~~
cm2187
Whatever the state of the road, a self-driving car shouldn't run head-on into
a large static obstacle.

------
jstanley
This (video) is another instance where the lanes split apart but the part in
the middle has absolutely no markings to indicate that you shouldn't drive
there.

This (photograph) is what it should look like:
[http://s0.geograph.org.uk/geophotos/02/24/67/2246762_3a6ea81...](http://s0.geograph.org.uk/geophotos/02/24/67/2246762_3a6ea811.jpg)

Is this (bad line painting) common in the US?

Edit: on a second viewing, I think it actually does have some chevrons, but
they're very faint.

~~~
ars
There is a solid white line. It's forbidden to cross a solid white line on the
freeway.

What does a solid white line mean to you (in your country)?

The linked video is a different story, but the image you linked makes it quite
clear not to drive there.

~~~
jstanley
I didn't see the car cross a solid white line in the video that this thread is
about.

The image I linked is showing what it _should_ look like. I agree the image I
linked is perfectly clear.

~~~
ars
Ah, I misunderstood you. I thought you linked to another example of a failure
situation because you wrote "this is another instance".

~~~
jstanley
Just realised I used the word "this" 3 different times to mean 3 different
things :). I've disambiguated this now.

------
Shank
Part of me wonders whether we're going to see mutual improvement of
infrastructure along with self-driving cars in the future. This seems like
something you could obviously fix by adding some kind of passive or active
flag that, to a self-driving car, says "woah, you're /really/ not supposed to
be here."

I know we're supposed to be focusing on building systems that are as capable
as human drivers. However, if we approached the problem by changing the roads,
with some kind of guide beacons, it would probably be trivial to detect them
and improve trustworthiness a lot more.

We already have lane markers and chevrons and all sorts of things for humans
to interpret and react to. What if self-driving systems got guides that were
optimized for them? Instead of really luminescent paint, they would be
radio-noisy or have some other electronic fingerprint that's just as "loud"
as visual and audible markers are for humans.

At some point, if we want self-driving cars to be the real future, we should
start working the problem from both ends. When properly implemented, self-
driving technology is safer than normal humans -- and if we can accelerate
that by adding stupid-easy electronic markers to some roads, then why not?
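
To make the idea concrete, here is a minimal sketch of what a hypothetical
"keep out" beacon message might look like to the car's software. No such
standard exists; every name and field below is invented purely to illustrate
the idea:

    from dataclasses import dataclass

    # Hypothetical "road beacon" broadcast. No such standard exists today;
    # this is a sketch of the concept, nothing more.
    @dataclass
    class RoadBeacon:
        beacon_id: str
        lat: float
        lon: float
        kind: str        # e.g. "keep_out", "lane_closed"
        radius_m: float  # zone around the beacon the warning applies to

    def beacon_overrides_lane_following(beacon: RoadBeacon,
                                        dist_to_beacon_m: float) -> bool:
        """A 'keep out' beacon inside its radius should trump lane markings."""
        return beacon.kind == "keep_out" and dist_to_beacon_m <= beacon.radius_m

    # Example: a beacon mounted on a gore-point crash attenuator.
    barrier = RoadBeacon("US101-gore-42", 37.41, -122.08, "keep_out", 50.0)
    print(beacon_overrides_lane_following(barrier, 30.0))  # True: brake/steer away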

~~~
simion314
We should also not allow software that drives into a static object: the car
should be able to detect a solid object, detect its speed relative to the car
and to the ground, detect a possible collision, and take action.
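
As a rough sketch of the kind of check being described (a plain
time-to-collision test on a detected object, with every threshold invented
for illustration):

    # Minimal time-to-collision (TTC) check; purely illustrative.
    # obstacle_distance_m: distance to the detected solid object
    # closing_speed_mps:   ego speed minus object speed (positive = approaching)
    def should_brake(obstacle_distance_m: float,
                     closing_speed_mps: float,
                     ttc_threshold_s: float = 3.0) -> bool:
        if closing_speed_mps <= 0:  # not closing in on the object
            return False
        ttc = obstacle_distance_m / closing_speed_mps
        return ttc < ttc_threshold_s

    # A stationary barrier 60 m ahead at 30 m/s (~67 mph) gives a TTC of 2 s.
    print(should_brake(60.0, 30.0))  # True: start emergency braking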

You can put electronics on the roads, but if cars come to depend on them, they
will fail wherever the electronics are missing.

~~~
PuffinBlue
Not sure why this point isn't being more widely discussed, especially as it is
a known problem[0] of almost all 'auto-pilot' systems based on radar.

To me it would seem to be self-driving 101: don't hit a stationary object.

If the challenge is so great that current technology can't overcome the
difficulties of detecting stationary objects in the path of the car (not just
those stationary objects to ignore like overhead gantries) then it's time to
change the language around 'autonomous' driving to ensure drivers understand
the limitations. Obviously this isn't happening well enough right now as
people are dying because of it.

From a personal perspective, until Lidar becomes commonplace I think I would
eschew the 'autonomous' modes offered in the current generation of cars.

[0] [https://www.wired.com/story/tesla-autopilot-why-crash-radar/](https://www.wired.com/story/tesla-autopilot-why-crash-radar/)

------
gknoy
That kind of lane marking (or lack thereof) seems very unsafe. I realize that
often this is due to construction etc., but the immediately-to-the-left lane's
marker was Very Clear and didn't have a visible split in it. I had to watch
it a second time to make sure that I didn't miss the indications that the lane
was splitting away (other than the highway signs). This kind of thing (lack of
clear markings) can be really dangerous when driving at night in the rain (the
road is more reflective than normal), or in an unfamiliar area.

The autopilot could be better, but the road engineers should have made sure
the lines were painted correctly.

~~~
trimbo
> the road engineers should have made sure the lines were painted correctly

If this is the intersection, they were painted correctly:

[https://goo.gl/maps/QuuRiixk6Y42](https://goo.gl/maps/QuuRiixk6Y42)

But in this video, the left solid white line was more visible than the right
one. For whatever reason -- a trick of light, snowplows tore it up this
winter, who knows.

That kind of thing is going to happen and autonomous cars need to deal with it
in order to be _actually autonomous_.

~~~
ilyagr
I believe it's this intersection, judging by the signs at the beginning of the
video. The paint is not great.

[https://goo.gl/maps/V9qnLYTL2tn](https://goo.gl/maps/V9qnLYTL2tn)

------
sdrothrock
One thing I always wonder about is the Tesla HUD -- my gut feeling is that if
instead of rendering three generic lanes, it tried to render the
lanes/obstacles it saw, the driver might be more attentive while in Autopilot
mode.

1. The interface changes, and changing things are more interesting to look at.

2. Seeing something like two lanes when you're on a five-lane highway should
make a driver do a double-take and perhaps pay more attention or disengage
Autopilot.

3. Ditto for obstacles.

~~~
alt_f4
Or, you know, instead of looking at the HUD, you could look at the road? That
interface changes naturally.

------
tiredwired
This video is from a highway in Illinois (41.7779557,-87.6309277), 2000+ miles
away from the 3/23/2018 crash.

~~~
Johnny555
Does it matter? Do cars act differently depending on which state they are in?
Seems like a similar situation to the crash in California.

~~~
rocky1138
It does matter when the article title claims that it's a reproduction of the
accident. If it's not the same location, it's not really a reproduction, even
if the cause/effect is the same.

~~~
evfanknitram
I don't think the word "reproduction" is always used as strictly as your
definition. In software and hardware, the word is used even when the
geographical location and hardware are different. I don't see why using it for
car issues would be any different.

------
cornholio
While Tesla screwed up the marketing on this product and failed to even ensure
the driver is looking at the road at all times, I believe the black-box
approach (recover crash data from every incident and work it back into the
product) is exceptionally powerful, as demonstrated in the airplane industry.

It used to be that whenever an accident like this happened, people shrugged
their shoulders and blamed the victim; very damaging design flaws were
corrected only statistically, after many people wound up dead. Now we have a
corporation to blame, we internalize that human drivers cannot be trusted and
technology must take over, and we learn from each and every accident.

The safety advances this practice will bring cannot be overstated, and putting
pressure on Tesla and other manufacturers is the best way to get them.

------
greendesk
Wow, I had to watch the road a second time to see the markings of the split.
At the beginning of the video, I was expecting that there would be two lanes
on the left and two lanes on the right.

------
modeless
Tesla this month pushed out an update which makes autopilot disengage a lot
less than it used to. People have been making videos all month showing much
improved performance [1], but this overconfident behavior may be a side effect
of the same change.

[1] [https://electrek.co/2018/03/08/tesla-autopilot-update-on-off-ramp-curve-of-death/](https://electrek.co/2018/03/08/tesla-autopilot-update-on-off-ramp-curve-of-death/)

------
mephitix
Autopilot marketing is bad and Elon deserves a lot of the blame, but do we
really expect to be able to trust these systems enough to take our hands off
the wheel and our eyes off the road?

IMO we are still in the very early stages of this work in the industry.
Personally I think bugs and flaws in systems like this are normal at this
stage. I would expect the same with GM Super Cruise and I wouldn't trust that
either to let go of the steering wheel.

~~~
colechristensen
>I think bugs and flaws in systems like this are normal at this stage

I don't think you should be able to license a vehicle to drive on public roads
with such "bugs and flaws".

I don't have a choice; I have to interact with public roads. My local, state,
and national governments owe it to me to keep experimentation off of the
public roads that modern life is impossible without.

The problem with robot cars is they fail in very alien ways. They will be made
illegal after a small number of horrific incidents where a car will kill
people in ways a licensed human never would.

I think the usage of the word "intelligence" in "Artificial Intelligence"
needs to stop. It was cute in the 90s, but today people are starting to
actually believe it, when it's really a shallow approximation that is
vulnerable to profoundly unintelligent failures, and it will continue to be
until there starts to be a serious debate on whether a synthetic system is
actually sentient.

~~~
mephitix
It's a good point about whether they should be on public roads. At the same
time, it's crucial to their success that they are out there experiencing
outlier situations.

The hope is that by experiencing an outlier and making a mistake, the
algorithms will be able to adapt to ensure that mistake never happens again.
Humans today don't have this kind of networked, massively parallel mind to
avoid the mistakes of others. Again, that's the _hope_.

I think it'll be a long time before learning algorithms will take over the
wheel completely - but I think we are at or near a time when these systems can
and should let us know of low confidence, insufficient data to make a
decision, or equal confidence of multiple outcomes. A networked AI should be
able to tell us far enough in advance (e.g. while driving) for humans to step
in and make the final call, based on their own rationalization, empathy, etc.
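
As a toy sketch of that kind of confidence gate (the scoring, thresholds, and
function are all made up for illustration; real systems are far more
involved):

    # Illustrative only: decide when a driving system should ask the human
    # to take over, given confidence scores for each candidate maneuver.
    def needs_human_handover(confidences: list,
                             min_confidence: float = 0.8,
                             ambiguity_margin: float = 0.1) -> bool:
        if not confidences:  # insufficient data to make a decision
            return True
        ranked = sorted(confidences, reverse=True)
        if ranked[0] < min_confidence:  # low confidence overall
            return True
        if len(ranked) > 1 and ranked[0] - ranked[1] < ambiguity_margin:
            return True  # two maneuvers look equally plausible
        return False

    print(needs_human_handover([0.55, 0.52]))  # True: low confidence, ambiguous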

~~~
colechristensen
> it's crucial to their success ...

I don't care about a company's success if it requires endangering members of
the public who have no choice in the matter.

If we were talking about, say, astronauts and test pilots who know the risks
they take and make the choice, I'd have no problem with them risking their
lives. We're not.

>I think it'll be a long time before learning algorithms will take over the
wheel completely

They did. A system called "auto pilot" drove a man at full speed into a
barrier and killed him.

I have a limited trust of other humans on the road. The deal is that I
understand them. They behave in a way that makes sense to me, even (or
especially) the mistakes they make. As another driver or a pedestrian, I can
communicate a lot with a car just by looking at the driver and interpreting
body language and attention.

> The hope is that by experiencing an outlier and making a mistake, the
> algorithms will be able to adapt to ensure that mistake never happens again.

I think people are hugely underestimating the number and types of edge cases.
If you have to teach a car not to drive into a static object at full speed,
the issue isn't fixing this one problem; the issue is that if this problem
exists, 1000 others like it do too.

I come from cold climates and country roads, and I think the fact that these
vehicles are being designed and tested in the Bay Area (or Arizona) makes them
incredibly dangerous, because of the fair-weather bubble the people designing
them live in. Make your engineers live and work in rural northern Minnesota
and maybe you'll get a bit more trust from me.

------
Too
Is it the driver or the car doing the final braking in the video? Even if
Autopilot followed the wrong line, it should perform emergency braking or an
evasive maneuver when the barrier is right in front of it. This is not just
one error; it's two.

~~~
xuhu
I assume the driver didn't want to wait and see whether the car would brake at
the very last moment, so it's hard to determine if it would have in this case.

------
blinkingled
So it just fails to detect the fairly large obstacle in the road? This doesn't
seem very different from the adaptive cruise control in Hondas, but people
somehow use it as if it's a true autopilot?

------
calvinbhai
I don't own a Tesla, but I have driven one and was floored by whatever level
of Autopilot it had. Still, as a non-owner I think I'd never trust Autopilot
100% (it's like riding shotgun: I'm as attentive as the driver, or more). So
I'm not sure whether the Autopilot feature is something that makes owners
trust it more the more often they use it.

Now, Tesla doesn't claim that "Autopilot" is Level 5 autonomous driving. Nor
does it say one can drive hands-free.

Accidents can be much worse if someone drives hands-free and attention-free
with just cruise control on.

In the case of this fatal accident, I'm really surprised that the driver
(according to the family) knew the problem area very well, and yet was driving
with Autopilot on and not paying attention.

Unless he let Autopilot go on its own on purpose, thinking the Model X was so
safe that a head-on collision with the barrier would prove his point to Tesla,
or Autopilot became adamant and refused to disengage, leading to the crash,
I'm not sure at what point Tesla can be held liable, if at all.

Also, I thought those barriers were super safe, with enough crumple zones. I
wonder how crashing into a barrier like that could lead to such a fatal
accident.

------
EGreg
To me this is very strange.

Why do the road markings trump the objects coming up?

Obviously if a giant-ass obstacle is coming up, the car should be prepared to
drive around it, and to decelerate as a last resort. Moreover, if the obstacle
is moving, the car has to try to extrapolate its movement in order to avoid
it.

How else can you pass trucks and such?

Also, why can’t these cars be tested autonomously with soft tops instead of a
metal chassis, so collisions don’t really hurt anyone? Basically a pillow on
wheels!!

~~~
rohit2412
Because radar effectively can't see static obstacles (returns from stationary
objects get filtered out):

[https://www.wired.com/story/tesla-autopilot-why-crash-radar/](https://www.wired.com/story/tesla-autopilot-why-crash-radar/)

And cameras (vision) are overhyped; they are not reliable enough to recognize
obstacles, etc.
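
The gist of the Wired piece is that radar does return echoes from stationary
objects, but driver-assist stacks typically discard returns whose speed over
the ground is near zero, to avoid phantom braking on overhead signs and
roadside clutter. A crude, purely illustrative sketch of that heuristic (the
names and threshold are invented):

    # Crude sketch of the classic radar ACC heuristic: a return whose speed
    # over the ground is ~0 looks like roadside clutter (signs, bridges,
    # parked cars) and gets dropped; that is exactly how a stopped obstacle
    # in your lane can end up ignored.
    def keep_radar_return(ego_speed_mps: float,
                          relative_speed_mps: float,
                          min_ground_speed_mps: float = 1.0) -> bool:
        ground_speed = ego_speed_mps + relative_speed_mps  # object's own speed
        return abs(ground_speed) > min_ground_speed_mps    # moving: keep tracking

    # A crash barrier approached at 30 m/s: relative speed is -30 m/s, so
    # its ground speed is 0 and the return is filtered out.
    print(keep_radar_return(30.0, -30.0))  # False: barrier discarded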

~~~
EGreg
This is why Waymo rulez

[https://arstechnica.com/cars/2017/01/googles-waymo-invests-in-lidar-technology-cuts-costs-by-90-percent/?amp=1](https://arstechnica.com/cars/2017/01/googles-waymo-invests-in-lidar-technology-cuts-costs-by-90-percent/?amp=1)

------
kfe
I wonder how easy it is for Tesla to 1) find the exact problem and 2) fix it
(provided it is software-based). Does it mean that they first have to figure
out what some "black box" neural network(s) are getting wrong, then retrain
(on simulations), etc.?

What if city X decides to repaint all its buses and that confuses the object
detection system? How long would it take to get the (hot)fix out the door?

------
KKKKkkkk1
I can totally see how a human driver can crash in this situation if he or she
is solely focused on the lane markings. This puts the lie to Tesla and
Mobileye's claims that camera-based computer vision is enough for autonomy
just because humans use two low-res cameras to drive.

~~~
friedman23
> This puts the lie to Tesla and Mobileye's claims that camera-based computer
> vision is enough for autonomy.

How are these things related? Are you suggesting that cameras are not
sufficient for self driving?

~~~
sudhirj
The companies are stating that they won't be using LIDAR or Radar, and that
cameras are enough.

Cameras tend to rely on markers of some sort - and in cases like the markers
on the road were very misleading. Even a short-sighted beginner human driver
who was told to carefully follow the lanes would have made the same mistake.
And cars running purely on cameras are comparably to short sighted baby human
drivers.

~~~
manicdee
Tesla uses radar, and is working on improving its radar to return point clouds
just like lidar does.

Even then, cameras are enough for accurately mapping the world around the car;
it’s just that Tesla’s state of the art has not matched the academic proofs of
concept demonstrated over the last few years.

------
est
It looks like an overfitting stick-to-left-lane problem.

------
RobLach
I’ve driven this road many times and it’s pretty obvious where to go. The
lanes split into 2 different roads, and you can see the elevated way turning
right in the distance, so turning left would never make sense.

I wouldn’t be surprised if the chevrons, which are pretty worn down, are low
priority on the maintenance list, given how obvious the split is and how
annoying work there would be.

~~~
Symbiote
> low priority on the maintenance list

The impression from abroad, from reading about collapsing bridges, is that
pretty much all maintenance is low priority.

On my last visit, my friend told me to avoid particular freeways near his home
because chunks were falling off the bridge.

------
hutattedonmyarm
What a terrible video title. Also, why are people recording with their phones
while driving (or in this case, letting the car drive)? This doesn't look like
a dashcam.

------
mdekkers
Fun fact: given the current implementation of Autopilot, it is insane to
expect an autonomous vehicle. There should be some kind of common-
sense/intelligence test for would-be users of Autopilot. So far, every
incident involving Autopilot that I have heard about or looked at can be
summarised as "driver was being a dumbass".

~~~
Swizec
When most of your users have a problem it’s not PEBKAC, it’s bad UX.

Just because a door says push doesn’t mean it looks pushable. If it doesn’t
look pushable, users will pull.

If all the PR and the CEO keep saying “autopilot”, users will use it as auto
pilot. If it isn’t autopilot, it should have a different name.

~~~
PhantomGremlin
_If all the PR and the CEO keep saying “autopilot”, users will use it as auto
pilot. If it isn’t autopilot, it should have a different name._

The word "autopilot" originated in aviation, where it _is a system used to
control the trajectory of an aircraft without constant 'hands-on' control by a
human operator being required. Autopilots do not replace human operators, but
instead they assist them in controlling the aircraft._[1]

Most aviation professionals understand those limitations. You're complaining
that the average Tesla owner ascribes a different meaning to that word?

Sadly, in aviation, over-reliance on autopilot has also led to very tragic
results, the most prominent recent example being the crash of AF447, which
killed 228 people.[2]

Musk is doing nothing worse than Airbus, which has pushed automation to the
extent that many of their pilots no longer understand the basic principles
involved in aviation. They aren't pilots, they're operators of a very complex
machine which has a UX that is usable in benign conditions, but that is
abominably bad under difficult conditions.

When faced with "temporary inconsistencies" from various sensors, the AF447
autopilot gave up and returned control to the pilots. But the pilots'
inexperience in actually "flying" the aircraft, together with the awful Airbus
UX, caused one pilot to say "We've lost all control of the aeroplane we don’t
understand anything we’ve tried everything".

In reality, AF447 was a perfectly flyable aircraft. "Pitch and power" was all
it needed: holding the aircraft level with pitch at (IIRC) 6 degrees above the
horizon and applying 85% power.

[1]
[https://en.wikipedia.org/wiki/Autopilot](https://en.wikipedia.org/wiki/Autopilot)
[2]
[https://en.wikipedia.org/wiki/Air_France_Flight_447](https://en.wikipedia.org/wiki/Air_France_Flight_447)

~~~
Swizec
> without constant 'hands-on' control by a human operator being required.

This is exactly what the Tesla Autopilot isn’t. It requires constant hands-on
control; in fact, it goes as far as flashing alarms when your hands are not on
the wheel.

~~~
TomMarius
It requires constant hands on the steering wheel, but not constant control
(that's just regular driving); only readiness, just like in aviation. The
difference is that you have more time to react when you're flying, so hands on
the controls are pointless there.

