
This Test Shows Why Tesla Autopilot Crashes Keep Happening - knuththetruth
https://jalopnik.com/this-test-shows-why-tesla-autopilot-crashes-keep-happen-1826810902
======
rayiner
> Tesla has always been clear that Autopilot doesn’t make the car impervious
> to all accidents

This makes it sound like the problem is that there are edge cases that aren’t
handled. In reality, the test shows that Autopilot is only designed to handle
the special case of lane keeping and following the car in front of you, but is
too simplistic to actually handle the general case of driving the car. It’s a
very advanced cruise control, not a self-driving system that still has kinks
to work out.

------
noncoml
It’s clear that Tesla is treating this as a PR/marketing problem, but they
fail miserably even at that.

If you think it’s a PR/marketing problem and the system operates as it
should, then stop calling it Autopilot and stop advertising cars with “full
autopilot capabilities.”*

* “Theoretically the hardware should be enough to enable full autopilot some
time in the future. For now, use this other Autopilot, which is not really
autopilot. Btw we made this really cool autopilot video, but it’s not the
Autopilot you have in your car.”

~~~
manicdee
The marketing problem is lay people thinking that aircraft autopilots are
magical self-flying machines, when they often require the pilot to provide
corrective input, especially during “autoland.”

If you think Tesla Autopilot needs another name, stop operating on mystical
beliefs like Reiki and the magical self-flying autopilot, and learn what an
actual aircraft autopilot is capable of.

~~~
dingaling
An aircraft does not require any corrective inputs during autoland; in fact,
it is the opposite: any control input will abort the autoland.

It just needs the pilots to make a land / no-land decision at the specified
altitude. Can they see the runway lights within the specified distance and is
the crosswind-component within the acceptable range? If they choose to
continue the landing then they don't touch the controls.

Imagine you're coming in to a wet runway with 20kt crosswind and one engine
inoperative. Sounds like a nightmare; but in that case autoland will do a
better and more consistent landing than most pilots. Can we say that about car
self-driving systems yet?

~~~
manicdee
Here's a discussion of flying a plane on a Cat III ILS autoland:
[https://youtu.be/UO2K4-zRubA](https://youtu.be/UO2K4-zRubA)

Note all the crosschecking between the human operators and the two autopilots.
Mentour speaks almost exclusively about flying the Boeing 737, a very reliable
aircraft. Even on this aircraft the human pilots are ready to take over from
the autopilot at a moment's notice.

Here is an autoland using Cat III ILS in an Airbus A320, with the humans
invigilating at all times, ready to go around if anything appears
off-nominal:
[https://www.youtube.com/watch?v=mSNE3SmYA-8](https://www.youtube.com/watch?v=mSNE3SmYA-8)

So to a naive observer, it may appear that you are correct: planes can land
themselves without intervention. But that is only true when the airport,
aircraft, and conditions qualify for a Cat III approach (i.e., ideal
conditions) and remain that way throughout the procedure. If the airport,
aircraft, or environment is not qualified for a Cat III approach, you don't
get this ultra-precise autoland capability. If anything goes wrong with any
equipment during the approach, you don't get the autoland capability (and
indeed a go-around is required).

If there is anything outside spec (e.g.: aircraft manoeuvring on the taxiway
can distort the ILS beam, thus Low Visibility Procedures), things will go
wrong and the pilot needs to intervene.

At no point is it safe for a pilot to let the plane do the flying and sit back
to have a cup of tea. The captain and first officer have their assigned tasks:
be ready to land the plane and be ready to go around (respectively).
Intervention is required from time to time, so they need to be ready to
intervene every time.

Cars do not have the option of "going around". Every action has the potential
to cause fatality within seconds. There's even less room for inattentiveness
when using a car on autopilot.

The OP suggested that autopilot should be able to drive the car without any
human intervention, but they have confused autopilot with autonomous
driving, probably because of the "auto" prefix. Yet they've never been
confused about an automobile requiring constant human attention.

It's clear that Tesla have a PR/marketing problem when the population is
happy to drive an auto with auto but can't differentiate between autopilot
and autonomous.

------
King-Aaron
I can understand Tesla's position in saying that the driver should still be
aware of their surroundings. _But_. Avoiding stationary objects should be the
first objective of a system like this in my opinion, especially if the object
is directly in the projected path the vehicle is travelling in.

~~~
mirimir
Sure, but such systems won't work at all unless they ignore stationary
objects like bridge abutments. And they lack the angular resolution to
determine whether stuff is directly ahead, or off to one side. Plus the fact
that roads curve. (A rough sketch of this filtering logic follows below.)

Or, as in the Thatcham Research video, they miss stuff that's hidden by the
vehicle ahead. I always make an effort to know what the vehicles far ahead
are doing. If necessary, I offset within my lane occasionally to check for
hidden stuff. And when I'm behind large vehicles, I don't follow so closely
that I can't see what's ahead.

But if you're going to do all that, why use Autopilot?
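
To make that tradeoff concrete, here is a minimal Python sketch of the
range-rate filtering that radar-based cruise controls are commonly described
as using. The names, numbers, and threshold are illustrative assumptions,
not Tesla's actual logic:

    # Hypothetical sketch: why radar ACC drops stationary returns.
    # A return closing at roughly the ego vehicle's own speed is
    # indistinguishable from a parked car, a bridge abutment, or an
    # overhead sign, so classic ACC logic filters it out.

    EGO_SPEED = 31.0  # m/s, roughly 70 mph (assumed)

    def is_stationary(range_rate, ego_speed=EGO_SPEED):
        # Negative range rate means closing. A stationary object
        # closes at exactly the ego speed; allow 2 m/s for noise.
        return abs(range_rate + ego_speed) < 2.0

    returns = {
        "lead car": -3.0,             # closing slowly: tracked
        "stopped fire truck": -31.0,  # closing at ego speed: dropped
        "overhead sign": -30.5,       # looks the same: also dropped
    }
    tracked = {k: v for k, v in returns.items() if not is_stationary(v)}
    print(tracked)  # {'lead car': -3.0}

Once returns are filtered this way, a stopped vehicle in your lane and a
harmless overhead gantry produce the same signature, so both get dropped.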

~~~
Maybestring
>And they lack the angular resolution to determine whether stuff is directly
ahead, or off to one side.

Sounds like my Grandma. For her safety, and everyone else's, she doesn't drive
any more.

~~~
mirimir
Sounds like me without my glasses. But then, I don't drive without my glasses.

------
adt2bt
Autopilot-bashing articles are becoming more and more popular as people love
to gawk at accidents. I believe a lot of the comments on these threads are no
more than online rubbernecking.

When these things pop up, I'd love to know the answers to some of these
questions.

1. What is the rate of accidents for Tesla drivers with and without
Autopilot on similar driving terrain?

2. How 'correctable' are these flaws in Autopilot? I doubt someone is
programming in what to do in every situation. Is the solution to twiddle
some weights and pray your ML pipeline spits out a better version? I'm
guessing the true answer lies somewhere in the middle, but I don't have the
technical background to know for sure.

3. I see a lot of references to Waymo leading the pack of would-be
self-driving vehicles. What has Waymo done that makes its self-driving tech
so much better?

4. Are we okay with accepting the costs of self-driving cars given their
potential in the future? Most major transport-related revolutions have come
with a significant human cost at the outset, as the early adopters accept a
large amount of risk to push the technology forward. With time and usage,
things stabilize and become safer. Flying was quite dangerous early on, too.

~~~
batmansmk
I'll try to answer as best as I can.

1. Your question seems mostly answered in this recent article:
[https://www.wired.com/story/tesla-autopilot-safety-statistics/](https://www.wired.com/story/tesla-autopilot-safety-statistics/)

2. I don't know much about the architecture or model used by Tesla, and I
don't have open access to the accident details. It is challenging for most
people to answer your question publicly and factually. Sorry!

3. Again, I would say only a very limited number of people have a clear
view of the current status. Some claim it is hard for Tesla to manufacture
and design cars, solar panels, and batteries while designing an autopilot at
the same time. It requires a very broad diversity of skills that may be hard
to steer and build from the ground up.

4. Fair enough, point taken. My personal complaint is not about the
technology in general but about Tesla's marketing. I hope the first planes
were not claiming to be safer. The Titanic did, though :).

------
nanis
It seems to me the kids who program these toys have never driven anything
other than the RC model they built to get this job. And, of course, it drives
fine on the toy "roads" they set up on the carpet in the living room. Real
world, not so much.

~~~
stephengillie
From a recent article[0]:

> _When a car is moving at low speeds, slamming on the brakes isn't a big
> risk. A car traveling at 20mph can afford to wait until an object is quite
> close before slamming on the brakes, making unnecessary stops unlikely.
> Short stopping distances also mean that a car slamming on the brakes at
> 20mph is unlikely to get rear-ended.
>
> But the calculation changes for a car traveling at 70mph. In this case,
> preventing a crash requires slamming on the brakes while the car is still
> far away from a potential obstacle. That makes it more likely that the car
> will misunderstand the situation—for example, wrongly interpreting an
> object that's merely near the road as being in the road. Sudden braking at
> high speed can startle the driver, leading to erratic driving behavior.
> And it also creates a danger that the car behind won't stop in time,
> leading to a rear-end collision._

[0][https://news.ycombinator.com/item?id=17274179](https://news.ycombinator.com/item?id=17274179)
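
A quick back-of-envelope check of the quoted claim, as a Python sketch. The
8 m/s^2 deceleration (roughly 0.8 g on dry pavement) is my assumption, not a
figure from the article:

    MPH_TO_MS = 0.44704
    DECEL = 8.0  # m/s^2, assumed hard-braking rate

    def braking_distance_m(speed_mph):
        # Stopping distance from speed v: d = v^2 / (2a)
        v = speed_mph * MPH_TO_MS
        return v * v / (2 * DECEL)

    print(f"{braking_distance_m(20):.1f} m")  # 5.0 m
    print(f"{braking_distance_m(70):.1f} m")  # 61.2 m

Braking distance grows with the square of speed, so at 70mph the system has
to commit to a decision while the obstacle is still sixty-odd meters away,
exactly where sensor resolution is at its worst.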

~~~
bigiain
So instead of "startling the driver", who's supposed to be paying careful
attention to what's going on, the algorithm chooses to plow into the
stationary object?

I suspect _that_ might _also_ "startle the driver"...

~~~
Boxbot
The algorithm has been designed to ignore stationary objects because the
algorithm and its associated hardware are incapable of determining which
stationary objects are actually in the path of the vehicle. Whether the
response would be slamming on the brakes or blaring a klaxon, there are so
many false positives that doing anything else would make the system useless.

So of course the only sensible thing to do is release a product that happily
accelerates into the back of a bright red emergency vehicle with flashing
lights. Or into a stationary concrete barrier. Or into the side of a massive
tractor-trailer.

I mean, who wants to bother with false positives?

~~~
bigiain
What's the old gag?

"You deliver a project 85% correct at university, you get a high distinction.
You deliver a project 15% wrong at work, you get fired."

Who are these people?

------
pwinnski
Computers have one huge advantage over humans: they can process a lot of
information very quickly.

Humans have one huge advantage over computers: we pattern-match instinctually,
building mental models of our surroundings.

Tesla is taking advantage of the first fact, but by avoiding LIDAR, they're
running up against the second: without LIDAR, the car has to rely on the
kind of pattern-matching at which humans far outclass computers.

We humans don't have LIDAR, so it makes sense that a car without LIDAR should
be able to match us, but we have brains and visual systems that far exceed
anything available today or at any point in the near future when it comes to
pattern-matching.

40,100 vehicle deaths in 2017 in the US, most of them the result of distracted
driving. It's a shame that Tesla's system performs worst at exactly the point
that humans need the most help.

~~~
MarkMMullin
LIDAR is certainly useful in building up point clouds, especially when
coupled with RADAR; however, the devil always lies in the details.

LIDAR is higher resolution than radar, but often slower - counterintuitive
to be sure, but such is the case - i.e. the large volume of data can lag
reality by some small delta, and at 60mph that can add up.
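
For a rough sense of what that delta costs, here is a tiny sketch; the
latency figures are assumptions for illustration, not measured LIDAR lag:

    speed = 60 * 0.44704  # 60 mph in m/s, about 26.8
    for lag in (0.05, 0.10, 0.20):  # seconds of sensing/processing lag
        print(f"{lag * 1000:.0f} ms -> {speed * lag:.1f} m of travel")
    # 50 ms -> 1.3 m, 100 ms -> 2.7 m, 200 ms -> 5.4 m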

Uber does use LIDAR and RADAR; however, the informed external commentary
I've seen (admittedly guesswork based on the NTSB reports) seems to indicate
that a sensor-fusion error was behind the AZ fatality. Sensor fusion is a
beastly complex problem on top of miserable calibration exercises :-(

More importantly, AI only exists in the minds of marketers, media, and the
improperly informed. This is all just pattern recognition - we're a ways off
from having a system understand that when an occluding obstacle moves out of
the way, the previously unknown, high-priority information now exposed needs
to be checked. I'm sure such tests will get hardwired into current systems,
but then the system is limited by what does get hardwired into it. At the
end of the day, a visual system, no matter how sophisticated, is still only
a visual system.

------
fanzhang
In that video, does the Tesla try to brake, and brake hard?

This seems to be the case -- if you watch the video, the Tesla comes to a
full stop about 10 feet past the car, implying it must have hit the brakes
hard well beforehand.

In that case, I don't blame the Tesla for crashing -- you are literally
baiting it into the crash, like waving a flag at a bull. You make it follow
one car, prevent it from changing lanes, and then park an obstacle in the
middle of the lane.

I'm all for getting Tesla to be safer. The accident count has gotten way too
high. But this seems to be a rigged test, and the Tesla did brake.

~~~
rhino369
>In that case, I don't blame the Tesla for crashing -- you are literally
>baiting it into the crash, like waving a flag at a bull. You make it follow
>one car, prevent it from changing lanes, and then park an obstacle in the
>middle of the lane.

This situation is pretty common on the road. And a lot of humans who aren't
paying attention get caught by it.

But it's not impossible to avoid: you have to be watching ahead of the car
in front of you. And if you can't see past the car in front of you, you have
to leave a lot more space so you can brake in time.

------
faitswulff
So...why is it that the default autopilot behavior isn't to slow down
drastically if the driver doesn't initiate manual control fast enough?

~~~
cjhopman
I don't think you understand. The Tesla Autopilot there thinks it's doing a
great job because it thinks there's just an empty lane ahead of it (its
model of the world is basically that all stationary things are just pictures
on the ground).

------
savrajsingh
Autopilot is currently an aspirational name. Perhaps the name should be
changed.

------
mikeash
So, why do non-autopilot crashes keep happening?

~~~
Boxbot
Same reason the autopilot crashes happen: Driver isn't paying attention.

Problem is that the autopilot system makes it harder for the driver to pay
attention.

~~~
mikeash
Does it really?

~~~
FreakyT
Yes. Think about it this way: if having Autopilot on requires the same amount
of attention/effort as having it off, then there would essentially be no point
to the system whatsoever. The entire idea is that the system takes over some
of the driving tasks to make the experience more pleasant, just like how
regular cruise control is convenient because it frees you from having to pay
attention to maintaining the car's speed by pressing the accelerator pedal.

The question is, does it reduce the amount of attention paid to a significant
enough degree that it's dangerous? Consider subways -- most of them have
automatic train control systems in place that could effectively drive the
trains automatically. However, operators are still required to control the
speed, largely because it keeps them engaged.

Give an operator nothing to do, and they won't pay attention -- this is basic
human psychology (look up "Vigilance Tasks" if you're interested in further
reading).

~~~
mikeash
A lot of the attention you pay during normal driving is on mundane things like
making tiny adjustments to stay within the lines. By removing that requirement
you have more attention to put towards useful things.

~~~
bigiain
Yeah - but that's most likely to be your phone, or the argument you had at
work with your boss, or wondering why your partner didn't talk much this
morning... I strongly doubt any "freed up attention" is getting redirected
into driving-safety related "useful things".

~~~
manicdee
This is where we separate the good drivers from the accidents waiting to
happen. The good drivers will be watching the five or more cars ahead in all
lanes to catch any unexpected braking or lane changes, and watching passing
vehicles for indications of being exit-darters (those drivers who are so keen
to catch the exit they will drive at high speed in the passing lane, then
madly swerve through the slow lanes to get to their exit).

The accidents will instead get caught up in that antivaxxer Facebook debate on
their phones.

------
nodesocket
The video seems ludicrous. They fail to show the same test with the Tesla in
manual mode crashing into the back of the car. I doubt a manual driver would
be able to avoid that rear-end accident. I could be wrong, but they should
demonstrate it with scientific results and facts instead of making
fear-mongering promotional videos.

~~~
stephengillie
Given the Model 3's poor stopping distance, you may unfortunately be correct
that even an aware driver couldn't stop in that space.

    
    
      Car                   distance (feet)
      Model 3 (orig)        152
      Model 3 (new)         133
      F-150                 129
      Model X               127
      Camry Hybrid          125
      F-150 Lariat          119
      Model S               118
      Porsche Panamera GTS  110
      Chrysler 300S         109
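
For context, here is a small sketch converting those distances into an
implied average deceleration, assuming they are 60-0mph figures as in
Consumer Reports' testing (the chart itself doesn't state the speed):

    V0 = 88.0  # 60 mph in ft/s
    G = 32.2   # standard gravity in ft/s^2

    for car, d_ft in [("Model 3 (orig)", 152), ("Model 3 (new)", 133),
                      ("Model S", 118), ("Chrysler 300S", 109)]:
        decel = V0 ** 2 / (2 * d_ft)  # a = v^2 / (2d)
        print(f"{car:15} {d_ft} ft -> {decel / G:.2f} g")
    # Model 3 (orig)  152 ft -> 0.79 g
    # Model 3 (new)   133 ft -> 0.90 g
    # Model S         118 ft -> 1.02 g
    # Chrysler 300S   109 ft -> 1.10 g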

~~~
pwinnski
After Consumer Reports highlighted this, Tesla issued an OTA update, and the
Model 3 now consistently stops in a reasonable distance.

~~~
stephengillie
Yes, this is noted in the chart:

    
    
      Model 3 (new) 133

