
NTSB: Tesla’s Autopilot UX played a “major role” in fatal Model S crash - notlob
https://arstechnica.com/cars/2017/09/ntsb-teslas-autopilot-ux-a-major-role-in-fatal-model-s-crash/
======
apeace
I'm a huge Musk fan and wish him and Tesla all the success in the world. But
releasing any kind of machine-assisted driving without a near-infallible
"don't hit objects in front of the vehicle" function is simply irresponsible.

This crash, as well as the one where a Tesla crashed into a concrete
barrier[1], is evidence that their tech is not ready for release.

How much do you want to bet that Waymo's self-driving technology would have
avoided both of these? Seems like one of the simplest cases to handle:
something up ahead is blocking the way!

Anyone who uses Tesla machine-assisted driving features today is putting
themselves in grave danger (except for low-speed features like parallel
parking).

[1]
[https://www.youtube.com/watch?v=YIBTBxZ3NWw](https://www.youtube.com/watch?v=YIBTBxZ3NWw)

~~~
zip1234
On the other hand, if you know the technology isn't infallible but is better
than humans in a general sense, and that releasing it will save lives,
wouldn't you be morally obligated to release it? Wouldn't holding it back,
even though it would save lives, be worse?

~~~
davrosthedalek
I think people would rather risk being killed by their own mistakes than by
the mistakes of an AI, even if that risk is somewhat higher, maybe even
considerably higher.

~~~
sol_remmy
Disagree. I will have my family use whichever is statistically safer.

~~~
davrosthedalek
This might be harder to gauge than you think. Maybe a Tesla is much safer on
the highway, but less safe in the city, or the other way around. Or less safe
on ice, etc. The averaged risk (and we probably won't have anything else for a
while) over the whole population might be quite different from that for your
actual use.
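
A toy version of what I mean, with entirely made-up numbers (fatality rates
per million miles, weighted by where the miles are actually driven):

    # Made-up per-context fatality rates (per million miles).
    human = {"highway": 0.8, "city": 1.2}
    ai    = {"highway": 0.4, "city": 1.6}   # safer on the highway, worse in the city

    # Exposure: share of miles driven in each context.
    fleet_mix = {"highway": 0.7, "city": 0.3}   # population average
    your_mix  = {"highway": 0.2, "city": 0.8}   # a mostly-urban driver

    def avg_risk(risk, mix):
        # Exposure-weighted average risk.
        return sum(risk[ctx] * mix[ctx] for ctx in mix)

    print(avg_risk(ai, fleet_mix) < avg_risk(human, fleet_mix))  # True: fleet stats favor the AI
    print(avg_risk(ai, your_mix) < avg_risk(human, your_mix))    # False: this driver's mix doesn't

The fleet-level statistics and your personal risk can point in opposite
directions.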

------
sorenjan
I think it's weird that so many people seem to think that Tesla have made such
a significant technology leap ahead of everyone else that they're comfortable
with putting their lives in the hands of their Autopilot system while other
manufacturers have no such thing available in production. I wonder how much of
that relaxed critical thinking is down to the hype surrounding Tesla and its
founder.

I also think it's strange that they're allowed to sell cars equipped with
Autopilot at the moment, there's too little known about its capabilities and
weaknesses.

~~~
GuB-42
The real question is: does Autopilot save more lives than it claims, as most
advocates like to say?

If the answer is no, then Tesla's Autopilot shouldn't be allowed as it is.

If the answer is yes, then it means that other manufacturers are too cautious.

~~~
aidenn0
That is a false alternative.

One very simple example from a complicated issue:

If Autopilot saves lives when used correctly, but Tesla's marketing material
implicitly encourages people to use it incorrectly (e.g. video of Musk with no
hands on the wheel), then the marketing is dangerous.

~~~
PeterisP
There should be no "when used correctly" - the question is whether it saves
(net) lives or kills people when used _normally_, as people tend to use it for
whatever reasons, no ifs or buts, just count the cases. Just as people
violating the law by driving drunk is still a valid argument for more
automation of driving, people violating guidelines by using automation
incorrectly is a valid argument against it. If the automation can't handle
common misuse and _still_ be safer on average, then it's not ready yet.

~~~
rhino369
But Autopilot isn't an all-or-nothing thing. If Tesla could save lives by
changing the name, requiring constant hands on the wheel, or doing a better
job of monitoring whether the person is actually in control, then they are
being negligent by failing to do so.

If you invent a cure for cancer, but 1/10th of your pills contain rat poison
that could easily have been filtered out, it doesn't matter that you saved
lives overall. You still killed people through negligence.

~~~
dkonofalski
In this situation, though, the parallel would be more closely stated as "If
you invent a cure for cancer, but 1/10000000th of your pills have rat poison,
did you kill people through negligence if you acknowledged that you're in a
trial period, warned that there's a small chance they may have a reaction and
die, and made them sign a waiver accepting that risk?"

Every single person that uses Autopilot acknowledges the risk multiple times
before they can use the system.

------
jquery
The fact that the NTSB is investigating is comforting. We're moving crashes
from the domain of "shit happens" to more systematic "autopilot system X
suffered Y deficiency", more analogous to the investigation of plane crashes.
I'm hopeful that we'll see the number of car fatalities drop sharply over the
next 20 years as autopilots improve.

~~~
Animats
Yes. Here's the actual NTSB abstract.[1] The full report isn't out yet. Main
recommendation:

 _To manufacturers of vehicles equipped with Level 2 vehicle automation
systems (Audi of America, BMW of North America, Infiniti USA, Mercedes-Benz
USA, Tesla Inc., and Volvo Car USA):_

 _5. Incorporate system safeguards that limit the use of automated vehicle
control systems to those conditions for which they were designed._

 _6. Develop applications to more effectively sense the driver’s level of
engagement and alert the driver when engagement is lacking while automated
vehicle control systems are in use._

The NTSB points out that the recorded data from automatic driving systems
needs to be recoverable after a crash without manufacturer involvement. For
plane crashes, the NTSB gets the "black boxes" (which are orange) and reads
them out at an NTSB facility. They want that capability for self-driving cars.

[1]
[https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf](https://www.ntsb.gov/news/events/Documents/2017-HWY16FH018-BMG-abstract.pdf)

~~~
davedx
That's really reassuring. Constructive and thoughtful recommendations. Even
though this government agency is in a country I do not live in, I feel better
about autopilot systems from reading how this is being addressed over in the
USA.

~~~
davidmr
The NTSB and most other highly developed nations' equivalents really are
quite amazing. For whatever reason, I read most of the full NTSB aviation
incident reports. Aside from their thoroughness and the engineering geekery,
they are truly master classes in how to do a proper postmortem.

The NTSB's ability to remain apolitical and engineering-focused is really
remarkable. The way they were portrayed in "Sully" is shameful and was
uncalled for.

------
karol
> The machine learning algorithms that underpin AEB systems have only been
> trained to recognize the rear of other vehicles, not profiles or other
> aspects.

Not something I knew about Tesla's Autopilot before reading this.

~~~
CalRobert
Geeze, this is terrifying given that my rear probably doesn't look like that
of a vehicle when I'm walking, or cycling, or motorcycling, etc.

~~~
EngineerBetter
I've seen Autopilot 2 fail to notice cyclists. But then, it's designed for use
on large, fast roads, rather than city streets.

I cycle through the streets of London every day, BTW, so I'm not suggesting
that its lack of recognition ability is a good thing.

------
EngineerBetter
If you use Autopilot for any length of time and believe that you don't need to
be paying attention, then I'd suggest that you're responsible for whatever
fate befalls you.

Autopilot 2 keeps you in lane, does traffic-aware cruise control, and can do
lane changes. That's it. No more, no less.

It sometimes gets spooked by shadows. I've had it brake sharply when going
under bridges on the motorway, because of the shadow on the road. I've had it
mis-read lane markers (in understandable circumstances; the old lines had been
removed but the marks they left still contrasted with the tarmac).

If anything, I find Autopilot 2 to require too much hand-holding - literally!
I rest my right hand gently on the bottom-right of the wheel when Autopilot is
engaged, resting my wrist on my knee, which is also how I drive regular cars
on the motorway. This isn't enough for Autopilot to think I've got my hands on
the wheel, so it's constantly bugging me, and I have to wiggle the wheel to
tell it I'm there.
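
As far as I can tell the detection is purely steering-column torque, so a
hand resting on the rim without resisting the wheel is invisible to it. My
guess at the behaviour, with invented thresholds (not Tesla's actual code):

    import time

    TORQUE_THRESHOLD_NM = 0.3   # invented: a gently resting hand stays below this
    NAG_INTERVAL_S = 30.0       # invented: how long before the "hold the wheel" nag

    last_confirmed = time.monotonic()

    def check_driver(measured_torque_nm: float) -> str:
        # Only torque against the wheel counts as "hands on", which is why
        # a hand resting at the bottom of the rim goes unnoticed.
        global last_confirmed
        now = time.monotonic()
        if abs(measured_torque_nm) >= TORQUE_THRESHOLD_NM:
            last_confirmed = now
            return "ok"
        if now - last_confirmed > NAG_INTERVAL_S:
            return "nag"   # keeps nagging until the driver wiggles the wheel
        return "ok"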

Apparently some Model 3s have been seen with driver-facing cameras. Eye-
tracking seems like a much better solution to me.

I love Autopilot. I recently had to drive ~500 miles on motorways in a thing
that explodes ancient liquidised organic matter, and it was an absolute chore.
Autopilot is ace when used as intended, and great in traffic jams.

~~~
lagadu
I don't understand why you're mixing autopilot with IC engines here. The two
things are completely unrelated.

~~~
makecheck
There is a _ton_ of electric power available for features in a Tesla that
might require an irresponsible amount of fuel in a regular car.

~~~
mikeyouse
What?

IC cars have been shipping with all of the features currently available in
Teslas for the better part of a decade. 1 horsepower = 745 watts, so we're
talking about somewhere in the range of 20 HP to generate as much power as the
average Tesla consumes, _including powering the drivetrain_.

I'd be surprised if all the electric features in a Tesla consume more than
the equivalent of 5 HP.
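
The arithmetic, for what it's worth (the ~250 Wh/mile consumption figure is a
ballpark assumption, not a measurement):

    WATTS_PER_HP = 745.7

    def hp_to_kw(hp: float) -> float:
        return hp * WATTS_PER_HP / 1000.0

    # ~250 Wh/mile at 60 mph is about 15 kW of total draw, i.e. roughly
    # 20 HP including the drivetrain:
    print(round(hp_to_kw(20), 1))  # 14.9 (kW)
    # A generous 5 HP for everything that isn't the drivetrain:
    print(round(hp_to_kw(5), 1))   # 3.7 (kW), a small fraction of any IC engine's output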

~~~
makecheck
Well, for instance, a Tesla can run the AC as often as it wants to keep the
car cool while you're away; there's no risk of "killing" its battery like in a
regular car. Every device in the car can turn on at will without worrying much
about power.

~~~
lagadu
The same is true for any IC car whose engine is on. It's the alternator that
generates the power used; it's not taken from the battery.

IC cars even have the bonus that heating the cabin is effectively free, with
no extra battery drain.

------
cmsonger
Lots of folks are saying: too early. Let me make the opposite argument.

My wife has to do regular weekend interstate travel. She also has serious RSI
issues with her hands and arms. The ability to keep focus but completely
disengage from the steering wheel for a while, and from the accelerator for
the length of the trip, is incredibly valuable to her. It's why we purchased
the car.

Tesla is clear on the limitations. The feature is turned off by default.
Turning it on presents the limitations to the user.

~~~
euyyn
I understand Tesla's Autopilot now disengages if you fail to return your
hands to the wheel in time?

------
dsfyu404ed
No matter how you cut it, the failure mode of trying to drive a car of known
dimensions through a space smaller than those dimensions is pretty
inexcusable.

>The machine learning algorithms that underpin AEB systems have only been
trained to recognize the rear of other vehicles, not profiles or other aspects

Does Tesla's software make the assumption that the lane is safely navigable
unless it detects otherwise?

IMO it's pushing the envelope of recklessness for the software to default to
"good to go" in a system like this, which can safely refuse to operate
(whereas something like a shipboard fire suppression system should default to
"good to go", in order to prevent corner cases from impacting functionality).

While I understand that you don't want the car braking aggressively when it
encounters a bag blowing in the wind or poor lane markings mid-corner, not
driving into things is behavior you'd expect from an excessively conservative
human driver, and it's preferable to running into the back of a van parked on
the side of the road.

edit: If you're gonna down-vote you might wanna say why.

~~~
plaidfuji
Not sure why you're getting downvoted, because it's a good point. How in the
world do you let a computer-vision decision override the other sensors that
are saying "massive wall is moving in front of the car"? And who tf thought
training the CV only on the backs of cars was good enough? The edge cases are
what cause accidents, not the typical situations.

~~~
mandevil
Because a lot of those are transient signals that are crap? One of our robots
back in '06 would panic-stop every 30 seconds or so whenever there was a dust
cloud (this was an Army off-road formation project, so dust was a fact of life
for all the followers), because occasionally there would be enough random
scatter returns from the dust to convince the MMWR that there was a wall in
front of it. Panic-stopping from highway speeds every time your MMWR detected
a flyer floating in the breeze near some dust seems like it would be pretty
dangerous, especially for the cars behind you that are also traveling at
highway speeds. And LADAR doesn't have the range to be useful at highway
speeds (from memory, the ratio of range to safe braking distance means LADAR
stops being useful above 25-30 km/h).
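
To make the trade-off concrete, here's the shape of the persistence gating
you end up writing (thresholds invented, not from any real system), along
with the braking-distance arithmetic behind that LADAR limit:

    from collections import deque

    # Only treat a return as a real obstacle if it persists across frames.
    # Dust clouds and wind-blown debris give scattered, short-lived returns
    # that this filters out, at the cost of reacting later to a real wall.
    N_FRAMES, MIN_HITS = 5, 4
    recent = deque(maxlen=N_FRAMES)

    def obstacle_confirmed(return_detected: bool) -> bool:
        recent.append(return_detected)
        return sum(recent) >= MIN_HITS

    # Sensor range caps speed: you must be able to stop within what you can
    # see. d = v^2 / (2a), with a hard deceleration of ~6 m/s^2.
    def braking_distance_m(speed_kmh: float, decel: float = 6.0) -> float:
        v = speed_kmh / 3.6
        return v * v / (2.0 * decel)

    print(round(braking_distance_m(30), 1))   # 5.8 m: short-range LADAR suffices
    print(round(braking_distance_m(110), 1))  # 77.8 m: far beyond its useful range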

As for whether having shared autonomy at highway speeds is a good idea, that's
a really good question that I hope Tesla is asking itself right now. But if
you are trying to do that, cameras pretty much have to be your primary source
of truth.

------
MikusR
Seems that Elon Musk is taking quite an active role in demonstrating the dangers of AI.

~~~
jakelarkin
Naive take: he grossly overestimates AI capabilities ... tech today (1)
cannot broadly drive a car and (2) is not a near-term existential threat to
humanity.

~~~
baq
1) he reads his SF 2) he understands exponential growth.

------
bognition
Why does the title claim the problem was in the UX when the article claims it
was faulty models?

>As NHTSA found, the Automatic Emergency Braking feature on the Tesla—in
common with just about every other AEB fitted to other makes of cars—was not
designed in such a way that it could have saved Brown's life. The machine
learning algorithms that underpin AEB systems have only been trained to
recognize the rear of other vehicles, not profiles or other aspects

~~~
helloworld
Because the user is an integral part of Tesla's "Autopilot" system. The
product name implies autonomy, but in fact, the vehicle requires a human in
the loop for safe operation.

Unfortunately, a poorly designed user experience lulled the driver into
behaving foolishly. That's why the headline mentions UX.

~~~
josefresco
"lulled the driver"

Did it? Or did the driver know what they were doing and not care? I don't
think it's responsible to make such assumptions about the driver's reasoning.

~~~
lagadu
What's relevant is that this wouldn't have happened with any of the other
semi-autonomous systems from other brands because they force the user to be
keenly aware.

Had it been the same for all systems, the argument could be made that the
current concept in general is flawed in this iteration. But because the others
demand a lot more driver involvement, the problem pivots to being a Tesla
problem, not a semi-autonomous driving problem.

------
mcguire
" _The machine learning algorithms that underpin AEB systems have only been
trained to recognize the rear of other vehicles, not profiles or other
aspects._ "

Holy crap. Who thought that was a good idea?

------
rhino369
Level 3 autonomy is reckless. It provides next to no value when used as
intended (i.e., a fully aware driver instantly ready to take over), and it is
dangerous under normal usage conditions (i.e., people using it to text and
drive).

~~~
pavlov
Couldn't Level 3 autonomy still be valuable for drivers who are slightly
impaired, like elderly people? That seems like a substantial market.

~~~
sorenjan
How would they be able to quickly take the wheel and brake if something
happens? I think the opposite is true: if you need a longer reaction time,
it's probably a bad idea not to be fully immersed in the driving until an
accident is imminent.

~~~
pavlov
I'm thinking of a scenario where the driver is holding the wheel, but the
computer is actually doing the driving most of the time while giving just
enough feedback to the driver to feel like she is in charge.

~~~
mandevil
In our tests a decade ago, we found that shared autonomy was actually much
more frustrating for users than either full or no autonomy. When you were
telling the computer to go over there and the computer wanted to go over here,
it left all of the users frustrated and complaining that the computer wouldn't
let them go where they wanted. Maybe you can come up with a better
adjudication system than we did, but it is very tricky to design the system so
that it catches when the user does something stupid but not when they are
doing the right thing; if the computer could figure out the right thing, you
wouldn't need the human!

~~~
pavlov
Interesting, cool to hear your experience!

------
torpfactory
The first question should really be: "Is autopilot+human drivers safer than
human drivers alone?" If it isn't, it would be arguably criminal to allow
drivers access to it. As with most technological breakthroughs, there will be
a learning curve (euphemism for death in a car crash in this case). As long as
it is an improvement over human drivers, it is a net good, I would argue. From
this perspective, even if Tesla is releasing a system with flaws, it is BETTER
than the alternative: more humans dying by driving their own cars. If they
waited to release a more perfect autopilot system, wouldn't that be keeping a
valuable safety system off the road?

This is only true, of course, if autopilot is actually safer than a human.
From what I have read, there is little data to support either conclusion.
Lacking proof that such a system is actually safer, Tesla should take the
conservative engineering decision and reduce or eliminate drivers' access to
such a system until enough data is available to make a conclusive statement on
safety.

------
crimsonalucard
These self-driving systems need to use machine learning models more in line
with what humans see. Nobody just recognizes the rear of a car... we look at
the rear of a car and we are aware of its entire three-dimensional shape and
the three-dimensional structure of the world around it. Are there any machine
learning models that can achieve this level of recognition, or is it all just
associating names with pictures?

~~~
jpindar
Did you see the recent article about recognizing stop signs? However the
machine supposedly recognized stop signs, it had nothing to do with them being
red, octagonal, or saying "STOP". It was possible to deface a sign such that
it was clearly still red, octagonal, etc., but the machine didn't recognize
it. So IMO the machine had no 'mental model' of what a stop sign is.

------
readhn
I agree that the name of the feature, "Autopilot", is misleading to
consumers. Honestly, Tesla should be sued for this, and the name "Autopilot"
recalled until it really is an autopilot!

"An autopilot - is a system used to control the trajectory of a vehicle
without constant 'hands-on' control by a human operator being required."

What Tesla has is not an autopilot!

~~~
chc
What Tesla has seems to fit that definition. In fact, what Tesla has seems
pretty similar to the autopilot in airplanes — both keep the vehicle on the
right course without constant control from a human, but both require a human
to be ready to take control in unforeseen circumstances. If it's deceptive,
it's because drivers have an overly hopeful idea of what autopilot is, not
because it's a very inaccurate name.

------
iamgopal
Elephant in the room: why doesn't Tesla partner with Google for Autopilot?
Google has tons of data, and Tesla has an order of magnitude better software
capabilities. Such a partnership could even accelerate global autonomous
vehicle adoption.

~~~
bobbygoodlatte
Because both companies believe that owning proprietary self-driving systems
will be a big competitive advantage. If one company beats the other to
mainstream adoption by 12-18 months they'll have a shot at dominating the
industry.

------
amelius
Perhaps a stupid question, but what entity tests self-driving cars before they
are admitted to the streets? Is this entity partially responsible?

------
boznz
"Minor Role" I read.

------
hashkb
The report found human error was equally to blame and went out of its way to
say that self-driving cars are the future. This article is sensationalizing.

~~~
ceejayoz
Sure, but the human error is having too much confidence in the autopilot.

~~~
hashkb
The system was warning him more or less constantly, according to the report.

~~~
ceejayoz
I'd count that part human error, part design error. Constant warnings
shouldn't be possible with this - it should disengage and refuse to function
again until you start driving more responsibly.
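
Something like a strikes-then-lockout policy, which would be trivial to
enforce in software (a sketch of the policy I mean, not any shipping
implementation):

    MAX_IGNORED = 3   # invented: ignored warnings allowed per drive

    class AutopilotLockout:
        """Escalate from warnings to a lockout instead of nagging forever."""

        def __init__(self):
            self.ignored = 0
            self.locked_out = False

        def on_warning_ignored(self):
            self.ignored += 1
            if self.ignored >= MAX_IGNORED:
                self.locked_out = True   # disengage; driver steers manually

        def may_engage(self) -> bool:
            # Stays False until the drive ends, no matter how nicely you ask.
            return not self.locked_out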

