
Tesla's 'Autopilot' Will Make Mistakes. Humans Will Overreact - svenfaw
http://www.bloomberg.com/view/articles/2016-07-01/tesla-s-autopilot-will-make-mistakes-humans-will-overreact
======
Animats
That's part of the problem.

Tesla has two basic problems with their "autopilot". One is hype, and one is
bad technical design.

Tesla's "autopilot" is just automatic lane keeping and automatic cruise
control. Mercedes and BMW have been shipping that for years. The other car
companies have hardware to check if the driver's hands are on the wheel. Tesla
not only didn't do that, they hyped those basic functions as being an
"autopilot", as if they had something comparable to what Google has. Tesla has
significantly overhyped their technology, to a dangerous level.

Here's another Tesla crash, from yesterday.[1] This is a Tesla rear-ending the
corner of a stalled van on a highway. This may be an "automation in wrong
mode" error, and complicated excuses from Tesla will probably be forthcoming.
That's not good enough. Airline pilots get into "wrong mode" errors, and they
get extensive training, including simulator time. Drivers don't get that. This
driver presumably thought the automation would do something sensible in that
situation. It didn't.

The other big Tesla problem is a pure technical design error - the bumper-
mounted radar is blind at windshield height. This is the immediate cause of
the fatal accident involving under-running the semitrailer. It's also the
cause of the parking accident with the truck with overhanging load sticking
out the back.[2] Tesla's implementation has deadly blind spots. They need a
radar at windshield-top height. It's quite likely, now that the NHTSA is
looking into this crash, that Tesla will have to do a recall and retrofit one.
(Along with hands-on-wheel sensors.)

Tesla was relying on stereo cameras too much. Depth from stereo is always
iffy; there needs to be some detail on the target to get range. You can't
range a uniform surface with stereo vision. (Human pilots have the same
problem flying over ice.) A big white truck under some lighting conditions has
that property. Cameras are useful, but, backed only by today's algorithms,
they're not enough. Teslas come with a sensor suite that's inadequate for the
job.
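
To make the stereo point concrete, here is a minimal sketch (using OpenCV's
block matcher, with made-up parameters; not anyone's production pipeline) of
how a vision system can at least measure how much of the scene produced valid
disparity. A uniform white surface tends to come back mostly invalid, and
that's the signal the planner needs to treat as "unknown" rather than "clear":

```python
# Minimal sketch of the stereo-texture problem using OpenCV's block matcher.
# Illustrative only; this is not Tesla's or Mobileye's actual pipeline.
import cv2
import numpy as np

def disparity_coverage(left_gray, right_gray):
    """Return the fraction of pixels with a plausible stereo disparity.
    Uniform, low-texture regions (a white trailer side, glare, ice)
    tend to produce invalid or unstable matches."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,       # must be divisible by 16
        blockSize=9,
        uniquenessRatio=10,      # reject ambiguous matches
        speckleWindowSize=100,
        speckleRange=2,
    )
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disp > 0             # failed matches come back as minDisparity - 1
    return float(valid.mean()), disp

# A cautious planner treats a large connected region with no valid disparity
# as "unknown", not as "empty road ahead".
```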

From a software perspective, Tesla's software seems to be overoptimistic that
there's no problem ahead. 3D vision systems know when they don't have range
info for a large area. Tesla's software was willing to drive into that. It
didn't note the absence of range info for the road surface; it may not even be
profiling the road ahead. (We all had to do that in the DARPA Grand Challenge
because we were operating off-road. That's a solved problem. Google does it
with their roof-mounted LIDAR.) Tesla owners have asked for "pothole
detection" as a feature. Lack of this may eventually take someone off a cliff.

The press is gradually picking up on this. Until recently, Tesla's autopilot
was treated as a magic black box in the press. Now some automotive writers are
starting to ask the right questions.

This is not a setback for automatic driving. This is just a Tesla problem.

[1]
[https://www.youtube.com/watch?v=qQkx-4pFjus](https://www.youtube.com/watch?v=qQkx-4pFjus)
[2]
[http://www.roadandtrack.com/new-cars/car-technology/news/a29133/tesla-self-driving-crash-summon-autonomous/](http://www.roadandtrack.com/new-cars/car-technology/news/a29133/tesla-self-driving-crash-summon-autonomous/)

~~~
joshdickson
> The other big Tesla problem is a pure technical design error - the bumper-
> mounted radar is blind at windshield height. This is the immediate cause of
> the fatal accident involving under-running the semitrailer. It's also the
> cause of the parking accident with the truck with overhanging load sticking
> out the back.[2] Tesla's implementation has deadly blind spots. They need a
> radar at windshield-top height. It's quite likely, now that the NHTSA is
> looking into this crash, that Tesla will have to do a recall and retrofit
> one. (Along with hands-on-wheel sensors.)

This is just completely false. We've been putting them at bumper height for 10
years because that's where the system ends up working best with the greatest
number of targets to see. NHTSA has been regulating these sensors for years
and designing the tests for them; they are not new to this. Tesla is not going
to have to do a recall on these, and spare me the idea that they are going to
"retrofit a sensor at windshield height." This is just mindless speculation
you made up.

~~~
Animats
Volvo has placed radars at the top center of the windshield. They had a blind
spot at bumper level, so later Volvo systems added a second radar down there.
And a LIDAR. [1] One radar at bumper level is just enough to keep from rear-
ending the car in front. That's not enough for a hands-off "autopilot". You
need at least two, or beam steering in elevation, like the Fujitsu unit.[2]
(Although that has only ±9° of elevation, not enough at bumper level to
eliminate the need for a higher sensor.)

[1]
[http://www.pcworld.com/article/2025386/volvo-v40-watches-the-road-with-camera-radar-laser.html](http://www.pcworld.com/article/2025386/volvo-v40-watches-the-road-with-camera-radar-laser.html)
[2]
[http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-1.pdf](http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-1.pdf)
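
The elevation numbers above are easy to check with back-of-the-envelope
geometry; the mount height and ranges below are rough assumptions, not specs
for any particular sensor:

```python
# Back-of-the-envelope elevation coverage for a bumper-mounted radar.
# Mount height, elevation, and ranges are rough assumptions, not specs.
import math

def top_of_beam(mount_height_m, elevation_deg, range_m):
    """Highest point covered at a given range by a beam steered up by
    elevation_deg from a sensor mounted mount_height_m above the road."""
    return mount_height_m + range_m * math.tan(math.radians(elevation_deg))

for r in (10, 20, 40):
    print(r, "m:", round(top_of_beam(0.5, 9.0, r), 2), "m")
# 10 m: ~2.1 m, 20 m: ~3.7 m, 40 m: ~6.8 m
# Close to the car the beam tops out barely two metres up, so a high-riding
# trailer body shows only a thin slice to the radar until it is very near.
```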

~~~
joshdickson
I had not seen some of that Volvo tech, thanks for the link. Ultimately the
radar portion is going to be more three-dimensional tech, still in the bumper.
It's a struggle financially to get a single one of these sensors on a
commodity vehicle, so the effort is focused on how to do more with that.
Still, what Volvo is doing with the hardware available is very interesting
transitional technology.

------
joshdickson
I used to work on the radar systems in these vehicles. It's a problem that the
feature is called "autopilot," yes, but the larger issue is that the car does
not _enforce_ what the company often says in press releases, which is that you
need to remain alert and with your hands on the wheel. There have been similar
systems in other vehicles for years - I'm most familiar with those in Mercedes
vehicles that are not as advanced but perform similar actions. Those systems
will _automatically turn off_ after just a few seconds of removing your hands
from the wheel. That is what the Tesla should do. The autopilot feature is a
toy; it's not to be used in place of a human driver, and a driver who hasn't
been paying attention can't step in quickly and take over if the system
suddenly fails. It should not be possible to find YouTube videos of people
driving completely hands-free for minutes at a time.

Yes, Tesla should not call it "autopilot," but if you want to really prevent
people from using the system in a way in which it's not intended, you need to
enforce it in hardware.
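
A toy sketch of the hands-off timeout I'm describing, with the grace period
and names invented for illustration rather than taken from any real system:

```python
# Toy sketch of a hands-off timeout like the one described above for other
# manufacturers' lane-keeping systems. Timings and names are invented.
import time

HANDS_OFF_GRACE_S = 5.0   # assumed: "just a few seconds", per the comment

class LaneKeepAssist:
    def __init__(self):
        self.engaged = True
        self._last_hands_on = time.monotonic()

    def update(self, hands_on_wheel: bool):
        now = time.monotonic()
        if hands_on_wheel:
            self._last_hands_on = now
        elif self.engaged and now - self._last_hands_on > HANDS_OFF_GRACE_S:
            self.engaged = False      # hand control back, with a warning
            self.warn_driver()

    def warn_driver(self):
        print("ASSIST DISENGAGING: take the wheel")
```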

~~~
te
Is Tesla's autopilot using radar? How is radar fooled by a white truck against
a bright background? I was surprised by this point of failure, which suggests
to me that they aren't even using radar.

~~~
joshdickson
As of July 2016, the autonomous driving system in the Model S (& X,
presumably) is _not_ using radar, only cameras. The vehicles do have radar
which is used for other things (various crash avoidance programs), but not
this, and those systems presumably did not fire correctly. I do not know if
Tesla's over-the-air update capabilities would allow it to push a future
autonomous driving update that could interface with the radar sensors; I'd
assume so. When I was working on these things, the software build that went to
the factory was what stayed on the car for 15 years; obviously Tesla's way is
superior.

As a side note, radar in cars and semi trucks are a difficult match. The
height of tractor trailer trucks makes it difficult to figure out if they are
actually obstacles or if you're really seeing a bridge or something. Normally
there's enough hanging down in back that it's not an issue, but in this case
the truck was perpendicular to the car. It would not surprise me if the radar
went right underneath the truck and either didn't see it, or saw it but
thought it was something like a bridge that the car could go under. These are
difficult and complex systems to engineer.
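
Here's a rough sketch of why that "obstacle or overpass?" call is hard for a
bumper-mounted radar with coarse elevation resolution; all the numbers are
illustrative assumptions, not how any production classifier works:

```python
# Why "obstacle or overpass?" is hard with coarse elevation resolution.
# Mount height, angles, and the 1-degree accuracy figure are assumptions.
import math

def height_of_return(range_m, elevation_deg, mount_height_m=0.5):
    """Estimated height of a radar return above the road surface."""
    return mount_height_m + range_m * math.sin(math.radians(elevation_deg))

# With only ~1 degree of elevation accuracy, the same return at 60 m could sit
# anywhere in a band roughly 2 m tall:
r = 60.0
for err in (-1.0, 0.0, +1.0):
    print(round(height_of_return(r, 1.5 + err), 2))  # ~1.0, ~2.1, ~3.1 m
# That band is wide enough to blur the difference between a trailer's
# underside and a low bridge, which is the ambiguity described above.
```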

~~~
ccozan
:o) Tesla is not using radars?!

I am driving a '14 Audi A6 and I can tell you, the radars + software on it
are the most advanced I ever saw. They even caught a small cat that was trying
to cross the street. The car initiated emergency braking before I even
realized what was happening. The only time this didn't work was during a
blizzard, when the radar sensors got iced over and couldn't see anything.
Otherwise, they have pretty good range, at least 100m.

It also keeps the car in its lane - this is done with a camera, but if you
take your hands off the wheel, you get a maximum of about 30 seconds of
hands-free driving, maybe enough to open a bottle of water, before it gets
really loud.

~~~
joshdickson
> :o) Tesla is not using radars?! I am driving an '14 Audi A6 and I can tell
> you, the radars + software on it are the most advanced I ever saw.

TIL everyone on Hacker News drives a much nicer car than I do. The crash
avoidance measures run in tandem with the autonomous driving features, but the
system (which is mainly licensed technology from Mobileye) just doesn't get
any depth intelligence from it. Presumably if autonomous driving pointed you
into a brick wall, the radar stuff would still trigger the AEB features
(autonomous emergency braking).

------
tuna-piano
I'm saddened, though not surprised, with how many people are so quick to come
to Tesla's defense here.

Autonomous cars will have issues. But we should hold the software makers
accountable for those issues the same way we hold airplane makers /
maintainers responsible in the rare event their planes crash.

As other commenters pointed out, Tesla likely released a half-baked technical
solution too early, and created misleading (and in this case dangerous)
marketing to go along with it: calling it "autopilot" and not requiring hands
on the steering wheel. Tesla screwed up here, and shame on them.

This is not a binary world: I'm not saying that they should be bankrupted or
the autonomous car industry shut down. When a plane crashes due to poor
maintenance - I want that company to suffer market+legal consequences, I don't
want the entire aviation industry to be shut down.

White trucks against a bright sky are not rare enough to call this an anomaly. If
Tesla had installed sensors on their cars to collect data but not enabled
autopilot yet - would this man still be alive today?

The MVP model is great, but not for things where life is on the line. Google
seems to be doing this right.

~~~
DasIch
The product Tesla has built is a perfectly fine assistance mechanism. The
problem is that it's not an autopilot and nowhere near autonomous driving;
nevertheless, Tesla is naming it and marketing it as such.

The problem is that Tesla's marketing here is (borderline) fraudulent. There
should be and need to be legal consequences for that.

~~~
cynix
> The problem is that it's not an autopilot…

An autopilot is a system that assists, but does not replace, the human
operator in operating a vehicle. Tesla's system does exactly this.

> … and nowhere near autonomous driving…

Nobody ever claimed it was autonomous.

~~~
DasIch
Depending on the airport, a plane can take off, fly, and land on autopilot;
you just need to tell it where to go. There is pretty much no unplanned
intervention necessary. That is what the term means to people: autonomous
driving. I don't think most people would draw any distinction between the two.

In a Tesla it would be irresponsible to take your hands off the wheel or not
to pay attention to the road for even a minute. Tesla's "Autopilot" is nothing
like what people imagine an autopilot to be.

------
tptacek
This entire article is based on the premise that two very different situations
are directly comparable.

In the former, people sometimes accidentally stomp the gas instead of the
brakes, then report the opposite.

In the latter, Tesla markets a feature as "Autopilot". It pilots a car into
another vehicle. The feature had apparently been demonstrated at events with
key safety features disabled: attendees were able to use it without their hands
on the wheel. "Autopilot" fails to account for a common real-world traffic
occurrence and its driver has an inadequate mental model of the feature, which
is made up of a complicated mesh of sensors and software.

There may be something to the argument that self-driving car features will
reduce traffic fatalities even as they become responsible for a distinctive
(but hopefully small) subset of those fatalities. But this article doesn't
make that point.

The problem people have with features like Autopilot is that it may not be
reasonable --- at least, as those features are currently designed --- to
expect human drivers to model their behavior accurately enough to use them
safely. This is an HCI problem that people in the computer industry should be
very familiar with. I'm surprised to see it dismissed so easily.

~~~
lisper
> without key safety features disabled

Did you intend this to be a double-negative? It's pretty confusing as written,
but would make sense as a single-negative.

~~~
tptacek
Nope, thanks. I shuffled the sentences around in that paragraph but should
have just rewritten it.

------
lumberjack
>It appears to have been something of a freak accident -- white trailer riding
high against the bright sky, so that the autopilot didn’t detect the truck in
its path.

It wasn't a freak accident. The AI failed. The human driver was distracted and
failed to respond in time. Two failures Tesla has to answer for. IMO the
second one is the gravest. They shouldn't have deployed the technology,
primitive as it is.

~~~
ricardobeat
How come Tesla has to answer for human failure?

~~~
diggan
It depends on what kind of failure. If there is a red button next to a
slightly darker red button and your boss instructs you to press the red one,
you have a 50% chance of failing. But maybe the designer of said buttons
should have thought about giving them clearly different colors instead of
similar ones.

"The Design of Everyday Things" talks about this: often what gets written off
as "human error" is really the fault of whoever designed the thing, rather
than of the user.

The full text is here:
[https://archive.org/stream/DesignOfEverydayThings/DesignOfEverydayThings_djvu.txt](https://archive.org/stream/DesignOfEverydayThings/DesignOfEverydayThings_djvu.txt)

Point being, unless Tesla makes it safe to be used as an autopilot, they
probably shouldn't say it's an autopilot. Otherwise it's not good enough, and
you should fault Tesla for the design rather than the users of the autopilot.

~~~
ricardobeat
I agree that the naming is poor - that doesn't make them liable for human
error though, the driver has to opt in and is made well aware of the operation
modes.

Incidentally I'm carrying that exact book in my backpack right now. I think
the problem here is much deeper, and even opposite to what Norman discusses in
the book. An autopilot is the epitome of "don't make me think" - I want to go
from A to B, no need to worry about the driving. The problem at hand is that
the technology is not mature enough to fully take over from humans, and we are
not good at staying alert in this half-focused state - I don't own a Tesla but
it must be incredibly boring. It's an interesting conflict of tech vs human
nature.

------
soneca
Humans "overreacting" is the reason why airplanes are so safe. This reaction
(IMO "fair" not "over") is important for the development of the driverless
cars tech. It will make them much, much safer and won't kill any innovation.

"Underreaction"[1] is much more dangerous for society than overreaction. I
would cite digital privacy as an example of this.

[1] I'm not sure _underreaction_ is a word or if I invented it. Is
underreaction so underreacted to that it is not even a word?

~~~
the8472
From a utilitarian perspective one can invest too many resources in making
specific things more secure. That happens when other causes of death dwarf a
particular failure mode's ratio of incidence probability to cost of fixing.

If you're accounting for micromorts then we might be at the point where TSA
checks destroy more human life-hours than terrorism and normal plane crashes
combined. Simply reverting airport security to pre-9/11 levels could save more
lives than _infinite_ spending increases for airplane crash safety.

So yes, at some point it becomes an overreaction when resources would be
better spent on other ways of saving lives.

I'm not saying that we're there yet with self-driving cars. We probably are
with planes.

~~~
aianus
This is how I feel about car insurance in Ontario (think $5000 a year for a
teenage male).

On one hand, it's great that if I get in a crash I get crap loads of money for
physical therapy and what not. On the other hand I wasted 2h every day of high
school waiting and sitting on the bus and so did everyone else. Hours I'll
never get back. Would rather have taken my chances :/

------
KKKKkkkk1
The biggest elephant in the room, in my view, is that Tesla has known about this accident
and has been hiding it from investors since May. The only reason why this came
out is the NHTSA investigation, and if it were up to Musk, this crucial
information would have stayed secret until he had completed the dumping of
SolarCity onto Tesla shareholders.

------
AndrewKemendo
This is the most horrifying part to me:

 _Eventually, the National Highway Traffic Safety Administration got involved,
and wrote up a report which found that … yup, these drivers were stepping on
the gas instead of the brakes, often with horrific results. That didn’t save
Audi, for which sales collapsed and which almost pulled out of the U.S.
market._

Despite thousands of hard working engineers, QA, Marketing, Sales, Legal and
design people earnestly working to put out a safe, quality vehicle, Audi was
almost undone by histrionic consumers. Really depressing when you think about
it as a founder.

~~~
carterehsmith
As far as I remember, that Audi had pedals that were narrower than usual and
closer together than usual. So it's not about the customers being histrionic, more like
somebody in engineering not really understanding what he was doing.

------
danielmorozoff
The tesla incident highlights the exact technical/ethical problems of using
such systems. I was just at a conference speaking with the Google automated
car team and the handoff problem is yet to be solved conceptually. The tesla
accident is just a first example of it. Obviously the utility of an autopilot
system becomes questionable if you can't guarantee safety. It's very similar
to medical advances in this way. A 1% error rate is not good if it leads to
death or serious injury, even though such error rates for autonomous systems
are remarkable.

That said, if we look at the statistical likelihood of death in a car,
autonomous systems may very well be much better than humans, but whether we as
a society are ethically okay with coming to terms with a piece of software
that kills 1 in 100M is another question. A drunk driver that kills someone or
themselves is obviously to blame. In this case whose fault is it? The
driver's, Tesla's, or the safety board's for allowing autopilots to be sold?

~~~
pipio21
"In this case who's fault is it? The drivers , tesla or the safety board for
allowing autopilots to be sold?"

Obviously the insurance. In Europe you need to have insurance that pays
whenever it's your fault.

With autopilot the only difference will be that having an accident becomes
less probable. This means that if you pay the normal driver's premium, the
insurance company will be making money when you autodrive.

But most of the time the fault will be on the other side, because the
autopilot is not going to drive like a drunk or sleep-deprived person. It will
be deer or dogs or children on the road when they should not be there.

~~~
danielmorozoff
I wouldn't be so quick to say insurance companies would pick up the tab. I
don't know which country you're from, but in the US getting insurance
companies to pay a claim is very hard, and until a set of laws is established
I would expect dramatic pushback from them. Furthermore, in most accidents
insurance companies try to seek out the party at fault, if it's not crystal
clear, in order to avoid paying.

But an interesting situation would occur if the insurance companies'
incentives pushed them towards favoring automated driving because of reduced
risks and costs. That would be an interesting scenario.

------
ojosilva
I think Tesla screwed up by hyping it as if they had an autonomous driving
system. Mercedes, BMW, etc. did not make that mistake with similar technology.
Just google "tesla autopilot" and you'll see the phenomenon: people sleeping,
playing games, recording videos of themselves.

From that search, here's an interesting article from WIRED warning about the
dangers (and lawsuit proneness) of the technology:

[https://www.wired.com/2015/10/obviously-drivers-are-already-abusing-teslas-autopilot/](https://www.wired.com/2015/10/obviously-drivers-are-already-abusing-teslas-autopilot/)

------
rasz_pl
The best outcome of the NHTSA investigation would be a financial penalty and
an order prohibiting use of the Autopilot brand on a non-autonomous system.
The name was clearly picked to mislead customers into thinking they are buying
more than is really there.

------
Sami_Lehtinen
A bad joke, but this somehow reminded me of the Darwin Awards movie, the
autopilot cruise control scene. Afaik, Tesla Autopilot is an assistant, not a
fully autonomous system, which inherently means that it requires constant
supervision. Using a boat's or plane's autopilot doesn't relieve you from
monitoring where you're going to end up, either. Afaik, some of the news
articles were titled with pure lies. 'Autonomous car', hmm, nope. A Tesla
isn't such a thing. It's always important to acknowledge the true capabilities
of a system and not blindly trust it. This applies to any more or less
automated system. People start blindly trusting it even if it really shouldn't
be trusted. Take robot or AI stock investment bots. Luckily those aren't going
to kill anyone. I know, quite rude. It's sad that what happened happened. But
in a technological sense, we all knew it was going to happen sooner or later.
We're going to see more and more things like this in the future. - I've
blogged several times about this same phenomenon, even before this case.

------
kirrent
I don't get the argument that using the name Autopilot implies that the driver
need not supervise the car. In aircraft, autopilots require supervision by a
pilot who can take over in the event of autopilot failure. In marine
applications autopilots are even simpler. No one would think that an autopilot
in either of these fields, where the word has traditionally been applied, can
function completely independently.

~~~
comicjk
I disagree. Besides specialists, I think most people think that Autopilot
means it doesn't need supervision. Wiktionary's definition:

A mechanical, electrical, or hydraulic system used to guide a vehicle without
assistance from a human being.

"I set the autopilot to due south, so I could get some rest."

------
99_00
As Tesla's sales mix shifts from mostly early adopters to more 'normal'
consumers we are going to see a lot more complaints.

Normal consumers aren't interested in technical details or complicated
explanations. They just want what the marketing promised them.

------
ychoratio
How about a $500 million fine for Mr. Elon Musk, with five years of
supervision of public statements by or for Tesla by a government-appointed
lawyer?

------
paulsutter
- Tesla has collected 130 million miles of autopilot data in 8 months

- Google has collected 2 million miles of data in 5 years

- others have collected 0 million miles of data

One might assume that each 10x of data doubles safety. So, a billion miles of
data is perhaps 8x better than a million. It was bold of Tesla to ship
autopilot, but it gives them a significant data lead while Google's caution
means they may be squandering their once-thought-unassailable technology lead.

My thinking is that first you eliminate the 1% problems, then the 0.1%
problems, then 0.01% and so on. So each 10x of scenarios tested is roughly one
step of improvement.
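
Writing that heuristic out explicitly (it's just my assumption, not an
empirical law):

```python
# The "each 10x of data doubles safety" heuristic, made explicit.
# This is the commenter's assumption above, not an established result.
import math

def relative_safety(miles, baseline_miles=1e6):
    steps_of_10x = math.log10(miles / baseline_miles)
    return 2 ** steps_of_10x

print(relative_safety(1e9))    # 8.0 -> "a billion miles is perhaps 8x better
                               #         than a million", as stated above
print(relative_safety(130e6))  # ~4.3 for Tesla's 130M miles on the same model
```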

~~~
agildehaus
You have to define what data they're collecting that's so valuable. Their
system is so terrible at sensing the world around it that it will run right
into construction barricades if you don't take over:

[https://www.youtube.com/watch?v=iSasPVYzSQ0&t=150](https://www.youtube.com/watch?v=iSasPVYzSQ0&t=150)

It's a fancy cruise control, not self-driving. It doesn't understand anything
but how to follow lanes (sorta, it doesn't work everywhere) and it has some
ability to recognize obstructions in front of it and stop (again, sorta).

Google is teaching their car how to navigate city streets, deal with human
drivers and pedestrians, navigate intersections, handle construction zones,
deal with emergency vehicles and cops directing traffic, etc. All with a
_much_ better sensor package than in a Tesla Model S. Their system doesn't
even need lane markings on the road, as they compare LIDAR data with a pre-
marked reference map. Tesla's system, and every other system that doesn't do
this, breaks down when lane markings are faded or non-existent.

Google's system would not have been prone to this type of accident five years
ago, much less today.

~~~
gajjanag
> Google is teaching their car how to navigate city streets, deal with human
> drivers and pedestrians, navigate intersections, handle construction zones,
> deal with emergency vehicles and cops directing traffic, etc. All with a
> much better sensor package than in a Tesla Model S.

The interesting thing is that no matter how much one anticipates situations a
priori and builds them into the AI/model, there will likely exist situations
not anticipated by the engineers.

For example, it is not clear to me how exactly a situation like
[https://en.wikipedia.org/wiki/Driving_direction#Sweden](https://en.wikipedia.org/wiki/Driving_direction#Sweden)
will be handled. Will this be "patchable" (never mind the logistics and
security issues of providing updates to a nation's fleet), or will it require
a full retraining of the AI? Or more generally, how robust/modular will the
designed AI be to these kinds of situations?

The reason I like this example is because it is a classic instance where
humans have inertia and thus have difficulty in consistently applying the
"switch driving side" rule - I have seen international travellers requiring at
least a few minutes at the wheel to reorient themselves.

In principle one would hope that a modular software solution can handle things
more consistently.

------
rdlecler1
There were 1.07 fatalities every 100M automobile miles last year. Compare that
to 1 fatality in 130M autopilot miles. This is not going to be the last
fatality we see but it's not clear that autopilot is any worse than a human
and statistically this still looks to be the safer option.

~~~
unfamiliar
For comparison, that is 1.391 fatalities per 130M automobile miles. So 0.391
people were saved by Tesla, but that doesn't account for the fact that this is
a new, eerie technology and that people are possibly using it in the safest of
conditions until they get comfortable with it. Also is the average Tesla
driver really representative of the overall population, i.e. would Tesla
drivers have had that many crashes without Autopilot? How much of that 0.391
is due to the safety features of the car itself, and not the success of the
autopilot system?

At any rate, there isn't much room for error in those numbers, and considering
the above this almost looks like evidence that Autopilot is less safe than
normal driving.
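
To put a number on "not much room for error": a single observed fatality
carries a very wide confidence interval. A rough sketch using the standard
exact Poisson interval (a textbook calculation, not anything Tesla or NHTSA
published):

```python
# How wide is the uncertainty around "1 fatality in 130M miles"?
# Exact (Garwood) Poisson interval via the chi-squared distribution.
from scipy.stats import chi2

observed = 1                  # fatalities on Autopilot so far
miles = 130e6

lo = chi2.ppf(0.025, 2 * observed) / 2
hi = chi2.ppf(0.975, 2 * (observed + 1)) / 2

per_100M = 100e6 / miles
print(lo * per_100M, hi * per_100M)   # ~0.02 to ~4.3 fatalities per 100M miles
# The human baseline of 1.07 per 100M miles sits comfortably inside that range,
# so one data point can't show Autopilot is safer (or less safe) than humans.
```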

------
satyajeet23
This is how things evolve.

~~~
Pica_soO
It shall get a chapter in the Orange Catholic Bible, the prelude to the
Butlerian Jihad. ;)

