
Why emergency braking systems sometimes hit parked cars and lane dividers - dredmorbius
https://arstechnica.com/cars/2018/06/why-emergency-braking-systems-sometimes-hit-parked-cars-and-lane-dividers/
======
rayiner
What this shows is that Tesla isn’t even on the path to self-driving
technology. It sounds like “Autopilot” has no world model (or an extremely
basic one)—it ignores stationary objects because it can’t tell whether it’s
approaching something on the side of a road that curves away versus an
obstacle in the lane. It can’t tell because it doesn’t know what the car is
going to do and what the lane is going to do. That’s not the first step to
self driving technology. It’s a fundamentally different and simpler technology
that isn’t going to evolve into real autonomous driving.

~~~
paulie_a
Let's be honest, actual mass deployed self driving tech is probably 20 years
away at minimum. I personally might be able to enjoy it just before I retire.

~~~
makewavesnotwar
Unless companies like Tesla stop worrying about being the next "Great American
Industry" and start focusing on real world problems and their solutions.

If Tesla could develop a network of self driving buses, that would have far
greater reach and impact than trying to solve how to make a private network
for charging and all case-considered self-driving system for asshole-mobiles.
Not that Tesla cars aren't amazing, but they seem like a giant waste of
resources for self-centered people. Not to mention in situations like self-
driving, buses have completely predictable paths and because of that, it's
much easier to optimize for corner cases to work toward a general solution.

With the exception of the handicapped, people don't really need cars.
Bikes/e-bikes work great for any type of commute <30 miles and hauling cargo
<100 lbs with a trailer. Electric skateboards are also great if you have
little to carry and <12 miles to go. Beyond that, self-driving delivery fleets
in the vein of UPS and buses make sense. There's really no valid reason to use
a car unless you're an invalid.

But guys like Elon Musk are happy to burn the earth to the ground to terraform
Mars instead of working on how to terraform places like Arizona or Pakistan
because it looks better to them on paper. Or so it would seem.

~~~
EpicEng
>With the exception of the handicapped, people don't really need cars.

What country do you live in? We don't all live in downtown SF. In fact, most
of us don't. We live outside the city and drive 20 miles to get to work
(you're nuts if you think most people are going to spend hours commuting 60
miles on a bicycle every day). And there are no bike lanes, let alone paths.
And it snows.

>There's really no valid reason to use a car unless you're an invalid.

You're on another planet. It's interesting that you talk about finding
practical solutions to "real problems" yet seem to lack any notion of what
the real problems are.

~~~
OnMyPhone
Anytime I read posts like that, I just assume they have never been to a place
that requires a car to get somewhere within a decent amount of time, or it's
been so long that they've forgotten.

Yeah, I could get an e-bike, but I'd have to make 5x+ the trips to the store
since I doubt it could hold what I can get in one trip now. How would I drive
20 miles one way to work in the winter when there is zero public
transportation here? There are no bike lanes out here, and taking an e-bike on
the road will eventually get you killed; that's if you can stand riding 40
miles round trip in the snow/rain 5 days of the week.

I guess I could move (as has been suggested before lol), but that brings a lot
of new problems that an e-bike won't touch.

It just isn't feasible for most people yet. Hopefully sometime soon, but
definitely not right now.

~~~
EpicEng
Pretty much. I grew up in rural IL, good luck with your e-bikes.

------
fhrow4484
This is a pretty good explanation of the shortcomings of today's assisted
driving capabilities. The marketing around 'self driving' or 'auto pilot'
should really emphasize these.

My guess here is that the lane assist and adaptive cruise systems somehow
thought the car it was following had switched one lane to the right, and so
the Tesla assumed it was in the leftmost lane, without any moving car in front
(hence accelerating back to 75mph). But what it thought was the leftmost lane
was actually the space between an HOV exit and the real leftmost lane.

What's amazing is that, prior to this article, I had no idea that collision
avoidance systems at high speed would ignore any stationary object, even one
right in front of you... This dramatically changes how I'll perceive such
assisted technology from now on.

~~~
joe_the_user
The key thing seems to be that the automatic braking system is really a group
of systems, each designed to satisfy multiple constraints, and these systems
aren't particularly well integrated.

This seems like how I would imagine a "dumb" system would be designed. One
would imagine an "artificial intelligence" (of any sort at all) would find a
way to smoothly integrate all these systems. The problem is that end-to-end
neural nets, or similar approaches intended for such integration, don't seem
ready for prime time.

~~~
Phrodo_00
I would generally prefer smarter (more error prone) systems to be surrounded
by dumber systems. So the break pedal overrides everything, then you'd have a
run of the mill radar breaking system, regular lane following system, and only
after that (and maybe even further back) the more advanced self driving-ish
stuff. It's weird that it isn't designed that way, since that's the way
automated systems in factories and the like are setup.
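
Roughly the layering I have in mind, as a toy sketch (the layer names,
thresholds, and command format are all invented for illustration, not any real
vehicle API):

    # Hypothetical priority arbiter: earlier (dumber) layers always win.
    from typing import Callable, Optional

    class Layer:
        def __init__(self, name: str, propose: Callable[[dict], Optional[dict]]):
            self.name = name
            self.propose = propose  # returns a command dict, or None to abstain

    def arbitrate(layers: list, sensors: dict) -> dict:
        # Layers ordered from dumbest/most-trusted to smartest/least-trusted.
        for layer in layers:
            command = layer.propose(sensors)
            if command is not None:
                return command  # first opinion wins; smarter layers can't override
        return {"throttle": 0.0, "brake": 0.0, "steer": 0.0}  # default: coast

    layers = [
        Layer("driver_pedal", lambda s: {"brake": s["pedal"]} if s["pedal"] > 0 else None),
        Layer("radar_aeb", lambda s: {"brake": 1.0} if s["ttc_s"] < 1.5 else None),
        Layer("lane_keep", lambda s: {"steer": -0.1 * s["lane_offset"]}),
    ]

    print(arbitrate(layers, {"pedal": 0.0, "ttc_s": 1.2, "lane_offset": 0.3}))
    # -> {'brake': 1.0}: the dumb radar layer outranks the lane-keeping layer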

~~~
joe_the_user
_So the brake pedal overrides everything, then you'd have a run-of-the-mill
radar braking system, a regular lane following system, and only after that
(and maybe even further back) the more advanced self-driving-ish stuff_

The problem the article points out is that there's no safe, dumb response to
radar seeing something stopped right in front of a car going 70 mph.

If anything, the systems the article describes are a lot like what you
describe - a bunch of semi-separate systems each of which _can_ do something
if it can be sure that something is safe and which do nothing otherwise.

The problem seems to be that given enough no-easy-answer situations, you wind
up with a false sense of security.

~~~
jstarfish
Why does "in case of anomaly or ambiguity, slow the fuck down, gather more
data and reassess" never seem to be part of the functional logic with these
things?
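
Something like this, as a toy sketch (the thresholds and field names are
invented, not from any real system):

    # Hypothetical "anomaly -> shed speed -> reassess" fallback.
    def control_step(detection_confidence: float, speed_mph: float) -> dict:
        if detection_confidence < 0.5:
            # Sensors disagree / scene is ambiguous: bleed off speed gently
            # and hand attention back to the driver while reassessing.
            return {"throttle": 0.0, "brake": 0.2, "alert_driver": True}
        if detection_confidence < 0.9 and speed_mph > 50:
            # Somewhat unsure at high speed: at least stop accelerating.
            return {"throttle": 0.0, "brake": 0.0, "alert_driver": True}
        # Confident: carry on as commanded (None = leave throttle alone).
        return {"throttle": None, "brake": 0.0, "alert_driver": False}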

~~~
other_herbert
I think it's because at this point they would not be able to move forward at
all...

------
nanis
This article and many others like it are written as if one ought to be ashamed
for thinking self-driving cars should not routinely smash into stationary
objects in their path or kill people:

    
    
        This isn't the only recent case where Autopilot steered
        a Tesla vehicle directly into a stationary
        object—though thankfully the others didn't get anyone
        killed. Back in January, firefighters in Culver City,
        California, said that a Tesla with Autopilot engaged
        had plowed into the back of a fire truck at 65mph. In
        an eerily similar incident last month, a Tesla Model S
        with Autopilot active crashed into a fire truck at
        60mph in the suburbs of Salt Lake City.
    
        A natural reaction to these incidents is to assume that
        there must be something seriously wrong with Tesla's
        Autopilot system. After all, you might expect that
        avoiding collisions with large, stationary objects like
        fire engines and concrete lane dividers would be one of
        the most basic functions of a car's automatic emergency
        braking technology.
    

Regardless of how they came to be, yes there is something seriously wrong with
these systems.

~~~
tracer4201
Didn't hit me that way. Sounds more like we as consumers misinterpret what the
technology is and trust it too much... of course, there's a feedback loop
where companies like Tesla call everything autonomous and hype the marketing
to mislead consumers.

~~~
stevenwoo
It seems like the radar/camera system Toyota uses for its Pre-Collision System
has the same weaknesses as Tesla's sensor suite, but the list of exceptions in
the Toyota manual is pretty brutally honest about the issues. Just for
pedestrians it warns: Some pedestrians such as the following may not be
detected by the radar sensor and camera sensor, preventing the system from
operating properly:

• Pedestrians shorter than approximately 3.2 ft. (1 m) or taller than
approximately 6.5 ft. (2 m)

• Pedestrians wearing oversized clothing (a rain coat, long skirt, etc.),
making their silhouette obscure

• Pedestrians who are carrying large baggage, holding an umbrella, etc.,
hiding part of their body

• Pedestrians who are bending forward or squatting

• Pedestrians who are pushing a stroller, wheelchair, bicycle or other
vehicle

• Groups of pedestrians which are close together

• Pedestrians who are wearing white and look extremely bright

• Pedestrians in the dark, such as at night or while in a tunnel

• Pedestrians whose clothing appears to be nearly the same color or brightness
as their surroundings

• Pedestrians near walls, fences, guardrails, or large objects

• Pedestrians who are on a metal object (manhole cover, steel plate, etc.) on
the road

• Pedestrians who are walking fast

• Pedestrians who are changing speed abruptly

• Pedestrians running out from behind a vehicle or a large object

• Pedestrians who are extremely close to the side of the vehicle (outside rear
view mirror, etc.)

~~~
sib
I'm trying to figure out whether there's actually a meaningful fraction of
pedestrians who _would_ be detected...

~~~
rootusrootus
I imagine that list was drafted at the direction of the legal department, and
they wanted an exhaustive list that might very well exclude basically all
pedestrians. They don't really expect many people to read it unless they find
themselves in court, when it will be too late.

------
amluto
If Tesla actually made Autopilot into an emergency braking (or, more
generally, safety) system, it would be fine. All they’d have to do is make two
changes: switch from automatic steering to a lane departure warning system (so
the driver _must_ keep their hands on the wheel and will always be paying
attention) and possibly rename it.

Of course, their stock price might crash and a whole bunch of customers who
paid for self-driving features might be seriously pissed.

~~~
trackofalljades
Ever been in a Tesla? The current system requires you to keep your hands on
the wheel, and it always has. In fact, there are YouTube videos of idiots
showing how they use clamps and other defeat devices to take their hands off
the wheel, which the car is inherently designed NOT to let you do.

~~~
amluto
Yes, I’ve driven them. Given personal experience and Elon’s claims of drivers
using Autopilot with hands off the wheel, I believe you’re incorrect.

To be clear, when I say the software should be changed to not automatically
steer, I don’t mean that it should more aggressively complain about hands off
the wheel. I mean that the car should drive in a straight line with hands off
the wheel, and it should _also_ alert the driver if it thinks that the driver
is leaving the lane.

------
gwbas1c
It's currently illegal to say that something can cure cancer, and this is
because a lot of people just don't understand the details of fighting cancer
well enough to really know what a cure means.

Perhaps we need to regulate how driver assistance is sold so that people don't
accidentally think they can just take their eyes off the road?

~~~
macspoofing
The only irresponsible car companies have been Tesla (and Uber). All the other
companies positioned their adaptive systems correctly and very conservatively.
Tesla came out and downright asserted their system was autonomous[1].
Predictably, they have since had to scale that back significantly.

[1][https://www.tesla.com/en_CA/videos/autopilot-self-driving-
ha...](https://www.tesla.com/en_CA/videos/autopilot-self-driving-hardware-
neighborhood-long)

~~~
dmode
Tesla literally never says that the Autopilot system is self-driving, and it
has repeated nags and reminders to keep hands on the wheel and pay attention.
This message is repeated during onboarding, in training videos, in the manual,
etc. 99% of Tesla owners follow these instructions. There will of course be 1%
of drivers who will not.

~~~
icc97
> repeated nags and reminders to keep hands on the wheel and pay attention

Above 45mph, hands-free is allowed for 3 minutes [0] when following another
car, or 1 minute if not. There's no eye tracking, unlike Super Cruise [1]. It
only takes seconds to get distracted, as the video of the Uber driver proved.

> There will of course be 1% drivers who will not.

It's the 1% that kill people.

[0]:
[https://en.wikipedia.org/wiki/Tesla_Autopilot#Alerts](https://en.wikipedia.org/wiki/Tesla_Autopilot#Alerts)

[1]: [http://www.cadillac.com/world-of-
cadillac/innovation/super-c...](http://www.cadillac.com/world-of-
cadillac/innovation/super-cruise)

------
tntn
This idea of refusing to use a 3D imaging system because it's theoretically
possible to manage without one is sort of bogus.

No one refuses to use GPS because it's possible to use a map or sextant like
humans can. No one refuses to use radar because you might be able to point at
a plane with your eyes.

Tesla should invest in improved lidar or imaging radar, rather than hoping
they can come up with a neural network to solve all their problems from visual
images.

------
YeGoblynQueenne
>> So the people designing the next generation of autonomous driving systems
are going to need a fundamental philosophical shift. Instead of treating
cruise control, lane-keeping, and emergency braking as distinct systems,
advanced driver assistance systems need to become integrated systems with a
sophisticated understanding of the car's surroundings.

My understanding was that self-driving systems are trained end-to-end to do
simultaneous localization and mapping (a.k.a. SLAM [1]). In other words, the
same model would control breaking, accelerating, lane keeping and everything
else.

In fact, I thought this was why Uber had switched off its car's built-in
breaking system- because their AI had taken over breaking and the AEB on the
car would interfere with the self-driving.

Perhaps that is not the case for Tesla in particular, though?

_______________

[1]
[https://en.wikipedia.org/wiki/Simultaneous_localization_and_...](https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping)

~~~
trashcan
"braking"!

~~~
azernik
Well, if they don't brake for firetrucks and do brake for shiny patches of
ground, then they're _also_ breaking systems.

------
stephengillie
The cost of false positives at freeway speeds is part of the reason:

> _When a car is moving at low speeds, slamming on the brakes isn't a big
> risk. A car traveling at 20mph can afford to wait until an object is quite
> close before slamming on the brakes, making unnecessary stops unlikely.
> Short stopping distances also mean that a car slamming on the brakes at
> 20mph is unlikely to get rear-ended.
>
> But the calculation changes for a car traveling at 70mph. In this case,
> preventing a crash requires slamming on the brakes while the car is still far
> away from a potential obstacle. That makes it more likely that the car will
> misunderstand the situation—for example, wrongly interpreting an object that's
> merely near the road as being in the road. Sudden braking at high speed can
> startle the driver, leading to erratic driving behavior. And it also creates a
> danger that the car behind won't stop in time, leading to a rear-end
> collision._

When training new human drivers, they're taught to steer around obstacles at
freeway speeds instead of braking for them. This is partly because it can take
too long to brake.

The other half is that these are often separate modules calling each other.

> _And like adaptive cruise control, automatic emergency braking is often
> implemented as a separate system from the lane-keeping module. Most AEB
> systems lack the kind of sophisticated situational awareness a fully self-
> driving system would have. That means it may not be able to tell if an
> object 100 meters ahead is in the current travel lane or the next lane
> over—and whether it's a temporarily stopped car, a pedestrian, or a bag of
> garbage._

The auto-braking system could be a basic distance sensor calling the drive-by-
wire API with "FULL STOP". This would definitely be non-ideal for freeway
situations and speeds.

 _" If you're at lower speeds, at 30mph, and it detects a stationary object,
these systems will generally respond and slow the car down and bring it to a
stop," Abuelsamid told us. "When closing speed is above about 50mph, if it
sees a stationary car, it's going to ignore that."_

Indeed, automated braking that can only apply 100% braking is not ideal at
freeway speeds. Collision avoidance at that speed must depend more on steering
than stopping, but systems in cars aren't integrated in a way that would allow
them to create this level of awareness.
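
The asymmetry falls out of basic kinematics: stopping distance grows with the
square of speed. A quick back-of-the-envelope check (0.7 g of braking and a
1.5 s reaction time are rule-of-thumb assumptions, not figures from the
article):

    # Stopping distance = reaction distance + v^2 / (2 * deceleration).
    MPH_TO_MS = 0.44704
    DECEL = 0.7 * 9.81   # m/s^2, assumed hard braking on dry pavement
    REACTION_S = 1.5     # assumed driver/system reaction time

    for mph in (20, 70):
        v = mph * MPH_TO_MS
        d = v * REACTION_S + v ** 2 / (2 * DECEL)
        print(f"{mph} mph: ~{d:.0f} m to stop")

    # 20 mph: ~19 m to stop
    # 70 mph: ~118 m to stop

So at 70mph the system has to commit to a hard stop while the obstacle is
still the better part of a hundred meters away, which is exactly where the
is-it-in-my-lane ambiguity the article describes is worst.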

~~~
PeterisP
Steering around obstacles is not trivial to do safely. Indeed, when training
new human drivers, they're taught _not_ to steer around minor obstacles (e.g.
not-large animals) at freeway speeds - to slow down and accept the impact if
it happens, instead of swerving into the opposite lane or a ditch on the other
side, which has a lot of potential to kill people. Similarly, for many kinds
of debris and garbage on the road, if you can't safely change lanes (or it's a
single-lane road, where you can't go around at all if there's oncoming
traffic), the safest approach is to just drive through the obstacle; possibly
even if you _could_ stop before it, which isn't always the case.

~~~
ghaff
This is probably one of the more complicated freeway cases--which otherwise
are likely mostly easier environments than urban ones. It's not rare to have
debris of various types in and around the roadway. The right action from
braking to swerving/steering (or some combination thereof) depends on a whole
lot of factors.

------
untangle
Until I read this article, I thought that the all-optical adaptive cruise
control system of my BMW i3 was clearly inferior to modern radar-based gear. I
now see at least one use case -- stationary objects -- where the i3 setup has
an edge. (But the i3 setup suffers mightily in low-contrast and low-sun-angle
scenarios.)

~~~
Someone1234
And Tesla's Model 3 has optical cameras too, but they've chosen to continue
using radar only for automatic emergency braking, for reasons known only to
them (easier to maintain due to their older radar-only fleet?).

But, yes, the i3 and Subaru have optical AEB that works great. Something
people like to gloss over again and again in these discussions while talking
about radar's limitations (and ignoring that the Model 3 has under-utilized
forward-facing optical cameras).

------
bartread
A common situation, certainly on British motorways in relatively heavy
traffic, is to be tooling along at 70mph and then to have to come to a halt -
or drop to 20mph - fairly abruptly due to waves of slow-moving traffic that
move _backwards_ along the carriageway. I wonder how well such systems would
handle that kind of situation, where obstacles might not be stationary but the
speed difference is large?

~~~
stevesimmons
It depends on the detector used. Radar in particular is bad with stationary
obstacles because it can't separate reflections off the ground from
reflections off a stationary obstacle. Moving objects are no problem because
their radar reflections get Doppler-shifted and can then be separated from the
ground clutter.
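
The numbers make this concrete. For a 77 GHz automotive radar (the usual band,
though nothing above specifies one), the Doppler shift is f_d = 2 * v * f_c / c:

    # Doppler shift for a radar target closing at relative speed v (m/s).
    C = 3.0e8    # speed of light, m/s
    F_C = 77e9   # assumed carrier frequency, Hz (typical automotive radar)

    def doppler_hz(closing_speed_ms: float) -> float:
        return 2 * closing_speed_ms * F_C / C

    print(doppler_hz(30.0))  # ~15.4 kHz for a target closing at ~67 mph
    print(doppler_hz(0.0))   # 0 Hz for a stationary obstacle: indistinguishable
                             # from ground clutter by Doppler alone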

~~~
bartread
Thanks - makes perfect sense.

------
PhasmaFelis
> _A natural reaction to these incidents is to assume that there must be
> something seriously wrong with Tesla's Autopilot system. After all, you
> might expect that avoiding collisions with large, stationary objects like
> fire engines and concrete lane dividers would be one of the most basic
> functions of a car's automatic emergency braking technology._

> _But while there's obviously room for improvement, the reality is that the
> behavior of Tesla's driver assistance technology here isn't that different
> from that of competing systems from other carmakers._

Is this actually saying that there's nothing "seriously wrong" with an
autopilot that randomly decides to aim at a wall and accelerate, because other
self-driving systems _also_ do that?

This is some very weird apologetics.

------
drblast
I don't follow this that closely, and I've tended to think of self-driving
cars as an inevitable thing that results from the march toward a "perfect"
system.

But I think one of our blind spots as a culture is the assumption that nothing
bad will happen if you drive a car correctly and follow all of the rules, and
I think that assumption might prove to be wrong and that collisions are an
inherent aspect of driving a car in an unpredictable world.

We as humans like to feel like we're in control of our lives, and I think that
we have a prejudice toward looking at situations as if that were true, and
that bad outcomes are the result of bad decisions instead of bad luck.

~~~
ghaff
> But I think one of our blind spots as a culture is the assumption that
> nothing bad will happen if you drive a car correctly and follow all of the
> rules, and I think that assumption might prove to be wrong and that
> collisions are an inherent aspect of driving a car in an unpredictable world.

That assumption is clearly wrong. There are other drivers, but even leaving
that aside for the distant day when most vehicles are autonomous, there are:

- Debris of various kinds

- Mechanical failures of vehicles

- Environmental factors, e.g. ice causing skids

- Road damage

- Animals

etc.

------
DINKDINK
A false positive braking condition at 60 miles per hour is a pretty horrible
user experience.

~~~
dredmorbius
Not so hot for proximate nonusers either.

~~~
amelius
The same holds for false negatives.

~~~
dredmorbius
The acceptable solution set is tightly constrained, both ways.

------
hfdgiutdryg
_And it also creates a danger that the car behind won't stop in time, leading
to a rear-end collision._

Inattentive tailgaters shouldn't be accommodated. Sure, their problem becomes
your problem when you brake hard, but I'd much rather be rear-ended (with head
rests and the whole rear of the car as a crumple zone) than plow head-on into
a stationary object at high speed.

~~~
MBCook
If the car in front of me doesn’t stop until it’s too late (or at all) it
doesn’t matter how attentive I am, I probably can’t stop in time either.

Most of the time I can’t see past the car in front of me. It’s too tall, no
windows (commercial truck), etc. so the ONLY way I know I’m in danger is how
they’re acting.

~~~
astura
You're kidding, right? If you're unable to stop in time you are, by
definition, tailgating.

You're supposed to keep a safe following distance where you avoid a collision
if the vehicle in front of you stops suddenly.

------
xkcd-sucks
Self-driving cars turned out to be chatbots this whole time?! Color me
surprised

------
microcolonel
The tremendous personal arrogance of Elon Musk, and the blissful disregard of
those who would rather agree than question him, will probably ruin the
industry for many others to come. Eventually enough people will have died in
frankly ridiculous circumstances that trust will be lost.

------
raverbashing
I still don't get why Autocrasher systems think it's better to ignore
stationary objects even when driving towards them at high speed!

Sure, slamming on the brakes is a bad idea. But that doesn't mean the system
can't start to slow down or change the direction of the car.

~~~
Pyxl101
I suspect that it's too difficult for the systems to accurately determine
whether an upcoming object is actually in the lane. It's not uncommon to pass
stationary vehicles parked on the side of the road. If you're approaching
those vehicles along a curve, they might seem like they're in the road to a
radar sensor even when they're not. I can see why it's a hard problem:
attempting to detect these probably leads to too many false positives.

Still, you would think that there would be _some_ threshold where the car
decides, "Hey, this stationary obstacle is _right in front of me_. I should
slow down."

You could imagine a next generation self-driving system that uses the
combination of data from multiple sensors as well as maps to detect plausible
obstacles. The mapping data could tell the vehicle when it should expect to
turn. Maybe the vehicle could integrate imaging information from both radar
and stereo cameras, to detect where the lane is, and which obstacles are in
the lane.

Are there good existing techniques in the computer vision community for
synthesizing data from multiple imaging sensors, like radar and stereo cameras
and LIDAR? I'm imagining dumping all this data into an algorithm, getting back
a 3D reconstruction of probable objects around me, along with metadata
describing their velocity, confidence of the assessment, and all that.
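
The textbook baseline is to project all detections into a shared reference
frame and fuse per-object estimates with a Kalman-style filter. A toy sketch
of the scalar core, inverse-variance weighting (the sensor variances here are
invented; real systems track a full state vector and covariance):

    # Fuse two noisy range estimates, e.g. radar and stereo camera.
    def fuse(x_radar: float, var_radar: float,
             x_cam: float, var_cam: float) -> tuple:
        w_r, w_c = 1.0 / var_radar, 1.0 / var_cam
        x = (w_r * x_radar + w_c * x_cam) / (w_r + w_c)
        var = 1.0 / (w_r + w_c)  # fused estimate is tighter than either alone
        return x, var

    # Radar is precise in range (var 1.0), the camera much less so (var 25.0).
    print(fuse(98.0, 1.0, 103.0, 25.0))  # -> (~98.2, ~0.96): radar dominates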

~~~
stephengillie
> _Still, you would think that there would be some threshold where the car
> decides, "Hey, this stationary obstacle is right in front of me. I should
> slow down."_

From the article:

> _When a car is moving at low speeds, slamming on the brakes isn't a big
> risk. A car traveling at 20mph can afford to wait until an object is quite
> close before slamming on the brakes, making unnecessary stops unlikely.
> Short stopping distances also mean that a car slamming on the brakes at
> 20mph is unlikely to get rear-ended.
>
> But the calculation changes for a car traveling at 70mph. In this case,
> preventing a crash requires slamming on the brakes while the car is still far
> away from a potential obstacle. That makes it more likely that the car will
> misunderstand the situation—for example, wrongly interpreting an object that's
> merely near the road as being in the road. Sudden braking at high speed can
> startle the driver, leading to erratic driving behavior. And it also creates a
> danger that the car behind won't stop in time, leading to a rear-end
> collision._

~~~
jstarfish
So... don't brake suddenly. Decelerate, buy some time to collect and analyze
more data, and if the end result still appears to be a collision, then brake
hard.

If a human driver started hallucinating behind the wheel, we would expect them
to do the same, not maintain speed (or accelerate!) through the supposed
object in their path.

This tech was supposed to be _safer_ and more convenient than human driving,
not a simulation of the decision-making abilities of a 12-year-old playing
Grand Theft Auto.

~~~
stephengillie
> _So... don't brake suddenly. Decelerate, buy some time to collect and
> analyze more data, and if the end result still appears to be a collision,
> then brake hard._

This requires the automated brake computers to be connected to the autonomous
cruise computers. According to the article, in most cars these are separate
systems.

It's like one person operating the brake pedal, another operating the gas,
and a third steering the car. When one hallucinates, the others might not
realize right away.

------
daferna
Autopilot would have worked fine if the lane markings were properly painted.
No one will blame Caltrans' terrible maintenance of 101. As I write this, NB
101 near Oregon Expressway has been under construction for over two years with
no painted lane markings (and the lanes suddenly become super narrow), leading
to merge hell and probably at least 3-4 fender benders a week. The Tesla GUI
shows you whether or not it can see and track the lane markings; watching the
road is much more important than "keeping your hands on the wheel". The one
thing I will lay at the feet of Tesla: they really should have implemented eye
tracking.

~~~
wyldfire
You're presuming that in these fatalities the driver's eyes weren't on the
road. What if the autopilot executed a maneuver that attentive drivers
couldn't recover from in time?

~~~
microcolonel
Such as swerving unexpectedly into a concrete barrier which was not
obstructing the road, and accelerating.

------
telesilla
I'm so glad I read this. I've been driving a VW Golf with adaptive cruise
control and enjoying its responsiveness in heavy traffic. Knowing that it
doesn't work with stationary objects will really change how alert I keep my
foot near the brake pedal. The first time I realised what adaptive cruise
control was, I was delighted, but I've become increasingly worried that I rely
on it too much, so I'm interested to try driving a Tesla or similar vehicle to
see how I react cognitively to a car that takes over more control. Anyone have
any similar experience?

------
gargravarr
At present, the 'driver assistance' solutions on the market seem to be
counter-intuitive. As the article notes, they may handle most situations
competently, lulling drivers into a false sense of security, before reacting
completely inappropriately to a rapidly developing hazard, leaving the driver
no time to re-take control.

Tesla explicitly states that a driver using their Autopilot must keep their
hands on the wheel at all times. They place the driver in the role of
supervising the machine. It seems like other driver-assistance technologies
require the same, which is a fundamental misunderstanding of what drivers
expect these systems to do. Drivers would expect these systems to ease their
workload or take over some of the tedium of long highway drives. But these
systems require the driver to be alert and constantly monitoring the system
for abnormal behaviour. As far as I can tell, this is _more_ work than simply
driving manually. On a long drive, I can slip into my own sort of autopilot
where I'm paying full attention to the road, able to react to changing
conditions ahead, but am also entertaining a long train of thought in my head.

Six seconds sounds like a long time to react to a changing situation when
driving (drivers know that accidents happen in split seconds), but if things
are working normally, six seconds is about the length of time you might spend
changing the climate controls or selecting a different playlist. If Tesla are
saying their cars can't be trusted for a single-digit number of seconds
without human supervision lest they total themselves, since it's so often
touted that the driver's hands were off the wheel and thus _he couldn't retake
manual control and prevent the crash_, then these companies are approaching
driver assistance technology in completely the wrong way. More than that,
these systems are outright _dangerous_ because they're implemented so
completely wrong. Taking your hands off the wheel in an unassisted car for six
seconds won't result in it driving itself into the barrier beside you unless
you're extremely unlucky at that exact moment and your front wheel hits a
pothole. For these machines to be this unpredictable is becoming a serious
safety hazard.

I'm going to stick with a car with non-adaptive cruise control and nothing
else. And be extra vigilant around Teslas on the roads.

------
tabtab
Re: "The fundamental issue here is that tendency to treat lane-keeping,
adaptive cruise control, and emergency braking as independent systems."

You won't be competitive with normal human drivers that way. Humans (usually)
have the ability to combine many diverse, and potentially conflicting, pieces
of info into a coherent story. AI and AI-like automation will need to
similarly synthesize diverse clues.

------
stilley2
Am I the only one who thinks the fault primarily lies with the driver? He
trusted his life to equipment he didn't adequately understand and/or put too
much faith in. This is not to say that Tesla and others shouldn't learn from
this and try to improve their systems and driver education, of course. But
drivers should also understand how their cars work and respect their
limitations.

------
jaimex2
Radar on Autopilot is a crutch, and not a great one. I know Tesla are working
hard on moving everything to vision detection. Andrej Karpathy, head of
Autopilot vision, gave a really good talk recently on what keeps him up at
night: datasets, so the cars know what they are dealing with.

It did make me wonder if firetrucks were accidentally left out of the
datasets.

------
kbos87
This article tries to cover for Tesla by relying on the fact that competing
systems also can't avoid similar objects. Sorry, this is further proof that
the engineers and managers at Tesla willingly put unsafe technology on
America's roadways. A hard stop and prison time for someone is in order at
this point.

------
pishpash
Let's just come out and say it: Tesla Autopilot is shit. If it's not, I
challenge anyone to trust it.

~~~
simion314
We can say it's not an autopilot, where "autopilot" means what we think it
means, because Tesla and the fanboys are pulling out other definitions that
mean something different, while marketing and Elon share/promote images and
videos of people not using their hands when using Autopilot.
~~~
rhino369
There's also the question of what the purpose of the so-called Autopilot is.
You have to sit there with full attention and your hands on the wheel. How is
that better than just steering yourself?

------
kuon
I naively thought modern systems used a battery of lidars (with different
wavelengths to provide redundancy, like visible+IR+UV) to create a precise 3D
map of the objects around the car. Coupled with cameras for visual awareness,
this seems like the only sane system.

------
linsomniac
"Sudden braking at high speed can startle the driver"

Yeah, my Model S did this once on the Interstate, for no reason that I could
tell. I nearly had to engage a dry cleaner.

------
lafar6502
So Tesla has an emergency braking system now? I paid for an autopilot.

