
Tesla Autopilot failing where there was a fatal car accident [video] - bobsil1
https://www.youtube.com/watch?v=VVJSjeHDvfY
======
alkonaut
What level of autonomy does Tesla market their latest hardware as? So long as
they never label it as being above SAE Level 2 (i.e. don't take your hands off
the wheel or eyes off the road _ever_) then I have no problem with this tech.
The only thing they need to fix then is the name of the product. They
shouldn't be calling it autopilot as it gives the impression you can answer an
email on your phone while the car is doing the commuting for you.

In my opinion Tesla and all other manufacturers should perfect their Level 2
tech (and not call it autopilot) and then jump straight to Level 5, even if it
means it's 3 decades out. We can't have drivers sleeping at the wheel in level
3/4 cars with patchy software. And that's _regardless_ of whether
manufacturers can prove that their tech has markedly fewer accidents than
often-failing human drivers. I'm happy to accept the risk of crashing with a
sleepy fleshdriver, but I'm not OK with crashing due to a software bug. I think
many share this sentiment - which means there is a double standard, setting the
bar _much_ much higher for autonomous systems than it will ever be set for
humans.

~~~
OscarCunningham
>I'm happy to accept the risk of crashing with a sleepy fleshdriver, but I'm
not OK with crashing due to a software bug.

I find this very hard to understand, especially when you're making the choice
on behalf of others. Would you drive your children to school, knowing that
you're making them less safe, just so that if they do die it's your fault
rather than that of some programmer?

~~~
tremon
What's the hard to understand part? The programmer is not the one facing a
life-or-death decision when a driving hazard occurs -- the driver is.

~~~
msl
It seems to me that people are actually more likely to take unreasonable risks
when they have to make these calls on the spot. We often overestimate our
ability to handle difficult situations and are prone to valuing "getting there
in time" more than playing it safe.

NASA is known to define a set of strict and absolute mission rules before the
beginning of a mission for just this reason. See for example the rules defined
for various Apollo missions [1]. Note how those rules define all sorts of
limits and abort-triggering conditions. This is done because there is always a
desire to accomplish the mission even when something has gone wrong.

[1] https://www.hq.nasa.gov/alsj/alsj-MissionRules.html
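
As a toy illustration of that idea (a minimal sketch in Python; the rule names and thresholds below are invented for illustration, not NASA's actual mission rules), the point is that the limits and the abort decision are written down in advance, so nobody has to weigh them up in the heat of the moment:

    # Toy sketch: limits fixed before the mission, checked mechanically during it.
    MISSION_RULES = [
        ("fuel remaining >= 6%",    lambda t: t["fuel_pct"] >= 6.0),
        ("comms with ground up",    lambda t: t["comm_ok"]),
        ("descent rate <= 35 ft/s", lambda t: t["descent_fps"] <= 35.0),
    ]

    def violated_rules(telemetry):
        """Return every pre-agreed rule the current telemetry violates."""
        return [name for name, ok in MISSION_RULES if not ok(telemetry)]

    # Any violation means abort -- no on-the-spot judgment about pressing on.
    broken = violated_rules({"fuel_pct": 4.2, "comm_ok": True, "descent_fps": 20.0})
    if broken:
        print("ABORT:", "; ".join(broken))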

------
gregoriol
There is another one: https://www.youtube.com/watch?v=KEaJW-Mk0Bo

~~~
danso
Same driver just posted a new video, this time during the day, in which AP
behaves correctly:
[https://www.youtube.com/watch?v=bkDsozA1Pb8](https://www.youtube.com/watch?v=bkDsozA1Pb8)

~~~
gregoriol
That's the scary and also dangerous part: the behavior might be OK at one
point and fail at another. Tesla's (and the NTSB's) investigations will likely
find out what went wrong in this particular accident and on this part of the
road, and will likely improve the systems. But users are certainly not
sufficiently aware of that kind of risk: they might try Autopilot, see that it
works, and trust that if it worked once, it will work again and again and again...
On a daily commute for example, if it worked many days in a row, and then
fails for some reason (technical, external, whatever), the user will likely
fail to notice.

~~~
unityByFreedom
> On a daily commute for example, if it worked many days in a row, and then
> fails for some reason (technical, external, whatever), the user will likely
> fail to notice

Yup. This is why I won't get into a self-driving car until they're fully
driverless.

Drivers become less attentive as the system gets better.

------
downandout
I get that the system is designed to follow the white line, and that in this
nightmarish freeway design, the white line leads straight into the divider.
But I thought that the whole point of LIDAR/whatever Tesla uses for object
detection was to be able to detect upcoming objects. This seems to have been a
problem in the Uber pedestrian crash as well - while it was clear from that
video that a human driver probably couldn’t have seen the woman, a LIDAR-equipped
vehicle should have easily detected that she was there and avoided
her (and that Uber vehicle had full LIDAR, I believe).

So what is going on here? I’m not a radar expert. Is there some fundamental
flaw with this technology that is preventing it from “seeing” all objects
under all circumstances? Or does the problem arise from faulty driving
logic...perhaps the object detection system reports the object, and for
whatever reason the system interpreting that data ignores it?

~~~
e98cuenc
Tesla does not use lidar, only regular cameras.

~~~
alkonaut
Any car with distance-sensing cruise control or collision avoidance at least
has some kind of forward-looking radar, right? So it should at least be
able to panic-brake when noticing the mistake? It looked like the car had gone
into the barrier at full speed.

~~~
simion314
Tesla has this radar tech, but it sometimes returns confusing data, so they
decided to add code to ignore some of the detections:
https://www.tesla.com/blog/upgrading-autopilot-seeing-world-radar

------
sauravt
The loss of even one life is far greater than the convenience gained by all
other passengers combined. Current autopilot technology just isn't ready for
the roads yet (some of which, like the one in the video, are constructed with
gross incompetence). This and the many such accidents to follow cannot be fixed
with simple software patches; instead, self-driving car manufacturers should be
required to install magnetic lane markings on major highways and routes.

~~~
Scaevolus
It's not just convenience. Human drivers make mistakes and die all the time.
If every car in the US had autopilot and half the rate of fatal crashes (not
perfect!), you'd save 18,000 lives each year.
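
(Rough arithmetic behind that figure, assuming on the order of 37,000 US road
fatalities per year: 37,000 × 0.5 ≈ 18,500, so halving the fatal-crash rate
would save roughly 18,000 lives annually.)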

~~~
danso
There's no evidence that autopilot has that good of a safety effect yet.

------
sschueller
Interesting how the Autopilot section on their website for the Model S is no
longer there as it was in 2016, yet the other items are still there [1].

Or take this page [2]: specifically, the section "Full Self-Driving Capability"
makes it sound like the car can completely drive itself.

[1] https://web.archive.org/web/20160726173047/https://www.tesla.com/models/

[2] https://web.archive.org/web/20171116234123/https://www.tesla.com/autopilot

------
Gustomaximus
1) Given that Tesla has been doing passive data collection, wouldn't a
well-traveled road like this have clear data showing that the road splits with
nothing in the middle, so geo-positioning should have prevented this?

2) How did the radar/camera not see the barrier?

3) Given their update capability, I'm surprised this section of road hasn't
been fixed, or that 'autopilot' hasn't been disabled for this section.

~~~
improbable22
Automated mapping seems like it ought to be able to solve this. Even if the
cameras are not as good as human eyes, all the other cars' cameras could help
immensely.

Why isn't every Tesla tracking all the lanes it can see, and the behaviour of
every option at every exit, and flagging cases where one of the other lanes
does something weird, or dangerous, half a mile later?
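
A minimal sketch of what that fleet-sourced flagging could look like (Python;
all names and thresholds here are hypothetical, not anything Tesla has
described):

    # Hypothetical sketch: flag road segments where fleet traces disagree on lane geometry.
    from collections import defaultdict
    from statistics import pstdev

    observations = defaultdict(list)  # segment_id -> lateral offsets (m) of the traced lane line

    def record(segment_id, lane_offset_m):
        """Each passing car reports where it saw the lane line on this segment."""
        observations[segment_id].append(lane_offset_m)

    def risky_segments(min_traces=50, max_spread_m=0.5):
        """Segments where many cars saw the 'same' lane line in very different places."""
        return [seg for seg, offsets in observations.items()
                if len(offsets) >= min_traces and pstdev(offsets) > max_spread_m]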

------
beobab
The way the concrete is laid there gives misleading cues.

I had a funny brain-wobble whilst watching the video, playing “hunt for the
lane”.

I guess the car decided on the wrong one, with fatal consequences.

~~~
joshvm
What should the road look like (without the low dynamic range of the camera)?
It looks like that's a junction and there should be a hashed white area
between the slip road and the highway. At least that's what you'd see in the
UK.

http://www.highwaycodeuk.co.uk/uploads/3/2/9/2/3292309/rule-132-reflective-road-studs-mark-the-lanes-and-edge-of-the-carriageway_orig.jpg

In Britain you can contest parking fines if the double yellow lines are
patchy, because arguably you could have missed them. There are also (I think)
some laws that say that, in order to enforce fines, regions must be correctly
marked, so the council has a duty to keep them well-painted. Seems like Tesla
would argue similarly here.

~~~
Piskvorrr
Indeed, that's what's supposed to be there. Alas, "do not worry about things
that shouldn't exist but do, or about things that don't exist but should" is
not a viable strategy on the road.

------
duncan_bayne
I think the problem is that, as things stand [1], autonomous cars are an
"AI-hard" problem, in that you'd need to build something with general
intelligence equivalent to a human's.

https://en.m.wikipedia.org/wiki/AI-complete

[1] One example of how things might change is if societies decided to build
"smart roads" to assist AI drivers, and then banned non AI drivers from them.

~~~
simion314
A smart road could be like a train track, completely automated: you would
enter a cart, input a destination, and the centralized system would compute
your route and speeds. This needs a lot more infrastructure, but theoretically
you could have very high speeds and no stops at intersections, just small
slowdowns to let carts pass near each other at intersections.

------
danso
After last week's crash, when Tesla claimed [0]:

> _Our data shows that Tesla owners have driven this same stretch of highway
> with Autopilot engaged roughly 85,000 times since Autopilot was first rolled
> out in 2015 and roughly 20,000 times since just the beginning of the year,
> and there has never been an accident that we know of. There are over 200
> successful Autopilot trips per day on this exact stretch of road._

Do we know how many of those trips were made with the same software version as
the victim's? And do we know how many of those trips were in that particular
lane?

[0] https://www.tesla.com/blog/what-we-know-about-last-weeks-accident

------
dawnerd
Not just a Tesla problem, as I reckon all cars using lane markings would have
done the same. My Passat constantly tries to pull me into off-ramps or
wherever there's old paint or other weird markings.

~~~
amelius
When exactly did we start to allow these half-baked products on the road?

~~~
dx034
Other manufacturers market that product for collision avoidance, not to drive
the car for you. These features can help to avoid accidents when the driver is
tired or doesn't pay attention.

~~~
simion314
And then some SV startups decided to abuse people's trust and branded a
lane-assist feature as "autopilot", putting tons of marketing into showing how
great the autopilot is and how very soon we won't need a human driver at all.

------
bkovacev
Correct me if I am wrong, but the car followed the bolder line instead of a
barely visible one? Could it be sunlight + concrete + bad vision of the lane
line?

------
coolspot
Wow. This and the one in the comments are scary; I could absolutely miss a
situation like that.

------
PoppaSmurf
I've read the concept that a self-driving car cannot react to everything it
sees in front of it because otherwise it would basically be inoperable, i.e.,
it has to disregard things like overhead signs, bridges crossing over the
highway, etc., because braking for those is unnecessary and possibly dangerous.

But when the car detects these things and decides to disregard them, does it
just forget about them? Thinking back to the Tesla that ended up going under
the truck crossing its path, supposedly the car didn't react as you would
expect because it assumed the truck was an overhead sign. But I would think
that an overhead sign would begin to look different from a truck crossing the
road in front of you as you continue to get closer to it. Is the software not
capable of at first deciding to disregard something up ahead but then changing
its mind as it gets closer? Likewise with a concrete stanchion that keeps
getting closer and closer? Does the software allow for it to second-guess
itself?

------
babaganoosh89
This seems very hard for Tesla to fix with a simple software patch. If the car
is using cameras to look at the lines on the road plus radar, which cannot see
stationary objects well, then I don't see how they'd have enough sensor
information to fix this reliably.

~~~
simion314
They have many cars passing those places; can't they improve the maps they
have? In places where the video inputs and Tesla's maps conflict, decide to
slow down and ask the human to take over, then analyze the issue and add it
as a unit test.

~~~
Piskvorrr
Improving maps only papers over the issue at this one particular place. What
happens when this happens somewhere else, where there's no marking in the map
instructing the lane-keeping software "this is a known bug, follow the _right_
lane marker here"?

~~~
simion314
I don't mean marking just this spot, but something like:

1. This is a new road: "autopilot" will not engage, but will collect data and
add it to the map.

2. After 100 (or some better number of) cars have passed that point and the
maps are updated enough, AP can engage.

3. If radar+camera detects something that does not match the maps, the car
slows down and asks the human to take over; the maps are used as a way to
double-check what the AI wants to do.

This system will not work on small roads with few Tesla cars on them, but it
would prevent problems on big roads and collect useful data to add to the unit
tests (a rough sketch of this gating follows).
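
A rough sketch of that gating logic (Python; the names, the threshold of 100
passes, and the tolerance are all hypothetical, just to make the three steps
concrete):

    # Hypothetical sketch of the three-step gating described above; not Tesla's actual logic.
    from dataclasses import dataclass, field

    MIN_TRACES = 100  # step 2: fleet passes required before engaging

    @dataclass
    class Segment:
        traces: list = field(default_factory=list)  # fleet-collected lane geometries
        map_geometry: list = None                   # agreed geometry once enough traces exist

    def autopilot_decision(segment, sensed_geometry, tolerance_m=0.5):
        if len(segment.traces) < MIN_TRACES:
            # Step 1: new or rarely driven road -- collect data only, do not engage.
            segment.traces.append(sensed_geometry)
            return "DO_NOT_ENGAGE"
        if any(abs(a - b) > tolerance_m
               for a, b in zip(segment.map_geometry, sensed_geometry)):
            # Step 3: live camera/radar disagrees with the map -- slow down,
            # hand back to the human, and log the case for analysis and tests.
            return "SLOW_AND_HAND_OVER"
        # Step 2: enough history and the sensors agree with the map.
        return "ENGAGE"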

~~~
Piskvorrr
In other words, better roads get better, worse roads get nothing. Isn't that
reinforcing the current problem of "SDVs drive well in areas where SDVs drive
well; beyond the map edge there be dragons"? See my reply earlier:
[https://news.ycombinator.com/item?id=16706856](https://news.ycombinator.com/item?id=16706856)

~~~
simion314
Isn't this better than no road getting safer? At least high-traffic roads get
safer, with less damage and fewer lives lost, and fixing the bugs discovered
there could improve the situation on smaller roads.

My idea was about how they could act on these situations with the driver-assist
tech (stupidly named autopilot). I am not of the opinion that true self-driving
cars are around the corner, not even in major US cities.

~~~
Piskvorrr
Sure, for autonomy ~level 3, this would be okay. I, too, dislike the rhetoric
"just patch this one specific killing bug and voila, full auto!"

------
TheSpiceIsLife
It'd be interesting to see other vehicles with adaptive cruise control / lane
keeping try this same experiment.

~~~
gambiting
They would fail too, but I'm not aware of any adaptive cruise control from any
manufacturer being sold as "autopilot".

In either case, I think the biggest (and insurmountable) obstacle to self-
driving cars will be the fact that they rely on having clear signs, clear
markings, clear lanes, signs not rotated by the wind or vandalised - and it's
going to be impossible (due to cost) to maintain our infrastructure to the
required level. Yes, Tesla can patch their system to work in this specific
road situation - but there will be countless others.

~~~
simion314
But what happens when you have tons of self-driving cars and it starts
snowing? Do all these cars get stuck until the snow stops, and how can you
clear the roads if the cars are stuck? (I am referring to a scenario where
these are actual self-driving cars and not driver-assist systems with a human
behind the wheel.)

------
heurist
I wonder if autopilot networks will eventually reroute vehicles around bug-
prone locations like this. The only problem is recognizing bug-prone
locations. It is interesting to think of some geographic locations as wells of
danger that are best avoided...

------
m4tthumphrey
UK driver here. If self-driving cars rely on lane markings to work out where
to sit on a road, how do they work on country lanes or newly laid roads?

