
Tesla crash victim had complained about auto-pilot in same location - ucaetano
http://abc7news.com/automotive/i-team-exclusive-victim-who-died-in-tesla-crash-had-complained-about-auto-pilot/3275600/
======
antirez
"Our data shows that Tesla owners have driven this same stretch of highway
with Autopilot engaged roughly 85,000 times since Autopilot was first rolled
out in 2015. ... There are over 200 successful Autopilot trips per day on this
exact stretch of road."

Whoever wrote this statement does not understand software. If there is some
kind of Heisenbug, those numbers are too small to prove that the software is
OK, and the fact that the guy who died had reported an issue there, and later
crashed there, looks like a very interesting hint about a potential software
bug that should be investigated. The Tesla statement is the equivalent of
"works on my laptop" at a bigger scale. Also consider that 85k times since
2015 means that potentially only a fraction of those trips were executed with
the latest version of the software. Moreover, the street layout may now be
somewhat different, triggering some new condition.

~~~
shawn
Heisenbugs in AI are also interesting. As computers approach sentience, it
will become progressively harder to explain their behavior.

Why did you eat a bagel instead of eggs this morning? Usually there isn't a
scientific answer.

In the present, AI models are already so complicated that it seems hard to get
reproducible diagnostic results, short of just saving every frame of data that
the car's sensors pick up. And for video that seems rather prohibitive.
Imagine collecting all sensor data for each of those 200 trips per day along
the entire stretch of road.

~~~
smartician
Aren't many AI methods also inherently probabilistic? That would make it by
definition impossible to definitively explain any particular behavior.

~~~
baddox
What do you mean by probabilistic? If you mean that the model takes input and
outputs one or more labeled probabilities (e.g. "90% confidence the input
photo is a dog, 10% confidence the input photo is a cat"), then yes, I believe
that many AI systems work this way. If you mean _random_, in the sense that
the system may return different results given the exact same inputs, then I'm
not sure if there are AI systems that work that way.
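
To make the distinction concrete, here's a minimal sketch (Python, hypothetical
numbers): the softmax step is deterministic, so identical inputs always give
identical confidences, and randomness only appears if the system goes on to
sample from that distribution.

    import math
    import random

    def softmax(logits):
        # Deterministic: the same logits always yield the same probabilities.
        exps = [math.exp(x) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    labels = ["dog", "cat"]
    probs = softmax([2.2, 0.0])
    # -> roughly [0.90, 0.10]: "90% dog, 10% cat", identical on every run.

    # A system that is random in the second sense would sample a label:
    label = random.choices(labels, weights=probs)[0]
    # -> may return "dog" or "cat" on the exact same input.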

~~~
joshuamorton
There are. "Monte Carlo methods" is the keyword you're looking for. AlphaGo
(Monte Carlo Tree Search) is an example of one such AI.

Obviously you can set the RNG seed to be the same every time too, but even
that only works if your system is wholly synchronous, which a car probably
isn't.

Note that I doubt Monte Carlo methods are common in the autonomous vehicle
space.
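
As an illustration (not anything from a real vehicle stack), a minimal sketch
of both points: a Monte Carlo estimate varies run to run, a fixed seed makes it
reproducible, and only for as long as execution stays synchronous.

    import random

    def estimate_pi(samples, seed=None):
        # Monte Carlo: throw random points at the unit square and count
        # how many land inside the quarter circle.
        rng = random.Random(seed)
        hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                   for _ in range(samples))
        return 4.0 * hits / samples

    estimate_pi(100_000)           # a different answer on every call
    estimate_pi(100_000, seed=42)  # the same answer on every call...
    # ...but only because this loop is wholly synchronous. If several
    # threads shared the RNG, scheduling would break reproducibility.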

~~~
chaboud
I'd expect Monte Carlo methods to be used in a number of cases that have
deterministic time envelopes for evaluation. Randomized selection and
evaluation can be incredibly effective. They also resist vulnerabilities to
degenerate structured inputs.

I'm not in the automotive space, but I'd be surprised if there were a viable
self-driving car team _not_ using Monte Carlo methods somewhere in the vehicle
stack.

~~~
joshuamorton
Yeah, this was unfortunately worded. There are subproblems for which
MC/randomized methods fit well, but in general those circumstances are well
understood.

------
segmondy
One of the things you learn in flying is not to force it. If conditions are
not safe, just forget about it. It seems the same type of judgement needs to
be made for auto-pilot: if your auto-pilot acts up at all, just turn it off
and don't use it until it's resolved. All you need is one incident to be dead,
so if you get a chance to observe any abnormalities, consider it a blessing.

Something that would also be great is a sort of "crash dump/bug report"
button for these cars. If at any time your car does something unsafe, you can
hit that button. The car will save the last 60 seconds so the manufacturer can
analyze it to debug and figure out what went wrong.
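
The "last 60 seconds" part is essentially just a ring buffer over the sensor
feed; a minimal sketch (Python, hypothetical frame rate and API):

    import collections

    class IncidentRecorder:
        # Keep only the most recent N seconds of sensor frames in memory.
        def __init__(self, seconds=60, fps=30):  # fps is hypothetical
            self.frames = collections.deque(maxlen=seconds * fps)

        def record(self, frame):
            # Older frames fall off the front automatically.
            self.frames.append(frame)

        def dump(self):
            # Driver hit the bug-report button: snapshot the buffer so it
            # can be shipped to the manufacturer for analysis.
            return list(self.frames)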

I was so excited about auto-pilot and dreamed of getting in my car and
sleeping while it made cross-country trips. So much for that, that seems way
far out.

~~~
herbst
> I was so excited about auto-pilot and dreamed of getting in my car and
> sleeping while it made cross-country trips. So much for that, that seems way
> far out.

This is more of a hijack than a direct critique, but I think the general
issue is the assumption that we need a private vehicle for that.

Trains have been perfectly capable of bringing me from A to B while I sleep
for years now.

Edit:// by years I mean years. Before that, pressure made longer rides
usually more uncomfortable than they are now with modern trains. Probably
only because my area has plenty of mountains, though.

~~~
mohaine
In the US you will have lots of time to sleep while waiting for an oncoming
train to clear.

In most parts of the country we don't have dedicated passenger tracks. Amtrak
(our passenger line) just rents track time on the freight lines. The problem
is that freight lines are often single-track lines, so if there is oncoming
freight you just wait until the freight clears. Sometimes this is hours.

Until we have dedicated passenger lines, long-distance train travel is a non-
starter here. It was too slow for my 75-year-old uncle on a sightseeing tour.
His exact words were "fun, but never again".

~~~
greglindahl
I've ridden a lot of Amtrak and it's never been on a single track line. Are
you sure that's an actual problem? Long delays, sure, but is that the reason?
Seems to usually be train equipment failure.

~~~
tbihl
Having ridden a lot of Amtrak almost makes it less likely that you'd have
experienced the single track problem, since quality rail experiences in the US
are pretty concentrated such that someone who has good train experiences
probably lives/works in an area where that is the rule. On the East coast,
pretty good service arguably extends as far south as Richmond, but past that
sitting on the tracks is very common. And with that you get the classic
downward spiral of crappy transit. Ride a train in SC or GA, and you'll see
that anyone with the money for a plane ticket has abandoned rail there.

~~~
greglindahl
I mostly rode the train from SC to Boston and back (Southern Crescent) ...
it's dual track all of the way. Also DC to Chicago, southern route, dual track
all of the way.

~~~
james-mcelwain
I _love_ Amtrak and am willing to put up with the delays, but the DC to
Chicago trains stop all the time, especially in the mountains, to wait for
higher-priority trains to pass. Sometimes it's only for a few minutes at a
time, but sometimes it adds up to hours.

------
anonytrary
I can't believe this person kept using it. If I had noticed a bug in auto-
pilot and complained about it, I would be way too scared to ever use auto-
pilot again. Personally, I never use auto-pilot because driving is piss easy,
as it's designed to be.

 _Perfect_ self-driving cars are a nearly impossible feat to accomplish on an
unbounded track. I can only imagine automated driving in a system which has no
room for error. Examples include: tunnels under the ground, rails on the
ground (as in trolleys, trains, etc.), or anything else that vastly reduces
the entropy involved in driving.

With self-driving cars on current roads, it will probably take years to get
from 1% error to .1% error, and decades to get from .1% error to .01% error,
which isn't even good enough. Perhaps it will take a century or longer to
develop the required artificial intelligence to make self-driving cars perfect
"enough". There's just too much room for unique problems to spawn. Bounding
vehicle freedom seems to be the only way forward.

~~~
WhompingWindows
Your numbers about error percentages don't make sense. Ideally, you'd want an
outcome measure like fatalities per million miles or accidents per 100k miles,
not "% error", which is vague.

Furthermore, look at the actual data we have right now. SDC makers put out
data in California about their "disengagement rate", which is how many times
the human drivers took over from the software. Waymo has steadily improved
that rate over the past few years; now they are driving many hours without
disengagements. Look at the link below, page 4: you'll see they have 63
disengagements over 350k miles. That's 1 per 5.5k miles, so these cars are
driving for days without a human takeover.

They will not need their own infrastructure; that would not be economically
viable. They will go on the roads we have or they won't go at all. Tunnels
are going to be reserved for high-density point-to-point travel, if the Boring
Company or others ever get to scale...

~~~
anonytrary
Then let's add some perspective. You must be referring to this[0] report. If
the average person puts on 1,000 miles per month[1], then that means they'd
have to deal with a disengagement (a mishap) at least twice a year, which is
not acceptable for fully autonomous driving. I'm going to define a "fully
autonomous vehicle" as "a vehicle which should not _ever_ require me to sit in
the front seat and control it under any conceivable circumstance".

Put differently, I should be able to lie down for a nap in the back seat and
wake up at my destination without any chance of disengagement during my
entire lifetime. At the current rate of 1 mishap per 5,500 miles, I would be
dead after about 6 months.

Assuming a human lives to 75 years (we should really be using 75 years minus
16 years, but it's unimportant), a lifetime of driving is about 1,000 mi/mo x
12 mo/yr x 75 yr = 900,000 miles. I don't even want the probability of
encountering a mishap to be once per _lifetime_, let alone once per 6 months.
One mishap per 900,000 miles isn't enough, because, on average, I'd encounter
one disengagement in my lifetime. Assuming we're striving for a world where 7
billion people can drive without a single incident in 75 years (a vast
underestimate), we need the probability of a mishap to be less than once per
7,000,000,000 humans x 900,000 mi/human = 6.3 x 10^15 miles.

1/5.5e3 is not even _close_ to 1/6.3e15. We're talking about 12 orders of
magnitude in our error rate. I'd say we're laughably far away from our goal.
We've got a _long_ way to go.
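
Spelling the arithmetic out (Python, using the numbers above):

    lifetime_miles = 1_000 * 12 * 75         # 900,000 miles per driver
    target = 7_000_000_000 * lifetime_miles  # miles per mishap we'd need
    current = 350_000 / 63                   # Waymo: ~5,555 miles per mishap
    print(f"{target:.1e}  {target / current:.0e}")
    # 6.3e+15  1e+12  -> roughly 12 orders of magnitude apart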

[0]
[https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115...](https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES)

[1] [https://www.fool.com/investing/general/2015/01/25/the-
averag...](https://www.fool.com/investing/general/2015/01/25/the-average-
american-drives-this-much-each-year-ho.aspx)

~~~
nemothekid
> I don't even want the probability of encountering a mishap to be once per
> lifetime,

This doesn't seem reasonable - Waymo's report doesn't go into enough depth
about each disengagement to warrant this sort of extreme reliability.

If "2 disengagements per year" were at most fender-benders - something I'd
wager humans do way more than twice per year - that would be a very different
story than if those 2 disengagements were life-threatening. Sure, you'd wake
up from your nap, but you wouldn't be dead, and at most you'd have to exchange
insurance information.

~~~
anonytrary
> This report covers disengagements following the California DMV definition,
> which means "a deactivation of the autonomous mode when a failure of the
> autonomous technology is detected or when the safe operation of the vehicle
> requires that the autonomous vehicle test driver disengage the autonomous
> mode and take immediate manual control of the vehicle.”

So, you're right, there's no clear distinction, but I would further argue that
it doesn't matter. Even if only 1/1000 disengagements are fatal, my conclusion
remains the same. I think we're splitting hairs at this point, though.

Even if not fatal, I highly doubt a significant fraction of such events (as
defined above) would allow me to take a nap upon departure and wake up at my
destination, so it would still be unacceptable to me. I guess we have to agree
on what an acceptable end-game is for fully autonomous vehicles. If you think
"waking up on the shoulder exchanging insurance" is acceptable, then that
would indeed change the numbers (but by how much? Two, maybe four orders of
magnitude?).

Humans get into fender-benders all the time, but surely we'd strive to
eradicate this inefficiency in the automated driver. I think this is still an
active area of debate; some assembly-line work can be made more efficient with
machines, but we've seen humans out-perform machines in other types of work. I
think driving tends to utilize more reactive, intuitive "System 1"
thinking[0], so I imagine that humans will be vastly better than machines at
driving for a very long time.

[0]
[https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)

------
WillPostForFood
The path into the barrier looks a lot like a lane.

[https://imgur.com/a/iMY1x](https://imgur.com/a/iMY1x)

And the old striping is lightly visible as well.

~~~
dixie_land
Looks like that would trick a human driver under certain lighting conditions.
Between Seattle and Bellevue a stretch of freeway looks just like that and
trips me up every time.

~~~
cesarb
Either the same place or a nearby one (same pair of highways) did trick a
human driver two years ago:
[https://www.ntsb.gov/investigations/AccidentReports/Reports/...](https://www.ntsb.gov/investigations/AccidentReports/Reports/HAR1701.pdf)

"[...] when it entered and traveled in an unmarked gore area, rather than the
intended high-occupancy-vehicle (HOV) lane, and collided with a crash
attenuator. The 990-foot-long gore, with an unmarked inside area, separates
the left exit HOV lane for State Route 85 from the US-101 HOV lane."

~~~
pacificmint
Not the same place, nor nearby. From the picture on page 24 it's clear that
this is the 85/101 interchange in south San Jose, about 20 miles away from the
other 85/101 junction where the Tesla accident happened.

The type of accident does seem comparable, though.

------
MBCook
If he had seen this issue multiple times before why would he keep using
autopilot in that area? That seems like a very odd decision.

~~~
ucaetano
True, but let's not blame the victim.

~~~
twblalock
I'd definitely blame an airline pilot who crashed because he relied on
autopilot in a situation he knew was not handled well by the autopilot. (I
would _also_ blame the autopilot.) I don't see why the driver of a car with
any kind of semi-automated system should be held to a lesser standard.

It sucks that the driver died, and it sucks that the Tesla autopilot system
had problems handling that kind of situation, but that does not mean the
driver is blameless. He put himself and the people in the cars around him at
risk by using the autopilot feature on a stretch of road where he knew it did
not work well.

~~~
jschwartzi
> I don't see why the driver of a car with any kind of semi-automated system
> should be held to a lesser standard.

Because the driver didn't receive any training from the manufacturer.
Airplane pilots, in contrast, receive a ton of training, right down to how to
fly a specific type of aircraft (single-engine, twin-engine, instrument
flying, etc.). Additionally, the manufacturer will provide training to pilots
on how to operate any nifty features of the commercial aircraft.

I don't believe Tesla provides any training whatsoever on how to use these
features. And I'm not aware of any mechanisms preventing untrained users from
activating these features. A tutorial that you can click through does not
count, because you cannot establish rapport with the trainee like you would
in person-to-person training.

Back when cars were being commoditized the dealer would often provide training
to new drivers. And in all states new drivers are required to take a practical
test to demonstrate that they are competent to drive. Does Tesla require their
users to prove any sort of understanding or competence before they unlock
Autopilot?

You might argue that requiring training sets a dangerous precedent, but users
need to be made aware that the driver assistance systems are not foolproof,
and the only foolproof way to do that is to require them to attend a training.

~~~
machinehermit
Exactly.

Tesla is beyond irresponsible with this and IMO they should be sued out of
existence.

It isn't a new feature on a cell phone that you just watch a youtube video on
and move on with your day.

------
jijojv
[https://www.cnbc.com/2018/01/31/apples-steve-wozniak-
doesnt-...](https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-
believe-anything-elon-musk-or-tesla-say.html)

"Man you have got to be ready — it makes mistakes, it loses track of the lane
lines. You have to be on your toes all the time," says Wozniak. "All Tesla did
is say, 'It is beta so we are not responsible. It doesn't necessarily work, so
you have to be in control.'

"Well you that is kinda a cheap way out of it."

~~~
cup-of-tea
Unlike with people like Elon and Jobs, you can safely calibrate your bullshit
meter against Woz. He's not as famous because the press generally doesn't
like that.

------
aerovistae
I love Tesla to death and in most cases will defend them beyond the point of
reason.

But I took a test drive in a Model S for the first time earlier this year and
almost immediately noticed autopilot's extremely unreliable behavior - it
would swerve out of lanes in ordinary situations that should have been easy to
handle. The _second_ I saw that, that was it: I would never use it again
before many years of testing and improvement had taken place. No way am I
gambling my life on a clearly incomplete feature just because it's cool. Fuck
that.

Of course Tesla is fairly safe behind their disclaimers and warnings, and to
be honest I think it may be impossible to develop such a system without
putting it into the wild before it’s perfect.

But for me, personally... I'll let other people choose to be the guinea pigs.
The risks are all too obvious. Continuing to use the feature is very
dangerous. Do it knowing this may very well happen to you.

~~~
kwhitefoot
I have a 2015 Tesla Model S and I have never seen the behaviour you describe.
I use Autosteer a lot, on the grounds that it's good to have two of us paying
attention.

~~~
makomk
They dropped the system used in their older cars because their supplier for it
decided they were too reckless and refused to do business with them anymore.
Happened around 2016, I think. Since then they've been using an in-house
system that doesn't work so well.

------
wjp3
It'll be interesting to see if anything comes of the issue with the already-
collapsed crash barrier and what CalTrans says about it. That sort of thing is
there for a reason, and to be left in a crushed state for any period of time
is bad.

~~~
Cshelton
In Texas, I've seen crushed barriers remain collapsed for weeks on end.
Either that or they are just hit again right after being replaced. Which
tells me it's a poorly designed road that causes confusion for drivers -
which may in fact be what this Tesla crash turns out to be.

~~~
hagope
Exactly, I drive past this barrier every day... the problem is the left two
lanes on 101 are carpool/EV lanes, so Tesla drivers just zoom down them... at
this particular exit, the left carpool lane leads to an HOV flyover exit which
puts you on 85. If you are not paying attention (i.e. on autopilot, flying
past traffic), you will end up on a completely different highway! I very
frequently see people swerve out of the flyover lane back onto 101, so my
initial thought was that he tried to disengage too late to either get back on
101 or catch the flyover (not clear which one).

~~~
ScottBurson
He lived in Foster City and was working at Apple, so it's likely he wanted to
take the 85 ramp to Cupertino.

I've seen something that said that the car was warning him to take control,
but he hadn't done so. Texting, maybe?

~~~
djm_
Before this rumour spreads: the latest Tesla statement [1] is worded in a
deliberately misleading fashion and the only thing we can tell from it is that
his hands were _according to the software_ not on the wheel for 6s before the
impact.

>The driver had received several visual and one audible hands-on warning
earlier in the drive and the driver’s hands were not detected on the wheel for
six seconds prior to the collision

Note the _and_.

[1] [https://www.tesla.com/blog/update-last-
week’s-accident](https://www.tesla.com/blog/update-last-week’s-accident)

------
_ph_
One thing to remember in all the discussions about the accident is that there
is no information available about whether the autopilot was active at the
time of the accident.

~~~
modeless
Tesla has just announced that autopilot was on during the crash.
[https://www.tesla.com/blog/update-last-
week’s-accident](https://www.tesla.com/blog/update-last-week’s-accident)

It sounds like they are saying the driver had 5 seconds to notice the problem
and react. It is scary to think that when you are on autopilot you may be 5
seconds away from death at any time. Better not take those eyes off the road!

~~~
imtringued
If you take the human factor away, a warning 5 seconds ahead of time is quite
a lot. The car could have slowed down to a possibly non-fatal speed. By
asking the human to act, the autopilot is actually throwing away at least 2
of those precious 5 seconds.
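
Back of the envelope, assuming hard braking on dry pavement (hypothetical but
typical numbers):

    v0 = 65 * 0.447     # 65 mph is roughly 29 m/s
    a = 0.8 * 9.81      # ~7.8 m/s^2, hard braking on dry pavement
    t_stop = v0 / a     # ~3.7 s: a full stop fits inside the 5 seconds
    v_3s = v0 - a * 3   # ~5.5 m/s (~12 mph) even if 2 s are wasted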

------
jijojv
SW/HW bugs happen - fact of life. More concerning is Tesla denying this was
ever reported to them...

I had my driver-side mirror stuck issue fixed, and they put in the notes that
they re-filled all my tires during the repair service, allegedly as required
by some CA law. Later that week I got an alert that my tire pressure was low
and had to get it pumped up, so obviously they didn't do that despite having
claimed to have done just that...

~~~
CobrastanJorji
> More concerning is Tesla denying this was ever reported to them.

This is how Tesla reacts to all bad publicity. Bad review on battery life
comes out? Data dump indicating the reviewer might've done something wrong.
Accident? Statistical dump showing that most cars don't do this. I like this
technique more than generic "we can't comment" responses, but I'm pretty
confident they heavily cherry pick, looking for something that people who glom
onto data can see and say "oh good, Tesla's right."

~~~
jackvalentine
Yup, Tesla knows their audience - it's their semi-rabid owner and aspiring-
owner fan base. Throw out some logs or stats without context and let them do
the dirty work for you.

------
userbinator
 _Walter took it into dealership addressing the issue, but they couldn't
duplicate it there_

Why not get a mechanic to ride along with him to that location, perhaps with
extra diagnostic equipment connected? I could see how a bug that is dependent
upon being at that location would certainly not be reproducible somewhere
else.

Here's an automotive service booklet from almost 70 years ago which recommends
the same thing for troubleshooting:

[http://www.imperialclub.com/Repair/Lit/Master/021/Page14.htm](http://www.imperialclub.com/Repair/Lit/Master/021/Page14.htm)

------
mdekkers
 _before the crash, Walter complained "7-10 times the car would swivel toward
that same exact barrier during auto-pilot"_

...and after 7 to 10 times, he still didn't learn his lesson? That's pretty
stupid if you ask me. If my car does something weird at a particular stretch
of road, especially 7 to 10 times, you can bet your bananas that I'll be
paying a lot of attention on that stretch of road. If my "autopilot"
(seriously, Tesla should stop using that name) isn't reliable in certain
circumstances or places, then - guess what? - I WON'T BE USING IT THERE. Why
blame Tesla (I'm no Tesla fan) when the operator of the vehicle refused to
operate it properly in the face of prior experience? Poor guy, and I feel for
his family, but come on, what a dumbass.

------
jakobegger
Previous discussion of this crash:
[https://news.ycombinator.com/item?id=16694365](https://news.ycombinator.com/item?id=16694365)

------
markmark
I really don't like autopilot. It's good enough to make drivers trust it and
not pay attention, but it's not good enough to not kill people when they do
that. And when there's an accident, Tesla comes out and says "the system
warned the driver to put their hands on the wheel" or something similar.
Unless a car can 100% self-drive, driver aids should require the driver to
have hands on the wheel and be paying active attention at all times.

~~~
toast0
This is part of why I haven't considered a Tesla. If I get into a collision
in any other manufacturer's vehicle, the manufacturer's PR team won't impugn
my honor in the court of public opinion.

It seems like the Tesla autopilot is very similar in capabilities to other
manufacturers' active lane keeping, adaptive cruise control and active
collision avoidance braking systems, however the marketing and user behavior
is much different.

There's no expectation that a Pacifica with all the bells and whistles is
going to do a good job driving for you with no hands, but if somebody stops
suddenly, it will brake too.

------
marcell
Highway driving is supposed to be the "easiest" problem for self-driving cars
to solve, since there are fewer edge cases, less turning, etc., but it's also
the most dangerous type of driving. You are much more likely to die going
65 mph than 25 mph.

I think deploying self driving cars at <=25mph speeds at first would be wise.
Personally, I wouldn't risk letting a car take over at high speeds until there
is a longer track record of safety.

~~~
ivanech
Many places with <=25mph speeds have pedestrians, and they're a lot more
delicate than vehicles are. Drivers aren't the only ones at risk.

------
animex
The fact that it happened to him "several times" and not to others might
indicate a specific hardware/sensor issue related to his vehicle. A sensor
slightly misaligned or not working to the same tolerance? Pure speculation, I
know. Also, the 200 trips/day refers to what? All Tesla vehicles? How about
for his specific year & model, software version, and configuration (both
equipped and driver-defined)?

------
tytytytytytytyt
He complained 7-10 times and then just forgot about how it used to swerve
toward a head-on collision with the median?

------
dawhizkid
Do you "feel" any responsibility as an autopilot ML engineer? I know a few
that joined within the last year.

------
icc97
Is there a specific system for traffic sign detection? I'd have thought you
could have a system dedicated to spotting traffic signs in the current
country with significantly higher accuracy than cat detection.

Even a small part of a sign should be enough. They're designed to be easy to
spot.

It seems like we just set neural networks up to recognise all objects and
assume they'll recognise simple objects too. However, humans typically learn
through simple cartoons first and then layer on top of that, rather than the
other way round.
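
For what it's worth, a dedicated sign classifier can be tiny; a minimal sketch
in PyTorch (hypothetical layer sizes, untrained):

    import torch
    import torch.nn as nn

    NUM_SIGN_CLASSES = 43  # e.g. the GTSRB benchmark has 43 sign classes

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, NUM_SIGN_CLASSES),  # assumes 32x32 crops
    )

    x = torch.randn(1, 3, 32, 32)    # one 32x32 RGB crop of a sign
    probs = model(x).softmax(dim=1)  # per-class confidences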

Edit: should have done some googling before opening my mouth [0] [1]

[0]: [https://amundtveit.com/2017/07/13/traffic-sign-detection-
wit...](https://amundtveit.com/2017/07/13/traffic-sign-detection-with-
convolutional-neural-networks/)

[1]:
[http://www.bartlab.org/Dr.%20Jackrit's%20Papers/ney/1.TRAFFI...](http://www.bartlab.org/Dr.%20Jackrit's%20Papers/ney/1.TRAFFIC_SIGN_Lorsakul_ISR.pdf)

------
sjg007
Certainly CalTrans is responsible for not replacing the barrier. Expect a
lawsuit/settlement. Tesla will be found liable too; that'll be a jury-trial
civil suit. Why? Even though the driver reported that AP failed at that
interchange and kept using it anyway, drivers and juries expect safety
features to work as advertised.

~~~
static_noise
The autopilot is not a safety feature.

------
lewis500
I am pretty mystified why Tesla is even running autopilot on their cars. They
already have more demand than they can handle, and I don't think anyone buys
a Tesla because of the autopilot. They are opening themselves up to huge
court claims at the same time they are low on money. Just keep running tests
and maybe do a pilot for a few years. Isn't Waymo sort of in the lead right
now? They aren't running their software on millions of vehicles but seem to
be progressing okay.

It seems all these companies are in a big rush and being slapdash. Maybe it's
a disconnect between the engineers and the execs/shareholders. It doesn't even
seem profit-driven...almost fear driven. ("we don't wanna be left behind.")

~~~
sseth
It allows them to develop self driving technology without the expense of
dedicated cars and paid drivers, and without any liability (because drivers
take the risk).

------
raverbashing
The question is, if the driver was aware the autopilot was unsafe at that
area, why did they keep using autopilot there?

You can't just throw your hands in the air and hope for the best. Your car,
you are at the wheel, you are responsible.

------
smoyer
Am I the only one that thinks that drivers with auto-pilot and back-up drivers
for driver-less cars should be ready to drive at any instant? As an engineer,
I'm seeing these features as beta at best.

~~~
sf_rob
The problem is that that's not how human cognition works. If the auto-pilot
is working well, your brain will inevitably become accustomed to the lack of
stimulus. Ironically, I think that these systems have a kind of uncanny-valley
area where they are probably safest when the auto-pilot is poor or great, but
not in the middle.

~~~
smoyer
I understand that... and that's another problem we've yet to solve. What
you've described is what led to the crash of the Korean airliner that
undershot the runway at SFO a few years ago. In that case, they would have
been better off letting the plane land itself, but that's not SOP.

------
Bombthecat
Side story: I have an Opel with crash warning.

On a highway I think it even detects potential crashes ahead of you (I once
had two cars in a three-lane road almost crash while one car tried to switch
lanes).

Well, back to the story. There is a street near me where cars park on the
left and right of the road in a zigzag. There is one spot where it always
warns me about a crash.

------
pmarreck
IMHO, they need to get autopilot at least an order of magnitude better than
human drivers statistically before releasing this tech, because this sort of
news is extremely bad public-perception-wise. I don't think it's reasonable
to expect a Tesla driver to always have his hands on the wheel during
autopilot.
------
utopcell
Why call something "auto-pilot" when it is clearly not remotely ready to do
what its name implies?

~~~
SteveCoast
Unless someone has actually used an autopilot in something like a Cessna,
they will probably have a wildly overoptimistic idea of what an autopilot
does. Even on a passenger jet it's really not that smart; there's just lots
more volume to explore, so it's hard to kill everyone.

A better analogy would be cruise control. It controls essentially one
variable. As does lane keeping. You combine a couple of these things and you
think it's smart, and it isn't. We learned about edge cases between
single-variable trackers ~40 years ago in aircraft: there are places in the
flight envelope that combinations of single-variable trackers will still let
you go, but that will also kill you.

~~~
freehunter
There used to be the same problems with cruise control, though. People thought
it would brake automatically and steer around corners, and would get into
accidents that proved their assumption wrong.

------
dotsh
Why does no one talk about the other victims of car accidents? There are
about 6k per month on average in the US alone. But Uber and Tesla are in the
headlines when someone dies, and there are, what, 3-4 victims in total after
all these years? It is not even worth mentioning.

~~~
vasilipupkin
Miles driven, genius. There are millions of other vehicles on the road and
not that many Teslas.

------
hedora
Am I the only one that finds it odd that a human also ran into the same
barrier within 24 hours of the crash? Perhaps Caltrans is partially to blame.

(Full disclosure: Their incompetence almost killed me a few times back in my
I-880 commuting days)

~~~
lazyjones
> Am I the only one that finds it odd that a human also ran into the same
> barrier within 24 hours of the crash?

Source? ABC7 says the previous crash was 11 days ago (DUI...), the barrier
just wasn't fixed immediately.

[http://abc7news.com/automotive/exclusive-i-team-
investigates...](http://abc7news.com/automotive/exclusive-i-team-investigates-
why-caltrans-didnt-fix-safety-barrier-before-tesla-driver-died-there/3280399/)

------
jaimex2
They should rename autopilot to auto-scapegoat.

There is zero information yet, and it's all everyone is jumping onto. Last
week the same thing happened to a Tesla that wasn't even equipped with
autopilot.

------
godelmachine
"Autopilots performance is unrelated to navigation"

Tesla is mincing words.

------
zamalek
My opinion: EVs, self-driving or directed, need an ejection mechanism for
their batteries. Petrol has the advantage that it _may_ ignite, where
batteries are almost guaranteed to when damaged. Ideally, it would be an
active system (launching the batteries no more than a meter away) - but that
could fail under the conditions which caused the damage to the batteries. An
alternative would be a passive system made of materials known to melt when
exposed to a lithium fire - providing a few centimeters of separation from the
cabin. Either way, the current situation is not ideal.

~~~
freehunter
So your system would throw a burning pile of lithium into the woods to start a
forest fire, into oncoming traffic to cause another crash, into a pedestrian
on the sidewalk, etc?

~~~
zamalek
You're right. It's a terrible idea. Pity it's too late to edit or delete that
dumb comment.

------
saudioger
Tesla is very very very bad with PR when something goes wrong. Their response
to this is awful.

------
cube00
It looks like Status: WORKSFORME won't cut it in this brave new world.

------
SpaceEncroacher
Crash victim? More like crash dummy, auto-pilot is raw.

------
devit
Seems pretty inexcusable for a self-driving system to ever hit a static
obstacle (while not trying to avoid another collision).

There should be manslaughter charges for this kind of thing.

------
Maro
It's interesting nobody is asking why the crash attenuator was not replaced
faster.

------
Skunkleton
The fire seems pretty severe too. I wonder what caused the battery containment
to fail?

------
sabujp
blame the customer, blame the govt, but your software is perfect Tesla?!

------
notafxn
Super fun to compare the comments on this to the comments on the Uber
accident.

