
Tesla crash in September showed similarities to fatal Mountain View accident - jijojv
http://abc7news.com/automotive/i-team-exclusive-tesla-crash-in-september-showed-similarities-to-fatal-mountain-view-accident/3302389/
======
antirez
What is happening was clear to many from the start: Tesla embodies the
behavior of its founder, exaggerating the technology that is _really_
available right now and selling it as a product without the prerequisites for
it to be safe. A product that kinda drives your car but sometimes fails, and
requires you to pay attention, is simply a crash waiting to happen. And don't
get trapped by the "data driven" analysis, like "yeah but it's a lot safer
than humans", because there are at least three problems with this statement:

1. Ethical. It is one thing if you do a stupid thing and die. It is another
if a technology fails in trivial circumstances that are in theory avoidable.
A doctor can also make errors, but a medical device is required to be very
safe and reliable in what it does.

2. Wrong comparisons: you should compare the autopilot against a rested and
attentive driver who drives slowly and with a lot of care. Otherwise the
statistics do not account for the fact that when you drive, you are in
control, and you can decide your level of safety. Such a diligent driver
crashing because of immature software is terrible.

3. Lack of data: AFAIK there is not even enough data publicly available to
tell us the crash rate of Teslas with Autopilot enabled vs. Teslas without
Autopilot enabled, per kilometer, _in the same street conditions_. That is,
you need to compare only on the same streets where the autopilot is able to
drive.
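A minimal sketch of the stratified comparison this point calls for might look something like the following. All the numbers are invented for illustration; the point is only that rates must be computed per road type, because Autopilot miles skew toward easy divided highways.

```python
# Stratified crash-rate comparison: rates per km must be compared on the
# same road types, since Autopilot is used mostly on easy divided highways.
# Every figure below is made up, purely to show the shape of the analysis.

data = {  # road type -> mode -> (crashes, km driven)
    "divided_highway": {"ap_on": (10, 5e8), "ap_off": (50, 2e9)},
    "surface_street":  {"ap_on": (2, 1e7),  "ap_off": (400, 4e9)},
}

def rate_per_100M_km(crashes, km):
    """Crashes per 100 million km, so tiny per-km rates are readable."""
    return crashes / km * 1e8

for road, modes in data.items():
    for mode, (crashes, km) in modes.items():
        print(f"{road:16s} {mode:6s}: "
              f"{rate_per_100M_km(crashes, km):5.1f} crashes per 100M km")
```

With these invented numbers, Autopilot looks fine on highways but far worse on surface streets, and a naive pooled average would hide that.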

Autopilot will rule the world eventually, and we will be free to use the time
differently (even if this was already possible 100 years ago by investing in
public transportation instead of everyone spending the money to have their
own car... this is sad. Fortunately, in central Europe this happened to a
degree; check for instance northern Italy, France, ...). But until it is
ready, plugging it in as a feature in such an immature way, just to have an
advantage over competitors, is terrible business practice.

~~~
ajross
> 1. Ethical. It is one thing if you do a stupid thing and die. It is another
> if a technology fails in trivial circumstances that are in theory
> avoidable. A doctor can also make errors, but a medical device is required
> to be very safe and reliable in what it does.

This needs elaboration or citation. You're one of the only folks I've seen
come straight out and make this point: that somehow a technology or practice
that is objectively safer can still be "unethically safe" and should be
shunned in favor of less safe stuff.

I don't buy it. This isn't individual action here; no rejection of
utilitarian philosophy is going to help you.

In fact, medical devices seem actually counter to your argument. No one says
an automatic defibrillator's decisions as to when to fire are going to be as
good as a human doctor, and in fact these things have non-zero (often fatal)
false positive rates. But they save lives, so we use them anyway.

Bottom line, that point just doesn't work.

~~~
Ajedi32
I think the point he's making is that it's worse if a person dies because of
something they have no control over (self-driving car malfunction) than if a
person dies because of their own stupid choices (driving drunk, driving too
fast for conditions, running red lights, etc).

This, of course, ignores the fact that stupid choices drivers make tend to
affect other people on the road who did nothing wrong, so the introduction of
a self-driving car which makes less stupid decisions would reduce deaths from
both categories of people here.

~~~
matte_black
No, that conclusion completely ignores the fact that the kind of people who
buy Teslas are generally not the kind of people who drive unsafely anyway and
die in terrible wrecks.

If you give every shitty driver a self driving Tesla maybe you would do
something to make roads safer, but if you’re just giving it to higher net
worth individuals who place greater value on their own life, you haven’t even
made a dent in traffic safety.

In fact, in some cases all you're doing is making drivers less safe, because
the autopilot encourages them not to pay attention to the road, no matter how
carefully you think they are watching. The men killed in Teslas could all
have avoided their deaths if only they had been paying attention. If I see a
Tesla on the road, I stay the hell away, lest it do something irrational from
an error and kill us both.

~~~
ajross
Seems like your first two paragraphs are amenable to analysis. Surely there is
data out there that splits traffic accident statistics on income, or some
proxy for income. Is a Tesla on autopilot more accident-prone than a BMW or
Lexus? The numbers as they stand certainly seem to imply "no", but I'd be
willing to read a paper.

The third paragraph, though, is just you flinging opinion around. You assert
without evidence that a Tesla is likely to "do something irrational from an
error and kill us both" (has any such crash even happened? It seems like
Teslas are great at avoiding other vehicles, and where they fall down tends
to be in recognizing static/slow things like medians, bikers, and turning
semis, and _not_ turning to avoid them). I mean, sure. You be you and "stay
the hell away" from Teslas. But that doesn't really have much bearing on
public policy.

~~~
matte_black
Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers,
high speed tailgaters... those are the demographic you have to compare the
Tesla drivers to. Do people who buy Teslas engage in those kinds of dangerous
activities?

~~~
ajross
> Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers,
> high speed tailgaters.

Wait, so you are willing to share the road with all those nutjobs, yet you're
"staying the hell away from" Teslas you see which you claim are _NOT_ being
driven by these people? I think you need to reevaluate your argument. Badly.

That even leaves aside the clear point that a Tesla on autopilot is
significantly less likely to make any of those mistakes...

~~~
ksk
>That even leaves aside the clear point that a Tesla on autopilot is
significantly less likely to make any of those mistakes...

What are you basing this on? Specifically, how is it 'clear', and what data
has shown it to be 'significant'?

~~~
matte_black
Elon Musk probably

------
ChuckMcM
I have now talked with two people who have autopilot in their Model S's, and
both said the problem with autopilot is that it is "too good". Specifically,
it works completely correctly like 80-90% of the time, and as a result, if
you use it a lot, you start to expect it to be working and not fail. If it
then beeps to tell you to take over, you have to mentally get back into the
situational awareness of the road and then decide what to do. If that lag
time is longer than the time the car has given you to respond, you would
likely crash.

Makes me wonder how this gets resolved without jumping across the gap to
'perfect' autopilot.

~~~
njarboe
And in the current climate of "all new things must be super extra safe to be
used, even if they displace much worse technology", I think machine-driven
cars will have a hard time being accepted. Imagine the autopilot is 100 times
safer than human drivers. Full deployment would still mean about a fatality a
day in the US. It does not seem this would be acceptable to the public, even
though 30,000 annual human car fatalities could be avoided. Maybe we should
use our education system to help heal us from the safety-first (do nothing)
culture we seem to have gotten ourselves into in the US.
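The arithmetic in that scenario checks out; a quick sketch using the round figures from the comment (roughly 30,000 US road deaths per year, a hypothetical 100x safety improvement):

```python
annual_us_fatalities = 30_000   # round figure from the comment above
improvement_factor = 100        # "100 times safer than human drivers"

remaining_per_year = annual_us_fatalities / improvement_factor
remaining_per_day = remaining_per_year / 365

# ~300 fatalities/year, i.e. roughly one a day, even at 100x human safety
print(f"{remaining_per_year:.0f}/year, {remaining_per_day:.2f}/day")
```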

This is a similar situation to nuclear power. With nuclear, all your waste
can be kept and stored, and nuclear causes almost zero deaths per year.
Contrast that with the crazy abomination that is coal burning.

~~~
m_mueller
Does it currently really reduce fatalities if you control for car cost, age
group, and area driven? I'm unconvinced by the current statistics.

~~~
njarboe
Currently it is probably worse than the average driver. But here is a trolley
problem for you. What is the ratio of people killed who volunteer to test out
an auto-drive to the number of lives saved by that early testing? 1:1, 1:10,
1:100, 1:1000, 1:1,000,000? A million people die in vehicle wrecks each year.

~~~
creato
Those are clearly not the only two options. Waymo appears to be far and away
the leader in this space, and they've killed zero people.

~~~
0xB31B1B
Yea, most self driving miles are simulated, not done in meat space.

~~~
LoSboccacc
5 million are in meat space, too few to derive anything about safety, since
on average a fatal crash happens every 100M miles.

I can't find how many of those miles were driven on roads with a posted limit
above 35 mph.
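A rough Poisson sketch, using the figures above (5M real-world miles, one fatal crash per 100M miles on average for human drivers), shows why zero fatalities in that sample is statistically uninformative:

```python
import math

human_fatal_rate = 1 / 100e6   # fatal crashes per mile, figure from above
real_world_miles = 5e6         # miles driven in meat space, figure from above

# Expected number of fatal crashes if the fleet were only human-level safe
expected = human_fatal_rate * real_world_miles  # = 0.05

# Poisson probability of observing zero events: even a fleet no safer than
# humans would see zero fatalities over this distance ~95% of the time.
p_zero_at_human_rate = math.exp(-expected)

print(f"expected fatal crashes: {expected:.2f}")
print(f"P(zero even at human-level safety): {p_zero_at_human_rate:.3f}")
```

So a record of zero fatalities over 5M miles barely distinguishes a fleet from an ordinary human driver, let alone proves it safer.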

------
bobsil1
Gore point stripe faded:
[https://imgur.com/a/xHq57](https://imgur.com/a/xHq57)

Street View:
[https://www.google.com/maps/@37.6346515,-122.104109,3a,75y,8...](https://www.google.com/maps/@37.6346515,-122.104109,3a,75y,84.96h,99.37t/data=!3m6!1e1!3m4!1s4LqOilvG6oJ1oNApEwLi6A!2e0!7i13312!8i6656)

~~~
z6
No real excuse for Tesla, but seriously, why are our roads so bad? This is
one of the wealthiest areas in the world and a busy stretch of road.

~~~
brandmeyer
> but seriously why are our roads so bad.

Degraded paint does not a bad road make. Roads are optimized for human drivers
that can leverage additional context to make their decisions. Many countries
don't even bother to stripe their roads at all.

~~~
z6
Huh? Which developed countries don't have road markings on highways? Paint is
just one aspect. There are also plenty of cracks and uneven pavement as you
can see from the street view.

~~~
cheschire
You said road, he said road. The additional caveat of "highway" wasn't added
until you realized his mental model was different from yours.

Now imagine trying to teach a car these things!

~~~
z6
A highway is a road. The context is quite clear since we're talking about the
crash site.

~~~
cheschire
Exactly! To you it's crystal clear, but if the other party were additionally
considering all the non-highway accidents, then the distinction may not be so
clear to them until you provided the additional metadata.

Now consider how you may feel about a car which has decided, based on all
available data, that this strip of road is a lane. It's crystal clear to that
car until it's suddenly not.

You reacted with what appeared to me to be very restrained aggression
(typical and acceptable, no offense intended) as you struggled to figure out
whether they were being dense on purpose or actually ignorant. I don't feel
either was accurate, but that is what your message conveyed to me.

A car simply says "hey, I don't know what I'm doing anymore. Help!"

------
jijojv
A week after the fatal accident, Musk continues to promote Autopilot as safe
by retweeting stuff like
[https://twitter.com/Teslarati/status/980476745106231297](https://twitter.com/Teslarati/status/980476745106231297)
("Tesla Model S navigates one of Vienna's 'crappiest' roads on Autopilot").

The IIHS '17 study puts Autopilot at only a 40% reduction in accidents, which
is the same as their '16 study, which found the same 40% reduction for any
car with Automatic Emergency Braking.

~~~
otalp
What does Musk have to do before he is investigated for misleading consumers
or fraud?

~~~
qaq
Break the law? One thing surprising to me: there is a ton of media attention
on SV right now (almost all negative), and that does not look like a
coincidence. I disapprove of what FB is doing, but on the scale of problems
that Big Pharma, health insurance, and big banks are causing, that's not even
a blip, and yet most media attention and a good amount of public outrage is
fairly skillfully directed toward tech and SV.

~~~
Piskvorrr
Or perhaps it's that SV is _vocal_, excessively so: it promises miracles,
yesterday, at will, almost for free. In other words, it massively
overpromises, then underdelivers - which amounts to painting a huge "KICK ME"
sign on its collective butt. While other enterprises might do the same, even
to a criminal extent, they do not _brag_ about it. That might be all the
explanation needed; no conspiracy necessary.

~~~
qaq
What miracles was FB promising? (Again not a fan of FB).

~~~
Piskvorrr
I would guess "we'll give you social network, not a total-surveillance tool"
would qualify. I don't think the second part would have been a reasonable
expectation.

~~~
qaq
Would you point at FB marketing that has this message?

~~~
Piskvorrr
I don't see any (although "FB is free and always will be" has plenty of wiggle
room). Also, I don't see any explicit mention that Tesla's Autopilot is
actually an autonomous Level 5 system - and yet people assume that. Are you
saying that people should only be angry about things they were promised
explicitly? (If so, I would agree - but 1. people are not 100% rational, 2.
expectations are important for PR, even if not legally binding. Going from "SV
doesn't put this in big bold letters on the front page" to "it's a conspiracy
against SV, not because they're an easi _er_ media target" is a bit too much
of a leap for me.)

~~~
qaq
" Are you saying that people should only be angry about things they were
promised explicitly?" I am def. in no position to direct how people should
feel about something. To me expecting something beyond what is actually
promised just sounds unrealistic. If I hired you to write software and you
promised to write software but I was also expecting you to clean my pool that
would pretty weird.

~~~
Piskvorrr
Indeed. But if it would make a clickbaitable story, wouldn't it seem more
lucrative to cover this vs. yet another corporate embezzlement? In other
words, media attention is also eyeball-driven, to the point of
sensationalism.

------
danso
Currently, the Tesla autopilot page has this as its top headline:

> _Full Self-Driving Hardware on All Cars_

Followed by this:

> _All Tesla vehicles produced in our factory, including Model 3, have the
> hardware needed for full self-driving capability at a safety level
> substantially greater than that of a human driver._

[http://web.archive.org/web/20180323054727/https://www.tesla....](http://web.archive.org/web/20180323054727/https://www.tesla.com/autopilot)

Getting past the arguments over whether Tesla should be calling this
"Autopilot", are its claims about hardware true? Is the failure to detect
stationary objects like a firetruck or a gore point solely the result of
currently-inadequate software? Because it doesn't sound like the sensor suite
is anywhere near the AVs of Waymo or Uber.

~~~
Shank
> Getting past the arguments over whether Tesla should be calling this
> "Autopilot", are its claims about hardware true?

Tesla actually hasn't called that Autopilot -- they refer to it as "full
self-driving", and it's not in the same category. But... is it true? It's
vaporware right now, so we can't really say.

Obviously, in a purely hypothetical landscape, even with just cameras you
should be able to emulate what humans can do and have 100% attentiveness.
That is to say, humans don't have lidar or radar, yet humans do drive. So, if
you can achieve human levels of perception and have 100% attentiveness, it
should be possible to beat the crash statistics, given that many crashes are
caused by humans failing to respond accurately or quickly enough.

This is all hypothetical, though. All of it comes down to computer
vision/software, which is really hard to do. It's possible, but not yet done.

~~~
notahacker
Humans don't have lidar and radar, but humans permitted to drive have optical
hardware which is, in all respects except simultaneity of inputs from
different directions, superior to what Tesla is claiming is sufficient for
fully safe driving.

It's much harder to imagine software surpassing alert human driving if it's
_not_ equipped with sensors offering better depth perception that could
compensate for its lack of general intelligence to process unfolding
situations, visual anomalies, etc. Harder still if the optical sensors are
objectively worse than human eyes in many lighting conditions.

------
dannyw
I want to ask: why is Tesla's advanced suite of cameras and ultrasonic sensors
unable to detect a big, yellow and black stationary barrier?

In combination with chevron markings that mean "never ever drive here".
Either should be a globally overriding signal to emergency-brake or turn, not
"keep following the white line".

If the answer is "during morning light, it doesn't see it", then they need to
take the feature offline until the car is fitted with an appropriate LIDAR; or
other technology that can detect stationary obstacles.

~~~
YeGoblynQueenne
>> I want to ask: why is Tesla's advanced suite of cameras and ultrasonic
sensors unable to detect a big, yellow and black stationary barrier?

The answer to that is that sensors detect, but don't make decisions. The
decisions are made by the car's AI, which takes input from the sensors and
outputs control signals.

So the pertinent answer is "why is Tesla's AI unable to make the decisions
necessary to avoid a big, yellow and black stationary barrier?".

Unfortunately, nobody can answer that, including Tesla engineers, because
their cars' AI is trained with black-box machine learning algorithms that are
extremely resistant to any attempt to understand how they make decisions.

But, you know. I'm pretty sure they're really safe /s

~~~
protomyth
> Unfortunately, nobody can answer that, including Tesla engineers, because
> their cars' AI is trained with black-box machine learning algorithms that
> are extremely resistant to any attempt to understand how they make
> decisions.

What is the justification for allowing anything we don't understand to be on
the road? I really want the NTSB to ask specifically what decisions the Tesla
was making and not accept "black box learning" as an answer.

~~~
dragonwriter
> What is the justification for allowing anything we don't understand to be on
> the road?

What's the justification for allowing human brains in the control loop?

~~~
YeGoblynQueenne
Because human brains can tell a human from the side of a barn?

------
oldgradstudent
Tesla claimed[1] that:

> Our data shows that Tesla owners have driven this same stretch of highway
> with Autopilot engaged roughly 85,000 times since Autopilot was first rolled
> out in 2015 and roughly 20,000 times since just the beginning of the year,
> and there has never been an accident that we know of. There are over 200
> successful Autopilot trips per day on this exact stretch of road.

Which might be technically true, but conveniently ignores other, very similar
crashes.

[1] [https://www.tesla.com/blog/what-we-know-about-last-weeks-
acc...](https://www.tesla.com/blog/what-we-know-about-last-weeks-accident)

~~~
Shivetya
My question, which is the same as others': does Tesla register events where
the driver overrides the system? If they register them, how do they account
for them?

If they are not even considering that the driver overrode the system because
it was going to crash, then all their claims are suspect.

~~~
quotemstr
Why? The car isn't driving by itself. What matters is whether Autopilot is a
net safety improvement, which it currently appears to be. Why would you add
the weird requirement that the system be better without any human input?

------
paul7986
I'm going to wait five to ten years before I get my robot car. Let all those
early adopters be the guinea pigs we need to reach the level of robot cars
being safer than human drivers.

It seems tons of upper middle class to wealthy people are lining up to pay
dearly with their wallets and lives to be a billionaire's crash test dummy.

~~~
awakeasleep
Aren't we all the crash test dummies if we're sharing the road with them?

~~~
WhompingWindows
We are nothing like crash test dummies. Crash test dummies are intentionally
slammed into barriers at 60+ mph.

Testing of vehicle safety features, and safety features of many products, is
most often done in mass markets. Automatic safety features are being rolled
out in essentially every single auto brand. Automatic emergency braking (AEB)
is set to be standard in numerous makes in a couple years. Lane control, lane
reminders, blindspot detectors, these features are all being rolled out on the
roads, to varying degrees of success.

This is a less-safe version of the way drugs are brought to market. First,
small-scale tests with cells only, then mice, then healthy people, then target
patients, and finally post-market surveillance.

Cars indeed have a MUCH less regulated rollout for their safety features, yet
to say we are dummies in their tests is not even close to reality. While drug
testing may be more stringent, we also don't have 30-40k people a year dying
from "not taking a drug" accidents, which is the situation on our roads today.

Ultimately, it comes down to lives saved. Every time you delay a safety
feature, you have to ask the question: is this delay going to lead to loss of
life or saving a life? That's the moral quandary the automakers are in, and it
seems they go with "release it" much more often than not. That will lead to
some EXTREMELY well publicized accidents, and yet we will NOT hear of the
countless cases where everything went nominally, and who's to say precisely
how many accidents were prevented?

~~~
paul7986
So are you rushing out to be a Musk guinea pig or already are?

------
Animats
Come to think of it, whatever happened to Tesla's self-driving effort? They
claimed to be shipping Model 3 cars with all the hardware needed for self
driving (not including a LIDAR). They produced videos. Most of this was 1-2
years ago. But they never got to deployment, or even to the point of demoing
it for car magazine writers.

~~~
lozenge
They had a falling out with their software supplier Mobileye which set them
back a year or more.

~~~
BinaryIdiot
Was it really that much of a setback? It was my understanding they used some
of their own software in combination and had already planned to take over the
role Mobileye was serving.

Maybe I bought too much into their PR spin but it sounded as if it was barely
a delay.

~~~
stetrain
From reading owner discussions it seems like the general consensus is that
they only just recently reached parity with (and slightly surpassed) their
previous Mobileye implementation with their own software/sensor stack.

------
abalone
I wonder if this will be the case that really tests Tesla's blanket legal
defense in court. (That accidents are due to less than "fully attentive"
drivers.) The reaction time needed in that video to correct for the autopilot
is downright scary.

It's a situation where if you were piloting yourself you're a lot more likely
to have mentally mapped that you continue in a straight line at that point, so
listing to the left is going to feel WRONG even when suddenly blinded by the
sun. But if autopilot takes you there, and you're "lizard brain level" paying
attention but didn't have that mental navigation map in your head, and the
sun's in your eyes, you're going to take a few seconds to try to figure out
how to correct so as not to make things worse. That's enough to hit the gore
point in that video.

------
gfodor
For all those who are clamoring for autopilot to go back into the lab consider
this: I have autopilot, and I use it the way people are instructed to. I
consider it a companion to my driving, a backup co-driver that has better
reflexes. I am in control, and mentally steer the vehicle with my hands on the
wheel _with_ the machine. We are driving the car together and I overrule the
machine when we disagree. Often the machine catches things before I am able to
react. I am 100% convinced that my family is safer with "us" driving them
than with just me driving them solo.

What do you say to people like me, who, if you took the feature away, would be
forced to reduce their safety? It's easy to call for its removal when you only
focus on the subset of people who will have their safety increased by removing
it (ie, the people, often through no fault of their own, who come to rely upon
it more than it is designed for.) What about everyone else that currently
benefits from it? Will you force them to go back to driving "solo" to protect
those people?

It's not so simple as "Tesla is reckless by releasing this" -- this is
providing _real_ safety to people who are using it the way it is intended.
Also, the release of autopilot as-is also makes it possible to accelerate the
development of full autonomy, which will save lives that otherwise would be
lost to accidents if it were to be created later in time. Any analysis which
does not account for this dynamic is missing an important piece of the ethics
of removing it.

The main question worth asking is whether there is more Tesla could do to
ensure Autopilot is used as a co-driver. The name is a poor choice, for a
start.

~~~
unityByFreedom
> What do you say to people like me, who, if you took the feature away, would
> be forced to reduce their safety?

I would say we should go by the numbers. How many accidents and deaths are
there per mile, on similar roads (divided), from vehicles of similar class
(new, luxury), in similar weather conditions (fair)? Compare that to miles
where autopilot is enabled and we have something.

As yet, I don't believe Tesla is required to report this, and they probably
won't voluntarily do it. At best, we would get results from certain states. We
would never get national or world level data.

So, we'll have to wait for the NTSB and NHTSA reports and look to their
recommendations.

~~~
gfodor
I agree that the data will be helpful, but it's also important to remember
that "Autopilot" itself is a moving target. Tesla has already made changes
that are designed to increase the degree to which drivers are forced to pay
attention when the system is engaged.

In my view, the ultimate goal should be to ensure that Autopilot is used in
the way I described, and is disabled for drivers who are unable to use it that
way. That, in theory, would provide the most benefit for the most people.
Perhaps that is a fool's errand, either because it's technically impossible,
because very few people use it that way, or because literally nobody is able
to and anyone who thinks they are is being fooled by their own bias (as I'm
sure many readers of this thread assume I am).

But it seems a goal worth pursuing before giving up and throwing it out,
because the net result of, say, reducing accident fatality likelihood by 50%
for the 5 years ahead of full autonomy is something like 75k saved lives (in
the US.)
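The 75k figure follows directly from the stated assumptions (roughly 30,000 US road fatalities per year, an assumed 50% reduction, and a 5-year head start over full autonomy):

```python
annual_us_fatalities = 30_000  # rough US baseline implied by the 75k figure
fatality_reduction = 0.5       # assumed 50% reduction from assisted driving
years_ahead = 5                # years before full autonomy would have arrived

lives_saved = annual_us_fatalities * fatality_reduction * years_ahead
print(f"{lives_saved:,.0f} lives saved")  # 75,000
```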

~~~
unityByFreedom
Moving target, sure, but Tesla has been making statements claiming that using
AP is safer than not using it since AP's inception. You can't have it both
ways. Even Tesla couldn't be certain whether particular changes lowered the
accident rate or raised it. We should look at total accidents, not just miles
driven since the latest version.

Perhaps the best comparison is right within Tesla's data. Compare miles driven
in the same model with AP activated and with it turned off on the same roads.

> In my view, the ultimate goal should be to ensure that Autopilot is used in
> the way I described, and is disabled for drivers who are unable to use it
> that way

Actually there has been some work in determining when drivers are distracted
[1]. This seems to be what you suggest, something that can be used to disable
AP under certain conditions. Would you support Tesla installing such a system?

[1] [https://www.kaggle.com/c/state-farm-distracted-driver-
detect...](https://www.kaggle.com/c/state-farm-distracted-driver-detection)

------
brandmeyer
There certainly do appear to be some similarities. But without the car's logs,
it isn't possible for anyone reading this forum to know for sure.

That's what the NTSB is for.

------
riffic
The Associated Press stylebook calls for reporters to avoid using the term
"Accident":

[https://www.transalt.org/news/releases/9545](https://www.transalt.org/news/releases/9545)

------
josefresco
Auto-pilot will put massive pressure on states and municipalities to improve
road maintenance - specifically lane indicators and fixes to "peculiar"
roads/intersections, etc. This will increase budgets, which will in turn
create massive opposition to self-driving autos by those not wanting to pay
more taxes for _other people's fancy cars_.

I travel/drive a good deal in the summer, and for a few years now I have
silently (mostly) muttered to myself that if auto-pilot cars are going to use
"painted lines" to navigate, they're "gonna have a bad time".

Poorly painted lines aside, think of how many "wtf" moments you have on the
roads around your town. Just down the road from my house is a road that
just... ends (due to poor planning years ago). It's obvious to human drivers
what to do (you're supposed to exit the paved road and drive over a hump to
the dirt road that continues), but for auto-pilot? Not so much, unless it
makes use of "data sharing" that would reveal the human solution to this
screwed-up piece of road.

------
newnewpdro
> Wednesday, a Tesla spokeswoman told the I-Team, "Autopilot is intended for
> use only with a fully attentive driver," and that it "does not prevent all
> accidents - such a standard would be impossible - but it makes them much
> less likely to occur."

Actively veering towards stationary barriers in otherwise perfectly safe
conditions is not what I would describe as reducing the likelihood of
accidents.

~~~
lsaferite
In this one specific case sure, the AP was confused about the lane markings
and followed the wrong path. But in plenty of other cases the AP is
responsible for preventing accidents (no, I have no link to back up my
statement). Assuming that is the case, then that _is_ what I would describe as
reducing the likelihood of accidents. You cannot focus on a specific accident
(or even a handful of accidents) and use that to refute the systemic benefits.

------
11thEarlOfMar
Vehicles with autonomous features will save lives. Fewer people will be
injured or killed in cars that have these features. However, some people will
still be injured or killed, and those people will comprise a different set
than would otherwise have been harmed. They or their families will seek
restitution and hence, the way forward will be determined by the courts.

~~~
Declanomous
The thing is that most driving deaths don't occur on highways with dry,
well-lit roads, and current vehicles can't drive for shit in bad weather. So
they shouldn't have autopilot on at all; they should just have the collision
avoidance features on.

------
jv22222
I am a huge fan of Tesla.

What I am noticing right now is that even if I were able to afford a Tesla
today, there is no way I would use Autopilot.

I will still buy a Tesla if I can ever afford one; I'll just leave Autopilot
well alone.

I don't know what would get me out of this mental place that seems to be
firmly set in my mind now.

~~~
JimmaDaRustla
Agreed - I might test it, but I would ensure that I have 100% mental focus.
Now I'm wondering: if I have to work hard to pay attention that a computer is
operating correctly, what advantage am I gaining by using it?

AutoPilot is a human assisted utility. I think many people here, and in
general, expect too much of it and are being too critical when there is an
incident.

I do think Tesla needs to communicate better what AutoPilot is and isn't
because we've now lost lives due to operator negligence, and by negligence, I
mean the operator is still responsible for the vehicle despite relinquishing
control to a computer.

------
TeeWEE
When I look at the video, I would probably have driven wrongly for the first
few seconds: the road lining is not there anymore, which causes you to make
mistakes. But people can correct quickly when there is an anomaly. It's weird
that Autopilot didn't see the divider.

------
brennankreiman
The biggest thing I gathered was that the public safety barrier was damaged in a
previous crash 12 days earlier, so the driver basically hit a hard wall. This
has more implications for the US’s infrastructure than for Tesla’s safety, IMO.

~~~
danso
via an earlier story: [http://abc7news.com/automotive/exclusive-i-team-
investigates...](http://abc7news.com/automotive/exclusive-i-team-investigates-
why-caltrans-didnt-fix-safety-barrier-before-tesla-driver-died-there/3280399/)

> _CalTrans finally sent ABC7 News a statement later Thursday "confirming"
> it's their policy to fix broken safety barriers within seven days, or five
> business days ... but storms delayed the work. As Dan Noyes reported
> Wednesday, the family told me they believe Walter Huang would have survived,
> if CalTrans fixed that safety barrier in a timely fashion._

~~~
kwhitefoot
Until the barrier is fixed, that part of the road should have some sort of
marking to indicate that there is an extra danger there; that's what cones
were invented for.

~~~
dannyw
I'm sure that hitting a series of traffic cones taking up the space where the
safety barrier was would have alerted even a modestly distracted driver to
brake.

------
w_t_payne
My former employer makes a camera-based lane departure warning system.
Although I was not involved with its development, I do know that the amount of
testing that they do on these systems is very extensive (hundreds of people
involved in the test process).

I would be very interested in seeing a side-by-side test of the LDW systems of
major manufacturers. This might give us an idea of whether the problem is
fundamental to the technology or is simply a deficiency in the engineering
approach of one (or some) manufacturer(s).

------
pjkundert
The root problem (as described by Taleb in "Skin in the Game") is that not all
statistical probabilities are "ergodic" -- that is, free of any risk of ruin
or extinction.

The problem with saying "our automatic driving system is much better than the
average driver, so you should be fine letting it drive" is that you are
exposed -- serially, trip after trip -- to its failures, not to a single draw
from a pool of average drivers. One "average driver" mistake and you're
_dead_. No replays.

Therefore, the compound "odds" are INCOMPUTABLE. Just like being paid
$1,000,000 per trigger-pull to play Russian Roulette: you are not earning an
average of $833,333.33 per try! On "average", over a pool of random 1-time
players, sure. But you? You're guaranteed to be dead in a small number of
plays. Non-ergodic and ergodic probabilities are incomparable.
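The Russian Roulette arithmetic can be sketched with a quick simulation (an illustrative sketch, not from the thread): averaged over a pool of one-time players, each trigger-pull is worth about $833,333 (5/6 of $1M), but a single player who keeps pulling survives with probability (5/6)^n, which collapses toward zero.

```python
import random

random.seed(42)

PAYOUT = 1_000_000  # $ per surviving trigger-pull
CHAMBERS = 6        # one loaded chamber out of six

def one_pull():
    """Return the payout if the player survives, or 0 if the chamber fires."""
    return 0 if random.randrange(CHAMBERS) == 0 else PAYOUT

# Ensemble view: many independent one-time players.
trials = 100_000
total = sum(one_pull() for _ in range(trials))
print(f"ensemble average per pull: ${total / trials:,.0f}")  # roughly $833,333

# Time view: one player who keeps playing. Survival probability
# after n pulls is (5/6)**n -- it decays toward zero.
for n in (1, 10, 50):
    print(f"P(alive after {n:3d} pulls) = {(5/6)**n:.4f}")
```

The ensemble average and the individual's long-run fate diverge completely, which is the non-ergodic point being made above.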

So, since automatic driving systems are non-ergodic, they must be dramatically
better (i.e., orders of magnitude better) than the "average driver" to be
considered viable.

I live in a part of the world where even cruise control is unusable for 6
months of the year. Automatic driving systems? Not even _thinkable_, except
for perhaps a few days of the year, on a few simple portions of a few roads...

~~~
dragonwriter
> So, since automatic driving systems are non-ergodic, they must be
> dramatically better (i.e., orders of magnitude better) than the "average
> driver" to be considered viable.

The risks from human drivers are also non-ergodic, so that suggests that if
automatic driving systems are even minimally better than the average driver
you should demand that _every other driver_ be replaced with them without
delay, because you will die _sooner_ with human drivers on the road.

Of course, you won't want to be replaced unless the automatic driver is better
than _you_, and human drivers all think they are far above average. And
that’s the real problem.

~~~
pjkundert
The real problem is that, yes, human drivers ARE better than computers — for
some unknown subset of driving. It’s that small subset of lethal mistakes you
avoid by not letting your computer drive that makes all the difference.

The computer must not just be better on average — people must know that it’s
better overall, across all the weird events where a human just might throw a
Hail Mary and survive!

~~~
AnimalMuppet
> It’s that small subset of lethal mistakes you avoid by not letting your
> computer drive that makes all the difference.

The small subset of lethal mistakes you avoid by not letting yourself drive
matter too...

Here's what I think is really going on. There's a transition from "not good
enough to trust with a human life" to "approximately as likely as a human to
make a fatal mistake, even if in different ways" to "clearly less likely to
cause a fatal accident than even a good human driver". But there's also a huge
pot of money for whoever wins this market. So at least some players are
pretending that they're further along the transition to "better than human",
hoping that the market believes them.

A second point: while we're in the state of "approximately as likely as a human
to make a fatal mistake, even if in different ways", humans look at the
mistakes the computer makes and think: "That's a really _stupid_ mistake to
make. How could it be so dumb?" But just because the mistakes aren't ones that
a human would make very often doesn't mean the computer isn't on par with
humans in terms of overall death rate. The computer still looks bad on the
mistakes it makes.

------
YeGoblynQueenne
>> The company tells Dan Noyes they have made it clear -- Autopilot is a
driver assistance system that requires you to pay attention to the road at all
times.

The question is why it is Tesla, in particular, that has to remind its
customers that they have to actually pay attention when they drive its cars.
Is it possible the company somehow managed to convince people that its cars
are equipped with self-driving software, rather than a "driver assistance
system"?

------
kalal
When the self-driving revolution started, conservative people claimed that one
accident would stop the whole business. Well, there have been many accidents
already and people are still willing to take the risk. One reason may be our
tendency to trust things we do not understand: we feel that if the feature is
already in the car, it must be safe. Well, it is not. And even worse, it is
driven more by marketing and money than by safety regulations.

------
hugh4life
Self-driving cars have absolutely no business being on any road or lane that is
not purposely made and reserved for self-driving cars. Period. Self-driving
cars need an ecosystem built for them. It's frankly madness to think
otherwise.

~~~
neolefty
What if they improve safety on shared roads but aren't perfect?

~~~
dannyw
If they are unable to avoid blatantly obvious yellow-and-black barriers and
safety cushions, and their algorithm is a simple "follow the whitest white
line", then they should not be on shared roads.

~~~
neolefty
Again, they mostly do. Same with human drivers.

By the same logic, should nobody drive?

------
StefanFrost
So, aside from the fact that I cannot afford these cars, I would also give
them at least 10 years to mature and get past what really does seem like a
beta.

Right from the start I wondered how this feature was even allowed on the road,
just from a regulatory point of view. It very much sounded and looked like a
complete autopilot.

That said, people go through a driver's test, get their license, and are then
allowed to drive pretty much any vehicle. Could you solve this by giving the
driver assistant a test and seeing if it passes? This needs some kind of
filter or regulator to make sure it is safe.

------
Jabbles
To misquote Elon Musk:

 _The advantage of getting somewhere [without having to steer] will be
negatively affected if "but also, you might die" is on the ticket._

[https://www.reddit.com/r/space/comments/76e79c/i_am_elon_mus...](https://www.reddit.com/r/space/comments/76e79c/i_am_elon_musk_ask_me_anything_about_bfr/dodes6v/)

------
bobsil1
Btw, the linked accident involved a Model S with Tesla Autopilot 1. You can
tell because there is no camera on the silver blinker assembly. They're now on
2.5.

~~~
josefresco
Can autopilot v1 cars be upgraded to v2.5? I'm assuming since you mentioned a
camera it's not just software.

~~~
ohitsdom
No, Elon has tweeted a few times that it would be too extensive of an upgrade
process.

[https://twitter.com/elonmusk/status/789007996038721536?lang=...](https://twitter.com/elonmusk/status/789007996038721536?lang=en)

------
MPSimmons
I have to believe, at this point, given the markings on the road, that the
California Dept of Highways (or whoever is responsible for lane markings) is
just trying to kill people.

------
nodesocket
Just my opinion, but I don't think the local Bay Area ABC I-Team news can be
trusted as a reliable primary source. It seems like they are "cashing in" on a
national story.

~~~
danso
Has anything in their past exclusives on this story been disputed?

Victim who died in Tesla crash had complained about Autopilot:
[http://abc7news.com/automotive/i-team-exclusive-victim-
who-d...](http://abc7news.com/automotive/i-team-exclusive-victim-who-died-in-
tesla-crash-had-complained-about-autopilot/3275600/)

I-Team investigates why CalTrans didn't fix safety barrier before Tesla driver
died there: [http://abc7news.com/automotive/exclusive-i-team-
investigates...](http://abc7news.com/automotive/exclusive-i-team-investigates-
why-caltrans-didnt-fix-safety-barrier-before-tesla-driver-died-there/3280399/)

------
spullara
If you haven't driven a Tesla under autopilot please stop commenting about it.
You really don't know what you are talking about.

