
Tesla sued in wrongful death lawsuit that alleges Autopilot caused crash - mindgam3
https://techcrunch.com/2019/05/01/tesla-sued-in-wrongful-death-lawsuit-that-alleges-autopilot-caused-crash/
======
sytelus
Tesla's blog post: [https://www.tesla.com/en_GB/blog/update-last-week’s-accident](https://www.tesla.com/en_GB/blog/update-last-week’s-accident)

 _the driver’s hands were not detected on the wheel for six seconds prior to
the collision. The driver had about five seconds and 150 meters of
unobstructed view of the concrete divider_

Looks like the sensors failed to see a concrete divider in nice sunny weather and
the car slammed into it at 70 mph. The driver was obviously overconfident in the
system's ability to self-drive, probably busy looking at his phone, and ignored
warnings to put his hands on the steering wheel.

 _In the US, there is one automotive fatality every 86 million miles across
all vehicles from all manufacturers. For Tesla, there is one fatality,
including known pedestrian fatalities, every 320 million miles in vehicles
equipped with Autopilot hardware. If you are driving a Tesla equipped with
Autopilot hardware, you are 3.7 times less likely to be involved in a fatal
accident._
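
For what it's worth, the quoted "3.7 times" is just the ratio of the two rates:

    # Tesla's quoted rates: one fatality per N miles driven
    all_vehicles = 86e6     # across all vehicles and manufacturers
    autopilot_hw = 320e6    # Teslas equipped with Autopilot hardware

    print(autopilot_hw / all_vehicles)   # ~3.72, hence "3.7 times less likely"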

These stats don't help when you read that the guy had two kids who will now grow
up fatherless for the rest of their lives. Humans killing humans is a very
different thing from machines killing humans, even if the fatality rates are 10X
lower. Companies need to aggressively enforce keeping both hands on the wheel
until self-driving is really, really, really good.

~~~
rolleiflex
I'm paraphrasing from the last time this was posted, but Tesla's words need to
be read _very_ carefully. What they say is:

 _> the driver’s hands were not detected on the wheel for six seconds prior to
the collision_

What Tesla does _not_ say is that those six seconds were the six seconds
immediately preceding the crash. It could be _any_ six seconds before the
crash.

Admittedly this is a very dark reading; however, it is supported by their
further claim that the driver 'had multiple warnings', which in fact came
fifteen minutes earlier, for an unrelated event.

This is the kind of sketchiness that makes people wary of Tesla. The funny
thing is, they make cool tech; they don't need to do any of this, nor things
like discounting the 'gas savings' from the sticker price. I genuinely don't
understand where their apparent need to push the definition of truth to its
breaking point comes from.

Had they not openly misrepresented their Autopilot's capabilities, they
wouldn't have to lie in this bed of their own making now.

~~~
FartyMcFarter
They are still misrepresenting their self-driving capabilities in their
marketing materials:

[https://www.tesla.com/en_GB/autopilot?redirect=no](https://www.tesla.com/en_GB/autopilot?redirect=no)

At the beginning of the video there's the caption "The person in the driver's
seat is only there for legal reasons. He is not doing anything. The car is
driving itself."

If the lawyers in this lawsuit are any good they will have a field day with
this.

~~~
m463
That is pre-release full-self-driving software under test.

It is definitely NOT a feature that has been released to the public.

The traffic-aware cruise control and self-steering features Tesla has released
to the public require your hands to be on the wheel. If you take your hands
off the wheel you will get a small warning, then a larger warning, and finally
a huge alert.

By the way, if you show a pattern of ignoring alerts it will refuse to drive
for you anymore.

------
areoform
I would like to add here that Tesla does indeed do sensor fusion in their
cars. Their Autopilot combines radar and ultrasound with vision to decide
where to drive. Commentators bringing up LIDAR are jumping the gun by assuming
that the existing sensors, in combination or individually, could not have
detected this anomaly. The problem here is more likely a bug in their software
than a simple lack of additional sensors (LIDAR). And this touches on deeper
issues that could have profound ramifications for the autonomous driving
industry and for industry in general.

At least, in my eyes, the big problem with the autonomous car industry isn't
the sensor suites they are deploying (or not), but the over-reliance on neural
networks. They are black boxes with failure modes that can't be adequately
mapped. See:
[http://www.evolvingai.org/fooling](http://www.evolvingai.org/fooling)
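
To illustrate the linear-model intuition behind those fooling results, here is a
toy numpy sketch (a plain linear "classifier", nothing like Tesla's actual
stack): in high dimensions, a uniform per-component nudge far smaller than
ordinary noise is enough to flip a decision.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=1000)       # stand-in model: "obstacle" iff w @ x > 0
    x = rng.normal(size=1000)       # stand-in "image" with unit-variance pixels

    score = w @ x
    # Smallest uniform per-component nudge that flips the decision:
    eps = 1.01 * abs(score) / np.abs(w).sum()
    x_adv = x - eps * np.sign(w) * np.sign(score)

    print(eps)                          # tiny relative to the unit-variance input
    print(score > 0, (w @ x_adv) > 0)   # the decision flips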

What if the neural net or the system used to detect obstacles didn't see it
because the precise configuration of the data fooled it? And if that's the
case then what's next? How do we decide when it is okay for safety-critical
systems to be opaque? How do we deal with autonomous driving if the conclusion
comes out to be a "no" for this case? How should broader society deal with a
yes? And who decides all of this in the first place?

Possibilities like this scare me far more than the lack of LIDAR because
replicating a bug like this would be next to impossible. We don't know what we
don't know, and we can't explore and understand the system to suss out what we
don't know.

Edit: Fleshed out the idea with more questions.

~~~
haditab
Your comment actually emphasizes why having lidar is better.

Cameras need neural networks to detect objects. Lidar, on the other hand,
provides distances to all the surrounding objects. The neural networks
processing the point cloud may not be able to determine what an object is, but
because of the distances available in the point cloud data you will know there
is something ahead.
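
A minimal sketch of that point (numpy; vehicle-frame coordinates with x forward
and y left are assumed, and the corridor width, ground filter, and hit count are
invented thresholds):

    import numpy as np

    def obstacle_ahead(points, lane_half_width=1.5, max_range=60.0):
        """points: (N, 3) lidar returns in the vehicle frame.
        No classification needed: enough returns inside the forward
        corridor count as an obstacle, whatever it is."""
        ahead = (points[:, 0] > 0) & (points[:, 0] < max_range)
        in_lane = np.abs(points[:, 1]) < lane_half_width
        above_road = points[:, 2] > 0.3          # drop ground returns
        return int(np.sum(ahead & in_lane & above_road)) >= 10

    # A wall of returns 40 m dead ahead reads as an obstacle whether or
    # not any model knows what a "divider" is.
    cloud = np.column_stack([np.full(50, 40.0), np.zeros(50), np.full(50, 1.0)])
    print(obstacle_ahead(cloud))   # True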

Fusing radar with camera does help but radars are noisy and unreliable.

~~~
bobsil1
Lidar is active (so lower-res than vision) and uses slightly longer
wavelengths (again, lower res). Visible light uses passive solar illumination
by day, is power-efficient, and carries far more data; and vision is a problem
you need to solve for reliable autonomy independently of lidar.

~~~
studentrob
> is a problem you need to solve for reliable autonomy independent of lidar

Not for L4. A Level 4 vehicle is not expected to operate under all conditions;
it can be restricted to particular weather and locations.

------
saagarjha
The problem with Tesla's "Autopilot" is that it's marketed heavily as just
that: autopilot. It's good enough to work that way 99% of the time, too, which
is a recipe for disaster: things that work 99% of the time but have the
potential to fail catastrophically often get glossed over by humans because
it's hard to keep paying attention to something that rarely fails.

~~~
LeoPanthera
I'm not a pilot, but I have had flying lessons. Airplane autopilots are
extremely simple. They can basically fly you in a straight line, at a fixed
altitude. Tesla's autopilot is actually way more advanced than those found in
airplanes.

~~~
rayiner
A commercial airliner autopilot (the kind people think of when you say
autopilot) can perform evasive maneuvers and land the plane.

~~~
bagels
What are they evading?

~~~
beering
Other airplanes. Also the ground.

But the simple autopilots (think an old Cessna) don't even try to avoid the
ground for you. If you set it and forget it, you may find yourself dead on a
mountainside.

~~~
mkesper
You can absolutely crash a modern plane by entering altitude 0.
[https://www.bbc.com/news/world-europe-32063587](https://www.bbc.com/news/world-europe-32063587)

~~~
garmaine
There are runways below sea level.

And low pressure zones.

------
neotek
That's what happens when you mislead your customers into believing your self-
driving technology is far more advanced than it actually is. I hope Tesla
loses this case and is forced to change their bullshit marketing before more
people lose their lives.

~~~
aneutron
But they actually didn't. They specifically tell you it is NOT a fully
autonomous autopilot and that it's more of an advanced cruise control.

They also specifically tell you to KEEP YOUR EYES ON THE ROAD. If you fail to
do so, you're asking for trouble.

It is unfortunate that the guy had kids who will now grow up without their
father, but judging from the information I've read in the Tesla blog post, it
is clear that he was in the wrong here, at least partially, for not paying
attention while driving.

~~~
aduric
So why call it 'autopilot'? Seems like bullshit marketing to me. The first
association we make when we hear that term is hands-off, not 'advanced cruise
control'. They're specifically using that terminology to sell their products
and now they're paying the price.

~~~
zip1234
In an airplane, autopilot still requires the pilot to pay attention.

~~~
slavik81
In an emergency, an airplane pilot often has time to pull out the operating
handbook, flip through to the emergency checklist for their particular
problem, and follow the instructions. Even for time-critical emergencies, it's
recommended to read through the checklist afterwards to ensure you didn't
forget anything [1].

By contrast, it would be very unsafe to be reading the Tesla owner's manual
while driving. The level of attention required is much higher.

[1]:
[http://www.tc.gc.ca/eng/civilaviation/publications/tp11575-e...](http://www.tc.gc.ca/eng/civilaviation/publications/tp11575-ex12-73.htm)

------
CoolGuySteve
If the Tesla had a proximity sensor that slammed on the brakes in the last 2-3
meters, would he have survived? I ask because it's a common luxury car feature
and would be easy for Tesla to implement. IIRC, according to the NTSB, neither
this crash nor the semi-truck crash showed any evidence of Autopilot hitting
the brakes.

Even if his speed had only been reduced by a fraction, the energy involved in
the collision would have been reduced even more, since kinetic energy scales
with the square of speed.
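
To put rough numbers on that (illustrative speeds, not figures from the report):

    # Kinetic energy goes as v^2, so even modest braking removes a lot of energy.
    def energy_remaining(v_before, v_after):
        return (v_after / v_before) ** 2

    print(energy_remaining(70, 55))   # ~0.62: a 21% speed cut removes ~38% of the energy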

Edit: So it turns out the Model X does have automatic emergency braking, but
the preliminary report says that the Tesla actually increased its speed in the
3 seconds leading up to the crash. Sounds like a major software bug to me.

Here's a review of the Model S AEB compared to other cars; in particular,
Tesla's AEB can't handle a lead car moving out of the way, which is what the
NTSB report says happened:
[https://www.caranddriver.com/features/a24511826/safety-featu...](https://www.caranddriver.com/features/a24511826/safety-features-automatic-braking-system-tested-explained/)

Here's the NTSB preliminary report:
[https://www.ntsb.gov/investigations/AccidentReports/Reports/...](https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18FH011-preliminary.pdf)

~~~
zaroth
His car increased speed because the adaptive cruise was set to a higher speed
than his car was going, and for 4 seconds before impact there were no cars in
front of him.

The adaptive cruise increased speed to his set point when his car left the
flow of traffic. The AEB will not trigger on a stationary object at highway
speeds. I do not believe there exists any AEB that will emergency-brake at
highway speed, except perhaps for a pedestrian-shaped object.

He hit a crash attenuator which had not been reset (so basically direct impact
with concrete) after it had been hit by another car 11 days prior.

Resetting the attenuators is a simple task, but apparently this particular one
is hit fairly frequently.

~~~
rasz
>crash attenuator

>this particular one is hit fairly frequently.

Interesting way to build your road infrastructure

~~~
MertsA
One way or another highways have portions where there has to be a solid, flat,
vertical surface on the edges of barriers, exit ramps, etc. It's similar in
many ways to the hairy ball theorem.

[https://en.wikipedia.org/wiki/Hairy_ball_theorem](https://en.wikipedia.org/wiki/Hairy_ball_theorem)

You can't just round over the edge of the barrier, because that still leaves a
portion where a car could collide perpendicular to it. You can't just slope it
down to the ground, unless you want an impromptu Dukes of Hazzard reenactment.
You can't have just an unguarded edge because that would just be a death
sentence if you hit it at any appreciable speed. The solution is to place
crash attenuators so that they slow the vehicle down over some added distance,
rather than just over the length of the car's crush zone.

[https://megarentalsinc.com/sites/default/files/feature-image...](https://megarentalsinc.com/sites/default/files/feature-images/crash_cushions2.jpg)

------
helloindia
A more nuanced dissection of whose fault it was or could be [0].

"Huang was apparently fooled many times by Autopilot. In fact, he reportedly
experienced the exact same system failure that led to his fatal crash at the
same location on at least seven occasions." [...]

"Huang knew that Autopilot could not be relied on in the circumstances where
he was commuting along the 101 freeway in Mountainview, California. Yet he
persisted in both using the system and ignoring the alerts that the system
apparently gave him to put his hands on the wheel and take full control."
[...]

"Elon Musk and Tesla should be held to account for the way they have rolled
out and promoted Autopilot. But users like Walter Huang are probably not the
poster children for this accountability."

[0] [https://www.forbes.com/sites/samabuelsamid/2019/05/01/the-pr...](https://www.forbes.com/sites/samabuelsamid/2019/05/01/the-problem-with-blaming-tesla-for-walter-huangs-death/#59a9c59a5c88)

~~~
chli
Or maybe, as I mentioned in a previous comment [1]:

    - Mr. Huang knew about the bad spot
    - A Tesla OTA solved the issue for that particular spot
    - He got used to the car behaving properly in that location
    - A Tesla OTA introduced a regression for that particular spot
    - Mr. Huang died.

[1]
[https://news.ycombinator.com/item?id=17141784](https://news.ycombinator.com/item?id=17141784)

~~~
unityByFreedom
Yep, or it didn't happen all the time and he perceived it was fixed after
reporting it. We do not have all the facts.

------
barrkel
I think it's deeply irresponsible to create a device which can drive in 90 or
95% of scenarios. It's just a way to fool humans into killing themselves.

~~~
CoolGuySteve
If the system is even a few percent better than humans then it would save
thousands of lives when widely deployed.

Musk has made this argument many times. Basically don't let perfect be the
enemy of better.

But if the vendor is legally liable in each case then this software is
untenable.

~~~
toast0
Tesla (and/or Musk) keep saying it's safe to let the car drive, and then when
you let the car drive and it kills you, they say you're holding it wrong.

If you're actually trying to increase safety, the right things to build are
computers to supervise the human driver, and not systems that require a human
to supervise the computer driver.

If the human drifts out of the lane, nudge the car back in and beep, but don't
prevent the human from intentionally going over the lines.

If the human appears to be driving into an obstacle, beep and apply the
brakes, but allow the human to override.

If the human isn't paying attention, beep, then turn on the hazard lights and
slow down / try to find a safe place to pull over.

~~~
hw
A computer supervising the driver is what got Boeing into problems with MCAS,
resulting in the two plane crashes.

Tesla also doesn't say that it's safe to let the car drive. There are ample
warnings and messages and feedback that say that you have to keep your eyes on
the road and hands on the wheel at all times.

There have likely been more non-AP Tesla and non-Tesla auto accidents and
fatalities than AP-related ones. I feel safer riding in a Tesla with AP
engaged and the driver keeping his eyes on the road and hands on the wheel.

~~~
civilitty
What got Boeing into problems is management overpromising something
engineering couldn't deliver, just like Tesla is doing with "autopilot."

Boeing promised a brand new and improved jet that was cheaper because it
packed in a bunch of improvements without having to go through certification
on a new airframe or pilot retraining. Engineering tried their best, but
whether through negligence or "just following orders" and bureaucracy, they
failed catastrophically, as any engineer will tell you would happen with a
sensor that lacks redundancy (a 3+ vote configuration) yet can override pilot
control inputs based on faulty readings.

------
baoha
While I'm not quite sure about Tesla's responsibility, I do think the CA DOT
has its part in this tragic accident. Had the attenuator been replaced right
after the previous accident, it could have saved the driver's life.

Usually I don't complain much about the government, but just look at the
construction mess they've created on 101; it's been like that for more than
four years!

~~~
threeseed
So don't blame the company who's claiming that their car can "autopilot" under
all conditions.

Instead blame the road. Interesting take.

~~~
natch
This is a bizarrely inaccurate mischaracterization of what Tesla has ever said
about autopilot.

~~~
unityByFreedom
Yes, never mind the video emphasizing that "the driver is doing nothing",
which has been on their tesla.com/autopilot page since 2016, or the claim that
all vehicles come with the hardware for full autonomy.

There's nothing bizarre about threeseed's characterization.

[https://www.tesla.com/autopilot](https://www.tesla.com/autopilot)

~~~
ccorda
That is a demo of full self-driving, which is a perpetually-coming-soon
feature. You even need to pay for it separately.

The actual section on what autopilot can do and what it requires of the driver
is pretty clear:

> Autopilot advanced safety and convenience features are designed to assist
> you with the most burdensome parts of driving. Autopilot introduces new
> features and improves existing functionality to make your Tesla safer and
> more capable over time.

> Your Tesla will match speed to traffic conditions, keep within a lane,
> automatically change lanes without requiring driver input, transition from
> one freeway to another, exit the freeway when your destination is near,
> self-park when near a parking spot and be summoned to and from your garage.

> Current Autopilot features require active driver supervision and do not make
> the vehicle autonomous.

As a Model 3 owner, I can say it does do those things quite accurately.
Certainly more so than the Mobileye-based system on our Volvo, which can't
lane-keep and stops abruptly.

I find it convenient and more relaxing when driving, but I heed their warning
(which I also agreed to when enabling Autopilot in the car) and pay attention,
staying prepared to take over at all times.

------
patejam
Tesla is playing with lives and should stop offering Autopilot.

Anything less than L4 autonomous driving is completely reckless. Calling it
"Autopilot" when it's an L2 system should be criminal.

Yes, it's an extreme stance, but we're going to have a really hard time
getting to true autonomous driving when companies are playing around with
people's lives. You can say "well, they know the risks", but it's not a closed
system: there are others on the road who will also die because of Autopilot
mistakes.

~~~
goshx
We are going to have a really hard time getting to true autonomous driving if
they _don't_ do this.

That's how the system will learn to deal with real-life scenarios.

~~~
patejam
You're okay training an incomplete self driving machine learning model on the
general population?

~~~
goshx
I actually help it every day during my commute. They are not stupid, and this
isn't as unsafe as people who have no actual experience with it claim.

~~~
threeseed
People have died. So it seems pretty unsafe to me.

~~~
tasubotadas
People have died in cars without autopilot. So what do you suggest we do with
regular cars?

------
rayiner
I hear people say “self driving cars don’t have to be perfect, they just need
to be safer than humans.” Here is a good example of why that might be harder
to achieve than expected. Apparently the car had gotten confused at that exact
location repeatedly before. That’s what self driving cars are going to do—if
there is something “weird” it’s likely that every self driving car (at least,
every one from the same manufacturer) that encounters the weird scenario will
run into a problem. That could result in catastrophic failure modes at scale.

What would have happened if every car on that road had been an identical
Tesla? How many crashes would have happened? How long would it have taken
Tesla to issue a fix? How many miles of perfect driving would be required to
make up for the cluster of crash events due to that one anomaly?

~~~
LeoPanthera
To play devil's advocate:

Assuming self-driving cars are connected to each other, either directly or via
the internet, if one car crashes, or even just makes a minor mistake, the
knowledge of how to avoid that can be transmitted to every other car on the
planet.

~~~
valine
Seems unlikely that the knowledge of how to avoid making a mistake could be
compiled without human analysis. The technology to do that autonomously
doesn’t exist. It would almost require artificial general intelligence, and if
we had that self driving would be trivial.

~~~
londons_explore
The "human analysis" could be completed within 30 seconds and beamed out to
all cars in the area.

No reason Tesla can't have operatives on duty 24/7 to assist and tweak/train
the fleet of cars.

------
protomyth
So, in a traditional embedded system, discovery can reveal the source code and
help determine what went wrong (e.g., a divide-by-zero error in an x-ray
machine). What are the lawyers going to get when they start looking into
Tesla's Autopilot software?

~~~
nn3
Likely they will get a gigantic sea of trained deep-learning weights that no
one understands.

~~~
protomyth
That really isn't going to play well with a judge or jury. I can see the poor
programmer up on the stand, unable to say what any of it means or how the car
made its decisions.

~~~
salawat
That is, unfortunately, the problem with neural nets in general.

They are amazing logical constructs, but there is a fundamental opaqueness to
them: absent sufficient neural network mass to convincingly simulate a human,
we can't apply the same methods of formal verification and behavioral
inference we could to other, more specific machine implementations.

No one can explain just why certain weights work in certain situations and not
others. They just do.

Whether anyone in the justice system is comfortable effectively legislating
from the bench, by creating precedent that holds companies liable for NN-based
behavior there is no hard-and-fast way to proof-test against in the first
place, is another question, however.

~~~
blevin
It’s almost as though you have to treat them like human drivers. We cannot
formally verify 16 year olds, either, or sufficiently introspect accurate
reasons for their behavior. Instead, we require them to pass a test, we apply
actuarial cost models, etc.

~~~
smallnamespace
16-year-olds can talk and try to explain themselves. Humans try very hard to
make themselves understood. Neural nets can't (at least, not yet).

~~~
tim333
You can imagine what the Tesla NN would say if they built in an explanatory
speech system. Something like "Oops, sorry - I thought that white line was a
lane and didn't recognise the barrier." Not all that helpful here, really.

------
huhtenberg
This person died because of Tesla's cocky marketing, which leads people to
believe that "Auto Pilot" is just that, and which does very little to
discourage this interpretation even though it is life-threatening. In this
context, them blaming it on the accident victim is a 100% asshole move.

I am a life-long fan of Musk, but Tesla trying their hardest to weasel out of
any responsibility here is incredibly damaging to their reputation. The future
_will_ come; trying to accelerate its arrival at all costs is reckless.

~~~
mikejb
I generally agree with you on the Tesla marketing issues, but it's not
entirely black and white: the driver reported issues at that very location
multiple times, and complained about AP not working reliably. The driver was
definitely aware of flaws and issues with AP; worse yet, he was aware of
issues with AP at the location of the crash. Tesla's marketing may have
contributed and may generally give a false sense of functionality/security,
but Huang already _knew_ it wasn't true. Yet still, at the location where he
knew AP had problems, he failed to pay attention to the road.

------
rjdagost
Tesla made some very big claims at their recent autonomy day event: basically,
they claimed that they are years ahead of competitors, while operating on
"hard" mode (no lidar). And yet, a number of participants in the short
autonomy day demo rides reported that the support driver had to disengage the
autopilot.

Has Tesla provided any evidence that they are in fact far beyond competitors?

~~~
studentrob
> Has Tesla provided any evidence that they are in fact far beyond
> competitors?

Evidence showing they are ahead in terms of customers? For now, maybe. In
terms of safety? No.

According to some fans, they sell a product you control. You're allowed to use
it anywhere, therefore, they're ahead.

According to Tesla, they designed a chip that's best-in-class and they share a
small amount of safety data quarterly [1]. Musk will not share any more data
[2] because that would allow people to "turn a positive into a negative".

[1]
[https://www.tesla.com/VehicleSafetyReport](https://www.tesla.com/VehicleSafetyReport)

[2]
[https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s](https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s)

~~~
rjdagost
Thanks for the links. Their "Vehicle Safety Report" is literally 3 sentences!
I expected such a report to have much more detailed information in it.

------
kvhdude
English is not my first language. I am confused by the blatant use of the term
"Auto Pilot". Does it not suggest more automation than is currently feasible?
Why not "intelligent assist"? Why are Tesla/Musk getting a pass here?

~~~
xeromal
Autopilot in an airplane is similar. It can do a bit and make the trip easier,
but it's not going to dodge a rock that happens to fall from the sky. It's a
pretty simple system that really just relies on instrumentation. It's not
using any fancy algorithms to make bold maneuvers. I might wager that Tesla's
is doing more.

~~~
mrtksn
Then maybe using the Autopilot of a Tesla vehicle should be subject to
rigorous training and repeat simulator sessions every 6 months, to ensure that
drivers are using it correctly and are aware of its current capabilities.

If it's like an airplane, it should be regulated like an airplane.

------
robinduckett
Do they not have rumble strips in the US?

[https://en.wikipedia.org/wiki/Rumble_strip](https://en.wikipedia.org/wiki/Rumble_strip)

I have been in situations in my youth where I was commuting or travelling
tired, and nothing alerts you like that loud rumbling sound. I'm sure the
autopilot could detect it and bring the car to a stop if the driver hasn't
been alerted by the in-car warning system or the rumble paint...

~~~
will_pseudonym
We do have them, though they're not required by law and are not always
present.

------
carlivar
I know the argument is always that autopilot is not intended by Tesla to be
abused (never mind the CEO on national television abusing it).

However, there's what you tell humans, and then there's pragmatism regarding
human nature. We should consider the latter.

It's possible to be both academically correct on the warnings/instructions
given and also practically wrong on human psychology.

~~~
Simple_Guy
Your reasoning is that people are stupid and therefore you should manipulate
them 'for their own good' because they have no self-agency and self-
responsibility.

Needless to say, people who think like this make terrible dictators when given
the chance.

~~~
semi-extrinsic
You could easily apply this logic to say "we should revoke laws that make it
illegal to drive without a seat belt", or dozens of other examples.

Humans are notorious for thinking "I'm in a real hurry, I'll leave the safety
system off because I likely won't get hurt". We know that laws, rules and
protocols with enforcement and negative consequences are required in order to
make humans act safely at scale.

------
gooseus
It seems like a (relatively) trivial addition to the Autopilot system would be
to allow the driver to tell the car when it has made a mistake; if Autopilots
consistently make mistakes in the same area, then Autopilot should give a
specific warning, or force a disengagement, when it detects that it is
approaching that area.

I can't imagine that this isn't already a thing or that someone at Tesla
hasn't come up with this... am I missing something here?
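
Naively, something like the sketch below; everything in it (coordinates,
radius, names) is invented for illustration:

    import math

    # Crowd-reported trouble spots, checked against the car's GPS fix
    # each planning cycle. The coordinates here are made up.
    TROUBLE_SPOTS = [(37.4109, -122.0750)]

    def distance_m(lat1, lon1, lat2, lon2):
        # Equirectangular approximation; fine over a few hundred meters.
        dx = math.radians(lon2 - lon1) * math.cos(math.radians(lat1)) * 6371000
        dy = math.radians(lat2 - lat1) * 6371000
        return math.hypot(dx, dy)

    def autopilot_action(lat, lon, warn_radius_m=300):
        for slat, slon in TROUBLE_SPOTS:
            if distance_m(lat, lon, slat, slon) < warn_radius_m:
                return "warn_and_require_hands"   # or force a disengagement
        return "normal"

    print(autopilot_action(37.4110, -122.0751))   # "warn_and_require_hands"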

~~~
deckar01
Tesla engineers talked about this in great depth during the recent streaming
event. Manual disengagement of Autopilot is the mechanism used to trigger an
upload of the sensor data. It is automatically uploaded to Tesla servers and
used to train the driving model.

~~~
rjdagost
A Twitter user who rooted his Tesla claims that autopilot disengagement
reports are very small (< 1 KB) and do not contain any actual sensor data,
just things like GPS coordinates, speed, heading, etc.:
[https://twitter.com/greentheonly/status/1096322810694287361](https://twitter.com/greentheonly/status/1096322810694287361)

Not the kind of information that you use to train computer vision algorithms.
I found his claims to be an interesting read.
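
If the reports really are just a handful of scalar fields, the size makes
sense. A hypothetical illustration of such a record (field names invented, not
Tesla's actual format):

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class DisengagementReport:   # hypothetical shape, per the tweet's description
        timestamp: float
        lat: float
        lon: float
        speed_mph: float
        heading_deg: float
        reason: str

    r = DisengagementReport(1556722800.0, 37.41, -122.08, 68.0, 320.0,
                            "driver_override")
    print(len(json.dumps(asdict(r))))   # ~130 bytes: well under 1 KB, versus
                                        # megabytes for even one camera frame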

~~~
natch
Definitely interesting but I wouldn’t trust a rooted system to behave the same
as a non-rooted system.

Nor is there any guarantee that the snapshot in time when he did his research
is representative of everything they do.

And such logging behavior can vary by car, by location, and by software
version.

I tend to think it is in Tesla’s interest to collect the most useful data,
efficiently, and make good tradeoffs when doing so. They say they are
gathering data on interventions that will help improve their neural nets. I
believe them.

------
11thEarlOfMar
The crux of the issue will be that autonomous capabilities will have (and
already have) made cars overall safer. Tesla will be able to cite data and
circumstances where the safety features saved lives, likely more times than
lives were lost.

The problem is that the lives lost when the safety features fail are different
lives than would have been lost without them at all. The families of those
killed in this manner will have their day in court.

~~~
studentrob
> Tesla will be able to cite data and circumstances where the safety features
> saved lives, and likely more times than lives were lost.

It would be great if Tesla would share such data; however, their safety report
is only a couple of sentences [1], and Musk just recently refused to share
more safety data because it would allow people to "turn a positive into a
negative" [2].

[1]
[https://www.tesla.com/VehicleSafetyReport](https://www.tesla.com/VehicleSafetyReport)

[2]
[https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s](https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s)

------
hackerpacker
Can we just make a law that the driver is responsible/liable for the car they
drive, regardless of how they drive it? I mean, brakes failing while you are
actively driving is one thing, but to completely surrender control, that is a
choice.

~~~
mindfulplay
But that's what Elon promises. He is careful never to publicly say it outright
but his PR machinery wants you to believe this car can do things it literally
cannot.

~~~
v0x
I really like a lot of what Tesla and Musk do, but their PR campaigns are
absolutely repugnant. Like in this case where they try to make the NTSB out to
be the bad guy when in reality they go completely against accepted NTSB
standards in an attempt to control the narrative. It's awful.

------
sjg007
Tesla is certainly on the knife's edge... they must be extremely confident.

~~~
pauljurczak
or delusional ...

------
_pmf_
How do you feel as a Tesla owner knowing that after your death, Tesla will
publicly post that it was your fault, based on data from your vehicle?

------
jordache
Why does this TC article need to mention he was an Apple engineer? Just trying
to fill space?

------
intrasight
"Move fast and break things"

So it has to be for rapid technological progress to be made.

------
Jonanin
That the news about this case is being spread everywhere irks me a bit,
because people aren't considering the actual circumstances of the accident.
Excerpt from the article:

"According to the family, Mr. Huang was well aware that Autopilot was not
perfect and, specifically, he told them it was not reliable in that exact
location, yet he nonetheless engaged Autopilot at that location. The crash
happened on a clear day with several hundred feet of visibility ahead, which
means that the only way for this accident to have occurred is if Mr. Huang was
not paying attention to the road, despite the car providing multiple warnings
to do so."

This deserves a WTF. He understood autopilot makes errors, complained to his
wife several times [0] that the car usually makes errors in that exact spot,
and yet wasn't paying enough attention on a clear day with ideal driving
conditions to commute safely.

[0] "Family members say he complained about his Tesla veering into the same
barrier at the exact location of the crash and that he brought his car into
the dealership several times to report a problem with the autopilot function."
from [https://sanfrancisco.cbslocal.com/2019/05/01/family-driver-d...](https://sanfrancisco.cbslocal.com/2019/05/01/family-driver-died-tesla-autopilot-crash-files-lawsuit/)

~~~
yumraj
Your argument sounds the same as: people who smoke know that cigarettes cause
cancer, in fact it says so on the pack, yet they still decide to smoke. So if
they get cancer and die, it's their fault, not the cigarette manufacturer's.

~~~
mikeash
Cigarette manufacturers got sued into oblivion because they suppressed science
and insisted their product was beneficial. Now that they acknowledge the
dangers, nobody’s suing.

~~~
threeseed
Of course people are still suing. And in many countries governments are
continuing to sue as well.

Just Google it. Plenty of examples.

~~~
mikeash
How many of them are not based on previous lies and fraudulent claims by
tobacco companies?

------
newnewpdro
Tesla should get taken to the cleaners for this, I _hate_ that these cars
w/"autopilot" are operated on the same streets I use.

They should focus on making quality, well-performing electric cars. Stop using
us all as beta testers for an autonomous future most of us never asked for.

------
lunulata
Living around Silicon Valley with all the Tesla drivers here, you learn real
fast that they make for some of the worst drivers out on the road. I regularly
see them running red lights and just generally not paying attention. They lean
too hard on the autopilot as a crutch and dick around on their smartphones,
which is probably exactly what this guy was doing. It's bad for Tesla drivers
and it's bad for drivers around them. I hope Tesla loses this lawsuit just
because of that.

~~~
beering
Is it that Tesla drivers are actually worse, or that Teslas stick out to you
more than other mundane cars and are pretty common around Silicon Valley?

~~~
lunulata
Good point, it could be that they stick out more to me. They are super common
in SV. I've personally had Teslas run red lights in front of me on two
different occasions within the last couple of months, and those particularly
stand out in my mind since they put my life in danger mid-intersection. I ride
a motorcycle and suspect Tesla's autopilot is particularly bad at spotting
motorcyclists.

------
jbritton
I think self-driving cars should have both high-dynamic-range cameras and
LIDAR, and maybe time-of-flight cameras. Input from a LIDAR system would be
much more likely to detect that barrier, and computer vision via a camera much
more likely to be fooled. I think an investigation into why the computer
vision system failed to detect a barrier under clear daylight conditions will
show negligence on the part of Tesla. Lane lines are frequently not well
marked, and sunlight glare is a difficult problem for cameras. However, you
have to be able to detect a concrete barrier, even in the worst of conditions.
Does Tesla have in place some way of determining its lane-detection accuracy,
so it can alert the driver that it is turning off Autopilot when accuracy is
low?
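
The kind of check being asked about could be as simple as the sketch below
(names and thresholds invented; just an illustration of confidence-gated
disengagement):

    def should_disengage(confidence_history, threshold=0.6, window=10):
        """Disengage when the last `window` lane-confidence scores are all
        low, rather than reacting to a single noisy frame."""
        recent = confidence_history[-window:]
        return len(recent) == window and max(recent) < threshold

    print(should_disengage([0.9] * 5 + [0.3] * 10))   # True: sustained low confidence
    print(should_disengage([0.9] * 10 + [0.3] * 5))   # False: not sustained yet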

~~~
natch
Yes it shuts off with a beep if it can’t handle current conditions. In case
you think this kind of abrupt shutoff sounds dangerous, keep in mind this is
in the current generation of the system which relies on a human driver being
attentive and ready to take over at all times.

~~~
unityByFreedom
> In case you think this kind of abrupt shutoff sounds dangerous, keep in mind
> this is in the current generation of the system which relies on a human
> driver being attentive and ready to take over at all times.

What is a driver to do when the system randomly decides to brake? [1]

Phantom braking has been an issue for a while, and has yet to be acknowledged
by Tesla.

[1]
[https://www.reddit.com/r/teslamotors/comments/b5yx1o/welp_th...](https://www.reddit.com/r/teslamotors/comments/b5yx1o/welp_this_happened_today_while_in_autopilot/)

~~~
natch
They do acknowledge it. You asked what to do: apply gentle force to the
accelerator pedal. And that video is from a super-old software version.

~~~
mastware
> they do acknowledge it.

Where has Tesla acknowledged phantom braking as a persistent issue? It has
been around since 2016.

> that video is from a super old software version.

Looks like the video was taken a month ago. The driver had the version of
software that Tesla gave him. Drivers can't choose which software version they
get, so if it was old, that's on Tesla, not the driver.

~~~
kwhitefoot
You can choose not to update.

~~~
mastware
The above example is about a case where a driver did not receive an update. He
can't choose to download it.

------
cmurf
I think it's unacceptable for automation to produce a worse result than a
human in the same situation with the same information. I.e., it is not
acceptable for automation to fail dangerous; it must fail safe, even if all it
can do is give up (disconnect, sound a warning tone, hand control over to the
human driver).

I think it's reasonable, in the narrow case where primary control is asserted
and competency is claimed to be as good as or better than a human's, to hold
automation accountable the same as a human. And in this case, if this driver
had acted the way Autopilot did based on the same available information, we
would say the driver committed suicide or was somehow incompetent.

I see this as possibly a case of automation committing involuntary
manslaughter (unintentional homicide from criminally negligent or reckless
conduct).

~~~
corodra
Well, this brings to light the concept from the movie I, Robot. Is it a crime
when an AI kills someone, unintentionally or not, or is it an industrial
accident? If it's supposed to replace a human task due to a "level of
intelligence", is it still taxed as equipment or as an employee? Is the
company (due to equipment failure) or the AI (choosing to take an action on
its own) at fault?

To be fair, these questions need hard lines drawn pretty soon.

~~~
mr_toad
> these questions need to be hard lined pretty soon

I’d argue that they’re a long way from being an issue.

Firstly, current AI are far less ‘intelligent’ than almost any animal. Even a
mosquito has more brain power, and we don’t think twice about killing
mosquitoes.

Secondly, even if an AI had similar intelligence to a human, there is no
reason to believe it would be a moral creature, capable of making moral
judgments, and being judged as such. Our morality evolved over thousands,
probably millions, of years (or if you prefer, it was granted by some divine
power). Either way, intelligence and morality aren’t synonymous.

------
CriticalCathed
So many people here have hard-ons for Tesla hate.

The guy fucked up, bad. Tesla is not at fault here.

>"According to the family, Mr. Huang was well aware that Autopilot was not
perfect and, specifically, he told them it was not reliable in that exact
location, yet he nonetheless engaged Autopilot at that location. The crash
happened on a clear day with several hundred feet of visibility ahead, which
means that the only way for this accident to have occurred is if Mr. Huang was
not paying attention to the road, despite the car providing multiple warnings
to do so."

