NTSB sending investigators to fatal Tesla-semitrailer crash (woodtv.com)
47 points by newnewpdro 16 days ago | 57 comments



Again?

So far, Teslas have crashed into two semitrailers, a street sweeper [1], a stalled car partially in the left lane, a stalled truck partially in the left lane, a fire truck, and two road barriers. We don't know about the latest one yet, but in all the other cases, the crash was not preceded by automatic emergency braking.

Tesla lacks basic obstacle detection. The original Mobileye system had "recognize car" as the first step. If it didn't look like a car, it wasn't an obstacle. Tesla claims to have replaced that with their own system, but we're still seeing full-speed collisions with obvious obstacles.

[1] https://www.youtube.com/watch?v=CmdvSXkTAxE


We don't even know if the autopilot was on in this case. And lots of self-driving systems are designed to ignore stationary objects at highway speeds, because they get too many false positives and would be coming to a stop on the highway too often.
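The stationary-object filtering described here can be sketched roughly as follows. This is a hypothetical illustration of the heuristic, not any vendor's actual logic; the threshold value is an assumption:

```python
# Hypothetical sketch of the "ignore stationary objects" heuristic.
# A radar return with ~zero ground speed (it closes at exactly the ego
# vehicle's speed) is indistinguishable from an overpass, a sign
# gantry, or a parked car on the shoulder, so many ACC/AEB systems
# drop such targets above a speed threshold to avoid phantom braking.

EGO_SPEED_THRESHOLD_MS = 20.0  # m/s (~45 mph); illustrative cutoff

def should_track(ego_speed_ms: float, target_ground_speed_ms: float) -> bool:
    """Decide whether a radar target is considered for braking."""
    if abs(target_ground_speed_ms) > 1.0:
        return True  # moving target: always track
    # Stationary target: only honor it at low ego speed, where a
    # false positive costs little and stopping distances are short.
    return ego_speed_ms < EGO_SPEED_THRESHOLD_MS
```

The tradeoff is visible right in the code: at highway speed, a genuinely stopped vehicle in the lane gets filtered out along with all the harmless roadside clutter.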


Your first sentence is an important point.

I don't understand the rest of the comment though. Hard to reconcile "self-driving" with "ignore stationary objects".


There are many other systems like this on other cars, like the Subaru I just rented. There is no way I would trust them to not kill me unmonitored.

I don't think of anything available to end users today as "self-driving". Autopilot is a better analogy, even if people don't seem to understand what that means.


It doesn't help that Elon misleads the public. Tesla's "autopilot" is a Level 2 system; you're meant to keep your hands on the wheel. And indeed that's exactly what the owners manual tells you: keep your hands on the wheel. But then Elon goes on national television and takes his hands off the wheel. What kind of message does that send to consumers?

Then there is the matter of "autopilot" being a bad analogy even if it were technically accurate (which I dispute, since AFAIK airplane autopilots officially allow pilots to take their hands off the controls, unlike Tesla's system). The general public has serious misconceptions about what airplane autopilots actually do. Aircraft pilots receive serious training, so they don't share those misconceptions, which is why aircraft autopilots aren't a problem. But much of the general public is under the mistaken impression that pilots of modern airliners simply push a button and take a nap while the airplane takes off, flies, and lands itself. You and I both know that's not reality, but the misconception is widespread, making Tesla's use of the term likely to mislead, and therefore irresponsible.

See pages 73-74 of the Model 3 owner's manual: https://www.tesla.com/content/dam/tesla/Ownership/Own/Model%...

    Warning: Autosteer is a hands-on feature.
             You must keep your hands on the steering wheel at all times.

    Warning: Autosteer is intended for use only on highways and limited-
             access roads with a fully attentive driver. When using
             Autosteer, hold the steering wheel and be mindful of road
             conditions and surrounding traffic.*

    Warning: Autosteer is not designed to, and will not, steer Model 3
             around objects partially or completely in the driving
             lane. Always watch the road in front of you and stay prepared
             to take appropriate action. It is the driver's responsibility
             to be in control of Model 3 at all times.


Every other Level 2 system advertisement I could find shows the driver with their hands off the wheel, at least to some extent:

[1] - Nissan - https://youtu.be/HkL67DgleQY

[2] - Ford - https://youtu.be/Z_9218dTXXY

[3] - Audi - https://youtu.be/nUlK6fpveXg

[4] - BMW - https://youtu.be/xsQvq4WlUYU


Autopilot allows hands off the controls because the frequency of input is lower and the level of training is higher. No analogy is perfect, but it's not half bad.

And it's not like there haven't been issues with auto pilot or people bemoaning it as some moral hazard for pilots.


That was explicitly mentioned in the case where the Uber car hit a pedestrian: the vehicle's standard automated emergency braking had been disabled in favor of Uber's self-driving system, and the braking force of Uber's system had been limited because it was constantly applying emergency braking.


That was at least an unreleased development version of the product, with a test driver in the car. And it was at a fairly low speed; one would hope that shipping automated braking systems would be active in that case.


That sounds to me like the technology is too immature for a safety critical system.

"We can't make this work without inconveniencing the driver, but dead people don't sue!"


Yeah, maybe.

"Most adaptive cruise control systems ignore stationary objects": https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

"Cruise cars frequently swerve and hesitate": https://arstechnica.com/cars/2018/12/uber-tesla-and-waymo-al...

This was true of Google cars at least in 2014: https://www.technologyreview.com/s/530276/hidden-obstacles-f...

Here's an overview of several systems, including details of Toyota's and Volvo's limitations: https://www.extremetech.com/extreme/270049-why-self-driving-...


Care to explain more about these ignore heuristics? I'm dying to hear about it.


I posted some links elsewhere in the thread, but the problem is pretty simple. The faster the car is going, the further ahead it needs to look. But the further away it looks, the lower the effective resolution of the sensors is. Eventually it can't tell the difference between an object in the lane and an object next to the lane, or a metal plate that's part of the road and something that's sticking up.
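The distance-vs-resolution tradeoff can be made concrete with back-of-envelope numbers. The angular resolution below is a hypothetical figure for illustration, not a published spec for any sensor:

```python
import math

# A sensor with fixed angular resolution has cross-range resolution
# that grows linearly with distance, so beyond some range it cannot
# tell an in-lane object from one in the adjacent lane. All constants
# here are illustrative assumptions.

LANE_WIDTH_M = 3.7       # typical US highway lane width
ANGULAR_RES_DEG = 4.0    # hypothetical radar azimuth resolution

def cross_range_resolution_m(range_m: float) -> float:
    """Lateral separation needed at a given range to resolve two targets."""
    return range_m * math.radians(ANGULAR_RES_DEG)

def max_lane_discrimination_range_m() -> float:
    """Range beyond which a one-lane lateral offset is unresolvable."""
    return LANE_WIDTH_M / math.radians(ANGULAR_RES_DEG)

# With these assumptions the sensor stops resolving adjacent lanes at
# roughly 53 m, while a car at 70 mph needs on the order of 100 m to stop.
```

The mismatch between those two numbers is exactly the problem the comment describes: by the time the sensor can localize the object to your lane, it may be too late to brake.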


Not going faster than you can handle is one of the top rules that every driver is, or should be, taught. Why is AI exempt?


Because it's a race between startups and other entities whose options are profitability or foreclosure. It's their standard mode of operation: move fast and break humans.

No idea why people thought companies would have handled this differently.

Beyond that, it's the usual overselling of "neural networks" and other classification technologies as AI, which they are not.


The video in your post is from 2016, when Teslas had the Mobileye system.


Car And Driver: We Crash Four Cars Repeatedly to Test the Latest Automatic Braking Safety Systems [1]

"To understand the strengths and weaknesses of these systems and how they differ, we piloted a Cadillac CT6, a Subaru Impreza, a Tesla Model S, and a Toyota Camry through four tests at FT Techno of America's Fowlerville, Michigan, proving ground. The balloon car is built like a bounce house but with the radar reflectivity of a real car, along with a five-figure price and a Volkswagen wrapper. For the tests with a moving target, a heavy-duty pickup tows the balloon car on 42-foot rails, which allow it to slide forward after impact.

The edge cases cover the gamut from common to complex. Volvo's owner's manuals outline a target-switching problem for adaptive cruise control (ACC), the convenience feature that relies on the same sensors as AEB. In these scenarios, a vehicle just ahead of the Volvo takes an exit or makes a lane change to reveal a stationary vehicle in the Volvo's path. If traveling above 20 mph, the Volvo will not decelerate, according to its maker. We replicated that scenario for AEB testing, with a lead vehicle making a late lane change as it closed in on the parked balloon car. No car in our test could avoid a collision beyond 30 mph, and as we neared that upper limit, the Tesla and the Subaru provided no warning or braking. (Emphasis mine)

Our tests also exposed the infallibility myth that surrounds computers and automated vehicles. Driving the same car toward the same target at the same speed multiple times often produces different results. Sometimes the car executes a perfectly timed last-ditch panic stop. Other times it brakes late, or less forcefully, or even periodically fails to do anything at all. In our stationary-vehicle test, the Impreza's first run at 50 mph resulted in the hardest hit of the day, punting the inflatable target at 30 mph. It was only on the second attempt that the Subaru's EyeSight system impressively trimmed the speed to just 12 mph before the collision. All the results in the charts in this feature show a car's best performance."

Note that these tests were executed against an object intended to replicate the radar profile of a light vehicle. I expect worse results against a semi's trailer. Tesla is not alone in facing this challenge.

To be clear, these systems are a net win: without automatic emergency braking, fatality rates would be much higher. People will still die, but fewer than before.

[1] https://www.caranddriver.com/features/a24511826/safety-featu...


Net win is the key word here. These systems are designed to improve safety, which is a pretty low bar, and should not be marketed to the contrary. NCAP standards read like they were written by the manufacturers themselves, to be able to pass in easy conditions.


"We conducted all our tests on dry pavement but shot our photography after the rain came because otherwise the photographer overheats."

Good to know.


Considering how many Teslas are on the road and aren't crashing, seems like they don't actually lack basic obstacle detection.

The fact that a Tesla crash causes NTSB to investigate is a sign that these crashes are incredibly rare.


I don't know that there's any great solution here. Something that works well most of the time is going to lull drivers into not paying attention. Calling it Autopilot probably doesn't help.

Tesla is either going to have to hobble it enough that drivers pay attention, or significantly improve it, if that's possible.


It seems obvious that there's room for improving a system that can't detect an object that's 75 feet long and 13 1/2 feet high directly in front of it.

Self-driving cars are decades away from reality, and self-driving trucks are even further away.


If only there were some other way to notice solid objects, instead of relying on statistical analysis of video frames relative to one another, over time, in real time.

If only we could figure out how to detect solid objects, but alas, such a difficult problem. So hard to implement that no one should hold their breath in hopeful anticipation that a carmaker might add solid-object detection to their sensor array package, so that cars can do the most important thing everyone expects of them.

But, you know, when you make a self-driving car, hazard avoidance and collision detection actually turn up much lower on the list of priorities than you'd expect.

(In driver's ed, we learned not to overdrive our headlights: not to drive so fast that our stopping distance extended beyond the lit portion of the road. Should an object suddenly appear in the light cone as you approach it, your reaction time, your tires' capacity for friction against the road surface, and the total inertia of the car in motion must all remain within the cognitive focus of your situational awareness as you drive at night. Without this, as a human, you are not a safe driver.

And yet robots are judged by different standards, even though for them it is always nighttime, and the world is perpetually a cloudy environment shrouded in mysterious darkness. Too bad that's not what sells cars.)
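The overdriving-your-headlights rule can be put in numbers. A minimal sketch, with illustrative constants rather than measured values:

```python
# Total stopping distance (reaction distance + braking distance) must
# fit within the headlight range, or you are "overdriving your
# headlights". All constants below are rough assumptions.

REACTION_TIME_S = 1.5   # typical perception-reaction time
FRICTION_COEFF = 0.7    # dry pavement; much lower when wet or icy
G = 9.81                # gravitational acceleration, m/s^2

def stopping_distance_m(speed_ms: float) -> float:
    reaction = speed_ms * REACTION_TIME_S
    braking = speed_ms ** 2 / (2 * FRICTION_COEFF * G)
    return reaction + braking

def overdriving_headlights(speed_ms: float, headlight_range_m: float) -> bool:
    return stopping_distance_m(speed_ms) > headlight_range_m

# Low beams light roughly 50-60 m of road; at 30 m/s (~67 mph) the
# stopping distance here works out to about 110 m, far beyond that.
```

The same arithmetic applies to a robot: whatever its "light cone" is (camera range, radar discrimination range), driving faster than its stopping distance within that range is unsafe by the very standard we teach humans.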


> If only we could figure out how to detect solid objects, but alas, such a difficult problem.

Can’t we just put signs on everything?

By placing an industry-standardised logo on the corners of vehicles, and giving them away free as stickers or screw-on signs, we could identify anything that ought to be considered solid and necessary to avoid, regardless of its velocity.

In my opinion we aren't going to see Level 4+ autonomous vehicles until we start including their requirements in the design of our built environment.


No. Your plan shifts the responsibility for avoiding collisions away from the "autonomous" vehicle (where it belongs) to everything else in the world. If these vehicles can't operate in the world as it exists, then they should not be allowed to operate autonomously.


Does that mean we can remove all signs, lights, indicators, etc. from everything on the road?

We've designed roads and cars to accommodate the shortcomings and limitations of humans.

Is there any reason to expect robotic autonomous systems to have the same set of shortcomings as humans?


LIDAR would help a lot. The radar Teslas have lacks the angular resolution to distinguish between a stationary object next to the road and a stationary object in the middle of the road. Tesla claims that CV will eventually pick up the slack, but I'm pretty damn skeptical of that.

Consider that Google/Waymo seems a lot more serious about CV research than Tesla, and yet their prototypes all use LIDAR and CV sensor fusion. If Waymo thinks LIDAR is important, why should I trust Tesla when they say it isn't? And remember there is a strong profit motive for Tesla to downplay the necessity of LIDAR: LIDAR is expensive and bulky right now, but Tesla wants to advertise their cars as containing all the hardware necessary for self-driving so they can profit from the automation hype. Waymo uses LIDAR because they're trying to make it function while Tesla scoffs at LIDAR because they're trying to sell cars with hype.


I disagree on the grounds that it has already happened. Perhaps survivorship bias is obscuring the fact that Teslas alone have already driven roughly a billion miles on Autopilot. Perhaps we don't hear about those miles because they are uneventful.

https://hcai.mit.edu/tesla-autopilot-miles

Yes, there will be bugs--all software has bugs--and they will decline over time. Meanwhile, the accident rate of Autopilot is probably better than that of human drivers.

https://electrek.co/2018/10/04/tesla-first-vehicle-safety-re...


The numbers for autopilot are a lot more complicated than that, and you cannot claim it is statistically safer based on the data we have. But if Tesla did have data showing autopilot is safer, I'm sure they'd be ecstatic to share it with the public. Instead they're burying the data and it can only be obtained after years of legal battles.

https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


Fair enough, though I'm more concerned about the claims for fully autonomous vehicles without driver controls. How many times in those billion miles did a human driver intervene? Or rather, would it have been two billion miles on autopilot if the human driver had not had to take control?


Key point is that the autopilot will get better over time, while the humans will not. This is one of those questions that is best answered by looking at the first derivative rather than the quantity in question.


This isn’t a bug.

"Will crash into a stationary object at high speed" is an intentional design decision.

I wouldn’t go so far as to call it a feature, but it’s certainly not a bug in the traditional, or common, usage of the term.


I've driven several thousand miles with no hands, only my knee. Of course that number is deceptive because it doesn't tell you how frequently I intervened with my hands.


I completely disagree that they are decades away. I'd be shocked if there weren't completely autonomous vehicles operating somewhere in the U.S. within 5 years. No idea why you think trucks are further off, either.


Trucks are further because a) they are heavier and require much more distance to stop, b) they are longer and require much more room to turn and maneuver, and c) they are much more likely to carry large quantities of hazardous materials. Driving a truck safely is a lot more complex than driving a car safely.


There may be a perception that trucks will happen sooner because you could focus on interstate highways only, with "ports" of some kind near on/off ramps. It would have value, and still keep the vehicles on high quality, well marked and maintained roads with few intersections.


Doesn't matter how long it is if you're pointing the same way.


In this case the Tesla hit a truck that was perpendicular to its direction of travel.


And the rear of a dry van truck is 8 1/2 feet wide by 13 1/2 feet high, so if it can't detect that, how is it supposed to avoid another car, let alone a pedestrian?

This case (truck perpendicular to travel path) should literally be the easiest test case for object detection, other than larger stationary objects like buildings and bridges.


I seem to recall that the last time this scenario (truck across the path of travel, although in that case it wasn't moving) led to an accident, the Tesla mistook it for a bridge.

I suspect this is going to be a perpetual problem for vehicles without lidar, but I'm not particularly clueful.


Tesla is pushing the bounds of the signal they can extract from the forward-facing radar (prompted by the first Autopilot trailer accident, which resulted in a death).

https://www.tesla.com/blog/upgrading-autopilot-seeing-world-...


"the Tesla mistook it for a bridge"

That's interesting. I can imagine, lacking real depth information, why that would happen. Especially for specialty trailers, like the ones that pull logs or wind turbine blades... where there's a large open area under the cargo.


Ah. Bigger than a proverbial barn door then.


> Something that works well most of the time is going to lull drivers into not paying attention.

That's my biggest objection: I want either 100% disengagement or nothing. A system that wants, but doesn't require, you to pay attention, and allows you to check out, is absurd. It's hard enough sometimes reacting to things when you are actively driving; a system that lets you stop paying attention but might demand immediate responses to situations it can't handle? Hell no.

We've had our Tesla since November, and it has given us two 30-day trials of Autopilot. The "no thanks" is so obfuscated that it's easier to say yes and then opt out after it's enabled.

Some of the features, like Intelligent Cruise Control, would be nice, but there's no way in hell I want the half-assed Autopilot feature.


> I don't know there's any great solution here.

Stop marketing Tesla's Level 2 ADAS as "Autopilot", maybe? It would be a pretty radical change, but it would get rid of by far the most common misconception about it: that it allows for even momentary distraction by the driver (as the autopilot on an aircraft would). In fact, it does not. On the flip side, they could fairly reintroduce the Autopilot name once they reach at least Level 3 on the ADAS scale, which allows for momentary distraction in highly selective conditions, such as, for a start, when the car is basically restricted to moving at pedestrian-safe speeds anyway, due to intense congestion or other reasons.


Other companies hoping to do self driving better than Tesla has would do well to heed these free lessons from them.



Hopefully Tesla doesn't feel the need to repeat mistakes of the past by releasing info outside of the NTSB process[0].

[0] https://www.npr.org/sections/thetwo-way/2018/04/13/602081183...


How does the rate of accidents while using Autopilot compare to that of manual driving? It seems an important question, since automated systems can't be expected to never get in accidents.


That's a topic of much debate. Tesla parses the numbers to indicate that it's much better than manual driving, but others are skeptical.

https://duckduckgo.com/?q=tesla+autopilot+statistics&t=osx&i...


The rate of mere accidents isn't the metric you're necessarily looking for. What's relevant is the rate of accidents compared to the rate of fatal accidents, for manual driving.

And then the same for autopilot.

Tesla's stingy with their mileage data though, so you have to get academic.

Combining our crash data with the MIT numbers, we arrive at the following fatal Tesla accident rates:

- Autopilot: 0.25 per 100 million miles

- Human Pilot: 0.20 per 100 million miles.

(Put another way, one Autopilot fatal crash every 400 million miles, one human pilot crash every 500 million miles.)

https://www.greencarreports.com/news/1119936_tesla-fatal-cra...
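The conversion in that parenthetical is just inversion of the per-100-million-mile rate; a quick check:

```python
# A rate of fatal crashes per 100 million miles inverts to
# "one crash every N miles".

def miles_per_fatal_crash(rate_per_100m_miles: float) -> float:
    return 100e6 / rate_per_100m_miles

autopilot_miles = miles_per_fatal_crash(0.25)  # 400 million miles per fatal crash
human_miles = miles_per_fatal_crash(0.20)      # 500 million miles per fatal crash
```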


To add to that, some more differences need to be taken into account:

* Autopilot is used mostly in favorable conditions (highways, clear weather), while humans drive in all conditions, for example snow or fog

* Teslas are on average newer than the average car on the road, increasing the likelihood of surviving a crash

* Teslas are luxury-segment vehicles, further increasing the likelihood of surviving a crash

All in all, the cards are stacked in Tesla's favor in a mere (fatal) accidents-per-mile comparison.


I think it is going to be hard to draw an apples-to-apples comparison, since I do not think there is data that lets us drill down to just human-driven highway accidents in situations where Autopilot would work. It is easy to say your accident rate is low when you are comparing fair-weather highway driving against data that includes inclement weather and situations in which Autopilot would not operate properly in the first place.


With this latest death, it seems Tesla's death rate would be significantly higher than that of the $40K+ luxury segment. Tesla usually does the comparison against all vehicles, which includes a lot of older vehicles that lack advanced sensors and features such as automatic emergency braking and side-curtain airbags. In the $40K+ luxury segment, vehicles generally have these safety features.


According to another comment above, no other car's driving-assistance features would have avoided this crash either. It's a matter of human attention (or lack thereof).


This happened at a very inopportune time, with Tesla desperate to attract buyers of the newly available $35k Model 3.

They did it to themselves, though: bringing a glorified cruise control to market while touting it as the safest way to travel by car and calling it Autopilot.


Traffic reporter video of the crash scene: no gore, but it gives the clearest picture of the crash area and the distance the car traveled after the collision. [1]

I own a TM3. I do not have the FSD features, but those were not involved in this crash. I do know Enhanced Autopilot doesn't seem to do a good job of tracking certain objects, especially stationary objects in the path of travel. While a truck would not be stationary, I really do not see any excuse for not seeing one; at minimum, two cameras had eyes on it. Here's one of my favorite "what does it see" videos. [2]

[1] https://youtu.be/X8HSsQ_KJFI

[2] https://youtu.be/bImNNpB35eE



