This Test Shows Why Tesla Autopilot Crashes Keep Happening (jalopnik.com)
32 points by knuththetruth 4 months ago | 55 comments



> Tesla has always been clear that Autopilot doesn’t make the car impervious to all accidents

This makes it sound like the problem is that there are edge cases that aren’t handled. In reality, the test shows that Autopilot is only designed to handle the special case of lane keeping and following the car in front of you, but is too simplistic to actually handle the general case of driving the car. It’s a very advanced cruise control, not a self-driving system that still has kinks to work out.


It’s clear that Tesla is dealing with this as a PR/marketing problem, but they fail miserably even at that.

If you think it’s a PR/marketing problem and the system operates as it should, then stop calling it Autopilot and stop advertising cars with full autopilot capabilities*

* “Theoretically the hardware should be enough to enable full auto pilot some time in the future. At the moment use this other Autopilot, which is not really autopilot. Btw we made this really cool autopilot video, but it’s not the Autopilot you have in your car.”


The marketing problem is lay people thinking that aircraft autopilots are magical self-flying machines when they often require the pilot to provide corrective input, especially during “autoland.”

If you think Tesla Autopilot needs another name, stop operating on mystical beliefs like Reiki and the magical self-flying autopilot, and learn what any aircraft autopilot is actually capable of.


An aircraft does not require any corrective inputs during autoland; in fact it is the opposite: inputting any control action will abort the autoland.

It just needs the pilots to make a land / no-land decision at the specified altitude. Can they see the runway lights within the specified distance and is the crosswind-component within the acceptable range? If they choose to continue the landing then they don't touch the controls.

Imagine you're coming in to a wet runway with 20kt crosswind and one engine inoperative. Sounds like a nightmare; but in that case autoland will do a better and more consistent landing than most pilots. Can we say that about car self-driving systems yet?


Here's a discussion of flying a plane on a Cat III ILS autoland: https://youtu.be/UO2K4-zRubA

Note all the crosschecking between the human operators and the two autopilots. Mentour speaks almost exclusively about flying the Boeing 737, a very reliable aircraft. Even on this aircraft the human pilots are ready to take over from the autopilot at a moment's notice.

Here is an autoland using Cat III ILS in an Airbus A320, with the humans invigilating at all times, ready to go around if anything appears off nominal: https://www.youtube.com/watch?v=mSNE3SmYA-8

So to a naive observer, it may appear that you are correct: planes can land themselves without intervention. But that is only true when the airport, aircraft and conditions are qualified for Cat III approaches (i.e. ideal conditions) and remain that way throughout the procedure. If the airport, aircraft or environment are not qualified for a Cat III approach, you don't get this ultra-precise autoland capability. If anything goes wrong with any equipment during the approach, you don't get the autoland capability (and indeed a go-around is required).

If there is anything outside spec (e.g. aircraft manoeuvring on the taxiway can distort the ILS beam, hence the Low Visibility Procedures), things will go wrong and the pilot needs to intervene.

At no point is it safe for a pilot to let the plane do the flying and sit back to have a cup of tea. The captain and first officer have their assigned tasks: be ready to land the plane and be ready to go around (respectively). Intervention is required from time to time, so they need to be ready to intervene every time.

Cars do not have the option of "going around". Every action has the potential to cause fatality within seconds. There's even less room for inattentiveness when using a car on autopilot.

The OP suggested that autopilot should be able to drive the car without any human intervention, but they have confused autopilot with autonomous driving, probably because of the "auto" prefix, yet they've never been confused about an automobile requiring constant human attention.

It's clear that Tesla have a PR/marketing problem, when the population is happy to drive an auto with auto, but can't differentiate between autopilot and autonomous.


I can understand Tesla's position in saying that the driver should still be aware of their surroundings. But. Avoiding stationary objects should be the first objective of a system like this in my opinion, especially if the object is directly in the projected path the vehicle is travelling in.


Sure, but such systems won't work at all if they don't ignore such stationary objects as bridge abutments. And they lack the angular resolution to determine whether stuff is directly ahead, or off to one side. Plus the fact that roads curve.
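To put a rough number on the angular resolution problem (my own back-of-the-envelope figures; I'm assuming an azimuth uncertainty of around 2 degrees for an automotive radar, which is in the right ballpark but not a Tesla spec):

  import math

  # Illustrative numbers only: assume ~2 degrees of azimuth uncertainty
  # for an automotive radar; a US highway lane is about 3.7 m wide.
  azimuth_error_deg = 2.0
  lane_width_m = 3.7

  for range_m in (50, 100, 150):
      # Lateral position uncertainty grows linearly with range.
      lateral_error_m = range_m * math.tan(math.radians(azimuth_error_deg))
      print(f"at {range_m:>3} m: +/- {lateral_error_m:.1f} m lateral uncertainty "
            f"(a lane is only {lane_width_m} m wide)")

At highway detection ranges the lateral uncertainty is wider than the lane itself, before you even account for the road curving.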

Or, as in the Thatcham Research video, stuff that's hidden by a leading vehicle. I always make an effort to know what vehicles far ahead are doing. If necessary, I shift off lane occasionally to check for hidden stuff. And when I'm behind large vehicles, I don't follow so closely that I can't see what's ahead.

But if you're going to do all that, why use Autopilot?


>But if you're going to do all that, why use Autopilot?

I don't understand the value of Autopilot at all. What is the value in sitting there concentrating on driving without slightly turning the wheel to stay in a lane?

First, I think doing nothing but still being expected to pay attention is more annoying than having a role in driving. But even without that, what's so fucking hard about moving the wheel a little bit?


>And they lack the angular resolution to determine whether stuff is directly ahead, or off to one side.

Sounds like my Grandma. For her safety, and everyone else's, she doesn't drive any more.


Sounds like me without my glasses. But then, I don't drive without my glasses.


>But if you're going to do all that, why use Autopilot?

What's the point of having a self-driving system that doesn't do all that? The entire point of self-driving systems is that they're safer because they can see farther, anticipate more, and react faster than any human. If they can't, what's the point? Why even offer Autopilot?


That was basically my point. I've never even felt safe using cruise control.


That's a limitation of Tesla's hardware (despite their, likely false, claims that their hardware is sufficient for self driving).


The only objective of Tesla Autopilot is Traffic Aware Cruise Control and lane keeping. That’s it. End of story. You are there to supervise operation and take action when TACC and LK are insufficient for the environment.

Autopilot when used correctly can significantly reduce fatigue because you are not focussed on the minutiae of keeping pace with the car in front of you and keeping the car in the assigned lane. This frees your attention up for higher level operation such as observing the traffic further ahead, scanning for hazards, and enjoying the scenery to your left and right.

Discerning stationary hazards from stationary scenery is not part of Autopilot’s feature set right now, that is your job as the operator.
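To make “TACC plus lane keeping, nothing more” concrete, here is a stripped-down caricature of that control loop (purely illustrative; the function name, gains, inputs and the 2-second gap are my own made-up choices, not Tesla's implementation):

  def tacc_lane_keep_step(ego_speed, set_speed, lead_distance, lead_speed,
                          lane_offset, heading_error):
      """One control step: returns (acceleration, steering) commands.

      Purely illustrative proportional control; every gain here is made up.
      """
      # Traffic Aware Cruise Control: hold set_speed unless the lead car
      # forces a larger following gap (here, a 2-second time gap).
      desired_gap = 2.0 * ego_speed
      if lead_distance < desired_gap:
          target_speed = min(set_speed, lead_speed)
      else:
          target_speed = set_speed
      acceleration = 0.5 * (target_speed - ego_speed)

      # Lane keeping: steer back toward the lane centre.
      steering = -0.2 * lane_offset - 0.5 * heading_error

      # Note what is absent: nothing here reasons about stationary obstacles,
      # cross traffic, or whether the lane ahead is actually drivable.
      return acceleration, steering

  # Example: closing on a slower lead car while drifting off centre.
  print(tacc_lane_keep_step(ego_speed=30.0, set_speed=30.0,
                            lead_distance=25.0, lead_speed=20.0,
                            lane_offset=0.3, heading_error=0.02))

Everything else, including deciding whether that stationary thing ahead is a parked fire truck or an overhead sign, is left to the human.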


Autopilot-bashing articles are becoming more and more popular as people love to gawk at accidents. I believe a lot of the comments on these threads are no more than online rubbernecking.

When these things pop up, I'd love to know the answers to some of these questions.

1. What is the rate of accidents for Tesla drivers with and without Autopilot on similar driving terrain?

2. How 'correctable' are these flaws in Autopilot? I doubt someone is programming in what to do in every situation. Is the solution to twiddle with some weights and pray your ML pipeline spits out a better version? I'm guessing the true answer lies in the middle, but I don't have the technical background to know for sure.

3. I see a lot of references to Waymo leading the pack of would-be self driving vehicles. What is it that Waymo has done that makes their self-driving tech so much better?

4. Are we okay with accepting the costs of self driving cars given their potential in the future? Most major transport-related revolutions have come with a significant human cost at the outset as the early adopters accept a large amount of risk to push the technology forward. With time and usage, things stabilize and become safer. Flying was quite dangerous early on, too.


I'll try to answer as best as I can.

1. Your question seems mostly answered in this recent article https://www.wired.com/story/tesla-autopilot-safety-statistic...

2. I don't know much about the architecture/model used by Tesla, and I don't have open access to the accidents' details. It is challenging for most people to publicly answer your question based on facts. Sorry!

3. Again, I would say only a very limited number of people have a clear view of the current status. Some claim it is hard for Tesla to design and manufacture cars, solar panels, and batteries while also designing an autopilot at the same time. It requires a very broad diversity of skills that may be hard to steer and build from the ground up.

4. Fair enough, point taken. My personal complaint is not about the technology in general but about Tesla's marketing. I hope the first planes were not claimed to be safer. The Titanic was, though :).


What Waymo have done is not release a product for sale that can even remotely be confused for some level of autonomy. They have ensured this by not releasing a product.

Autopilot is not autonomous. It needs human supervision, and the fact that some simplistic “autonomy scale” includes “requires human supervision” is the largest part of this problem.

If it requires any supervision, it is not autonomous.

Any discussion of autonomy that even mentions Autopilot is automatically null and void. I would consider it the autonomy equivalent of Godwin’s law: in any discussion about autonomy the chance of comparing the safety of an autonomous system to Autopilot or Cruise Control approaches 1 and the first person to mention the comparison automatically loses the argument.


3. Waymo is an R&D project at this moment. They are solving the hard technical problems and disregarding the associated costs (such as LIDAR).

Tesla claims that the hardware in the Model 3 is capable of self-driving once the software catches up. That allows them to charge 3K USD for the "FSD package".

These are two very different approaches to the same problem.


It seems to me the kids who program these toys have never driven anything other than the RC model they built to get this job. And, of course, it drives fine on the toy "roads" they set up on the carpet in the living room. Real world, not so much.


From a recent article[0]:

> When a car is moving at low speeds, slamming on the brakes isn't a big risk. A car traveling at 20mph can afford to wait until an object is quite close before slamming on the brakes, making unnecessary stops unlikely. Short stopping distances also mean that a car slamming on the brakes at 20mph is unlikely to get rear-ended.

> But the calculation changes for a car traveling at 70mph. In this case, preventing a crash requires slamming on the brakes while the car is still far away from a potential obstacle. That makes it more likely that the car will misunderstand the situation—for example, wrongly interpreting an object that's merely near the road as being in the road. Sudden braking at high speed can startle the driver, leading to erratic driving behavior. And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.

[0]https://news.ycombinator.com/item?id=17274179
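To put rough numbers on that (my own back-of-the-envelope arithmetic, assuming a 1.5 s detect-and-react delay and hard braking at about 0.8 g):

  # Back-of-the-envelope stopping distances. Assumed values: 1.5 s for the
  # system to detect, classify and start braking; 0.8 g of deceleration.
  G = 9.81              # m/s^2
  MPH_TO_MS = 0.44704
  FT_PER_M = 3.28084

  reaction_s = 1.5
  decel = 0.8 * G

  for mph in (20, 70):
      v = mph * MPH_TO_MS
      distance_m = v * reaction_s + v**2 / (2 * decel)
      print(f"{mph} mph: needs ~{distance_m * FT_PER_M:.0f} ft "
            f"({distance_m:.0f} m) to stop")

Roughly 60 ft at 20 mph versus roughly 360 ft at 70 mph, so at highway speed the car has to commit to hard braking while the object is still a football field away, which is exactly where misclassification is most likely.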


So instead of "startling the driver", who's supposed to be paying careful attention to what's going on, the algorithm chooses to plow into the stationary object?

I suspect _that_ might _also_ "startle the driver"...


The algorithm has been designed to ignore stationary objects because the algorithm and its associated hardware are incapable of determining which stationary objects are actually in the path of the vehicle. Whether the response is to slam on the brakes or to blare a klaxon, there are so many false positives that not ignoring them would make the system useless.
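Roughly speaking, the reason is that radar reports relative (Doppler) velocity, so anything whose relative speed cancels your own speed is indistinguishable from the signs, bridges and parked cars that line every road. A crude sketch of that filtering, my own simplification and definitely not Tesla's actual code:

  def is_tracked_target(own_speed, relative_speed, threshold=1.0):
      """Crude caricature of radar target filtering, not any vendor's real logic.

      Radar measures relative speed. A return whose relative speed is about
      -own_speed is stationary in the world frame: it could be a stopped car
      in your lane, or just a sign or bridge abutment beside it. With poor
      angular resolution they all look alike, so stationary returns get dropped.
      """
      ground_speed = own_speed + relative_speed
      return abs(ground_speed) > threshold  # keep only things that are moving

  own_speed = 31.0  # m/s, roughly 70 mph
  print(is_tracked_target(own_speed, -20.0))  # slower car ahead -> tracked
  print(is_tracked_target(own_speed, -31.0))  # stopped fire truck -> dropped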

So of course the only sensible thing to do is release a product that happily accelerates into the back of a bright red emergency vehicle with flashing lights. Or into a stationary concrete barrier. Or into the side of a massive tractor-trailer.

I mean, who wants to bother with false positives?


What's the old gag?

"You deliver a project 85% correct at university, you get a high distinction. You deliver a project 15% wrong at work, you get fired."

Who are these people?


Thanks for the link. I remembered the article, but didn't manage to find it.

In the Thatcham Research video, however, swerving would have been far more workable than braking. But then, we don't want too much autonomous lane hopping, either.


Computers have one huge advantage over humans: they can process a lot of information very quickly.

Humans have one huge advantage over computers: we pattern-match instinctually, building mental models of our surroundings.

Tesla is taking advantage of the first fact, but by avoiding LIDAR, they're falling victim to the second fact.

We humans don't have LIDAR, so it makes sense that a car without LIDAR should be able to match us, but we have brains and visual systems that far exceed anything available today or at any point in the near future when it comes to pattern-matching.

40,100 vehicle deaths in 2017 in the US, most of them the result of distracted driving. It's a shame that Tesla's system performs worst at exactly the point that humans need the most help.


LIDAR is certainly useful in building up point clouds, especially when coupled with RADAR; however, the devil always lies in the details.

LIDAR is higher resolution than radar, but often slower (counterintuitive to be sure, but such is the case), i.e. this large volume of data can lag reality by some small delta, and at 60 mph that can add up.
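Back-of-the-envelope on what that lag costs (the latencies below are assumed, illustrative figures, not measured LIDAR specs):

  # Distance the world moves while a sensing/processing pipeline lags reality.
  speed_ms = 60 * 0.44704   # 60 mph in m/s, ~26.8 m/s

  for lag_ms in (50, 100, 200):
      drift_m = speed_ms * lag_ms / 1000
      print(f"{lag_ms:>3} ms of lag at 60 mph -> the scene has moved "
            f"~{drift_m:.1f} m since that point cloud was captured")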

Uber does use LIDAR and RADAR; however, the informed external commentary I've seen (admittedly guesswork based on the NTSB reports) seems to indicate it was a fusion error behind the AZ fatality. Sensor fusion is a beastly complex problem, on top of miserable calibration exercises :-(

More importantly, AI only exists in the minds of marketers, media, and the improperly informed. This is all just pattern recognition - we're a ways off from having a system understand that if an occluding obstacle moves out of the way, then the high-priority, previously unknown information now exposed needs to be checked. I'm sure such tests will get hardwired into current systems, but then the system is limited by what does get hardwired into it. At the end of the day a visual system, no matter how sophisticated, is still only a visual system.


In that video, does the Tesla try to brake, and brake hard?

This seems to be the case -- if you watch the video the Tesla is at a full stop about 10 feet after the car, implying it must have hit the brakes hard way beforehand.

In that case, I don't blame the Tesla for crashing -- you are literally baiting it into the crash, a la a bull with a flag. You make it follow one car, prevent it from changing lanes, and then have an obstacle parked in the middle of the lane.

I'm all for getting Tesla to be safer. The accident count has gotten way too high. But this seems to be a test that is rigged, and the Tesla did brake.


>In that case, I don't blame the Tesla for crashing -- you are literally baiting it into the crash, a la a bull with a flag. You make it follow one car, prevent it from changing lanes, and then have an obstacle parked in the middle of the lane.

This situation is pretty common on the road. And a lot of humans who aren't paying attention get caught by it.

But it's not impossible to avoid; you have to be watching ahead of the car in front of you. And if you can't see in front of the car in front of you, you gotta leave a lot more space so you can brake in time.


I don't see that setup as anything out of the normal skill required of human drivers. Cars change lanes from in front of you all the time, sometimes because there's something stopped on the road in front of them.

The test is only "rigged" in the way a unit test looking at a specific edge case is "rigged". Yeah, it's not something that happens every day, but there are very few human drivers who wouldn't have perfectly safely followed the Merc into the right-hand lane and easily missed that inflatable car. Most of us would have noticed the stationary car in front of the car we were following, and quite likely would have already changed lanes before the Merc did.


> In that video, does the Tesla try to brake, and brake hard?

It's different takes edited together. Note the cameraman near the dummy car as the Tesla approaches (you can see him, and part of the video was shot by him).

> In that case, I don't blame the Tesla for crashing

And in the case where it didn't brake? Would you blame Tesla then? Well... none of Tesla's driver assistance features can handle that case; there's no way they would brake. It will not brake for a stationary object (unless that object was previously moving).


You don't blame the Tesla?

If I did that, the police and my insurance would blame me, regardless of whether I was braking hard. I'd almost certainly end up with points on my licence if I survived.


According to the NTSB, in the recent fatal case in which a Tesla drove directly into concrete on a highway, it was still accelerating at the moment of impact.

This is a very serious and consistent issue for Tesla's so-called "autopilot," as the article highlights with links to a variety of recent cases.


So...why is it that the default autopilot behavior isn't to slow down drastically if the driver doesn't initiate manual control fast enough?


I don't think you understand. The Tesla autopilot there thinks it's doing a great job because it thinks there's just an empty lane ahead of it (its model of the world is basically that all stationary things are just pictures on the ground).


Because it's been shown that it does not account in any way for stationary objects. It doesn't see any danger at all ahead.


Autopilot is currently an aspirational name. Perhaps the name should be changed.


So, why do non-autopilot crashes keep happening?


Same reason the autopilot crashes happen: Driver isn't paying attention.

Problem is that the autopilot system makes it harder for the driver to pay attention.


Does it really?


Yes. Think about it this way: if having Autopilot on requires the same amount of attention/effort as having it off, then there would essentially be no point to the system whatsoever. The entire idea is that the system takes over some of the driving tasks to make the experience more pleasant, just like how regular cruise control is convenient because it frees you from having to pay attention to maintaining the car's speed by pressing the accelerator pedal.

The question is, does it reduce the amount of attention paid to a significant enough degree that it's dangerous? Consider subways -- most of them have automatic train control systems in place that could effectively drive the trains automatically. However, operators are still required to control the speed, largely because it keeps them engaged.

Give an operator nothing to do, and they won't pay attention -- this is basic human psychology (look up "Vigilance Tasks" if you're interested in further reading).


A lot of the attention you pay during normal driving is on mundane things like making tiny adjustments to stay within the lines. By removing that requirement you have more attention to put towards useful things.


That's not how human attention and focus works. There are literally decades of study on this sort of thing. Almost every other major carmaker and autonomous vehicle development company (except Uber, and some poor woman was killed because of it) have publicly abandoned plans for SAE Level 3 autonomous vehicles because over years of testing they've found that their own engineers, test drivers, directors, etc, who are intimately familiar with the technology and the peculiarities of their own product, can't stay focused on the car and the driving environment under those conditions.


Yeah - but that's most likely to be your phone, or the argument you had at work with your boss, or wondering why your partner didn't talk much this morning... I strongly doubt any "freed up attention" is getting redirected into driving-safety related "useful things".


This is where we separate the good drivers from the accidents waiting to happen. The good drivers will be watching the five or more cars ahead in all lanes to catch any unexpected braking or lane changes, and watching passing vehicles for indications of being exit-darters (those drivers who are so keen to catch the exit they will drive at high speed in the passing lane, then madly swerve through the slow lanes to get to their exit).

The accidents will instead get caught up in that antivaxxer Facebook debate on their phones.


Part of the reason I use the adaptive cruise in my Camry Hybrid is because it frees me from paying constant attention to the distance between me and the car ahead. Being a responsible driver means keeping an eye on this, as it does make errors and occasionally puts me in an unsafe situation - either too close to another car, or too fast in a traffic situation.


Yes. See the FAA’s research on actual autopilot and pilots’ attention fatigue.


The video seems ludicrous. They fail to show the same test with the Tesla driven in manual mode for comparison. I doubt a manual driver would be able to avoid that rear-end accident. I could be wrong, but they should demonstrate it with scientific results and facts instead of making fear-mongering promotional videos.


The video seems plausible. I mean, I regularly avoid road hazards I cannot see until the car in front of me moves to avoid them. The key is to not be an ass and tailgate the person in front of you, so you have time to react.


Ludicrous and not 100% scientifically sound are two very different things. The Tesla doesn't appear to brake at all going in, as has been the case in actual crashes. A driver may not be able to avoid any accident but almost certainly could apply brakes to some extent. That braking can be the difference between life and death.
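To quantify "difference between life and death" (my own arithmetic, assuming roughly 0.8 g of braking from 70 mph): crash energy scales with the square of speed, so even a second or two of braking helps a lot.

  # Impact speed after braking for t seconds from 70 mph at an assumed 0.8 g.
  G = 9.81
  MPH_TO_MS = 0.44704
  decel = 0.8 * G
  v0 = 70 * MPH_TO_MS

  for brake_time in (0.0, 1.0, 2.0):
      v = max(0.0, v0 - decel * brake_time)
      print(f"braking for {brake_time:.0f} s: impact at {v / MPH_TO_MS:.0f} mph, "
            f"{(v / v0) ** 2:.0%} of the original crash energy")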


Any human paying attention would follow the leading vehicle as it swerved. That was not at all an abrupt swerve. Also, most people know not to focus just on the leading vehicle, and to back off when they can't see around it.


Given the Model 3's poor stopping distance, you may unfortunately be correct that even an aware driver couldn't stop in that space.

  Car                    Distance (feet)
  Model 3 (orig)         152
  Model 3 (new)          133
  F-150                  129
  Model X                127
  Camry Hybrid           125
  F-150 Lariat           119
  Model S                118
  Porsche Panamera GTS   110
  Chrysler 300S          109
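Assuming these are 60-0 mph figures (the usual Consumer Reports / Motor Trend test; that's my assumption, not stated in the chart), the implied average deceleration is:

  # Average deceleration implied by a 60-0 mph stopping distance: a = v^2 / (2d)
  G = 9.81
  V0 = 60 * 0.44704   # 60 mph in m/s
  FT_TO_M = 0.3048

  for car, feet in [("Model 3 (orig)", 152), ("Model 3 (new)", 133),
                    ("Model S", 118), ("Chrysler 300S", 109)]:
      decel = V0 ** 2 / (2 * feet * FT_TO_M)
      print(f"{car:15s} {feet} ft -> {decel / G:.2f} g average")

That's a spread from about 0.8 g for the original Model 3 to about 1.1 g for the Chrysler, which is a big difference in how much margin an attentive driver has.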


After Consumer Reports highlighted this, Tesla issued an OTA update, so that now the Model 3 stops in a reasonable distance consistently.


Yes, this is noted in the chart:

  Model 3 (new) 133


Wow, a Ford F-150 truck has better stopping distance? Do you have a source for this?


http://www.thetruthaboutcars.com/2018/05/pump-brakes-consume...

> In a recent test of various 2018 F-150s, Motor Trend recorded a 129-foot stopping distance for the 3.3-liter XL Supercab model, while an upscale Lariat trim made the stop in 10 fewer feet. For comparison, a Chrysler 300S tested by the same publication made the stop in 109 feet.



