[flagged] Tesla: Designing autopilot program to shut off less than a second before impact (twitter.com/muttgomery)
29 points by doener on June 11, 2022 | 18 comments



This is quite misleading, as explained in https://twitter.com/BrandonLive/status/1535477873267515393.


OP and this need to be reconciled. It's a bit of a shame that the two plausible but conflicting statements are a screenshot of text and a tweet: a rumor started and then debunked, both from suspect sources.


What is the alternative? In a situation where the software has detected an imminent and inevitable crash, the _only_ sensible action is to turn off self-driving and revert to 100% human control. I don’t see this action alone as Tesla attempting to avoid culpability.

The only exception to reverting to human control is if the software predicts there is a path to accident avoidance. In that case, the accident-avoiding action should be taken. That’s almost a tautology as it’s a given that FSD software will always _try_ to avoid accidents.

Culpability should be an entirely separate question. What are the expectations of use for a particular FSD software? Tesla is treading a fine line, but my understanding is that you are still expected to pay attention to the road and be ready to take control whenever using FSD.

Perhaps Tesla should also be modelling reaction times? If the time between reverting to human control and the moment of impact is too short, then you could argue the accident is a result of the software. However, what about the time between risk identification and impact time? Different people would identify a risk at different times. The same will be true for different self driving algorithms. FSD has a long tail problem. Very rare events are difficult to learn and test for. The best system we have for dealing with rare events is a human driver - so it’s more than reasonable to me to flip over control to drivers under these circumstances.
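
A rough sketch of the kind of handover rule I have in mind (the numbers and names are made up purely for illustration, not anything Tesla has published):

    # Toy sketch of a handover policy based on human reaction time.
    # All thresholds and names are hypothetical, purely for illustration.

    HUMAN_REACTION_TIME_S = 1.5   # commonly cited ballpark for driver reaction time
    HANDOVER_MARGIN_S = 1.0       # extra buffer before trusting a human takeover

    def choose_action(time_to_impact_s: float, avoidance_path_exists: bool) -> str:
        """Decide what the driving software should do as a crash becomes likely."""
        if avoidance_path_exists:
            # If the software can still see a way out, it should take it.
            return "execute_avoidance_maneuver"
        if time_to_impact_s >= HUMAN_REACTION_TIME_S + HANDOVER_MARGIN_S:
            # Enough time left for an attentive human to react meaningfully.
            return "hand_control_to_driver"
        # Too late for a human to do anything useful: brake hard instead of
        # silently dropping out.
        return "emergency_brake"

    print(choose_action(time_to_impact_s=0.8, avoidance_path_exists=False))  # emergency_brake
    print(choose_action(time_to_impact_s=4.0, avoidance_path_exists=False))  # hand_control_to_driver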


As someone who's not versed in the ins and outs of FSD, can I ask why it's better to revert to human control in the case of an unavoidable crash?

If a human sees that that's about to happen, they'd likely just scream, jerk the steering wheel in some random direction, slam the brakes, and generally cause a loss of control, no?

If the FSD can predict a crash and calculate all the trajectories, can't it at least attempt to steer the vehicle in such a way as to minimize collision forces (try to glance off the other car if it doesn't take you off a cliff) and/or orient the car for maximum protection (avoid T-bones), or even just brake with a faster reaction time and a safer braking ramp that doesn't skid the tires?
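
In my head it's something like scoring each feasible last-second option and picking the least bad one (the options and numbers below are totally made up):

    # How I naively imagine it: score each feasible last-second maneuver and
    # pick the one with the lowest estimated collision severity. Made-up data.

    def pick_least_bad_option(options: dict) -> str:
        """Return the maneuver with the lowest estimated severity score."""
        return min(options, key=options.get)

    estimated_severity = {
        "straight_full_brake": 0.7,   # hit square on, but at lower speed
        "swerve_left_glance": 0.4,    # glancing blow, stays on the road
        "swerve_right": 0.9,          # would take you off the cliff
        "hand_back_to_human": 0.8,    # under a second is not enough to react
    }

    print(pick_least_bad_option(estimated_severity))  # swerve_left_glance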

I'm genuinely wondering here why reverting control is considered safer.


It is in no way safer. Anyone with experience in analyzing vehicle impacts knows that.


I believe you, but as someone WITHOUT experience in analyzing vehicle impacts, can you explain why? (or sources that explain it?)


The sensible thing is to apply the brakes.

Tesla advertises FSD and crash avoidance. Turning it off is crash acceptance.

Your whole paragraph about timing shows you know nothing about vehicle impacts.


This is outrageously misleading. The autopilot will disengage when it gets extremely confused. It's likely to get extremely confused when faced with a situation of an impending collision. So if you look at a bunch of crashes, you'll find a lot where the AP quit right before. That doesn't say anything about which party was at fault, it's just the way the software is going to work.

And the other side of the argument, that this was somehow intentionally done to avoid culpability in reporting, is just plain wrong per Tesla's own data. Read the methodology section at https://www.tesla.com/VehicleSafetyReport

> [...] To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact [...]

So the "less than a second" bit is just a lie, unless there's some other report being cited that uses different methodology. Someone is spinning.


I like that you think 5 seconds is enough time for collision avoidance by a human.

Also, you are putting a lot of trust in the words said by a corporation that has billions to lose.


I can see this reading both ways. I'm extremely skeptical of Autopilot/FSD claims, but it all depends on interpretation. All of the currently available documentation states that Autopilot/FSD can give you a "take over immediately" warning and disengage when it has no idea what to do. It would make sense, then, that the system would detect that a crash is imminent, and have absolutely no idea what to do to avoid it, and error out, handing control back to the driver.

I think the full story on a crash is kind of required to make this conclusion. The whole point of the NHTSA investigation is to see if Tesla is massaging the data like this. It wouldn't really surprise me either way, but I'm really skeptical that Tesla would report that Autopilot wasn't engaged on a technicality when investigators will clearly turn it into a PR disaster if it's true.


Do Tesla PR disasters seem to be holding it back? It might simply be a calculated risk.


Tesla is still strictly Level 2 driver assistance legally and in practice.

Meanwhile Mercedes has received approval for Level 3 with full legal responsibility in limited settings. Mercedes gives drivers at least ten seconds of warning before handing back control in any situation.


Tbh I don’t understand how that’s possible given how quickly conditions on the road can change.


By turning it into a nearly-useless system. You can only engage Mercedes' system if a number of conditions are all met:

- The route has to have been pre-mapped by Mercedes

- Only works on highways

- Only in traffic jam-like conditions, i.e. traffic moving slower than 60 km/h (37 mph)

- No construction sites (even a single orange cone left over from prior road works will mean that you can't turn on the system / it will turn off)

Under these conditions, 10 seconds doesn't sound too difficult.
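
Put together, the activation check is basically a handful of AND conditions (my paraphrase of the list above, not Mercedes' actual logic):

    # Toy paraphrase of the activation conditions listed above.
    # Names and structure are made up; this is not Mercedes' real code.
    from dataclasses import dataclass

    @dataclass
    class DriveContext:
        route_is_premapped: bool
        on_highway: bool
        traffic_speed_kmh: float
        construction_detected: bool

    def level3_available(ctx: DriveContext) -> bool:
        """All conditions must hold at once for the Level 3 system to engage."""
        return (ctx.route_is_premapped
                and ctx.on_highway
                and ctx.traffic_speed_kmh < 60.0     # traffic-jam speeds only
                and not ctx.construction_detected)   # a single cone and it's off

    # Free-flowing highway traffic at 120 km/h: the system stays off.
    print(level3_available(DriveContext(True, True, 120.0, False)))  # False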

I still think this is a saner approach than calling your product Autopilot and letting the general public beta test it (and driving in traffic jams is annoying, so I wouldn't mind using the Merc system), but overall it feels mostly like a PR move to be able to say "we have a Level 3 product on the road". I read an article where Mercedes wanted to demonstrate the system to a journalist and they had to have a second car drive in front of them to artificially create a traffic jam, because otherwise it wouldn't activate.


It's something more than a PR move if they received approval.


Yeah, what does that even mean? 10 seconds at 60 mph is like 3 football fields long... how do you tell a driver "Ok, in ten seconds I'm going to give you back control because something bad is about to happen?"


I genuinely don't know, but if you can predict that something bad will happen, then the appropriate response should normally be quite clear - e.g. brake. My wild guess is therefore that the autopilot will switch off and hand you control when it has low confidence in its picture of the surroundings: e.g. a sensor is broken, data from the various sensors don't align (e.g. due to bad weather), or it knows it is about to enter terrain it is less trained for - going from highway to suburban roads - or something similar. I'm really only stabbing in the dark here. Anyone from Mercedes who can explain? Edit: fixed a typo; also, please see the other reply to the grandparent comment for an explanation.
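
Something like this is what I'm imagining - again, pure guesswork on my part, with invented names and thresholds, not anything Mercedes has documented:

    # Pure guesswork: a sketch of a confidence-based takeover request.

    def should_request_takeover(sensor_ok: bool,
                                sensor_agreement: float,
                                leaving_mapped_domain: bool,
                                confidence_threshold: float = 0.8) -> bool:
        """Ask the driver to take over (with the ~10 s warning) when the system
        can no longer trust its picture of the surroundings."""
        if not sensor_ok:
            return True   # e.g. a sensor is broken
        if sensor_agreement < confidence_threshold:
            return True   # sensors disagree, e.g. due to bad weather
        if leaving_mapped_domain:
            return True   # e.g. leaving the pre-mapped highway section
        return False

    print(should_request_takeover(sensor_ok=True, sensor_agreement=0.5,
                                  leaving_mapped_domain=False))  # True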


Perhaps it slams the brakes every time the AI's confidence is low. That would cause a lot of phantom braking, but it's safer for the passengers (though not for the traffic behind). We'll see how it goes once it's widely deployed.

It’s great to see some good competition in the market.



