
Why is it that every time self-driving comes up, proponents act as if there's nothing between forging ahead toward L5 and merely improving on an unaided driver?

Currently, AP's "self-driving" features cannot save a life; they have killed people.

To understand that better, examine AP in finer detail:

Teslas have safety features that can intervene when the current situation seems to be leading to an accident. Let's call those the correction features. Those can (and have) saved lives.

Tesla also has features that are supposed to be more convenient for the driver; the claim is that they take a load off the driver. Call those the convenience features.

-

The convenience features have not evolved to the point where you are ever allowed to be less vigilant than you would be otherwise! By definition, if AP is driving, you too must be fully attentive and "driving", hence the increasingly intrusive attention monitoring, which has now evolved to detecting wheel input every 15 seconds...

Now realize that while I'm describing convenience vs. correction for the sake of examination, these are all systems in one car. You can get the correction features without FSD, but not the inverse.

Because of this, the convenience features alone cannot avoid an accident that the correction systems would not have avoided.

-

This is tricky to think about, but consider a driver who would have drifted off the road without AP. As soon as the driver deviates too far from what AP would have done and starts touching a lane marker, the lane detection systems realize there's an issue and correct.

However, say there's a tricky lane marking where the correction system wrongly believes the driver's input is the mistake. The driver gets an alert and an attempted steering correction.

But because the driver is already steering with full intention, that wrong input is actively being overridden by a human who has plotted their turn and can clearly sense that they should fight the suggested input before they end up running off the road.

With the convenience features enabled, the driver's input will come after the wrong action has already been taken!

If the avoidance systems knew that the convenience feature was about to send you into a jersey barrier... the convenience feature wouldn't be "allowed" to send you into a jersey barrier!
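The timing argument above can be sketched as a toy event ordering (purely illustrative, my framing of the point, nothing to do with Tesla's actual logic):

```python
# Toy model of the timing argument above (illustrative only, not Tesla's
# actual code). With correction-only, the human is steering continuously,
# so a wrong correction is opposed the instant it appears. With a
# convenience feature steering, the wrong action lands first and the
# human's override only arrives afterwards.

def event_order(convenience_steering: bool) -> list[str]:
    if not convenience_steering:
        # Human drives; their input is already present when a wrong
        # correction fires, so they fight it immediately.
        return ["human steering input", "wrong correction fires",
                "human already opposing it"]
    # Convenience feature drives; the wrong action precedes any human input.
    return ["wrong convenience action taken", "human notices",
            "human override input"]

# The human's counter-input comes first in one mode and last in the other.
assert event_order(False)[0] == "human steering input"
assert event_order(True)[-1] == "human override input"
```

The whole point is that last-comes-first ordering: the same human reaction that pre-empts a bad correction can only chase a bad convenience action.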

-

I want to see studies that focus on the safety of driving with full driver aids but without the convenience features. Claims that you are less tired from using them show that the practice is inherently unsafe, since you should be exactly as vigilant as normal.

Because of what's explained above, the expectation would be that there is no drop in safety if your car only corrects when there is a hazard... but there is a drop in safety when you are lulled into a false sense of security.

To me, work-in-progress self-driving should be monitored by trained people. It shouldn't be a box you get to tick on your infotainment screen.

If Tesla wants to carefully pick who's allowed to run these betas, more power to them, but the number of idiotic videos I can pull up right now of people intentionally testing FSD updates in situations they know are hairy, risking other people's safety, is insane.

That is just setting up self-driving to be dead on arrival.




> Currently AP's "self-driving" features cannot save a life, they have killed people.

That is just not true. Distracted/impaired human driving is a real thing that kills people. Just two weeks ago there was a DUI event where a Tesla driver on AP literally passed out behind the wheel and the car was trivially brought to a stop anyway.

> This is tricky to think about, but consider

This is postulating a two-mode failure! That's just bad engineering. Yes, it can happen, but by definition (involving two rare events overlapping) it's an extremely rare condition and will show up at only a vanishing frequency in any final statistics.

Never compromise core features (in this case "don't let distracted drivers hit the thing in front of them") chasing this kind of thing. Work on the secondary/tertiary modes once you have everything else down.


> Just two weeks ago there was a DUI event where a Tesla driver on AP literally passed out behind the wheel and the car was trivially brought to a stop anyway.

Nope, and this is a perfect example!

AP didn't save that person's life; or, more specifically, AP wasn't needed to save this person's life.

Tesla made Emergency Lane Departure Avoidance standard on all Teslas a while back.

Emergency Lane Departure Avoidance only intervenes if the driver deviates from expected behavior.

So in situations where AP would avoid a collision while in control, the car still has a feature, using the same sensor suite and the same capabilities as AP, that can intervene.

A Tesla without AP would have stayed in its lane in the absence of driver input and slowed to a halt with the flashers on.

Honestly a better outcome...

It's as simple as this: AP is half-heartedly handling the happy path plus the unhappy path.

It's OK not to have perfect handling of the unhappy path, as is currently the case; something is better than nothing in 99% of cases. But it's not OK to have this mode that falsely gives drivers the impression they're covered on the happy path.

Your example could not be more perfect, because a person driving under the influence relied on AP to handle the happy path for them. Now you can get into "well, maybe they would have driven anyway if it didn't have AP"...

And that's where emergency LKA and AEB come in. They wouldn't have given this driver a crutch to get on the road, but if they still chose to drive, they'd avoid an accident in the same manner AP did once things went off the rails.

This all goes back to the fact that the convenience features by definition cannot save a life. AP, as in the feature manually activated to actively assist the driver, cannot save a life, because it only works on the happy path. Other features, like LKA and AEB, are what save lives.
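One way to state the argument as a toy model (my framing, not Tesla's documented architecture): if the correction layer watches the same sensors regardless of whether AP is steering, it catches the same set of hazards in both modes, so engaging AP adds no avoidance capability on top of it.

```python
# Toy model of the "superset" argument (illustrative assumptions, not
# Tesla's documented architecture). The correction layer (ELDA/AEB) runs
# on the same sensors whether or not AP is engaged, so the set of hazards
# it can catch is identical in both modes.

def hazard_avoided(ap_engaged: bool, correction_catches_it: bool) -> bool:
    # The correction layer intervenes regardless of who is steering...
    if correction_catches_it:
        return True
    # ...and beyond it, AP contributes no additional avoidance.
    return False

# Under this model, engaging AP changes nothing about which hazards
# are avoided:
assert all(hazard_avoided(True, c) == hazard_avoided(False, c)
           for c in (True, False))
```

That identity is the whole claim: any life "saved by AP" was a life the always-on correction layer would have saved anyway.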

> This is postulating a two-mode failure! That's just bad engineering. Yes, that can happen, but by definition (involving two rare events overlapping) it's an extremely rare condition and will show only a vanishing frequency in any final statistics.

> Never compromise core features (in this case "don't let distracted drivers hit the thing in front of them") chasing this kind of thing. Work on the secondary/tertiary modes once you have everything else down.

I have no idea at all where any of this is coming from... please quote where I suggested core features should be compromised for secondary features.


> A Tesla without AP would have stayed in it's lane in the absence of driver input and slowed to a halt with the flashers on.

That's not how those systems work. You're confused, I think. If you let Jesus take the wheel in a Tesla under manual control, it will absolutely depart your lane and hit whatever you are aimed at. The emergency features are last-minute corrections to avoid an imminent collision, and they work about as well as other car vendors' products do: they reduce the likelihood and velocity of an imminent collision. They don't drive the car for you. The feature that does is AP.

> please quote where I suggested core features should be compromised for secondary features.

You want to refuse autonomy solutions to everyone to save hypothetical bikers at sunset in Mendocino! (Edit: sorry, I got the wrong elaborate example and was paraphrasing another poster. You want to save drivers where they incorrectly countermand an incorrect autonomy decision in a circumstance where the incorrect autonomy would have been correct. Or something. But the logic is the same.)


> If you let Jesus take the wheel in a Tesla under manual control it will absolutely depart your lane and hit whatever you are aimed at.

You are wrong.

https://www.teslarati.com/tesla-lane-departure-avoidance-sav...

I understand your confusion, though; Tesla is well behind other manufacturers here.

VWs have had this behavior for years now, almost as soon as they started dabbling in advanced safety assists; meanwhile, Tesla got it just two years ago.

https://youtu.be/TITEf_taUto

Kind of shows where the priorities were early on: saving incapacitated drivers vs. "look ma, no hands!"

-

Also your second quote just leaves me with more questions.

What on earth are you talking about?

> You want to save drivers where they incorrectly countermand an incorrect autonomy decision in a circumstance where the incorrect autonomy would have been correct.

What?


I don't know what to tell you. I have tested this feature on my own car, and its behavior matches the public documentation (cf. https://www.tesla.com/blog/more-advanced-safety-tesla-owners). It will scream at you as it departs the lane, but it will depart the lane. It will correct only if it detects an object or barrier it needs to avoid. It does not do what you think it does. Tesla does offer a product that does: it's called "Autopilot", and it absolutely works great. For example, it prevented a collision (or even lane departure) entirely in that DUI case we're discussing.


Your test sounds like it failed, then... did you read your own link?

> Emergency Lane Departure Avoidance is **designed to steer a Tesla vehicle back into the driving lane** if our system detects that it is departing its lane and **there could be a collision, or if the car is close to the edge of the road**.

And again, AP acted as a crutch that an intoxicated driver attempted to lean on.

Now, an intoxicated driver could have decided to drive without the crutch... but Emergency Lane Departure Avoidance would have kicked in when they passed out, keeping the vehicle centered while bringing it to a stop instead of trying to maintain a set speed.

(And let's head off the straw man here: no, I'm not blaming AP for an intoxicated driver, but I am stating the reality that AP, Autopilot, is marketed in a way that causes sober people to think of it as a co-pilot, let alone drunk people.)

AP literally disengages when off the happy path; it's the other safety features that help when things go wrong.

It's amazing that people still haven't caught onto this.


You just keep digging here, and it's all wrong. I don't know how else to reply but with facts:

> AP literally disengages when off the happy path, it's other safety features that help when things go wrong.

AP never disengages autonomously, ever. You turn it off via driver action, either via steering input that countermands it or via the stalk control. (Same deal for TACC -- it's on until you hit the brakes to kill it.)

Really, go rent a Tesla for a few days and experiment. I think you'd find it enlightening.


Wrong again...

https://jalopnik.com/this-video-of-a-terrifying-high-speed-t...

https://www.techtimes.com/articles/254126/20201112/watch-tes...

> Autopilot disengaged about 40 seconds prior to impact due to the Tesla issuing a Forward Collision Warning (FCW) chime

Here's a reckless driver in a Tesla assuming AP could handle unsafe speeds on their behalf.

Once again, AP is not the one to blame, but... a driver tried to use it as a crutch, and it disengaged due to the FCW.

Due to the complete lack of input from the driver, many suspect the driver didn't realize AP would be disengaged by the FCW, and while AP disengaging would have given an alert, it came at the same time as the multiple FCW alerts they got.

So, just like the DUI, the safety features would have worked *better* without AP. If they had been driving without AP, they would not have expected a now-disabled feature to respond to the FCW for them...
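The sequence being described can be sketched as a tiny state machine (my reading of the linked reports, not Tesla's firmware):

```python
# Sketch of the reported sequence (my reading of the linked articles, not
# Tesla firmware): an FCW event drops AP, and a driver who believes AP is
# still steering supplies no input of their own.

class TeslaOnAP:
    def __init__(self):
        self.ap_engaged = True
        self.alerts: list[str] = []

    def forward_collision_warning(self):
        self.alerts.append("FCW chime")
        if self.ap_engaged:
            self.ap_engaged = False
            # This alert fires at the same moment as the FCW chimes,
            # which is the claimed reason the driver missed it.
            self.alerts.append("AP disengaged")

car = TeslaOnAP()
car.forward_collision_warning()
assert not car.ap_engaged  # nobody is steering unless the driver reacts
```

The failure mode isn't the disengagement itself; it's that the one alert that matters arrives buried inside the alert storm that caused it.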

-

Seems like you didn't realize this was the case with FCW and AP either, so you're welcome...

Ps: My Model S test drive (remember those?) was not going so hot when the headliner started peeling off the demo car, but it went cold when the not-a-salesperson panicked because I didn't disable AP fast enough (since we were approaching a section of lane markers on their test circuit where it was known to veer toward barriers).

I'm pretty familiar with Teslas, though; I was a huge fan until *gestures at everything* from the AP marketing to the silent CPO -> used change, etc.

Pps: Please don't go and try this on a public road to prove an internet stranger wrong.

You already admitted to trying to test ELKA (which won't intervene until things are going seriously wrong).

You're not going to be able to test FCW with AP without doing something silly...


It's a setting. You can have just a warning or a corrective maneuver applied.


Emergency LKA (which maneuvers) turns on every time the car is started, regardless of the setting, and has to be re-disabled, unless something has changed since the press release.



