
I've watched quite a few FSD videos and in almost every single one the car barely knows what it wants. The route jumps all over the place. I'm pretty sure it's just the nature of their camera-only system and the noisy data it generates.

The sat nav didn't update the route to go that way. The FSD system decided to. It probably completely lost track of the road ahead because of the unmarked section of the road and just locked on to the nearest road it could find, which was that right turn.

I've seen videos where it crosses an unmarked stretch of road and then wants to go into the wrong lane on the other side because it saw that lane first. It seems like it just panic-searches for a road to lock onto, because the biggest contributor to its path planning is just the white lines it can follow.



If you watch the Tesla AI Day presentation, they explain a separation of duties that kind of explains what's happening.

The actual routing comes from something like Google Maps, which constantly evaluates driving conditions and recomputes the optimal route to the destination. It does this based on the vehicle's precise location, but irrespective of the vehicle's velocity or time to the next turn.

The AI actually driving the car is trying to find a path along a route that can change suddenly and without warning. It's like when your passenger is navigating and says "Turn here now!" instead of giving you advance notice.
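
In very rough pseudo-code (purely illustrative names and logic, not Tesla's actual stack), that division of labor looks something like this:

    # Purely illustrative sketch of the division of labor described above --
    # none of this is Tesla's actual code or API. The nav layer recomputes the
    # "best" route from position alone; the driving policy just steers toward
    # whatever route it is handed on each tick, however late the change arrives.

    def best_route(position_m):
        """Stand-in for the nav layer: the route depends only on where the car
        is, not on its speed or on how close the next turn already is."""
        return "turn_right_ahead" if position_m > 80 else "continue_straight"

    def plan(route, lane_visible):
        """Stand-in for the driving policy: follow the current route suggestion."""
        if route == "turn_right_ahead":
            return "steer_right"
        return "keep_lane" if lane_visible else "search_for_lane_to_lock_onto"

    # One tick per position: when the route flips late, the policy has to react
    # instantly -- the "Turn here now!" effect.
    for position_m in (70, 85):
        print(position_m, "->", plan(best_route(position_m), lane_visible=True))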


But the navigation guidance didn't change in this case.

If such a trivial and obvious edge case as the navigation changing mid-route isn't handled, it just shows how hopelessly behind Tesla is.


The navigation is not in control of the driving. At all. It is only suggesting a route.


That could be selection bias. The [edit, was '99%'] vast majority of the time the car does the boring thing, which isn't click-worthy. Have you driven a Tesla for a reasonably long period of time?

There is a bigger lesson in there: click-driven video selection creates a very warped view of the world. The video stream is dominated by 'interesting' events, ignoring the boringly mundane that overwhelmingly dominates real life. A few recent panics come to mind.


99% perfect is not good enough; it's horrifyingly bad for a car on public roads.

When I drive, I don't spend 1 minute plowing into pedestrians for every 1 hour and 39 minutes I spend driving normally.

If an FSD Tesla spends just 99% of its time doing boring, non-click-worthy things, that is itself newsworthy for how dangerous it is.

To your point, I'm definitely interested in knowing how many minutes of boring driving tend to elapse between these events. The number of these videos publicized recently gives me the impression that one of these cars could not manage 100 minutes of intervention-free driving in a complex urban environment with pedestrians and construction without probably damaging a person or an inanimate object.


99% is a very rough colloquial estimate meaning 'the vast majority of the time', used to drive the point home. It could well be 99.999999%. What really matters is how it compares with human performance, and I don't have the data to make that comparison. The only ones who can are Tesla, modulo believing data from a megacorp in the wake of the VW scandal.


FYI, even if that 99.999999% number you quoted turns out to be more like 99.9%, it's still bad: that works out to around 44 minutes a year of the machine actively trying to kill you or others on public roads, assuming a person drives 2 hours a day.
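
Spelled out as a quick back-of-the-envelope (the reliability figures below are hypothetical placeholders, just to show the conversion under that 2-hours-a-day assumption):

    # Back-of-the-envelope conversion only: how many minutes per year of
    # misbehavior each "percent of time behaving" figure implies, assuming a
    # person drives 2 hours a day for a year. The reliability values are
    # hypothetical, not measurements.

    minutes_per_year = 2 * 60 * 365  # 43,800 minutes behind the wheel

    for reliability in (0.99, 0.999, 0.9999, 0.99999999):
        bad_minutes = minutes_per_year * (1 - reliability)
        print(f"{reliability:.8%} well-behaved -> {bad_minutes:.4f} bad minutes/year")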

FSD should not be allowed on the road, or if it is, it should be labeled as what it really is: lane assist.


I'm not 'quoting' any numbers. I don't own a Tesla. I don't trust lane assist technology; the probability of catastrophic failure is much larger than even with dynamic cruise control. I'll steer the wheel myself, thank you very much. I'm not a techno-optimist, rather the contrary. I would like independent verification of the safety claims Tesla makes, or of any other claims made by vendors of safety-critical devices.

What I am saying is that selection bias has exploded in the age of viral videos, and this phenomenon doesn't receive anywhere near the attention it deserves. We can't make sound judgements based on online videos; we need quality data.


>Could well be 99.999999%

That's up to you (or Tesla) to prove, isn't it? Taking your (or Tesla's) word that it's good most of the time is utterly meaningless.


There's an intersection near where I live that the Tesla cannot go through in the right lane without wanting to end up in the left lane past the intersection (I guess it's a bit of a twisty local road). At first it would be going in a straight line, then when it hit the moment of confusion it would snap into the left lane extremely quickly, in maybe 200 ms. Fortunately I never tried it with a car in that spot. After a nav update, it now panics, drops out of Autopilot there, and has you steer into the correct lane. It has nothing to do with poor visibility or anything; it's just a smooth input that neatly divides its "where is the straight line of this lane" ML model, or whatever.

It's actually fascinating to watch - it just clearly has no semantics of "I'm in a lane, the lane will keep going forward, and if the paint fades for a bit or I'm going through a big intersection, the lane and its lines are still there."

It also doesn't seem to have any long view of the road. I got a Tesla around the same time I was teaching my boys to drive, and with them I'm always emphasizing far-away events that increase your uncertainty about what's going on down the road. (Why did that car 3 cars ahead brake? Is that car edging towards the road going to cut in front? Is there a slowdown past the hill?) The Tesla has quick braking reflexes but shows no sign of explicitly reasoning about semantic-layer uncertainty.


FSD beta does have a mechanism for object permanence, as explained at Tesla AI Day.


Is that the same as a model for what it doesn't understand, so it can reason about the limits of its data?


Yes, I did for 4 years, and what they're saying is absolutely true. It desperately tries to lock on to something when it loses whatever it's focused on. Diligent owners just learn those scenarios so they know when they're coming. Others may not be so lucky.

For example, short, wavy hills in the road would often crest just high enough that once it couldn't see over to the other side, it would immediately start veering into the oncoming lane. I have no idea why it happened, but it did, and it still wasn't fixed when I turned in my car in 2019. I drove on those roads constantly, so I learned to just turn off AP around them, and it helped that traffic on the other side was sparse, but if that hadn't been the case I'd have had only a literal second to respond.

EDIT: IMO the best thing it could do in that scenario is just continue, for some length of time, on whatever track it was on before its focus was lost. It does "see" the lines it's following go off to the right or left (when you crest a hill the lines visually curve, or they disappear into a right/left turn), but only in the split second before they vanish. Maybe that idea doesn't fit into their model, but that was always my thought about it.
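
Sketched out, that idea might look something like this (a toy illustration with made-up names and a hypothetical grace period, not anything Autopilot actually does):

    # Toy sketch of the "keep following the last known track for a bit" idea
    # above. Everything here is hypothetical -- the names, the 2-second grace
    # period, and the curvature-only steering are all made up for illustration.
    from collections import namedtuple

    LaneDetection = namedtuple("LaneDetection", "curvature")  # signed curvature of the detected lane

    GRACE_SECONDS = 2.0  # hypothetical hold time after the lane lines disappear

    class LaneFollower:
        def __init__(self):
            self.last_curvature = 0.0      # last confident steering curvature
            self.seconds_since_lost = 0.0

        def steer(self, detection, dt):
            if detection is not None:      # lines visible: track them normally
                self.last_curvature = detection.curvature
                self.seconds_since_lost = 0.0
                return self.last_curvature
            self.seconds_since_lost += dt
            if self.seconds_since_lost <= GRACE_SECONDS:
                return self.last_curvature  # coast on the old track instead of
                                            # snapping to whatever shows up next
            return None                     # give up: alert the driver / disengage

    # The lines vanish cresting a hill; for ~2 s the car holds its previous track.
    follower = LaneFollower()
    print(follower.steer(LaneDetection(curvature=0.01), dt=0.1))  # 0.01
    print(follower.steer(None, dt=0.1))                           # 0.01 (held)
    print(follower.steer(None, dt=3.0))                           # None (hand back control)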


Re the edit: also, if it's a road the car has been on before, why doesn't it remember it?


1% is ridiculously high for something that endangers the people both inside and outside the car.



