Pretty telling that the first instinct of the video poster was to ask about cutting the incident from the video.
Even the most cherry-picked, edited-more-than-a-Michael-Bay-blockbuster videos recorded by fanboys on bright, dry days with at most medium traffic still make everyone go "This entire industry should be regulated to death asap".
A lot of videos get posted later without context. FSD keeps getting better and better but that context is lost when you just see a few close calls. Twitter, this site and Facebook then amplify a false narrative.
I'll say this time and again. Tesla's decision to use cameras instead of Lidar is a fundamental constraint that causes downstream problems.
Lidar is accurate 100% of the time. Tesla's cameras create an almost-as-accurate reconstruction, sure. But approaching 100% accuracy is very different from 100% accuracy.
I previously thought that having a low-latency radar system was a genius way to satisfy the hard constraint of not hitting a physical object, 100% of the time. But, in Elon's infinite wisdom, he removed it.
From Tesla's manufacturing tolerances to their range claims to their FSD system, their approach of '99% of the time it works 100% of the time' is the kind of toxic carelessness that software engineers have allowed to seep into a hard engineering discipline where real lives are on the line. (As a mechE who once built cars for a living, and now an ML person writing software, I feel like I have the perspective to make this egregious claim.)
'Marginally better than a distracted karen' is not the utopian FSD future I was promised.
P.S.: I understand that all sensors have noise, so any claim to 100% is dubious. But for the purposes of this discussion, the implied meaning of '100%' should be clear.
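To put rough numbers on the "approaching 100% vs. 100%" point, here's a back-of-the-envelope sketch; every figure in it is an assumption I made up for illustration, not a Tesla number:

```python
# Back-of-the-envelope: why "approaching 100%" is not "100%" at fleet scale.
# Every number below is an illustrative assumption, not a Tesla figure.

per_decision_accuracy = 0.9999       # hypothetical: right 99.99% of the time
decisions_per_mile = 100             # hypothetical: safety-relevant decisions per mile
fleet_miles_per_day = 10_000_000     # hypothetical: daily miles driven on FSD

decisions_per_day = decisions_per_mile * fleet_miles_per_day
expected_failures_per_day = decisions_per_day * (1 - per_decision_accuracy)

print(f"Decisions per day:         {decisions_per_day:,}")
print(f"Expected failures per day: {expected_failures_per_day:,.0f}")
# -> 1,000,000,000 decisions and roughly 100,000 failures per day.
#    Most are harmless; the tail is where the cyclists are.
```

The exact numbers don't matter; the point is that a "tiny" residual error rate multiplied across a fleet is a large absolute number of mistakes.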
AFAIK Tesla uses cameras instead of LIDAR because LIDAR, by itself, will not be able to tell you what a thing is. It cannot tell you, for example, whether the thing standing at the side of the road is a stationary trash can or a human about to cross the road. Hence you have to solve vision anyway. And if you have to solve vision, you might as well go all in on cameras.
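To be clear, the usual counterpoint isn't that lidar replaces vision, it's that you fuse the two: lidar answers "how far away is it", vision answers "what is it". A toy sketch of that fusion, with classes, labels, and thresholds that are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class LidarReturn:
    distance_m: float    # lidar is excellent at "how far away is it"
    is_moving: bool

@dataclass
class CameraDetection:
    label: str           # only vision can answer "what is it"
    confidence: float

def fused_decision(lidar: LidarReturn, cam: CameraDetection) -> str:
    """Toy fusion rule: lidar supplies geometry, the camera supplies semantics."""
    if cam.label == "pedestrian" and cam.confidence > 0.5 and lidar.distance_m < 30:
        return "slow_down"   # a person near the road might step out
    if lidar.distance_m < 5:
        return "brake"       # whatever it is, don't hit solid objects
    return "proceed"

# A stationary object 20 m ahead: lidar alone can't tell these two cases apart.
print(fused_decision(LidarReturn(20.0, False), CameraDetection("trash_can", 0.9)))   # proceed
print(fused_decision(LidarReturn(20.0, False), CameraDetection("pedestrian", 0.9)))  # slow_down
```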
Which would be fine if cameras had the same clarity as eyeballs, which are effectively hundreds of megapixels, have a physical iris and crazy wide dynamic range, and come with a lens optimized for such a small package.
And which have eyelids and tears and a human attached with hands ready to wipe if any dust or dirt gets into them.
Then again, a human, even if fully focused on driving, can look only in one direction at a time. And, of course, most drivers aren’t fully focused on driving.
I still don't understand the decision to get rid of all the other sensors they used to employ. Yes, a human can drive using only visual input (though sound and touch definitely help), but humans have an eternity of evolutionary pressure behind them to achieve that feat.
It would have made so much more sense to keep the sensors for the ramp-up, even if just as a fallback. Then you could validate the decisions made from visual information against them, until the collected data indicated that the system was robust enough to function on vision alone.
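Concretely, something like a shadow mode: vision drives, the legacy sensors only audit it, and you log every disagreement until the rate is low enough to justify dropping the hardware. A rough sketch of that idea, with invented thresholds and field names:

```python
# Shadow-mode validation sketch: vision drives, the legacy sensors only audit it.
# Thresholds, rates, and structure are invented for illustration.

DISAGREEMENT_THRESHOLD_M = 2.0   # how far the two distance estimates may diverge
MAX_DISAGREEMENT_RATE = 1e-5     # fleet-wide rate at which you'd trust vision alone

def log_frame(vision_distance_m: float, radar_distance_m: float, log: list) -> None:
    """Record whether vision and radar disagreed about the distance to the lead object."""
    log.append(abs(vision_distance_m - radar_distance_m) > DISAGREEMENT_THRESHOLD_M)

def vision_is_ready(log: list) -> bool:
    """Only retire the radar once the observed disagreement rate is low enough."""
    return bool(log) and sum(log) / len(log) < MAX_DISAGREEMENT_RATE

# Usage: feed in (vision, radar) pairs as the fleet collects them.
log = []
for vision_d, radar_d in [(31.2, 30.8), (12.0, 11.7), (45.0, 9.5)]:  # last pair: vision misses a truck
    log_frame(vision_d, radar_d, log)
print(vision_is_ready(log))  # False -- one gross disagreement in three frames is far too many
```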
The COVID-era cars don't have the old sensors because Tesla couldn't source the parts. I'm not saying that's a good reason for disabling them on older cars, but that's what Tesla decided to do.
If I'm being honest, I don't understand their obsession with using only cameras for FSD. It needs to have high beams on just for lane assist on highways at night now, which seems absurd to me.
I think it could be a case of "they were so preoccupied with wondering whether they could that they didn't stop to think whether they should". Without a doubt, FSD using only visual input would be the gold standard and awesome to _have already figured out_, but working backwards from that and disregarding the usefulness of "training wheels" in the form of other sensors might not have been the best decision.
> We all know that there’s plenty of wildly dedicated, glassy-eyed Tesla fans out in the world who will seize your arm at a party and talk toward you about the Glory of Elon and the Good News about Tesla, gripping your bicep tighter and tighter until you emit an involuntary gasp of pain.
Yeah, with that opening I'm sure the rest of the article will be completely unbiased.
Here's my take: for whatever reason, the planner is pretty bad. Perception is pretty good, and in this case it does detect the cyclist, but the planner does something inexplicably stupid with that information. From extensive viewing of FSD beta videos, this seems to very often be the case. No idea why it's so bad, but it suggests there's a lot of low-hanging fruit in that area.
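For illustration, here's a toy version of that failure mode with the stack split into perception and planning; none of this is Tesla's actual architecture, just a sketch of how a good detection can still produce a bad plan:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                 # perception got this right: "cyclist"
    closing_speed_mps: float  # positive means we're converging on it

def buggy_planner(detections: list) -> str:
    """Toy planner that only reacts to cars; a detected cyclist never changes the plan."""
    if any(d.kind == "car" and d.closing_speed_mps > 0 for d in detections):
        return "yield"
    return "keep_lane_and_speed"   # the cyclist was seen, then effectively ignored

def fixed_planner(detections: list) -> str:
    """Same inputs; any vulnerable road user we're converging on is a reason to yield."""
    if any(d.kind in ("car", "cyclist", "pedestrian") and d.closing_speed_mps > 0
           for d in detections):
        return "yield"
    return "keep_lane_and_speed"

scene = [Detection("cyclist", closing_speed_mps=3.0)]
print(buggy_planner(scene))  # keep_lane_and_speed  <- perception was fine, the plan wasn't
print(fixed_planner(scene))  # yield
```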
What's the status of a cyclist in the US? Are they, together with pedestrians, treated as the most vulnerable road users? The UK recently changed its rules to that effect, and on the European continent it has been the case for a while: https://www.gov.uk/government/news/the-highway-code-8-change...
There is a winding mountain-like road (Atascadero? Something like that) with several blind corners in Portola Valley (SF peninsula area) with no dedicated bike lanes but plenty of bikes. This is heavily used as a commute route because it’s a critical shortcut. The wealthy suburbanites who live there take it at very high speed in their Porsche and Land Rover SUVs. I’d be very curious to see what happens with FSD on that road.