
Oh my dog.

Part of this is the fault of Tesla's marketing, but you are wildly off the mark. The cars you are seeing are running Autopilot, not FSD. Most of them are even on the older, radar-based Autopilot.

Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle to distinguish those from overhead bridges, so if one appears close to a bridge you're SOL, because stationary radar returns near overpasses get whitelisted away.
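
For context, here's a minimal sketch (hypothetical names, not Tesla's actual code) of the classic stationary-return filter in radar-based cruise control: a return closing at roughly ego speed looks stationary, and legacy stacks discard those unless another sensor confirms them, which is how an overpass and a crossing semi end up in the same discarded bucket:

    # Hypothetical sketch of the classic radar "stationary target" filter.
    # Not Tesla's code; illustrates why legacy radar ACC whitelists away
    # anything that isn't moving, bridges and crossing semis alike.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float        # distance to the return
        closing_speed: float  # m/s, positive = we are approaching it

    def is_actionable(ret: RadarReturn, ego_speed: float,
                      camera_confirms: bool) -> bool:
        # A stationary object closes at roughly our own speed.
        stationary = abs(ret.closing_speed - ego_speed) < 1.0
        if stationary and not camera_confirms:
            return False  # whitelisted: overpass, sign... or a crossing semi
        return True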

Tesla Vision in FSD is a much, much more developed version, which has been excellent at perceiving its environment, especially now with the new occupancy network. Its decision-making needs work, but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.
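
To make concrete what an occupancy output buys the planner - the grid format below is my assumption, since Tesla's actual network interface isn't public - here's a sketch of a path check against a boolean voxel grid:

    # Sketch of querying an occupancy grid, assuming a simple boolean
    # voxel volume (Tesla's real occupancy-network output is not public).
    import numpy as np

    VOXEL_SIZE_M = 0.4  # assumed metres per voxel

    def path_is_clear(occupancy: np.ndarray, waypoints_m: np.ndarray) -> bool:
        """occupancy: (X, Y, Z) bool grid; waypoints_m: (N, 3) ego-frame points."""
        idx = np.floor(waypoints_m / VOXEL_SIZE_M).astype(int)
        idx = np.clip(idx, 0, np.array(occupancy.shape) - 1)
        # Any occupied voxel on the path forces a stop/replan, even for
        # objects the classifier never labelled as a car or pedestrian.
        return not occupancy[idx[:, 0], idx[:, 1], idx[:, 2]].any()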

Your comment about useless data is also wrong. They are experts in their field and they know exactly what type of data they need. Both Tesla and Karpathy himself have shown in multiple presentations that they focus on training on unique/difficult situations, because more data from perfect conditions is no longer useful to them. They have shown exactly how they do it, and have even shown the great infrastructure they've built for autolabeling.
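
For anyone curious what "focusing on difficult situations" typically looks like in code, here's a generic loss-based hard-example-mining sketch - a common technique, not Tesla's actual pipeline:

    # Generic hard-example mining (a common technique; not Tesla's code):
    # keep the clips where the current model disagrees most with the
    # auto-labels, so easy highway footage stops dominating training.
    import torch

    def select_hard_examples(model, dataset, loss_fn, top_k=1000):
        model.eval()
        scored = []
        with torch.no_grad():
            for i, (frames, auto_labels) in enumerate(dataset):
                loss = loss_fn(model(frames), auto_labels).item()
                scored.append((loss, i))
        scored.sort(reverse=True)            # highest loss = hardest clip
        return [i for _, i in scored[:top_k]]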

Your claim about cruise control from competitors being equal to FSD is laughable. They don't even match Autopilot: https://www.youtube.com/watch?v=xK3NcHSH49Q&list=PLVa4b_Vn4g...



Going to post this here as a rebuttal: a video made by Tesla fans that shows severe shortcomings in the current version of FSD.

https://insideevs.com/news/616509/tesla-full-self-driving-be...

TL;DR: a Tesla can't identify a box in the road. It can finally identify people, but it still doesn't do a good job of avoiding them.

> Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle to distinguish those from overhead bridges, so if one appears close to a bridge you're SOL, because stationary radar returns near overpasses get whitelisted away.

Both of these statements are false. Tesla Vision still has trouble detecting white semis as of October 2022. There are no self-driving vehicles that use radar for navigation (you appear to be mixing up radar with LIDAR, which builds a precise 3D range map, and all of Tesla's competitors are able to tell trucks apart from bridges; truck-identification failure is unique to Tesla), though many regular modern cars do use radar for autobraking systems. As these systems are only intended for use at extremely short ranges directly in front of the vehicle, it's irrelevant whether the object detected is a bridge or a semi.
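
To illustrate that last point, here's a toy time-to-collision trigger (thresholds invented; real AEB calibrations vary by manufacturer) - it only fires at very short range directly ahead, so the bridge-vs-semi distinction never enters the decision:

    # Toy time-to-collision (TTC) autobrake trigger. Thresholds are
    # invented for illustration; production AEB calibrations differ.
    def should_emergency_brake(range_m: float, closing_speed_ms: float) -> bool:
        if closing_speed_ms <= 0:      # not approaching the object
            return False
        ttc_s = range_m / closing_speed_ms
        # Fire only in the last moments, for objects directly ahead;
        # whether the return is a "bridge" or a "semi" is irrelevant here.
        return range_m < 50.0 and ttc_s < 1.5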

> Tesla Vision in FSD is a much, much more developed version, which has been excellent at perceiving its environment, especially now with the new occupancy network. Its decision-making needs work, but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.

This does not match reality. At all. Teslas still regularly swerve across lanes of traffic and into oncoming traffic. In a brand-new Tesla a co-worker acquired several weeks ago, FSD could not identify cyclists on the road, failed to identify a number of pedestrians crossing at a crosswalk, did not distinguish between semi trucks and open sky, and successfully identified only about half of the other cars on the road with it. Maybe the super-duper secret version of Tesla Vision performs well, but the one actually shipping on Tesla vehicles right now performs worse than a drunk teenager.

> Both Tesla and Karpathy himself have shown in multiple presentations that they focus on training on unique/difficult situations, because more data from perfect conditions is no longer useful to them. They have shown exactly how they do it, and have even shown the great infrastructure they've built for autolabeling.

This is demonstrably false. Admission into the FSD beta requires a Safety Score that cannot be achieved in areas with rough or steep roads and is almost impossible to achieve in urban traffic; by construction, then, the beta fleet under-samples exactly the unique/difficult situations they claim to focus on. Moreover, as they still can't identify semi trucks, other cars, cyclists, or pedestrians with any reliability, the "great infrastructure" for "autolabeling" is basically just fraud.
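
To make the selection-bias argument concrete, here's a toy simulation (all numbers invented) of what a score-based admission gate does to the admitted fleet's driving environments:

    # Toy simulation of the selection effect (all numbers invented):
    # if the Safety Score gate rejects drivers in hard environments,
    # the admitted fleet under-samples exactly those environments.
    import random

    def admitted(env_difficulty: float) -> bool:
        # Harder roads force more hard braking -> lower score -> rejected.
        score = 100 - 40 * env_difficulty + random.gauss(0, 5)
        return score >= 80

    drivers = [random.random() for _ in range(10_000)]  # difficulty in [0, 1)
    fleet = [d for d in drivers if admitted(d)]
    print(f"mean difficulty, all drivers: {sum(drivers) / len(drivers):.2f}")
    print(f"mean difficulty, admitted:    {sum(fleet) / len(fleet):.2f}")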



