
Tesla AP2.0+ vehicles have three forward-facing cameras, two on each side (front quarter panels and B pillars), and one rear camera, with overlap between them all. That should be more than sufficient to use structure from motion [1] for ranging instead of LIDAR (not even counting the ultrasonics).
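
For anyone unfamiliar with how ranging falls out of overlapping cameras plus motion, here's a rough two-view sketch using OpenCV. Nothing here is Tesla-specific: the intrinsics K and the metric baseline are assumed inputs, and pure monocular SfM only recovers depth up to scale unless you pin the scale with a known baseline between viewpoints or odometry.

  import cv2
  import numpy as np

  def estimate_depths(img_a, img_b, K, baseline_m):
      """Rough two-view structure-from-motion depth estimate.
      img_a, img_b: grayscale frames with overlapping views.
      K: 3x3 camera intrinsics (assumed known from calibration).
      baseline_m: assumed metric distance between the two viewpoints,
      used to fix the scale that monocular SfM alone cannot recover."""
      # Detect and match features between the two views
      orb = cv2.ORB_create(2000)
      kp_a, des_a = orb.detectAndCompute(img_a, None)
      kp_b, des_b = orb.detectAndCompute(img_b, None)
      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
      matches = matcher.match(des_a, des_b)
      pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
      pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

      # Essential matrix + relative pose between the two views
      E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
      _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)

      # Triangulate matched points; t is unit-length, so rescale by the baseline
      P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P_b = K @ np.hstack([R, t * baseline_m])
      pts_h = cv2.triangulatePoints(P_a, P_b, pts_a.T, pts_b.T)
      pts_3d = (pts_h[:3] / pts_h[3]).T   # Nx3 points in camera-A coordinates
      return pts_3d[:, 2]                 # forward range (depth) per matched feature

With a fixed camera rig the baselines come straight out of factory calibration, and ego-motion between frames adds more views of the same points, which is the basic argument for cameras plus compute standing in for LIDAR ranging.
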

https://www.teslarati.com/tesla-autopilot-software-v9-blind-... (Tesla Version 9 real-world blind spot test shows Autopilot’s 360° cameras in action)

Youtube video of 3D Autopilot perspective: https://www.youtube.com/watch?v=wypE4fC56bg

Of note:

> Particularly noticeable in Erik’s demo, though, was what seemed to be a slight lag in how other vehicles’ avatars are displayed on the Model S’ instrument cluster, particularly when they are overtaking the electric car. That said, considering that blind spot monitoring is utilizing video feeds from the side and rear cameras, these slight lags could be due to the system shifting from one camera to another. This was particularly notable at around the 1:25 mark in the owner-enthusiast’s video.

Which leads me to believe that the sensors (cameras, front radar, ultrasonics) are sufficient, but also explains why the NN hardware upgrade is going to be necessary. Tesla bet that it would be cheaper to drive down the cost of the processing hardware than to stay beholden to LIDAR, and it looks like they might be right.

[1] https://en.wikipedia.org/wiki/Structure_from_motion



I'd bet that lag is simply a front-end display issue.



