
A full discussion of this would require an article several times as long as this one, plus some understanding of how vehicle control systems work. I do a fair amount of work with OEMs in this space, and to summarize it in a horribly short way:

1) A system that relies primarily on cameras is necessarily limited by camera constraints. No, just because humans primarily rely on vision does not mean an ML model with cameras can do the same and achieve the same results in situations as complex as urban driving.

2) Every major player in this space other than Tesla spends a huge amount of time architecting an entire computing, data-exchange, and control platform for autonomous vehicles. These platforms are bulky by necessity, but far more robust, both in sensor capabilities and in the ability to cross-check data inputs against one another before deciding what control signals to send. Tesla basically added some cameras and a low-resolution radar (they're improving the radar now, but it sure as hell wasn't in the platform they originally promised was FSD-ready) and is hoping that will be enough, when everyone else studied those capabilities years ago and decided it wasn't.
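To make the cross-checking idea concrete, here's a deliberately minimal sketch of the principle, not any vendor's actual stack: independent sensors each estimate the distance to an obstacle, and the system refuses to trust any single reading when they disagree, falling back to the most conservative one. All function names and thresholds here are hypothetical illustrations.

```python
# Hypothetical sketch of redundant-sensor cross-checking.
# Names and thresholds are illustrative only, not a real AV API.

def fused_distance(camera_m, radar_m, lidar_m=None, max_disagreement_m=5.0):
    """Fuse independent distance estimates (in meters).

    If the estimates diverge by more than max_disagreement_m, assume a
    sensor fault and return the smallest (most conservative) reading;
    otherwise average the readings that are available.
    """
    readings = [r for r in (camera_m, radar_m, lidar_m) if r is not None]
    if max(readings) - min(readings) > max_disagreement_m:
        return min(readings)  # sensors disagree: plan for the nearest estimate
    return sum(readings) / len(readings)  # sensors agree: use the average

def should_brake(camera_m, radar_m, lidar_m=None, braking_threshold_m=30.0):
    # Issue a brake command when the fused estimate is inside the threshold.
    return fused_distance(camera_m, radar_m, lidar_m) < braking_threshold_m
```

The point of the extra hardware is exactly this: a camera-only stack has nothing independent to cross-check against, so a single failure mode (glare, fog, a misclassified object) propagates straight to the control signal.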

To summarize: what Tesla has built is a platform whose capabilities strongly appear to be limited to Level 3 automation, while engineers working on Level 4 concluded long ago that Tesla's approach is insufficient and adjusted their own hardware and software strategies accordingly.
