Hacker News

Could someone who understands this space weigh in on how technically interesting this is? (Or isn't?) In particular, their research paper on "End to End Learning for Self-Driving Cars"[1] seems to yield a system that requires an unacceptable amount of manual intervention: in their test drive, they achieve autonomous driving only 98% of the time. But I have no real expertise in this space; perhaps this result is impressive because it was end-to-end, or because of the relatively small amount of training? Is such a system going to be sufficiently safe to be used in fully autonomous vehicles? Or is NVIDIA's PX 2 interesting, but not for the way it was used in their demonstration system?
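For context on where the 98% figure comes from: as I recall, the paper defines "autonomy" by charging each human intervention a fixed time penalty (on the order of six seconds) against the total drive time, rather than by counting miles or decisions. A small sketch of that metric, with the penalty value being my recollection and not something to rely on:

```python
def autonomy_pct(num_interventions, elapsed_seconds, penalty_seconds=6.0):
    """Percent of time spent driving autonomously, per the paper's metric:
    each human intervention costs a fixed time penalty."""
    return (1.0 - num_interventions * penalty_seconds / elapsed_seconds) * 100.0

# e.g. 10 interventions over a 50-minute (3000 s) test drive:
print(autonomy_pct(10, 3000))  # -> 98.0
```

Under this metric, "98% autonomous" can still mean an intervention every few minutes, which is why the number alone is hard to interpret as a safety claim.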

[1] http://images.nvidia.com/content/tegra/automotive/images/201...

It's incredibly freaking amazing if they are driving autonomously 98 percent of the time using deep learning with mainly cameras. No one else can do that. 98 percent is obviously a lot.
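For anyone curious what "end-to-end from cameras" means concretely: if I'm remembering the paper's architecture right, it's a single CNN mapping a 66x200 camera image straight to a steering command, with no hand-coded lane detection in between. A rough sketch of the layer stack and the shapes it produces (the exact kernel sizes and strides are my recollection of the paper, not verified):

```python
def conv_out(size, kernel, stride):
    """Output size of a 'valid' (no-padding) convolution along one dimension."""
    return (size - kernel) // stride + 1

h, w = 66, 200  # input camera image (3 channels), per my reading of the paper
layers = [      # (out_channels, kernel, stride) for each conv layer
    (24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1),
]
for ch, k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    print(f"{ch} feature maps of {h}x{w}")

flat = 64 * h * w  # flattened features feeding the fully connected head
print(flat, "-> FC 100 -> FC 50 -> FC 10 -> 1 steering output")
```

The point is the pipeline: pixels in, one steering value out, trained purely on recorded human driving, which is why doing it without LIDAR is the notable part.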

Thanks -- that answers the question! So it's fair to say that it's impressive because of the absence of LIDAR and/or other sensors -- and that by adding LIDAR to such a system one could presumably get toward 0% manual intervention?

The gap between 98% and 99.999% is very difficult to close, and it's not going to happen in the next 5 years. LIDAR can't help, for example, with obeying a police officer's gestures.
