
I think at this stage, nobody is certain. But I tend to side with "more information is better, and you filter out what you don't need." It's the same reason sensor fusion algorithms use the gyroscope and accelerometer in concert to measure movement.
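For what it's worth, that's roughly what a complementary filter does; here's a minimal sketch with made-up sample values and a simplified pitch formula, not any particular product's implementation:

    import math

    # The gyro integrates well over short intervals but drifts; the
    # accelerometer gives a noisy but drift-free gravity reference.
    # Blending the two is more stable than either sensor alone.
    ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

    def update_pitch(pitch_deg, gyro_rate_dps, accel_x, accel_z, dt):
        """Fuse one gyro/accel sample into the running pitch estimate."""
        gyro_pitch = pitch_deg + gyro_rate_dps * dt                # integrate angular rate
        accel_pitch = math.degrees(math.atan2(accel_x, accel_z))   # gravity-based angle
        return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

    # Example: 100 Hz samples, slight forward tilt (hypothetical readings)
    pitch = 0.0
    for gyro, ax, az in [(0.5, 0.02, 0.99), (0.4, 0.03, 0.99), (0.3, 0.03, 0.99)]:
        pitch = update_pitch(pitch, gyro, ax, az, dt=0.01)
    print(round(pitch, 3))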



Dunno, it seems like lidar often results in seeing plastic bags or tumbleweeds as car-killing objects.


Yes, but that's what I was getting at. You filter those out using the camera. The reverse is probably true as well: sometimes you need depth, since an image alone can't give you enough information.
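Something like a late-fusion veto, sketched below with hypothetical labels, fields, and thresholds (this is just one way to combine the two sensors, not how any particular stack does it):

    # Keep a lidar detection as an obstacle unless the camera classifier,
    # looking at the same region, is confident it's soft debris.
    HARMLESS = {"plastic_bag", "tumbleweed", "exhaust_plume"}

    def keep_obstacle(lidar_hit, camera_label, camera_conf, conf_threshold=0.8):
        """Return True if the lidar return should still be treated as an obstacle."""
        if lidar_hit["range_m"] < 3.0:
            return True  # too close to gamble on the classifier either way
        if camera_label in HARMLESS and camera_conf >= conf_threshold:
            return False  # camera is confident it's soft debris; drop the hit
        return True       # otherwise keep it; depth alone can't tell us it's safe

    # Example: a strong lidar return that the camera says is a bag
    print(keep_obstacle({"range_m": 12.0}, "plastic_bag", 0.92))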



