
I understand the idea that you can get the exact positions of the surroundings if you have an accurate position for the device. But how are they achieving that? I don't know of any tech that small that is accurate to 1 cm without external help.

I don't see how they can do this without an accurate position for the device.

An example of where it breaks down without the position information:

Point the device perpendicular to a uniformly colored wall and move parallel to the wall. The visual information is then undefined, and you have to rely on very accurate position sensors (which don't exist) to correctly scan the environment.
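
To make the degeneracy concrete, here is a minimal sketch (hypothetical, not whatever pipeline the device actually uses): Lucas-Kanade-style visual tracking recovers motion from the spatial image gradients in a patch, and on a uniformly colored wall those gradients are all zero, so the equations are singular and sideways motion is unobservable.

    import numpy as np

    patch = np.full((16, 16), 0.5)      # uniformly colored wall patch
    Ix, Iy = np.gradient(patch)         # spatial gradients: all zeros
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    AtA = A.T @ A                       # structure tensor of the patch
    print(np.linalg.matrix_rank(AtA))   # 0 -> flow equations are singular

No amount of clever software recovers motion from that patch; the information simply isn't in the images.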




I'm sure tiny gyros and accelerometers have improved in the past couple of years. They've probably advanced dead reckoning using the existing sensors. And maybe they can use both the front and back cameras for position information.


But they haven't improved enough. Oculus is running into these problems right now; their best solution is an additional camera pointed at the user, and they have a much simpler task, since the user is stationary.
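
The underlying problem hasn't changed: in pure dead reckoning, a constant accelerometer bias b integrates twice into a position error of 0.5 * b * t^2, so even a tiny bias swamps centimeter accuracy within seconds. A rough sketch, using a made-up (and optimistic) bias figure:

    import numpy as np

    bias = 0.01                 # m/s^2, optimistic for phone-grade MEMS
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)

    vel = np.cumsum(np.full_like(t, bias) * dt)   # first integration
    pos_err = np.cumsum(vel * dt)                 # second integration

    print(f"drift after 10 s: {pos_err[-1]:.2f} m")   # ~0.5 m, not 1 cm

That's why these systems lean on cameras or external references instead of integrating the IMU alone.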



