
I've tried thinking about how virtual meetings would work, but the idea seems flawed to me. Virtual reality lets us view a 3D environment as if we were there, but how would we capture that 3D environment to begin with? Is there technology that can capture not just 3D, but 3D from all angles? I expect that for "presence" to work, when you move your head, the perspective the camera sees (on the other end) would need to change, and I can't imagine how that would be achieved.



Many technologies exist to achieve this; they're just not at the "consumer ready" stage yet.

3D scanning is best for static geometry like the walls of your room. They never move or change, so you can scan the whole space once. When you turn your head in VR, you'll see an accurate 3D representation of that environment.
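
To make the "turn your head and the perspective changes" part concrete, here's a minimal sketch (plain numpy, not any particular engine's API): once the room is scanned into a point cloud, rendering it from a new head pose is just a rigid transform plus a perspective projection, so head tracking alone is enough to update the view.

    import numpy as np

    def look_at_view(points_world, head_pos, head_rotation):
        """World -> view: undo the head's rotation and translation.

        points_world  : (N, 3) scanned points
        head_pos      : (3,) headset position from head tracking
        head_rotation : (3, 3) headset orientation (rotation matrix)
        """
        return (points_world - head_pos) @ head_rotation

    def project(points_view, focal=1.0):
        """Simple pinhole projection of view-space points to 2D."""
        z = points_view[:, 2:3]
        z = np.where(np.abs(z) < 1e-6, 1e-6, z)  # avoid division by zero
        return focal * points_view[:, :2] / z

    # A toy "scanned wall": a grid of points 2 m in front of the user.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
    wall = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, 2.0)], axis=1)

    # Turn the head 10 degrees to the right: the projection changes even
    # though the scanned geometry itself never does.
    theta = np.radians(10)
    yaw = np.array([[ np.cos(theta), 0, np.sin(theta)],
                    [ 0,             1, 0            ],
                    [-np.sin(theta), 0, np.cos(theta)]])
    print(project(look_at_view(wall, np.zeros(3), yaw))[:3])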

Full 6DOF tracking is another method. Given a pre-scanned rigid object, like a coffee cup, you can track its position and orientation using various optical and IMU-based techniques. That means an object whose shape never changes can still be tracked dynamically as it moves, giving you accurate enough information about the 3D world to pick up your coffee cup without spilling it.
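
As a rough illustration (toy numbers, not a real tracking pipeline): the output of 6DOF tracking is just a rotation plus a translation per frame, and applying it to the pre-scanned model is a one-liner.

    import numpy as np

    def apply_pose(model_points, R, t):
        """Place a pre-scanned rigid model (N, 3) at the tracked pose (R, t)."""
        return model_points @ R.T + t

    # A pre-scanned "coffee cup": a ring of points around the origin.
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    cup = 0.04 * np.stack([np.cos(angles), np.sin(angles), np.zeros(8)], axis=1)

    # Pose reported by the tracker this frame: tilted 15 degrees, on a table.
    theta = np.radians(15)
    R = np.array([[1, 0,              0             ],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
    t = np.array([0.3, 0.0, 0.75])

    print(apply_pose(cup, R, t)[:2])  # cup geometry in current world coordinates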

Optical tracking can be used to replicate facial movements onto 3D models (see: Faceshift), to track finger movements (see: Leap Motion / Kinect), or to capture full-body motion.
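
For the facial case, a common approach (and roughly what Faceshift-style trackers drive) is blendshapes: the tracker estimates per-expression weights from the video, and the avatar mesh is the neutral face plus a weighted sum of sculpted expression offsets. A toy sketch with made-up vertex data:

    import numpy as np

    def animate_face(neutral, deltas, weights):
        """neutral: (V, 3) vertices; deltas: (K, V, 3) expression offsets; weights: (K,)."""
        return neutral + np.tensordot(weights, deltas, axes=1)

    # Toy face with 3 vertices and 2 expressions ("smile", "jaw open").
    neutral = np.zeros((3, 3))
    deltas = 0.01 * np.random.default_rng(0).normal(size=(2, 3, 3))
    weights = np.array([0.8, 0.2])  # e.g. values estimated from the RGB feed
    print(animate_face(neutral, deltas, weights))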

Video can fill in the gaps for dynamic content. This can be done in two ways: 1) depth cameras, which combine RGB video from a normal camera with per-pixel depth information, let us reconstruct dynamic objects at the proper depth; 2) 360-degree stereoscopic video, which gives a fixed-perspective view of the full 3D environment. The latter isn't great for "3D from all angles", since non-rotational head movements (leaning, stepping sideways) won't produce an accurate view.
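
For option 1, the reconstruction step is standard pinhole back-projection: every pixel with a depth value becomes a 3D point that can then be re-rendered from the viewer's current head pose. Minimal sketch with made-up camera intrinsics:

    import numpy as np

    def backproject(depth, fx, fy, cx, cy):
        """Turn a depth image (H, W), in metres, into an (H*W, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Toy 4x4 depth image of a flat surface 1.5 m away (intrinsics are made up).
    depth = np.full((4, 4), 1.5)
    points = backproject(depth, fx=525.0, fy=525.0, cx=1.5, cy=1.5)
    print(points.shape)  # (16, 3): every RGB pixel now has a 3D position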

A combination of these technologies will enable a VR user to see a real 3D environment, and feel like they are in it.

One thing is certain: it's coming. It's just a matter of time. The technology is rapidly maturing.


Here is some very interesting tech from OTOY that is basically doing that: http://uploadvr.com/diy-wormholes-hands-on-with-otoys-light-...

"Recap: the rig takes two Canon DSLRs and spins them around in a circle, capturing a dense light field image. The resulting data, when properly processed, can be rendered with OTOY’s pipeline and delivered to a VR headset with full head tracking. It’s a static scene but OTOY have alluded to future video capture."



