
VR is still a novelty, but Google's light-field technology could make it serious art - rbanffy
https://www.technologyreview.com/s/610458/vr-is-still-a-novelty-but-googles-light-field-technology-could-make-it-serious-art/
======
luma
If you have a compatible headset (Vive/Oculus/WinMR), check out the experience
here:
[http://store.steampowered.com/app/771310/Welcome_to_Light_Fi...](http://store.steampowered.com/app/771310/Welcome_to_Light_Fields/)

This approach is a pretty major step for creating believable VR experiences.
You're still limited to a pretty small space (great for seated users), but the
effect is very convincing and entirely unlike traditional photo-captured
experiences previously available for VR. You can move your head around (but
only within the sphere) and everything tracks the way you'd expect: parallax
works, light rays through windows move, reflections change as your perspective
changes, and people's eyes can track you as you move.

~~~
gautamb0
Seconded, they look absolutely gorgeous. Photogrammetry scenes are comparable
but are still a touch behind these in realism.

~~~
rachelmetz
They really do! I was impressed. (I'm the reporter who wrote that article.)

~~~
gautamb0
I've been following your AR/VR coverage for a couple of years now and have
enjoyed it and been enlightened by it.

~~~
rachelmetz
thank you so much!

------
fenwick67
"Surely _this_ will make VR mainstream!" \- journalists since 2012

~~~
WorldMaker
I think you've misspelled 1992

------
heavenlyblue
You can't capture the actual light field with a usual camera, can you?

~~~
John_KZ
Actually, not even that spinning camera array can capture a decent
approximation of it (assuming it's meant to cover the entire volume of the
sphere to account for head movement). Light fields are not going to look very
good until we get DeepRendering or something.

~~~
VikingCoder
I think you're discounting something...

If you consider how Google Cardboard works, it exploits the fact that the
camera has a field of view - the image is perspective. So when you hold the
camera at 10 degrees left of center, it takes the right-most column of
pixels... And when you hold the camera at 10 degrees right of center, it takes
the left-most column of pixels. And these columns of pixels converge - as
though your head were as wide as the distance between those two points where
the camera was. (Something like that.)

Point being, if you move the camera through enough points, aimed in enough
directions (even if it's on the surface of a sphere, always aimed out), you
eventually capture rays of light as though they passed through every voxel in
that space, from every direction.

And, just to highlight, they are doing a TON of compression. This is
essentially inside-out tomography, or something like that, to derive what the
color is, from every voxel, in every direction... And then doing a
5-dimensional compression on that data (the 2D surface of a sphere of pixels,
at every position inside a 3D sphere).

So, I think it's a much more "decent approximation" than you're giving it
credit for.

Or at least, it has the potential to be, if the algorithms are good enough.
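The re-parameterization behind this idea can be sketched in a few lines. This is a minimal numpy sketch (the function name, the unit-radius sphere, and the geometry setup are my own assumptions, not Google's pipeline): any ray observed from a point inside the capture sphere also crosses the sphere's surface somewhere, travelling in the same direction, so it coincides with a ray some outward-aimed surface camera could have recorded.

```python
import numpy as np

R = 1.0  # assumed radius of the capture sphere

def exit_point(origin, direction):
    """Point where the ray origin + t*direction (t > 0) crosses the sphere |p| = R.

    A color recorded at this surface point, along this direction, is the color
    a viewer at `origin` would see looking along `direction`.
    """
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Solve |o + t*d|^2 = R^2 for the positive root t.
    b = np.dot(o, d)
    t = -b + np.sqrt(b * b - (np.dot(o, o) - R * R))
    return o + t * d

# A viewer at the center looking along +z needs the ray that exits at the
# "north pole" of the sphere -- exactly what a camera there, aimed out, saw.
p = exit_point([0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

In other words, sampling enough surface positions and outward directions is enough to answer any interior view query; the hard parts Google solves are dense-enough sampling, interpolation between nearby captured rays, and the heavy compression mentioned above.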

~~~
John_KZ
You're absolutely right. I was thinking in terms of a previous attempt by some
people, using a planar camera array with no motion involved. In that case it
was impossible to reconstruct the lightfield satisfactorily. You only got a
small pyramidal volume and a lot of ugly interpolations because of the small
number of cameras. In this case however you can reconstruct the real deal at
an arbitrary precision by increasing the scan time. If they can render it
properly in real time, it will be amazing.

------
jhayward
(Site blocks ad-blockers)

