

Light Field Photography - cnahr
http://news.kynosarges.org/2014/05/11/light-field-photography/

======
win_ini
I agree with most of the statements, but there's one thing I think a Lytro camera is better at: street photography.

Quick shots of street moments, where you have no time to focus properly but still want the subject in focus: a light-field camera is great for these situations, and I don't think that's played up enough.

------
bridger
I took a class that explained light field photography, but I found it difficult to understand the concepts through static diagrams. So I made a little Mac app that simulates a 2D scene of light sources, lenses, and light-field sensors. The sensors show their captured output in a graph.

The source is at [https://github.com/bridger/optic-workbench](https://github.com/bridger/optic-workbench)

A bit of explanation (and a link to the pre-built binary) is at [http://www.bridgermaxwell.com/index.php/blog/optic-workbench-light-field-capture/](http://www.bridgermaxwell.com/index.php/blog/optic-workbench-light-field-capture/).

(Sorry it is only for Macs. I would love to redo it in Javascript someday,
once I know Javascript.)
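
For anyone curious about the underlying idea without a Mac handy, here is a minimal sketch (in Python, and not bridger's actual code) of what a 2D light-field capture looks like: trace rays from a point source through an ideal thin lens and record where, and at what angle, each ray crosses a sensor plane. Those position-plus-angle samples are what a light-field sensor records; all the geometry below is made up.

```python
import numpy as np

def trace_to_sensor(source_x, source_y, lens_y, focal_length, sensor_y, n_rays=9):
    """Return (x, slope) samples where each ray crosses the sensor plane."""
    samples = []
    for slope in np.linspace(-0.4, 0.4, n_rays):
        # Propagate the ray from the source to the lens plane.
        x_lens = source_x + slope * (lens_y - source_y)
        # Ideal thin lens: the slope changes by -x/f at the lens.
        slope_out = slope - x_lens / focal_length
        # Propagate from the lens to the sensor plane.
        x_sensor = x_lens + slope_out * (sensor_y - lens_y)
        samples.append((x_sensor, slope_out))
    return samples

# An in-focus point source: every ray lands at (nearly) the same x, but a
# light-field sensor still tells the rays apart by angle.
for x, slope in trace_to_sensor(source_x=0.0, source_y=0.0,
                                lens_y=2.0, focal_length=1.0, sensor_y=4.0):
    print("x = %+.3f, slope = %+.3f" % (x, slope))
```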

------
mapt
The problem with lightfield photography, other than the inherent image quality
calculus of splitting a sensor into a bunch of lower-resolution, lower-
illumination, differently-focused subfields, is that it's so easily simulated
if anyone actually attempts to do scene capture properly - through adaptations
of stereo-vision algorithms for higher-N arrays of semi-calibrated sensors, or
near-infrared structured light (like the Kinect) for a 4D scene, or structure
from motion for a static 3D scene.

The new HTC One M8 uses crap smartphone sensors, _only two of them_, without any structured light, with a first-generation stereo-blurring algorithm dumb enough to run quickly on a smartphone, and it produces a rough approximation of the same product as Lytro.
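
For concreteness, a rough sketch of that kind of pipeline (OpenCV, with placeholder filenames and parameters; this is a guess at the approach, not HTC's actual algorithm): estimate disparity from the stereo pair, then composite a few Gaussian blur levels keyed to each pixel's distance from the chosen focal plane.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching gives a per-pixel disparity (proportional to 1/depth).
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Quantize distance from the chosen focal plane into a few blur levels.
focal_disparity = 32.0
levels = np.clip((np.abs(disparity - focal_disparity) / 8.0).astype(int), 0, 4)

# Precompute one Gaussian blur per level and composite per pixel.
color = cv2.imread("left.png")
blurred = [color] + [cv2.GaussianBlur(color, (8 * k + 1, 8 * k + 1), 0)
                     for k in range(1, 5)]
out = np.zeros_like(color)
for k in range(5):
    out[levels == k] = blurred[k][levels == k]
cv2.imwrite("fake_bokeh.png", out)
```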

~~~
jychang
As a programmer who also happens to be a photographer, I can tell you that's flat-out wrong.

The M8 produces a very crappy imitation of what the bokeh of a camera with a thin depth of field looks like. [1] The linked article is actually being "neutral" and fairly generous; the pictures, frankly, look like shit.

It is difficult to do DoF properly, even if you know the Z-distance of every object in the scene. It's not just a simple blur of things that are deemed out of focus; things like spherical aberration in the lens, which produces hard or soft bokeh, matter. [2]

"So easily simulated" couldn't be further from the truth. A bunch of academic
papers have been written about this, [3] which include ways to reproduce
effects that are similar to the "thin DoF" effect in large sensor cameras, but
few try to tackle _all_ of the issues and nuances present.

[1] [http://www.trustedreviews.com/opinions/htc-one-m8-camera-vs-a-proper-camera](http://www.trustedreviews.com/opinions/htc-one-m8-camera-vs-a-proper-camera)

[2] [http://jtra.cz/stuff/essays/bokeh/](http://jtra.cz/stuff/essays/bokeh/)

[3] [http://dl.acm.org/citation.cfm?id=2389316.2389318](http://dl.acm.org/citation.cfm?id=2389316.2389318)
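
To make that concrete, here is a hedged sketch of even the first step toward plausible blur: a disk-shaped (aperture-shaped) kernel whose diameter follows the thin-lens circle-of-confusion formula, rather than a Gaussian. It still ignores the spherical-aberration rim effects described in [2], and every number in it is illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def coc_diameter_px(z, focus_dist, focal_len, f_number, px_per_mm):
    """Circle-of-confusion diameter in pixels for an object at distance z (mm)."""
    aperture = focal_len / f_number
    # Standard thin-lens CoC: c = A * f / (S1 - f) * |S2 - S1| / S2.
    c = aperture * (focal_len / (focus_dist - focal_len)) * abs(z - focus_dist) / z
    return c * px_per_mm

def disk_kernel(diameter_px):
    """Round-aperture kernel: a normalized disk, not a Gaussian."""
    r = max(diameter_px / 2.0, 0.5)
    n = int(np.ceil(r)) * 2 + 1
    y, x = np.mgrid[:n, :n] - n // 2
    k = (x ** 2 + y ** 2 <= r ** 2).astype(float)
    return k / k.sum()

# Blur a background layer at 5 m when the lens is focused at 1.5 m.
d = coc_diameter_px(z=5000, focus_dist=1500, focal_len=50, f_number=1.8,
                    px_per_mm=100)
background = np.random.rand(256, 256)  # stand-in for a segmented depth layer
bokeh = fftconvolve(background, disk_kernel(d), mode="same")
```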

~~~
mapt
In [1]: "Now, this simulation above may look too artificial and unintuitive.
Real lenses do not behave like this, right? They actually do. Just look at
this picture below."

So he produces an accurate convolution model of how bokeh looks in soft/hard
configurations, and that's proof it's not easily simulated? If it's not a
straight Gaussian blur (as the HTC is likely using), that's fine - it can
still be simulated.

What plenoptic cameras do is use a microlens array to perform focus bracketing. One can use a macrolens array as well, given a rich enough 3D-interpolating image-processing pipeline.

Take a brick-sized lump of plastic, apply, say, 20 tiny cameras (each of the four corners and the center gets a sub-array of medium_NIR_filtered-far-medium-near), plus four temporally staggered Kinect structured-light projectors on the sides, and you get all your cues double-checked by redundant information. Your algorithms get to play with direct low-resolution texture-focus info, stereo distance for a bunch of long-baseline pairs in two dimensions, redundant short-baseline pairs, active focus returns on laser dots as an anchor, stereo returns on laser dots, everything. One checkerboard calibration and such a system cross-calibrates itself, characterizing a position, focus, and lens-distortion model for every lens.

Add additional lenses if you want to play with more of, or some other type of, bracketing.

The point is, attempting _proper_ computer vision scene capture involves a
crapload of sensors checking their results against each other, ideally diverse
sensors and code. We simply haven't tried that, outside _perhaps_ professional
motion capture suites. The sky's the limit on such things.

[1]
[http://jtra.cz/stuff/essays/bokeh/#what_affects_bokeh_shape](http://jtra.cz/stuff/essays/bokeh/#what_affects_bokeh_shape)
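
As a sketch of what that checkerboard step looks like for a single camera pair (OpenCV, with placeholder paths and board geometry; the rig described would repeat this across all pairs and bundle the results):

```python
import glob

import cv2
import numpy as np

board = (9, 6)  # inner corners of the checkerboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[:board[0], :board[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

obj_pts, pts_a, pts_b = [], [], []
for fa, fb in zip(sorted(glob.glob("camA_*.png")), sorted(glob.glob("camB_*.png"))):
    img_a = cv2.imread(fa, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(fb, cv2.IMREAD_GRAYSCALE)
    ok_a, corners_a = cv2.findChessboardCorners(img_a, board)
    ok_b, corners_b = cv2.findChessboardCorners(img_b, board)
    if ok_a and ok_b:  # keep only frames where both cameras see the board
        obj_pts.append(objp)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

size = img_a.shape[::-1]
# Intrinsics + lens distortion per camera, then the rigid transform between them.
_, Ka, dist_a, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
_, Kb, dist_b, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, pts_a, pts_b, Ka, dist_a, Kb, dist_b, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("rotation:\n", R, "\nbaseline (mm):", T.ravel())
```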

------
erikpukinskis
The thing that excites me most about light field photography is that LF cameras are the only thing that could take a photo that would actually feel at home in an Oculus Rift. Binocular cameras fake the 3D experience by capturing two 2D images and fixing one to each eye. But in the Oculus you can move your head around, so two 2D images aren't enough. You need an actual depth map in order to generate images in response to head movements. LF cameras can provide one.

[http://www.cs.berkeley.edu/~ravir/lightfield_ICCV.pdf](http://www.cs.berkeley.edu/~ravir/lightfield_ICCV.pdf)
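
A minimal sketch of that reprojection, assuming a color image, a per-pixel depth map in meters, and a focal length in pixels: shift each pixel by the parallax a small sideways head movement would induce. The black holes the warp leaves behind are disocclusions, which is exactly the data a full light field captures and a bare depth map does not.

```python
import numpy as np

def reproject(color, depth, fx, head_shift_m):
    """Warp color (H, W, 3) by the parallax of a small sideways head shift."""
    h, w = depth.shape
    u = np.tile(np.arange(w), (h, 1))
    # Parallax in pixels: nearer pixels (small depth) move more than far ones.
    du = np.round(fx * head_shift_m / depth).astype(int)
    u_new = np.clip(u + du, 0, w - 1)
    out = np.zeros_like(color)
    # Paint far pixels first so nearer ones win where they overlap.
    order = np.argsort(depth, axis=None)[::-1]
    ys, xs = np.unravel_index(order, depth.shape)
    out[ys, u_new[ys, xs]] = color[ys, xs]
    return out  # remaining black pixels are disocclusions a light field could fill
```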

~~~
nakedrobot2
Not really. Without the correct IOD (interocular distance) [1] you won't get "correct" stereoscopy; at best the effect will not feel right, and at worst it will make you sick.

An LF camera only gives a stereoscopic effect proportional to the size of the sensor. So unless you have a sensor even bigger than a medium-format camera's, you are not going to achieve the effect you are describing.

[1]
[https://en.wikipedia.org/wiki/Stereoscopy](https://en.wikipedia.org/wiki/Stereoscopy)
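
The back-of-the-envelope version of this argument: the virtual viewpoints a single plenoptic camera can synthesize are confined to its entrance pupil, so the maximum virtual baseline is roughly focal length divided by f-number. With illustrative numbers (not actual Illum specs):

```python
# Maximum virtual baseline of one plenoptic camera vs. human eye spacing.
focal_length_mm = 70.0  # assumed long end of a zoom, for illustration only
f_number = 2.0
entrance_pupil_mm = focal_length_mm / f_number  # ~35 mm of virtual baseline

human_iod_mm = 63.0  # average human interocular distance
print("max virtual baseline ~ %.0f mm vs human IOD ~ %.0f mm"
      % (entrance_pupil_mm, human_iod_mm))
```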

~~~
mapt
The premise here would be _two_ LF cameras, mounted rigidly in an IOD-like
frame.

Or ideally, more than two, for flexibility of the IOD and a better stereo-
vision fix, potentially used in 2D-3D compositing.

------
spankalee
I'm not sure consumers don't care about bokeh; they may just not know how to achieve it. I've been using the Google camera app that does lens blur, and everyone I show it to loves it. It's a little cumbersome to use, though, and requires a still subject. Maybe light field could be one of the features that keep compact cameras around in the face of competition from phones.

I also wonder if there are computer vision applications. I don't know how
accurate depth fields from stereoscopic images are, and maybe two cameras are
not always practical...

~~~
dpark
Bokeh isn't just reduced depth of field. Bokeh is the quality of the out-of-
focus areas. Shallow depth of field makes the bokeh more apparent, but the two
are somewhat orthogonal.

Looking at Lytro's examples, e.g. [1], I would say that the Illum demonstrates poor-quality bokeh as a result of the sensor limitations. The bokeh is quite noisy and pixelated (which is a strange thing to say about bokeh, but it's the result of the microlens sampling that the Lytro sensor does).

The technology is very interesting, but looking at Lytro's pictures has
convinced me that their camera will not suit my needs any time soon.

[1]
[https://pictures.lytro.com/lytro/albums/149429/embed?token=6...](https://pictures.lytro.com/lytro/albums/149429/embed?token=6cb04136-c43a-11e3-9416-22000a8b14ce)
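
For reference, the standard shift-and-add refocusing step that turns a plenoptic capture into a photo also shows where the resolution goes: the output has only about as many pixels as there are microlenses. A minimal sketch, assuming a (U, V, H, W) array of grayscale sub-aperture views:

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocus; alpha picks the synthetic focal plane (0 = as captured)."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its angular offset.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)  # average of all aligned views
```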

------
fuzzythinker
I don't see how the $1.5k Lytro is going to gain enough market share to be profitable, either. The only option I see for Lytro is to miniaturize their technology so that it is significantly better than Google's software offering, and to partner with a major phone manufacturer.

------
NIL8
The Lytro gallery:

[https://pictures.lytro.com/lytro/albums/149429/embed?token=6...](https://pictures.lytro.com/lytro/albums/149429/embed?token=6cb04136-c43a-11e3-9416-22000a8b14ce)

I want this.

~~~
hnriot
a photo where nothing is in focus. great.

lytro needs a different name than photography because it's something else.

~~~
bobbles
Are you serious? Click wherever you want to focus the shot.

~~~
dpark
The low sensor resolution means that nothing looks in focus. There's
insufficient resolution (and maybe processing artifacts add to the issue?) for
anything to look crisply focused.

------
porker
Could LFP be used to 3D-map a scene - assuming you can extract the data from
their proprietary format? It would be great to be able to turn a collection of
photos into a virtual world.
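
One hedged answer, assuming the sub-aperture views can be pulled out of the file: depth falls out of a plane sweep. Re-align the views as in refocusing for each candidate shift and keep, per pixel, the shift where the views agree most (lowest variance). A minimal sketch over a (U, V, H, W) array of views:

```python
import numpy as np

def depth_sweep(views, alphas):
    """Per-pixel disparity by picking the shift where all views agree best."""
    U, V, H, W = views.shape
    best_var = np.full((H, W), np.inf)
    best_alpha = np.zeros((H, W))
    for a in alphas:
        # Align every sub-aperture view for this candidate depth plane.
        shifted = [np.roll(views[u, v],
                           (int(round(a * (u - U // 2))),
                            int(round(a * (v - V // 2)))),
                           axis=(0, 1))
                   for u in range(U) for v in range(V)]
        var = np.var(np.stack(shifted), axis=0)
        mask = var < best_var  # lower variance = views agree = correct depth
        best_var[mask] = var[mask]
        best_alpha[mask] = a
    return best_alpha  # per-pixel disparity, convertible to metric depth
```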

------
acous
> "Holographic Recording ... remains irrelevant to consumers until we have
> convenient holographic displays to match."

I'm not sure why Lytro hasn't touted this side of the technology... the first consumer holographic display (the Oculus Rift) is due relatively soon.

~~~
phreeza
The Rift is not really holographic; it's 'just' stereoscopic. Nvidia is working on light-field glasses, though, and I believe this might be the future, because it allows for much sleeker eyepieces, at a higher computational cost. [https://research.nvidia.com/publication/near-eye-light-field-displays](https://research.nvidia.com/publication/near-eye-light-field-displays)

~~~
acous
I believe [1] that the head tracking qualifies it as holographic. That light field stuff looks super interesting...

[1] [http://doc-ok.org/?p=337](http://doc-ok.org/?p=337)

~~~
sp332
It's not holographic because you can't focus your eyes at different depths in
the image.

~~~
dynode
It's not holographic because it doesn't use diffractive / interferometric
effects.

[1]
[https://en.wikipedia.org/wiki/Holography](https://en.wikipedia.org/wiki/Holography)

