
Revolutionary "Light Field" camera tech - shoot-first, focus-later - hugorodgerbrown
http://allthingsd.com/20110621/meet-the-stealthy-start-up-that-aims-to-sharpen-focus-of-entire-camera-industry/
======
sbierwagen
The downsides, which, of course, this press release doesn't mention:

\- Greatly, greatly reduced image resolution. Great big dedicated-camera sized
lens and image sensor, cellphone-camera sized pictures. 1680×1050, at most.
(1.76MP)

\- Chromatic aberration. The microlenses have to be small, of course, so
they're going to be made of single physical elements, rather than doublets.[1]

\- Various amusing aliasing problems. (note the fine horizontal lines on some
of the demo shots)

\- Low FPS. Each image requires lots of processing, which means the CPU will
have to chew on data for a while before you can take another image.

\- Proprietary toolchain for the dynamic images. Sure, cameras all have their
particular RAW sensor formats, but this is also going to have its own output
image format. No looking at thumbnails in file browsers. Photoshop won't have
any idea what to do with it. Can't print it, of course.

\- You can just produce a composite image that's sharp all over, but why not
use a conventional camera with a stopped-down[2] lens, then?

\- It's going to be really thrillingly expensive. This is a given, of course,
with new camera technology.

[1]: <http://en.wikipedia.org/wiki/Doublet_(lens)>

[2]: <http://en.wikipedia.org/wiki/F/stop#Effects_on_image_quality>

~~~
jws
_Proprietary toolchain for the dynamic images_ – you could embed the full
dynamic information in a JPEG extension and use the JPEG thumbnail as a
thumbnail and the JPEG image as your selected representation. Then you could
use your nonstandard code to regenerate the JPEG image data from your full
data when you wished.
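
The JPEG container actually supports this: the format reserves APPn marker
segments that standard decoders skip, so extra data rides along while the
file stays viewable everywhere. A minimal sketch in Python (the APP15 marker
and payload layout here are arbitrary illustrations, not any real light
field format):

```python
def embed_app_segment(jpeg_bytes, payload, marker=0xEF):
    # Insert an APPn segment (0xEF = APP15) right after SOI (FF D8).
    # Standard JPEG decoders skip unknown APPn segments, so the image
    # still displays normally; our tool can read the payload back out.
    assert jpeg_bytes[:2] == b'\xff\xd8', "not a JPEG"
    # The 2-byte big-endian length field counts itself plus the payload.
    segment = (bytes([0xFF, marker]) +
               (len(payload) + 2).to_bytes(2, 'big') +
               payload)
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```

Real APP payloads are capped at 65533 bytes each, so a full light field
would have to be split across several segments, but the principle holds.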

 _You can just produce a composite image that's sharp all over…_ – but what
fun is that? I'm having a grand time with SynthCam on my iPhone to act like a
large lens camera to regain distance dependent focus even though I have a tiny
lens.

 _It's going to be really thrillingly expensive._ – It doesn't need to be.
Tiny micro lenses on the image sensor might be much cheaper than large chunks
of precision glass. Think "inkjet-like print head squirting one of the resins
used for plastic eyeglasses into etched depressions relying on surface tension
to form the lens". Just guessing there. Maybe placing precision sized beads in
each depression and then heating to reflow into a surface tension defined lens
would work better.

~~~
bad_user

         You can just produce a composite image that's sharp 
         all over… – but what fun is that?
    

With DSLRs that have huge lenses with large aperture sizes, the depth of
field varies a lot, and those lenses also have a sweet spot in focal length /
aperture at which the images produced are sharpest. Lowering the aperture
size increases the depth of field, but then you've got another problem, as
the shutter speed also has to be adjusted and you end up using a tripod.

Getting a picture that has everything in focus and is tack sharp can be
seriously challenging.

To tell you the truth, while these photos with adjustable focus seem cool,
focus is not my pain - what I want is to be able to take great photos (the
kind that 35mm cameras can do) for reasonable prices, preferably with
something that fits in my pocket.

And to expand on the point above - focus is not painful when the camera has
enough focus points. I played with a Nikon D3s that has a whopping 51 auto-
focus points; let me tell you that it's freaking awesome, as it can track
your subject as it moves. The problem is that consumer-level DX DSLRs only
have like 11 focus points, which is still cool, but point&shoots suck badly
in this area, most of them focusing only in the center of the image.

Another problem with this project that I can see - people don't like playing
with their images on the computer. When you take 500 photos in a single day,
and another 600 photos the next day (like when going on a trip), it's really
painful to carefully adjust each image, not to mention that the RAW formats
are huge and seriously cut into the number of photos you can take ... yeah,
making adjustments is great, but I prefer taking more photos; that's why I
shoot in JPG and don't regret it.

~~~
hristov
Focusing with point and shoots should not be a problem. They all have a huge
depth of field due to their small sensors, so you have to try really hard to
put something out of focus. Blurry pictures on point and shoots are usually
not the result of a lack of focus, but of something else (e.g. a shaky
camera).
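
The small-sensor effect is easy to quantify with the hyperfocal distance
formula H = f²/(N·c) + f, where c (the circle of confusion) scales with
sensor size. A sketch in Python, using rough illustrative numbers for a
compact (6mm lens, 0.005mm CoC) versus a full-frame DSLR framed the same way
(35mm lens, 0.03mm CoC), both at f/2.8:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    # Hyperfocal distance H = f^2 / (N * c) + f: focus at H and
    # everything from H/2 to infinity is acceptably sharp.
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

compact = hyperfocal_mm(6.0, 2.8, 0.005)      # ~2.6 m
full_frame = hyperfocal_mm(35.0, 2.8, 0.030)  # ~14.6 m
```

So the compact has everything beyond about 1.3 m in focus wide open, while
the DSLR needs you to focus deliberately - which is the whole point above.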

Even with DSLRs you usually only need one focus point. You can focus at the
center and re-frame. It is very simple to do, and is much simpler than
choosing a focus point (and much safer than trusting the camera to
automatically choose a focus point for you). Multiple focus points can be very
useful in certain rare circumstances: when shooting something really fast off
of center without being able to pre-focus or when your camera is bolted down
on a stand. Even then I cannot possibly imagine why anyone would need 51
points. This is obvious feature creep.

But yeah, I am really not sure who the intended market for this camera is.
Focusing is just not a pain point, in my opinion. This camera could be used by
artists and professional photographers to play around with the depth of field
to get a great artistic shot, without making their subjects wait. But with the
micro-lens design, will it have enough image quality for professionals? I
guess we will see.

~~~
bad_user
You cannot re-frame when shooting moving subjects - kids playing, sports,
birds, cars, motorcycles, boats - there are all kinds of instances in which
your subject doesn't stand still and even moves in unpredictable patterns,
so keeping your subject in the center of the frame and/or re-framing is in
many instances not feasible.

Nikon DSLRs have this 3D tracking feature in which you select an object to
keep in focus and it refocuses based on its movement inside the frame when it
hits the focus points. And when the subject exits the frame and re-enters,
auto-focus comes back. 51 focus points may seem like feature creep, but as I
said, it's freaking awesome when shooting moving targets like birds.

Even for subjects that are still, like for portraits, you have a lot more
freedom for composition as you just select the person's eyes and then you can
move around while the eyes are kept in focus.

Of course, you can do a good job with a single focus point, but professionals
and amateurs need predictable results, because good moments for taking photos
are rare and you don't want to screw up because your camera wasn't properly
focused.

That's why I can partly see the utility of this technology here, but on the
other hand I can see serious problems with it too, the biggest one being that
for most people quantity of photos trumps quality. Another problem that I can
see is the one I mentioned above; precise and predictable focus is not that
much of a problem with modern cameras. And yet another: mega-pixels and
quality of optics count a lot. Well, maybe once past a certain threshold
there's less ROI from a higher MP, but still, under 6 MP a camera is only
usable for publishing on Facebook.

My consumer DSLR has 4 FPS and I don't worry about focus as I just
continuously shoot like 20-30 pictures in a row to make sure one of them is
good, and usually one of them is.

~~~
gjm11
> under 6MP a camera is only usable for publishing on Facebook.

2560x1600 pixels is 4.1MP. At 1:1 that will completely fill the most monstrous
computer monitors one can get for under about $10k. At 300 pixels per linear
inch (a very reasonable resolution for photos; about the same as the iPhone 4
"retina" display) it will give you a picture with 10" diagonal.

Unless you're making posters or big prints for photographic competitions or
something, even at 4MP raw pixel count is not going to be your problem. It's
completely untrue that below 6MP your pictures will be useless for anything
beyond Facebook.

(Of course pixel count starts to matter more if, e.g., you're taking pictures
of distant birds or distant celebrities with a not-especially-long lens and
you need to crop heavily. Most photographers, most of the time, are not doing
that.)

~~~
finisterre
Don't forget about pixel quality (focusing only on pixel count is
understandable, because the industry has been encouraging it for years).

A 6MP sensor would produce a fantastic 10"-diagonal print if its pixels
weren't affected by noise and it was combined with a high quality lens.
Unfortunately sensors of that resolution are typically small (=noisy pixels)
and placed in point-and-shoot cameras (=cramped, low-quality optics).

~~~
oikjhgbpokj
Or you could buy a top-of-the-range pro DSLR from 5 years ago with this
resolution, with fantastic lenses and bulletproof build, for the price of a
modern entry-level camera.

------
pgbovine
FYI Ren Ng (the founder of this company) won the 2006 ACM Doctoral
Dissertation award for the research that turned into this product:
<http://awards.acm.org/doctoral_dissertation/>

~~~
ja27
I found a couple of his publications:

<http://graphics.stanford.edu/papers/fourierphoto/>
<http://graphics.stanford.edu/papers/lfcamera/>
<http://graphics.stanford.edu/papers/lfmicroscope/>

~~~
dgreensp
I read these papers in grad school in 2007. Apparently he's been working on
commercializing it ever since. This guy is the real deal.

------
ricardobeat
I remember seeing this "news" years ago...

edit: here's the article from 2005
<http://graphics.stanford.edu/papers/lfcamera/>

The company that flourished from this research in 2008:
<http://www.crunchbase.com/company/refocus-imaging>

And another startup already doing this for mobile phones:
<http://dvice.com/archives/2011/02/pelican-imaging.php>

~~~
awarzzkktsyfj
The article is about a company named Lytro. 'Refocus Imaging' is the previous
name of Lytro.

------
schwabacher
One awesome use for this technology is in microscopes! Instead of having to
focus on each slide, slides can be run through much faster, photographed once,
and interesting objects (like cells in a culture) can be found by processing
afterwards.

And even cooler IMO, is that a display panel with proportionately sized
microlenses can be used (after a little image processing) to recreate the
light field for a glasses free 3d display.
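
Both applications rest on the same operation: synthetic refocusing, which is
conceptually just shift-and-add over the sub-aperture views the microlens
array captures. A toy NumPy sketch (the view layout and alpha
parameterization are simplified illustrations of the idea, not Ng's actual
algorithm):

```python
import numpy as np

def refocus(subapertures, alpha):
    # subapertures: {(u, v): (H, W) view seen from lens offset (u, v)}.
    # Shifting each view by alpha*(u, v) before averaging selects which
    # scene plane lands in focus (shift-and-add refocusing).
    acc = None
    for (u, v), view in subapertures.items():
        shifted = np.roll(np.roll(view, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)
```

With alpha = 0 you get the plain average (the nominal focal plane); sweeping
alpha sweeps the focal plane through the scene after capture.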

------
ggchappell
Very nice. But it leaves me wondering about a couple of things:

(1) Given the info captured by the camera, can we, without further human
input, create an image in which _everything_ is in focus?

(2) _What the heck are these people thinking?_ Going into the camera business?
That means that, in order to get my hands on this technology, I am stuck with
whatever zillions of other design decisions they made. One product. No
competition. No multiple companies trying different ways to integrate this
idea into a product. And if this company goes belly-up, then the good ol'
patent laws mean that the tech is just gone for more than a decade. <sigh>
_Please_ license this.

P.S. FTA:

> Once images are captured, they can be posted to Facebook and shared via any
> modern Web browser, including mobile devices such as the iPhone.

Surely there must be a more straightforward, but still understandable to non-
techies, way to say "the result is an ordinary image file".

~~~
wisty
(1) - cameras already do this. Photography 101 - if you use a small aperture,
everything is in focus (minus diffraction and motion blur). Cheap lenses
(like most cellphones) have small maximum apertures, and don't really need to
focus. Expensive lenses (like the iPhone 4 lens) often have large apertures,
which let you take faster photos and artistically blur stuff.

Photographers often don't want everything in focus, as blurring the
background draws your attention to the subject.

~~~
TeMPOraL
> Cheap lenses (like most cellphones) have low apertures, and don't really
> need to focus.

Some companies seem to follow this idea and then, surprise surprise, a
barcode scanning app doesn't really work on my phone because someone decided
not to include AF with the camera :/.

------
EdgarZambrana
Imagine it being combined with technology that tracks your eyes motion,
focusing the part of the image you're looking at automatically.

~~~
dstein
Then take it a few steps further with a 360-degree fisheye lens strapped to
your forehead, and some sort of VR helmet display for playback. Then take the
camera skiing, mountain climbing etc., and you would essentially have a
brain-scanning device like in the movies Strange Days or Brainscan.

------
jianshen
Looking forward, I see interesting applications of this tech in motion
graphics and film. Where 3D movies have failed in forcing the user to focus on
something, I can see this bringing photos and eventually film to life in ways
that let the audience control more of what they want to experience.

On the motion graphics side, I imagine all kinds of creative potential in
compositing photography together with procedural or rendered graphics.

------
shadowpwner
The ability to focus afterwards comes at the cost of image size and quality,
assuming they use a microlens array similar to the study located here:
<http://graphics.stanford.edu/papers/lfcamera/>. However, this is cleverly
marketed towards the social media crowd, which has little use for high
resolution photos.

------
gmatty
I'm not an optics expert, but couldn't this be used to generate 3d depth maps?
By stepping through each field depth you could find the edges of objects (by
how clear they were at each depth) and map those edges onto a mesh.
Effectively, doing what the kinect does but without any of the infrared
projections...
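
One way to sketch this: refocus the light field at several depths, then pick,
per pixel, the depth where local sharpness peaks (classic depth-from-focus).
A toy NumPy version using a Laplacian as the sharpness measure (illustrative
only; real pipelines use better focus measures and sub-pixel refinement):

```python
import numpy as np

def depth_from_focus(stack):
    # stack: (n_depths, H, W) images refocused at n_depths planes.
    # Per pixel, pick the plane with the strongest Laplacian response,
    # i.e. where local contrast peaks -> that plane is in focus there.
    sharpness = []
    for img in stack:
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharpness.append(lap ** 2)
    return np.argmax(np.stack(sharpness), axis=0)  # (H, W) depth indices
```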

~~~
m3koval
You could probably do it, but it's not clear why it would be superior to
stereo vision. Both approaches have the same pitfalls (CPU-intensive, need
texture to work well) and stereo vision is largely a solved problem that works
on commodity hardware.

~~~
ugh
Because you don’t need a second sensor and a second lens?

~~~
m3koval
Stereo vision does need two sensors and two lenses, where a micro-lens
approach would only require one of each. However, the micro-lens camera would
need a much larger and higher resolution sensor to produce a depth image that
has the same resolution as the equivalent stereo camera.

Ignoring the size of the sensor, producing two standard camera lenses will
always be cheaper than producing an array of multiple (i.e. more than two)
micro-lenses. This is doubly true considering that the micro-lens technology
is already encumbered by patents.

Finally, stereo is very well understood and has already been implemented on
the GPU, on FPGAs and in ASICs (commonly known as STOC, Stereo On-Chip). I
would personally love to see a demo of a micro-lens array used for creating a
depth map, but I just don't see any practical advantages over stereo.
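
For reference, the stereo depth relation being appealed to is the pinhole
formula depth = f·B/d, with f the focal length in pixels, B the baseline and
d the disparity. A one-liner sketch (the numbers in the comment are made up
for illustration):

```python
def stereo_depth_mm(focal_px, baseline_mm, disparity_px):
    # Pinhole stereo: depth = f * B / d. Because disparity shrinks as
    # 1/depth, depth resolution degrades quadratically with distance.
    return focal_px * baseline_mm / disparity_px

# e.g. a 700px focal length, 60mm baseline, 10px disparity -> 4.2 m
```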

~~~
ugh
Sure, but you get it for free with this camera. It wouldn’t make sense to
build a second sensor and lens into a light field camera.

This is obviously not the main use case – and we seem to be talking past each
other.

------
sajid
Raytrix already have a plenoptic camera on the market:

<http://raytrix.de/index.php/r11.185.html>

------
DanielBMarkham
Now just give me this in full stereoscopic, hi-res, for my cellphone. With
video.

Of course (hopefully), that's version 4 or 5. This initial roll-out is looking
great! Can't wait to play around with one of the units in the local photo
shop.

Looking at the demos, I wonder what the depth-of-field is? Is it entirely
calculable, or is it just a few feet and then the user sets the target? It
looks like it is tiny, but I'm guessing it's set that way to show off the cool
features of the technology.

------
revorad
This sounds very exciting. To play the devil's advocate however, on most of
the example photos on Lytro's site, you really only need two points of focus -
roughly near and far. Clicking on those two shows you everything there is to
see in a picture.

If someone comes up with software to allow refocusing on two distance points
with existing photos, they could eat Lytro's lunch. Can Picasa do something
like this?

~~~
rythie
If you have a photo that's all in focus (i.e. f/15 aperture or something)
you can throw parts out of focus later using masks in a photo editing tool,
though it's time consuming.

I'd suggest you could get a similar effect with a camera that had two or
three lenses focused at different distances. Fujifilm already released a
camera with two lenses:
<http://www.dpreview.com/news/0809/08092209fujifilm3D.asp> so it's just a
software modification for that.

~~~
tintin
Using a photo where everything is in focus, you only need a depth map to
process the blur. I think something like the Kinect is making this possible
already.
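
A crude sketch of depth-map-driven blur in NumPy: box-blur each pixel with a
radius that grows with its distance from a chosen focal plane. (Real
synthetic refocus uses proper aperture-shaped point-spread functions and
handles occlusion edges; this is just the skeleton of the idea.)

```python
import numpy as np

def synthetic_defocus(image, depth, focus_depth, max_blur=5):
    # Pixels at focus_depth stay sharp; blur radius grows with the
    # pixel's distance from the focal plane, capped at max_blur.
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            r = int(min(max_blur, round(abs(depth[y, x] - focus_depth))))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()  # box blur of radius r
    return out
```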

------
erikpukinskis
Once it can capture high speed video of the light field, so that you can
actually change the timing and exposure of each shot, as well as the focus...
then we'll really be somewhere. Then you can just aim the camera, click the
button some time shortly after something cool happens, and go back and get the
perfect shot. Hell, capture a 360 degree panorama and you can even aim after
the fact!

------
sp332
This is much more useful than a simple depth map, since it works with
translucent and amorphous things like steam, and other things that are hard to
model with meshes like motes of dust. Also, if you have a shiny object,
focusing at one depth might show the surface of the object in focus, but
focusing at a different depth would show the reflection in focus.

------
humanfromearth
Doesn't Magic Lantern already do this? I mean the Focus Bracketing shoots 3
(or maybe more) pictures as fast as the camera is able to do it at different
focal distances. You just make sure the depth of field is wide enough to cover
the distances between those points and you should have a similar effect.

------
ugh
It's possible to buy a light field camera right now, for example from a German
company named Raytrix (<http://raytrix.de/index.php/home.181.html>). I don't
know whether they are the only example or whether there are other companies.

They don't name a price on their website (write them to find out) and, looking
at the applications they are naming on their website
(<http://www.raytrix.de/index.php/applications.html>), they certainly do not
target consumers.

Here are their camera models if you are interested:
<http://www.raytrix.de/index.php/models.html>

------
clc
This is a very interesting concept, I would be doubly interested to see this
technology used for video in camcorders. However, I'm curious to see if they
have the resources available to go toe to toe with Nikon, Canon, and Olympus.
The camera industry is so competitive... and the life cycles on digital
cameras are so quick nowadays. They may find it difficult to keep up.

~~~
thomasgerbe
I don't think they need to keep up with the middle-to-high end yet.

Given the choice of glass and bodies, I think most professionals would still
stick with Nikon and Canon (especially for print).

But for entry-level, I could see it being a killer because of the ease.

------
dennisgorelik
These guys clearly target investors' money, not consumers. 1) Lots of hype
long before the product is released. 2) Ignoring market trends (consumers
prefer smartphone integration over picture quality). 3) Instead of focusing
on refining and selling the technology, they want to reinvent the wheel and
produce their own camera.

I'd say that investors would lose lots of money on that venture.

~~~
rooshdi
_I'd say that investors would lose lots of money on that venture._

I'd bet against that. They definitely solve a problem people have with taking
focused photos and I see this technology becoming even more popular as it
becomes integrated into videos. Just the other day I tried to take a quick
photo of my niece and cat playing, and trust me when I tell you it was a real
hassle trying to keep them in focus. This issue is just one of the many that
can be solved with this technology. Even if there isn't strong demand for
their own camera, they still should be able to license the technology to
camera makers down the road and eventually integrate it into the smartphone
market.

~~~
dennisgorelik
I'm not saying that technology would be useless. It might be at least somewhat
useful.

What I'm saying is that this particular business approach would fail (too
much hype, not focusing on the team's advantages, ignoring customers'
preferences).

Investors would over-invest, but the business would not get enough revenue to
pay them back.

~~~
rooshdi
_Investors would over-invest, but business would not get enough revenue to pay
them back._

We shall see, but given that their product and technology improve a previous
experience in such an obvious way, I have much less of a problem seeing this
company receive a lot of hype and funding than, say, Color.

------
ralfd
See also the discussion three weeks ago on hn:
<http://news.ycombinator.com/item?id=2596377>

There is also an (unrelated) iPhone App by the inventor for playing with depth
of field: <http://sites.google.com/site/marclevoy/>

------
mikecane
Foveon was supposed to revolutionize digital photography too. Hell, there was
even a book written about it.

------
romansanchez
Props to the innovation, but in terms of reaching the consumer market I doubt
the appeal will suffice for widespread reach. Even if it did, a licensing
deal would be more appropriate, so they can keep investing in innovation,
which is what they're good at, rather than distribution.

------
mortenjorck
How closely is Lytro's method related to Adobe's Magic Lens demonstrated last
summer? <http://www.youtube.com/watch?v=-EI75wPL0nU>

------
DLarsen
As a geek, I think this is totally awesome. As a casual photographer, I'm less
excited. I've pretty much got the hang of focusing my shots as I'm taking
them. Why focus later what I can focus now?

~~~
thomasgerbe
I'm a casual photographer too.

The advantage I see in the future is that you are 'guaranteed' a potentially
sharp picture.

Even when I do portraits with a wide aperture (1.2/1.4), there are times when
I miss the focus on a tiny detail that I wish were sharper. And since I
prefer doing candid poses, redoing a situation just isn't that desirable.

For sports or wildlife, I imagine it can be hard to focus too, sometimes just
missing a shot of a bird because of a split second.

It does make me wonder how the motion blur on this would work.

------
epo
Another Segway? Let's see if any reviewers ever get their hands on one.

------
benjoffe
Combine this with eye tracking (to the level that my focal depth can be
detected) and an automatic lens over my monitor and you'd have a pretty
immersive picture.

------
SocratesV
Didn't Adobe showcase the same technology in September 2009?

Of course they haven't delivered a consumer product with it yet... But neither
has this company.

Let's wait and see...

------
MasterScrat
I think some kind of Ken Burns effect with transitions between the different
planes would make a good screensaver.

------
hdeo
There may be an opportunity in 'doing something' with all the data
collected...

------
jaekwon
can this light field tech be used in reverse to create 3d holographic images?

