The focus shift feature is nice, but after seeing 50 of them it starts to feel like nothing more than a gimmick, and one that gets darn boring darn fast. This latest feature is cute as well, perhaps even more useful, but again not earth-shattering.
It's just a matter of time before a Samsung simulates the same thing via "focus bracketing": store lots of smaller pictures (they'd probably still be bigger than the Lytro "resolution"), apply a little "focus stacking" algorithm, Photoshop-style, to determine "focal plane" hotspots, and load the focal plane based on the selected hotspot. This little video shows a "fake Lytro" image created from 10 DSLR pix: http://www.youtube.com/watch?v=KozKofC01_Q
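To sketch the "hotspot" idea in code (purely illustrative; the tile size and the Laplacian-variance sharpness metric are my assumptions, not anything a camera maker or the video's author actually uses):

```python
import numpy as np

def sharpness(img):
    """Variance of a simple Laplacian response: high where the tile is in focus."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def best_frame_per_tile(stack, tile=16):
    """For each tile of the image, find the bracketed frame that is sharpest there.

    stack: list of equally sized grayscale 2-D arrays (the focus bracket).
    Returns a small 2-D array of frame indices -- the "hotspot" map a viewer
    could use to decide which frame to show when a region is clicked.
    """
    h, w = stack[0].shape
    ny, nx = h // tile, w // tile
    index_map = np.zeros((ny, nx), dtype=int)
    for ty in range(ny):
        for tx in range(nx):
            scores = [sharpness(f[ty*tile:(ty+1)*tile, tx*tile:(tx+1)*tile])
                      for f in stack]
            index_map[ty, tx] = int(np.argmax(scores))
    return index_map
```

Click a region, look up its tile in the index map, display that frame: a poor man's refocus from ordinary bracketed shots.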
This latest feature could possibly be "simulated" (equalled?) by slight physical movement (circular/conical?) of the sensor in the focal plane.
The light field technology is rather ancient by Internet standards; e.g. see this lovely video on the Stanford light field camera from 2006 (!), which also explains both features rather well: http://www.youtube.com/watch?NR=1&v=9H7yx31yslM
Finally, if you're still interested in experimenting with this, check out a $10 solution: http://web.media.mit.edu/~raskar//Mask/
So, where does this leave Lytro?
The cool thing I saw about Lytro isn't their Flash-based player. It's the idea of giving you another degree of freedom after a picture is taken: like colour correction, you can do focus correction, DOF correction, etc. I would still export one flat image as the result, but gaining that flexibility without carrying around a monopod to take 10 bracketed images is awesome to me.
By the way, there are several apps already which build 3D scenes from two separate pictures. Some can even generate 3D models. HDR used to require manual exposure bracketing in phones too. Now technology has caught up.
Correct me if I'm wrong, but the Lytro stereoscopic field looks fixed at a relatively short distance. At the moment it seems to be a very subtle parallax effect, one that might be mimicked by an ever so slight movement of a camera sensor.
Edit: after a bit of digging, it looks like they make the microlens array themselves and buy the sensors without one. This obviously makes acquisition easier, but as the pixel pitch decreases (by an order of magnitude, to compete with a modern SLR) the complexities of building and attaching that array increase significantly.
First off, the current Lytro model does not, and will not, replace any traditional camera I own. I like the technology, and it has its uses even beyond demoing the refocusing abilities, but it's definitely a supplement to my camera arsenal and not a replacement for anything.
You're right that you can "simulate" focus stacking with a traditional camera, as long as the object you're shooting sits still and there's no motion of the camera or anything in the frame. Any movement at all and your simulated focus stack doesn't hold up well.
Likewise, you can get a perspective shift from multiple images by slightly moving a traditional camera, assuming again that nothing in the scene moves and that no observable length of time passes between the successive offset shots.
Since the Lytro doesn't have these time limitations, and since focus can be done after the fact, I've been able to get a couple of shots of events that I could never get with a traditional camera, just based on the ability to point the camera and snap the shutter without having to even wait for autofocus to happen.
The hardware in the camera feels, to me, far ahead of the software that drives it. One of my biggest criticisms of the Lytro is that the camera captures a vast amount of light field information, and through The Appropriate Application of Mathematics you should really be able to do a lot more with the captured pictures than you can now. I remain hopeful that as the camera software progresses, more of these possibilities will come to light, and so far, other than the time involved to develop the software, I haven't been disappointed - there have been pretty regular software updates, and features are being added incrementally.
The Lytro isn't perfect - I'll grant you that - but it's a radical change in technology, and I think that over time the technology can only get better. The price for the current model is a little higher than "what the heck, I'll try one out" typically justifies.
One thing I use my camera for - more than the refocusability - is macro photography. The Lytro has an absolutely insane macro focus capability. It can focus right down to the front of the exterior lens, and it does a great job at shooting small objects at short range - better than any SLR macro lens I have, and I have a couple of "not cheap" ones to compare it to.
To qualify my remarks: I would not propose that anyone take several images manually, or that a photographer physically move the camera around, as an alternative.
I'm suggesting that competitors (mainstream cameras) could take multiple pictures in rapid succession INTERNALLY (very feasible at lower resolutions). The Canon 7D can shoot video at 720p @ 60 FPS, and higher frame rates seem likely in the future. Once one reaches 120 FPS, slight variations in focus while shooting 1s of video would yield acceptable shutter speeds (1/120th) for most moving objects and (theoretically) a stack of 120 pictures with different focal points that could be "merged" into a "Lytro-like" image through in-camera focus stacking and some "hotspot" mechanism.
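The back-of-envelope arithmetic for that (120 FPS is hypothetical; the 7D tops out at 60):

```python
# Rough numbers for the in-camera focus-sweep idea (hypothetical frame rate).
fps = 120                  # assumed burst/video frame rate
sweep_seconds = 1.0        # one focus sweep from near to far
shutter = 1.0 / fps        # per-frame exposure: 1/120 s
frames = int(fps * sweep_seconds)   # focal slices captured during the sweep
print(f"{frames} focal slices at 1/{fps} s each")
```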
Similarly (though less likely), combining this with a quick physical circular movement of the sensor, or of a lens element (just imagine a "Lytro lens" for your Canon), would allow us to emulate (match?) the other effect.
Perhaps no movement is needed at all if this hypothetical Canon Lytro lens had an internal "microlens" array that subdivided the existing sensor, or used the "pattern filter" technique.
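Subdividing the sensor that way costs spatial resolution, since each microlens spends its pixels on directional samples instead. Illustrative arithmetic only (the 10x10 directional-sample figure is made up; it is not Lytro's actual geometry):

```python
# Hypothetical numbers: a microlens array trades pixels for directional samples.
sensor_mp = 11.0              # raw samples behind the array ("megarays", roughly)
samples_per_lens = 10 * 10    # assumed 10x10 directional samples per microlens
spatial_mp = sensor_mp / samples_per_lens
print(f"~{spatial_mp:.2f} MP of spatial resolution left over")
```

Which is one intuition for why light-field output images are so much smaller than the sensor's pixel count suggests.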
My point: Lytro is nice, but has limited usability (for me at least) and I see it merely as a stepping stone towards inclusion of similar features in the mainstream. As a company they will have a very tough time defending whatever patents they might have. Theory and technology go back many many years.
Also look at the site by the author of said video:
I bought one not for what the current camera could do at the time, but in the hopes that they'd be successful enough to come up with the Lytro 2 which did more.
I was really disappointed when I found that out. I thought the software on the computer was to process the raw data, but I guess not.
I still might be interested, but the form factor is odd. Worse, since they decided to go with that shape, they had to make the screen very, very tiny.
It's an interesting product, considering it's the first of its kind on the consumer market. I hope there is a version 2, because I think it could be very interesting. If it could shoot video, that could be pretty fascinating.
Heck, tell other people how to use the format. A series of focal planes and a depth map? Seems like hackers might be able to come up with some pretty fun stuff.
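As a toy sketch of what hackers could do with such a format (the depth-map layout here is my assumption for illustration, not Lytro's actual file format):

```python
import numpy as np

def plane_for_click(depth_map, n_planes, x, y):
    """Map a click to the focal-plane image to display.

    depth_map: 2-D float array in [0, 1], 0 = near, 1 = far (assumed layout;
    a real light-field file format may encode depth differently).
    n_planes: number of focal-plane JPEGs in the stack.
    Returns the index of the plane whose depth best matches the clicked pixel.
    """
    d = float(depth_map[y, x])
    return min(int(d * n_planes), n_planes - 1)
```

With just that lookup, a web page could swap in the matching JPEG on click and you'd have a refocusable picture from plain files.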
The image is stored as a raw Bayer array on the camera. The conversion to stacked JPEG occurs on the computer. The camera has a pretty beefy DSP on board to do live refocusing on the preview.
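For anyone curious what "raw Bayer array" means in practice, here's a crude sketch of one demosaic step (nearest-neighbour at half resolution; real converters interpolate every channel to full resolution):

```python
import numpy as np

def demosaic_rggb(raw):
    """Crude nearest-neighbour demosaic of an RGGB Bayer mosaic.

    raw: 2-D array with even dimensions, repeating 2x2 pattern:
        R G
        G B
    Returns an (h/2, w/2, 3) RGB image: each 2x2 cell becomes one pixel,
    averaging the two green samples. Just shows the data layout.
    """
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```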