So it's a TWO-trick pony. I was going to order one. Severely underwhelmed after seeing the first images. The Flash player requirement to view the multifocal pictures makes them next to useless (can't share on social networks, can't view on iPad, etc.).
The focus shift feature is nice, but after seeing 50, it seems nothing more than a gimmick, and one that gets darn boring darn fast. This latest feature is cute as well, perhaps more useful even, but again not earth-shattering.
It's just a matter of time before a Samsung simulates the same thing via 'focus bracketing': storing lots of smaller pictures (they'd probably still be bigger than the Lytro "resolution"), applying a little "focus stacking" algorithm like Photoshop's to determine "focal plane" hotspots, and loading the focal plane based on the selected hotspot. This little video shows a "fake Lytro" image created by taking 10 DSLR pix: http://www.youtube.com/watch?v=KozKofC01_Q
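To make the idea concrete, here's a rough sketch of that focus-bracketing trick, assuming you already have a stack of frames shot at different focus distances (all function names here are made up, and this is a toy Laplacian sharpness measure, not what the linked video actually uses):

```python
import numpy as np

def sharpness(img):
    # Per-pixel sharpness via a discrete Laplacian: in-focus regions
    # carry more high-frequency detail, so their edge energy is higher.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focal_index_map(stack):
    # For each pixel, the index of the frame in which it is sharpest --
    # effectively a coarse depth map derived from the focus bracket.
    return np.argmax([sharpness(f) for f in stack], axis=0)

def frame_for_tap(index_map, y, x, r=2):
    # "Hotspot" lookup: majority-vote the sharpest frame in a small
    # window around the tapped pixel, then display that frame.
    patch = index_map[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    return int(np.bincount(patch.ravel()).argmax())
```

Tapping a pixel then just swaps in the bracketed frame where that region was sharpest, which is roughly the "load the focal plane based on the selected hotspot" step.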
This latest feature could possibly be "simulated" (equalled?) by slight physical movement (circular/conical?) of the sensor in the focal plane.
Man, tough crowd. I have a Lytro, and I love it. I'm quite a fan of the technology. It's not perfect, and its uses are (for now) somewhat limited, but I feel your criticism of it may be a little overly harsh.
First off, the current Lytro model does not, and will not, replace any traditional camera I own. I like the technology, and it has its uses even beyond demoing the refocusing abilities, but it's definitely a supplement to my camera arsenal and not a replacement for anything.
You're right that you can "simulate" focus-stacking with a traditional camera, as long as the object you're shooting is sitting still and there's no motion of the camera or in the frame at all. Any motion at all and your simulated focus stacking doesn't hold up well.
Likewise, you can get a perspective shift from multiple images by slight movement with a traditional camera, assuming again that there's no actual motion and that no observable length of time transpires between the successive offset shots.
Since the Lytro doesn't have these time limitations, and since focus can be done after the fact, I've been able to get a couple of shots of events that I could never get with a traditional camera, just based on the ability to point the camera and snap the shutter without having to even wait for autofocus to happen.
The hardware in the camera to me feels far ahead of the software that drives it. One of the biggest criticisms I have of the Lytro is that the camera captures a vast amount of light field information, and through The Appropriate Application of Mathematics, you should really be able to do a lot more with the captured pictures than you can now. I remain hopeful that as the camera software progresses, more of these possibilities will come to light, and so far, other than the time involved to develop the software, I haven't been disappointed - there have been pretty regular software updates, and features are being incrementally added.
The Lytro isn't perfect - I'll grant you that - but it's a radical change in technology, and I think that over time the technology can only get better. The price for the current model is a little higher than "what the heck, I'll try one out" typically justifies.
One thing I use my Lytro for - more than the refocusability - is macro photography. The Lytro has an absolutely insane macro focus capability. It can focus right down to the front of the exterior lens, and it does a great job at shooting small objects at short range - better than any SLR macro lens I have, and I have a couple of "not cheap" ones to compare it to.
Very good points. I especially like the macro aspect.
To qualify my remarks: I'm not proposing that anyone take several images manually, or that a photographer physically move the camera around, as an alternative.
I'm suggesting that competitors (mainstream cameras) could take multiple pictures in rapid succession INTERNALLY (very feasible at lower resolutions). The Canon 7D can take video at 720p @ 60 FPS. Higher framerates seem likely in the future. Once one reaches 120 FPS, slight variations in the focus whilst shooting 1s of video would result in acceptable shutter speeds (1/120th) for most moving objects and (theoretically) a stack of 120 pictures with different focal points that could be "merged" into a "Lytro-like" image through in-camera focus stacking and some "hotspot" mechanism.
Similarly (but less likely), a quick physical circular movement of the sensor, or of a lens element (just imagine a "Lytro lens" for your Canon), would allow us to emulate (match?) the other effect.
Perhaps no movement is needed at all if the Canon Lytro lens had an internal "microlens" array that subdivided the existing sensor, or used the "pattern filter" technique.
My point: Lytro is nice, but has limited usability (for me at least) and I see it merely as a stepping stone towards inclusion of similar features in the mainstream. As a company they will have a very tough time defending whatever patents they might have. Theory and technology go back many many years.
To my knowledge, accurately building and calibrating a stereoscopic camera is a significant pain. The idea that you could just wave your phone around and extrapolate a 3D scene is easily 10+ years out, because people make terrible tripods, and the requirements for planarity, etc. to do stereoscopy well are pretty strict. That said, there are 3D phones with parallax screens and stereo cameras; they're big and bulky and not that great.
The cool thing I saw about Lytro isn't their Flash-based player. It's the idea of giving you another degree of freedom after a picture is taken. Like colour correction, you can do focus correction, DOF correction, etc. I would still export one flat image as a result, but gaining that flexibility without carrying around a monopod to take 10 bracketed images is awesome to me.
See my response above. I didn't mean to suggest we should all start waving our cameras. Something could "wave" internally, or this waving isn't even needed if Canon came out with a microlens adaptor.
By the way, there are several apps already which build 3D scenes from two separate pictures. Some can even generate 3D models. HDR used to require manual exposure bracketing in phones too. Now technology has caught up.
Correct me if I'm wrong, but the Lytro stereoscopic field looks fixed at a relatively short distance. At the moment it seems to be a very subtle parallax effect, one that might be mimicked by an ever so slight movement of a camera sensor.
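For what it's worth, once you have a per-pixel depth map, a subtle parallax like that can also be faked purely in software by shifting each pixel sideways in proportion to its depth. A toy sketch of that depth-image-based re-projection (my own made-up function, not Lytro's method; occlusions and hole-filling are ignored):

```python
import numpy as np

def parallax_shift(image, depth_map, dx):
    # Re-project the image for a slightly shifted viewpoint: each pixel
    # moves horizontally by dx scaled by its depth value. A crude
    # depth-image-based rendering; gaps are simply left black.
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(dx * depth_map[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out
```

The visible seams this leaves around depth discontinuities are exactly the kind of edge artifact you'd expect from a depth-map-driven effect.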
I'd love to have that freedom... combined with everything else that I've come to expect from a DSLR. Lytro's cameras are nice novelty gadgets but it's not clear to me if they're trying to move in the direction of working with a sensor/DSLR maker to enable this. I think that's what they should focus on...
I gave this some thought below, but unfortunately fitting a new, more expensive sensor is probably not going to happen soon. The current 1.8-megapixel part is a cheap commodity sensor, and it's still probably costing them buckets to have them fabbed with their custom microlenses. Something like a Kodak 8MP sensor would probably be 20x as expensive, and talking them into making you a custom SKU would be much harder.
Edit: after a bit of digging, it looks like they make the microlens array themselves and buy the sensors without one. This obviously makes acquisition easier, but as the pixel pitch decreases (by an order of magnitude, to compete with a modern SLR), the complexities of building and attaching that array increase significantly.
I think Lytro lost some of the magic for me when I realized it was just utilizing a depth map to generate an iris blur. Looks like the same deal for the 3d effect, around edges you can see reconstruction artifacts. I'm impressed by the fact one lens can capture depth, but with two lenses at a narrow aperture they could create the same effect in higher resolution and quality.
My understanding is that the camera's file format is essentially a stack of JPEGs and a depth map. That's how the camera can do its own refocusing without needing a huge processor. It's not generating the image from the light field; it's combining regions of a series of JPEG images.
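If that's right, the viewer side is almost trivial: a tap reads the depth map and swaps in the frame pre-rendered for that depth, and the "iris blur" mentioned upthread just widens the blur for pixels far from the chosen focal depth. A guess at the mechanism (not Lytro's actual code; names and the box-blur kernel are my own):

```python
import numpy as np

def refocus_on_tap(jpeg_stack, depth_map, y, x):
    # The viewer needs no light-field math: depth_map[y, x] indexes the
    # pre-rendered frame that is in focus at the tapped pixel's depth.
    return jpeg_stack[int(depth_map[y, x])]

def iris_blur(image, depth_map, focus_depth, strength=1.0):
    # Depth-driven synthetic blur: pixels whose depth is far from the
    # chosen focal depth get averaged over a wider box window.
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(h):
        for x in range(w):
            r = int(strength * abs(depth_map[y, x] - focus_depth))
            if r > 0:
                out[y, x] = image[max(0, y - r):y + r + 1,
                                  max(0, x - r):x + r + 1].mean()
    return out
```

Blurring per-pixel from a depth map like this is also exactly where the reconstruction artifacts around edges would come from.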
I was really disappointed when I found that out. I thought the software on the computer was to process the raw data, but I guess not.
I still might be interested, but the form factor is odd. Worse, since they decided to go with that shape, they had to make the screen very, very tiny.
It's an interesting product, considering it's the first of its kind on the consumer market. I hope there is a version 2, because I think it could be very interesting. If it could shoot video, that could be pretty fascinating.
Heck, tell other people how to use the format. A series of focal planes and a depth map? Seems like hackers might be able to come up with some pretty fun stuff.
The camera's raw format is indeed an entire light field (I've used nrp's tools, they're great). The camera import software creates a layered JPEG out of the raw data for uploading and sharing, since the size and the amount of math involved in refocusing the raw image on the fly is significant.