- Greatly, greatly reduced image resolution. Great big dedicated-camera sized lens and image sensor, cellphone-camera sized pictures. 1680×1050, at most. (1.76MP)
- Color aberration. The microlenses have to be small, of course, so they're going to be made of single physical elements, rather than doublets.
- Various amusing aliasing problems. (note the fine horizontal lines on some of the demo shots)
- Low FPS. Each image requires lots of processing, which means the CPU will have to chew on data for a while before you can take another image.
- Proprietary toolchain for the dynamic images. Sure, cameras all have their particular RAW sensor formats, but this is also going to have its own output image format. No looking at thumbnails in file browsers. Photoshop won't have any idea what to do with it. Can't print it, of course.
- You can just produce a composite image that's sharp all over, but why not use a conventional camera with stopped-down lens, then?
- It's going to be really thrillingly expensive. This is a given, of course, with new camera technology.
You can just produce a composite image that's sharp all over… – but what fun is that? I'm having a grand time using SynthCam on my iPhone to mimic a large-lens camera and regain distance-dependent focus, even though the actual lens is tiny.
It's going to be really thrillingly expensive. – It doesn't need to be. Tiny micro lenses on the image sensor might be much cheaper than large chunks of precision glass. Think "inkjet-like print head squirting one of the resins used for plastic eyeglasses into etched depressions relying on surface tension to form the lens". Just guessing there. Maybe placing precision sized beads in each depression and then heating to reflow into a surface tension defined lens would work better.
You can just produce a composite image that's sharp
all over… – but what fun is that?
Getting a picture that has everything in focus and is tack sharp can be seriously challenging.
To tell you the truth, while these photos with adjustable focus seem cool, focus is not my pain - what I want is to be able to take great photos (the kind that 35mm cameras can do) for reasonable prices, preferably with something that fits in my pocket.
And to expand on the point above - focus is not painful when the camera has enough focus points. I played with a Nikon D3s that has a whopping 51 autofocus points; let me tell you, it's freaking awesome, as it can track your subject as it moves. The problem is that consumer-level DX DSLRs only have like 11 focus points, which is still cool, but point&shoots suck badly in this area, most of them focusing only in the center of the image.
Another problem with this project that I can see - people don't like playing with their images on the computer. When you take 500 photos in a single day, and another 600 photos the next day (like when going on a trip), it's really painful to carefully adjust each image, not to mention that the RAW formats are huge and seriously cut into the number of photos you can take... yeah, making adjustments is great, but I prefer taking more photos; that's why I shoot in JPG and don't regret it.
Even with DSLRs you usually only need one focus point. You can focus at the center and re-frame. It is very simple to do, and much simpler than choosing a focus point (and much safer than trusting the camera to automatically choose a focus point for you). Multiple focus points can be very useful in certain rare circumstances: when shooting something really fast off-center without being able to pre-focus, or when your camera is bolted down on a stand. Even then I cannot possibly imagine why anyone would need 51 points. This is obvious feature creep.
But yeah, I am really not sure who the intended market for this camera is. Focusing is just not a pain point, in my opinion. This camera could be used by artists and professional photographers to play around with the depth of field to get a great artistic shot, without making their subjects wait. But with the micro-lens design, will it have enough image quality for professionals? I guess we will see.
If your camera offers the option, choose a focusing composition that will put your subject as close as possible to its final position in the picture and use the focus point closest to that position.
Nikon DSLRs have this 3D tracking feature in which you select an object to keep in focus and the camera refocuses based on its movement inside the frame as it hits the focus points. And when the subject exits the frame and re-enters, autofocus comes back. 51 focus points may seem like feature creep, but as I said, it's freaking awesome when shooting moving targets like birds.
Even for subjects that are still, like for portraits, you have a lot more freedom for composition as you just select the person's eyes and then you can move around while the eyes are kept in focus.
Of course, you can do a good job with a single focus point, but professionals and amateurs need predictable results, because good moments for taking photos are rare and you don't want to screw up because your camera wasn't properly focused.
That's why I can partly see the utility of this technology, but on the other hand I can see serious problems with it too, the biggest one being that for most people quantity of photos trumps quality. Another problem is the one I mentioned above: precise and predictable focus is not that much of a problem with modern cameras. And yet another: megapixels and quality of optics count a lot. Well, maybe once past a certain threshold there's less ROI from a higher MP, but still, under 6 MP a camera is only usable for publishing on Facebook.
My consumer DSLR has 4 FPS and I don't worry about focus, as I just continuously shoot like 20-30 pictures in a row to make sure one of them is good, and usually one of them is.
2560x1600 pixels is 4.1MP. At 1:1 that will completely fill the most monstrous computer monitors one can get for under about $10k. At 300 pixels per linear inch (a very reasonable resolution for photos; about the same as the iPhone 4 "retina" display) it will give you a picture with 10" diagonal.
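That arithmetic is easy to check (a quick Python sketch of the numbers above, nothing more):

```python
# Verify the claims: 2560x1600 in megapixels, and the print diagonal at 300 ppi.
import math

w, h = 2560, 1600
megapixels = w * h / 1e6              # 4.096 -> rounds to "4.1MP"
diag_inches = math.hypot(w, h) / 300  # diagonal in pixels / (pixels per inch)

print(f"{megapixels:.1f} MP")                      # 4.1 MP
print(f'{diag_inches:.1f}" diagonal at 300 ppi')   # 10.1" diagonal
```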
Unless you're making posters or big prints for photographic competitions or something, even at 4MP raw pixel count is not going to be your problem. It's completely untrue that below 6MP your pictures will be useless for anything beyond Facebook.
(Of course pixel count starts to matter more if, e.g., you're taking pictures of distant birds or distant celebrities with a not-especially-long lens and you need to crop heavily. Most photographers, most of the time, are not doing that.)
A 6MP sensor would produce a fantastic 10"-diagonal print if its pixels weren't affected by noise and it was combined with a high quality lens. Unfortunately sensors of that resolution are typically small (=noisy pixels) and placed in point-and-shoot cameras (=cramped, low-quality optics).
...until they have a great shot ruined by improper focus. Or worse, an entire afternoon of great shots at a memorable event ruined by the autofocus switch set to "off". Us geeks may notice in time, but most consumers/users won't.
Not necessarily. If you're using a plenoptic camera purely for producing 2d images, you can reduce pixel size further than you would normally. This is because each pixel in the output image is the average of many primary pixels in the sensor. The primary pixels can be smaller and noisier while still getting a smooth output image. In rough terms the resolution loss is a cosine term over the pixel areas involved, so it's modest.
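The noise-averaging claim is easy to illustrate numerically. A toy NumPy model (not Lytro's actual pipeline; the numbers are arbitrary) showing that averaging N noisy primary pixels per output pixel cuts the noise by roughly sqrt(N):

```python
# Toy model: each output pixel averages n_primary small, noisy sensor pixels,
# so output noise falls by ~sqrt(n_primary) relative to a single primary pixel.
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
sigma = 10.0        # per-primary-pixel noise (arbitrary units)
n_primary = 16      # primary pixels averaged into one output pixel

primary = true_value + rng.normal(0, sigma, size=(100_000, n_primary))
output = primary.mean(axis=1)   # one output pixel per row

print(f"primary-pixel noise: {primary.std():.2f}")   # ~10.0
print(f"output-pixel noise:  {output.std():.2f}")    # ~2.5, i.e. sigma/sqrt(16)
```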
The technology also allows a novel tradeoff between resolution and sharpness, including the possibility of realizing resolutions beyond the diffraction limit.
The cameras used in research projects suffered reduced resolution because they were Frankenstein modifications of an off-the-shelf DSLR. A camera designed to be plenoptic from the beginning has different constraints.
"color aberration and aliasing"
I believe these can be addressed in the processing stage. I don't see anything about either of these issues that's insurmountable.
"Low FPS. Each image requires lots of processing"
The processing can be done at any time after the fact. The primary image capture is just that, a raw image, same as any other camera.
The processing is also relatively simple convolution which can be done via FFT. Overall it's comparable to common video and image compression algorithms, not something new that requires a supercomputer in your camera.
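A minimal pure-NumPy sketch of FFT-based convolution of that kind (toy sizes and a box-blur kernel chosen for illustration; this is the general technique, not Lytro's code):

```python
# Linear 2D convolution via zero-padded FFTs -- O(N log N), the same kind of
# transform machinery common image/video codecs rely on.
import numpy as np

def fft_convolve2d(image, kernel):
    # Zero-pad both inputs to the full linear-convolution size.
    s0 = image.shape[0] + kernel.shape[0] - 1
    s1 = image.shape[1] + kernel.shape[1] - 1
    spec = np.fft.rfft2(image, (s0, s1)) * np.fft.rfft2(kernel, (s0, s1))
    return np.fft.irfft2(spec, (s0, s1))

rng = np.random.default_rng(1)
image = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0       # simple box-blur stand-in

fast = fft_convolve2d(image, kernel)

# Sanity check against a direct shift-and-add convolution.
direct = np.zeros_like(fast)
for m in range(5):
    for n in range(5):
        direct[m:m + 64, n:n + 64] += kernel[m, n] * image
print(np.allclose(fast, direct))      # True
```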
There's no reason we can't standardize raw plenoptic images. Also, once the plenoptic raw has been processed it can be saved as any available 2d or 3d format, from png to psd to jpeg. These files will be fine in Photoshop or on the web.
"You can just produce a composite image that's sharp all over, but why not use a conventional camera with stopped-down lens, then"
This allows depth of field independent of aperture size, which is a new capability. This will greatly aid low light photography, particularly dim landscapes.
Secondly, the images contain depth information, with all that implies. This is a very different tool from a stopped-down lens. It's capable of realizing images that are physically impossible with a traditional lens and sensor.
Citation needed. AFAIK each piece of technology involved is well understood and readily manufactured.
I suppose that's the advantage. I could see myself changing the settings so that you preview the image right after the shot (fashion, portrait) or have no preview at all and keep processing power focused on shooting as many photos as possible (sports, wildlife, candid).
I'm gonna be all over this. You know, for the kids.
"The picture resolution, he added, was indistinguishable from that of his other point-and-shoots, a Canon and a Nikon. Eliminating any loss of resolution in a camera like Lytro’s, which is capturing light data from many angles, is a real advance, said Shree Nayar, a professor at Columbia University and an expert in computer vision."
This will improve as processor speed improves, eventually making it a nonissue.
"- Proprietary toolchain for the dynamic images. Sure, cameras all have their particular RAW sensor formats, but this is also going to have its own output image format. No looking at thumbnails in file browsers. Photoshop won't have any idea what to do with it. Can't print it, of course."
Sure, the raw image needs to be processed, but you can export the focused product and do what you like, can't you? I can envision a raw pic and a "best guess on what you'd like to be in focus" JPEG along with it. You can then improve on the JPEG by using the raw image to change what's in focus and recreate the JPEG.
Sure, the raw image needs to be processed but you can export the
focused product and do what you like, can't you?
This will improve as processor speed improves, eventually making it a nonissue.
With 5+ megapixel cell phone cameras being common these days, what resolution do you imply in that sentence?
So, I don't know. 1280×720? 1680×1050 at most? They don't quote a megapixel number on their site, of course.
edit: here's the article from 2005 http://graphics.stanford.edu/papers/lfcamera/
The company that flourished from this research in 2008: http://www.crunchbase.com/company/refocus-imaging
And another startup already doing this for mobile phones: http://dvice.com/archives/2011/02/pelican-imaging.php
And even cooler IMO, is that a display panel with proportionately sized microlenses can be used (after a little image processing) to recreate the light field for a glasses free 3d display.
(1) Given the info captured by the camera, can we, without further human input, create an image in which everything is in focus?
(2) What the heck are these people thinking? Going into the camera business? That means that, in order to get my hands on this technology, I am stuck with whatever zillions of other design decisions they made. One product. No competition. No multiple companies trying different ways to integrate this idea into a product. And if this company goes belly-up, then the good ol' patent laws mean that the tech is just gone for more than a decade. <sigh> Please license this.
> Once images are captured, they can be posted to Facebook and shared via any modern Web browser, including mobile devices such as the iPhone.
Surely there must be a more straightforward, but still understandable to non-techies, way to say "the result is an ordinary image file".
This is seriously awesome.
B. This tech isn't going anywhere. If the camera succeeds they might license it. If the camera fails they will surely try to license it. Note also that the technique is apparently not wholly new so the key patents are already running down.
And your point about all those design choices that go into the camera cuts both ways. If they license this tech to a consumer electronics company that flubs the execution they will lose money, as the lousy execution will reflect badly on the tech and will prevent it from getting popular sooner. (The sooner every camera buyer wants this tech, the more profits there will be before the patents expire.) In a world consisting mainly of (a) Apple and (b) hardware companies that cannot design software to save their lives, keeping control of your own fate seems wise. The popularity of this technique among the general public will presumably depend crucially on the UI, both when taking the photo and when displaying it. Better to screw that up yourself than outsource the screwing up to someone else. ;)
And for what it's worth, the just-one-design thing has worked out pretty well for Apple.
Photographers don't like to do this, as blurring the background draws your attention to the subject.
Some companies seem to follow this idea and then, surprise surprise, barcode scanning app doesn't really work on my phone because someone decided not to install AF with the camera :/.
On the motion graphics side, I imagine all kinds of creative potential in compositing photography together with procedural or rendered graphics.
Ignoring the size of the sensor, producing two standard camera lenses will always be cheaper than producing an array of multiple (i.e. more than two) micro-lenses. This is doubly true considering that the micro-lens technology is already encumbered by patents.
Finally, stereo is very well understood and has already been implemented on the GPU, on FPGAs and in ASICs (commonly known as STOC, Stereo On-Chip). I would personally love to see a demo of a micro-lens array used for creating a depth map, but I just don't see any practical advantages over stereo.
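For what it's worth, the core of block-matching stereo really is small enough to sketch in a few lines. A naive SAD (sum-of-absolute-differences) disparity search on one synthetic scanline; real STOC/FPGA pipelines are essentially this, massively parallelized and refined:

```python
# Naive 1D block-matching stereo: for each pixel in the left scanline, find
# the horizontal shift into the right scanline minimizing SAD over a block.
import numpy as np

def disparity_1d(left, right, block=5, max_disp=8):
    half = block // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half, len(left) - half):
        costs = []
        for d in range(max_disp + 1):
            if x - d - half < 0:
                costs.append(np.inf)   # candidate window out of bounds
            else:
                costs.append(np.abs(left[x - half:x + half + 1] -
                                    right[x - d - half:x - d + half + 1]).sum())
        disp[x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right view sees every feature shifted 3 px to the left.
rng = np.random.default_rng(2)
left = rng.random(64)
right = np.roll(left, -3)          # right[x - 3] == left[x]

disp = disparity_1d(left, right)
print(disp[10:20])                 # all 3 for this synthetic scanline
```

The recovered disparity is inversely proportional to depth, which is how the depth map falls out.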
This is obviously not the main use case – and we seem to be talking past each other.
Of course (hopefully), that's version 4 or 5. This initial roll-out is looking great! Can't wait to play around with one of the units in the local photo shop.
Looking at the demos, I wonder what the depth-of-field is? Is it entirely calculable, or is it just a few feet and then the user sets the target? It looks like it is tiny, but I'm guessing it's set that way to show off the cool features of the technology.
If someone comes up with software to allow refocusing on two distance points with existing photos, they could eat Lytro's lunch. Can Picasa do something like this?
I'd suggest you could get a similar effect with a camera that had two or three lenses using different focal lengths. Fujifilm already released a camera with two lenses: http://www.dpreview.com/news/0809/08092209fujifilm3D.asp so it's just a software modification for that.
They don't name a price on their website (write them to find out) and, looking at the applications they are naming on their website (http://www.raytrix.de/index.php/applications.html), they certainly do not target consumers.
Here are their camera models if you are interested: http://www.raytrix.de/index.php/models.html
Given the choice of glass and bodies, I think most professionals would still keep with Nikon and Canon (especially for print).
But for entry-level, I could see it being a killer because of the ease.
I'd say that investors would lose lots of money on that venture.
I'd bet against that. They definitely solve a problem people have with taking focused photos and I see this technology becoming even more popular as it becomes integrated into videos. Just the other day I tried to take a quick photo of my niece and cat playing, and trust me when I tell you it was a real hassle trying to keep them in focus. This issue is just one of the many that can be solved with this technology. Even if there isn't strong demand for their own camera, they still should be able to license the technology to camera makers down the road and eventually integrate it into the smartphone market.
What I'm saying is that this particular business approach would fail (too much hype, not focusing on teams' advantages, ignoring customers' preferences).
Investors would over-invest, but business would not get enough revenue to pay them back.
We shall see, but just for the fact their product and technology improves a previous experience in such an obvious way, I have much less of a problem seeing this company receiving a lot of hype and funding than say Color, for example.
There is also an (unrelated) iPhone App by the inventor for playing with depth of field:
The advantage I see in the future is that you are 'guaranteed' a potentially sharp picture.
Even when I do portraits with a wide aperture (1.2/1.4) there are times when I miss the focus on a tiny detail that I wish was more in focus. And since I prefer doing candid poses, redoing a situation just isn't that preferable.
For sports or wildlife, I imagine it can be hard to focus too, sometimes missing a shot of a bird by a split second.
It does make me wonder how the motion blur on this would work.
Of course they haven't delivered a consumer product with it yet... But neither has this company.
Let's wait and see...