These days lens design is already highly automated, with simulation software doing the optimization. High-end lenses are just lenses where some of the constraints have been relaxed to get better quality (bigger, heavier, more expensive types of glass, tighter manufacturing tolerances). And yet some of the parameters lenses are still optimized for (color transmission, field curvature, etc.) are increasingly things you could fix in post-production if you had an accurate model of the lens.
This makes particular sense for cameras with non-interchangeable lenses, where the sensor+lens combo is known and manufacturing tolerances are already low, so your lens model can be quite good. It should be particularly useful in smartphones, where all the other constraints are so tight (small size and weight practically require it).
The approach ErsatzVerkehr mentioned, used in m43 and some compacts, is more "we have a lens+sensor+software combo that spits out a correct photo and you never see uncorrected results." The difference here is that you can design lenses that, without correction, would produce unacceptable results.
I'm still blown away by the choice of lithography methods used to pattern silicon. Because large high-quality lenses get so expensive and difficult to manufacture, we have adopted rigs with systems of stepper motors that move the wafer and lens about, so that we can use both a smaller lens and a slit (instead of annular) lens. I always think, "how can that be both easier and cheaper?", but it is...
Visual aid to illustrate a "scanner" method: http://www.lithoguru.com/images/lithobasics_clip_image012.gi...
The "scanner" method is the slit lens, the "stepper" method is the smaller lens, and today we do both.
To be fair, SIGGRAPH has a strong focus on applied work, and the contributions are often genuinely impressive, even when the underlying method was developed by someone else and published before.
The complex PSF isn't important for camera applications because the complex field carries the transparency information. I don't want to see transparent things with my camera.
The Lytro camera is the closest I'm aware of to bringing such things to market.
This is more useful to the homeowner and corner gas station than to government installations which have the budget to install good cameras.
So basically, if you can replace optics with software, you end up with better pictures for less money and more compact imagers. While better camera phones are always the cited example, I think web cameras, security cameras, and visual recognizers (things that track items on assembly lines, or people in a store, or anything where you can set a visual condition to alert on) will be the big winners here.
What is not clear from the video is how this algorithm performs on elements that are out of focus due to a narrow depth of field. It seems likely that near the edges of the in-focus region there will be significant artifacts, as the algorithm misinterprets slight depth-of-field blur as lens aberration.
Although it's not exactly the same problem, I'm sure this sort of software could be applied to restoring old photos!
Considering it's a SIGGRAPH paper I'm probably just wrong, but from the pictures in the article that's what it looks like to me.
They shot the photos with a hand-made, one-element lens. This is a very bad lens.
The photos may look like they're out of focus, but they're actually in focus. (The width of lines does not change after the processing.) They just have an extreme amount of diffusion, chromatic aberration, and all sorts of other distortions.
Can you generate a PSF as part of a compression step that will turn a smudged and compressed image back into a better-than-conventional-compression approximation of the high-quality original?
Compare the two processes (a sketch in code follows below).
1. raw_image -> image_compress(raw_image)
2. raw_image -> shift_color_channels(raw_image) -> image_compress(shift_color_channels(raw_image))
* Thinking out loud *
Is the 2nd process feasible?
Is it possible that current image compression algorithms already pick up on the aberration patterns, which would obviate the need for the 2nd process?
In the case of image compression using wavelet transforms (which many methods use), and if wavelets can pick up on the aberration patterns, could the hurdle be finding a finite set of wavelet functions that works for the majority of lenses?
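Here's a minimal sketch of the 2nd process in Python, with everything invented for illustration: a hypothetical per-channel shift standing in for the aberration correction, and coarse quantization standing in for a real codec.

    import numpy as np
    from scipy import ndimage

    def shift_color_channels(raw, shifts):
        # Undo a per-channel chromatic shift. `shifts` is a hypothetical
        # per-channel (dy, dx) calibration; real aberrations vary across the frame.
        out = np.empty_like(raw)
        for c, (dy, dx) in enumerate(shifts):
            out[..., c] = ndimage.shift(raw[..., c], (dy, dx), order=1, mode="nearest")
        return out

    def image_compress(img, step=8.0):
        # Toy stand-in for a codec: coarse quantization. A real wavelet
        # or DCT codec would go here.
        return np.round(img / step) * step

    raw_image = np.random.rand(64, 64, 3) * 255  # placeholder frame

    # Process 1: compress as-is.
    p1 = image_compress(raw_image)

    # Process 2: align the channels first, then compress. The aligned image
    # has less high-frequency color fringing for the codec to spend bits on.
    p2 = image_compress(shift_color_channels(raw_image, [(0, 0), (0.5, 0.3), (-0.4, 0.2)]))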
In this case their simple lens (akin to a Lensbaby) results in zoom-like blur in the corners and thus more extreme PSFs than the Canon lens, which has more or less the same PSF shape overall, just wider in the corners.
Also it appears that they used 32 wavelengths for computing the PSFs in the simple lens case vs. just three for the Canon lens.
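If I read that right, the per-channel PSF is just a spectrally weighted sum of monochromatic PSFs. A toy version of that idea, with the optical model entirely made up for illustration (a blur disk that grows with each wavelength's focus error, standing in for real longitudinal chromatic aberration):

    import numpy as np

    def mono_psf(wavelength_nm, size=33):
        # Toy monochromatic PSF: a uniform disk whose radius grows with
        # the wavelength's distance from the in-focus wavelength (550 nm).
        radius = 1.0 + 4.0 * abs(wavelength_nm - 550) / 150.0  # pixels
        y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
        disk = (x**2 + y**2 <= radius**2).astype(float)
        return disk / disk.sum()

    # Per-channel PSF = weighted sum over many sample wavelengths, in the
    # spirit of the 32-wavelength computation mentioned above.
    wavelengths = np.linspace(400, 700, 32)
    red_response = np.exp(-((wavelengths - 600) / 40.0) ** 2)  # toy sensor curve
    red_psf = sum(w * mono_psf(lam) for lam, w in zip(wavelengths, red_response))
    red_psf /= red_psf.sum()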
And since this runs in post, it could also be offered as a plugin for Photoshop, GIMP, and others.
As far as I understand, this method just requires the PSF data for the different settings (apertures/focal lengths) to be shipped as well, and Photoshop could then do the same thing.
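As a rough sketch of what such a plugin could do with shipped PSF data, here's plain frequency-domain Wiener deconvolution with a known, spatially uniform PSF. (The paper's actual method is a more sophisticated regularized deconvolution with spatially varying PSFs, so treat this as the simplest possible stand-in.)

    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        # Embed the PSF in a full-size kernel centered on the origin so
        # the FFT treats it as a circular convolution kernel.
        kernel = np.zeros_like(blurred, dtype=float)
        kh, kw = psf.shape
        kernel[:kh, :kw] = psf / psf.sum()
        kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

        H = np.fft.fft2(kernel)
        G = np.fft.fft2(blurred)
        # Wiener filter: F = conj(H) * G / (|H|^2 + 1/SNR); `snr` is an
        # assumed signal-to-noise ratio that sets the regularization.
        F = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(F))

    # One measured PSF per color channel (per aperture/focal length), e.g.:
    # restored = np.dstack([wiener_deconvolve(img[..., c], psfs[c]) for c in range(3)])

The hard part is the spatial variation: as noted above, the PSF gets wider in the corners, so a single kernel only holds near the center of the frame. A real implementation would presumably tile the image and deconvolve each tile with its local PSF.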
Aberrations in a poor lens come from two sources: 1) design flaws/compromises and 2) sample variation. If you have a lens profile in Photoshop with PSF data, it can only correct for the problems inherent in the design of the lens (1), not problems due to manufacturing variation or lens damage.
It's how lens manufacturers keep their 'professional series' lenses lucrative... don't want people being satisfied with what they can afford!
(that said, if you want sharp, the 50/1.8 at f/8 will be sharp enough. but sharp isn't everything.)
That a good photographer is able to deliver a far better photo despite the constraints of his tools goes without saying, but it's not what this is about.
Well, aside from the problem that information can't be created out of thin air. You can fix certain lens errors, but you cannot extract details that aren't there in the original.
The most obvious consequence is that you can't use this technique to extract more megapixels than the sensor has.
A less obvious limitation might be sensitivity to noise: all these samples were taken in bright light, and if your crappy lens produces a very noisy image in low light, this method won't fix that.
Furthermore, the numerical aperture (NA) of the lens defines the highest possible resolution. Even with this method you can't resolve features smaller than roughly wavelength/(2·NA), the Abbe diffraction limit.
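Back-of-the-envelope, assuming the usual Abbe form of the limit and a rough NA for a photographic lens:

    wavelength_nm = 550                # green light
    NA = 0.15                          # image-space NA ~ 1/(2*f-number), i.e. about f/3.3
    abbe_limit_nm = wavelength_nm / (2 * NA)
    print(f"smallest resolvable feature ~ {abbe_limit_nm:.0f} nm")  # ~1833 nm, i.e. ~1.8 um

That's the same order of magnitude as typical sensor pixel pitches, so the optics, not the software, set the ceiling.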
Unfortunately, there's no way around the principle "Garbage in, garbage out." Lensmakers, rejoice: your business wasn't made obsolete after all!
Nevertheless, I can see exciting applications for this method; one that comes to mind is improving the pictures taken by photographic scanners used for digitizing old books.
So it even works on noisy images: you get a better-quality version of the image that is still noisy.
They never claim to extract details that aren't there. They just present the details that are there in a way that humans perceive as "sharper images".
Your note about scanners is interesting. There was once a project that converted scans of vinyl records into MP3s. You had to scan the disk several times because of the scanner's aberrations (http://www.cs.huji.ac.il/~springer/DigitalNeedle/index.html).
A better sensor will only get you so far: since light is quantized (a ray consists of individual photons), there are physical limits to the intensity resolution possible in low light.
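Concretely, photon arrivals are Poisson-distributed, so a pixel that collects N photons has noise sqrt(N) and therefore SNR = sqrt(N):

    import numpy as np

    for photons in (10_000, 100, 4):   # bright light ... deep shadow
        print(f"{photons:6d} photons -> SNR ~ {np.sqrt(photons):5.1f}")

    # Deconvolution can't add photons; it only redistributes (and usually
    # amplifies) the noise that is already in the capture.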
Once information is lost, there is no way to recover it. And that's why you just can't make up for a crappy lens with software.