
Researchers Develop Method for Getting High-Quality Photos from Crappy Lenses - Xcelerate
http://petapixel.com/2013/09/29/newly-developed-software-helps-get-high-quality-photos-crappy-lenses/
======
pedrocr
This is going in a direction that manufacturers have so far mostly avoided,
probably out of tradition.

These days lens design is already highly automated, with simulation software
doing the optimization. High-end lenses are just lenses where you've relaxed
some of the constraints to get better quality (bigger, heavier, expensive
types of glass, tighter manufacturing tolerances). And yet some of the
parameters lenses are still optimized for (color transmission, field
curvature, etc.) are increasingly things you could fix in post-production if
you had an accurate model of the lens.

This makes particular sense for non-interchangeable lens cameras where the
sensor+lens combo is known and manufacturing tolerances are already low, so
your lens modeling can be quite good. It should be particularly useful in
smartphones where all the other restrictions are so tight (small size and
weight require it).
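The core of such a post-production fix is deconvolution against a per-channel
model of the lens. A minimal sketch of the idea, assuming the blur is
described by a known, spatially uniform PSF per color channel (real lens
aberrations vary across the frame, and all names here are hypothetical):

```python
import numpy as np

def wiener_deconvolve(channel, psf, snr=100.0):
    """Sharpen one color channel given its measured PSF, via a Wiener filter."""
    # Embed the small PSF kernel in a full-size array, centered on the origin,
    # so the FFT treats it as a circular convolution kernel.
    kernel = np.zeros_like(channel, dtype=float)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kernel)
    # Wiener filter: a noise-regularized inverse of the lens transfer function.
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(channel) * G))
```

Chromatic aberration is why the per-channel part matters: each channel gets
its own PSF, so red, green, and blue would be deconvolved independently and
only then recombined.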

~~~
ErsatzVerkehr
This is already in use in the micro-4/3rds cameras, which feature
interchangeable lenses, but still tight integration between the lens and the
camera. There are even firmware updates for the _lens_.

[http://m43photo.blogspot.de/2010/09/lumix-20mm-distortion-co...](http://m43photo.blogspot.de/2010/09/lumix-20mm-distortion-correction.html)

~~~
pedrocr
Neat! I had seen it done on compacts but didn't realize anyone had done it on
interchangeable-lens cameras. There are plenty of lens correction packages
for DSLR lenses, but those are "since the lens has compromises, we fix it
programmatically", not "the lens was designed with the correction in mind".
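Those correction packages mostly undo geometric distortion with a low-order
radial polynomial, and inverting that model is a small fixed-point iteration.
A sketch using a one-term Brown-style model (the coefficient and coordinates
are made up for illustration):

```python
import numpy as np

def undistort(xy_distorted, k1, iters=8):
    """Invert the one-term radial distortion model
        x_d = x_u * (1 + k1 * |x_u|^2)
    by fixed-point iteration. Coordinates are normalized and centered on the
    optical axis; k1 < 0 is barrel distortion, k1 > 0 pincushion."""
    xy_d = np.asarray(xy_distorted, dtype=float)
    xy_u = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy_u ** 2, axis=-1, keepdims=True)
        xy_u = xy_d / (1.0 + k1 * r2)
    return xy_u
```

In practice a raw converter applies this per pixel and resamples; the point
here is only that a few iterations converge quickly for realistic, small k1.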

------
sliverstorm
Lenses are such an interesting topic, a holdout that resists what has in
general been a march of progress in manufacturing ability.

I'm still blown away today by the choices of lithography methods used in
patterning silicon. Because large high-quality lenses are so expensive and
difficult to manufacture, we have adopted rigs with systems of stepper motors
that move the wafer and lens about, so that we can use both a smaller lens
and a slit (instead of annular) lens. I always think, "how can that be both
easier and cheaper!", but it is...

Visual aid to illustrate a "scanner" method:
[http://www.lithoguru.com/images/lithobasics_clip_image012.gi...](http://www.lithoguru.com/images/lithobasics_clip_image012.gif)

The "scanner" method is the slit lens, the "stepper" method is the smaller
lens, and today we do both.

More info:

[http://www.lithoguru.com/scientist/lithobasics.html](http://www.lithoguru.com/scientist/lithobasics.html)

[http://en.wikipedia.org/wiki/Stepper#Scanners](http://en.wikipedia.org/wiki/Stepper#Scanners)

------
frozenport
It's always fun when somebody from another field (computer science) discovers
something that is very well known in your field (E&M). One of the things
missing from this paper is that the PSFs of optical systems have a real and
an imaginary part, but measuring the imaginary part is quite difficult and is
mostly only done for biological samples.

~~~
Osmium
There are some really clever techniques out there for recovering the imaginary
parts actually, at least in a similar domain (transmission electron
microscopy), but they're still somewhat new.

~~~
quasque
Could you please elaborate on these techniques? I'd be most interested in any
relevant papers. Thanks!

~~~
Osmium
Can't find a good holistic reference off-hand, but you're looking at focal
series reconstruction (or, if you're using specialised equipment, holography)
to recover phase information of your wave. That can then be used to
reconstruct transfer functions and the like. [This isn't something I do
personally so I don't want to go into too much detail in case I explain
something incorrectly!]

------
potatolicious
This is exciting stuff. I found the discussion on the /r/photography subreddit
pretty interesting also:

[http://www.reddit.com/r/photography/comments/1n94tp/cool_tec...](http://www.reddit.com/r/photography/comments/1n94tp/cool_technique_for_getting_sharp_images_from/)

~~~
contingencies
From the limitations mentioned here and on the subreddit, it sounds like it
will be first successfully applied to security cameras with approximately
fixed focal lengths and known light conditions and/or facial snaps at
airports. Just what we need, more surveillance tech :(

~~~
ars
> Just what we need, more surveillance tech :(

This is more useful to the homeowner and corner gas station than to government
installations which have the budget to install good cameras.

~~~
bigiain
<cynical-hat> What's the bet someone from a company on a PowerPoint slide in
Snowden's cache is preparing to offer "security camera image enhancement as a
service", so all that homeowner and gas station surveillance gets sent
straight to some PRISM-like data gathering program as well as providing
enhanced images to the camera operators?

------
ChuckMcM
Computational photography is an interesting thing. The Lumia 1020, with its
41-megapixel camera, is probably the best test bed at the moment, capturing a
lot of the light field; Lytro is a more explicit example.

So basically, if you can replace optics with software, you end up with better
pictures for less money and more compact imagers. While better camera phones
are always cited, I think web cameras, security cameras, and visual
recognizers (things that track items on assembly lines, or people in a store,
or anything where you can set a visual condition to alert on) will be the big
winners here.

------
Sharlin
I'd like to see examples of how this compares to regular run-of-the-mill
post-process sharpening. If the "before" images were straight from the
camera, the comparison isn't really fair.

------
acomjean
Canon has the "Digital Lens Optimizer", which purports to model the optics of
each lens so as to allow better conversion from raw files to JPEGs. Lens
distortion can be corrected too. Lamentably, they seem to correct only their
decent lenses, and the file size grows substantially.

Some details:

[http://www.bobatkins.com/photography/digital/DPP_v3-11-10.ht...](http://www.bobatkins.com/photography/digital/DPP_v3-11-10.html)

------
mistercow
These results are very impressive, and I think that if applied to a lens that
was less aggressively simplified, they would be even better.

What is not clear from the video is how this algorithm performs on elements
that are out of focus due to narrow depth of field. It seems likely that near
the edge of the focal distance, there will be significant artifacts as the
algorithm misinterprets slight depth of field blur as lens aberration.

------
kibaekr
This is awesome! I was actually thinking about this earlier today while
looking at one of my profile pictures. I liked a particular photo a lot, but
it had bad resolution when magnified, and I was wondering if it would be
possible to make it high-def through some software that automatically divided
and colored in the pixels.

Although it's not exactly the same, I'm sure this sort of software can be
applied to restoring old photos!

------
ibrahima
Looks pretty cool and all, but am I missing something or are the first
pictures just slightly out of focus? If the images were actually in focus
would the improvement look as significant?

Considering it's a SIGGRAPH paper, I'm probably just wrong, but from the
pictures in the article that's what it looks like to me.

~~~
relix
I think they took the samples from the edge of a full-resolution image. One
difference between good and cheap lenses is that the center of the image will
always be good, while with a cheap lens more and more distortion appears
toward the edges. It's possible these images were perfectly in focus at the
focus point.

~~~
tanzam75
Those are nearly full-sensor images. If you download the supplemental
materials, the large images are all 3492 x 2205. That's 7.7 megapixels -- and
the Canon 40D is listed as a 10.1-megapixel camera.

They shot the photos with a hand-made, one-element lens. This is a very bad
lens.

The photos may look like they're out of focus, but they're actually in focus.
(The width of lines does not change after the processing.) They just have an
extreme amount of diffusion, chromatic aberration, and all sorts of other
distortions.

------
willvarfar
How can this be applied to, say, recovering highly-compressed images and
video?

Can you generate a PSF as part of a compression step that will turn a smudged
and compressed image back into a better-than-conventional-compression
approximation of the high-quality original?

~~~
haffi112
I would think not. By lossily compressing the image you have already lost
information necessary to reconstruct it. This method uses information from
all three color channels to reconstruct the image: all the information is
there, the image is just blurry because the color channels are shifted
(chromatic aberration).
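That point can be made concrete with the simplest possible model: lateral
chromatic aberration as a rigid per-channel shift, which loses nothing and
can be undone exactly. Real aberrations vary across the frame, so this is
only a toy, and the shift values are hypothetical:

```python
import numpy as np

def align_channels(image, red_shift, blue_shift):
    """Undo rigid per-channel misalignment (a toy model of lateral chromatic
    aberration). Shifts are (dy, dx) offsets of red/blue relative to green."""
    out = image.copy()
    out[..., 0] = np.roll(image[..., 0], (-red_shift[0], -red_shift[1]), axis=(0, 1))
    out[..., 2] = np.roll(image[..., 2], (-blue_shift[0], -blue_shift[1]), axis=(0, 1))
    return out
```

Because np.roll is just a permutation of pixels, the "aberrated" image
contains exactly the same information as the clean one, which is why this
kind of defect is recoverable while lossy compression is not.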

~~~
has2k1
To extend willvarfar's question: can you shift color channels in a reversible
way so as to get compression?

Compare the two processes:

1. raw_image -> image_compress(raw_image)

2. raw_image -> shift_color_channels(raw_image) ->
image_compress(shift_color_channels(raw_image))

*Thinking out loud*

Is the 2nd process feasible?

Is it possible that current image compression algorithms already pick up on
the aberration patterns, which would invalidate the need for the 2nd process?

In the case of image compression using wavelet transforms (which many methods
use), and if wavelets can pick up on the aberration patterns, could the
hurdle be finding a finite set of wavelet functions that works for the
majority of lenses?
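One cheap way to poke at the question is to compare compressed sizes with and
without a simulated channel shift, using a generic byte compressor as a
stand-in for an image codec. This is only suggestive, not what JPEG or a
wavelet codec would do, and the test image and shift are made up:

```python
import zlib

import numpy as np

def compressed_size(image):
    """Compressed byte count; zlib is a crude stand-in for an image codec."""
    return len(zlib.compress(image.tobytes(), 9))

# A smooth synthetic frame: the same horizontal ramp in every channel.
ramp = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
img = np.stack([ramp, ramp, ramp], axis=-1)

# Simulate lateral chromatic aberration as a rigid shift of the red channel.
shifted = img.copy()
shifted[..., 0] = np.roll(img[..., 0], 3, axis=1)

print(compressed_size(img), compressed_size(shifted))
```

A rigid shift barely changes zlib's view of this data; whether a real
transform codec's basis functions "absorb" the aberration the same way is
exactly the open part of the question.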

------
ohwp
Nice! This is working better than I imagined. It's also smart that they use
each channel separately, since different wavelengths bend differently.

And since this is a post-processing step, it could also be used as a plugin
for Photoshop, GIMP and others.

~~~
qznc
I'm not sure about the "plugin for Photoshop, Gimp and others". They seem to
require a calibration with the actual lens.

~~~
Ecio78
I think it depends on how much two lenses of the same model/type differ:
maybe you could have presets for common lenses and/or fine-tune the system
for your own copy by taking some calibration shots.

~~~
ZeroGravitas
There's a database of lens info that grew out of some open-source panorama
stitching tools; I wonder if something similar would be applicable:

[http://lensfun.berlios.de/](http://lensfun.berlios.de/)

------
devx
I hope we see this in smartphones soon.

~~~
informatimago
This is what [http://www.dxo.com](http://www.dxo.com) has been selling for
years.

------
sswong
Just curious: what's the difference between this and Photoshop's camera
shake filter?
[http://www.adobe.com/inspire/2013/06/photoshop-camera-shake....](http://www.adobe.com/inspire/2013/06/photoshop-camera-shake.html)

~~~
ygra
Camera shake affects all color channels uniformly across the image, so a
shake-removal filter is unsuitable for removing lens aberrations, which vary
per channel and across the frame.

------
stinos
Quite nice, but it doesn't come close to what you get from a decent SLR or
DSLR, a quality lens, and a somewhat skilled photographer.

~~~
dingaling
There are plenty of crappy lenses on DSLRs too, with only a few rare gems in
the sub-$1000 range. Canon's 85/1.8 and 40/2.8 stand out in my mind as
excellent, but the remainder of their mainstream (non-L) range is pretty
mediocre.

It's how lens manufacturers keep their 'professional series' lenses
lucrative... don't want people being satisfied with what they can afford!

~~~
wiredfool
On the other hand, the average, not-terribly-fast kit zoom lenses have gotten
way better over the last 10+ years. There are way fewer total dogs. Put one
in the hands of a decent photographer and they can get a good image.

(That said, if you want sharp, the 50/1.8 at f/8 will be sharp enough. But
sharp isn't everything.)

------
jakobe
My god! Someone finally discovered the "ENHANCE" filter!

Well, aside from the problem that information can't be created from thin air:
you can fix certain lens errors, but you cannot extract details that aren't
there in the original.

The most obvious consequence is that you can't use this technique to extract
more megapixels than the sensor has.

A less obvious limitation might be the sensitivity to noise; all these
samples were taken in bright light, but if your crappy lens produces a very
noisy image in low light, this method won't fix it.

Furthermore, the numerical aperture (NA) of the lens defines the highest
possible resolution. Even with this method you can't get a higher resolution
than wavelength/NA.
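For a sense of scale, plugging in illustrative numbers (the exact prefactor
depends on the resolution criterion; Rayleigh, for instance, gives about
1.22λN for an f-number N):

```python
# Back-of-the-envelope diffraction limit for a camera lens.
wavelength_nm = 550            # green light, middle of the visible band
f_number = 2.8                 # a typical fast consumer lens
na = 1 / (2 * f_number)        # small-angle approximation: NA ~ 1/(2N)
d_nm = wavelength_nm / na      # the wavelength/NA limit quoted above

print(f"NA = {na:.3f}, smallest resolvable feature = {d_nm / 1000:.1f} um")
```

At f/2.8 this comes out around 3 µm on the sensor, comparable to or larger
than the pixel pitch of many small-sensor cameras, so no amount of
deconvolution can conjure detail below that scale.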

Unfortunately, there's no way around the principle "Garbage in, garbage out."
Lensmakers rejoice, your business wasn't made obsolete after all!

Nevertheless, I can see exciting applications for this method; one that comes
to mind is improving the pictures taken by the photographic scanners used for
digitizing old books.

~~~
ohwp
This is not about reducing noise; it is about fixing aberration and
distortion.

So it even works on noisy images: you get a better-quality image that is
still noisy.

They never claim to extract details that aren't there. They just present
details in a way that humans perceive as "sharper images".

Your note about scanners is interesting. There was once a project that
converted scanned vinyl records into MP3s. You had to scan the disk several
times because of the scanner's aberration
([http://www.cs.huji.ac.il/~springer/DigitalNeedle/index.html](http://www.cs.huji.ac.il/~springer/DigitalNeedle/index.html)).

~~~
jakobe
You are right. I made my statements in reference to claims in the article like
"This technique (...) may some day provide a software alternative for those
who can’t afford high-end glass". The technique can definitely improve the
image quality of a given lens, but it will never allow a "cheap" lens to
replace an expensive lens.

~~~
Steuard
Maybe I'm missing something, but most of your comments seem to be focused on
limitations of the sensor, not on the lens itself. (It's hard for me to
picture how a _lens_ would behave differently at low light intensity than at
high intensity: the light rays all bend the same way regardless, right?) Your
point about diffraction-limited resolution is well taken, but for two lenses
with the same aperture projecting onto identical sensors it sounds like this
technique could make low-end products more competitive with high-end ones.
(Let me know if I've missed your point.)

~~~
jakobe
This method tries to correct specific lens errors. To do this, you need very
good intensity resolution. If you have poor intensity resolution, information
is lost and the lens errors can no longer be corrected. In low light, the low
signal-to-noise ratio leads to poor intensity resolution, and the lens errors
become irreversible.

A better sensor will only get you so far; since light is quantized (a ray
consists of individual photons), there are physical limits to the intensity
resolution possible at low light.
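The quantization argument has a simple form: photon arrivals are Poisson
distributed, so a pixel collecting N photons has noise √N and a best-case
signal-to-noise ratio of √N. A quick illustration with made-up photon counts:

```python
import math

def shot_noise_snr(photons):
    """Best-case SNR of a Poisson-limited pixel: N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons)

# Illustrative photon counts for a bright vs. a dim exposure.
for n in (10_000, 100):
    snr = shot_noise_snr(n)
    print(f"{n:>6} photons -> SNR = {snr:.0f} ({20 * math.log10(snr):.0f} dB)")
```

A hundred times less light costs a factor of ten in SNR, and it is that lost
intensity precision, not the optics, that makes the deconvolution ill-posed
in the dark.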

Once information is lost, there is no way to recover it. And that's why you
just can't make up for a crappy lens with software.

