If the true blur kernel is more complicated -- perhaps a wavy line -- then you probably need a blind deconvolution tool, which this is not (yet?).
If you're interested in blind deconvolution in general, Dr. Levin of the Weizmann Institute put together a nice overview paper a few years ago: http://www.wisdom.weizmann.ac.il/~levina/papers/deconvLevinE...
(Disclaimer: I'm the developer of Blurity, a blind deconvolution product)
I still have several questions:
- How well does state-of-the-art deblurring work, for forensic purposes, on digital blur produced by known algorithms?
- How does blind deconvolution compare with manual recovery processes done by experts, if such manual methods exist?
- Can blind deconvolution approaches extend to techniques for enhancing resolution from a sequence of photos taken with the same parameters?
- Are there researchers trying to integrate visual cognition frameworks to make deconvolution results more visually recognizable? I remember one somewhat related piece of research posted on HN: http://news.ycombinator.com/item?id=4241266 (and it happens to come from the same university as the paper you posted)
- Are you asking about recovery of information that was intentionally blurred? If so, there have been some exciting results in this area in recent years. For example, it turns out that given reasonable priors on the underlying glyphs, it's possible to reconstruct even heavily blurred text. (e.g., http://dheera.net/projects/blur.php ) That doesn't involve deblurring per se, since the goal is simply to find the glyphs (and combinations thereof) with maximum likelihood given the blurred text. Still, it's interesting, and similar techniques have been applied to super-resolution problems lately, where the approach is often called "hallucination."
- Blind deconvolutions often are used by experts. However, it's possible to help things along if a glint exhibiting the blur of interest is visible in the photo. That is: if a bright dot of light is visible in the photo, it will be blurred too, and so in the blurred photo the point of light will look like the point spread function (the blur kernel), and that can then be extracted and used with a non-blind deconvolution approach to recover the latent sharp image.
- Blind deconvolution and super-resolution are related problems, so advances in one might certainly help in the other. The research you cited makes use of image hallucination (mentioned above) as a key part of its super-resolution algorithm.
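The glint trick above is easy to sketch. Here's a minimal non-blind Wiener-style deconvolution in numpy, assuming you've already cropped the glint out of the photo as a small `psf` array (the function name and parameters are my own illustration, not from any particular product):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_level=1e-3):
    """Non-blind Wiener-style deconvolution: recover a latent image
    given the point spread function (e.g. a glint cropped out of the
    photo). noise_level trades noise amplification against sharpness."""
    # Pad the PSF up to the image size and roll its center to the
    # origin, so the kernel's FFT carries no phase offset.
    kernel = np.zeros(blurred.shape)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf / psf.sum()
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise_level) rather than the
    # naive 1 / H, so frequencies where H is near zero are damped
    # instead of blowing up.
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_level)
    return np.real(np.fft.ifft2(F))
```

With a noisy real photo you'd raise noise_level well above zero; with no noise and a well-conditioned kernel this approaches exact inverse filtering.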
Using the program I was able to get: https://dl.dropbox.com/u/24903613/hn/blur-edited.jpg with the following parameters:
Defect type: Out of Focus Blur
Correction Strength: 23%
Edge Feather: 34%
For example, check out: Fast image deconvolution using hyper-Laplacian priors (http://cs.nyu.edu/~dilip/research/fast-deconvolution/).
edit: I changed the link to the author's page, which has Matlab code and a GPU implementation.
This one is obviously comedy, but always enjoyable: http://www.youtube.com/watch?v=KUFkb0d1kbU
>But the Enhance Button simply ignores the fact that the big blocky pixels you get when you zoom in too close on a picture are the only information that the picture actually contains, and attempting to extract more detail than this is fundamentally impossible. No matter what you do or how you do it, you're merely guessing, if not making stuff up outright.
Another program I've heard of but haven't tried is UnShake: http://www.zen147963.zen.co.uk/Unshake/
It seems that if the blur is caused by moving the camera, you can try to find the direction the movement happened in and then restore the original photograph quite nicely.
What sets the Adobe tool apart from the rest (so far) is that you can define a non-linear path for motion blur. From what was said at the demo, there seems to be some hope for automatically determining a non-linear motion path. That would be really cool.
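To make "blur kernel as a motion path" concrete, here's a toy sketch (my own code, nothing to do with the Adobe tool): it rasterizes a straight-line path into a normalized kernel; a non-linear path would just trace a curve instead of a line.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg, size):
    """Rasterize a straight motion path of `length` pixels at
    `angle_deg` into a normalized size-by-size blur kernel. A
    camera-shake squiggle or a non-linear path is the same idea
    with a curved trajectory instead of a line."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Oversample the path so no pixel along the line is skipped.
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 4 * length):
        row = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        k[row, col] = 1.0
    return k / k.sum()
```

Convolving an image with this kernel simulates the shake; deconvolving with it (when you've guessed the path right) undoes it.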
1. The article states "the operation which is opposite to convolution is equivalent to division in the frequency domain", which is not correct.
2. "Deconvolution" has no single mathematical definition (as that quote implies); it is the name for various algorithmic approaches used in signal processing.
3. Finally, the Wiener filter is not deconvolution; it is just a filter.
Nonetheless, a great article, with great illustrations.
If you blur a region and then resize, you lose the extra bit of information you'd normally need to do a good de-blur afterwards.
But this does make me think: who knows, terrorists could be using regular photos, looking unimportant at first sight, to store messages. You'd then have to de-blur an out-of-focus zone to reveal the message.
That's a decade of compromising images about to become significantly less anonymous...
This isn't a convolution, but the story's the same — the guy clearly thought that the 'swirl' was irreversible / lossy, but it isn't.
A short explanation can be given using the Fourier transform. Blurs like Gaussian blur, and photographic camera blurs (with some simplifying assumptions), are convolutions: they apply a 'blur kernel' to each pixel of the image, which spreads energy from that pixel to the neighboring pixels based on the shape of the kernel. Visualizing the outcome of a convolution is not straightforward for complex scenes, but here the Fourier transform helps.
When looked at in the frequency domain, the convolution operator turns into a multiplication operator; the spectra of the image and the blur kernel are simply multiplied frequency-by-frequency. So you can directly see where information is being lost in the final image, by seeing which frequencies of the blur kernel are zero - at those frequencies, the output image has lost all original information.
Deconvolution techniques are all trying to restore the original image; in theory all you need to do is to take the blur kernel, and divide by its frequency spectrum to obtain the original. Assuming you have no noise, etc, in the process, this works fine, except where the blur kernel is zero - division by zero doesn't get you very far, and the information there is truly lost.
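The multiply-then-divide picture is easy to check numerically. A toy 1-D sketch (mine; the 3-tap box kernel is arbitrary, chosen because its spectrum happens to have no exact zeros on this grid):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
signal = rng.standard_normal(n)

# A circular 3-tap box blur, centered at the origin.
kernel = np.zeros(n)
kernel[[0, 1, -1]] = 1 / 3

# Convolution theorem: blurring is multiplication in frequency space.
H = np.fft.fft(kernel)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Naive deconvolution: divide the blurred spectrum by H. This kernel's
# spectrum is (1 + 2*cos(2*pi*u/64))/3, which never hits zero exactly
# on the 64-point grid, so with no noise the recovery is essentially
# exact. Where |H| is tiny, any noise would be amplified by 1/|H|.
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) / H))
```

If the kernel's spectrum did have exact zeros, the division would produce infinities at those frequencies, and no trick recovers what was multiplied by zero.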
With camera handshake, the shape of the blur kernel tends to be a squiggle (the path of the camera motion), and the frequency spectrum is reasonably well behaved - there may be no zeros, or just a few spots. So reversing the blur is possible, maybe with some additional interpolation to cover up the nulls. Out-of-focus blur (bokeh) is much harder, since it tends to be much more uniform and smooth, like a Gaussian blur.
A Gaussian blur turns out to have a Gaussian frequency spectrum as well - that means the blur kernel has effectively no frequency content past a cutoff point, and the wider the blur, the lower the frequency cutoff and the more information is lost. So deconvolution can't really work directly; you can make assumptions about what was there before (priors) to guide the reconstruction, but at some point that's about as good as pasting a random face from the internet onto the blurred head. The question is mostly about where that cutoff is - how much can your knowledge of 'this is a face' make up for the zeroed-out information? In practice, you're probably pretty safe if you've blurred the face to the point where no features remain. If you're really worried about it, throwing in some random noise makes the problem even harder.
So in short: we can probably do OK on camera shake and maybe out-of-focus bokeh. We can't recover from smooth uniform blurs like Gaussian blur.
But I suppose a trapdoor blur -- a convolution that's difficult to reverse -- is possible. Maybe random noise is enough.