
Photoshop 'unblur' leaves MAX audience gasping for air - suivix
http://9to5mac.com/2011/10/10/photoshop-unblur-leaves-max-audience-gasping-for-air/
======
snikolov
If you are wondering how they might be doing it, here is one approach that I
saw in a computer vision class (no idea if they are doing anything similar to
this)

(slides:
[http://cs.nyu.edu/~fergus/presentations/fergus_deblurring.pd...](http://cs.nyu.edu/~fergus/presentations/fergus_deblurring.pdf)
(~60 MB ppt) paper: <http://cs.nyu.edu/~fergus/papers/deblur_fergus.pdf> (~10
MB pdf) )

The basic idea is that you have an unknown original image and it is convolved
with an unknown blurring kernel to produce the observed image. It turns out
that problem is ill-posed. You could have a bizarre original image blurred
with just the right bizarre blurring kernel to produce the observed image. So
to estimate both the original image and the kernel, you have to minimize the
reconstruction error with respect to the observed image, while penalizing
unlikely blurring kernels or original images. If one extracts enough
statistics from a dataset of natural images, one can tell whether an image is
likely or not by comparing that image's statistics to the corresponding
statistics of your dataset of natural images. Similarly, simple blurring
kernels are favored over complex ones (think "short arc of motion" vs.
"tracing the letters of a word with your camera")
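The forward model described above can be sketched in a few lines. This is an illustrative toy (my own variable names and kernel, not anything from the Fergus paper or Adobe's code): the observed image is the unknown sharp image convolved with an unknown kernel, plus noise, and without priors many (image, kernel) pairs explain the same observation.

```python
# Toy sketch of the blur forward model: observed = sharp * kernel + noise.
# Everything here is illustrative, not the actual algorithm.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))          # stand-in for the unknown original image

# A simple horizontal motion-blur kernel (a short "arc of motion")
kernel = np.zeros((5, 5))
kernel[2, :] = 1.0
kernel /= kernel.sum()                # normalize so brightness is preserved

observed = convolve2d(sharp, kernel, mode="same", boundary="symm")
observed += 0.01 * rng.standard_normal(observed.shape)   # sensor noise

# The inverse problem: find (sharp, kernel) minimizing
#   ||observed - sharp * kernel||^2  +  penalties on unlikely images/kernels
# Without the penalty terms this is ill-posed: many pairs fit equally well.
```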

~~~
chopsueyar
I assume the RedLaser barcode app uses something similar from captured video
frames combined with data from the accelerometer, no?

~~~
Mvandenbergh
Barcodes are designed to be easy to decode. Even blurred, the 1 or 2
dimensional frequency information is pretty well preserved. If I needed to
combine data from several blurred images of a barcode I'd extract the likely
values of the barcode from each image separately and then combine those.

~~~
chopsueyar
Does it make a difference on the technique used if the image is out-of-focus
versus blurred by camera motion?

~~~
ToastOpt
So long as the motion blur is a small enough angle/translation, the difference
is only in the shape of the blurring kernel. The kernel will be circular or
Gaussian for out-of-focus blur, and a line or arc for motion blur.

In one dimension, you'll be looking at a slice (more accurately, a summed
projection) of the kernel. Focus = Gaussian; motion = square wavelet (line) or
irregular (arc).
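The two kernel shapes being contrasted can be generated directly. These are assumed idealized forms (a uniform disk and a straight line), not anyone's production kernels:

```python
# Illustrative kernel shapes for the two blur types discussed above:
# a uniform disk for defocus, a line for straight-line camera motion.
import numpy as np

def disk_kernel(radius):
    """Uniform disk: approximates out-of-focus (aperture) blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def line_kernel(length):
    """Horizontal line: approximates linear motion blur."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    return k / k.sum()

print(disk_kernel(3).shape)   # (7, 7)
```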

------
Geee
It's called blind deconvolution. Blind means that they have to first estimate
the original convolution/blur kernel and in the second phase, apply the
deconvolution. If there's an acceleration sensor on the camera, you can use
its data to estimate the blur kernel.
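The non-blind second phase can be sketched with classic Richardson-Lucy iterations: once a kernel is known (e.g. derived from sensor data), repeatedly re-blur the current estimate and correct it toward the observation. A minimal numpy/scipy sketch; real implementations add edge handling and regularization:

```python
# Non-blind deconvolution via Richardson-Lucy, assuming the kernel is known.
# Illustrative sketch only.
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, kernel, iters=30):
    """Iteratively refine an estimate so that estimate * kernel -> observed."""
    estimate = np.full_like(observed, 0.5)   # flat initial guess
    flipped = kernel[::-1, ::-1]             # adjoint of the blur operator
    for _ in range(iters):
        reblurred = convolve2d(estimate, kernel, mode="same", boundary="symm")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * convolve2d(ratio, flipped,
                                         mode="same", boundary="symm")
    return estimate
```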

It's nothing new really, but algorithms for it have advanced tremendously. For
example, there's some results from 2009
<http://www.youtube.com/watch?v=uqMW3OleLM4>

Teuobk on HN also made a startup/app based on this, but it seems to be down
now: <http://news.ycombinator.com/item?id=2460887>

~~~
wickedchicken
One key problem with deconvolution is it's very susceptible to noise. I'm
guessing they developed a way to ramp up the coefficients so you see an
increase in clarity while keeping the noise below visible levels. So much of
image (and audio!) processing is about getting away with noise the person
can't detect :)

On a side note: does anybody know of workable deconvolution algorithms that
vary the kernel over the image? The example would be compensating for a bad
lens.
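The noise-amplification point can be made concrete with Wiener-style deconvolution, the standard fix: instead of dividing outright by the kernel's frequency response, damp the frequencies where its magnitude is small. A sketch under my own naming (the `nsr` parameter is the assumed noise-to-signal ratio):

```python
# Why naive deconvolution amplifies noise, and the Wiener-style fix.
# Illustrative only.
import numpy as np

def wiener_deconvolve(observed, kernel, nsr=0.01):
    """Frequency-domain deconvolution with a noise-to-signal ratio term.

    nsr=0 degenerates to the naive inverse filter, which divides by the
    kernel's frequency response H and blows up noise wherever |H| is small.
    """
    H = np.fft.fft2(kernel, s=observed.shape)
    G = np.fft.fft2(observed)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # regularized "1/H"
    return np.real(np.fft.ifft2(G * W))
```

As for a spatially varying kernel (the bad-lens case), a common workaround is to deconvolve overlapping tiles with locally estimated kernels and blend the results, though that is outside this sketch.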

~~~
elithrar
> So much of image (and audio!) processing is about getting away with noise
> the person can't detect :)

Adobe's noise processing algorithms in their software are something else,
especially in ACR6/Lightroom 3.

I moved 'down' from a 5D2 to a Panasonic GF1 as I don't shoot pro anymore, and
ISO1600 on this thing with a slightly-off exposure can be pretty noisy.
Lightroom cleans it up incredibly well without losing clarity/sharpness.
Before that, I'd just do all I could to keep the ISO down so I didn't 'lose'
images to noise.

~~~
dpe82
It's a very different kind of noise. Artifacts is what he meant to write.
Motion deconvolution done wrong tends to lead to things like ringing artifacts
and other things that look like the image has been really badly and
inaccurately oversharpened.

The basic theory for doing this type of deblurring isn't too bad, but making
it easy and automatic becomes a really difficult computer vision-related
problem. Adobe has been working on this (and a lot of other pretty cool stuff)
for quite a while. It'll be interesting if they ever ship it.

~~~
dhimes
I'm guessing that there is a whole lot of analysis left out by loading
pre-configured coefficient files to deconvolve these images. Getting _that_
process to be relatively easy for the layperson will be a challenge.

------
Kliment
The random anti-intellectual comments from the guys in the wheely chairs were
extremely annoying and unfunny. This guy is there, showing something truly
amazing, and they're all "What's an algorithm? Haha!". And they'll get away
with it too.

~~~
artursapek
Rainn Wilson is actually a really smart guy. He was playing the classic
"stupid guy wowed by a genius" card; it was an actor's way of complimenting
him.

~~~
elliottkember
Anti-intellectualism aside, they still interrupted an amazing talk without
adding any value. An actor's way of dominating the stage even when it's not
your turn. A bit rude and unnecessary.

~~~
wahnfrieden
He was paid by the company to interject with quips during the demos, so any
rudeness is understood by both parties. An awkward situation for all involved.

~~~
DavidSJ
Being paid to be rude doesn't make it not rude.

~~~
bluedanieru
He was paid to be rude to the people who were paying him to be rude? That
might not make it not rude, but I can't imagine anything that would come
closer.

~~~
atomicdog
It was rude to be rude to the people paying him to be rude because it appears
to the other members of the audience who were not being paid to be rude that
being rude and interrupting the talk is acceptable practice.

~~~
jh3
How about you all lighten up, yeah?

~~~
atomicdog
_Whoosh_

~~~
jh3
I said all, not just you.

------
dlsspy
Let me load the specially constructed set of parameters specific to this image
so that when I do the next step you get a really clear image.

That was a little too hand-wavy. I'm a little dubious until I see what went
into that phase.

~~~
stan_rogers
To an extent, this is already available -- for example in the Topaz Labs
InFocus Photoshop plugin. There are some params to play with that make it
easier to find the blur trajectory when the blur is motion-related (although
if you leave it in "figure it out for yourself" mode, it gets it right often
enough). InFocus (the current version) will only do linear trajectories,
though -- it can't handle curves as well as this Photoshop sneak does.

The parameter preload isn't cheating -- if they're anything like the InFocus
params, they're pretty obvious but somewhat tedious. They're things like
telling it that you're trying to correct motion blur rather than focus blur,
what level of artifacting you're willing to put up with (for forensics or text
recovery, you can put up with a lot of noise in the uninteresting part of the
picture), the desired hardness of recovered edges, that sort of thing. It
would have just been a time-waster for the demo (and, like in the demo,
InFocus allows you to save the params as a preset).

~~~
slantyyz
Yeah, I tried the Topaz InFocus plugin after it got a bit of buzz from the
TWIP podcast. It wasn't quite as magical as the demo images on the site made
it seem, and I ended up not buying it (Topaz's Denoise plugin, OTOH, is quite
incredible).

I suspect people will have to manage their expectations with this Adobe
plugin/feature as well.

~~~
stan_rogers
I've gotten some rather amazing results with InFocus myself, but it does take
a lot of tweaking of sliders and so on. It was mostly recovering irreplaceable
stuff, otherwise it wouldn't have been worth the bother. (Taking better
pictures is always the better option when you have it.) I do prefer the output
of a mild application to unsharp masking for photos that aren't actually
blurry, though. And, of course, running it after DeNoise just gets you your
noise back, often sharper and more noticeable than before.

I sort of expect Adobe to do better -- they've got a lot more resources to
work on the problem. Maybe I'll finally find a reason to upgrade from CS3.

~~~
slantyyz
Curious - how badly blurred were your images?

When I tried InFocus, I used them on some shots where I blew the focus at
wider apertures (now that I'm using a camera with better high ISO performance,
that kind of defocus is rarely an issue, because I have the luxury of stopping
down), and I couldn't get adequate results and I wasn't willing to spend a lot
of time tweaking sliders.

I am totally with you on the idea of taking better pictures though. The more
you can reduce your effort in post with technique, the better. A lot of stuff
is unfixable in post.

------
ck2
I'm more impressed with that overhead display - seems impossible?

How does it disappear at the end - or is that a virtual digital overlay?

Wait, is the entire background rear projected, like a borderless movie theater
screen? Must be massive resolution ?!

~~~
nknight
I'm inclined to say much or all of the background is indeed rear-projected.
If you look at the top-right around 5:06 or 5:07, you see what
looks a lot like it might be light-emitting text floating in midair. (EDIT:
Correction! It's not just floating there, it scrolls left just as the camera
is coming down, I missed that on the first viewing. So it's definitely not
just on a physical banner.)

As for "massive resolution", slicing up a framebuffer and shooting out the
components to multiple projectors wouldn't be a new idea, and I'll bet that's
what was done here.

~~~
voidpointer
Yes, it was all projected. (also the side walls of the theatre). Some 20+
projectors pushing 300 million pixels/sec. The intro to the keynotes was
pretty amazing as well, meshing the projection with light effects and live
performance:
[http://www.youtube.com/watch?v=VrDPgUjqTQ8&feature=relat...](http://www.youtube.com/watch?v=VrDPgUjqTQ8&feature=related)

------
benwerd
Forensic police drama writers everywhere: vindicated.

This is seriously cool technology.

~~~
ookblah
hahaha i was just thinking the same thing.

"zoom in on that. good. now....ENHANCE."

~~~
mathgladiator
"... and ROTATE"

~~~
aguynamedben
The reflection... <http://www.youtube.com/watch?v=Vxq9yj2pVWk>

------
po
Does this work with just motion blur or also with aperture blur? It seems like
they are calculating the motion of the camera so perhaps just the former.

~~~
klodolph
Defocus (or "aperture blur") cannot be corrected by the methods they mention
in the video. However, there are other kinds of blur you can correct.

~~~
vilius
This is an interesting way of completely avoiding defocus blur:
<http://www.lytro.com/picture_gallery>

------
waitwhat
How is this different from what FocusMagic <http://www.focusmagic.com/> has
been offering for over a decade?

~~~
teuobk
FocusMagic handles only focus blur or linear motion blur, and either way, it
requires a high level of user interaction to direct the deblurring. In effect,
it is "non-blind" deconvolution.

The Adobe approach, on the other hand, handles complex (non-linear) motion
blur and does so in a so-called "blind" way.

------
shazam
Wish they applied that algorithm on the video...

~~~
hopeless
Indeed. And held the camera steady. I have motion sickness now, and I'm
sitting at my desk :(

------
alanh
Been hoping for this for a while! The information is _there_, it’s just
distorted. Great to see Adobe keep pushing this kind of photo editing magic
forward. I bet the maths are crazy.

~~~
mturmon
The information is not really there, because the phase is not captured by the
sensor. All you have is the intensity of the light.

------
chrislo
I believe the speaker mentioned this algorithm was based on the Point Spread
Function[1] but modified to model the movement of the point in time. Dougherty
has[2] a static PSF deconvolution implementation that is fun to play with.

[1] <http://en.wikipedia.org/wiki/Point_spread_function> [2]
<http://www.optinav.com/Iterative-Deconvolution.htm>

------
bartwe
To me it seems the magic is in getting the blur kernel in the first place.
How do you get that?

------
kondro
Now all Photoshop needs is an unCrash feature.

------
kstenerud
Well, it KINDA looked like stuff was being unblurred, but it's really hard to
tell with the camera panning around out of focus. The only part I could really
be sure was actually unblurred was the phone number.

------
pixcavator
Here's a relevant image:
[http://inperc.com/wiki/index.php?title=A_common_view_of_digi...](http://inperc.com/wiki/index.php?title=A_common_view_of_digital_imaging).

------
nethsix
I suppose this is more image sharpening than reconstruction. Is this very
different from the technology on cameras/phones that tries to reduce photo
blurriness due to unsteady hands?

~~~
klodolph
What makes you suppose that? On an abstract level, you can model blur and
camera shake with a convolution kernel. You can then invert the kernel and get
back the original image. As an analogy, imagine that someone gives you an
audio file with an echo. You can subtract the echo with a filter. Camera shake
is harder because of the extra dimension. (Of course, you only get back the
exact original in the world of mathematics)
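The echo analogy works out exactly in one dimension. Here is an illustrative sketch (my own toy signal and delay values): an echo is convolution with a two-spike kernel, and knowing the kernel lets you subtract it recursively.

```python
# klodolph's echo analogy in 1D: echo = convolution with a two-spike kernel;
# knowing the kernel lets you invert it exactly (in math; real audio adds
# noise). Illustrative sketch.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(256)

# Echo: the original plus a half-amplitude copy delayed by 10 samples
echo_kernel = np.zeros(11)
echo_kernel[0], echo_kernel[10] = 1.0, 0.5
with_echo = np.convolve(signal, echo_kernel)[:256]

# Invert by recursive subtraction:
#   y[n] = x[n] + 0.5 x[n-10]  =>  x[n] = y[n] - 0.5 x[n-10]
recovered = with_echo.copy()
for n in range(10, 256):
    recovered[n] -= 0.5 * recovered[n - 10]

print(np.allclose(recovered, signal))   # True
```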

~~~
jjm
Should hopefully get easier with a dedicated asic crunching on the extra
dimension gathered from future built-in gyroscopes in cameras.

------
ibuildthings
One of the tricky parts of this method is extracting/guessing the camera
motion path purely from image measurements. The better the estimate, the
better the deblur kernel will be. What might be cool is if they can extract
meta-information using inertial and gyroscope-like sensors (which are fast
becoming standard in phones and cameras) to supplement the motion path
computation algorithm.

------
TelmoMenezes
So now we need blurring algorithms that cause actual information loss (I'm
sure they already exist, but now there's suddenly a bigger market for them).

~~~
hopeless
We always did need those algorithms. There was a paper a few years ago on
decoding gaussian-blurred documents to reveal the redacted passages. The only
safe way is to completely remove the original pixels, e.g. by drawing a black
box over the text.

~~~
shabble
and not by just drawing that box over them in the original (textual content)
pdf. You need to squash it down to images, then destroy the bits you need to.
(There might be other ways, but enough Big Government Agencies have screwed it
up to make it worth noting)

Is the 'decoding' paper this one: <http://dheera.net/projects/blur.php>? The
blur function is just smearing pixel values across their neighbours in blocks,
so you can treat it as a hashing function, and then generate enough candidates
that eventually you get something that hashes correctly (or close enough)

I'm not sure how practical it'd be on data longer than a credit-card number,
but it's an interesting hack nevertheless.
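The "pixelation as a hash" attack can be sketched end to end. Everything here is a toy stand-in (the glyph "renderer" is a deterministic random pattern, not a real font), but the structure matches the described attack: re-render each candidate character, apply the same block-average blur, and pick the closest match.

```python
# Brute-forcing block-pixelated redaction: if the plaintext alphabet is
# small, blur every candidate the same way and compare. Toy sketch only;
# render() is a stand-in for a real glyph renderer.
import numpy as np

def render(ch):
    """Toy deterministic 8x8 'glyph' for a character (stand-in for a font)."""
    rng = np.random.default_rng(ord(ch))
    return rng.random((8, 8))

def pixelate(img, block=4):
    """Block-average blur, like redaction mosaics."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

secret = "4271"
blurred = [pixelate(render(c)) for c in secret]

# Brute-force each position against the candidate alphabet
recovered = ""
for blob in blurred:
    recovered += min("0123456789",
                     key=lambda c: np.sum((pixelate(render(c)) - blob) ** 2))
print(recovered)   # "4271"
```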

------
KevinEldon
I've read a few of the very technical responses and they are great, but, for
me, the take away was the audience response. It's exactly what I look for when
I write software. I want that gasp, that moment where someone realizes they
can do a hard thing much easier. Where they realize that they just got a few
moments of time back.

------
Aloisius
Can this overcome some of the soft blurring media companies/journalists use to
hide naughty bits and to protect identities?

------
51Cards
Now that is a feature I would upgrade for.

------
natex
ImageMagick can already use this "algorithm". See "fourier transform"
applications such as:

[http://www.fmwconcepts.com/imagemagick/fourier_transforms/fo...](http://www.fmwconcepts.com/imagemagick/fourier_transforms/fourier.html#convolution_deconvolution)

~~~
teuobk
Deconvolution is easy. The main difficulty is estimating the blur kernel in a
so-called "blind" manner. That's the advance being shown off by Adobe.

~~~
natex
Right. It wasn't clear to me in the video that Adobe is using "blind"
deconvolution. I did catch a glimpse of a small black square within a pure
white field (on the right-side palette in the video). I'm assuming it's a
spatial-domain motion-blur filter variable through which the interface tweaks
the effect.

~~~
regularfry
I thought that was an _output_ field, showing what motion it had estimated.
Guess we'll have to wait until it's released to find out :-)

------
goodweeds
Is this much different than the Lytro "focus later" camera?
<http://www.lytro.com/>. I don't know much about imaging, but I've been
drooling over the demos I've seen online.

~~~
joshu
Completely different. This is a way to remove motion blur. Lytro is a way to
move some of the focusing elements from physical to computational.

------
daimyoyo
What they should do is partner with a DSLR maker and put this in the firmware
of the camera itself. Imagine one button blur correction. That'd be amazing.

~~~
MichaelApproved
I don't think DSLRs have the processing power needed to unblur the images. It
took several seconds to unblur a section of an image. It'd probably be faster
to upload the image to a cloud processor and have it unblur it for you.

~~~
slantyyz
_It'd probably be faster to upload the image to a cloud processor and have it
unblur it for you._

It might be more efficient to just teach people how to use their cameras so
that they can minimize _unwanted_ blurring.

------
oomkiller
Maybe they could apply this technology to Flash, so that video streaming on
YouTube isn't so blurry :)

------
dextorious
My version of Photoshop had that feature for years.

1) Load image. 2) Filter -> Gaussian Blur 3) Undo

;-)

------
georgieporgie
So, why did the whole video look really fake? It seemed to bob around in a
very predictable manner. When the first sharpening took effect, it panned and
zoomed exactly in time with the appearance of the second image.

I'm not claiming the demo is fake, I'm just wondering why the video looks so
strange.

~~~
StavrosK
I think it would be unreasonable to claim that it is fake anyway, since
relevant research has been public for some time. If there's an amount of
trickery involved, it's in the parameter files he loaded (they might be
nontrivial/nonautomatic to produce).

~~~
georgieporgie
Again, I'm not questioning the demo, I'm asking why the _video_ has so many
very strange traits.

~~~
StavrosK
I know, I was referring to people who might question it (such as the posters
below you), hence the "anyway".

------
zwischenzug
So who shot Kennedy?

------
kr1shna
Need this for my wife. Will that camera's RAW format need to be compatible
with this algorithm?

------
aguynamedben
ENHANCE <http://www.youtube.com/watch?v=Vxq9yj2pVWk>

~~~
beej71
It leaves out a couple of my favs:

Super Troopers: <http://www.youtube.com/watch?v=KiqkclCJsZs>

Red Dwarf: <http://www.youtube.com/watch?v=KUFkb0d1kbU>

