
Parallax effect from Google Lens Blur photos - bergie
http://depthy.stamina.pl/
======
aleyan
Well done. I have worked on animating 2D images with a hand-constructed depth
map in the past, to a decent 3D effect [1]. The illusion, when guided by your
mouse as in the OP, seems to be much stronger than in the left-right strafing
that I did.

One point that could be worked on, as other people here have pointed out, is
the occluded pixels (this is an issue whenever an image has sharp changes in
distance from the camera). You simply can't show what the camera never saw, so
you have to fake it. In the pre-baked animations I did, the occluded pixels
were duplicated and then manually fixed up by me in Photoshop.

Perhaps the solution here would be to capture and store images from several
perspectives. These could then be used to generate the depth map using
whatever algorithms Lens Blur uses, and also to interpolate the occluded
pixels when viewing the photo from different perspectives.
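The "duplicate, then fix up" approach can be sketched in a few lines. This is an illustrative stand-in (assuming numpy) for the manual Photoshop step described above: after a depth-based warp exposes holes, fill each hole pixel with the nearest valid pixel to its left.

```python
import numpy as np

def fill_occlusions(image, hole_mask):
    """Fill holes exposed by a depth-based warp by duplicating the
    nearest valid pixel to the left - a crude automatic stand-in for
    the manual fix-up described above."""
    filled = image.copy()
    h, w = hole_mask.shape
    for y in range(h):
        last_valid = None
        for x in range(w):
            if hole_mask[y, x]:
                if last_valid is not None:
                    filled[y, x] = last_valid  # duplicate the neighbour
            else:
                last_valid = filled[y, x]
    return filled
```

Real inpainting (content-aware fill, or interpolating from extra captured views) would of course look much better than this left-neighbour duplication.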

[1] [http://fooladder.tumblr.com/](http://fooladder.tumblr.com/)

~~~
kremlin
<off topic>5 bottles is really good!</off topic>

------
roeme
Surprisingly well faked, given that there's no real 3D information in the
pictures.

There simply cannot be new 'pixels' appearing that were hidden from the
camera's line of sight before the effect was applied. This is evident upon
closer inspection (look at the edges around the approximate middle of the
DOF). The trompe-l'œil then falls apart a bit.

Interestingly, I did not notice the above with the iOS 7 background parallax;
I wonder why? Special images, or a stricter constraint on movement?

~~~
panrafal
It's exactly that - a single image, and a depth map calculated from a series
of shots taken while moving the camera upwards. Both are bundled in the
fake-bokeh image. There are obviously no extra pixels. However... if you
choose your subject wisely, the effect can be pretty believable using a
simple displacement map.

On iOS though it's something different. There's no depth map in there, just a
flat image moving counter to your hand movements.
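The displacement-map idea is simple enough to sketch. This is not depthy's actual shader, just an illustration of the principle (assuming numpy): shift each pixel by an offset proportional to its depth value, with nearest-neighbour sampling clamped to the image edge.

```python
import numpy as np

def parallax_shift(image, depth, dx, dy):
    """Shift each pixel by an offset proportional to its depth
    (depth in [0, 1]; dx, dy are the offset in pixels for the
    nearest plane). Nearest-neighbour sampling, clamped at edges."""
    h, w = depth.shape
    ys, xs = np.indices((h, w))
    src_x = np.clip((xs + depth * dx).astype(int), 0, w - 1)
    src_y = np.clip((ys + depth * dy).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

Driving `dx, dy` from the mouse position (or an accelerometer) and re-rendering each frame gives the effect seen on the site; a GPU fragment shader does the same per-pixel lookup, just much faster.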

~~~
baddox
It sounds like there are two issues here. One is how the depth map is
generated, and the other is how the resulting image file is formatted. For the
former, several still images are collected while the camera is moving, which
provides parallax which can be used to generate the depth map. For the latter,
I don't know, but it would certainly be possible to bundle both the depth map
_and_ multiple "layers" of photograph that could be used to actually expose
new pixels in the background when the foreground moves.

~~~
panrafal
There is an app for iOS - seene.co. The amount of movement you have to do to
capture enough pixels is prohibitive for my taste. I think Google has nailed
it - it's super simple.

As for storing the layers - you would only have the "from above" pixels, and
only a few. That's probably why there is only Lens Blur in their app in the
first place.

If you just want a small displacement effect like on Depthy, then the key is
to have no sharp edges in the depth map. We will try to tackle this in one of
the upcoming updates...

------
arscan
It might be helpful to separate the code that creates the parallax effect from
the angular app that displays the website & examples. This is a really cool
trick and it would be nice to be able to easily reuse it. For those looking,
it seems like the "magic" happens in a couple of spots (I think):

[https://github.com/panrafal/depthy/blob/master/app/scripts/p...](https://github.com/panrafal/depthy/blob/master/app/scripts/pixi/DepthFilter.js)

and in

[https://github.com/panrafal/depthy/blob/master/app/scripts/c...](https://github.com/panrafal/depthy/blob/master/app/scripts/controllers/viewer.js)

~~~
panrafal
There already is a library for extracting the depth map here:
[https://github.com/spite/android-lens-blur-depth-extractor](https://github.com/spite/android-lens-blur-depth-extractor)
(the internet rocks, doesn't it? ;) )

As for the shader - it's just a few lines of actual code.

Depthy is a quick weekend hack. There's a long way to go.

~~~
aantix
Has anyone managed to replicate the lens blur effect that's being utilized in
the new Android camera app? Or at least know what research paper it's based
on?

~~~
panrafal
Yes, here:
[http://jabtunes.com/labs/lensblur/index3.html](http://jabtunes.com/labs/lensblur/index3.html)
and here:
[http://quasimondo.com/QuasimondoLibsJS/demos/DepthMapLensBlu...](http://quasimondo.com/QuasimondoLibsJS/demos/DepthMapLensBlur.html)

~~~
comex
That seems to just be using the depth map information stored by the app, not
replicating the effect from scratch.

~~~
panrafal
They recalculate the blur. Were you asking about calculating the depthmap?

~~~
aantix
Yes, I was wondering how to fully replicate the effect without the need for
the app.

------
bostonpete
Any way to save the hypnotize view as an animated GIF? That'd be really
cool...

------
bergie
Source code:
[https://github.com/panrafal/depthy](https://github.com/panrafal/depthy)

------
ctdonath
Interesting contrast, the next-gen Lytro camera featuring such refocusing &
perspective shift: [https://preorder.lytro.com/lytro-illum-pre-order](https://preorder.lytro.com/lytro-illum-pre-order)

------
juanpdelat
How is the parallax effect working without me using my mouse or shaking my
laptop? Am I going crazy, or do MacBook Pros have accelerometers?

Edit: Had to google it, they have something called Sudden Motion Sensor
[http://support.apple.com/kb/HT1935](http://support.apple.com/kb/HT1935)

------
andybak
Nice work! Are you doing any 'content-aware fill' style magic on the invented
occluded pixels?

I wonder what else can be done with the depth data... Fog is the obvious one.
You could potentially insert elements into the scene that knew when they were
behind foreground items.

I'm sure some kind of point-cloud or mesh could also be derived but not sure
how good it would be.
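Both of those depth tricks - fog, and inserting an element that gets occluded by nearer scenery - reduce to per-pixel comparisons against the depth map. An illustrative sketch (assuming numpy; function names are mine, not depthy's):

```python
import numpy as np

def apply_fog(image, depth, fog_color, density=1.0):
    """Blend towards fog_color with strength increasing with depth
    (depth in [0, 1], 1 = farthest)."""
    t = (1.0 - np.exp(-density * depth))[..., None]
    return image * (1 - t) + np.asarray(fog_color) * t

def insert_at_depth(image, depth, sprite, sprite_depth):
    """Composite a sprite into the scene so that scene pixels nearer
    than sprite_depth occlude it (smaller depth = nearer)."""
    out = image.copy()
    behind = depth >= sprite_depth  # scene is farther: sprite visible
    out[behind] = sprite[behind]
    return out
```

This is essentially a one-layer z-buffer test, which is why saving "the generated z-buffer" is exactly what makes these effects possible.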

Funnily enough I nearly posted to /r/android/ earlier "I wish you could save
the generated z-buffer" \- it didn't occur to me to actually look!

~~~
panrafal
Fog, refocus, background separation, chroma shifting.

As for the fill: if the depth map is of good quality and has no sharp edges,
there is no need for it, unless you go berserk with the scale of the
displacement.

------
kordless
I can't seem to get the top view when I upload an image. The image and map
display correctly below. I'll try downloading it and running it myself.

~~~
rellik
Same here (in firefox). Tried in Chrome and it works fine.

------
aaronetz
Related (works from a single image):
[http://make3d.cs.cornell.edu/](http://make3d.cs.cornell.edu/)

------
than
Damn this is cool. Even though there's some smearing to make up for the lack
of information, especially behind the railings on the deck, the effect is
striking, and subtle enough to be believable at a casual glance.

------
hadem
"Try using Chrome on your desktop or Android device"... I am...

~~~
panrafal
I will add <span ng-if="Modernizr.Android">another </span>Android ;)

------
prawn
This could be an effective differentiator for a real estate web site featuring
property photos. Or even an online store where product photos are done in this
way.

------
jameshart
Interesting that on the shelf image, it's the slightly diffuse reflections of
the items on the shelf in the wood grain that breaks the illusion for me.

------
general_failure
Very nice. Is there a writeup on how this works? It appears the shadow on the
pole in the second picture moves! (maybe it's just some gimmick).

------
tst
I would be interested to see how this effect affects the conversion rate. I
could imagine that it could work very well on physical products.

------
Aardwolf
This requires a depth map in the image.

Would it also work on _any_ image by calculating a depthmap from the
blurriness?

~~~
wtmcc
No, because “blurriness” (low local contrast) may indicate something besides a
particular depth.

Consider, for instance, a head-on photograph of a print of a shallow-focused
photo. The region that print occupies will have plenty of variation in
contrast, but exists at a single depth. Also, consider that blurring increases
both in front of and behind the plane of focus; how could we tell which depth
the blurring indicated?

Something similar to what you suggest is, however, done in software autofocus,
which can take repeated samples at different focal distances to clear things
up. Maybe that’s something to think about, e.g. for a static subject.
[http://en.wikipedia.org/wiki/Autofocus#Contrast_detection](http://en.wikipedia.org/wiki/Autofocus#Contrast_detection)
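The core of contrast detection is just a sharpness score evaluated across a focal stack. A minimal sketch (assuming numpy; this uses the variance of a simple Laplacian response, one of several common focus measures):

```python
import numpy as np

def focus_measure(gray):
    """Sharpness score: variance of a discrete Laplacian response.
    Blur suppresses high frequencies, lowering this score."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def sharpest_index(focal_stack):
    """Index of the image with the highest focus measure - the core
    decision in contrast-detection autofocus."""
    return int(np.argmax([focus_measure(g) for g in focal_stack]))
```

Applied per image region rather than per whole frame, the focal distance that maximizes the score in each region gives a coarse depth map - which is presumably why the repeated-samples idea above could work for a static subject.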

~~~
darkmighty
Yup, there's no simple way to recover depth from blur: there will be
featureless regions where you can't tell if there was any blur -- in other
words, there is no universal way to tell if a region has gone through a low-
pass filter or if it is natively low frequency.

Would heuristics work well? I can think of a handful, but none really good.

------
redthrowaway
I'm putting the over/under on how long till this is used for porn at 4.5 days.
Any takers?

~~~
MrScruff
As I understand it, the depth map is created by slowly moving the camera while
taking images of the same objects. It probably wouldn't work as well with a
moving subject (even small movements).

~~~
shawabawa3
"Slowly" is < 0.5 seconds. It would work fine on models.

------
splitbrain
All I see is a black page? (Chromium 33.0.1750.152)

404s for the script and CSS files

~~~
bergie
Yep, seems he deployed a bad build. It was working a bit earlier, and
hopefully will be again soon.

Edit: works now [http://depthy.stamina.pl/](http://depthy.stamina.pl/)

------
ahassan
It would be cool if this was added to the iOS parallax effect.

------
judk
Android Chrome gets "your browser does not support WebGL. You'll not see the
parallax effect... Try using Chrome on your desktop or Android device"

~~~
pornel
Works in Firefox for Android.

