One point that can be worked on, as others here have pointed out, is the occluded pixels (this is an issue when an image has sharp changes in distance from the camera). You simply can't show what wasn't seen by the camera, so you have to fake it. In the pre-baked animations I did, the occluded pixels were duplicated and then manually fixed by me in Photoshop.
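If you wanted to automate the rough duplication step, a minimal sketch might look like this (the hole-marking convention and the nearest-pixel strategy are assumptions of mine, not what I actually did in Photoshop):

```typescript
// Sketch: fill disoccluded (hole) pixels by duplicating the nearest visible
// pixel on the same scanline. Holes are assumed to be marked with alpha === 0.
function fillHolesByDuplication(rgba: Uint8ClampedArray, width: number, height: number): void {
  for (let y = 0; y < height; y++) {
    let lastVisible = -1; // x of the last non-hole pixel seen on this row
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      if (rgba[i + 3] !== 0) {
        lastVisible = x;
      } else if (lastVisible >= 0) {
        const j = (y * width + lastVisible) * 4;
        rgba[i] = rgba[j];
        rgba[i + 1] = rgba[j + 1];
        rgba[i + 2] = rgba[j + 2];
        rgba[i + 3] = 255;
      }
    }
  }
}
```

The result still needs manual touch-up, but it gives you something plausible to retouch instead of a blank hole.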
Perhaps the solution here would be to capture and store images from several perspectives. These could then be used to generate the depthmap using whatever algorithm Lens Blur uses, and also to interpolate the occluded pixels when viewing the photo from different perspectives.
There simply cannot be new 'pixels' appearing that were hidden (i.e. before applying the effect) behind objects in the camera's line of sight. This is evident upon closer inspection (look at the edges around the approximate middle of the DOF). The trompe-l'œil then falls apart a bit.
Interestingly, I did not notice the above with the iOS 7 background parallax; I wonder why. Special images, or a stricter constraint on movement?
On iOS, though, it's something different. There's no depth map there, just a flat image moving counter to your hand movements.
As for storing the layers - you would only have the "from above" pixels, and only a few. That's probably why there is only a Lens Blur in their app in the first place.
If you just want a small displacement effect like on depthy - then the key is no sharp edges in the depthmap. We will try to tackle this in one of the upcoming updates...
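For the curious, one cheap way to soften those edges - just a sketch, not necessarily what the update will do - is to blur the depth map slightly before using it for displacement:

```typescript
// Sketch: soften a per-pixel depth map with a small box blur so the
// displacement effect doesn't tear at object boundaries.
// depth is a width*height Float32Array with values in [0, 1].
function blurDepthMap(depth: Float32Array, width: number, height: number, radius = 2): Float32Array {
  const out = new Float32Array(depth.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      let count = 0;
      for (let dy = -radius; dy <= radius; dy++) {
        for (let dx = -radius; dx <= radius; dx++) {
          const nx = x + dx;
          const ny = y + dy;
          if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
            sum += depth[ny * width + nx];
            count++;
          }
        }
      }
      out[y * width + x] = sum / count;
    }
  }
  return out;
}
```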
If you have an Android device around, try depthy on it - it's way better this way imo.
But still, as a whole it's remarkably effective. I'd like to be able to appreciate it without moving my mouse -- I wonder what kind of camera movement function, at what speed, would make for the most unobtrusive effect that would still give the perception of depth?
Not just different left/right angles for each eye, but you could also rotate the angles by tilting your head. That would be a spectacular way to view still photos in VR.
Also, since the depth map only has a single depth value per pixel, you can get aliasing around the edges of the objects in focus, which gives them a halo.
As for the shader - it's just a few lines of actual code.
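For a rough idea of what those few lines look like, here is a sketch of the general technique (the uniform names and exact formula are assumptions of mine, not depthy's source):

```typescript
// Sketch of a depth-displacement fragment shader (WebGL1 GLSL, as a string).
// Not depthy's actual shader; uniform names and the formula are assumptions.
const displaceFragmentShader = `
  precision mediump float;
  varying vec2 vUv;          // texture coordinate from the vertex shader
  uniform sampler2D uImage;  // the photo
  uniform sampler2D uDepth;  // grayscale depth map, assumed 0 = far, 1 = near
  uniform vec2 uOffset;      // small parallax offset driven by mouse / gyro

  void main() {
    float depth = texture2D(uDepth, vUv).r;
    // shift the lookup more for near pixels, less for far ones
    vec2 shifted = vUv + uOffset * (depth - 0.5);
    gl_FragColor = texture2D(uImage, shifted);
  }
`;
```

You would feed `uOffset` a small value (a few hundredths of a texture coordinate) derived from the pointer or gyroscope; larger offsets expose the occlusion holes discussed above.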
Depthy is a quick weekend hack. There's a long way to go before that.
Edit: Had to google it, they have something called Sudden Motion Sensor http://support.apple.com/kb/HT1935
I wonder what else can be done with the depth data... Fog is the obvious one. You could potentially insert elements into the scene that knew when they were behind foreground items.
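Both are easy to sketch per pixel once you have the depth value (the function names and the depth convention below are assumptions of mine):

```typescript
// Sketch only: depth convention (0 = near, 1 = far) and names are assumptions.

// Blend a pixel toward a fog colour based on its depth.
function applyFog(rgb: [number, number, number], depth: number, fog: [number, number, number]): [number, number, number] {
  const t = Math.min(Math.max(depth, 0), 1); // farther pixels get more fog
  return [
    rgb[0] + (fog[0] - rgb[0]) * t,
    rgb[1] + (fog[1] - rgb[1]) * t,
    rgb[2] + (fog[2] - rgb[2]) * t,
  ];
}

// Decide whether an inserted element at elementDepth is visible at this pixel,
// i.e. whether it sits in front of whatever the photo captured there.
function insertedElementVisible(sceneDepth: number, elementDepth: number): boolean {
  return elementDepth < sceneDepth;
}
```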
I'm sure some kind of point cloud or mesh could also be derived, but I'm not sure how good it would be.
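A crude version is simple enough if you're willing to guess the camera intrinsics - a sketch, with the focal length as a placeholder assumption:

```typescript
// Sketch: turn a depth map into a rough point cloud by unprojecting each
// pixel with a simple pinhole camera model. focalPx is a made-up default;
// real intrinsics would have to come from the photo's metadata.
function depthToPointCloud(
  depth: Float32Array, width: number, height: number, focalPx = 1000
): Array<[number, number, number]> {
  const points: Array<[number, number, number]> = [];
  const cx = width / 2;
  const cy = height / 2;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const z = depth[y * width + x]; // depth along the view axis
      points.push([((x - cx) / focalPx) * z, ((y - cy) / focalPx) * z, z]);
    }
  }
  return points;
}
```

The quality would be limited by the same edge artifacts mentioned above: a single depth per pixel means object boundaries smear into "rubber sheets" between foreground and background.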
Funnily enough I nearly posted to /r/android/ earlier "I wish you could save the generated z-buffer" - it didn't occur to me to actually look!
As for the fill, if the depthmap is of good quality and without sharp edges - there is no need for that, unless you go berserk with the scale of displacement.
Would it also work on any image by calculating a depthmap from the blurriness?
Consider, for instance, a head-on photograph of a print of a shallow-focus photo. The region that print occupies will have plenty of variation in contrast, but exists at a single depth. Also, consider that blurring increases both in front of and behind the plane of focus; how could we tell which depth the blurring indicated?
Something similar to what you suggest is, however, done in software autofocus, which can take repeated samples at different focal distances to clear things up. Maybe that’s something to think about, e.g. for a static subject. http://en.wikipedia.org/wiki/Autofocus#Contrast_detection
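A sketch of how that could work for a static subject (entirely hypothetical, not what any camera app actually does): shoot a stack at different focal distances and, per pixel, take the slice where local contrast peaks as the depth estimate.

```typescript
// Sketch: estimate per-pixel depth from a focal stack by picking, for each
// pixel, the slice where local contrast (a crude sharpness measure) peaks.
// Each slice is a grayscale width*height Float32Array; the returned values
// are slice indices, which stand in for focal distance.
function depthFromFocalStack(slices: Float32Array[], width: number, height: number): Float32Array {
  const depthIndex = new Float32Array(width * height);
  const bestScore = new Float32Array(width * height).fill(-1);

  slices.forEach((slice, s) => {
    for (let y = 1; y < height - 1; y++) {
      for (let x = 1; x < width - 1; x++) {
        const i = y * width + x;
        // Laplacian magnitude as a local sharpness score.
        const lap =
          4 * slice[i] - slice[i - 1] - slice[i + 1] - slice[i - width] - slice[i + width];
        const score = Math.abs(lap);
        if (score > bestScore[i]) {
          bestScore[i] = score;
          depthIndex[i] = s;
        }
      }
    }
  });
  return depthIndex;
}
```

This sidesteps the in-front/behind ambiguity because each pixel is assigned the focal distance at which it was sharpest, rather than inferring distance from how blurry it is in a single exposure.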
Would heuristics work well? I can think of a handful, but none of them seem really good.
404s for the script and CSS files
Edit: works now http://depthy.stamina.pl/