
Lens Blur in the new Google Camera app - cleverjake
http://googleresearch.blogspot.com/2014/04/lens-blur-in-new-google-camera-app.html
======
grecy
We had an interesting discussion about this a few nights ago at a
Photojournalism talk.

In that field, digital edits are seriously banned, to the point that multiple
very well-known photojournalists have been fired for one little use of the
clone tool [1] and other minor edits.

It's interesting to think I can throw an f/1.8 lens on my DSLR and take a very
shallow depth of field photo, which is OK, even though it's not very
representative of what my eyes saw. If I take the photo at f/18 then use an
app like the one linked, producing extremely similar results, that's banned.
Fascinating what's allowed and what's not.

What I find even more interesting is the allowance of changing color photos to
B/W, or of almost anything that "came straight off the camera", no matter how
far it strays from what your eyes saw.

[1] [http://www.toledoblade.com/frontpage/2007/04/15/A-basic-
rule...](http://www.toledoblade.com/frontpage/2007/04/15/A-basic-rule-
Newspaper-photos-must-tell-the-truth.html)

~~~
jawns
When you say "digital edits are seriously banned," I think that's
overreaching. I'm a former newspaper editor, and "digital edits" in the form
of adjusting levels, color correction, etc., are performed on every single
photo.

What's not allowed, as you allude to, is retouching a photo.

So does introducing blur after the photo was taken count as retouching, or
does it fall into the same category as color correction? It's an interesting
question. On the one hand, it has the potential to obscure elements of the
picture, which seems like retouching, but on the other hand, you could just as
easily achieve the same effect with a DSLR and there would be no outcry.

~~~
acjohnson55
Those permitted edits you mention seem to all preserve the pixel-level
integrity of the image. The fact that intentional blurring does not seems like
a meaningful distinction to me. The fact that the blur layer must be
synthesized heuristically indicates that not all of the information in the
final image was really captured from the real world.

I suppose that resizing (resampling) an image might be said to not preserve
the integrity of the original pixels, but I think it does if you consider the
original pixels to be a reflection of the continuous field at the sensor.

Question for professionals -- how are noise reduction, masks (unsharp), etc.
treated?

~~~
grecy
>Those permitted edits you mention seem to all preserve the pixel-level
integrity of the image.

Curiously, you're allowed to convert color to black and white, which in my
opinion does not preserve pixel-level integrity. An algorithm is making a
choice about what level of gray each pixel should become.
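
To be fair, even the "standard" conversion is just a weighted mix of the
channels; here's a minimal sketch, assuming the common Rec. 601 luma weights
(other tools use Rec. 709 or let you mix channels by hand, which is exactly
why two converters can disagree):

    import numpy as np

    def to_grayscale(rgb, weights=(0.299, 0.587, 0.114)):
        """Convert an HxWx3 RGB array to grayscale.

        The default weights are the Rec. 601 luma coefficients; Rec. 709 uses
        (0.2126, 0.7152, 0.0722), so the "level of gray" a pixel becomes
        really is an algorithmic choice.
        """
        rgb = np.asarray(rgb, dtype=np.float64)
        return rgb @ np.asarray(weights)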

> Question for professionals -- how are noise reduction, masks (unsharp), etc.
> treated?

I'm not a pro yet, but my understanding is it's a big no-no.

~~~
coldtea
Huh? Noise reduction is not a no-no by any means. At least not at some
newspapers I know.

Also, I have to add that it differs for news reporting versus stuff like
interview shots, travel shots etc.

~~~
grecy
> Also, I have to add that it differs for news reporting versus stuff like
> interview shots, travel shots etc.

Right, it really depends on if the shot is being used for "news" or if it's
just an artistic shot to fill space.

That's why at the start of this whole thread I reference photojournalism.

~~~
coldtea
> _That's why at the start of this whole thread I reference photojournalism._

I saw it, but it's not that clear cut.

What you write, "an artistic shot to fill space" implies to me generic
illustration pictures, which the above isn't an example of.

I think the restrictions on editing mostly apply to photos about stuff like
politics, world affairs, crime, etc. -- stuff that is presented as 100% dry
news.

But the term photojournalism covers other stuff too, right? Isn't, say, a
travel article written by a journalist with a photographer photojournalism
too? Or the images taken by a photojournalist for a piece on dance culture,
the burning man, stuff like that. Or for a sports feature.

~~~
grecy
> But the term photojournalism covers other stuff too, right? Isn't, say, a
> travel article written by a journalist with a photographer photojournalism
> too? Or the images taken by a photojournalist for a piece on dance culture,
> the burning man, stuff like that. Or for a sports feature.

I agree, and as I understand it, anything beyond some basic level/color
adjustments and cropping is a no-no in those areas if you want to keep your
integrity.

------
jawns
Regarding the technology (achieving shallow depth of field through an
algorithm), not Google's specific implementation ...

Up until now, a decently shallow depth of field was pretty much only
achievable in DSLR cameras (and compacts with sufficiently large sensor sizes,
which typically cost as much as a DSLR). You can simulate it in Photoshop, but
generally it takes a lot of work and the results aren't great. The "shallow
depth of field" effect was one of the primary reasons why I bought a DSLR.
(Yeah, yeah, yeah, quality of the lens and sensor are important too.) Being
able to achieve a passable blur effect, even if it's imperfect, on a cellphone
camera is really pretty awesome, considering the convenience factor. And if
you wanted to be able to change the focus after you take the picture, you had
to get a Lytro light field camera -- again, as expensive as a DSLR, but with a
more limited feature set.

Regarding Google's specific implementation ...

I've got a Samsung Galaxy S4 Zoom, which hasn't yet gotten the Android 4.4
update, so I can't use the app itself to evaluate the Lens Blur feature, but
based on the examples in the blog post, it's pretty good. It's clearly not
indistinguishable from optical shallow depth of field, but it's not so bad
that it's glaring. That you can adjust the focus after you shoot is icing on
the cake, but tremendously delicious icing. The S4 Zoom is a really terrific
point-and-shoot that happens to have a phone, so I'm excited to try it out.
Even if I can use it in just 50% of the cases where I now lean on my DSLR,
it'll save me from having to lug a bulky camera around AND be easier to share
over wifi/data.

~~~
raldi
_> Up until now, a decently shallow depth of field was pretty much only
achievable in DSLR cameras (and compacts with sufficiently large sensor sizes,
which typically cost as much as a DSLR_

In 2008, I had no trouble taking shallow-depth-of-field photos with a dirt-
cheap Canon A570 pocket camera. For example:

[https://farm3.staticflickr.com/2069/2076688334_aeae12583b_b....](https://farm3.staticflickr.com/2069/2076688334_aeae12583b_b.jpg)

~~~
mikeg8
Your example looks fucking delicious... (currently living abroad and missing
double doubles more than ever)

~~~
hamburglar
animal-style, no less

------
DangerousPie
Isn't this just a copy of Nokia's Refocus?

[https://refocus.nokia.com/](https://refocus.nokia.com/)

edit - better link: [http://www.engadget.com/2014/03/14/nokia-refocus-camera-
app-...](http://www.engadget.com/2014/03/14/nokia-refocus-camera-app-lumia-
update/)

~~~
timbre
The method at least is very different. The Google app is doing structure from
motion, which essentially uses parallax to get the 3D shape of the scene. From
there, you can blur/deblur according to depth. The Nokia app uses focal sweep,
i.e. it just takes lots of pictures of the same scene focussed at different
depths. I'm not sure what the pros and cons of each approach are.
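
The parallax step boils down to triangulation once features are matched
between frames; a toy sketch using the usual pinhole/stereo simplification
(the focal length and baseline numbers are made up, and the app's real SfM
pipeline is certainly more involved):

    import numpy as np

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Classic stereo relation: depth = f * B / d.

        disparity_px:    per-feature shift between the two frames (pixels)
        focal_length_px: camera focal length expressed in pixels
        baseline_m:      how far the camera moved between frames (metres)
        """
        d = np.asarray(disparity_px, dtype=np.float64)
        return focal_length_px * baseline_m / np.maximum(d, 1e-6)

    # Toy numbers: ~3000 px focal length, 5 cm of hand movement.
    print(depth_from_disparity(np.array([50.0, 10.0]), 3000.0, 0.05))
    # -> [ 3. 15.]  (points that shift more between frames are closer)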

~~~
Orangeair
The Nokia app requires a longer exposure time, encompassing multiple shots,
which basically means that anything moving is a no-go. The quality of its
pictures is theoretically superior, though, because the depth effect comes
from the photos themselves, and not a simulation.

Google's version, on the other hand, works with only a single photo, so it's
more versatile. It may suffer issues with quality, though (arising from the
difficulties in accurately extracting depth values from a 2D image). That has
yet to be seen.

~~~
zmmmmm
> Google's version, on the other hand, works with only a single photo, so it's
> more versatile.

This doesn't seem to be what everyone else is saying. Most people here are
saying it takes multiple shots as you move the camera.

------
dperfect
I believe the algorithm could be improved by applying the blur to certain
areas/depths of the image _without_ including pixels from very distant depths,
and instead blurring/feathering edges with an alpha channel over those distant
(large depth separation) pixels.

For example, if you look at the left example photo by Rachel Been[1], the hair
is blurred together _with the distant tree details_. If instead the algorithm
detected the large depth separation there and applied the foreground blur edge
against an alpha mask, I believe the results would look a lot more natural.

[1]
[http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAAR...](http://4.bp.blogspot.com/-bZJNDZGLS_U/U03bQE2VzKI/AAAAAAAAARI/yQwRovcDWRQ/s1600/image3.png)
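
Concretely, something like this rough sketch of the idea: a normalized
(premultiplied) blur so the foreground never samples distant pixels, with the
blurred mask reused as a feathered alpha. The Gaussian is just a stand-in for
whatever kernel the app actually uses, and the threshold is arbitrary:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_foreground_over_background(img, depth, fg_max_depth, sigma=5.0):
        """Blur only the near layer, then composite it over the sharp rest.

        img:   HxWx3 float image in [0, 1]
        depth: HxW depth map (larger = farther)
        fg_max_depth: depths below this are treated as foreground
        """
        img = np.asarray(img, dtype=np.float64)
        fg_mask = (depth < fg_max_depth).astype(np.float64)

        # Normalized convolution: blur image*mask and mask separately, then
        # divide, so background pixels never leak into the blurred hair.
        blurred_mask = gaussian_filter(fg_mask, sigma)
        blurred = np.empty_like(img)
        for c in range(3):
            blurred[..., c] = (gaussian_filter(img[..., c] * fg_mask, sigma)
                               / np.maximum(blurred_mask, 1e-6))

        # The blurred mask doubles as a feathered alpha for compositing.
        alpha = np.clip(blurred_mask, 0.0, 1.0)[..., None]
        return alpha * blurred + (1.0 - alpha) * img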

~~~
aray
I'd love to see some test images with that (just get a side-by-side with the
app and a larger aperture camera).

As I understand what you're proposing, I'm not sure it would actually be
closer to what a large-aperture camera would capture. The light field from the
farther depth field _should_ be convolving with the light field from the near
depth field.

Still, side-by-side would be the best way to view these :) I'll do it later
this weekend if I get the chance.

~~~
kbrower
Side by side comparison
[http://onionpants.s3.amazonaws.com/IMG_0455.jpg](http://onionpants.s3.amazonaws.com/IMG_0455.jpg)

~~~
aray
Thanks! Great example. Does it look like the large camera is focusing a few
inches back from the front of the ball and the Google Camera is sharpest at
the center of the ball?

Edit: I was trying to look at the sharpness of features on the ball --
specifically the lettering on the left side.

~~~
kbrower
Focus point of the large camera should be the exact center of the image. The
center of the ball is a little out of focus as the center of the image is the
top left of the ball. User Error :)

------
salimmadjd
Is the app taking more than one photo? It wasn't clear in the blog post. AFAIU
to have any depth perception you need to take more than one photo: calculate
the pupil distance (the distance the phone moved), match image features
between the two or more images, then use the amount of movement between the
matching features to calculate the depth.
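
A rough sketch of that feature-matching step (the blog doesn't say which
tracker they use; OpenCV corner detection plus Lucas-Kanade optical flow here
is just an illustration of the idea):

    import cv2
    import numpy as np

    def track_parallax(frame_a, frame_b):
        """Track corners from one frame to the next and return their shifts.

        The magnitude of each shift (parallax) is what ultimately maps to
        depth: nearby points move more between the two frames than far ones.
        """
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

        pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=500,
                                        qualityLevel=0.01, minDistance=8)
        pts_b, status, _ = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)

        ok = status.ravel() == 1
        shifts = (pts_b - pts_a).reshape(-1, 2)[ok]
        return np.linalg.norm(shifts, axis=1)  # parallax per tracked feature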

As described, you then map the depth into an alpha transparency and apply the
blurred image, with varying blur strength, over the original image.

Since you're able to apply the blur after the image is taken, that would mean
the Google camera always takes more than one photo.

Also, a cool feature would be to animate the transition from no blur to DOF
blur as a short clip, or to use the depth information to apply effects other
than blur, like selective coloring or other filters.

~~~
slaven
Yes - you have to move the camera upwards as it takes a series of photos. It
is not working from a single photo.

~~~
JimmaDaRustla
I just used it, it seems to use the movement to calculate the depth, but the
initial image is not blurred or mutated in any way other than how it has
calculated the depth.

------
nostromo
I sure wish you could buy a DSLR that just plugs into your iPhone. I don't
want any of that terrible DSLR software -- just the hardware.

I think many devices should become BYOD (bring your own device) soon,
including big things like cars.

edit: I don't just want my pictures to be saved on my phone. I'd like the
phone to have full control of the camera's features -- so I can use apps (like
timelapse, hdr, etc.) directly within the camera.

~~~
aray
They have large-aperture digital cameras that are similar already:
[http://store.sony.com/smartphone-attachable-lens-style-
camer...](http://store.sony.com/smartphone-attachable-lens-style-camera-
zid27-DSCQX100/B/cat-27-catid-All-Cyber-shot-Cameras)

I'm curious why you'd want a DSLR, though, because if it attaches to my phone
I'd probably be happy to use the phone screen as the viewfinder and save the
depth and weight that would otherwise go to a moving mirror assembly.

------
kbrower
I did a quick comparison of a full-frame SLR vs. a Moto X with this lens blur
effect. I tried to match the blur amount, but made no other adjustments. Works
really well compared to everything else I have seen!
[http://onionpants.s3.amazonaws.com/IMG_0455.jpg](http://onionpants.s3.amazonaws.com/IMG_0455.jpg)

~~~
piyush_soni
Humm ... Looks only 'ok-ish' to me ... Though I've done some experiments (no
DSLR here) just with people, and it looks good there.

------
themgt
Is looking at the examples giving anyone else a headache? It's like the
software blur falls into some kind of uncanny valley for reality.

~~~
Zigurd
I agree that it irks me. But I find that irritation hard to justify: Shallow
depth of field is artificial in all cases. Your vision doesn't really work
that way. Even saying "Your eye is like a camera" is only partly true. You
can't get at the unprocessed image. What you think you see isn't the image on
your retina.

So saying that the effect as a result of a wide-open aperture is more truthful
than algorithmically blurring the background of a photo seems odd. Both are a
photographic artifice that approximates what you think you see when your
attention is on one object in your field of view.

The same is true for the effects of focal length. A longer lens _approximates_
, but can never actually reproduce, the effect of the brain trying to make
same-sized things look the same size. A shorter focal length does the
opposite, and puts more emphasis on foreground objects.

~~~
blt
I do not understand what you mean by _" Shallow depth of field is artificial
in all cases. Your vision doesn't really work that way."_ If I hold my hand
very close and focus on it, the background obviously becomes blurred.

~~~
Zigurd
Your eye has long-ish depth of field. Your brain also compensates for focus.
So you perceive an in-focus area several inches to many feet deep.

With an f/1.2 lens, I can put objects just a few millimeters in the foreground
and background of a subject out of focus.

Photographers, consciously or otherwise, use a language of optics effects to
suggest ways of seeing, but they never work the same way as your vision
system, which also lacks the ability to introspectively show you the raw data
from your eye. So the saying that "your eye is a camera" is true, but the
camera image is not directly accessible to your own mind.

So, somewhat ironically, this faux DoF effect might work more like your eye,
putting a whole foreground object in sharp focus, and making the background
uniformly "blurry."

------
jnevelson
So Google basically took what Lytro has been using hardware to achieve, and
did it entirely in software. Pretty impressive.

~~~
ISL
The difference is that Lytro actually does it, instead of simulating it.

It'd be fun to play around with the software to see in which cases it breaks
(perhaps taking a photo of a framed landscape photo with another landscape
behind it, for example).

~~~
apu
Lytro is also "just" simulating it, but with slightly more/different data.
They capture a light field but that doesn't magically give them depth values;
they have to estimate them using an optimization, and then render the final
image using a very similar algorithm.

~~~
ISL
Edit: Whoops: I mis-read the article; 'Lens Blur' uses multiple frames. That
lets you get a lot more information.

~~~
ISL
Well, the Lytro will still win on moving subjects, where a sequence convolves
subject motion with the depth map.

------
fidotron
Doesn't look totally convincing, but it's good for a first version.

The real problem with things like this is that the effect became cool by
virtue of needing dedicated equipment. Take that away, and people's desire to
apply the effect will be greatly diminished.

------
sytelus
Wow.. this is missing the _entire_ point of why lens blur occurs. Lens blur in
normal photographs is the price you pay for focusing sharply on a subject. The
reason photos with blur look "cool" is not the blur itself; it's because the
subject is so sharply focused that its details are an order of magnitude
better.

If you take a random photo, calculate a depth map somehow, and blur out
everything but the subject, then you are taking away information from the
photo without adding information to the subject. The photos would look "odd"
to trained eyes at best. For a casual photograph it may look slightly cool on
small screens like phones, because of the relatively increased perceived focus
on the subject, but it's fooling the eyes of the casual viewer.

If they want to really do it (i.e. add more detail to the subject) then they
should use multiple frames to increase the resolution of the photograph. There
is a lot of research being done on that. Subtracting detail from the
background without adding detail to the subject is like doing an Instagram. It
may be cool to teens, but professional photographers know it's in bad taste.

~~~
alkonaut
> Subtracting details from background without adding details to subject ...

Not sure what you mean by this. Blur is only due to focus distances and
aperture sizes. Making the depth of field narrower (making the OOF regions
more blurry) does not add detail to the areas that are _in_ focus. Usually,
it's even the other way around.

Example:

Say we shoot a portrait of a person at 5m, with a forest 30m away in the
background, at 3 different apertures: f/1.4, f/11 and f/20.

At the largest aperture (f/1.4) the background will be completely out of focus
and the face of the subject will have sharpness at "80%" of what my
lens/sensor combo can do in terms of resolution. The less-than-excellent
subject sharpness is because lenses aren't perfect and using the largest
aperture will reveal this. Even if you use an expensive professional lens, it
will have its maximum sharpness at some aperture that is smaller than the
largest. What does happen in the shallow DOF shot is that we have a form of
perceived sharpness (usually referred to as "pop") which is an effect that is
simply due to the fact that the subject is so distinct from the background.

At f/11 the subject sharpness is better than at f/1.4. It is now probably near
100% of the maximum resolution the sensor/lens combination can deliver. The
background is significantly more discernible/focused now. What was a green
blur in the f/1.4 shot is now a forest of very slightly blurred trees.

At f/20 the subject's sharpness is again less (e.g. 90%), this time due to the
physical limitation known as diffraction that occurs for very small apertures
compared to the wavelength. This shot has completely focused trees in the
background.

To put it another way: when you take the f/11 portrait and go to f/1.4 you
take away almost ALL of the background information, and SOME of the foreground
information, while adding NO new information. The entire shot will be less
focused when you do.
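
To put rough numbers on that, a back-of-the-envelope sketch using the thin-lens
blur-circle approximation (the 85mm lens and the ~0.03mm full-frame sharpness
threshold are my own assumptions, not from any particular setup):

    def background_blur_mm(f_mm, f_number, subject_m, background_m):
        """Diameter of the blur circle on the sensor for a background point,
        when the lens is focused on the subject (thin-lens approximation)."""
        aperture_mm = f_mm / f_number
        subject_mm = subject_m * 1000.0
        background_mm = background_m * 1000.0
        return (aperture_mm * f_mm / (subject_mm - f_mm)
                * abs(background_mm - subject_mm) / background_mm)

    for n in (1.4, 11, 20):
        c = background_blur_mm(85, n, 5, 30)
        print(f"f/{n}: blur circle ~{c:.3f} mm (sharp threshold ~0.03 mm)")
    # f/1.4: ~0.875 mm -> background completely smeared
    # f/11:  ~0.111 mm -> trees only slightly soft
    # f/20:  ~0.061 mm -> background close to acceptably sharp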

~~~
swimfar
But there are still advantages. With a larger aperture the sensor/film is
receiving more light which means the shutter speed can be increased. If the
subject isn't perfectly still this can result in decreased blurriness. It's
been a while since my HS photography class but I think that's correct.

~~~
alkonaut
Yes, my wall of text above applies to static subjects and cameras only.

Whether the freezing of subjects results in more information (detail) or less
(without movement info) is subjective.

------
nileshtrivedi
With these algorithms, will it become feasible to make a driverless car that
doesn't need a LIDAR and can run with just a few cameras?

Currently, the cost of LIDAR is prohibitive for building (or even
experimenting with) a DIY self-driving car.

~~~
rjdagost
These algorithms can allow you to have a self-driving car with only cameras.
But, there would be a lot of problems if you tried to make a camera-only
system for consumer vehicle navigation. Vision systems need distinct
"features" in images to find and track across frames to allow you to compute
distance, speed, etc. If you don't have many features, pure vision approaches
won't work. Nighttime operation is a big problem, as is driving on relatively
smooth, featureless terrain.

The basic downside is that standard consumer cameras are passive devices.
That's why Google uses LIDAR -- it's an "active" technology that creates its
own features. And driving is an application where the usual computer vision
"it works most of the time" is just not good enough. Time-of-flight cameras
are interesting sensors that combine active and passive technology. As this
technology matures it might allow for self-driving cars without LIDAR.

~~~
nileshtrivedi
Thanks for that explanation. I have been thinking of experimenting with
automated driving with cameras and got encouraged by things like these:
[https://www.youtube.com/watch?v=dcm9NpMNi68](https://www.youtube.com/watch?v=dcm9NpMNi68)
. But yeah, I can understand how ideal conditions are very different from the
real world.

------
scep12
Impressive feat. Took a few snaps on my Nexus 4 and it seems to work really
well given a decent scene.

~~~
jevinskie
I tried it out as well. Good results, especially for their first public
iteration.

------
angusb
A couple of other really cool depth-map implementations:

1) The Seene app (iOS app store, free), which creates a depth map and a
pseudo-3d model of an environment from a "sweep" of images similar to the
image acquisition in the article

2) Google Maps Photo Tours feature (available in areas where lots of touristy
photos are taken). This does basically the same as the above but using
crowdsourced images from the public.

IMO the latter is the most impressive depth-mapping feat I've seen: the source
images are amateur photography from the general public, so they are randomly
oriented (and without any gyroscope orientation data!), and uncalibrated for
things like exposure, white balance, etc. Seems pretty amazing that Google
have managed to make depth maps from that image set.

------
gamesurgeon
One of the greatest features is the ability to change your focus point AFTER
you shoot. This is huge.

~~~
Panoramix
Kind of like an all-software Lytro
[https://www.lytro.com/](https://www.lytro.com/)

------
Spittie
I find it funny that this was one of the "exclusive features" of the HTC One
M8 thanks to the double camera, and days after its release Google is giving
the same ability to every Android phone.

I'm sure the HTC implementation works better, but this is still impressive.

~~~
dannyr
Well, only Android 4.4.x phones. But coming to other Android versions.

~~~
pjmlp
Where do they state < 4.4 users would ever get it? I only find mentions of
4.4.

------
mauricesvay
The interesting part is not that it can blur a part of the image. The
interesting part is that it can generate a depth map automatically from a
series of images taken from different points of view, using techniques from
photogrammetry.

------
bckrasnow
Well, the Lytro guys are screwed now. They're selling a $400 camera with this
feature as the main selling point.

~~~
sjtrny
Except you don't have to move the Lytro

------
jestinjoy1
This is what I got with the Moto G Google Camera app:
[http://i.imgur.com/a6AxO4e.jpg](http://i.imgur.com/a6AxO4e.jpg)

~~~
r00fus
Your pic looks better than the ones on Google's blog.

------
Lutin
This app is now on the Play Store and works with most phones and tablets
running Android 4.4 KitKat. Unfortunately it seems to crash on my S3 running
CM 11, but your experience may vary.

[https://play.google.com/store/apps/details?id=com.google.and...](https://play.google.com/store/apps/details?id=com.google.android.GoogleCamera)

------
Splendor
Isn't the real story here that Google is continuing to break off core pieces
of AOSP and offer them directly via the Play Store?

------
frenchman_in_ny
Does this pretty much blow Lytro out of the water, and mean that you no longer
need dedicated hardware to do this?

~~~
sp332
The Lytro images have a lot more data in them (although probably fewer
pixels). You can move around slightly in a Lytro photo because it samples
light from all directions. Also, Google's version doesn't seem to use the
multiple exposures in the final photo. It only uses them to determine which
pixels to blur in the first photo, and uses normal gaussian blur instead of
simulating a lens.

~~~
avaku
It does simulate the lens, it says so in the article

~~~
sp332
Oh... well it doesn't do a very good job. The last example has her hair
blurred where it should be sharp. That's not very lens-like.

~~~
avaku
Yeah, in the first example too (hair at the bottom). But it's still _much_
better than Gaussian blur...

------
DanielBMarkham
Lately I've been watching various TV shows that are using green
screen/composite effects. At times, I felt there was some kind of weird DOF
thing going on that just didn't look right.

Now I know what that is. Computational DOF. Interesting.

Along these lines, wasn't there a camera technology that came out last year
that allowed total focus/DOF changes post-image-capture? It looked awesome,
but IIRC, the tech was going to be several years until released.

ADD: Here it is. Would love to see this in stereo 4K:
[http://en.wikipedia.org/wiki/Lytro](http://en.wikipedia.org/wiki/Lytro) The
nice thing about this tech is that in stereo, you should be able to eliminate
the eyeball-focus strain that drives users crazy.

------
anigbrowl
It's interesting that the DoF is calculated in the app. I am wondering if this
uses some known coefficients of smartphone cameras to save computation, but in
any case I hope this depth mapping becomes available in plugin form for
Photoshop and other editors.

As an indie filmmaker, it would save a lot of hassle to be able to shoot at
infinity focus all the time and apply bokeh afterwards. Of course an
algorithmic version would likely never get close to what you can achieve with
quality optics, but in many situations where image quality is "good enough"
for artistic purposes (e.g. shooting with a video-capable DSLR), faster is
better.

------
panrafal
I've created a parallax viewer for lens blur photos. It's an open source web
app available at [http://depthy.stamina.pl/](http://depthy.stamina.pl/) . It
lets you extract the depthmap, works in Chrome with WebGL, and looks pretty
awesome on some photos. There are quite a few things you can do with this kind
of image, so feel free to play around with the source code on GitHub:
[https://github.com/panrafal/depthy](https://github.com/panrafal/depthy)

------
guardian5x
I guess that is exactly the same as Nokia's Refocus, which has been on Lumia
phones for quite some time: [https://refocus.nokia.com/](https://refocus.nokia.com/)

~~~
refrigerator
Refocus is awesome and I owe it many good shots, but it's not as powerful as
this at all - Refocus takes like 10 photos at different levels of focus, so
you're limited to the amount of bokeh that the small phone sensor can produce,
i.e. you only get a nice blurred background when you're taking a really
close-up picture of something. The new Google Camera feature lets you take a
portrait at a normal distance and simulate f/1.8, which is a lot better than
what Refocus can do.

------
kingnight
I'd like to see an example of an evening/night shot using this. I can't
imagine the results are anything like the examples here, but I would love to
be surprised.

Are there more samples somewhere?

~~~
avaku
Next they should introduce an infrared laser for measuring depth like in the
Xbox; then it will totally work in the dark :)

~~~
spyder
Google Project Tango:
[https://www.google.com/atap/projecttango/](https://www.google.com/atap/projecttango/)

------
goatslacker
On iOS you can customize your DoF with an app called Big Lens.

Normally apps like Instagram and Fotor let you pick one point in the picture
or a vertical/horizontal segment and apply focus there while blurring the
background. Big Lens is more advanced since it lets you draw with your finger
what you'd like to be in focus.

They also include various apertures you can set (as low as f/1.8) as well as
some filters -- although I personally find the filters to be overdone, others
might find them tasteful.

------
techaddict009
Just installed it. Frankly speaking I loved the new app!

------
marko1985
I'm happy about this "invention", but I would wait for this kind of feature
until smartphones have laser sensors for depth measurement, so the calculation
doesn't require a sequence of pictures; the main subject could move quickly
and deform the final picture or the blur effect. But for static photography or
selfies it looks amazing.

------
mcescalante
I may be wrong because I don't know much about image based algorithms, but
this seems to be a pretty successful new approach to achieving this effect.
Are there any other existing "lens blur" or depth of field tricks that phone
makers or apps are using?

I'd love to see their code open sourced.

~~~
theoh
SynthCam is one, free but not open source. You might be interested in the
various bits of lightfield rendering code released by Stanford:
[http://lightfield.stanford.edu/](http://lightfield.stanford.edu/)

~~~
aray
Synthetic aperture-like algorithms are also common at SIGGRAPH if you go
through the past two decades.

A quick search didn't unearth any, but there is open source software to do
parallax depth inferencing, and you could just apply proportional gaussian
blur kernels to each depth segment to get a very similar effect.

~~~
ygra
Gaussian blur looks _very_ different from the blur produced by an aperture.
You need a round blur kernel, not one with a bell curve fall-off.
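
For illustration, a minimal sketch of a flat disc kernel versus a Gaussian
(the radius and the library calls are arbitrary choices of mine, not anything
from the app):

    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter

    def disc_kernel(radius_px):
        """Flat circular kernel: every point inside the blur circle contributes
        equally, which turns small highlights into bright discs ("bokeh balls")
        rather than the soft smears a Gaussian produces."""
        r = int(np.ceil(radius_px))
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        k = (x * x + y * y <= radius_px * radius_px).astype(np.float64)
        return k / k.sum()

    def defocus_blur(gray_img, radius_px):
        return convolve(gray_img, disc_kernel(radius_px), mode="nearest")

    # Compare against gaussian_filter(gray_img, sigma) around small bright
    # highlights to see how different the two falloffs look.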

------
thenomad
So, is there a way to get the depth map out of the image separately for more
post-processing?

Fake DOF is nice, but there are a lot more fun things you can use a depth map
for. For example, it seems like ghetto photogrammetry (turning photographs
into 3D objects) wouldn't be too far away.

------
jheriko
This sounds clever but also massively complex for what it does. I don't have
anything finished but I can think of a few approaches to this without needing
to reconstruct 3d things with clever algorithms... still very neat visually if
technically underwhelming

------
defdac
Is this related to the point cloud generation feature modern compositing
programs use, like Nuke? Example/tutorial video:
[http://vimeo.com/61463556](http://vimeo.com/61463556) (skip to 10:27 for
magic)

------
tdicola
Neat effect -- I'm definitely interested in trying this app. Would be cool to
see them go further and try to turn highlights in the out-of-focus areas into
nice octagons or other shapes caused by the aperture blades in a real camera.

------
spot
i just noticed i have the update and i tried it out. wow, first try. amazing:
[https://plus.google.com/+ScottDraves/posts/W4ozBLTBmKy](https://plus.google.com/+ScottDraves/posts/W4ozBLTBmKy)

------
anoncow
How is Nokia Refocus similar or different to this? It allows refocusing on a
part of the image, which blurs out the rest. (Not a pro)
[https://refocus.nokia.com/](https://refocus.nokia.com/)

~~~
pollen23
Nokia Refocus is quite different.

Refocus is a set of photos taken at different focus distances, plus a lookup
table.

For each "pixel" (the lookup table isn't full resolution), the table tells
which of the photos has the most variance (i.e., is the most focused) at that
point, and the viewer simply switches the photo that's shown.
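
A toy version of how such a table could be built (the block size and
variance-as-sharpness measure are my guesses at the idea, not Nokia's actual
implementation):

    import numpy as np

    def focus_lookup_table(stack, block=16):
        """For each block of pixels, record which photo in the stack is sharpest.

        stack: N grayscale frames, all HxW, focused at different distances.
        Local variance is used as a cheap sharpness measure.
        """
        frames = np.asarray(stack, dtype=np.float64)
        n, h, w = frames.shape
        table = np.zeros((h // block, w // block), dtype=np.int64)
        for by in range(h // block):
            for bx in range(w // block):
                tile = frames[:, by*block:(by+1)*block, bx*block:(bx+1)*block]
                table[by, bx] = int(np.argmax(tile.var(axis=(1, 2))))
        return table  # tap a point -> show stack[table[y // block, x // block]]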

~~~
anoncow
Thanks!

------
sivanmz
It's a cool gimmick that would be useful for Instagram photos of food. But
selfies will still be distorted when taken up close with a wide angle lens.

It would be interesting to pair this with Nokia's high megapixel crop-zoom.

------
the_cat_kittles
Isn't it interesting how, by diminishing the overall information content of
the image by blurring it, it actually communicates more (in some ways,
particularly depth) to the viewer?

~~~
selmnoo
Well, usually it's for a specific purpose (drawing attention to something) or
aesthetic that you use a shallow depth of field. There have been artists in
the past that tried to go extreme in the other direction, trying to capture as
wide DOF as they could.

This is a good example:
[http://gallery.realitydesign.com/dof.jpg](http://gallery.realitydesign.com/dof.jpg)

Both of these are good individually. But what is best depends totally on what
exactly you're going for. Stanley Kubrick did very interesting experimentation
with this.

------
benmorris
The app is fast on my Nexus 5. The lens blur feature is really neat. I've
taken some pictures this evening and they have turned out great. Overall a
nice improvement.

------
insickness
> First, we pick out visual features in the scene and track them over time,
> across the series of images.

Does this mean it needs to take multiple shots for this to work?

~~~
sjtrny
Yes it says so in the previous paragraph

------
zmmmmm
If nothing else, these improvements make HTC's gimmick of adding the extra
lens while giving up OIS seem all the more silly.

------
servowire
I'm no photographer, but I was taught this was called bokeh, not blur. Blur is
more because of motion while the shutter is open.

~~~
yellow
"Bokeh" is specific to photo characteristics. "blur" is more general but still
applies: "a thing that cannot be seen clearly".

------
dharma1
The accurate depth map creation from 2 photos on a mobile device is
impressive. The rest has been done many times before.

This is cool, but I am waiting more for RAW images to be exposed in the
Android camera API. It will be awesome to do some cutting-edge tonemapping on
the 12 bits of dynamic range that the sensor gives, which is currently lost.

------
coin
Shallow depth of field is so overused these days. I much prefer having the
entire frame in focus, and letting me decide what to focus on. I understand
the photographer is trying to emphasize certain parts of the photo, but in the
end it feels too limiting. It's analogous to mobile-"optimized" websites --
just give me all the content and I'll choose what I want to look at.

------
CSDude
I wonder what the exact reason is that my country is not included. It's just a
fricking camera app.

------
spyder
But it can only be used on static subjects, because it needs a series of
frames for depth.

------
bitJericho
If you couple this with instagram does it break the cosmological fabric?

------
ohwp
Nice! Since they got a depth map, 3D-scanning can be a next step.

------
matthiasb
I don't see this mode. I have a Note 3 from Verizon. Do you?

------
thomasfl
I wish Google Camera gets ported to iOS. The best alternative for iOS seems to
be the "Big Lens" app, where you have to manually create a mask to specify the
focused area.

------
avaku
So glad I did the Coursera course on Probabilistic Graphical Models, so I
totally have an understanding of how this is done when they mention Markov
Random Field...

------
apunic
Game changer

------
alexnewman
Got me beat

------
seba_dos1
Looks exactly like the "shallow" mode of the BlessN900 app for the Nokia N900
from a few years ago.

It's funny to see how most of the "innovations" in the mobile world presented
today by either Apple or Google were already implemented on open or semi-open
platforms like Openmoko or Maemo a few years before. Most of them only as
experiments, granted, but it still shows what the community is capable of on
its own when unnecessary restrictions aren't put on it.

------
sib
If only they had not confused shallow depth of field with Bokeh (which is not
the shallowness of the depth of field, but, rather, how out-of-focus areas are
rendered), this writeup would have been much better.

[http://en.wikipedia.org/wiki/Bokeh](http://en.wikipedia.org/wiki/Bokeh)

Cool technology, though.

~~~
jcampbell1
Bokeh = shallow depth of field _effects_

The author isn't the one confused.

~~~
chernand
Hi,

I am the author of the Lens Blur blog post and the sentence was indeed wrong
as a result of multiple edits. Bokeh and shallow depth of field are indeed two
different things. By Bokeh we mean that the blur is synthesized using a disk
kernel, e.g. as opposed to a Gaussian Blur. The blog is now fixed.

~~~
josephagoss
Hey, is there any chance we can download the RAW data alongside the JPG file?

~~~
chernand
You can extract the computed depthmap, all-in-focus image, and focus settings
from the XMP data of the jpeg. See

[https://developers.google.com/depthmap-
metadata/](https://developers.google.com/depthmap-metadata/)

for the depthmap format.
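
For anyone who wants to script it, here's a best-effort sketch from my own
reading of the format docs -- the GDepth:Data field name and the extended-XMP
stitching details are assumptions on my part, segments are assumed to appear
in order, and a proper XMP library would be more robust:

    import base64
    import re
    import struct

    XMP_EXT = b"http://ns.adobe.com/xmp/extension/\x00"

    def extract_depth_png(jpeg_path):
        """Best-effort extraction of the depth map payload from a Lens Blur JPEG.

        The depth map is assumed to live in extended XMP, split across APP1
        segments; this stitches the segment payloads back together and pulls
        out the base64-encoded PNG referenced by GDepth:Data.
        """
        data = open(jpeg_path, "rb").read()
        chunks = []
        i = 2  # skip the SOI marker
        while i + 4 <= len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
            seg = data[i + 4:i + 2 + seg_len]
            if marker == 0xE1 and seg.startswith(XMP_EXT):
                # extended-XMP header: 32-byte GUID + 4-byte length + 4-byte offset
                chunks.append(seg[len(XMP_EXT) + 40:])
            i += 2 + seg_len
            if marker == 0xDA:  # start of scan -- no more metadata segments
                break
        xmp = b"".join(chunks)
        m = re.search(rb'GDepth:Data="([^"]+)"', xmp)
        return base64.b64decode(m.group(1)) if m else None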

~~~
dharma1
That is awesome! But I think he meant the RAW image that hasn't yet been
debayered or saved in a lossy 8-bit format (JPEG).

It made the news last year but I guess it still hasn't landed? For post
processing, RAW is so much more useful than a JPEG.

[http://connect.dpreview.com/post/2707133307/google-
android-a...](http://connect.dpreview.com/post/2707133307/google-android-api-
camera-raw)

