

Depth data viewer for Google Camera - steve19
http://www.clicktorelease.com/code/depth-player/

======
voltagex_
This is really cool - now imagine what Google can do with Project Tango from ATAP! [1]

If anyone knows anyone remotely involved with ATAP, can you please get them to
get in contact with me? Google has inadvertently created a solution to the
following scenario:

"Yes, place XYZ is completely wheelchair accessible" "Oh, we forgot about ABC
(step, bathroom, camber, terrain)"

Scan an area with ATAP and send it to me - I'll be able to tell whether it's
really accessible or not.

1:
[https://www.google.com/atap/projecttango](https://www.google.com/atap/projecttango)

~~~
krilnon
Is added depth information really a game-changer? People have already used 2D,
photosphere-style images to do what you're suggesting for streets and places
visible on StreetView. [1] I understand that more information is going to be
helpful, but are there many scenarios that significantly benefit from a depth
map to judge accessibility?

Combining crowdsourcing and google street view to identify street-level
accessibility problems:
[http://dl.acm.org/citation.cfm?id=2470744](http://dl.acm.org/citation.cfm?id=2470744)

~~~
rasz_pl
>Is added depth information really a game-changer?

It is if you want to automate it instead of relying on meatbags.

Your link, like many other research projects (the LA pool database, for
example), relies on wetware looking at pictures. Depth maps let computers do
all the work. You get (well, will get, with Tango) Quake-style maps from
simple video clips.
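
A rough sketch of the kind of automated check depth maps enable - everything
here (the intrinsics, the 2 cm threshold, the level-camera assumption) is
hypothetical, and a real system would fit a ground plane and handle noisy
depth:

```python
# Flag step-like discontinuities in a metric depth map. Assumes a
# roughly level camera and noise-free depth; fy and cy are made-up
# pinhole intrinsics, and max_step is a common limit for unramped
# thresholds. Illustration only, not a real accessibility auditor.
import numpy as np

def find_steps(depth, fy=500.0, cy=240.0, max_step=0.02):
    """depth: HxW array of metric depths (metres)."""
    h, _ = depth.shape
    v = np.arange(h, dtype=np.float64)[:, None]  # pixel row indices
    # Pinhole back-projection of the vertical coordinate; with a level
    # camera, -y approximates height, so a flat floor is near-constant.
    height = -(v - cy) * depth / fy
    # A step edge shows up as an abrupt jump between adjacent rows.
    jumps = np.abs(np.diff(height, axis=0))
    return np.argwhere(jumps > max_step)  # (row, col) of suspect steps
```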

~~~
krilnon
Sure, of course. I was just responding to the context voltagex_ brought up,
about sending him depth-enhanced photos to look at.

------
chenster
The technique for taking the photo is very similar to the Seene app on
iPhone, which takes 3D photos. Could the same technology potentially be used
for lens blur? Seene website: [http://seene.co](http://seene.co)

~~~
andybak
Other way round. People are using the lens blur feature from the new Google
Android camera app and figuring out other uses for the depth data.
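
For anyone curious how viewers like the linked one get at the data: Google
Camera embeds the depth map in the JPEG's XMP metadata, in the GDepth
namespace, as a base64-encoded PNG. A naive extraction sketch - note that
large payloads are actually chunked across several "extended XMP" APP1
segments, which this single-regex scan doesn't handle:

```python
# Rough sketch of pulling the depth map out of a lens blur JPEG.
# Caveat: a robust extractor must reassemble the extended-XMP APP1
# chunks and strip their per-segment headers; this only works when
# the GDepth:Data attribute happens to sit in one contiguous run.
import base64
import re
import sys

def extract_depth_png(jpeg_path, out_path="depth.png"):
    data = open(jpeg_path, "rb").read()
    m = re.search(rb'GDepth:Data="([^"]+)"', data)
    if m is None:
        raise ValueError("no GDepth:Data attribute found")
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(m.group(1)))

if __name__ == "__main__":
    extract_depth_png(sys.argv[1])
```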

------
ohwp
By taking pictures from different angles, a point cloud can be generated from
the depth data, which can then be triangulated into a 3D model.
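
A minimal sketch of the depth-map-to-point-cloud step, assuming a simple
pinhole model with made-up intrinsics (a real app would pull these from the
image metadata):

```python
# Back-project a depth map into a 3D point cloud with a pinhole model.
# fx, fy, cx, cy are hypothetical intrinsics, not values from any real
# camera.
import numpy as np

def depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """depth: HxW array of metric depths; returns an Nx3 point array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```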

I own a 3D printer and hope 3D scanning apps will soon be available.

On the other hand, this demo shows 3D generation from simply filming the
object:
[https://www.youtube.com/watch?v=vEOmzjImsVc](https://www.youtube.com/watch?v=vEOmzjImsVc)

I wonder why this didn't take off.

~~~
votingprawn
> I wonder why this didn't take off.

I think mainly because there isn't much of a use case for most people.

We use Structure from Motion (SfM) techniques a lot here where I work (a
research institute) to construct 3D models of buildings and land from UAV
imagery.

The current methodologies are great for this use, but they lack the accuracy
you'd want to model an object on your desk for 3D printing. That's not to say
you can't do it - there's just often a lot of work to be done on the mesh
afterwards to make it work.
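
For a feel of the core idea (not what any of these tools literally
implement), here's a minimal two-view sketch with OpenCV - K is a
hypothetical intrinsics matrix, and real pipelines add many more views,
bundle adjustment, and dense matching:

```python
# Two-view structure from motion: match features, recover the relative
# camera pose from the essential matrix, and triangulate sparse 3D
# points (up to an unknown global scale).
import cv2
import numpy as np

def two_view_points(img1_path, img2_path, K):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and match local features between the two views.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the matched points into 3D.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # Nx3 point cloud
```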

Autodesk makes a pretty good app called 123D Catch [0], which uploads your
images to their servers for processing.

Alternatively, VisualSFM [1] is what I mostly use; I get great results, but I
don't know how well it handles small objects.

[0] [http://www.123dapp.com/catch](http://www.123dapp.com/catch)

[1] [http://ccwu.me/vsfm/](http://ccwu.me/vsfm/)

------
Oculus
This is absolutely phenomenal, really cool!

I can't remember where I read that going from images to 3D is hard, but this
application gives a good visualization of the problem. It's not the obvious
parts that are difficult; it's the relatively flat surfaces at a slight
distance that cause the big issues.

------
meursault334
This is amazing! I wonder how the distance accuracy would compare to
something like the structure.io sensor attachment (the same as used in
Project Tango). I wonder if there are other applications for this coarser,
less accurate depth information.

------
victorhooi
I wonder how the new Lens Blur feature compares with light-field cameras,
like the Lytro:

[http://www.theverge.com/2014/4/22/5625264/lytro-changed-photography-meet-the-new-illum-camera](http://www.theverge.com/2014/4/22/5625264/lytro-changed-photography-meet-the-new-illum-camera)

It seems like cool technology - it'd be a shame for it to fall by the wayside
because people think software can achieve the same thing (although if I'm
wrong, and it can, please correct me).

~~~
krilnon
I have a Lytro and Nexus 5, and the biggest immediate difference in the user
experience is that the Lytro captures all of the data in a single exposure.
The Google Camera lens blur feature gets its depth data from having the photo-
taker take a photo, then move the camera up (several inches to a couple feet,
it seems, depending on subject distance). So if the subject is moving or you
have unsteady hands, the capture will be cancelled.

But at least my phone is always with me…
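
For the curious, what the app then does with that depth is roughly a
depth-keyed blur. A toy sketch using layered Gaussian blurs - Google's real
renderer is surely more sophisticated (thin-lens circle-of-confusion model,
edge-aware filtering), so this only shows the principle:

```python
# Toy "lens blur" re-render: blur each pixel in proportion to how far
# its depth is from a chosen focal plane, using a small stack of
# progressively stronger Gaussian blurs. Purely illustrative.
import cv2
import numpy as np

def refocus(image, depth, focus_depth, max_radius=15, layers=8):
    """image: HxWx3 uint8; depth: HxW depths; focus_depth in same units."""
    img = image.astype(np.float32)
    out = img.copy()
    # Blur amount in [0, 1]: distance of each pixel from the focal plane.
    coc = np.clip(np.abs(depth - focus_depth) / (depth.max() + 1e-6), 0, 1)
    for i in range(1, layers + 1):
        lo, hi = (i - 1) / layers, i / layers
        ksize = 2 * int(max_radius * lo) + 1  # odd kernel; layer 1 stays sharp
        blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
        sel = (coc >= lo) & (coc < hi) if i < layers else (coc >= lo)
        mask = sel.astype(np.float32)[..., None]
        out = out * (1 - mask) + blurred * mask
    return np.clip(out, 0, 255).astype(np.uint8)
```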

