
Researchers Plug Google’s Project Tango Into a Drone - gvb
http://techcrunch.com/2014/05/22/researchers-plug-googles-project-tango-into-a-drone-to-let-it-fly-itself-around-a-room/?ncid=rss
======
frik
How does Google's Project Tango work anyway?

As far as I know, it's not LIDAR, a stereo camera, or Kinect 1 (an infrared
laser projector combined with a monochrome CMOS sensor).

Does it use a time-of-flight camera (like Kinect 2) or structure-from-motion
(like MS Photosynth or the ETH Zurich app)?

Tango uses a special processor (half as powerful as a high-end GPU drawing
160 W, but consuming a thousand times less battery power):

    It produces over 1 teraflop of processing power on only
    a few hundred milliwatts of power

\-- [http://techcrunch.com/2014/02/20/inside-the-revolutionary-3d...](http://techcrunch.com/2014/02/20/inside-the-revolutionary-3d-vision-chip-at-the-heart-of-googles-project-tango-phone/)

Edit: someone answered it in another thread (today):
[https://news.ycombinator.com/item?id=7788881](https://news.ycombinator.com/item?id=7788881)

~~~
gvb
It is "stereo camera" except that, instead of using two cameras at a single
instant in time, it uses one camera at two (or more) instants in time and
uses the shifting position of the camera over time to get the multiple
perspectives.

The very hard parts are tracking the camera's movement accurately enough to
get valid multiple perspectives and having the compute horsepower to turn the
multiple images (perspectives) into 3D. Obviously, your 3D accuracy is going
to depend heavily on the accuracy of your camera position, the resolution of
the camera, and the ability to do the math in (near) real time.
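
To make that concrete, here is a minimal sketch of the triangulation step in
plain numpy: given the camera's projection matrix at two instants and the
pixel where the same world point appears in each image, you can solve for the
point's 3D position. The intrinsics, poses, and helper names below are made
up for illustration; this is the textbook linear (DLT) method, not Tango's
actual pipeline.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation: each view contributes two linear
        # constraints on the homogeneous point X, from x*(P[2]@X) = P[0]@X etc.
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        X = np.linalg.svd(A)[2][-1]   # null vector of A
        return X[:3] / X[3]           # dehomogenize

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Toy setup: one camera, two instants, moved 10 cm sideways in between.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # intrinsics
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])             # pose at t0
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])]) # pose at t1

    X_true = np.array([0.2, -0.1, 2.0])  # a point 2 m in front of the camera
    print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
    # -> [ 0.2 -0.1  2. ]

Perturb P2 by even a few millimeters and the recovered point drifts
noticeably, which is exactly the pose-accuracy sensitivity described above.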

One trick I've noticed in demos is that they overlay the image on the 3D
model. This is really cool, but your eyes and brain will fool you into
thinking the 3D model has better resolution than it really is. If you just
display the 3D model as surfaces, you will better see the limitations of the
3D model.

~~~
frik
It's called "structure from motion". I am pretty sure Google uses ETH's code
(it was open source, but ETH's project website has since vanished); see the
link above (near "edit").

------
randomaxes
Skycatch is already doing 3D modeling of huge areas using autonomous robots:
[https://vimeo.com/81128563](https://vimeo.com/81128563)

~~~
djb_hackernews
Except this isn't 3D modeling of huge areas. This is a drone that can navigate
using no sensors or input other than the Tango.

It's SLAM in three-dimensional space without the expensive and cumbersome
lasers.
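
For a feel of the camera-only approach, here is a minimal monocular
visual-odometry loop in Python with OpenCV, roughly the pose-tracking front
end of this kind of laser-free SLAM. The intrinsics are placeholders, `frames`
is assumed to be an iterator of BGR images, and the mapping/loop-closure half
is omitted entirely; it's a sketch of the idea, not how Tango actually does
it.

    import cv2
    import numpy as np

    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # placeholder

    def visual_odometry(frames):
        # Track features frame-to-frame and chain together the relative
        # camera motions recovered from the essential matrix.
        R_total, t_total = np.eye(3), np.zeros((3, 1))
        prev = cv2.cvtColor(next(frames), cv2.COLOR_BGR2GRAY)
        pts_prev = cv2.goodFeaturesToTrack(prev, 500, 0.01, 10)
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray,
                                                           pts_prev, None)
            ok = status.ravel() == 1
            p1, p2 = pts_prev[ok], pts_next[ok]
            # The essential matrix encodes the rotation and the direction of
            # translation between viewpoints; RANSAC discards bad tracks.
            E, _ = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
            _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
            t_total = t_total + R_total @ t  # monocular scale is ambiguous;
            R_total = R @ R_total            # a real system fuses an IMU
            prev = gray
            pts_prev = cv2.goodFeaturesToTrack(gray, 500, 0.01, 10)
            yield R_total, t_total.ravel()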

~~~
smrtinsert
Skycatch appears to be doing real-time 3D mapping as well, though.

~~~
nschuett
I don't think Skycatch is doing real-time 3D mapping. Most likely they're
using photogrammetry that takes a couple of hours to process after the flight.

