
Robot will beam live Moon pictures to Oculus users - charlesmarshall
http://www.bbc.co.uk/news/technology-29704953
======
c0nc3rn3ng1n33r
I work on that robot. There has never been a serious attempt to use an Oculus
to control it. Also, Daniel did not build any part of that robot and is trying
to take credit for other people's work. The BBC refuses to contact the
Robotics Institute at CMU or Astrobotic to verify any details in the story.

~~~
bsimpson
Will the feeds be available to consume with a Google Cardboard?

~~~
c0nc3rn3ng1n33r
Calculate the bandwidth required to send two high-definition streams from the
Moon. Add the bandwidth required for telemetry from the rover and lander. See
what communication speeds are actually available to and from the Moon.

Also, try using an Oculus with seconds of latency and see how pleasant it is.
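
For a sense of scale, here's a rough back-of-envelope sketch; the per-stream
bitrate, telemetry rate, and link capacity are illustrative assumptions, not
figures from the mission:

```python
# All numbers below are assumptions for illustration:
# - two 1080p streams at ~5 Mbit/s each (typical H.264 rates)
# - a few hundred kbit/s of rover/lander telemetry
# Compare the total against whatever downlink the mission actually has;
# small lunar missions are often in the low-Mbit/s range at best.

SPEED_OF_LIGHT_KM_S = 299_792.458
EARTH_MOON_KM = 384_400            # average Earth-Moon distance

hd_stream_mbps = 5.0               # assumed per-eye bitrate
telemetry_mbps = 0.3               # assumed telemetry rate
required_mbps = 2 * hd_stream_mbps + telemetry_mbps
print(f"Required downlink: ~{required_mbps:.1f} Mbit/s")

# Latency floor from light travel time alone (head turn -> camera ->
# display), before any encoding, relaying, or buffering delay:
round_trip_s = 2 * EARTH_MOON_KM / SPEED_OF_LIGHT_KM_S
print(f"Light-time round trip: ~{round_trip_s:.2f} s")
```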

The real point is that Daniel Shafrir borrowed that robot and a video camera
and proceeded to take credit for years of work by a team of students and staff
at CMU and somehow got a BBC reporter to publish his claims without verifying
any information.

~~~
anigbrowl
That seems perilously close to academic fraud, although there's also the
possibility that an ignorant reporter mistook speculative commentary for
specific intentions.

------
robin_reala
Presumably they’ve got a 360° camera module that’ll be proxied through an
Earth-local server, rather than actually sending movement controls to a stereo
camera on the lander? If nothing else, the multi-second latency would be
nauseating.
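
One way to sidestep the round trip for head tracking, sketched under my own
assumptions rather than anything in the article: stream a full equirectangular
panorama at whatever rate the link allows, and extract each eye's perspective
view locally from the headset's orientation, so looking around costs nothing.
A minimal numpy sketch:

```python
import numpy as np

def view_from_panorama(pano, yaw, pitch, fov_deg=90.0, out_w=960, out_h=1080):
    """Extract one eye's perspective view from an equirectangular panorama.

    pano: H x W x 3 equirectangular image (full 360 x 180 degree coverage)
    yaw, pitch: head orientation in radians
    Runs entirely on the client, so looking around never waits on the
    Earth-Moon round trip; only the panorama itself is streamed.
    """
    H, W = pano.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)  # focal length, pixels

    # Ray direction for every output pixel, optical axis along +z.
    x = np.arange(out_w) - (out_w - 1) / 2
    y = np.arange(out_h) - (out_h - 1) / 2
    xx, yy = np.meshgrid(x, y)
    dirs = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays by head pitch (about x), then yaw (about y).
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    dirs = dirs @ (Ry @ Rx).T

    # Ray direction -> (longitude, latitude) -> panorama pixel.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])     # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))    # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]
```

Stereo depth is a separate problem, as the sibling thread discusses.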

~~~
raimondious
I've thought about this a bit, but how can you achieve 360° stereo? If you
have two 360° cameras side by side, then as soon as you look 90° left or
right you no longer have depth, and one eye just sees the side of the other
camera. If you have only one 360° camera, you don't have any depth.

~~~
electrograv
You could build a 360 degree 3D camera that works for any viewing
configuration, but it would involve nontrivial image processing and wouldn't
be totally free from small visual glitches/artifacts in some cases.

1. Build an array of a few (let's say 8 or so) cameras pointing outward from
the center.

2. Use a stereo-matching algorithm to extract a depth map from the
perspective of each camera. Since you're tracking the position and
orientation of each camera in 3D space, these depth maps become point clouds
in a shared world frame.

3. Determine the 3D location and orientation of each "eye" you want to
render, then render all the point clouds in 3D space to reconstruct a
"reprojected" version of the scene from any desired viewpoint (sketched
below). Of course, the farther the eyes deviate from the actual camera
locations, the more stretched/warped the image will appear, but that won't
matter much as long as you keep the eye coordinates within the physical space
occupied by the camera rig.
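
Here's a minimal numpy sketch of steps 2-3, assuming each camera's intrinsics
K and camera-to-world pose (R, t) are already calibrated and a depth map has
been estimated per camera; all names and parameters are illustrative:

```python
import numpy as np

def backproject(depth, K, R, t):
    """Step 2: lift a per-camera depth map into a world-space point cloud.

    depth: H x W array of depths along the camera's z axis
    K: 3x3 intrinsics; R, t: camera-to-world rotation and translation
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T            # pixel -> camera-frame ray
    pts_cam = rays * depth.reshape(-1, 1)      # scale each ray by its depth
    return pts_cam @ R.T + t                   # camera frame -> world frame

def render_eye(points, colors, K_eye, R_eye, t_eye, out_shape):
    """Step 3: splat the merged cloud into a virtual eye's image plane.

    R_eye, t_eye: world-to-eye transform. The farther the eye strays from
    the physical rig, the more occlusion holes appear.
    """
    H, W = out_shape
    cam = points @ R_eye.T + t_eye
    in_front = cam[:, 2] > 0
    cam, colors = cam[in_front], colors[in_front]
    proj = cam @ K_eye.T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    order = np.argsort(-cam[ok, 2])            # far-to-near painter's splat
    img = np.zeros((H, W, 3), dtype=colors.dtype)
    img[v[ok][order], u[ok][order]] = colors[ok][order]
    return img
```

A real system would also need hole filling and blending across the
overlapping clouds, which is where the small artifacts mentioned above
come from.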

Honestly, I'd be kind of disappointed if CMU doesn't actually try this. It
would be a shame if all this buzz about "hackathons" (as the article
mentions) is encouraging, even at major research universities, quickly
slapping components together to make something that sort of works, as opposed
to fundamental algorithm development and proper engineering solutions.

~~~
btown
If you pair it with a laser scanner (which won't have any atmospheric
interference to deal with on the Moon!) you can get a near-perfect depth map
without needing to do stereo correspondence (whose accuracy can be thrown off
by identical features at nearby points; insert joke about all moon rocks
looking the same). You could even put a still DSLR with a telephoto lens on a
perfectly calibrated, positionable mount, aim it at rocks in sequence, and
get high-resolution textures. For bonus accuracy, make the DSLR mount and the
laser-scanner mount able to swap positions, so you're getting the textures
from the _exact_ same position as the scanner. Send the depth map and
textures back to Earth in non-real-time. Program the robot to autonomously
explore regions the depth map hasn't covered yet. Then on the client side,
just feed those textures into a gaming engine or render farm. Bam. _That_
would be a scalable Moon Explorer.
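
The fusion step is simple once the shared-mount calibration holds; here's a
hypothetical sketch (all names and poses illustrative) that projects each
scan point into the DSLR frame and samples its color:

```python
import numpy as np

def colorize_scan(points, image, K, R, t):
    """Assign DSLR colors to laser-scan points.

    points: N x 3 scan points in world coordinates
    image:  H x W x 3 DSLR photo taken from the swapped-in mount
    K, R, t: DSLR intrinsics and world-to-camera pose (assumed perfectly
             calibrated, per the shared-mount trick above)
    """
    H, W = image.shape[:2]
    cam = points @ R.T + t
    visible = cam[:, 2] > 0                    # only points in front of lens
    proj = cam[visible] @ K.T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    idx = np.flatnonzero(visible)[ok]
    colors[idx] = image[v[ok], u[ok]]
    return colors          # ship alongside the points, non-real-time
```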

------
sixdimensional
It looks like getting real-time HD video from the Moon may be more within
reach than one might think, using laser-based communication technology:
http://www.nasa.gov/topics/technology/features/laser-comm.html

------
erikpukinskis
We really need voxel cameras. I think the Rift will act as a forcing function
for their development.

------
zobzu
An Oculus is just a dual-display device (i.e. two camera video feeds need to
be transmitted).

There is no "beam" or magic. It's two cameras, two video feeds. That's great
and exciting if, as the general public, we get access to those video feeds.

~~~
anigbrowl
'Beam' is a common synonym for 'transmit'.

~~~
zobzu
Of course it is, or it wouldn't make sense, now would it? But synonyms don't
carry the same connotations.

