
Magic Leap Demo Video – A Technical Analysis - tb100
http://www.zappar.com/blog/magic-leap-demo-video-a-technical-analysis/
======
afawefawef
There is no way they can fit a light-field projector, dynamic masking, a GPU/CPU
capable of driving the simulation, cameras/sensors to provide occlusion/depth
data on the scene, a battery, and so on into a compact system, let alone a
consumer-priced one.

The biggest problem I have is that they have a CGI company, Weta Digital, on
their team, which is obviously very good at photorealistic CGI, and we are
taking their word on the claim of no compositing or special effects. That claim
is impossible on its face, since compositing and special effects are exactly
what the glasses are advertised to do.

For all we know these are fully opaque glasses that overlay live video, with
the special effects rendered on an offline computer and sent to the "glasses"
with tons of latency that we don't perceive because the video is pre-recorded.
In that sense no compositing or special effects were added, but it's still a
highly artificial example.

Also, if you're being picky, adding the text disclaimer to the video itself is
a clear sign of compositing. The entire disclaimer is misleading anyway,
because the whole point of AR is compositing reality with special effects; the
real problems are latency, field of view, bulk, cost, and battery life, none of
which are demonstrated.

~~~
tb100
The video makes no claims of how small the tech is, but I agree it's unlikely
to be in a consumer package at this stage. The display itself is mainly
passive optical components. The input light can come from a single optical
fibre from a pocket-sized smartphone-like battery/compute device. Scanning
single-optical-fibre projection has been demonstrated by people now at Magic
Leap - see the 1mm projection midway down this page: [http://gizmodo.com/how-
magic-leap-is-secretly-creating-a-new...](http://gizmodo.com/how-magic-leap-
is-secretly-creating-a-new-alternate-rea-1660441103)

Magic Leap have clearly put out mocked-up, offline composited videos before -
they are clean, have perfect tracking, and do not use purely additive blending
for the virtual content (they look much like Microsoft's HoloLens "live
demos"). In both cases I have no doubt those are fake.

However this one definitely looks genuine to me - it's got all the hallmarks
we'd expect from a scanning light-field display such as the one described in
their patent application. In some frames it's possible to see two frames of
virtual content within one frame of the video, suggesting a 60fps refresh of
the content. The graphics on show are not beyond the capability of
smartphone-type devices, and 3D depth cameras have been available in consumer
electronics since the first Kinect. I'd expect the demos are currently driven
by desktop-type PC hardware, but there is nothing on show here that seems
unrealistic to me.
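A quick back-of-the-envelope check of that observation (the frame rates here are my assumptions, not figures stated in the video):

```python
# Assumed numbers: a camera recording at 30fps whose exposure spans roughly
# the full frame interval will capture two refreshes of a 60Hz display in
# each video frame, which would appear as doubled virtual content.
video_fps = 30.0               # assumed camera recording rate
display_hz = 60.0              # content refresh suggested by the doubling
exposure_s = 1.0 / video_fps   # assume exposure ~ full frame interval

refreshes_per_frame = exposure_s * display_hz
print(refreshes_per_frame)  # 2.0
```

With a shorter exposure the camera would instead catch one refresh or a partial one, so the doubling is consistent with a long exposure over a 60Hz display rather than offline compositing.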

Of course the watermark in the top right and the text at the bottom were
composited afterwards, but I'll let them get away with that.

------
tb100
Hi. I'm the author of this blog. I'll try and keep on top of any comments or
questions posted here.

Sorry it's a bit long, but hopefully I'm not the only person who finds this
stuff interesting :)

~~~
achr2
One thing to note is that light perception in human vision is not purely
additive - especially when focal planes are taken into consideration. With
proper light-field technology (i.e. actual projections within a focal plane)
the effect of non-opacity could be _greatly_ reduced. In any case, your
example images showing additive light do not mimic reality very well. For
example, in the far-right images, the view of Mars that would be perceived by
a human eye would look nearly perfectly opaque (except where the 'dark bands'
are less bright than the background). You certainly would not perceive the
black screen bezel at all.

~~~
tb100
Thanks for the note, I'll add some more caveats to that section; I don't
really know anything about the human vision system. Do you know of any better
mathematical models of human perception that I could use to produce more
accurate mock-ups?

I've tried on some of the glasses from Meta and Epson that consist of more
traditional micro-displays and optics to set a fixed virtual focus plane a
couple of metres away. It's definitely possible to shift attention between the
real and the virtual, and not feel too distracted even with relatively busy
background scenes (using some of the "shades" definitely helps to boost
contrast of the virtual content though). When real and virtual are at
different focal planes it seemed to make maintaining attention on the virtual
content easier, and stereo also seems to help with that.

However, the true promise of "Mixed Reality" experiences is when you don't
have to choose to attend to real or virtual, but simply look at a part of the
combined scene. When virtual and real are so closely connected, it seems
likely to me that the depth of the virtual content would need to be matched
to the depth of the real world, so the virtual and real focal planes would be
the same and the user would lose the ability to use depth cues to direct
attention.

If real and virtual are at the same depth, how would you imagine the scene
would look? Is the additive mock-up still unrealistic?

