
Hands-on with HoloLens: On the cusp of a revolution - ingve
http://arstechnica.com/gadgets/2016/04/hands-on-hololens-on-the-cusp-of-a-revolution/
======
VonGuard
I played with Hololens at Build this week. I believe, after seeing it, using
it, and even developing a little with it, that this device is truly
revolutionary, and Google is wasting its money in Florida.

I shook my head back and forth like a dog trying to dry itself off, and the
images I saw barely wiggled. The images are bright enough to completely
occlude reality. The software dev stuff is from MS, so it is polished, simple,
and powerful.

Voice recognition on the device, for example, is just handled by calling the
voice functions in the core library, then GIVING IT A STRING. It listens for
this plain text string. That's it. Sooooo simple and powerful.
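The registration pattern described here - hand the library a plain-text phrase plus a callback, and it fires when that phrase is heard - can be sketched roughly like this. This is an illustrative Python sketch of the idea only; the class and method names are made up and are not the actual HoloLens API.

```python
# Minimal sketch of a keyword-recognition API: register plain-text phrases,
# feed in transcribed speech, fire the matching callbacks.
class KeywordRecognizer:
    def __init__(self):
        self._keywords = {}  # phrase -> callback

    def register(self, phrase, callback):
        """Listen for a plain-text phrase; call `callback` when it is heard."""
        self._keywords[phrase.lower()] = callback

    def on_speech(self, transcribed_text):
        """Feed in transcribed speech; fire callbacks for matched phrases."""
        text = transcribed_text.lower()
        fired = []
        for phrase, callback in self._keywords.items():
            if phrase in text:
                callback()
                fired.append(phrase)
        return fired


recognizer = KeywordRecognizer()
recognizer.register("place object", lambda: print("placing object"))
fired = recognizer.on_speech("Okay, place object here")
```

The point being: from the developer's side, the whole interface is a string and a callback.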

~~~
badsock
What does Hololens do that Magic Leap doesn't? Information is very sparse, but
I had the impression that the latter is a superset of the former.

~~~
JabavuAdams
It exists and is available to developers?

~~~
badsock
First-mover is an advantage, sure, but it hardly means that anyone who isn't
first to the gate is wasting their money.

Anyway, I see downthread that VonGuard has made the silly claim that "Hololens
can do everything Magic Leap is supposed to do", so I think it's safe to say
that the "wasting money" thing is just a throwaway line that's not based on a
lot of actual knowledge in the field. I feel kind of trolled.

~~~
VonGuard
All I will say is: try Hololens, then come back here and tell me it is
lacking, dramatically.

It's a VERY good device. Surprisingly good. I expected it to suck.

~~~
badsock
I think what's happened is that you meant "supposed to do" as in "it fills the
need that ML was aiming at", and I took it as "it has feature parity". The latter
is clearly not true - ML is supposed to have a very wide FOV, handle the
accommodation reflex, simulate occlusion, do depth of field, etc. - none of
which Hololens can do.

------
djrogers
I believe that this has way more chance to revolutionize human/computer
interaction than anything Oculus is doing. Outside of specific areas (like
gaming and certain types of video-based entertainment), this type of AR/MR is
much more useful than Rift-style VR.

You can interact with others, your environment, and other systems without the
isolation or nausea of VR. On top of that, being able to supplement the real
world instead of only temporarily replacing it seems far more valuable to me.

Now we just gotta shrink it down to contact lens size, and we're good to go!

 __edited for formatting__

~~~
YZF
I think these are two fundamentally different things: AR, as the name suggests,
augments reality. VR however allows for creation of a completely synthetic
reality. Reality can then be anything and you're liberated from reality.

So fundamentally VR has more potential than AR. Things like nausea may take
some time to work out but they're not unsolvable. For example, if the issue is
the disconnect between the acceleration/orientation sensors in your ears and
the virtual reality we can just bypass those sensors and inject a signal
directly into the relevant nerves. It might even be possible to do this
through the skin in combination with training. The frame rate/resolution
issues will also improve until eventually virtual reality will be
indistinguishable from reality (ignoring tactile input for a sec). Sure
physical movement in this environment is another problem. But the end game is
something that is simply incredible/unimaginable.

With AR, sure, you can add more synthetic information over reality. You still
have tactile challenges (you can't touch it or interact with it). It is very
difficult to combine seamlessly with the outside environment to make it more
than just a fancy heads-up display.

Not saying AR isn't cool and full of potential but I don't think we can
compare its potential with VR. VR is the holodeck... AR is ... a heads-up
display? Not to mention the need to carry AR with you wherever you go if you
want to use it in the "real world", power it, compute, etc.

EDIT: One more thought while I'm at it. You can always inject the real reality
into VR and thus make it AR but you can't remove reality from AR... Or if you
could the two would basically merge. So AR can be seen as VR with
"transparency" ... And VR is a superset of all ARs where you can't completely
remove reality.

~~~
Razengan
> So fundamentally VR has more potential than AR.

Fundamentally, VR just has a _different_ potential than AR.

> Not to mention the need to carry AR with you wherever you go if you want to
> use it in the "real world", power it, compute, etc.

You need to carry the VR headsets and their host-devices with you too.

I believe we should clearly delineate the definitions of VR vs AR first.

Virtual Reality should be things like The Matrix. A reality that an individual
experiences without modifying the real world for anybody else. Current
technology simply shoves a screen right up against our eyes but the goal is
obviously to plug that content straight into our very brains.

Augmented Reality should be things like an audio speaker or a video display;
devices that "inject" manmade media into the real world; devices which
generate reproducible experiences that _everybody, even animals,_ can see,
hear, touch and smell.

In AR a single device may provide content passively to multiple viewers. Like
your music system or television or a hologram generator. Even your pets can
see and hear whatever's playing, and they certainly don't have to purchase or wear
anything.

VR aims to interface with your neurons and you're guaranteed to experience the
full content. AR projects content into the real world which may or may not
bounce on to our senses.

TL;DR: VR replaces reality. AR enhances reality. But that's just my opinion.
:)

~~~
goldenkey
Thank you for your explanation. It seems like, for any renaissance-man
programmer, either of these areas is ripe for individuality and invention in
terms of new experiences in the form of apps for these VR and AR devices.
Thoughts?

And to maybe extend the discussion a bit: what about devices that plug right
into the centers of consciousness? Is that just more invasive, arguably better
AR, or is it VR? What about better hearing aids that sample at 10 GHz and
connect directly to the nerves instead of hair cells? What are they
considered? What about when I pipe my uncompressed FLAC music through the
hearing implant -- VR or AR? It seems like the future is a combination of the
two in far more complex ratios than just one or the other or split screen. VR
probably has limits of perceptiveness if it fails to model a logical,
non-chaotic reality. It can therefore eventually become an abstraction of an
existing event or, well, a present stimulus. Anyhow, I think the ideas are all
really profundo. Looking forward to replies.

[1] [https://teddybrain.wordpress.com/2013/08/28/a-brief-review-on-consciousness-from-medical-interest/](https://teddybrain.wordpress.com/2013/08/28/a-brief-review-on-consciousness-from-medical-interest/)

~~~
Razengan
My reply to daveguy might apply to some of your questions, not sure if you saw
it:
[https://news.ycombinator.com/item?id=11412809](https://news.ycombinator.com/item?id=11412809)

------
daveguy
What I got from the video:

The graphics integration into the real world is phenomenal. If you put a
virtual object in a room you can walk around it. If you put something on a
wall it stays there no matter where you walk.

Unfortunately, the interface -- how you interact with virtual objects -- is
completely janky. Awkward pinching gestures to select keys from a floating
virtual keyboard. Cursors that are supposed to represent your physical
movement, but instead jump around.

It is tantalizingly close, but it seems like it would be just as annoying as
it is neat or useful until someone comes up with a better (simple and
reliable) interface.

~~~
hacker_9
Also, the fox runs through objects that are even slightly complex, as it can't
comprehend their depth (e.g. the camerawoman).

Another interesting depth issue is when it ran to the back of the table and
straight on to the floor as if they were continuous, because the view angle
made it so the table and floor were right next to each other only separated by
a line, so the fox didn't do a jump animation. I suspect a depth map of the
room needs to be constructed to solve that issue, no mean feat.
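The depth-discontinuity problem can be sketched concretely: in image space the back of the table and the floor behind it are adjacent pixels, but a depth map exposes the jump between them. The values and threshold below are made up for illustration.

```python
# Scan a row of a depth map for discontinuities: adjacent pixels whose depth
# differs by more than a threshold mark an edge the fox should jump off,
# not walk across.
def find_depth_jumps(depth_row, threshold=0.3):
    """Return indices i where depth jumps between pixel i and i+1 (metres)."""
    jumps = []
    for i in range(len(depth_row) - 1):
        if abs(depth_row[i + 1] - depth_row[i]) > threshold:
            jumps.append(i)
    return jumps

# A scanline crossing the back edge of a table: ~1.2 m to the tabletop,
# then suddenly ~2.1 m to the floor behind it.
scanline = [1.20, 1.21, 1.22, 2.10, 2.12, 2.13]
edges = find_depth_jumps(scanline)
```

Without that depth information, the table edge and the floor are just "next to each other, separated by a line", which is exactly the failure described above.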

These kinds of issues just show me this tech is still years away from being
what we'd expect.

~~~
daveguy
Good points. I was most impressed with the wall identification/interaction.
If you set up a big empty room, then everything could be VR, except with a
concept of where the ends are. So it would be a walkable immersive
environment. But I agree, it is not to the point where the average consumer
would be happy. They would probably be happier with a much cheaper, standard
low-quirk VR headset.

~~~
quanticle
_If you set up a big empty room, then everything could be VR, except with a
concept of where the ends are._

That, plus the collaboration features would make HoloLens pretty awesome for
any kind of collaborative product design. Take cars, for example. Right now,
car mockups are done in clay, and while clay is easier to mold than metal, it
still takes time and limits your ability to iterate. But with Hololens, you
can have a design review where each of the participants puts on an AR unit,
and now you can all see the same car and _make changes to it_ in real time.

I'm excited to see what sorts of applications will be opened up by being able
to display virtual objects in a "real" setting. I also agree that this won't
be consumer-facing technology (at first). This is the sort of thing that'll
take off with architects and engineers before it takes off with "normal
people".

------
RangerScience
...Dayum. And I say that working in AR. That is /solid/ tracking - although
the "I'm a competitor" in me feels compelled to point out that this was done
in a room specifically set up for it. Your mileage /will/ vary in your own
spaces, but still... damn, that's good tracking.

~~~
kefka
Unfortunately it's not solid tracking.

It's the standard structured IR light over the scene, which means that
sunlight, reflected sunlight, fluorescent lights, or any other IR emitter will
wreak havoc on the SLAM algorithm they have running.

What they need is visual SLAM, not just the IR solution. It works well in
their dark, candle-lit studio where they did the demos, but outdoors, near
windows, near reflections from outside, and around flood sources, it is killed
or badly degraded.

~~~
RangerScience
Damn/awesome. Someone I know got to see their demo at a conference a few
months ago, and that someone figured they had IR targets under visible-light-
opaque elements.

Do you know if the structured light is made by the unit, or is it an
unmentioned peripheral?

~~~
DrPizza
HoloLens is a derivative of Kinect. Kinect 1 (with the Xbox 360) was
structured light. Kinect 2 (with the Xbox One) is time-of-flight. I'm not sure
which tech the HoloLens uses (I don't think Microsoft has specified one way or
another, and I didn't have an IR camera to look to see if it was emitting a
pattern), but I'm assuming it's based on the newer tech, because my
understanding is that time-of-flight systems can much better handle having
multiple devices scanning the same space. HoloLens handled this situation very
well; I've seen at least 8 devices all scanning the same area without getting
confused.

It's all driven by the unit, btw; no need for markers or external
illumination.
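As a back-of-the-envelope illustration of the time-of-flight principle: the sensor times how long an emitted light pulse takes to bounce back, and distance falls out of the speed of light (the pulse travels there and back, so the round trip is halved).

```python
# Time-of-flight distance: d = c * t / 2, where t is the round-trip time.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A surface one metre away returns the pulse in about 6.67 nanoseconds.
d = tof_distance(6.671e-9)
```

The nanosecond-scale timing involved is one reason ToF sensors behave so differently from structured-light ones.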

~~~
josephpmay
Don't quote me on this, but I believe I've read that it is structured light.
From my experience time of flight cameras are excellent at things like body
detection and terrible at things like SLAM. (I have no idea why and could be
wrong... it's been a while since I've looked at the literature)

------
ggoss
I'd love to see this coupled with a cheap thermal-IR camera, or microphone
array (to see the spatial localization of sound around you; e.g. where is that
darn leak coming from?!), or really with any sensor that collects spatial
information outside the range of natural human perception. Some of these exist
already in other forms (e.g. 'night-vision' goggles), but the power of this +
that would be incredible, especially when you start combining multiple sensory
'images' into one seamless visualization.

------
tempestn
Watching this video made me think, there's no reason why a VR setup couldn't
similarly map the surroundings and incorporate that into the virtual world.
One limitation of VR is that you can't really move around in the virtual
environment, because you will bump into things in the real world (barring an
omni-directional treadmill or similar). But if the system mapped out your
real-world surroundings and either simulated or even just walled off any
real-world objects and boundaries, you would avoid running into things, while
having much more freedom to explore the virtual environment.

I'm not saying this would remove the use cases of AR, just that it could be a
significant enhancement or enabler to many VR experiences.
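A toy sketch of the "wall off real-world objects" idea, assuming the room has already been mapped and obstacles reduced to axis-aligned boxes in room coordinates (everything here is illustrative):

```python
# Reject virtual moves whose destination lands inside a mapped real-world
# obstacle; the VR scene would render a wall or barrier there instead.
def inside_box(pos, box):
    (x, y), ((x0, y0), (x1, y1)) = pos, box
    return x0 <= x <= x1 and y0 <= y <= y1

def try_move(pos, new_pos, obstacles):
    """Allow the move only if the destination hits no mapped obstacle."""
    if any(inside_box(new_pos, box) for box in obstacles):
        return pos  # blocked: keep the player where they are
    return new_pos

# A coffee table mapped at (1,1)-(2,2) in room coordinates.
obstacles = [((1.0, 1.0), (2.0, 2.0))]
blocked = try_move((0.5, 0.5), (1.5, 1.5), obstacles)  # lands in the table
allowed = try_move((0.5, 0.5), (0.5, 2.5), obstacles)  # clear floor
```

Real systems would use the full scanned mesh rather than boxes, but the principle - map once, then constrain movement - is the same.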

------
KaiserPro
The motion tracking is _almost_ perfect.

The picture stuck to the window almost perfectly, which as someone in VFX I
find very impressive.

Tracking head orientation and location is possibly the hardest thing to get
right (next to object recognition).

------
onetimePete
This baby is very hungry. Room decoration - Nom. Gaming - Nom. Fashion
Industry - Nom. Gadget and Entertainment Industry (TV, Posters) - Nom.
Advertising Industry and Self-Expression - Nom.

The next ten years will be spent feeding everything to it. He who wraps the
last layer around the world, to sell it to the users, makes everything wrapped
his organelles over time. Well played, Microsoft, well played.

If we could decentralize the way consensus augmented reality is shared, we
could even do something good for the world. Open-source WiFi hubs offering
free processing power and untraceable sharing. Mmmh..

~~~
personjerry
I don't see it. Could you explain some of those industries? And the layers and
organelles analogy? Also (and especially) the wifi point, I didn't get how
that related at all.

~~~
benvan
Not sure why onetimePete is being downvoted. Although worded unusually, I
concur with his implication: that after some time, advancements in this kind
of technology will likely move us towards a ubiquitously augmented world.
Although I personally expect we would settle on an open protocol for sharing
and receiving public/private augmentations, it makes sense that the
organisations who control augmentation technology would wield an awful lot of
power. Principally, the ability to make the world look, to you, however they
want it to... perhaps subtly.

[edit: grammar]

------
zobot
It needs similarly untethered hand controllers like the Vive's, but with
their own 'HPU's and depth-camera tracking. Then you won't need this awkward
pinching system, and you can use buttons, which I've become very accustomed to
in my time using computers.

------
julianpye
This is the type of technology that is crying out for a killer app, but may
end up struggling to find one.

I had a Vodafone R&D team working with the Sony 3D AR glasses (aka the 'Tom
Hanks CES glasses'). We loved the tech and had a great playground, but
couldn't manage to create a single app that solved a clear pain point.

We ended up with glorious tech demos, but they were all suited to a
short-lived theme-park attraction, not to ongoing use.

Anyway, I am excitedly waiting for someone who can do better than us.

~~~
Natanael_L
It needs the equivalent of the PC desktop environment. An abstraction that
makes sense for how to interact with it, and a way for it to interact with
your surroundings.

Maybe IoT plus some connectivity / control standard similar to DLNA would make
for a good base, where for example you could look at devices and bring up
their menus to control them.

------
iamleppert
Once again, nothing at all to do with holography. It's amazing the shameless
way tech companies who know better allow their marketers to misappropriate
words to promote their parlor tricks. Microsoft is simply using their version
of the Oculus with an overcomplicated video rig for AR to do something far
from new. Yet another Rube Goldberg machine destined to be a punch line.

Real holography doesn't require glasses to view, and is not subject to the
limitations and discomfort of rote stereographic effects. This has been
around in one form or another since the early '90s; it never gained wide
appeal because you are secluded from your environment by the headset, and
because using it (due to the stereographic effect's inherently fake,
nauseating parallax) can only be done for short periods.

History repeats itself, and life imitates art.

Edit: For more on why I think this is a huge fad, just read this article about
the death of 3D TV (which is fundamentally based on the same stereoscopic
concepts):

[https://www.avforums.com/article/in-memoriam-the-death-of-3d-tv.12427](https://www.avforums.com/article/in-memoriam-the-death-of-3d-tv.12427?utm_campaign=March_2016_Newsletter&utm_source=Suite26&utm_medium=Email%20Campaign)

Choice quote:

"However the single biggest obstacle that 3D faced wasn't different versions,
incompatible glasses, exclusivity, lack of content or screen sizes, it was
simply that people didn't like wearing the glasses at home. Consumers were
happy to wear 3D glasses at the cinema, which are predominantly the cheap and
light passive variety but they were less keen to do so in their lounge."

Consumers didn't like wearing glasses then, and they won't like wearing them
now.

~~~
badsock
Well, Google Chrome isn't made from actual chrome either ;)

And while certainly VR goggles may eventually share the same fate as 3D TV, I
don't think the comparison is valid (other than they both involve
stereoscopy).

It's a very different thing when sub-millimetre head tracking is involved, and
when the virtual interocular distance is fixed to what a person's actual
distance is. When everything appears to be the correct size and fixed in
space, it crosses a significant perceptual threshold. I don't know if you've
ever put on modern VR goggles, but there's no way I'd put it in the same
category as 3D TV; in fact I don't think I'd put it in the same category as
anything else I've experienced either.

And so I don't think you can take the lessons learned from people's
willingness to wear glasses for a low-incremental-value 3D TV experience and
apply them when the VR goggles have a markedly different offering - it's
certainly plausible to me (though far from certain) that VR's value is high
enough that people will be willing to put up with glasses.

------
amelius
What is the apparent resolution of the browser window shown in the AR world?

Is it comparable to conventional screens?

Just wondering whether I can replace my monitor by this thing :)

~~~
moron4hire
You almost certainly can't, for 2 reasons:

A) almost no software right now is designed properly for VR/AR. The best we
can do right now is splat a WIMP interface inside a cylinder centered on you.
We haven't figured out or translated much of anything to a VR native interface
yet.

B) the resolution-cross-FOV is just too low for current, standard WIMP
applications on all current headsets. I think a VR native interface wouldn't
have as many problems at the current low resolutions, because the depth gives
you more to work with for reconstructing images in your brain as you move
around the objects. But again, they are largely missing still.
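Rough pixels-per-degree arithmetic makes point B concrete. The headset and monitor figures below are assumed round numbers for illustration, not the specs of any particular device.

```python
import math

def headset_ppd(horizontal_pixels, horizontal_fov_deg):
    """Pixels per degree when a fixed pixel count spans the whole FOV."""
    return horizontal_pixels / horizontal_fov_deg

def monitor_ppd(horizontal_pixels, width_m, viewing_distance_m):
    """Pixels per degree of a flat monitor at a given viewing distance."""
    fov_deg = math.degrees(2 * math.atan((width_m / 2) / viewing_distance_m))
    return horizontal_pixels / fov_deg

hmd = headset_ppd(1080, 100)          # ~1080 px per eye over ~100 degrees
desk = monitor_ppd(1920, 0.53, 0.60)  # 24" 1080p monitor viewed at 60 cm
```

Under these assumptions the monitor delivers several times the angular resolution of the headset, which is why text-heavy WIMP interfaces look so poor inside an HMD.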

We'll get there eventually, and I'm banking on sooner rather than later. But
as of right now, you won't be using any HMD as a desktop multimonitor
replacement for anything other than driving or flight sims.

~~~
iofj
From the video it appears that a WIMP interface tethered to open surfaces,
like a wall, is up & running.

------
ascorbic
It would be amazing if they could integrate this with something like
Ultrahaptics, so you could feel the virtual objects too.

[1] [http://ultrahaptics.com/](http://ultrahaptics.com/)

------
DiabloD3
Can you wear these while wearing glasses?

~~~
discordance
The Ars reviewer in this video is wearing glasses.

------
anc84
Huh, that felt more like native advertising than something I would expect
honest journalism to be like.

~~~
sp332
Ars Technica doesn't do native advertising. All Condé Nast properties clearly
call out native ads, so you'd know if they did. What feels dishonest about it
anyway?

