
How does the Hololens 2 matter? - xwipeoutx
http://stevesspace.com/2019/02/how-does-hololens2-matter/
======
alanbernstein
"The field of view (FoV) is the extent of the observable world that is seen at
any given moment."

FOV is better characterized by a 2-dimensional measurement, like square
degrees, or steradians, than a single 1-dimensional measurement, like diagonal
angle. "2x improvement" sounds like a perfectly valid way to describe a 2.4x
increase in FOV area to me.
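
A quick flat-angle sanity check, using the per-axis figures discussed
downthread (HoloLens 1 at roughly 30° x 17.5°, HoloLens 2 at 43° x 29°;
treating the FOV as a flat rectangle of "square degrees" is itself an
approximation):

```python
# Flat-angle approximation: treat each FOV as a planar rectangle
# measured in "square degrees".
hl1_area = 30 * 17.5   # HoloLens 1: ~30 deg x 17.5 deg = 525 deg^2
hl2_area = 43 * 29     # HoloLens 2: ~43 deg x 29 deg = 1247 deg^2

print(hl2_area / hl1_area)  # ~2.4x increase in FOV area
```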

~~~
mojomark
The author discusses square degrees, but then makes the following statement:

"if we stack 19 Hololens 2 units perfectly, we have ourselves a fully
Holographic sphere)...So don’t get hyped on that!"

...and it's ridiculous assessments like this (which have no bearing on a
viable AR display) that cause me to stop reading.

------
DanAndersen
As someone doing AR research, I can definitely echo the importance of higher
FOV for getting closer to the "holograms." I am currently doing some
investigations involving tracked hand-held devices (like smartphones/tablets)
with the AR HMD, and I've found that users tend to unnaturally hold the device
really far out in an attempt to see any AR content that is placed relative to
the tracked object. Hopefully even a modest improvement in FOV can overcome
this subconscious tendency and open up a wider range of interactions.

~~~
mojomark
That's interesting; it's something I never really paid attention to when I was
working with a mobile-device AR app company. Is this research somehow tied
into the relationship between UI and imaging in AR head-mounted displays, or
are you focusing on tablet/phone AR?

I too feel FOV is a key attribute, but for me it's all about it being an
important "immersion cue". Objects that we don't first (subconsciously) notice
in our peripheral vision, we simply don't believe.

This is actually why I think Magic Leap made a smart decision by making their
device so 'enclosed' (as opposed to MSHL that is extremely open and
unoccluded), which effectively artificially narrows the peripheral vision. So,
by simply blocking off the area of a display window through/from which your
device cannot generate image content, they improve immersion. Cheating, but
who cares, so long as the effect is better than without it.

~~~
DanAndersen
>Is this research somehow tied into the relationship between UI and imaging in
AR head-mounted displays, or are you focusing on tablet/phone AR?

It's focused on having virtual/AR content that is rendered by an AR HMD like
the HoloLens, but which is rigidly anchored to a handheld object. So, for
example, having extra windows or panels on a phone that float beyond the
rectangle that defines the physical smartphone shape.

>This is actually why I think Magic Leap made a smart decision by making their
device so 'enclosed' (as opposed to MSHL that is extremely open and
unoccluded), which effectively artificially narrows the peripheral vision.

I think this was an issue that the HoloLens 1 had/has in terms of the plastic
"eyeglasses" region -- because it looks like it wraps around the full eye FOV
when looking at the headset before putting it on, users are disappointed when
they wear it and it's revealed just how (relatively) low the FOV is. Setting
proper expectations is important for users.

------
GaryNumanVevo
The UI for Hololens 1 was terrible. Having to hold your gaze perfectly at tiny
buttons and trying to get the Air Tap to work successfully was super annoying.
You'd think they'd be able to project your fingers into 3d space from your
approximate eye location.

Glad they've revisited those flaws!

~~~
moron4hire
To be fair, it _was_ state of the art... five years ago. HoloLens 1 was an
amazing feat of engineering. Yes, it was a terrible user experience, but we
wouldn't have the latest hardware without it.

As for projecting the fingers, it is possible; it's just not a great
experience. The nose-pointer is an (application-level) compromise. I've built
apps on HL1 where direct hand gestures are essential, and it's definitely an
expert's interface: not something you can easily train new users on.

~~~
GaryNumanVevo
No question it was cutting edge: the spatial mapping feature worked 99.999
percent of the time, something other platforms haven't come close to
achieving.

------
kgwxd
Instead of getting rid of the keyboard, mouse and standard 2D display, why not
start with augmenting the reality around those? Keep the keyboard, keep the
mouse, keep the high-speed, high-definition display, there's nothing more
efficient. Touchscreens don't even come close to matching that combo in any
domain, let alone exceeding it. What chance do laggier, less touchable
figments have? If you can't find something that augments that reality for the
better, it's time to move on.

~~~
calciphus
Ever watch a bartender ring up a complex order? That's a domain where touch
exceeds keyboard/mouse hands down. Digital art is another, such as drawing (or
even just whiteboarding an idea).

There is no one universally great input method.

~~~
flukus
> Ever watch a bartender ring up a complex order?

Too many times to count, and I'd disagree that it's better than a keyboard
equivalent. It's easier for new/casual staff and can be easier for visual
people, but it's awful compared to someone with a bit of training for complex
things. They seem to fail at making the complex things achievable for the less
trained as well, IME.

Plastering drink logos on big buttons is easy for some, but for non-visual
people and those not familiar with the products it's harder/slower than an
alphabetical list or typing the first 2 characters. Want to put something on
my account? Good luck with the touch keyboard.

Compare those modern touch screen apps to something like a TUI that used to
run in the local video shop 30 years ago and there is no comparison in speed
and efficiency.

~~~
TomMarius
I like TUIs because their limits have made their creators strive for extremely
efficient UX.

------
aaaaaaaaaaab
Eh... the author doesn't know how to compare solid angles.

Anyway, I wonder what happens when VR manufacturers increase FOV to the point
where the edge distortion from the traditional perspective transform (a
homogeneous linear transform) becomes impractical (example:
[https://www.youtube.com/watch?v=ICalcusF_pg](https://www.youtube.com/watch?v=ICalcusF_pg)).

You need >120 FOV for an immersive experience, but current graphics pipelines
are built on the assumption that straight lines in world space map to straight
lines in screen space, so you can't do proper curvilinear wide-angle
perspective with the existing triangle rasterizer architecture.

~~~
GistNoesis
>"straight lines in world space map to straight lines in screen space"

Then you post-process the screen space with something like a fish-eye shader
to warp your image appropriately. Sure, you'll lose some resolution near the
borders, but the human eye won't care, because that's not in its
high-resolution area.
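
A minimal sketch of that kind of warp in Python (the equidistant-fisheye
mapping and the 60° half-FOV are illustrative assumptions; a real
implementation would live in a fragment shader):

```python
import math

def rectilinear_to_fisheye(x, y, half_fov_deg=60.0):
    """Map a normalized rectilinear screen coordinate (x, y in [-1, 1])
    to an equidistant-fisheye coordinate, as a post-process warp."""
    half_fov = math.radians(half_fov_deg)
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0)
    # True viewing angle of this sample under the rectilinear projection.
    theta = math.atan(r * math.tan(half_fov))
    # Equidistant fisheye: radius proportional to viewing angle.
    r_fish = theta / half_fov
    return (x * r_fish / r, y * r_fish / r)
```

The center stays fixed while the rectilinear image's edge regions get squeezed
into a thinner band of the output, which is exactly the resolution loss near
the borders mentioned above.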

~~~
iaml
The problem is without eye tracking, nothing stops you from just looking at
the borders and seeing artifacts.

~~~
rtkwe
When I was using VR a lot, I just kind of stopped looking with my eyes outside
of a small area around the middle and mostly used my neck. When looking
towards the edge doesn't look great, I think most people adapt and just start
subconsciously working around the limitations. Maybe it partially comes from
being a glasses wearer almost all my life (since around 7-8), so my vision is
already basically useless for detail around the edges, but it wasn't a huge
jump for me.

------
alphakilo
Very happy to see the HoloLens improving. I'm hoping this can bring growth in
labour jobs that require technical skill for individuals who do not have any
high-level training (e.g. electricians or mechanics). From a safety
perspective, there is a lot that can be learned in environments like power
plants, oil refineries or factories.

~~~
djrogers
Why would you assume that electricians and mechanics don't have any 'high-
level training'? Is it just because they didn't get a BS from Stanford?

Tech schools are a thing - and many of them are very good. And apprenticeship
has been around for millennia, and is a wonderful way for someone to learn a
trade or craft.

~~~
EamonnMR
I think the parent's point was that unskilled people could perform those
tasks. For example, I could look at my breaker box with a HoloLens and it
could shade in the deadly parts.

~~~
SiempreViernes
And the convenience of AR over an annotated picture is what makes the former
much safer than the latter?

~~~
papa_bear
I would think so, when you consider the lack of convenience involved in the
following steps:

\- Realize that you should try to research which parts of your breaker box are
deadly in the first place

\- Find the model number on the breaker box

\- Google it, and sift through the results to find relevant information about
which parts to touch/not touch

The mental hoops you have to jump through would convince most people not to
try at all, and just call up a professional. An annotated picture would be
just as good, but having the hololens understand what you're looking at and
the context of your situation (no doubt a difficult set of problems to solve)
would make a huge difference.

~~~
SiempreViernes
Yes, those are all steps needed to get your AR glasses to superimpose an
annotated picture on top of your breaker box; unless you just postulate that
the glasses come with such software preinstalled and the manual for the box
doesn't have annotated pictures.

~~~
Ari_Ugwu
I think this is why MS is starting with the corp/industrial space. The
comments I see here (for better or worse) do demonstrate the kind of uphill
battle MS would experience when dealing with consumer critique and
expectations.

I would postulate _exactly_ that the glasses would come with software
preinstalled, similar to how YouTube videos supplement so many instruction
manuals today. Even then, it's not directed primarily at the person who wants
to repair a breaker box, but instead at the technician who would _install_ it.
Though once the form factor is as ubiquitous as cell phones, I'd imagine every
support service will have an offering. I bet every Verizon tech would love to
see exactly what Grandma is looking at when asked to reset her router. Which
_then_ leads us to the opportunity for first responders to assist one another
remotely. It's a value multiplier when we reduce time and increase quality in
the same swing.

Look at the story about Azure Kinect helping to reduce hospital bed falls from
11,000 a year to 0. We'll see similar reductions in house fires due to rushed
wiring jobs and in defects in nearly any manufacturing process.

Remember that this is 'augmented' reality. What I hope isn't lost on folks is
that the thing we're augmenting is ourselves. Granting a super human level of
awareness and cognitive ability.

tl;dr - This is Tony Stark level tech.

~~~
gmueckl
You sound pretty hyped about AR. The Hololens seems to have considerable
shortcomings. But I agree that eventually, AR can do a lot of good. I believe
that truly intelligent AR headsets could support, for example, certain people
with mental disorders enough in their daily lives to make them much more
independent and free. These devices operate in real-world contexts and can
record them, process them and act on them. For starters, imagine an assistant
that remembers perfectly where you left your car keys or your glasses while
you are still half asleep in the morning. In the long term, these devices can
do much more. The flipside is that they need always-on cameras, and maybe
always-on microphones, to be useful. This has ramifications of its own.

------
Legogris
I wonder how text rendering performs. If it's good, a Hololens 2 IDE would be
an improvement over a laptop for working while traveling.

~~~
stuntkite
Text readability in specific contexts was passable on the original, so I think
this should be pretty good! The problems are going to be more that it's not
quite a desktop window manager yet; it's not designed to be a screen
replacement. That is coming in the next couple of years, but the bumps you'll
experience in making it your environment involve implementing and establishing
paradigms that don't exist yet, or are maybe available as a collection of
weird bit-rotting tech demos.

That said, you could probably wire it up to terminal and split out movable
windows with a tmux wrapper. Impress your friends and strangers while you hack
the Gibson, then take it off and use your laptop screen because you’re
frustrated.

~~~
thom
It always surprises me how much these technologies try to run before they can
walk. Giving people a great desktop experience equivalent to multiple high-end
monitors just seems like the obvious first baby step. Everything else seems
like fluff if they can't get that to be desirable.

~~~
stuntkite
You would think that, but as someone who has been in the Natural User
Interface R&D field for a decade, I'll tell you that this is what walking
looks like. Our tools as of this year are just on the edge of being able to
deliver what a consumer expects. Adding a 3rd dimension makes everything
harder, and none of the existing assets, let alone functional principles, from
the 2D world translate.

To get what you just called a baby step is an act of barn raising by thousands
of dedicated professionals, tinkerers, artists, scientists, and large
corporations. If we could have willed the "baby step" into being before now,
we would have. Actually, many people have, but then they quickly notice what's
lacking and get back to raising the barn.

Here are a couple projects I pulled with a quick google. I have a bitbucket
wiki with dozens of similar projects, some in AR, VR, projection mapped,
single user parallax, with gesture tracking, with custom controllers, etc...
This is definitely one of those problems where it's easy to imagine, so people
imagine it's easy. Maybe we need you to help! Get a spatial display of some
sort and let's get to work!

[https://github.com/SimulaVR/Simula](https://github.com/SimulaVR/Simula)

[https://github.com/letoram/safespaces](https://github.com/letoram/safespaces)

~~~
thom
Yes but I'm saying _don't_ add a 3rd dimension in any but the most basic
sense. Just give me what is effectively a very large 2D workspace, as if I was
staring at one or more nice big monitors. No fancy metaphors, no gestures -
with Hololens I can still touch type and use my existing mouse.

I am a curmudgeonly luddite, obviously, but I would pay thousands of dollars
just to have a multi monitor setup that required zero effort and worked on the
move (to the extent that I'd be willing to go out with the equivalent of a
Segway attached to my head).

If _that's_ still impossible (i.e. it's impossible to accurately and quickly
orient the device in space, even in the controlled environment of my desk, or
it's just not possible to display text that's nice to read on these devices)
then I don't see the point of even attempting the more esoteric stuff as
anything but pure research. Clearly several multi-billion dollar companies
disagree, so I'm happy if all this comes to pass either way.

------
chaosbutters
My biggest question about it isn't FOV, resolution, etc., but how long it
takes to make content for it. A fraction of my job involves AR/VR content
creation, and I don't have days to program in clever features. Everything
needs to be drag-and-drop, 1-click solutions with a few hours of programming
to tidy things up, since not only am I making content for VR/AR, but then I
have to go sell its business use.

~~~
BSVogler
I saw a demo of a company working on this in 2018 at Automatica in Munich.
They built an editor for easy setup of training software using the HoloLens.
It looked really promising. Unfortunately, I cannot remember the name.

------
k__
I think it's like with VR, but an order of magnitude more expensive right now.

Hololens is the Vive/Rift of AR, and the Vuzix Blade is the Focus/Quest of AR.

No end user will buy a Hololens if it looks like that and costs thousands.

But something like Google Glass, good-looking, with more power and good voice
control, could really replace the smartphone in the future.

Still, the Hololens will sell to enterprise customers and AR play-rooms where
you can use one for a small fee for an hour (like we have VR rooms right now).

So I don't expect much more from new Hololens generations than evolutionary
improvements.

~~~
juniper411
I could see smartwatches like Apple Watch integrating well with smart glasses,
somehow, in the future.

------
teraflop
> Hololens 2 now has a 52° diagonal FOV and a 3:2 aspect ratio - so 43°
> horizontally and 29° vertically. [...] but it IS more than 4x the area
> (525°² vs 2236°²)

It looks like the author got confused and multiplied the wrong numbers. 43 x
29 = 1247, so this is about a 2.4x increase in area.

~~~
greeneggs
The Pythagorean theorem only holds on a flat surface, not on the inside of a
sphere. For small angles, it won't make much difference, but these angles are
big enough to matter.

Using this solid angle calculator to compute the solid angle covered by a
rectangle [1], I get that a 43° x 29° rectangle subtends 0.335 steradian (sr),
while a 30° x 17.5° rectangle subtends 0.151 sr, making for a 2.2x increase in
solid angle.

However, the numbers 43° and 29° apparently come from applying Pythagoras to
the 52° diagonal field-of-view (fov). That's also incorrect, and I haven't
done the math to correct it. (As an extreme case, for example, a 180° diagonal
fov gives 180° horizontal and 180° vertical fov, so Pythagoras clearly breaks
down.)

[1]
[http://tpm.amc.anl.gov/NJZTools/XEDSSolidAngle.html](http://tpm.amc.anl.gov/NJZTools/XEDSSolidAngle.html)
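
For reference, one common closed form for the solid angle of a
rectangular-pyramid FOV, sketched in Python (the exact figure depends on how
the rectangle is defined on the sphere, which is why this differs slightly
from the linked calculator):

```python
import math

def rect_solid_angle(h_deg, v_deg):
    """Solid angle in steradians of a rectangular-pyramid FOV with
    full horizontal/vertical apex angles h_deg x v_deg."""
    a = math.radians(h_deg) / 2.0
    b = math.radians(v_deg) / 2.0
    return 4.0 * math.asin(math.sin(a) * math.sin(b))

hl2 = rect_solid_angle(43, 29)    # ~0.37 sr
hl1 = rect_solid_angle(30, 17.5)  # ~0.16 sr
print(hl2 / hl1)                  # ~2.3x, close to the ~2.2x above
```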

------
lwansbrough
I left this comment in a prior HoloLens-related thread on HN the other day,
but I'd like to reiterate it (so I'll just copy/paste it) so I can get some
more opinions on the concept. I'll add that the functionality I describe below
would have limited use cases without a GPS or a device that's really
useful/wearable outdoors and available to the public. But I think this should
be the long-term vision for holographics, as opposed to "apps" that run
individual experiences.

> We need a search engine for holographic layer services. It would be like
> Google, but for MR experiences. Holographic services would use a protocol
> that defines a geofence to be discovered by the layer search engine's
> crawler over the internet (this could just be a meta tag on a classic
> website). The HoloLens or whatever MR device would continuously ping the
> search engine with its location, and the results would be ordered based on
> their relevance (size of geofence and proximity are good indicators). The MR
> device would then show the most relevant available layer in the corner of
> the FOV. Selecting the layer would allow enabling it either once or always,
> and the device would then deliver the holographic layer over the internet.
> The holographic layer would behave like a web service worker (in fact, it
> could be a web service worker) and would augment a shared experience which
> contains other active holographic layers. For example, your Google Maps
> holographic layer could be providing you with a path to walk to the nearest
> Starbucks, and once you're outside Starbucks, the Starbucks layer is also
> activated, which allows you to place an order.

> This concept of activated layers, I think, is a great way to avoid a future
> where we're being bombarded with augmented signage and unwanted experiences.
> In fact, you could go further and enable blocking notifications about
> specific/certain types of available services. (ie. don't notify me about
> bars or fast food restaurants.)

I also think this could have applications within intranet environments in
corporate/enterprise contexts: several teams could each develop their own
layers used for different purposes. That would make something like this
worthwhile to pursue today, seeing as HL2 is solely targeting those
environments for now.
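
The relevance ordering sketched above (tighter geofence plus closer proximity
ranks higher) could look something like this; the scoring function, weights,
and layer names are purely illustrative assumptions:

```python
def relevance(distance_m, geofence_radius_m):
    """Hypothetical score: closer layers with tighter geofences rank
    higher. Both factors and their weighting are assumptions."""
    proximity = 1.0 / (1.0 + distance_m)
    specificity = 1.0 / (1.0 + geofence_radius_m)
    return proximity * specificity

# (layer name, distance to user in m, geofence radius in m)
layers = [
    ("starbucks-order", 10, 50),      # right outside the store
    ("city-walking-map", 10, 50000),  # same spot, city-wide geofence
]
ranked = sorted(layers, key=lambda l: -relevance(l[1], l[2]))
print(ranked[0][0])  # the tightly geofenced Starbucks layer wins
```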

~~~
bluesign
It doesn't matter if it is 'layers' or 'apps'; basically, the problem begins
when you have more than one layer/app at a time.

So I guess for a long time we will be stuck with one app at a time, and some
kind of manual switching.

Also, there is a privacy angle: I don't think anybody wants to ping their
location all the time. Maybe some QR code/beacon solutions can help, but I am
not very optimistic there either.

~~~
majewsky
> Also, there is a privacy angle: I don't think anybody wants to ping their
> location all the time. Maybe some QR code/beacon solutions can help, but I
> am not very optimistic there either.

Maybe you could have something like K-anonymity? You discretize your location
into chunks of maybe 100x100 m, take the hash of the chunk ID, and request
results for all chunks whose hash starts with the same 4 or 5 digits as your
chunk's hash.

This would probably require some more thinking, since a consecutive series of
requests for adjacent chunks could be used to uncover which particular chunks
you were interested in.
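
A rough sketch of that idea in Python (the ~100 m cell size, SHA-256, and
5-hex-digit prefix are all illustrative assumptions):

```python
import hashlib

CELL_DEG = 0.001  # ~100 m of latitude per grid cell (assumption)

def chunk_id(lat, lon):
    """Discretize a location into a grid-cell identifier."""
    return f"{int(lat // CELL_DEG)}:{int(lon // CELL_DEG)}"

def query_prefix(lat, lon, prefix_len=5):
    """The client sends only this hash prefix; the server answers for
    every chunk whose hash shares it, so it can't tell which of those
    chunks the client is actually in."""
    digest = hashlib.sha256(chunk_id(lat, lon).encode()).hexdigest()
    return digest[:prefix_len]
```

The client then filters the returned results down to its own chunk locally,
the same trick Have I Been Pwned uses for its password-hash range queries.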

------
germinalphrase
While the face blocking of a Hololens would be a nonstarter for this role, as
a high-school-level teacher I think an AR toolset for instructors could be
extremely useful.

For instance, it's always more efficient to intervene/support a student in the
moment they are having trouble or are ready for the next idea, but there's a
natural limit to how much information about a class full of students is
available/understandable; better tools for allowing me to know how my students
are engaging/progressing/struggling/succeeding _while they do the work_ would
be wonderful.

------
lordnacho
I've got a question about AR goggles. If someone knows, please tell me:

My main use case would be to free me from sitting by a screen. I want to be
able to access mainly Slack, a browser and some IDEs so I can casually do some
code review or chat to team members as I do some chores around the house. I'll
need notifications as well. But mainly I want to be able to read stuff while
being mobile.

Are there AR goggles that will let me do that?

~~~
pault
Not yet, in the way you describe.

------
ddmma
There are many enterprise applications for Mixed Reality use cases, and this
product is targeted at them (and at the military, as MS employees tried to
argue). Nevertheless, MWC19 was for China and Huawei to position themselves in
front of EU telcos and bend imaginations with foldable phones; it was
difficult to find the Hololens in between the 5G logos everywhere.

------
byw
I don't know if it's possible with the Hololens, but one thing I thought would
be pretty neat is a flight sim with physical instruments you can touch: render
the world outside the plane on a gaming rig and project it in the Hololens as
a video stream where the windows would be.

------
danschumann
If someone could figure out a way to bring more "real world" into people's
lives, it would probably do better than more digital; people are already full
of digital and want more real life.

~~~
knodi123
I wish I could improve the FOV on my glasses.

------
asasidh
Did Google Glass 2 matter? It was marketed for the enterprise/industrial use
case too.

What makes them think there is a market for these beyond gaming and game-like
scenarios like the military?

~~~
gmueckl
The device is marketed for industry applications; the presentation never
mentioned any other kind of application for it. The price point, at $3,500,
precludes gaming outright. The applications are in fields where contextual
information in the real world can make a difference. Consumer AR is a few
years off and might be something very different. The Hololens isn't that.

------
beezischillin
Does any of the Hololens technology trickle down into Windows Mixed Reality?

