Microsoft’s HoloLens 2: a $3,500 mixed-reality headset for the factory (theverge.com)
285 points by T-A 58 days ago | 143 comments



So does anyone think this is at the point where I can pick one up and do away with multiple monitors? (Hypothetically, I see they aren't selling to consumers yet)

I play a game called Elite Dangerous with my Oculus Rift, and the UX of being able to have holographic windows around my person that I can look at to activate and control is so insanely powerful and so freeing that I'm literally willing to drop multiple thousands of dollars right now on something that can give that experience with normal desktop software.

Sadly the Rift lacks any way to see the real world when it's on (which makes using a keyboard and mouse more difficult), and the resolution still has a ways to go before I can comfortably use it like this, not to mention the software still needs work for this use case.

But I can't wait until the day when I can get rid of the monitors on my desk and replace them with a headset: being able to place normal 2D windows around my person, maybe use some limited gestures to control them, and keep using the same keyboard and possibly mouse that I'm used to.


IMHO, what they need is to work on "ClearType for 3D" and/or "DirectWrite for 3D".

In the demo they use giant fonts, because text rendering sucks if you simply render text to a texture and then put that texture on a rotated plane in 3D. If that last stage were bypassed, so that when rendering text the renderer knew the rotation and distance and drew the vector font directly into the screen pixels, they could get it crisp enough for most apps.
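For what it's worth, here's a toy CPU sketch of that "skip the texture" idea (this is not how HoloLens or DirectWrite actually work): walk the screen pixels, intersect each eye ray with the text plane, and evaluate glyph coverage at that exact spot, so anti-aliasing happens at screen resolution instead of being baked into a texture that then gets resampled under rotation. Everything here is made up for illustration, including the hypothetical `glyph_coverage(u, v)` function that stands in for evaluating the vector outlines.

```python
# Toy sketch: rasterize a text plane by evaluating glyph coverage per screen
# pixel instead of texture-mapping a pre-rendered bitmap of the text.
import numpy as np

def render_text_plane(width, height, focal_px, plane_origin, plane_u, plane_v,
                      glyph_coverage):
    """Pinhole camera at the origin looking down -Z (an assumption for this sketch).
    plane_origin, plane_u, plane_v are numpy 3-vectors defining the text quad in
    camera space (plane_u and plane_v assumed orthogonal); glyph_coverage(u, v)
    is a hypothetical analytic coverage function returning a value in [0, 1]."""
    img = np.zeros((height, width), dtype=np.float32)
    normal = np.cross(plane_u, plane_v)
    cx, cy = width / 2.0, height / 2.0
    for y in range(height):
        for x in range(width):
            # Eye ray through this screen pixel.
            ray = np.array([(x + 0.5 - cx) / focal_px,
                            (cy - y - 0.5) / focal_px,
                            -1.0])
            denom = ray @ normal
            if abs(denom) < 1e-8:
                continue                  # ray parallel to the text plane
            t = (plane_origin @ normal) / denom
            if t <= 0:
                continue                  # plane is behind the camera
            hit = t * ray
            # Express the hit point in glyph space and sample coverage there,
            # so the anti-aliasing happens at screen-pixel granularity.
            rel = hit - plane_origin
            u = rel @ plane_u / (plane_u @ plane_u)
            v = rel @ plane_v / (plane_v @ plane_v)
            if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
                img[y, x] = glyph_coverage(u, v)
    return img
```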


This challenge is one of the reasons we're so excited about pcwalton's work on Pathfinder (https://github.com/pcwalton/pathfinder).


I'm ashamed to realize that despite being well aware of Pathfinder, and of Mozilla's somewhat surprising pivot toward "mixed reality", this connection had never occurred to me.


There's also Slug, which handles variable-resolution shading of text (http://sluglibrary.com/), though it's unclear how its performance would stack up against Pathfinder's in these scenarios. IIRC, when this was last discussed it wasn't clear whether Pathfinder would work well for variable-resolution text that isn't in screen-aligned boxes.


You can do that by rendering text yourself, particularly in a full-immersion app. Rendering distance field text is cheap and really easy to do properly in that situation, but I don't know if anyone has actually done it in practice.


SDF is great and all, but ClearType is a necessity if we want text as ubiquitous and cheap as in 2D computing. If SDF is the way to go, Microsoft should standardize it themselves so we don't need to pack texture sheets of every language of Arial into every app.


If anyone else is wondering what SDF is, a bit of googling found me this blog post: https://aras-p.info/blog/2017/02/15/Font-Rendering-is-Gettin...

It gives a pretty good summary and a lot of other vocab words you can google, and touches on the Pathfinder project


> full-immersion app

IMHO, that's another dead end (or rather interim tech), like full-screen apps or single-task environments (hello, DOS).


Gosh. I run all my apps full screen. The only thing I need to switch to is a web browser to look stuff up. I never have more than one window visible at once.


I don't think I run anything full-screen. I am irked by not being able to see the other things I need while working (files, time tracking, notes, email, FTP, etc). But maybe it depends on whether you have a single screen or more than one? I have four displays (MacBook Pro plus externals: 34" curved, 26" landscape, 26" portrait) and 14 different app windows currently visible in one way or another. I feel very cramped if I use the laptop on its own.


How big is your monitor? When I am on my laptop I run everything full screen. When I am on my desktop I have two 20" monitors and I regularly have multiple windows open on each. I have found that splitting one monitor vertically, with the code I am actively working on and Firefox for looking things up, works really well for me.


I have a 27" monitor, and 90% of the time, it shows a single maximized app, with the browser second in the Alt+Tab stack (so you can switch back and forth just pressing it once).


Does that not make for either a lot of wasted space, or ridiculously wide text?


For the browser, it does (wasted horizontal space), but you can also rephrase it as "less distraction".

Most productivity apps, however, have their own way to subdivide space within the app efficiently. E.g. in the IDE, there are various panes and tool windows. Put a file browser on one side, a documentation browser on the other, and a terminal pane on the bottom, and it actually starts feeling kinda crowded.


This guy is either trolling or he's the UX designer behind Windows 8

Either way he should be ashamed of himself!


Doing exactly this with a tiling windows manager was a game-changer for me, and I can't imagine going back to an "Alt-Tab" environment.


Agreed. Just as windowing systems allowed sharing the 2D screen, AR/VR systems will need to be able to share 3D space. I need my mapping app showing the way at the same time as my social networking app is popping up info about the friends in my view, at the same time as my Pokemon-Go-style AR is showing the latest Pokemon hiding in the bushes, at the same time as my restaurant app is showing which restaurants have seats right now (or whatever).

Sure, some people will want to tune out / turn off all other apps from time to time but the ability to run them together is really key it seems.

I wish browser VR supported popping 3D things out of the screen. Currently browser VR is: click a button and the browser displays that page's VR presentation in your headset. But run one of the many VR virtual desktops that show your computer's desktop as a virtual monitor in VR, and suddenly I want a standard so that a webpage can create a 3D object that pops out of the virtual monitor. Ideally, like browser tabs, I could pop out lots of different VR apps/pages/etc., all running at the same time, all integrating in the same 3D space.


I heard two and a half talks at last year's BUILD with the folks working on _Windowing_ in Windows about the directions things are headed, and the really interesting stuff was all the "we can't talk about it in detail yet, but" hints, almost all of which were about what it even means to be a window in 3D and how things interact in that sort of space. (Not just for today's version of Windows Mixed Reality, but also potentially how that shakes up all of Windows' base windowing capabilities, whether 3D or 2D, and how it may drive even 2D windowing going forward, depending on how things shake out.)

(Something I don't think a lot of people notice, too, is how important 3D is to the Fluent Design System, and not just for Material-Design-style reasons of faking paper stacks for visual interest in 2D, but probably because far more of Microsoft is taking 3D seriously than is obvious right now, given their intentional "no hype" approach.)

It certainly sounds like the Windows team has been thinking about all of this sort of stuff for years now, and I'm very curious to see a lot more of it come to light rather than just "we're thinking about it". This article also hints at some more of it getting released as real services SOON™.

I'm presuming we'll hear a lot more about Azure Spatial Anchors at this year's BUILD, and that could be quite interesting based on the hints in this article.


Personally, I like the idea of doing everything of mine through a full-immersion app. But my ideal use-case for these is also to just have windows from my desktop anywhere in space, which isn't exactly the norm.


Subpixel AA for VR/AR helps, but it's nowhere near enough. You also need better panels and better optics.


The reason why ClearType works so well on low-rez LCDs is that pixels are made of 3 RGB subpixels with guaranteed shape, size and layout.

Based on the parts of the video where they mention display technology, ClearType is probably out: they're projecting lasers, and I don't think there are subpixels.
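For context on why the subpixel layout matters, here's a toy version of the trick (not ClearType itself, which also filters across subpixels to limit color fringing): glyph coverage is sampled at 3x horizontal resolution and the three samples drive the R, G and B channels of one output pixel, which only gains anything if the panel physically has RGB stripes in that order. All function names here are made up for the example.

```python
# Toy subpixel AA: route three horizontal coverage samples into the R, G, B
# channels of one pixel. This assumes an RGB stripe panel, which a
# laser/scanning display doesn't have.
import numpy as np

def subpixel_downsample(coverage_3x):
    """coverage_3x: (H, 3*W) array of glyph coverage in [0, 1].
    Returns an (H, W, 3) image where each channel gets its own sample."""
    h, w3 = coverage_3x.shape
    w = w3 // 3
    rgb = coverage_3x[:, :w * 3].reshape(h, w, 3)
    # Black text on white: full coverage turns that subpixel off.
    return 1.0 - rgb

def grayscale_downsample(coverage_3x):
    """Grayscale AA for comparison: average the three samples, giving up the
    extra horizontal resolution."""
    h, w3 = coverage_3x.shape
    w = w3 // 3
    return 1.0 - coverage_3x[:, :w * 3].reshape(h, w, 3).mean(axis=2)
```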


Yet many of us turn off ClearType because the subpixel artifacts are worse (subjectively) than the increase in resolution.

Ironically I feel that subpixel rendering works better (and becomes worthwhile) in high-dpi displays where the artifacts aren't as noticeable.


On Windows, subjectively, it works OK for a range of resolutions: 15.6", 13.3", 4.5" FullHD, 22" 1200p, and 27" 4k.

I also have an 8" 1280x800 Windows tablet which doesn't look good regardless of the setting, but it was very cheap, something like $120.


Fonts being defined as a set of Bezier curves, you'd have thought it would be quite easy to write a shader where each fragment computes its distance from the curve and assigns a smoothstepped color as a function of that distance.
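A deliberately naive CPU sketch of that per-fragment idea, brute-forcing the closest point on a quadratic Bezier rather than solving the closest-point cubic analytically; the function names and the sample count are arbitrary, and it mostly shows the shape (and cost) of the computation:

```python
# Naive per-fragment distance-to-curve shading (CPU stand-in for the shader idea
# above): brute-force the closest point on a quadratic Bezier.
import numpy as np

def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at an array of parameter values t."""
    t = t[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def fragment_alpha(frag_xy, p0, p1, p2, aa_width=1.0, samples=64):
    """Anti-aliased coverage for one fragment: min distance to the curve pushed
    through a smoothstep over an aa_width-pixel transition band."""
    ts = np.linspace(0.0, 1.0, samples)
    pts = quad_bezier(np.asarray(p0, float), np.asarray(p1, float),
                      np.asarray(p2, float), ts)
    d = np.min(np.linalg.norm(pts - np.asarray(frag_xy, float), axis=1))
    x = np.clip(1.0 - d / aa_width, 0.0, 1.0)
    return x * x * (3 - 2 * x)   # smoothstep: 1 on the curve, 0 one pixel away

# Doing this for every fragment and every curve segment of every glyph is why a
# precomputed distance field (see the SDF discussion below) is the usual trade.
```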


This is kind of expensive, actually. It's usually done with a precalculated texture sheet of glyphs, which trades memory for shader perf. See: Signed Distance Field


OK, but a precomputed texture isn't scalable.


When it's an SDF texture it is scalable, because the distance function produces continuous data. Aliasing is greatly reduced because the data is easily calculable from neighboring pixel samples.

Edit: Here's Valve's Siggraph paper on the technique https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...
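The sampling side of the technique is tiny once the field exists. A rough sketch (the distance field is assumed to be precomputed offline into a glyph atlas, and the function names and the `softness` parameter are just for illustration):

```python
# Sketch of the sampling side of Valve's SDF technique: bilinearly sample a
# precomputed signed-distance texture and smoothstep around the zero-distance
# iso-line, which stays crisp under scaling because distances interpolate smoothly.
import numpy as np

def sample_bilinear(sdf, u, v):
    """sdf: 2D numpy array of signed distances; (u, v) in [0, 1]."""
    h, w = sdf.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = sdf[y0, x0] * (1 - fx) + sdf[y0, x1] * fx
    bot = sdf[y1, x0] * (1 - fx) + sdf[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def sdf_alpha(sdf, u, v, softness=0.02):
    """Signed distance is negative inside the glyph; 'softness' is scaled so the
    transition band spans roughly one output pixel."""
    d = sample_bilinear(sdf, u, v)
    x = np.clip((softness - d) / (2 * softness), 0.0, 1.0)
    return x * x * (3 - 2 * x)   # smoothstep: 1 inside, 0 outside
```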


Perhaps Pathfinder mentioned above is implementing exactly that, but I am not familiar.


DirectWrite is very extensible, so you can always implement an IDWriteTextRenderer and do all the magic stuff in it.

Link: https://docs.microsoft.com/en-us/windows/desktop/directwrite...


I read that the HoloLens 2 has a better pixel density of 47 pixels per degree of visual field. I'm not sure I'm recalling the units correctly, and I have no idea if that's enough to render good font quality at all levels of presentation.


Handy calculator:

http://phrogz.net/tmp/ScreenDensityCalculator.html#find:dist...

Looks like it's probably similar to using a 27" monitor on your desk, depending on the resolution/size of the display
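A rough back-of-the-envelope version of what that calculator does, assuming a 27-inch 2560x1440 monitor viewed from about 60 cm (both numbers are my assumptions, and the function is just for illustration):

```python
# Back-of-the-envelope pixels-per-degree for a desktop monitor.
import math

def monitor_ppd(diag_inches, res_x, res_y, distance_cm):
    diag_px = math.hypot(res_x, res_y)
    px_size_cm = diag_inches * 2.54 / diag_px        # physical size of one pixel
    # Angle subtended by one pixel at the eye, in degrees.
    deg_per_px = math.degrees(2 * math.atan(px_size_cm / (2 * distance_cm)))
    return 1 / deg_per_px

print(round(monitor_ppd(27, 2560, 1440, 60), 1))
# ~45 PPD, in the same ballpark as the 47 PPD quoted for HoloLens 2.
```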


47 pixels per degree (PPD) would certainly be good enough to emulate a desktop monitor. In reality, 35 PPD is sufficient, albeit only capable of emulating low end monitors. 62 PPD is roughly the limit of human eye resolution for a general static image.

I would be surprised if this HoloLens is up to 47 PPD, but to my pleasant surprise MS did make a big leap (no pun intended) in optics this go-round. So, I suppose it's possible. In my opinion, they are definitely (finally) on the right track in pursuing a virtual retinal display (i.e. laser projection onto the retina).


I thought the pixel density was the same as the HoloLens 1's, but the display is bigger.


I think we also need way higher res HMDs than today, like at least 2K per eye


I picked up a Magic Leap a month or two ago with the goal of doing this. I'm building a Wayland compositor that forwards its rendered windows over to a headset, so you can have an arbitrarily large 'display'; windows can be positioned anywhere in space.

The ML1 probably isn't good enough for actual use, due to the text rendering, but I think the HL2 will be. I had the HL1 and its text rendering was plenty good, but the FOV killed it; the 2 should fix that.


I found a simple 55" curved 4K TV to be an excellent "desktop" where I can arrange all my windows around me. No, it's not virtual or 3d. Yes, I can lay out everything I'm using with no overlap. It turns out our "desktop" GUIs have failed to follow the metaphor very well - they've got too many features that make assumptions based on small monitors. I still like it though.


In terms of keyboard/monitor, Logitech is aiming to solve it.

https://www.youtube.com/watch?v=XVXvk1X1Gbs


Nope. It is nowhere near clear and contrasty enough for that, and the eye strain you get from wearing these displays for a long time would make you get the monitors back in a hurry.

It may work if all you need is playing a game for an hour or two but if you need to actually work for hours on a lot of text or graphics, the technology is not anywhere close yet.


I already somewhat routinely spend multiple hours playing text-heavy games in it, and while I'm sure it would be a problem if I spent my whole work day in it, I really feel that resolution is the only thing that would need to be bumped up to stop the little strain I do feel.

That's what annoys me so much. I feel like the tech is already here, but nobody is putting together all the right parts for some reason. We have high enough res displays (IMO), we have the tracking, we have the "AR" aspect, and we have the ability to project screens into the world around you, but nobody is putting all those parts together in one package!


One part of the package imo: the ability to type without a physical keyboard (just using the finger movements)


People are trying. Apple is very likely to come out with “Lens” in 2020, and Magic Leap still exists.


This looks like what we're waiting for. They almost look like regular glasses:

https://www.redsharknews.com/vr_and_ar/item/6168-these-smart...

"Still in the lab, but given a demo at Mobile World Congress, these glasses actually deliver 4K per eye and yield a 120 degree field of view (FoV)."


The 2x field of view will still be smaller than most anticipate, so it's likely still not suitable for anything outside of commercial applications.

I was fortunate enough to try the v1 of HoloLens when a Microsoft representative brought it into my work; the tech was amazing but the FOV was far less than I anticipated. From what I can recall it was akin to looking through a tissue-box-sized area roughly 1 foot from your face.


Both HoloLens and Oculus are far from being replacements for traditional displays (i.e. for programming).


Yes, but why? What is the limiting factor here, because from my perspective it's only missing a few small things on each platform that would solve it.

I'd be honestly happy with my Rift as-is if they bumped up the resolution a bit, and had some way to see the real world with the headset on (even camera passthrough is fine!).

I've used virtual desktop apps in the Rift, and they aren't bad to use, and playing games like Tabletop Simulator is so intuitive and nice that I'm floored that this hasn't been tried commercially yet.

I assume there are some pretty significant hurdles that I'm missing, or maybe the market is still hilariously small at this point, but it just seems so close!


The resolution is way too low. Needs to be 4-16x greater for desktop replacement.

Also the headsets are too hot and bulky to comfortably wear for a full workday.


How old are you if you don't mind me asking? I'm pushing 50 and the real world is rather lacking in resolution nowadays.

I solve this by leaning forward or back. In VR/AR the same thing works.


I'm your age. I find my oculus totally useless for real world work. I'm used to a 4k monitor as a desktop.


I'm used to a 14" laptop + 21" monitor (resolution doesn't matter), which together occupy maybe 20% of my field of view while using them. If I could take the same quantity of information and arrange it arbitrarily in my environment, eliminating the need for any monitors, it would be a win even if the text had to be larger.


You'd need to move your head a lot to look at stuff. Gets old quickly. Try out the VR desktop on oculus and you'll quickly discover the need for much more dense pixels.


4-16x is an exaggeration. I've tried headsets that are 2x higher resolution than the Rift (i.e. 2k x 2k per eye), and I could view text the same size as my monitor renders.

No doubt 4-16x resolution would be nice, though. True "retina" screens are still at least a decade away.


But would it need to be a true "retina" screen? The Hololens has cameras that track where your eyes are focused, so the laser image could have a "fovea" that moves with your eye movements so as to retain a high focus region just at the spot you're looking at.


That solves the graphics card horsepower issue. You still need a lot of pixels even if you're not driving all of them at high resolution all the time.


Depends on your preference I suppose. For me it needs to be at least 4x the density of the oculus.


Back in the day, all monitors were 4-16x less resolution than yours, and we loved them.


Indeed. I was back in that day too. Have no interest in going back to 1080p let alone VGA for daily work.


For Oculus to be useful as a traditional display it would probably need a higher resolution (i.e. 8K per eye); it also needs to be wireless and lightweight, and possibly use the camera to blend in the outside environment so that you don't feel trapped in the Matrix while you work. It would probably take 5+ years to get there.



But we have the tech to do that right now! Maybe 8k per eye is outside our current capabilities without being ludicrously expensive, but 4k should be doable!

And the rest already exists in other products!


4K is not good enough. What you need is 8K or even 16K (whenever it becomes available) per eye. Otherwise you'd better stick to a traditional display.


Feature parity. Having one interface that is trying to keep up with and surpass basically all others is just setting up something that will take a long time to succeed.

Note, I am not claiming it won't succeed. Just that it will take a long time.


eye strain.


But imagine a Smalltalk-like environment on steroids with that sort of 3D environment. Still need a virtual keyboard or voice-input for the actual code, but the ability to place Smalltalk GUI elements anywhere in your visual field would be interesting.


https://en.m.wikipedia.org/wiki/Croquet_Project is basically what you described. Dynamicland also has similarities. Neither are mixed reality like Hololens but long-term I think it will be the application model that wins for MR.


I always thought croquet/cobalt would be a nice fit for vr:

https://en.m.wikipedia.org/wiki/Croquet_Project

http://www.opencobalt.net/


I actually can't imagine this. As someone with aphantasia, this level of graphical immersion sounds off to me.

What's more, this will have less rigor than standard 2D graphical builders. And those don't exactly have a great track record. :(


I used the HL1 as my full-time computer for a while. The display itself was actually fine -- the text rendering was great, so it worked out well. The issues were 1) software (I built an app called VNCaster that was a decent implementation of VNC, but you were restricted to working within that window, which kind of killed a lot of the benefits), 2) FOV (way too small), and 3) comfort (it got uncomfortable quick -- 2-3 hours was about my cap).

I think that with the HL2, many of these are going to go away. I can't speak to the comfort, but I know that the first two are now non-issues for me.


Be wary of HL1 and HL2 as abbreviations. I'm already on a hair trigger hoping for HL3 (Half Life that is...)

It's only a few extra characters, and your saving on typing breaks my Ctrl+F and web searches. :-)


Fair point, thanks!


FOV was the main problem in my opinion as well. If that's taken care of, it will be significantly more impressive than the first HoloLens.


I'm awaiting a time where I can do this as well. My issue so far has always been the loss in resolution due to the subsampling required.

You can try a service like this: https://immersedvr.com/ to try out just piping your desktop to an Oculus Go. There is a bit of a latency which will disappear with something like the Hololens.

However, you'll notice that they are taking a 1080p desktop and sampling it onto the Oculus (or HoloLens) spherical glass. That loss in resolution is quite noticeable.


Greetings, Commander. Just commenting to say that the VR experience in E:D is indeed incredibly good. This is coming from just using an ultra-cheap Lenovo Explorer, headset only, which I recently purchased. The fidelity of the tracking as you stand up from your chair and walk around the room astounded me, especially as you lean in on the extra panels.

I definitely agree with your points on using AR for real work, it has amazing potential.


What you would be looking for is the Pimax 8K [0]. Unfortunately it does not support native 8K output due to limitations on the input. Hopefully they will release something soon with true 8K support, as I am in the same boat as you.

[0]:https://pimaxvr.com/pages/8k


I don't have a VR rig, but do have the game. What's the difference in experience like between non-VR vs. VR?


It's amazing how well it translates to VR. You can feel the size of planets and stations in a way that you can't even describe.

And things like the left and right menus you just look at to activate and control, and when you look away they deactivate. It's incredibly intuitive. There's also the effect of being able to turn your head to look out the various windows, and even lean forward to see out a bit more when you need to. Battle is amazing, as you can really get into a dogfight and spin your ship to keep the enemy in view and keep an eye on what they're going to do next!

It's pretty amazing, and it's the only way I play it now. I highly recommend a Rift even for just that game!


There are a bunch of VR setups claiming much better resolution. Here's one:

Varjo VR-1: https://varjo.com/

For me I need to ditch the headset. That thing covering my head is too heavy and makes my face sweat.

I wish I could get more into VR. I have a Rift, a PSVR, a Daydream. I've loved a few experiences but mostly I turn them on once every 3-4 months at this point. The quality experiences (for me) are few and far between.

Rant mode on: the new Oculus Home UI is horrible. It uses every button on the controller instead of, say, just one button and putting all the UI in VR. It's become hugely frustrating, pressing the wrong button or having to test every button to figure out which one does what. It's like it was designed by an Xbox dev instead of a VR dev.


SimulaVR/Simula is about to release a working prototype for VR Linux Desktop with the HTC Vive. See https://github.com/SimulaVR/Simula. Check it out at the end of next month.


I don’t want to use a keyboard and a mouse. Can we move past that too?

In fact, I’d like to get past being tied to a desk.

In this article on HN a few days ago, Stephen Wolfram had a laptop strapped to himself so he could work while on walks. We would all benefit from getting away from our desks a little more often.

https://news.ycombinator.com/item?id=19220889

Direct link to Wolfram’s blog:

https://blog.stephenwolfram.com/2019/02/seeking-the-producti...


I love how he just casually has an Iridium satphone.


Iridium side note -- I was testing a RockBlock (rock7) recently. They are really easy to use, ~$200. Awesome that this stuff is getting more approachable.

http://www.rock7mobile.com/products-rockblock

https://www.sparkfun.com/products/13745


Interesting.

It's $180 for the module, then $11/mo and $12 for 100 credits (5kb of data transfer). So getting up and running would be about $312 for the first year ($180 module plus $132 of subscription), before credits, for what would be at most a location beacon (6B per 24-digit location transfer).

Not at all unreasonable for how much of an offline system this is.


I agree. What's interesting is that it's compatible with most devices/libraries (Arduino, etc.), compared to using a consumer-level Garmin device with SMS/data functionality that requires a contract (to be affordable, at least).

Also, even though location info is typically the main feature, I believe it’s included automatically in the msg envelope (a kind of satellite caller id), which means you can use that more expensive bandwidth for your own requirement, while still getting the lat/lon metadata.


There's a VR program that simulates multiple monitors called 'Virtual Desktop.' I haven't tried it but it sounds pretty cool, if only as a proof-of-concept.

https://store.steampowered.com/app/382110/Virtual_Desktop/


We got decent VR, but meanwhile screen technology has also advanced. You can get 3-4 really good monitors for that price.


I had the same thought, and my idea would be to buy the Pimax 8K (or 5K). I read that the resolution is high enough to do desktop stuff with it. I don't have the money to buy it and the problem doesn't bother me too much, but you could try it.


I'm waiting for the time when you don't even need a headset.


do you look at your keyboard and mouse that often, or is it just an uncanny feeling when you can't see them in your periphery?


No, it's not an uncanny valley thing (and I can touch type; my keyboard doesn't even have symbols on it!), but it shocked me how much that little bit of peripheral vision helps with things.

The game I play most with the Oculus Rift can use a keyboard and mouse (and a flight joystick!). At least a few times an hour I'll blindly find the home row on the keyboard by touch, only to realize too late that I've put my left hand on fghj instead of asdf. And finding a mouse blind after taking your hand off it is really hard! Especially when trying to do it without bumping it and accidentally clicking or moving it a lot.

I think some kind of visual "rough outline" of where those devices are in reference to the headset would help a TON.


I'm surprised that it's so difficult to accurately find the home row without visual feedback. Being legally blind, I've done it my whole life without thinking, and I don't recall it being hard to learn. But I suppose it might be hard to relearn after years of using sight to find the home row. Maybe a generation of kids raised on a VR headset and a keyboard would think nothing of it.


It may be partly the game's fault: you are in a spaceship with stuff around you that you need to physically look at to interact with (and by look at, I mean you need to point your head at it; there is no eye tracking), so I'll often take my hands off the keyboard, spin my head and use both hands on the joystick and buttons, then have to quickly find the home row again right after.

But I do think a part of it is that my brain has gotten very used to knowing the general location of the keyboard. So, for instance, I know the home row (for my left hand) is in the middle of the keyboard vertically, and on the left side horizontally. I'll quickly find the middle vertically by touch, but then I search for the nub on the f key next, and if I was only a few keys too far to the right, I'll find j instead and instantly assume I'm at the right spot, and it's quite jarring when it's wrong!

With sight, I'm able to know and move my hand toward the right hand side without feeling around more than a key in any direction for the most part.


The Vive gives a rough tronlike outline (or even a transparent camera overlay) that solves this for me (a backlit keyboard helps, too, so I can turn it mostly transparent and it still stands out). The outlines appear when you go outside the standing-room boundaries and my flight chair setup is right beside that area.


The problem with that is that only you will ever see your screen. Forget about having friends over.


That's mostly solved by keeping a screen around for others. I've found that sending the PC output of what I'm seeing in VR to a TV in the same room turns an isolating VR experience into a group activity fairly well.

And if AR ever becomes ubiquitous, the expectation would be that multiple people could wear the headsets and all see the same AR info.


For finger tracking, version 1 used random forests [1], because of the performance/hardware budget trade-off: they're harder to train than a traditional deep learning algorithm, but are much more efficient to compute on the device (branching being basically free on a CPU).

Version 2 uses a deep learning accelerator [2], which makes it possible to do the heavier computation of DNNs (which involve floating-point operations, which would be much more expensive on the CPU).

From an engineering perspective, I just love seeing how it touches all abstraction layers of the stack, and the types of solutions that come out of thinking about the silicon and the high-level ML models at the same time.

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[2] https://homes.cs.washington.edu/~kklebeck/lebeck-tech18.pdf
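A much-simplified illustration of why forest evaluation is so cheap per pixel (in the spirit of the Kinect-era depth-feature trees, not the actual HoloLens model; the function name, tree layout, and feature are all invented for the example): classifying a pixel is just a walk down a tree of threshold tests, so it's branches and a few loads rather than matrix math.

```python
# Simplified decision-tree inference over a depth image (illustrative only).
def classify_pixel(depth, x, y, tree):
    """tree nodes are either
    {'offsets': ((dx1, dy1), (dx2, dy2)), 'threshold': t, 'left': n, 'right': n}
    or leaf nodes {'label': hand_part_id}."""
    node = tree
    while 'label' not in node:
        (dx1, dy1), (dx2, dy2) = node['offsets']
        # Depth-difference feature: just two loads and a subtraction.
        f = depth[y + dy1][x + dx1] - depth[y + dy2][x + dx2]
        node = node['left'] if f < node['threshold'] else node['right']
    return node['label']

# A forest averages (or votes over) several such trees per pixel. A DNN replaces
# the per-pixel branching with large dense multiply-accumulates, which is why
# HoloLens 2 pairs it with a dedicated accelerator.
```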


Unless the paper specifically calls it out in a spot I didn't see, it's not necessarily the case that the DNN operations are floating-point. Some networks use FP16 or FP32 (it's my understanding that this is very common during training) but actual production use of a trained network can happen using int8 or int4. You can see this if you look at what the 'Tensor' cores in modern geforce cards expose support for and what Google's latest cloud tensor cores support. NV's latest cores expose small matrices of FP16, INT8 and INT4 (I've seen some suggestions that they do FP32 as well but it's not clear whether this is accurate), while Google's expose huge matrices in different formats (TPUv1 was apparently INT8, TPUv2 appears to be a mix of FP16 and FP32).

In non-DNN image processing it's quite common to use ints as well (iDCT, FFT, etc) for the potential performance gains vs. floating point.
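A toy example of the int8 inference path (post-training, symmetric, per-tensor quantization; nothing specific to HoloLens or to the NVIDIA/Google hardware mentioned above, and the function names are made up): the matmul accumulates in int32 and the scales are folded back in at the end.

```python
# Toy post-training symmetric int8 quantization of a matrix multiply.
import numpy as np

def quantize(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a_fp32, w_fp32):
    a_q, a_s = quantize(a_fp32)
    w_q, w_s = quantize(w_fp32)
    # Accumulate in int32 to avoid overflow, then dequantize once at the end.
    acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) * (a_s * w_s)

rng = np.random.default_rng(0)
a, w = rng.standard_normal((4, 64)), rng.standard_normal((64, 8))
# Error is small relative to the full-precision result.
print(np.max(np.abs(int8_matmul(a, w) - a @ w)))
```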


When you mention version 1 and 2, are you referring to the original hololens and the new one?


Yes. I'm referring to the original HoloLens and HoloLens 2, so to the hardware versions.


2x field of view (or possibly area of view, description could be read either way). Eye tracking. MEMS display. High grade low latency hand tracking. Improved set of default gesture recognizers. Spatial Anchors service for cross platform localizing (see same AR in same location on HoloLens, ARKit, AR Core). Some sort of geometry streaming service. Support for hardware customization (hardhats, etc). Also showed a separate stand-alone IoT Kinect + CPU/GPU hardware device.


I was disappointed to see Microsoft is still working with the concept of apps. I think we need to rethink how we start experiences in mixed reality. Pointing and tapping an app makes sense for a touch/mouse device, but not for a device that's supposed to continuously immerse the user. Here's my alternative concept:

We need a search engine for holographic layer services. It would be like Google, but for MR experiences. Holographic services would use a protocol that defines a geofence to be discovered by the layer search engine's crawler over the internet (this could just be a meta tag on a classic website). The HoloLens or whatever MR device would continuously ping the search engine with its location, and the results would be ordered based on their relevance (size of geofence and proximity are good indicators). The MR device would then show the most relevant available layer in the corner of the FOV. Selecting the layer would allow enabling it either once or always, and the device would then deliver the holographic layer over the internet. The holographic layer would behave like a web service worker (in fact, it could be a web service worker) and would augment a shared experience which contains other active holographic layers. For example, your Google Maps holographic layer could be providing you with a path to walk to the nearest Starbucks, and once you're outside Starbucks, the Starbucks layer is also activated, which allows you to place an order.

This concept of activated layers, I think, is a great way to avoid a future where we're being bombarded with augmented signage and unwanted experiences. In fact, you could go further and enable blocking notifications about specific services or certain types of available services (i.e. don't notify me about bars or fast food restaurants).
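A minimal sketch of what that discovery/ranking step might look like; every field name and the ranking heuristic here are made up purely to make the idea concrete. The crawler has indexed geofenced layer descriptors, the device pings with its location, and results are ranked by proximity and geofence size, with a category blocklist.

```python
# Hypothetical layer-discovery ranking for the geofenced "holographic layer" idea.
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; fine at city scale.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def rank_layers(device_lat, device_lon, layers, blocked_categories=()):
    """layers: [{'url': ..., 'lat': ..., 'lon': ..., 'radius_m': ..., 'category': ...}]"""
    hits = []
    for layer in layers:
        if layer['category'] in blocked_categories:
            continue                      # e.g. "don't notify me about fast food"
        d = distance_m(device_lat, device_lon, layer['lat'], layer['lon'])
        if d <= layer['radius_m']:
            # Smaller geofence and closer centre => more specific => ranked higher.
            hits.append((d / layer['radius_m'], layer))
    return [layer for _, layer in sorted(hits, key=lambda h: h[0])]
```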


That would make zero sense for Hololens - Hololens is an enterprise device that will in practice run one or two applications (e.g. the company training or maintenance tool). Also most industrial networks are behind heavy firewalls, so anything that actually depends on internet access is a problem.

Hololens also doesn't really work outdoors (display contrast and tracking are the largest limitations), so there is no point of designing a system for supporting that when the hardware is not usable in such scenario.


It would make zero sense for HoloLens today. The HoloLens doesn't even have a GPS chip. But Microsoft is building the future of MR in general. It's a space they want to own or at least lead, and their investment in Azure services for HoloLens today is proof of that. Obviously what I've described does not really favour industrial use (though I dispute the idea that this would not be achievable in industrial/enterprise environments).

Indeed, HoloLens isn't great outdoors; however, the current technology is capable of it (despite the HL design). The visor can be darkened to function more like sunglasses, which solves the contrast problem.

I'm just looking 5-10 years out, in terms of what's going to be needed.


It doesn't make sense to develop for 5-10 years out; you need to build for today and sell for today. If that means running apps, then you run apps. When the need arises, it will be built.

Evolving technology from an existing place is generally more successful than trying to invent the end goal at the start.


This was surprisingly insightful.


The core idea could also work on intranet scale, inside the corporate firewall.

Information tends to end up in different systems. You have for example something responsible for the IoT tracking data of machinery, something else for the HVAC and maybe another system for the maintenance manuals. The proposed system could pull this information together, collecting from different systems based on location (and access rights) within the facility.

Outside the facilities, it might be good to combine information from electricity, water, and telco companies when you are at a construction site. Think, for example, of an excavator operator having an augmented reality view of all the pipes and cables in the ground. This kind of information might already be available via the web if you have suitable access rights; something similar could be applied here.


Check out Azure Spatial Anchors:

"Across augmented reality (AR) platforms, “anchors” are a common frame of reference for enabling multiple users to place digital content in the same physical location, where it can be seen on different devices in the same position and orientation relative to the environment. With Azure Spatial Anchors, you can add anchor persistence and permissions, and then connect anchors in your application so that your users can find nearby content."

https://azure.microsoft.com/en-us/services/spatial-anchors/


Nice. AR for industry is an interesting field. I've worked a bit on real-time recognition and registration of CAD to point clouds, for another industrial-grade AR headset developed by another company. But unfortunately an exec from Qualcomm came in, brought in his buddies, and flew the thing into the ground. Hopefully Microsoft will pick up the flag of AR for industrial use.


>Nice. AR for the industry is an interesting field

Yep I'm also happy about this and like the direction Microsoft seems to be going. Industrial applications have always seemed much more interesting and potentially widely beneficial than just targeting consumers and entertainment.


I played with a HoloLens 1 briefly, which was both a revelation (AR feels like it could be useful in a way VR doesn't) and very frustrating. The worst part was the gesture recognition, which was the absolute bare minimum. The Air Tap was horrible.

Twice the FoV and with real hand tracking? That suddenly makes this thing viable in one step.


I generally feel like anything that requires your hands to be in the field of view (at anything near its current size) is an immediate non-starter, which is part of what ruined Air Tap for me. Plus it was just horribly imprecise.

We do stuff with our hands largely away from the center of our vision. You might make a brief glance to calibrate yourself, but e.g. grabbing a thing doesn't require constant focus on the thing through the entire act.


Air tap doesn't actually require your finger to be in field of view. Even in the Hololens 1, there is, in fact, a downward-facing camera for gestures whose field-of-camera extends all the way down to your waist.


This may be the least insurmountable challenge though: A pair of smart gloves will mostly solve this problem.


Yep, that'd certainly be the most comprehensive way. Downward-facing cameras could probably get most of it, though.


The piano demo was by far the most exciting for me as someone who developed for the first HoloLens. Having used the first one extensively, the improved gestures support is IMO a more significant improvement than the FOV upgrade for long term use cases.

I’m interested to see how Firefox handles web browsing on HoloLens. I’ve always envisioned URLs in that context as a way of loading a new holographic experience into a shared space, as opposed to each URL providing its own separate experience (though this would be good as an option.)


What's cool about the use-case they describe is that it's the perfect bridge between what humans are good at: improvised problem solving and movement, and what computers are good at: crunching and surfacing data. Is this the sweet spot?


>Is this the sweet spot?

Yes it is!

What is actually hilarious about this tech is that as the hardware gets better, the information it feeds you will ultimately be the most valuable piece. Crazy to think about when it comes to verifying the validity of a firehose of data.


So frustrating when videos I think of as marketing on YouTube are preceded by other ads. I get that this content is from theverge, not Microsoft, but the same could be said for YT shoving ads at you when you're already just trying to watch a movie trailer.

I'm sure a lot of folks rarely watch many entire segments of ads unless it's on their terms. Does further ad-walling that content not just reduce the overall number of eyeballs? It's like the only ad metrics seen as meaningful are the hostile ones..


My other pet peeve is movie trailers channels that start their videos with a 2-second mini trailer of the trailer.


That’s pretty exciting, especially the doubled field of view


Wake me up when you can use these in outdoor and dark lighting conditions. At that point they will be useful for several of my scientific needs.


It's not a hologram, people. It's a very clever projector, but it's not a hologram at all.

The example given, showing someone where to put a bolt, is about making a human do a robot's job. The computer decides what has to be done, and tells the human exactly what to do. Like the picking system in Amazon warehouses. We're getting ever closer to Marshall Brain's "Manna".

Machines should think. People should work.


So, same as Google Glass now. How is that one doing in the enterprise market, btw?


For the war


This is quite disappointing. A lot of good people I've known who joined this team at an early stage are all gone, and it shows. Kipman simply didn't seem to get this. Magic Leap should have at least taught them that it is a very bad idea to have so much weight on a headset. AR is pretty much dead at this point, and all this push toward creating demos for factories is a pure media stunt. They even managed to lure the US Army, and I'd like to know the story of how they did that, but from my own experience, trying to use HoloLens in such situations would be a tax, not a benefit. The battery life is miserable, it's too heavy to wear on your head, and it induces headaches for most people. Also, using it in custom situations requires a huge amount of software development cost, which is rarely justified given the inability to actually use the device.

Kipman has become an expert in working on projects that have an initial ooh-ah factor and media appeal, but he doesn't think beyond that at all. Kinect's main use case was never gaming; it's too inaccurate for pleasant use after the initial wonder fades off. HoloLens's main appeal was never creating this virtual world but rather becoming a sensor that can be leveraged in all kinds of applications. But that would be the least sexy thing for Kipman to do, and of course it may not generate as many media articles.


Also selling it to the military to "increase lethality".

https://www.theverge.com/2019/2/22/18236116/microsoft-holole...


I really don't get the opposition to this. Either we need the military, or we don't. If we do, then we want to have one that can beat anything that could be thrown at it with as few casualties as possible. If we don't, then going after VR devices rather than the rifles is all wrong on priorities.


Agreed - to pretend that other countries aren't devoting the top tech talent to advance their capabilities in a similar way is beyond naive.

I _want_ to agree with this "ethical opposition" approach, but I know too much game theory/history, and I also don't want my power grid knocked out on a moment's notice because, say, Googlers opposed stronger cyber security for our military.


I can think of specific projects that might be unethical in that sense, and cannot be justified even by "well, they're doing that".

But I can't see why this one would be in that category.


Well I assume the military generally doesn’t want decreased lethality when purchasing equipment.


Actually they do. Only because collateral is a PR nightmare.


No they don't. Collateral damage is a function of inefficient lethality (killing people who are not your target is wasteful and inefficient).

Increasing lethality decreases collateral.


No that's not quite right.

Lethality is the effectiveness of desired outcome. Was the target eliminated?

Collateral is the side effects of the desired outcome. Was anyone or anything else eliminated?

They are two different metrics. Consider two contrasting analogies which break your understanding:

Ballistic missile: high lethality, high collateral probability.

Polonium tea: high lethality, low collateral probability.

About 20 years ago, I was working in the defence sector, on guidance systems, as an engineer. Sat through many a powerpoint about how collateral could be marketed away. Turns out they won that battle eventually...

Incidentally I bailed out when I saw what a fucking mess of a world I was creating.


This is bad logic.

By your metrics, the most lethal thing in the world is a rusty spoon stabbed into some ones eye; either it kills the target instantly or kills them later due to infection. Very close to 100% death rate. Very lethal. No collateral damages, either.

And yet no professional militaries are armed with rusty spoons


That's because the lethality of a rusty spoon is actually very low... The lethality of having been stabbed in the eye might be high, but not of spoons themselves since it is hard to successfully wound the eye of an enemy soldier with a rusty spoon at a distance when said enemy soldier is armed with ranged weaponry, and it is still difficult to do even in melee range relative to how easy it is to wound with bayonets and combat knives.


I don't know what logic you're seeing but that's not an accurate description. A spoon is low lethality, low collateral.


They had swords that did that. Then they found out that guns had higher lethality with lower risk to the attacker (another primary metric). Collateral wasn't much of a problem back then. No one gave a shit.


What is a bullet if not a very carefully shaped rusty spoon stabbed into someone via a complicated launching mechanism?


The military also buys iPhones, and Androids. What phone do you use? That very phone was probably used to order a drone strike!

Did you use GPS technology this past week? That was the tech guiding the drone to its target. And this thing called the internet was invented to network the military.


GPS is a military technology opened to civilians... going the other way feels different, for whatever reason.


No, GPS was actually designed from the start as a dual military and civilian system. The only part that wasn't originally available to civilians was the high precision signal.


> GPS was actually designed from the start as a dual military and civilian system

No, what prompted GPS to be opened to civilian uses was KAL007 being shot down by the Soviet Union. [1] 269 killed, including a US Rep.

[1] https://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007


No, as usual Wikipedia isn't quite accurate. Here's an actual interview with the "father of GPS".

"BRAD PARKINSON: Absolutely. And there are a lot of misconceptions about that. You hear people saying, well it was a military system and eventually made a civil system. That was not true.

When I testified before Congress back about 1975, I said this is a combination military civil system."

https://www.sciencefriday.com/segments/how-gps-found-its-way...


Re GPS: civilian GPS signals are not the same as military GPS. P(Y) and M-code GPS signals chip at 10x the rate of the unencrypted (civilian) signals.

In the time you get one navigational update for your car/iPhone GPS, a cruise missile's GPS has received 10 updates, presumably helping him guide himself up the ass of some terrorist at ~500MPH.



