Ask HN: What are your impressions of the HoloLens so far?
101 points by rmccoy6435 on April 13, 2017 | 116 comments
Microsoft shipped out their HoloLens development kits within the past few weeks. If you happened to receive one: what has your experience with it been like? Information is relatively sparse online recently about the HoloLens, and I kind of expected more blogging to be done about user feedback and what kind of projects are being worked on, but for the most part there hasn't really been all that much posted.

I intend to do a project with one in the next few months, and I want to get other developers' feedback on the device and a general synopsis of the feelings regarding it, as the emulator can only give so much "immersion".

There also hasn't been much discussion about this device on HN (at least for a few months) and I'd like to know how developers feel about it: pros, cons, first impressions, etc.




Some bullet points I wrote when I tried it:

- The display technology is very nice. I was very impressed by how good the object permanence was: when you put an object somewhere, there is no lag or jitter when you move your head and it stays anchored to the spot. The holograms are reasonably bright and opaque.

- Also, when you pin an object somewhere, it stays there even when you walk around the room. It even stays if you pin it in like the middle of the room where there are no obvious reference points or anchors to use.

- The field of view is neither great nor terrible. It's usable but more would of course be better.

- The major downside is the interaction: "air-clicking" is not great and the gestures to trigger various actions aren't very reliable. It really needs hand controllers like the Vive has.

- The unit itself is comfortable, much more so than the Vive. There was an annoying lens-flare-like glare below the field of view. Not sure if that was my unit not set up correctly or a problem common to all of them.

Overall I'm quite impressed, although I probably wouldn't buy one even if I had $3,000 to burn. V2 will probably be the one to get, if they expand the FOV.


This is pretty much exactly my review. A couple of things I'd add: being completely wireless is pretty amazing, and having no trackers to set up or tower to connect it to is a huge plus. Turn it on and it just works in any room you throw it into.

Also the sound is well done. Similar to vision, it doesn't cover up or plug your ears so you can maintain awareness of your environment. I didn't have high hopes for the sound quality but I was pleasantly surprised.

The gesture thing needs a ton of work though. There are only a handful of gestures, and none of them interact directly with the virtual objects. You need to turn your neck to look at something, then make a click gesture to select that item. There is no ability to grab something, mold clay, punch bad guys, etc. Just look around and click, and occasionally drag; though if you try to drag something out of the field of view of the sensor, it gets dropped.


> The major downside is the interaction: "air-clicking" is not great and the gestures to trigger various actions aren't very reliable. It really needs hand controllers like the Vive has.

I agree that gesture reliability is a current limitation, but wouldn't a move towards hand controllers sort of be a step backwards? Interested to hear what other people think, but it seems like moving away from hand controllers is an inevitability; it will just take some time and further investment in gesture control.


Why not both? Gestures are great for broad things, controllers are great for fiddly things. Use whatever combination of gestures and controllers feels comfortable to what you are working on.


That's definitely true for now, but shouldn't the goal be improving gesture control to the point where the difference is trivial? Might make for a less versatile product today but a better product down the road.


I think Microsoft's research is showing that we will always be multi-modal in input schemes, if only for accessibility reasons. A person might not be physically capable of making an accurate enough gesture, whether due to disability as one example or tiredness/exhaustion as another. On the flipside, a person may struggle with a controller but have no difficulty making extremely accurate gestures...

Speech, gestures, touch, controllers, keyboards, mice, trackpads, trackballs, everything else in the giant spectrum of input devices, all have different pros/cons and different accessibility. A failure often in sci-fi virtual reality is that everyone magically switches to a single mode of input, when even computing today suggests that every mode of input may be welcome (or even necessary) to users. VR/AR doesn't shift those accessibility needs as some imagine.

The interesting thing is that AR/MR suggest new input modes as well. There's a reverse skeuomorphism that becomes more apparent in AR, along the lines of: in real life, to accomplish a task I might turn some knob, and that knob has seen decades of being the right tool for that task, so why can't AR respond to that knob as well? As AR/MR become more common and you start to see more "reality" in computing bleeding in from the other side, with digital objects responding to physical ones, I think you'll start to see an expansion of what it even means to provide input to a machine. You might think of that as gesture controls, at least at first glance, and some of it may simply be advanced enough gesture recognition, but I think there's going to be a lot of bleed-over into the IoT space for custom controllers and sensors and touch surfaces. Some of those are going to look like existing physical objects, but extending their footprints deeper into "digital spaces" (many already are to one extent or another: your car's steering wheel, as one among many examples, may already have a digital footprint of some sort). I wouldn't be surprised if some of that leads to entirely novel-seeming input devices in specific subdomains of human behavior.


Would like to note that the HoloLens comes with a clicker that makes a lot of those precise interactions easier, such as clicking and dragging, and scrolling using motion controls!


I've been making apps on it since mid last year. It's an amazing device, the image stability and quality is very good and in a well-designed app the small FOV becomes an afterthought.

Clicking/selecting objects with gaze is often an antipattern. Much better to use alternate input.

Analytics is kind of a mess.

Everybody recommends Unity, but performance will suffer. I wrote my own framework instead. Most of the open-source code is bad; if you have figured out how to make apps, that's a competitive advantage. MS wrote literally thousands of new APIs for UWP and mixed reality, so many features are barely documented, with no real-world examples.

It's a totally new paradigm in UX. Most designers fall back to poor decisions like using small buttons or overly detailed models.

Feel free to ask anything specific I'll do my best to answer.


What is your development setup like? Do you have the hololens on while you are developing for it? Does the hololens do all the things a desktop computer can do, but in AR?


Win 10 Pro, I was using an Ultrabook but just upgraded to a Surface Pro i7 16GB. The strength of machine depends on how complex your 3D models and build chain are, and whether you're running the emulator. The emulator needs 8GB RAM and two cores to itself. Visual Studio community edition with the occasional Sublime Text for quick refactors.

I have the hololens on about half the time. So I've spent a ton of time in it.

Yes, surprisingly, the HoloLens is "just" an x86 Windows compile target. I've found a lot of software that "just works" as long as the UI isn't hardcoded.


Is AR the future?

I demoed Google Cardboard VR to a bunch of kids (probably 200-ish) who thought it was amazing. They really enjoyed it, and my feeling from teaching them with VR was that this is another tool people will use in the future.

Kids are already wired up with headphones, phones in pockets, phones wired to external battery boosters. I say this because I thought cumbersome cables from phone to goggles would be a no no and the technology would have to wait for a wireless solution. But I think I am wrong, kids already have cables snaking around their bodies.

Personally I think external cameras that can show 'the outside world' inside the VR world is the future. I'm interested in your thoughts on AR... this I see as a great tool for industry, but not for the general public?


I've believed it's the future since long before I had a hololens, so I am biased. But to me AR is far more than glasses, AR should be a natural extension of human function for all senses.

People really don't like the concept that I can see a world they can't experience. Some people become offended. Making people feel comfortable has a long way to go, but as you mention kids start with no preconceived notions so this may not be a stigma for future generations.


> People really don't like the concept that I can see a world they can't experience. Some people become offended

This is the /exact/ kind of elitism that killed Google Glass. The whole "I have this and you don't" mentality is childish and damaging.


It's not the attitude of the wearer; I and other devs I've met really enjoy sharing it with everyone and wish it were more accessible. But I don't have a case full of HoloLenses to create a shared experience. Nevertheless, in a group, if one or two people are nerding out, sometimes somebody finds it unpleasant. I can understand.


Do you think a dedicated "streaming output" would help?

When I use my Google Daydream, being able to "cast" my screen to a TV really helps with this feeling, as everyone else can watch along.

If the cost didn't increase too much, I feel that an onboard camera that can overlay what the user is seeing to show a 3rd party could help with that feeling. (Or if it already has a camera onboard, perhaps some better software to make casual sharing of your perspective easier)


Yes, it's possible to do this using the HTTP server that the HoloLens runs. It's pretty sweet: it streams performance data as well as the muxed camera+virtual stream. We got some really good results today recording a demo that way.

Microsoft did an awesome job with the built-in tools and with the vast majority of the MIT-licensed open source HoloLens examples.

It's very technical and dev-heavy; it would be great to see an app that provides the streaming feature without the admin pairing currently necessary for "server" access.
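For anyone curious what talking to that built-in server looks like, here's a minimal sketch of building the URL for the mixed reality capture stream. The endpoint path and the `holo`/`pv`/`mic` query parameters are assumptions based on my recollection of the Windows Device Portal REST API; check your device's Device Portal docs, and note you still need the paired credentials to actually fetch it.

```python
# Sketch: build the (assumed) Device Portal URL for the HoloLens
# mixed reality capture stream. Endpoint and parameter names are
# assumptions; verify against your device's Device Portal.
from urllib.parse import urlencode

def mrc_stream_url(host, quality="live.mp4", holo=True, pv=True, mic=False):
    """Return the assumed mixed-reality-capture stream URL.

    holo: include the rendered holograms in the stream
    pv:   include the photo/video camera feed
    mic:  include microphone audio
    """
    params = urlencode({
        "holo": str(holo).lower(),
        "pv": str(pv).lower(),
        "mic": str(mic).lower(),
    })
    return f"https://{host}/api/holographic/stream/{quality}?{params}"
```

With a URL like that, a player such as VLC or ffplay (given the Device Portal username/password) can show the wearer's combined real+virtual view on a second screen.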


We've demoed the HoloLens to hundreds of people, and I find this to be the case a lot; that is, until they put it on themselves and understand/experience what's going on. MR/AR is one of the hardest things to explain and understand until you have experienced it yourself, but once people put the HoloLens on, it takes only a minute or two until it becomes second nature, whether they are 6 or 60. There's just something natural about the whole experience.


AR is the primary use case I see going forward, and live-feed-augmented VR is really just an extension of AR.


> live-feed-augmented VR is really just an extension of AR

Two things you can get from live-feed-aug'd VR that you can't get with current thru-glass AR solutions (maybe hololens is different?):

1. The "color" black (and true opaque objects)

2. Wide field-of-view (FOV)

The first is a physics limitation of thru-glass AR implementations; it's not likely to be different any time soon. The second I would expect to improve over time.

The downside of a live-feed-aug'd VR system is resolution - the resolution of everything is not high enough to be what people want. While I doubt the resolution of an AR system is any better, because the FOV is smaller, and a thru-glass view system is used instead of a live feed, it becomes much less of an issue.


Any chance you could expand on analytics a bit?

Also, I've looked through the MVA for the HoloLens and done some messing around in Unity, but are there any good, active blogs about the HoloLens? I know there are a few like Mike Taulty and some other blogs that might post about it occasionally, but I would like a "Morning Dew" for HoloLens.


Think about trying to track a funnel or "happy path" in AR, and understanding intent from a user's gaze and motion. There is no "go-to" analytics pattern or framework that I'm aware of yet.

The AR blogs I find are mostly content-free, not exactly their fault because most demos are just PR demos and the real dev is under wraps at private industry. Similar to the comments here, whoever knows what's up is not talking. Lots of NDAs and patents around the technology.


That makes sense. I think that's a hard realm to develop a framework for, because I feel like you can only predict so much for so many people, much like regular programming, and a lot of it would be catered specifically to what your program does.

As for blogs, that's kind of what I've found. I've really been itching to get my hands on one and fiddle around with it. I see this as a technology that will only get bigger, and I want to get in on it early and develop a strong knowledge of how it works and how to build applications for it properly; the emulator can really only replicate so much of the true look and feel. I guess for now I'll just have to imagine what it's like with my Vive on...


Would you mind sharing the blog that you have found?


Mike Taulty [0] has been posting ~2 posts per month on creating applications for the HoloLens (he just published his 13th yesterday); he's a UK Microsoft developer. There are a few other smaller blogs that occasionally post stuff, like Abhijit's Blog (inactive) [1] and El Bruno [2]. Although not a blog, the HoloLens subreddit sometimes has some interesting posts, but a lot of the time it's just articles about AR or videos of people showing off a demo (and not code or explanations per se) [3]. Lastly, I recommend looking at the WindowsAppDev Daily Digest, as sometimes stuff is posted there regarding the HoloLens, or tangentially related topics [4].

[0] https://mtaulty.com/?s=Hitchhiking

[1] https://abhijitjana.net/category/hololens/

[2] https://elbruno.com/category/hololens/

[3] https://www.reddit.com/r/HoloLens/

[4] http://windowsappdev.com/


Shameless self-plug but it's applicable to the conversation! -- https://www.youtube.com/theholoherald We make content centered around mixed reality and the HoloLens!


would you be able to share some dashboard/library or anything that you have developed?


We run a YouTube channel called The Holo Herald; you can check us out. We put out almost daily content centered around mixed reality and the HoloLens. Maybe not as core-tech-related as you want, but definitely content you'd find interesting!


Thanks! What resources do you recommend for app UI designers to get their hands dirty with this 3D interface paradigm?


You have to use the device. Often. To try to accomplish something real and useful. Then the limitations present themselves quickly.

As far as a design workflow I've heard many designers are happy with their existing 3D tools like Maya and Unity. Any standard 3D object and texture can be loaded into the hololens.


Hey "doublerebel",

I am interested in learning more about the analytics/monitoring that you did. Would you be able to share more info? Please let me know how we can sync up so that I can learn more about it.


What do you think this new paradigm in UX will look like?


Subtle. Flat design, typography heavy. Outlines and hints over occlusion and forced actions.

I also see a class separation between people who are augmented and who are not. It's going to blow apart industries and relationships. Eventually.


I've been doing development on it for a bank for about two months now.

Things I like:

- Lets you have an infinite number of virtual monitors with applications such as Word, Outlook, browsers, etc.

- Developing for it is really easy with tools like Unity

- Battery life is not too shabby; I rarely have to take it off to charge while I'm doing something

- Great demo piece

Things to work on:

- Field of view isn't terrible, but could still use improvement

- Price point precludes a lot of consumer applications

- Feels like you're always wearing sunglasses indoors. This takes away from the augmented reality bit, as it can be pretty hard to interact with the real world sometimes (e.g. hard to read my real monitor when I have it on)

- Gets kind of uncomfortable on your nose after a while, though that may depend on your face morphology

- Interacting with voice commands in an office setting can be awkward/amusing

- My colleagues think I'm never working


I'm thinking of buying a HoloLens, but I'm concerned about some aspects I've been reading and watching about. Hope you guys can help me. My main use would be writing/reading several Word documents at the same time (like 10 different documents). Do you recommend buying a HoloLens for this purpose? What about the resolution for reading/writing? Is it possible to have 10 different documents open at the same time? What about the FOV, will it be enough for this use? Would you recommend the HoloLens, the Meta 2, the Magic Leap, or even the HTC Vive? I'm opting for AR instead of VR because I think it's better to interact with the world and work (AR) instead of only work (VR). Comments are welcome too.


The price point is what was explained to me by a developer/project lead as the hardest hurdle. It's a hard sell when someone sees it's $3,000 and not available to consumers for at least another year or two. I have a two-pronged question: is this something you approached the bank about, or something they approached you about? And what is your level of autonomy on it (as it is the banking world and there are heavy restrictions on stuff)?

Also, what steps did you take to fully learn the technology (MVA, Blogs, VRDC, etc.)?


The bank had the HoloLens for a few months before I started, but they never did much with it. Basically, they had a budget to spend on R&D, and it was one of the tools they felt could be useful (and also, probably, a way to market the department to their higher-ups). My experience with Unity, the Kinect (a precursor technology to the HoloLens), and just barely with the HoloLens itself (I had a chance to try it before it was publicly available, while I was at Microsoft) made me an ideal candidate for the bank to hire and engage in AR projects with it.

I can do whatever I want with it. I was surprised to find out after I joined that the regulations are really not that bad, less even than when I was at Microsoft. I think it really depends on the department and the function. The HR department for example is a lot more stringent (though believe it or not, they also explored VR with our team).

They have a pretty good set of tutorials on the HoloLens site, though they're kind of monotone and over polished for my taste. I think the key thing is to know Unity, which I've mainly done through doing and exploring their API or youtube videos. MVA is a pretty good resource, though I haven't needed to use it.


Huh, I was expecting there to be a lot more invested in the technology if a bank already had it. That's actually very neat that you can just use it whenever you want, I'm kind of jealous about that.

I'll be doing a summer research project and senior capstone project with it (although I have yet to try it on, I saw a few other people demo it though), and I know enough about Unity to be dangerous. I was expecting more barrier to entry on it, thanks for the information!


I'm curious about the virtual monitors.

Are they usable for working? Is the resolution good enough that you can use virtual displays for coding / browsing / looking at YouTube videos?


Definitely not for coding. I would code on a phone before I would on the HL. It's pretty handy when you just need a display to read or watch content on while you do something else: for example, when you're looking at a solution on Stack Overflow while coding, or perhaps when you're trying to replicate a circuit diagram using real components.


I think both FOV and price point are the most likely weak points to see continuing incremental improvements. Thinking back to my first smartphones in 2003-2004, it would've been easy to say "well yeah, the screens are too small, there's not enough RAM, and they're way too expensive!"

But over time, the steady improvement of touch displays, SoC's, and supply chains brought sub-$400 devices that are likely more powerful than my desktop PC at the time.

I think they made a reasonable compromise between FOV and ergonomics based on the current state of available hardware tech. While UI and software applications remain to be developed to make use of the tech, I'd be the least concerned about things like FOV going forward. It seems like a tradeoff that was made to avoid the bundles of tether cables and high-powered host machines seen in much of the current VR space.


100% correct. First off, as someone who uses the HoloLens daily, FOV becomes less and less of an issue. People have to realize that FOV is a happy medium; most tech is about compromise, and they did a great job with the HoloLens. Let's say they decided to add a wider FOV: they would have to add more glass, more processing power, a bigger battery, and more R&D. All of these things add up to a greater price, a heavier system, and a later release date. It's about equilibrium.


I feel the same way. Not being tethered, IMO, is a fantastic benefit that I ought to have mentioned. I don't really feel that FOV is a huge issue or anything, was just putting it out there.

Will be interesting to see where things go given they've switched suppliers for key parts supposedly in preparation for a 2019 release.


Is there any kind of image stabilization? Or is that not needed?


In what sense do you mean, exactly? The virtual objects stay perfectly stationary wherever you place them, and though the gaze cursor is pretty sensitive, I never feel like I can't target what I want to interact with.


Been using the HoloLens for about a year now. It's awesome, probably my favorite bit of tech I've tried since the first iPhone. It's kind of exactly as you'd expect: a pretty decent but not mind-blowing projected holographic interface augmented into reality. FOV is very mediocre, and you have to put that aside to enjoy the device, but if you're willing to look past the FOV, you really get a sense for where this will go. As others have said, the gestures are super annoying. It also doesn't really fit well, and I hope they refine the way the device sizes to your head.

We do software for cities, so as you can imagine there are very many places you can take AR and city planning. FWIW: I think there is a lot of VC cash deployed into this space, but I also think it's a paradigm-shifting technology and one of the few things where I feel the hype is justified.

As a side note, I went to college for digital imaging technology and started a studio out of college with a buddy (13 years ago). We took advantage of the transition from analog to digital filmmaking and ended up winning three Emmys and building a 10MM-revenue business. If I wasn't doing what I'm doing now, I'd be focusing on that shift here; there will be a lot of opportunity for forefront startup VR studios.

Here is a video of me messing around with a HoloLens at the office last year: http://john.je/iDpX


HoloLens is cool, but most of the HoloLens applications you write will be consumed on the $299 software-compatible Mixed Reality headsets that ship later this year. (It's amazing how few people are paying attention to this announcement - Microsoft uses Mixed Reality as its branding, but these are basically high-end VR headsets with integrated tracking, for a third the price of Rift and Vive devices.)[0][1]

From an application developer's perspective, the only difference between HoloLens coding and Mixed Reality coding is that, when constructing 3D scenes, your HoloLens app should have a transparent background (so the person can see their room through the viewport, because that's what they're buying the expensive headset for), while in Mixed Reality you should have an opaque background, because it's VR, not AR.

The really big thing, though, is that $299 is roughly what you'd otherwise pay for a pair of big monitors. Full on virtual desktop support with floating windows for these devices is being shipped to every Windows 10 machine starting this week via Windows Update, with the intent being that you don't need old-school monitors: just work in the headset, or with your monitors, or however you want.

Windows now has (or will shortly, depending on your Windows Update timing) a built-in developer-mode simulator for application testing of Mixed Reality code without a physical headset. The simulator is still a little buggy and incompletely documented (remember to shut it off when you're not using it), but it's pretty incredible and more than enough to start building and testing applications.

[0] https://www.engadget.com/2017/04/12/acer-microsoft-vr-mixed-...

[1] https://developer.microsoft.com/en-us/windows/mixed-reality


> Full on virtual desktop support with floating windows for these devices is being shipped to every Windows 10 machine starting this week via Windows Update

How well will this work though? Virtual desktops exist for the Oculus Rift and the Vive but from what I understand, the limited resolution doesn't make it a great experience, and can't really replace monitors yet for text heavy work.


You're mixing up $300 VR headsets with AR headsets.

> it's amazing how few people are paying attention to this announcement

It's not amazing because they're not showing anything except what appear to be fake, marketing videos. They've always been cagey with the HoloLens and what it was actually like and it looks like they're continuing down that line.


Please check the links on my post, and the places where I discuss the differences between HoloLens as AR and Mixed Reality as VR.

I'll also add that I'm not sure what you mean by "they've always been cagey with the HoloLens": it's been a shipping device for about a year now, they're available in the wild, and lots of us outside of Microsoft have plenty of time in the HoloLens. The suggestion that they've been cagey about what's coming for the Mixed Reality headsets is similarly nonsense. If you want to stay a Microsoft hater because of something some guy said about Java more than 20 years ago, that's totally your privilege; I personally prefer to work in the present.


Please link to any video from Microsoft that shows an actual, simulated view of wearing a current-generation HoloLens. I've been looking and still haven't found one. All I can find are marketing videos that show fantastical visions of what AR will look like some day, but not today.


Random video from the first page of a Google search for "hololens capture" - shot by someone somewhere in Asia, is my guess. It's easy to shoot this kind of stuff when you have a HoloLens, and Microsoft recently released an open source hardware kit for people who want to make more serious video captures.

https://m.youtube.com/watch?v=bYpbvaGpCkE

There's plenty more, but honestly if an LMGTFY link like this can answer your question it sounds like you're actively trying to keep yourself from finding out what's going on. There are no conspiracies here, it's just tech, it works, it's very cool, and there are even fully functional simulators/emulators allowing you to code without access to the devices. It's a shame you're so sure there's something wrong that you're preventing yourself from getting involved with something that clearly interests you.

[edit] I see now you said you wanted a video "from microsoft" - that's in the very first video hit searching for hololens capture[0]. This video has other stuff in it as well, including discussion of an open source hardware and software system for making hololens captures, but there is absolutely straight hololens capture video in the link, just like in the other link I sent. Again, it feels like you're trying hard to not let yourself find or see this stuff.

[0] http://www.theverge.com/2017/2/13/14588178/microsoft-hololen...


You're too caught up in it. Take a breath, read what's being written, then respond.

The reason I asked for video from Microsoft is that users consistently crop the views to only show the postage-stamp AR view the HoloLens offers and present it as the full video, which is a misrepresentation of what the HoloLens is capable of. Also, that second video doesn't actually include any video from the HoloLens from a wearer's perspective, but thanks for trying.

I don't want to interact with you anymore.


> that second video doesn't actually include any video from the HoloLens from a wearer's perspective

45 seconds into the clip on the verge site are four separate captures shot by the HoloLens showing the wearer's perspective, and the other clip is 100% user POV shot by the HoloLens. The verge clip is produced by Microsoft, the other clip by a user. Neither the user nor Microsoft cropped the HoloLens footage, both clips were shot with the integrated HoloLens camera which is tuned to match the active field of view of the headset. The rest of the verge clip shows how to build an open source capture device capable of recording a wider field of view at higher capture camera resolution and shows video shot using that system with a DSLR and a HoloLens. There is no "misrepresentation of what HoloLens is capable of" in either of those videos.


The first video linked is a very accurate representation of the users view. We have a current gen model at my company.

It's not vaporware, though you may not be able to find the exact videos you personally are looking for. Maybe that means it's just not for you. No bigs.


> Full on virtual desktop support with floating windows for these devices is being shipped to every Windows 10 machine starting this week via Windows Update, with the intent being that you don't need old-school monitors: just work in the headset, or with your monitors, or however you want.

Are you implying hololens could replace my monitors (in a practical sense I mean)?


I have to admit, if the resolution was good enough (the equivalent of 1920x1024 at arm's length would be sufficient to my mind), I'd be quite OK with this. Virtually unlimited screen space for tiling applications would be pretty awesome.

One of the biggest problems I run into is even with a 30" 4k display, I'm always out of room to run concurrent windows.


You'd need something like a 6k display to even start getting close to the 1920x1024-at-arm's-length equivalence. That's at least a generation or two ahead of where we are today, and it's why all this Mixed Reality stuff coming out of Microsoft today is just so much talk.

I'm not arguing against them working on it, there should be a foundation but we're currently at the equivalent of the late-90s, early 2000s VR devices when it comes to HoloLens and AR.
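The back-of-envelope arithmetic behind that "6k" figure can be sketched. All the numbers below (monitor size, viewing distance, FOV) are assumptions chosen for illustration, not measured specs:

```python
# Back-of-envelope angular-resolution comparison. Every number here is
# an illustrative assumption, not a measured spec.
import math

def pixels_per_degree(px_width, physical_width_cm, distance_cm):
    # Horizontal FOV subtended by a flat display viewed head-on,
    # then pixels spread across that angle.
    fov_deg = 2 * math.degrees(math.atan((physical_width_cm / 2) / distance_cm))
    return px_width / fov_deg

# A 24" 1080p monitor (~53 cm wide) at arm's length (~60 cm):
monitor_ppd = pixels_per_degree(1920, 53, 60)  # ~40 px/deg

# A narrow-FOV AR headset (~1280 px across a ~30 degree horizontal FOV)
# already lands in the same ballpark, which is why the small FOV makes
# resolution much less of an issue for thru-glass AR:
headset_ppd = 1280 / 30

# To hold that density across a ~100 degree VR FOV you'd need a panel
# roughly 4000 px wide per eye, i.e. a "6k-class" display:
vr_panel_px = 100 * monitor_ppd
```

The same calculation also illustrates the earlier point in the thread: shrinking the FOV, rather than growing the panel, is the other way to buy angular resolution.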


If you've not worn a HoloLens, I think you'll be surprised. There are plenty of things to complain about with the HoloLens but the resolution is amazing.


I'm fascinated by the downvote someone gave the parent, because I've never heard anyone complain about the resolution on the HoloLens after using it. If you're still in the thread and are able to discuss the scenarios where resolution was a problem, you have me curious.


Because the discussion was about replacing physical monitors, not complaining about the resolution. You're jumping at shadows.


That's where Microsoft is headed. I've only spent a few days working that way (others have gone much longer), and it definitely made me a believer. These devices only run the newer UWP apps natively, but worst case you can use Remote Desktop or VNC as a wrapper to pull up your non-UWP apps in the goggles.


I would really like to dispense with my monitor(s) and plaster big virtual monitors over the walls of my cube. But at this point, I really doubt that the hardware is usable for that use case (low resolution, weight). I'd love to be proven wrong.


CastAR might let you do that, to a limited extent. It works by projecting stereoscopic light onto retro-reflective material. However, it depends on the resolution of their projectors and whether you can somehow get your computer's video to them (from the sounds of it, the current generation is an all-in-one device).


HoloLens resolution is absolutely there today.


It's not; it's only really 720p at a fixed focal distance. The display does a great job of anti-aliasing with extra light points, but there is no substitute for pure pixels. Dense text and symbols/code are not a great experience.


I was recently at a Microsoft Training Centre, where we also had a chance to test the HoloLens. All in all, it's crap. It is pretty heavy, so I can't imagine wearing it for more than 10 minutes. The latency was OK, but still somewhat disturbing. The gesture recognition was bad: I, and later on also the Microsoft guy, had to tap twice several times to trigger an action. The shop-floor example they showed was a bad choice. On the shop floor, speed is key, for workers and for other functions, and this is what the HoloLens didn't have. During the show-off they had to restart the HoloLens. A clear fail I would say, but judge for yourself.


I was wearing it for multiple hours straight without any fatigue. You need to tighten it and use the right nosepiece, having it rest on your forehead rather than your nose.

Gesture recognition was really good when I was testing it, and it really doesn't take long to learn; rarely would I have to tap twice. The standard gesture is bringing your index finger and thumb together, though I definitely had a hard time explaining that to users.


The gestures are easy if the device is well adjusted, otherwise it's hugely inconvenient (usually the case when demoing).

The clicker makes things easier, however.


Yeah, I couldn't find the clicker when I was using it! I wish I had a chance to use it or get people to demo it with the clicker!


Last year I was able to play with one for a couple of hours. The most impressive and exciting part for me was that it wasn't bad. I don't know about others, but I had expected an unpolished feel, and to be continuing to say "oh this will be great when they ______". The latency is much lower (comparable to modern VR) than I expected; the occlusion of virtual objects by real ones works surprisingly well, even with weird shapes; even the gesture recognition worked well. My overall takeaway was that it was much further along than expected. It was genuinely fun to play with, and I felt able to walk around my office while wearing it. Obviously the FOV is an issue to be worked on, but overall I was just impressed. I wish I still had one I could play with.


This is pretty close to my thoughts from when I've tried it. One additional thing is how good the object tracking is: placing virtual objects in the real world is very cool.


I've used it over a period of a few months.

Pros: Very intuitive controls after maybe 5 minutes of using it. Building in voice commands is easy in Unity; I can't speak for other platforms. AR has more practical applications (but VR is more mature). Microsoft listens, and will try to add features that people ask for. The forums were very helpful for someone a year out of programming coming back to learn. Spatial mapping was really cool; I didn't think something could be that accurate in the space of a few minutes.

Cons: Controls can have a steep learning curve for older individuals (based on my experience). Development setup was hard when I started, but has gotten much better from what I hear. Showing what you're doing in the HoloLens live was very hard; we had to build that in, but I think now they've cleaned that up as well through Unity. I think the previous con points out that this is a very new platform, and things are going to change. Keep that in mind, and don't get too mad if things break. It's not super powerful, so you'll have to move to DirectX if you want to pull every ounce of performance out of it. Shaders are your friend (I'm a newbie when it comes to game dev, so this was a lot of learning for me).

I know that people mentioned that FoV is bad, or could be improved, but honestly I didn't have a problem with it. With AR, and how you can still see the world around you, it wasn't a hindrance for users that would demo. That being said, I wouldn't oppose an improvement!


I'm thinking of buying a HoloLens, but I'm concerned about some aspects I've been reading and watching about, and I hope you guys can help me. My main use would be writing/reading several documents in Word at the same time (like 10 different Word documents). Do you recommend buying a HoloLens for this purpose? What about resolution for reading/writing? Is it possible to have 10 different documents open at the same time? What about FOV? Will it be enough for this use? Would you recommend the HoloLens, the Meta 2, or Magic Leap, or even the HTC Vive? I'm opting for AR instead of VR because I think it's better to interact with the world and work (AR) instead of only work (VR). Comments are welcome too.


As hardware, it's a nice job. It's self-contained and wireless. The form factor is tolerable. Compare the HTC Vive, which is as clunky as the VR headsets of the 1990s and still needs cables. The HoloLens has much better balance, too; the VR headsets are far too front-heavy. None of this gear is really compatible with wearing glasses, though.

It's surprisingly good at "drawing dark". It can't, really, so it just puts a neutral density filter in front of the real world to dim out the background. This, plus some trickery with drawing intensity, allows overlays on the real world. At least the indoor real world; the grey filter is fixed, and the display will be overwhelmed in sunlight.

The field of view is too small for an immersive illusion. The resolution is too low for the "infinite number of monitors" some people want. It's useful for putting an overlay on what you're working on, which suggests industrial and training applications.

It's not clear there's a mass market for this. Certainly not at the current price point. If it became cheap enough to sell to the Pokemon Go crowd, it might work for that.

A useful metric is, "Is it good enough for Hyperreality?"[1] As yet, it's not. But it could get there. Watch that video. What hyperreality needs is 1) really good lock to the real world, 2) adequate but not extreme resolution, 3) wearability, 4) wide field of view, 5) usability under most real-world lighting conditions, and 6) affordability. The HoloLens has 1 under good conditions, has 2, arguably has 3, and lacks 4, 5, and 6. Not there yet.

[1] https://vimeo.com/166807261


There are a lot of very impressive things about the device, but for me, the dealbreaker is the FOV. It's distractingly small. I haven't done development on it though (just tried it out).


I had the same bad experience.


I was at a talk with someone who demoed building an application from scratch in about an hour using the unity hololens vr toolkit(?).

And I was able to try one on at a meetup.

Considering the whole thing is self-contained and handles the rendering on the device, it's amazing. With some of the dev tools you can see it building models of everything and everyone in the room in real time.

I played the Conker game, and it was fun to watch the characters hide behind chairs and stuff. The maps sort of build themselves to fit the room, and it worked even with lots of people at the meetup.

Getting the hand gestures takes a second, but they're pretty intuitive, with "clicking" stuff done by sort of pinching your index finger and thumb together.

The field of view is actually only the glasses under the visor. The visor, I believe, is more to help improve contrast and block a bit of light.


I'm quite biased on the whole AR thing, as I worked at Meta for almost three years, but I think that the HoloLens is a fantastic piece of technology, and that Augmented Reality Head Mounted Displays will be the next big computing revolution.

Currently I'm working on a large HoloLens project for the aircraft industry. But the amount of possibilities I can think of with a HoloLens (or similar device) is limitless.

The HoloLens has amazing tracking and latency. In a couple more years, when HoloLens and/or competitors release a device with a large field of view, HoloLens-like tracking/latency, and leap motion-like hand recognition, it's going to be very exciting.


There have already been some developers asking for feedback and other discussions on https://www.reddit.com/r/HoloLens/


It's a revolutionary device. I was blown away during a demo. When the public sees it they are going to go apeshit.


The demo will blow your mind. My biggest takeaway was that AR probably has more potential in the long-term than VR. VR is immersive sure, but you quickly run into physical boundaries or your mind becomes out of sync with your body. AR has all the benefits of VR but layered on top of your physical environment, enriching it and providing a reference point.

To speculate, I'd say VR will find its killer app in gaming/entertainment (similar to TV), and AR will become the next great I/O interface between humans and computers (similar to phones/tablets).


We received our unit in August of last year and have documented our experiences with it using our YouTube channel: The Holo Herald. Some quick things that we noticed:

-While the FOV is less than ideal, it is not experience-breaking

-The device is more comfortable than most headgear technology out today (there are also adjusters, such as a nose piece and headband, that make it more comfortable for a long duration)

-It is intuitive. This device can and will be easily picked up by many people. We found older people who could barely stand trying to operate a smartphone throw it on and almost instantly understand it. There is just something about this device that makes people feel like they can handle it without too much work. And the fact is that they can: it is very simple to use, and the hand gestures may be the main reason for it.

-While the hand gestures may not be the most reliable, it does come with a clicker that remedies this quite satisfactorily. Giving it Vive-esque controllers would completely ruin the experience and what Microsoft was trying to accomplish.

-The UI and operation are unobtrusive which means that while it doesn't have much productivity use right now, it will in the future.

If you would like to get a better idea of what the HoloLens does and can do, we urge you to find our YouTube channel. We try to deliver our content in a non-technical way, to explain how an end user really sees it without all the tech jargon getting in the way.


You can definitely see the quality in apps increasing through your videos. The more recent apps you preview are a lot more intense than the ones in the beginning! Cool Channel!


I tested one in November.

Very narrow field of vision: I had to fish for objects by turning around and looking up and down. Not good for AR.

No black, obviously. They can't block light from going through rendered objects. This in turn makes colors somewhat ghostly.

Very stable. Once I place an object I can walk around it and it stays there like a real one.

"Clicking" on an object is hard, but maybe it was hard with a mouse when I used it for the first time.


I had a high school student working with me last summer who did some development on it (no previous experience with unity/c#). His goal was to visualize crystal structures. My main comment is that the FOV is small and the question of what makes for a good user experience is still open. I wish I had more time to play with it.


I'd be impressed if someone could give me a grab bag of real world use cases for the mass market. I'm just super not convinced that this isn't a Kinect sitting on your face.

And Lego/Minecraft on the tabletop... no thanks, I don't want games set in my lounge room; that's an incredibly boring place to set a game in.


In industry. Every factory I've worked in would benefit from AR.

Where's that bit? Where's that tool? How do I repair this thing? Who attached that bit to this thing?


I'm not about to share what application I'm working on but I fully expect it to revolutionize the industry I work in.


Educational... my kids love it. It's a very immersive way to learn.


TV replacement... Imagine having 5 large-screen TVs in your living room, watching every March Madness game that is currently happening, all at the same time.


But do people really want to wear a headset? And comb their hair or get a shampoo after a long session? The only headsets people wear, sometimes even as fashion items, are glasses, air straps, hats and helmets. All of them are much lighter than a HoloLens (maybe except some helmets). Hats and helmets suffer from the same combing problem as the HoloLens. Furthermore, I don't see people going around with a bulky HoloLens. With something like the glasses of the Dennou Coil anime or the contact lenses of Vinge's Rainbows End, yes. It's a long way to there.


I have a tv.


Not on the ceiling above your bed. Not on the bus. Not in the passenger seat of your car.


A TV everywhere I go and in any direction I choose to see one is not something appealing in the least. In fact it's a repelling idea.


Don't buy it then.

I think glancing out to sea and seeing who owns the ships and where they are going would be neat. Then bringing up aperture and shutter speed for my old rangefinder so I can take the perfect photo. Looking up into the sky and identifying the planes and birds and stars and planets.

Seeing people's name tags next to them, maybe with some recent photos from facebook.

All stuff I can do on my phone, but without having to take it out of my pocket.

And before you go all Luddite on me ... "That sounds terrible!" I actually rarely take my phone with me and 90% of places I go have no signal. So for me this technology is almost like magic (if they can make it work)

I would like to remove my TV and put a plant there.


> And before you go all Luddite on me ... "That sounds terrible!" I actually rarely take my phone with me and 90% of places I go have no signal. So for me this technology is almost like magic (if they can make it work)

The way applications are written today, your HoloLens won't be able to show you information about planes and ships and such without an Internet connection :(


How about a computer monitor? Or overlay of information about your world (this bus will next arrive in 5 minutes; find out more about that statue; follow this blue line to your destination)?


like... my phone?


So, your phone overlays a line in your field of vision indicating which direction you're going?

Or does it overlay an arrow on a digital representation of the street you're traveling, on a screen located away (and at a different focal length) from your actual view of the road?


In every room?


I've had the opportunity to develop with two HoloLenses. From a consumer standpoint, it's a wash. You're spending $3,000 on a device that can't do more than pin UWP apps to your walls. There are no killer apps yet.

From a developer standpoint, it's terrible. Unity only just now supports UWP apps, and only just: many, many libraries simply don't work. We are making a collaborative 3D app that needs access to the entire screen and a lot of system-level resources. The only nice thing is that the anchor system is an operating-system-level abstraction.

TL;DR: After using one regularly for a few months, I'd say pass on this device. It's a barely usable AR platform with poor battery life and poor FOV, and it's an absolutely unusable AR gaming platform.


I worked on HoloLens software at MS, so I was more or less using one all day every day. We all just sort of pushed them to the backs of our heads while we were coding. Anyway, my impression is it's fucking amazing.


Customers love it. It has a huge wow factor when you bring it into a place.

Positive: Voice Commands, No Computer needed, Unity is great - development is easy

Negatives: Field of view is just weird, Not as intuitive as it could be, Cannot sell it - dev only


I've tried the development hololens from the March 30, 2016 batch several times.

My observations:

Getting it to recognize my air clicks is the bane of my existence. Object permanence works very well.

Before I used it, I thought people were hyperbolic when mentioning the narrow AR FOV. It really limits the experience.

Moving objects around is very annoying when it doesn't seem to recognize half my gestures. However, when it does recognize my gestures, it's fairly straightforward to move objects on each of the three axes.

Peers make fun of you for wearing something cumbersome.


My initial review is here - https://m.youtube.com/watch?v=5w3MwzG3IiQ - where I was really impressed with the great industrial design and very promising features.

After playing with it for a while though, I have to conclude it's not yet a consumer product and probably won't be for many years. Maybe it will find a place in the enterprise.


Have done significant work for it (source: work in consulting) and it's feeling like a new paradigm much more than VR or anything else. The biggest thing is layering the virtual world onto what you are looking at.

I would say info isn't as sparse as it used to be. Search for the Holographic Academy, watch their YouTube channel, and subscribe to the Windows MR blog/newsletter.

Have demos of stuff I built, feel free to DM if you want to see.


Underwhelming, to say the least. I've had the opportunity to use HoloLens on many occasions, interacting with many different types of applications, and in many different environments. The extremely limited FOV cripples user experience and usability. There is no feeling of immersion whatsoever.

It's a fun proof of concept, but not much more.


Honestly, if you're interacting with physical objects around you at the same time, the FOV is actually more of a benefit. You do, however, need to be strategic about what objects you're virtually placing in the scene to improve the experience.


Have only used ours a few times since we got it. I like the display tech and image stability. I dislike the incredibly narrow FOV, imprecise and cumbersome gestures, and how difficult it is to get a comfortable fit on my head. I wouldn't pursue the first generation unless they make tons of progress on FOV and fitting.


I have the opposite opinion. The gesture recognition works perfectly and it's easy to wear. However, the voice recognition was really flaky in my experience. (I tried integrating voice into a Unity app.)

There is a trick to getting the HoloLens comfortable: just make sure the inner strap is tilted ~45 degrees. This might sound obvious, but I observed a lot of people just leaving the inner strap horizontal, which means a lot of weight is on the nose and the strap puts too much pressure on the forehead.

As for the tech, same opinion as everyone else. Idea is awesome, works great, FOV is limiting factor.

As for development, it was super easy. Unity has built-in support, so you just press play in Unity and it automatically gets sent to the HoloLens.

I developed for the Hololens and wore it a lot over a couple of weeks.


Almost completely useless as an actual device for doing real work, but much more in line with what future such devices will be like. In contrast, the HTC Vive is useful, usable, and a much more pleasant experience all around, but also kind of a dead end in terms of design.

Get a Vive now, wait 2 years before getting an AR device.


The AR experience is great, but the hardware in the HoloLens is a bit slow, with only 2 GB of RAM. If you develop bigger apps you will notice some lag, i.e. separation of white lines into red, green, and blue when you move your head. But overall a nice gadget.


Is there any reason why HoloLens can't be both VR & AR?

If I want to watch a movie in a public area for instance, I'd love a VR mode to tune out everything else.


I think the biggest problem is that even if you darken the entirety of the visor (and I'm basing this off the original models) so you can only see what's being shown by the built-in screens, you'll still get a ton of peripheral light and noise. To get to VR levels of visual isolation would require something closer to what you get with an Oculus Rift or similar headset.


So we could have a good VR setup in a darkened room? Like people have with projection TVs?


I've spoken about this before on here. We developed on HoloLens for a couple months. Working on the HoloLens app was actually my first foray into 3D development, and also required converting ThreeJS JSON into Unity models which was a mess.
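For the curious, here's a hypothetical minimal sketch of what that ThreeJS-JSON-to-Unity conversion path can look like, targeting OBJ (a format Unity imports) and assuming the simplest BufferGeometry layout (flat position array plus optional triangle index). It ignores normals, UVs, and materials, which are exactly the parts that made the real job a mess:

```python
import json

def threejs_to_obj(doc: str) -> str:
    """Convert a minimal ThreeJS BufferGeometry JSON string to OBJ text.

    Hypothetical illustration only: real ThreeJS exports vary widely,
    and this handles just the simplest indexed-triangle case.
    """
    geo = json.loads(doc)["data"]
    pos = geo["attributes"]["position"]["array"]   # flat [x, y, z, x, y, z, ...]
    idx = geo.get("index", {}).get("array", [])    # flat triangle indices
    lines = []
    for i in range(0, len(pos), 3):
        lines.append(f"v {pos[i]} {pos[i+1]} {pos[i+2]}")
    for i in range(0, len(idx), 3):
        # OBJ face indices are 1-based
        lines.append(f"f {idx[i]+1} {idx[i+1]+1} {idx[i+2]+1}")
    return "\n".join(lines)

# A single triangle as a minimal ThreeJS-style document
sample = json.dumps({"data": {
    "attributes": {"position": {"array": [0, 0, 0, 1, 0, 0, 0, 1, 0]}},
    "index": {"array": [0, 1, 2]},
}})
print(threejs_to_obj(sample))
```

Even in this toy form you can see where the friction comes from: the two formats disagree on indexing bases, handedness, and what counts as part of the geometry versus the material.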

The user experience --------------

HoloLens is mesmerizing. I'm not big into VR or anything, and will often make the argument that VR hype will die out and is a fad. But there's something very different about what Microsoft is doing. The ability to incorporate reality as a first-class citizen in your 3D applications (or vice versa) is groundbreaking. People often complain about the FOV when they first try it out, and I had the same complaint, but your brain is able to compensate once it gets used to it, and then you stop noticing it. That's something you don't get from a short trial of it at a tech demo. The user inputs are indeed very clumsy still. We'll need vast improvements in this area before HoloLens can feel immersive. But the amazing thing is that this first pass isn't that bad. It can track your hands and it's a computer that sits on your head. I mean, come on! I'm only 22 and even I think that's amazing.

The developer experience ------------------------

One of the major shortcomings of HoloLens development is its dependency on Unity. C# isn't the problem; I love C# and use it daily now for web development. The problem is that Unity uses .NET 2.0, and good luck finding C# libraries that are compatible. So for every new thing you want to do, you're going to have to find a "Unity compatible" C# library, which is very annoying.

Unity will work for what you need most of the time, but it turns out if you want to try something custom (like your own gestures) then you're out of luck, because the Unity APIs are limited in that way.

I suppose I'm mostly just not a fan of Unity's component model. Constantly switching between adjusting settings in the IDE and coding feels like a bad way of developing.

Okay, so maybe you want to try something a little lower level. Microsoft offers a C++ API as well, and for the most part this is what you want if you need to harness the limited power of the HoloLens. I haven't played around with all of the APIs, but I know of one in particular that left a bad taste in my mouth (this applies to Unity too) -- the spatial anchor API. For those of you who are unfamiliar, the spatial anchor API is the only way to acquire a durable and persistent reference to a real world location. This is done (I think) with sensor data (orientation, lighting, and images captured by the 4 on board spatial mapping cameras.) This is really an incredible feat of engineering, however it produces a binary which is around 15MB. Far too large to store in a database at scale. I'd like to see MS open up raw access to those sensors so middleware developers can try their hand at improving this aspect of HoloLens.

If C++ isn't your thing, there's a library called HoloJS. You guessed it, it's a JS runtime for HoloLens with access to native libs. I actually started my own variation on this (called HolographicJS) before Microsoft released theirs, but I'm happy they've taken over.

The future ----------

So what does this all mean for a device that seemingly has its share of problems to overcome? Well, after trying it I'm fairly confident that MR, as Microsoft calls it, is here to stay. The ability to mix reality with virtual reality, and augment that with a layer of environmental understanding, is really incredible. I think we're just scratching the surface of the possibilities.

HoloLens is the first in a new field of devices that I believe will come to replace all forms of computers we currently use: phones, laptops, desktops, tablets, etc. Even things like IoT devices. Why spend time building your own interfaces when you can just augment the users'?

If v2 had better FOV and improved input tracking, I'd consider it a major success. But if it also included improved spatial mapping and a reliable GPS, that could bring us into a whole new world, quite literally.

The way I see it, the first company to solve outdoor use of an MR device, and solve what I'm calling the "universal spatial map" problem, will run the world of tomorrow.

Imagine every machine being capable of interfacing with you without the need for a screen or separate device. Imagine walking down the street, gesturing to a restaurant and placing an order before you even get inside.

Further down the line: what if we could transfer the consciousness of a dying car crash survivor into a computer? What if that person could then be virtually transported back to the scene of the accident, to be greeted by those who are augmented?

Anyway, that's all crazy futurism; but the point is that reality starts with what is being done with HoloLens, and I think it's an incredible thing to be a part of.

To me, HoloLens feels like the Apple II.



