I play a game called Elite Dangerous with my Oculus Rift, and the UX of having holographic windows around my person that I can look at to activate and control is so insanely powerful and so freeing that I'm literally willing to drop multiple thousands of dollars right now on something that can give me that experience with normal desktop software.
Sadly the Rift lacks any way to see the real world when it's on (which makes using a keyboard and mouse more difficult), and the resolution still has a ways to go before I can comfortably use it like this, not to mention the software still needs work for this use case.
But I can't wait until the day when I can get rid of the monitors on my desk and replace them with a headset: being able to place normal 2D windows around my person, maybe use some limited gestures to control them, and then use the same keyboard and potentially mouse that I'm used to.
In the demo they use giant fonts, because text rendering sucks if you simply render text to a texture, then put that texture on a rotated plane in 3D. If that last stage were bypassed (i.e., when rendering text, the renderer would somehow know the rotation and distance and draw the vector font directly into the screen pixels), they could get it crisp enough for most apps.
It gives a pretty good summary and a lot of other vocab words you can google, and touches on the Pathfinder project
IMHO, that's another dead end (or rather an interim technology), like full-screen apps or single-task environments (hello, DOS).
Most productivity apps, however, have their own way to subdivide space within the app efficiently. E.g. in the IDE, there are various panes and tool windows. Put a file browser on one side, a documentation browser on the other, and a terminal pane on the bottom, and it actually starts feeling kinda crowded.
Either way he should be ashamed of himself!
Sure, some people will want to tune out / turn off all other apps from time to time but the ability to run them together is really key it seems.
I wish browser VR supported popping 3D things out of the screen. Currently browser VR is: click a button, and the browser displays that page's VR presentation in your VR. But run one of the many VR virtual desktops that show your computer's desktop as a virtual monitor in VR, and suddenly I want a standard so that a webpage can make a 3D object that pops out of the virtual monitor. Ideally, like browser tabs, I could pop out lots of different VR apps/pages/etc., all running at the same time, all integrating in the same 3D space.
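For reference, a minimal TypeScript sketch of today's per-page model using the real WebXR API (the `enterVr` wrapper name is mine). The key point is that the session is exclusive, which is exactly why a page can't just contribute one object to an already-running shared space:

```typescript
// Sketch of the current "click a button, hand the whole display to one
// page" model, using the real WebXR API.
async function enterVr(canvas: HTMLCanvasElement): Promise<void> {
  const xr = (navigator as any).xr; // WebXR; needs @types/webxr for real types
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    throw new Error("No immersive VR support in this browser");
  }
  // Must be called from a user gesture; the session is *exclusive*:
  // while it runs, this page owns the headset and nothing else renders.
  const session = await xr.requestSession("immersive-vr");
  const gl = canvas.getContext("webgl", { xrCompatible: true })!;
  await session.updateRenderState({
    baseLayer: new (window as any).XRWebGLLayer(session, gl),
  });
  // ...per-frame rendering via session.requestAnimationFrame(...)
}
```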
(Something I don't think a lot of people notice, too, is how important 3D is to the Fluent Design System, and not just for Material Design-style reasons of faking paper stacks for visual interest in 2D, but probably because far more of Microsoft is taking 3D seriously than is currently obvious, given their intentional "no hype" approach.)
It certainly sounds like the Windows team has been thinking about all of this sort of stuff for years now, and I'm very curious to see a lot more of it come to light rather than just "we're thinking about it". This article also hints at some more of it getting released as real services SOON™.
I'm presuming we'll hear a lot more about Azure Spatial Anchors at this year's BUILD, and that could be quite interesting based on the hints in this article.
Based on the parts of the video where they mention display technology, ClearType is probably out: they're projecting lasers, and I don't think there are subpixels.
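For anyone wondering why the lack of subpixels matters, here's a toy sketch of the subpixel-rendering idea (my own illustration, not actual ClearType, which also filters heavily to tame color fringing): rasterize glyph coverage at 3x horizontal resolution and spread the samples across the R, G, B stripes of each output pixel. No stripes under a laser projector means nothing to exploit:

```typescript
// Toy subpixel rendering for an RGB-stripe LCD: each triple of coverage
// samples becomes the R, G, B channels of one output pixel
// (black text on white, no fringe filtering).
function subpixelRow(coverage3x: Float32Array): Uint8ClampedArray {
  const pixels = coverage3x.length / 3;
  const out = new Uint8ClampedArray(pixels * 4); // RGBA
  for (let px = 0; px < pixels; px++) {
    out[px * 4 + 0] = 255 * (1 - coverage3x[px * 3 + 0]); // R stripe
    out[px * 4 + 1] = 255 * (1 - coverage3x[px * 3 + 1]); // G stripe
    out[px * 4 + 2] = 255 * (1 - coverage3x[px * 3 + 2]); // B stripe
    out[px * 4 + 3] = 255;                                // opaque
  }
  return out;
}
```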
Ironically I feel that subpixel rendering works better (and becomes worthwhile) in high-dpi displays where the artifacts aren't as noticeable.
I also have an 8" 1280x800 Windows tablet which doesn't look good regardless of the setting, but it was very cheap, something like $120.
Edit: Here's Valve's Siggraph paper on the technique https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...
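The core trick in that paper fits in a few lines. A sketch in TypeScript (function names are mine): the glyph texture stores distance-to-edge instead of coverage, and at render time you threshold it with a smoothstep about one screen pixel wide, so the edge stays crisp under any rotation or magnification:

```typescript
// Distance-field text a la the Valve paper: the texture stores distance
// to the nearest glyph edge (0.5 = exactly on the edge) instead of coverage.
function smoothstep(e0: number, e1: number, x: number): number {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// `pixelWidth` is roughly how many SDF units one screen pixel spans
// (what fwidth(dist) gives you in a real fragment shader), so you always
// get ~1 pixel of antialiasing no matter the rotation or distance.
function sdfAlpha(dist: number, pixelWidth: number): number {
  return smoothstep(0.5 - pixelWidth, 0.5 + pixelWidth, dist);
}
```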
Looks like it's probably similar to using a 27" monitor on your desk, depending on the resolution/size of the display
I would be surprised if this HoloLens is up to 47 PPD, but to my pleasant surprise MS did make a big leap (no pun intended) in optics this go-round, so I suppose it's possible. In my opinion, they are definitely (finally) on the right track in pursuing a virtual retinal display (i.e., laser projection onto the retina).
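For concreteness, pixels-per-degree is just horizontal pixels over horizontal FOV. A quick sketch (the monitor numbers are my assumptions, not specs): a 27" 1440p panel at a desk-ish 60 cm lands right around that 47 figure, which is also why the 27"-monitor comparison above holds up:

```typescript
// Pixels-per-degree from panel width, viewing distance, and resolution.
function pixelsPerDegree(hPixels: number, widthM: number, distM: number): number {
  const fovDeg = (2 * Math.atan(widthM / 2 / distM) * 180) / Math.PI;
  return hPixels / fovDeg;
}

// A 27" 16:9 panel is ~0.60 m wide; at ~0.60 m viewing distance:
// FOV ~53 degrees, 2560 / 53 ~= 48 PPD -- right around the quoted 47.
console.log(pixelsPerDegree(2560, 0.6, 0.6).toFixed(1));
```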
The ML1 probably isn't good enough for actual use, due to the text rendering, but I think the HL2 will be. I had the HL1 and its text rendering was plenty good, but the FOV killed it; the 2 should fix that.
It may work if all you need is playing a game for an hour or two but if you need to actually work for hours on a lot of text or graphics, the technology is not anywhere close yet.
That's what annoys me so much. I feel like the tech is already here, but nobody is putting together all the right parts for some reason. We have high enough res displays (IMO), we have the tracking, we have the "AR" aspect, and we have the ability to project screens into the world around you, but nobody is putting all those parts together in one package!
"Still in the lab, but given a demo at Mobile World Congress, these glasses actually deliver 4K per eye and yield a 120 degree field of view (FoV)."
I was fortunate enough to try the v1 of HoloLens when a Microsoft representative brought it into my work. The tech was amazing, but the FOV was far less than I anticipated. From what I can recall, it was akin to looking through a tissue-box-sized area roughly one foot from your face.
I'd be honestly happy with my Rift as-is if they bumped up the resolution a bit, and had some way to see the real world with the headset on (even camera passthrough is fine!).
I've used virtual desktop apps in the Rift, and they aren't bad to use, and playing games like Tabletop Simulator is so intuitive and nice that I'm floored that this hasn't been tried commercially yet.
I assume there are some pretty significant hurdles that I'm missing, or maybe the market is still hilariously small at this point, but it just seems so close!
Also the headsets are too hot and bulky to comfortably wear for a full workday.
I solve this by leaning forward or back. In VR/AR the same thing works.
No doubt 4-16x resolution would be nice, though. True "retina" screens are still at least a decade away.
And the rest already exists in other products!
Note, I am not claiming it won't succeed. Just that it will take a long time.
What's more, this will have less rigor than standard 2D graphical builders. And those don't exactly have a great track record. :(
I think that with the HL2, many of these are going to go away. I can't speak to the comfort, but I know that the first two are now non-issues for me.
It's only a few characters, and your saving typing breaks my Ctrl+F and web search. :-)
You can try a service like this: https://immersedvr.com/ to try out just piping your desktop to an Oculus Go. There is a bit of latency, which will disappear with something like the HoloLens.
However, you'll notice that they are taking a 1080p desktop and sampling it onto the Oculus (or HoloLens) spherical glass. That loss in resolution is quite noticeable.
I definitely agree with your points on using AR for real work, it has amazing potential.
And there are things like the left and right menus that you just look at to activate and control, and when you look away they deactivate. It's incredibly intuitive. There's also the effect of being able to move your head to look out the various windows, and even lean and sit forward to see out a bit more when you need to. Battle is amazing, as you can really get into a dogfight and spin your ship to keep them in view and keep an eye on what they're going to do next!
It's pretty amazing, and it's the only way I play it now. I highly recommend a Rift even for just that game!
Varjo VR-1: https://varjo.com/
For me, I'd need to ditch the headset. That thing covering my head is too heavy and makes my face sweat.
I wish I could get more into VR. I have a Rift, a PSVR, a Daydream. I've loved a few experiences but mostly I turn them on once every 3-4 months at this point. The quality experiences (for me) are few and far between.
rant mode on: The new Oculus Home UI is horrible. It uses every button on the controller instead of, say, just one button with all the UI in VR. It's hugely frustrating pressing the wrong button or having to test every button to figure out which one does what. It's like it was designed by an Xbox dev instead of a VR dev.
In fact, I’d like to get past being tied to a desk.
In an article on HN a few days ago, Stephen Wolfram had a laptop strapped to himself so he could work while on walks. We would all benefit from getting away from our desks a little more often.
Direct link to Wolfram’s blog:
It's $180 for the module, then $11/mo and $12 for 100 credits (5kb of data transfer). So to get up and running would be $180 + 12 × $11 = $312 for the first year, for what would be at most a location beacon (6 B per 24-digit location transfer).
Not at all unreasonable for how much of an offline system this is.
Also, even though location info is typically the main feature, I believe it's included automatically in the msg envelope (a kind of satellite caller ID), which means you can use that more expensive bandwidth for your own requirements, while still getting the lat/lon metadata.
The game I play most with the Oculus Rift can use a keyboard and mouse (and a flight joystick!). At least a few times an hour I'll blindly find the home row on the keyboard by touch, only to realize too late that I've put my left hand on fghj instead of asdf. And finding a mouse blind after taking your hand off it is really hard! Especially when trying to do it without bumping it and accidentally clicking or moving it a lot.
I think some kind of visual "rough outline" of where those devices are in reference to the headset would help a TON.
But I do think a part of it is that my brain has gotten very used to having the general location of the keyboard. So for instance I know the home row (for my left hand) is middle of the keyboard vertically, and on the left side horizontally. I'll quickly find the middle vertically by touch, but then I search for the nub on the f key next and if I was only a few keys too far to the right, I'll find j instead and then instantly assume I'm at the right spot, and it's quite jarring when it's wrong!
With sight, I'm able to know and move my hand toward the right hand side without feeling around more than a key in any direction for the most part.
And if AR ever becomes ubiquitous, the expectation would be that multiple people could wear the headsets and all see the same AR info.
Version 2 uses a deep learning accelerator, which makes it possible to do the heavier computation of DNNs (whose floating-point operations would be much more expensive on the CPU).
From an engineering perspective, I just love seeing how it touches all abstraction layers of the stack, and the types of solutions that come out of thinking about the silicon and the high-level ML models at the same time.
In non-DNN image processing it's quite common to use ints as well (iDCT, FFT, etc) for the potential performance gains vs. floating point.
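As a toy illustration of that int-vs-float tradeoff (my own example, not anything from the HPU's actual design): quantized DNN inference and fixed-point DSP both reduce the hot loop to integer multiply-accumulates, with floating point only at the edges:

```typescript
// int8-quantized dot product: real_value ~= scale * int_value. The hot
// loop is pure integer multiply-accumulate (cheap in silicon); floating
// point appears only once, to rescale the result at the end.
function quantizedDot(
  a: Int8Array,
  b: Int8Array,
  scaleA: number,
  scaleB: number
): number {
  let acc = 0; // fits easily in a 32-bit accumulator for short vectors
  for (let i = 0; i < a.length; i++) {
    acc += a[i] * b[i];
  }
  return acc * scaleA * scaleB;
}
```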
We need a search engine for holographic layer services. It would be like Google, but for MR experiences. Holographic services would use a protocol that defines a geofence to be discovered by the layer search engine's crawler over the internet (this could just be a meta tag on a classic website). The HoloLens or whatever MR device would continuously ping the search engine with its location, and the results would be ordered based on their relevance (size of geofence and proximity are good indicators). The MR device would then show the most relevant available layer in the corner of the FOV. Selecting the layer would allow enabling it either once or always, and the device would then deliver the holographic layer over the internet. The holographic layer would behave like a web service worker (in fact, it could be a web service worker) and would augment a shared experience which contains other active holographic layers. For example, your Google Maps holographic layer could be providing you with a path to walk to the nearest Starbucks, and once you're outside Starbucks, the Starbucks layer is also activated, which allows you to place an order.
This concept of activated layers, I think, is a great way to avoid a future where we're being bombarded with augmented signage and unwanted experiences. In fact, you could go further and enable blocking notifications about specific types of available services (i.e., don't notify me about bars or fast-food restaurants).
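A hypothetical sketch of the discovery/ranking side, category blocklist included (every name below is invented for illustration; no such protocol exists today):

```typescript
interface HoloLayer {
  url: string;
  category: string;                      // e.g. "maps", "bars", "fast-food"
  center: { lat: number; lon: number };  // from the page's geofence meta tag
  radiusM: number;                       // geofence radius in meters
}

function rankLayers(
  layers: HoloLayer[],
  device: { lat: number; lon: number },
  blocked: Set<string>                   // "don't notify me about bars"
): HoloLayer[] {
  // Crude flat-earth distance in meters (ignores the cos(lat) correction
  // for longitude; fine for a sketch).
  const distM = (l: HoloLayer) =>
    Math.hypot(l.center.lat - device.lat, l.center.lon - device.lon) * 111_000;
  return layers
    .filter((l) => !blocked.has(l.category))
    .filter((l) => distM(l) <= l.radiusM)  // device is inside the geofence
    .sort((a, b) => distM(a) / a.radiusM - distM(b) / b.radiusM); // small, near fences first
}
```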
HoloLens also doesn't really work outdoors (display contrast and tracking are the largest limitations), so there is no point in designing a system to support that when the hardware is not usable in such scenarios.
Indeed, HoloLens isn't great outdoors, however the current technology is capable of it (despite the HL design.) The visor can be darkened to function more like sunglasses, which solves the contrast problem.
I'm just looking 5-10 years out, in terms of what's going to be needed.
Evolving technology from an existing place is generally more successful than trying to invent the end goal at the start.
Information tends to end up in different systems. You have for example something responsible for the IoT tracking data of machinery, something else for the HVAC and maybe another system for the maintenance manuals. The proposed system could pull this information together, collecting from different systems based on location (and access rights) within the facility.
Outside the facilities, it might be good to combine information from electricity, water, and telco companies when you are at a construction site. Think, for example, of an excavator operator having an augmented-reality view of all the pipes and cables in the ground. This kind of information might already be available via the web if you have suitable access rights. Something similar could be applied here.
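A hypothetical sketch of what that aggregation layer could look like (all names invented for illustration): each backend system answers "what do you know about this spot, for this user?", and the headset merges the results:

```typescript
interface Position { lat: number; lon: number; floor?: number }
interface User { roles: string[] }

interface Annotation {
  label: string;                            // e.g. "Pump 7: bearing temp 82C"
  position: Position & { heightM: number };
}

interface LocationInfoSource {
  name: string; // "iot-telemetry", "hvac", "maintenance-manuals", "utility-gis"
  query(pos: Position, user: User): Promise<Annotation[]>;
}

async function annotationsAt(
  sources: LocationInfoSource[], pos: Position, user: User
): Promise<Annotation[]> {
  // A source that is down, or that denies this user access, contributes nothing.
  const results = await Promise.all(
    sources.map((s) => s.query(pos, user).catch(() => []))
  );
  return results.flat();
}
```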
"Across augmented reality (AR) platforms, “anchors” are a common frame of reference for enabling multiple users to place digital content in the same physical location, where it can be seen on different devices in the same position and orientation relative to the environment. With Azure Spatial Anchors, you can add anchor persistence and permissions, and then connect anchors in your application so that your users can find nearby content."
Yep I'm also happy about this and like the direction Microsoft seems to be going. Industrial applications have always seemed much more interesting and potentially widely beneficial than just targeting consumers and entertainment.
Twice the FoV and with real hand tracking? That suddenly makes this thing viable in one step.
We do stuff with our hands largely away from the center of our vision. You might make a brief glance to calibrate yourself, but e.g. grabbing a thing doesn't require constant focus on the thing through the entire act.
I’m interested to see how Firefox handles web browsing on HoloLens. I’ve always envisioned URLs in that context as a way of loading a new holographic experience into a shared space, as opposed to each URL providing its own separate experience (though this would be good as an option.)
Yes it is!
What is actually hilarious about this tech is that as the hardware gets better, the information it feeds you will ultimately be the most valuable piece. Crazy to think about when it comes to verifying the validity of a firehose of data.
I'm sure a lot of folks rarely watch entire segments of ads unless it's on their terms. Does further ad-walling that content not just reduce the overall number of eyeballs? It's like the only ad metrics seen as meaningful are the hostile ones...
The example given, showing someone where to put a bolt, is about making a human do a robot's job. The computer decides what has to be done, and tells the human exactly what to do. Like the picking system in Amazon warehouses. We're getting ever closer to Marshall Brain's "Manna".
Machines should think. People should work.
Kipman has become an expert at working on projects that have an initial ooh-ah factor and media appeal, but he doesn't think beyond that at all. Kinect's main use case was never gaming; it's too inaccurate for pleasant use after the initial wonder fades off. HoloLens's main appeal was never creating this virtual world but rather becoming a sensor that can be leveraged in all kinds of applications. But that would be the least sexy thing for Kipman to do, and of course it may not generate as many media articles.
I _want_ to agree with this "ethical opposition" approach, but I know too much game theory / history to know that I also don't want my power grid knocked out on a moment's notice because, say, Googlers opposed stronger cyber security for our military.
But I can't see why this one would be in that category.
Increasing lethality decreases collateral.
Lethality is the effectiveness of the desired outcome. Was the target eliminated?
Collateral is the side effects of the desired outcome. Was anyone or anything else eliminated?
They are two different metrics. Consider two contrasting examples that break that intuition:
Ballistic missile: high lethality, high collateral probability.
Polonium tea: high lethality, low collateral probability.
About 20 years ago, I was working in the defence sector, on guidance systems, as an engineer. Sat through many a powerpoint about how collateral could be marketed away. Turns out they won that battle eventually...
Incidentally I bailed out when I saw what a fucking mess of a world I was creating.
By your metrics, the most lethal thing in the world is a rusty spoon stabbed into someone's eye; either it kills the target instantly or kills them later due to infection. Very close to a 100% death rate. Very lethal. No collateral damage, either.
And yet no professional militaries are armed with rusty spoons.
Did you use GPS technology this past week? That was the tech guiding the drone to its target. And this thing called the internet was invented to network the military.
No, what prompted GPS to be opened to civilian uses was KAL007 being shot down by the Soviet Union.  269 killed, including a US Rep.
"BRAD PARKINSON: Absolutely. And there are a lot of misconceptions about that. You hear people saying, well it was a military system and eventually made a civil system. That was not true.
When I testified before Congress back about 1975, I said this is a combination military civil system."
In the time you get one navigational update for your car/iPhone GPS, a cruise missile's GPS has received 10 updates, presumably helping it guide itself up the ass of some terrorist at ~500 MPH.