He's an engineer who understands the limitations of physics, especially when it comes to optics and light. He is very good at parsing marketing hype vs. reality. (He called BS on Magic Leap years before it launched.)
His posts are exceptionally well researched and explained, even if you don't have a background in physics or optics.
He recently did an analysis of the Apple Glass leaks. I expect he'll post his thoughts on this new technology from FB soon.
I've been reading Karl's takedowns for a few years now and, while there is never anything technically wrong with what he says, it's also just not as important as he thinks it is.
Yes, waveguide optics aren't the best possible visual experience one could have. But does that matter all that much? I think Karl's terrible diagrams point out why he makes this mistake: he doesn't understand that design is the much more important consideration here.
The technical limitations of the display technologies that we have today are not impediments to product development. Good software can be designed to work around these issues. Can't render black? Don't design around dark themes. Have a narrow field of view? Don't require people to try to keep mental track of things around them by vision alone.
Karl looks at the HoloLens, sees the waveguides, and misses all the amazing operating system features. Speech recognition, spatialized audio, a fully spatialized desktop metaphor. These things are important and they go a long way towards the usability of the system.
And that was why the Magic Leap failed. Not because the displays were crap, but because the entire system was crap. It was basically "just a" stock Android system with super flaky WiFi and no systems view on delivering a unified product. The entire product was fundamentally mismanaged. The hardware was slightly better than the first HoloLens, but you were far more limited in making good software for the Magic Leap than you were for the HoloLens.
I suppose the parent was downvoted because some thought he was questioning the "hard" laws of physics. But perhaps he could correct this by traveling back in time and changing his wording ;-)
Fundamental skepticism is nice in a thought experiment, but what value are you trying to add with such a vague statement?
> I expect he'll post his thoughts on this new technology from FB soon.
1. Weight and overall comfort. Pressure on the forehead; sweating when you're playing physical games in a warm room, which can fog the lenses; even the rubbery-plasticky smell of it. It's just not fun to use physically.
2. Resolution/graphics. I just wish it had higher resolution so I wouldn't see the pixels. It would make everything much more immersive.
3. Usefulness/content. It's cool, but it gets old pretty fast. I found myself barely playing after the initial couple of weeks of excitement.
Motion sickness wasn't a problem for most games, which was a surprise, since I get motion sickness from first-person games on a normal screen and can't play them for more than a couple of minutes.
So I guess 1 and 2 can be solved by this technology.
The intensity of light is quite high outside, could an LCD not darken it enough to have sufficient contrast if you were OK with it only working outside?
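Some rough, illustrative arithmetic on that question. An LCD is subtractive rather than additive, so its "black" state still leaks a fraction of the scene light through. The scene luminance and panel contrast figures below are assumed ballpark numbers, not measurements:

```typescript
// A transmissive LCD blocks light rather than emitting it; its black state
// still passes roughly sceneLuminance / panelContrast. The 5000-nit outdoor
// scene and ~1000:1 native panel contrast used below are assumptions.
function blackLeakNits(sceneNits: number, panelContrast: number): number {
  return sceneNits / panelContrast;
}

console.log(blackLeakNits(5000, 1000)); // outdoors: 5 nits leaking through "black"
console.log(blackLeakNits(200, 1000)); // dim indoors: only 0.2 nits of leak
```

So purely on contrast, an outdoors-only LCD shade is plausible in principle; focus at eye distance and the panel's own transmission loss are separate problems.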
The best example of a transparent display I've seen in production is: https://glas.johnsoncontrols.com/
But the material on that seems darker and I believe they use an edge light to provide higher contrast.
I think the distance from the eye, as well, would be difficult if you aren't doing some type of projection, like what is used in the article.
Anyway, any LCD can be a transparent LCD if you peel off the backlight layers, if you want to experiment. Search for "DIY transparent LCD" or "LCD side panel casemod".
If you are building an AR system there is always an awkward balance between "letting the environment shine through" and "having projected items be bright enough to be visible". If you put something black in front of the holograms at least now you have just one problem instead of two problems.
There are a lot of details to work out, which is why current AR headsets are still at the bridesmaid and not the bride phase.
VR headsets are workable, but somewhat expensive, and content is lacking.
AR headsets are very expensive, have poor image quality, and even less content. It seems every defense contractor and electronics conglomerate got patents for holographic waveguides in the 1990s when the F-35 was under development; that headset is not so bad, but it costs $250,000 and the original version was heavy enough to break your neck when the ejection seat fires.
Apple may be working on an AR headset, it may be a big hit in the end, but I will believe in product-market fit when I see it.
I suspect they've tried to augment the device for AR already, since many of the forward-looking trends focus on AR.
I think the head tracking is not as forgiving for an AR system.
I know this may not seem like the "intended use case", but the developer experience could use some innovation for a change. It's also one way to bring these technologies closer to developers.
Roughly, per-eye resolution is in the same ballpark as HD displays, but stretched over a 90+ degree field of view. Fonts need to be very large to be legible. You can create a theater sized virtual monitor, but it's just taxing to use. Aliasing artifacts make it worse.
At least for text-focused tasks, I'd take virtually any display built in the past 40 years over a modern VR headset.
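The "HD stretched over a 90+ degree field of view" point can be made concrete with rough pixels-per-degree arithmetic. This is an illustrative sketch: the linear pixels/FoV approximation ignores lens distortion, and the 30-degree monitor geometry is an assumed typical setup, not a measurement:

```typescript
// Rough angular resolution in pixels per degree (ppd), a common proxy for text
// legibility. Human foveal acuity is often quoted around 60 ppd; desktop
// monitors at normal viewing distance land comfortably above headsets.
function pixelsPerDegree(horizontalPixels: number, fovDegrees: number): number {
  return horizontalPixels / fovDegrees; // linear approximation, ignores distortion
}

const headsetPpd = pixelsPerDegree(1920, 90); // "HD per eye" headset: ~21 ppd
const monitorPpd = pixelsPerDegree(1920, 30); // same panel as a desktop monitor: 64 ppd
console.log(headsetPpd.toFixed(1), monitorPpd.toFixed(1));
```

At roughly a third of desktop angular resolution, fonts have to be about three times larger for the same legibility, which is exactly the "theater sized virtual monitor" workaround described above.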
Spoken like someone who hasn't programmed in VR yet.
I don't think you'll be programming any operating systems in VR anytime soon, but there is still a lot of programming, specifically object scripting, that could be done in VR. A number of people--including myself--have built demos that prove out the concept.
One of the reasons is that text legibility is not strictly about display resolution. Motion within the view improves legibility significantly. Yes, the fonts render to very large pixels, but the specific pixels they render to are constantly changing, and your brain fuses those images over time. I'm not able to find the paper right now, but the US Navy did a study showing that pilot visual acuity improved in a dynamic scenario. The study performed a visual acuity test where pilots had to identify letters in view from within a flight simulator. One group had full use of the simulator in motion; the other was told the simulator's motion system was broken, but they still sat in it to perform the same test rendered on the same screen.
And as you said, larger fonts are easier to read. There is a lot of spatial resolution in VR that is not used very often. You're used to thinking about organizing your code on a 2D display, but you have an entire 3D environment around you. That environment could be a zoomable interface where code editors are linked to live objects. Use individual editors for individual code units. Organize them in a tree structure linked to the object. Tree structures are a lot easier to navigate in 3D than on a 2D screen, especially if you eliminate window scrolling.
Window scrolling was created to account for the limited spatial resolution of 2D displays. But in the process, you lose spatial memory of where things are located. Things like windows and tabs and desktop workspaces were invented to try to wrangle that problem more, but they are not as good as a real, spatial filing system.
Think about it. You probably know exactly where your favorite book is on your bookshelf. You could probably walk over to it and pick it off the shelf without even opening your eyes. But there is very little chance you can pick any particular file you want in a 2D GUI system, specifically because of the absence of spatial relationships.
So a combination of "text legibility is not as bad as you think it is" and "code could be a lot more organized than it is on 2D displays" means that programming in VR is a lot better than you're making it out to be.
Oh dude, thank you, I was aware of your project 6ish months ago (there's only like 3-4 canvas text editors so I try to follow them all, most are abandoned though) but what you're doing now fits perfectly with what I'm looking for. Thank you, I'll make sure to use it in v0.1. I couldn't get the webgl demo working though. https://github.com/capnmidnight/Primrose/blob/master/demo3d.... returns a 404.
But my long term goal is to actually 3d render the text completely using SDF or MSDF techniques.
You can see it running, sans WebXR, here: https://www.primrosevr.com/demo3d.html
SDF came up as an issue. It's not straightforward: https://github.com/capnmidnight/Primrose/issues/162
SDF would definitely be tricky, but I'd ideally like to render text in 3D space without having a mesh in between. I do realise optimising an approach like that would be extremely hard, but in terms of UX it'd be better than anything possible with meshes and would enable a few interesting possibilities.
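For readers unfamiliar with the SDF technique being discussed, here is a minimal sketch of the core coverage computation. In practice this runs per-fragment in a shader against a distance-field glyph atlas; it's written in TypeScript purely for illustration, and the 0.5 edge threshold and smoothing width are assumed values from the common 8-bit SDF encoding:

```typescript
// Core of SDF text rendering: each texel stores a signed distance to the glyph
// edge (0.5 = exactly on the edge in the usual 8-bit encoding). Coverage is
// recovered with a smoothstep around that threshold; widening the smoothing
// band antialiases the edge at any scale, which is the appeal for 3D text.
function sdfCoverage(distanceSample: number, smoothing = 0.05): number {
  const edge = 0.5;
  const t = (distanceSample - (edge - smoothing)) / (2 * smoothing);
  const clamped = Math.min(1, Math.max(0, t));
  return clamped * clamped * (3 - 2 * clamped); // smoothstep(0, 1, t)
}

console.log(sdfCoverage(0.5)); // on the edge: half coverage
console.log(sdfCoverage(0.7)); // well inside the glyph: fully opaque
console.log(sdfCoverage(0.3)); // well outside: fully transparent
```

Because the distance data scales cleanly, the same atlas antialiases at any zoom level, which is what makes mesh-free glyph rendering in 3D plausible.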
Just going from your username, if you're actually available, I'd love to hire you for a bit of your help/expertise on my project. Like I'd like to render Primrose with a transparent/translucent background without the text being constrained inside a height limited viewport (render all lines at once) and implement scrolling on the mesh itself (also improve scrolling performance), and just support all keyboard + mouse shortcuts.
My email is firstname.lastname@example.org, please message me there if you're interested.
VR is cool, but it seems like a much more useful concept to me to have actual reality with enhanced information.
Imagine wearing glasses and looking at a plate of food then having it estimate + track calories and macros, or paint GPS direction arrows on surfaces realtime, or put people's names you've met before above their head so you can avoid awkwardly admitting you've forgotten it.
Is it technical limitations, or cost?
Edit: Many people replied with really informative answers to this already. I genuinely appreciate your time and insight, thank you :)
There used to be a good blog post from Michael Abrash, from when he was at Valve, that also talked about two main issues: latency, and drawing black effectively.
Latency is critical: low latency is a requirement for things to look real (humans have fast visual systems), but that's ultimately a hardware problem that should get solved in time.
Drawing black is harder because AR displays add to ambient light, and putting a physical black line on a screen right in front of your face doesn't work because of focus.
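The "drawing black" problem follows from simple arithmetic: an optical see-through combiner can only add light to the scene. A sketch with illustrative luminance numbers (nits):

```typescript
// See-through AR optics are additive: what reaches the eye is ambient light
// plus display light. "Drawing black" would require subtracting light, which
// an additive combiner cannot do. Luminance numbers here are illustrative.
function perceivedLuminance(ambientNits: number, displayNits: number): number {
  return ambientNits + displayNits; // the display can only ever add
}

const ambient = 200; // a bright indoor scene
const white = perceivedLuminance(ambient, 300); // 500: hologram outshines the scene
const black = perceivedLuminance(ambient, 0); // 200: a "black" pixel is just the scene
console.log(white / black); // effective contrast of only 2.5:1 against ambient
```

This is why additive AR content is designed around bright, glowing elements rather than dark themes: "black" is whatever the room behind the glass happens to be.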
Unfortunately it looks like Valve killed their blog, but the way back machine has it: https://web.archive.org/web/20200503055607/http://blogs.valv...
My bet is that Apple will pull it off Apple watch style with front facing Lidar: https://www.youtube.com/watch?v=r5J_6oMMG7Y
Probably at first they will be mostly for notifications and interacting with apps in a window in your visual field, getting most of the power from the phone. Things like looking at food for calories and names, etc. will come later when a front facing camera is acceptable and there's existing UI in place.
I think this is probably the next platform after mobile devices, looking at little glass displays is a lot worse than having a UI in your visual field (if it can be done well).
[Edit]: A more recent blog post from Abrash on this topic https://www.oculus.com/blog/inventing-the-future/
Of course you need a lot of space to route incoming light through that focal plane and then to your eyes.
You want to be able to use the full power of human vision when looking at the world, not literally be looking at some subset in a display right next to your face all the time.
It seems it might be a useful distinction, though. I always took AR to be a distinction of interface: reality plus augmentation. But it might also be a distinction of tech type: augmenting normal vision versus "virtual" AR, or AR in VR.
That said, I don't understand your comment about "use the full power of human vision"; if VR headsets improve to the point that VR environments are as detailed (w.r.t. human perception) as reality, then virtualised AR shouldn't differ either.
TBH my own concern is how much VR blocks you from your surroundings: noticing when people approach, handling the headset/controllers/keyboard, etc. I can't replace my monitors with VR because I can't see my keyboard in VR, see my coffee mug, or notice when people approach, so I don't jump every time someone taps my shoulder. VR needs to be partially augmented with my true surroundings just to operate within a normal space.
Because demand is too low, and prior attempts at hyping it up with marketing ended like google glass?
I work at an engineering consultancy that has done a dozen VR/AR toys over the last 5 years.
Some quite big brands use our tech and engineering, but: NDAs, NDAs, NDAs...
There is no magic trick behind any product on the market. The physics and optics of AR/VR glasses are very simple, high-school-level simple. Just too many companies want to add "smoke and mirrors" into the optical scheme...
Making AR/VR goggles power-efficient and lightweight enough for daily use is possible even with current-day tech. It's not much of a secret now that there is an IP blocker on a critical technology owned by Beijing University that shuts everything down.
Microsoft, Facebook, Apple experimenting with lasers now is all about them trying to work around that blocker.
> a solid state microled chip with microlenses
For someone with no knowledge of AR hardware, how is this different (from the perspective of the end user) from a traditional HoloLens-style display?
The biggest problem of all waveguide systems is that they are freaking inefficient, with optics consuming 50-90%+ of all light.
And it is the same problem with pretty much all complex optical systems in AR/VR glasses.
This is why I am a proponent of using mirror optics in this application.
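To get a quick sense of what that 50-90% optical loss implies for the light engine, here is some back-of-the-envelope arithmetic with illustrative numbers:

```typescript
// If the waveguide stack passes only a fraction of the emitted light, the
// display engine must be overdriven to hit a target brightness at the eye.
// The 500-nit target is an illustrative figure, not a spec.
function requiredEmitterNits(targetAtEyeNits: number, opticalEfficiency: number): number {
  return targetAtEyeNits / opticalEfficiency;
}

console.log(requiredEmitterNits(500, 0.5)); // 1000 nits needed at 50% loss
console.log(requiredEmitterNits(500, 0.1)); // 5000 nits needed at 90% loss
```

That overdrive is paid for directly in power draw and heat, which is why optical efficiency dominates the weight and battery story for glasses-form-factor devices.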
It's coming but everyone is waiting for the tech to be ready.
Everyone remembers the "glasshole" problem with Google Glass.
What keeps costs up are the technical limitations. Microsoft and Magic Leap each invested more than a billion to solve the tech challenges, but the truth is that the displays are not good enough. Framerate is too low (compare with what would be necessary for VR), FoV is still rather limiting, colors and blacks are still too faint for many environmental lighting conditions, room scanning is still too inaccurate, and battery life is too short.
This is not to say there hasn't been great progress. Hololens 2 solves the comfort problem and is a nice step with resolution and FoV. They just messed up with image quality (color banding/rainbows are a big issue).
Just think of how it feels to look at your phone halfway through a long hike. I always feel it lessens the magic, and introduces low grade anxiety.
The hardware is significantly harder to get right. VR devices work well specifically because they don't care about your environment. AR devices, to be any good, need to have a semantic understanding of your environment. That's very hard to do, especially on the power and compute budgets that mobile devices allow. And an AR device that isn't mobile is a stupid AR device.
The software is significantly harder to get right. It's a lot easier to model a static scene and make some physics-based interactions in it than it is to try to figure out how to make overlays that react to a real environment.
But I think, much more importantly, the people producing most software in the immersive software space just don't care about your immediate environment. They care a lot more about giving you a canned experience. It's hard to find funding for anything that isn't some sort of media consumption. This is true across both VR and AR. In VR it's ok, because there is no environmental context to exploit anyway. But on AR devices, you just end up with a bad VR app: all the hardware limitations of a mobile AR device with none of the differentiating features. So because you don't get software that cares about your environment, you don't get good user experiences on the hardware.
Open up your iPad or HoloLens or Magic Leap app stores and take a survey of the apps that are there. How many of them have any understanding of your environment? There are a lot that don't even take into account the "room mesh", the solid surfaces that the device can see, to say nothing of what those surfaces represent! I'd estimate upwards of 50% of apps on AR headsets, and maybe 30% on iPad, do absolutely nothing with any surfaces beyond asking you to find a flat space on the floor. That's just crappy VR. As for the ones that attempt to understand what is in your room? Vanishingly few.
You can make a pretty good consulting career out of making what is largely just a PowerPoint presentation in 3D: a collection of canned elements where the user clicks buttons to get scene transitions and animations, all with a directed narrative that is trying to tell you something. Advertisers want it. Media companies want it. A lot of big-industry companies completely unrelated to media want it just to show off at conferences to "prove" they are "forward-looking".
And you'll get a lot of those clients asking--even demanding--you make that as an AR app, especially on iPads. But it sucks. It's just not anything about what's good in immersive experiences. It fits a little better in VR. It still sucks in VR. But it comes around from backwards priorities. These companies start from wanting VR/AR and work backwards to a use-case. And often they lack any sort of experience or even actual interest in immersive design. What they want is to just do a marketing piece. There are very few companies that start with a use case and then find out whether or not VR or AR is the right solution.
But that's where the bread-and-butter money is. And it sucks the air out of the room. It leaves the real, good, immersive experience development to people who are independently wealthy enough to do it on their own, or to hobbyists hacking it together in their spare time.
Do you have any people or communities or reading materials you'd recommend to get more familiar with AR in industry?
I have been trying to start a reading list, but it's woefully incomplete. I'll copy the content here (I don't have it publicly online yet). I'm primarily centered on VR, but I've also done a lot of AR work. I think good application design is very similar in both. Or rather, all the "bad" apps I talked about are similarly bad in their failure to take the immersiveness of the experience into account. But I do think there is a lot of overlap in terms of needing to move away from a traditional, compartmentalized application mindset and start thinking about immersive software as more akin to clothing: overlays on top of the world.
Kent Bye's "Voices of VR" Podcast (Site: https://voicesofvr.com/, Twitter: https://twitter.com/kentbye?s=20). Kent Bye has been a singular voice in the VR and AR community for the entirety of the contemporary VR movement. He brings a philosophy and social impact perspective. I think a lot of application design--immersive or otherwise--doesn't take human factors into account often enough.
Road to VR (Site: https://www.roadtovr.com/, Twitter: https://twitter.com/RtoVR?s=20) is also a very long-standing blog on all things AR and VR. They are more focused on gaming, but they also cover industry trends, new hardware, and companies.
John Palmer has a few blog posts covering spatial interfaces that are very insightful (https://darkblueheaven.com/)
This Medium post by Douglas Rushkoff talks about some of the problems with digital media, which I believe VR can help solve (https://medium.com/team-human/digital-media-still-isnt-very-...)
Jaron Lanier is one of the "fathers" of VR, part of the "first-wave" of VR work in the late-80s/early-90s. The Verge did an excellent interview with him (https://www.theverge.com/2017/12/8/16751596/jaron-lanier-daw...). You can get to his books from his website (http://www.jaronlanier.com/), which are all excellent treatises on humanity's relationship to technology.
Incidentally, here's Kent Bye interviewing Jaron Lanier (http://voicesofvr.com/600-jaron-laniers-journey-into-vr-dawn...)
Liv Erickson has an excellent blog on technology that covers a lot of issues in VR, accessibility, and machine learning (https://livierickson.com/blog/). In particular, "6 Questions to Ask Before Diving Into VR Development" is a great primer on VR concepts (https://livierickson.com/blog/6-questions-to-ask-before-divi...)
Tom Forsyth (Twitter: https://twitter.com/tom_forsyth) has an excellent blog post about different technical aspects of the optics in VR systems (http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BVR%20op...)
Jesse Schell's article "Making Great VR: Six Lessons Learned From I Expect You To Die" is a little old but still excellent (https://www.gamasutra.com/blogs/JesseSchell/20150626/247113/)
This is an excellent article on the importance of audio in immersive applications (https://arinsider.co/2019/10/02/sound-ars-unsung-modality/)
This is an interesting video made by a man who spent a whole week in VR, eating, sleeping, working, and living with a VR headset on 24/7 (https://www.youtube.com/watch?v=BGRY14znFxY)
How did you find yourself in this field?
That was right about when Google Cardboard hit. That was 2014? I saw it during a Google I/O livestream and just so happened to have a few lenses left over from some still-photography experiments involving lasers and... never mind, another thing that went nowhere, I just had some lenses around. I quit watching the Google I/O stream and immediately hacked together a new cardboard box viewer with the lenses.
Saw Versailles, which I have never been to, but I have been to Linderhof Palace in Bavaria, which is modelled after Versailles. Saw the Galapagos, where my wife and I had just spent our honeymoon about a year or so before. I immediately saw the experience was closer on the spectrum to really being somewhere than it was to seeing it on TV.
Then I saw RiftSketch (https://www.youtube.com/watch?v=db-7J5OaSag). It blew my mind. Started chatting with people. Brian Pieris talked about wishing he could get syntax highlighting into the app, and complained about how he had to use CSS 3D transforms to position the box on top of his WebGL view. At the time, I recognized how early everything was and how primitive the tools were. So I thought, if I could make developer-oriented tools that made making VR easier, I could make something out of that. I thought Brian could be my first user. So I made a RiftSketch clone, added it into my WebVR framework, and that became what was eventually called Primrose.
I got a small amount of internet fame out of Primrose. People started recognizing me at conferences. A "startup" hired me to be their head of VR. That turned out to be a different kind of hell. It crashed and burned after about a year. We had a kid and another one due any day, and we were completely out of money. I thought the VR dream was finally over. I started applying to jobs back in web and DB work.
In the meantime, I had just started working in Unity at the startup, I had all this time on my hands, and Unity was offering full access to their learning materials for the first month for new customers. The material was clearly designed to be worked through in 3 months, but I churned through all of it in one. Then one of the folks I had hired at the startup to work on VR stuff made a connection for me at a gigantic, multinational consultancy, for their Unity dev team. He thought I was pretty good even before I learned Unity properly, so I breezed through the interview. It was also the most money I'd ever been paid. Seemed like a huge win!
Then the reality of giganto-consultingware companies set in. You've not seen office politics until you've worked in an organization that runs under a partnership model. It wasn't exactly the worst job experience I've ever had, but it definitely ranks up there. But it basically got me a ton of Unity development experience. They mismanaged the hell out of the team and eventually had to lay off half of everyone. I also ended up meeting a few folks along the way who got me an interview at my current place.
I'm now the head of VR at a foreign language instruction company. Our main client is the military's foreign language school, the very same place where my parents met and then shortly thereafter had me. Things are going well. The company is stable. It's on its second owner, who brought it back from bankruptcy in the early 2000s. I report directly to the president of the company. He thinks the world of me and lets me do whatever I think is best. We have weekly meetings where we geek out about video games and VR. I just got to hire my first employee to work on the project with me. On a weekly, sometimes even daily basis, the company does something that proves "our employees are our biggest asset" isn't just a platitude for them. It's amazing. I've never worked anywhere this nice before.
Thank you for sharing your story! It was quite the journey, and it's uplifting that you were eventually able to connect the dots of all the things you've worked on out of passion into something you're now sought after for!
That's a funny wording, because it was actually through the process of "finding myself" that I ended up in immersive software. It simultaneously feels like it came out of nowhere, but also that my entire life prepared me for it.
I'd always had a fascination with 3D imaging as a kid, both stereo graphics and holographics. I don't know what was going on in the late 90s, but there were a lot of cyan-magenta anaglyph comic books at the time, and a lot of Marvel comics were doing holographic overlays for special edition covers. I read every book I could get out of the local library on optics and holographics.
I had a lot of trouble with my relationship to work. My first job wanted to move me out to the middle of nowhere, gave me very little notice on the decision. It was terrible, so I also quit without notice. My next job was OK, but the work was mind-numbingly boring and I wanted to move away from home. Found a job near Philadelphia, near some friends. That job was at a company that was basically a cult, though there was one highlight where code I wrote helped catch a criminal by proving there was no way they were delivering the products they said they were because the timestamps between stops were too short for the distances they had to travel. My next job, for a company that makes used car database websites (so already soul-sucking to start), had all the anti-corporate, pro-developer surface details I thought were the problem with my previous jobs. I hated it even more, the constant Nerf gun battles and the lack of focus, at the same time as being constantly brow-beaten for not getting anything done in that environment. I quit after 3 months.
I thought the problem was consulting. I wanted to get out of consulting and into product development. A friend got me an interview at a company working in home-automation devices. It was a huge paycut, but I was supposedly "getting in on the ground floor" of a "hot startup". Turned out the "small startup" was actually "a poorly managed company that couldn't find a market fit and did some shady deals to rebrand every 3 years to escape their reputation". I ended up right back in web and database consulting work there. The systems were terrible. Most of my work was manual data entry and fixing stupid timing bugs in the device configuration tool. I fixed everything, made tools to automate the data entry, made simulators of devices to speed up testing of the configuration tools, fixed all the stupid code that made bad assumptions that led to comms race conditions. I got fired because I refused to work overtime. I refused to work overtime (unpaid, mind you!) because there was no overtime work to do. I got told that I needed to log 60 hours a week no matter what. I told my boss that if she wanted to lie about the work I was doing, she could fudge the invoices to the client herself and leave me out of it.
I thought maybe I hated programming. I eventually learned that I didn't hate programming, I just hated the people I was working for. I decided I was not going to look for a "job" and I was going to stick to being independent for as long as possible. I had joined a hackerspace when I first moved to Philly, started using it as my office. I tried starting a t-shirt printing business. I tried selling photography prints. I built a couple of museum installations on contract, these Arduino-based things. I tried making music-teaching toys. I did pyrotechnics on an indie film. It was all over the place.
One of those things I tried to build at the hackerspace was basically Google Cardboard, 4 years before it was a thing. When smartphones first came out and started to show promise with 3D rendering and high performance motion sensing, I built my first "phone in a cardboard box" stereo viewer. It... worked... for values of "worked" that include massive headaches. I didn't have lenses, the box was relatively long to be able to focus and fuse the images, but seeing stereo animation of my own making for the first time was amazing. This was around 2010 or so.
At the same time, I also started playing with different camera-oriented apps that took motion sensing into account. Built a few different apps that could help you take stereoscopic photo pairs and render them in side-by-side or color anaglyph. Tried to build a turn-by-turn direction app and a point-of-interest discovering app using Google Maps data. That sort of stuff. Discovered sensor data and rendering were still not quite good enough to make a good experience.
Eventually, a friend from the hackerspace got me an interview at a company that makes tilt sensors. I thought the job sounded boring, but I needed the money. They hired me on a 3-month contract-to-hire, and when the 3 months were up, I asked them if they would be willing to keep me as a freelancer. They agreed. I spent 3 years there, the longest I had spent anywhere up to that point. I met a girl. She lived in DC, so I started pushing to let me work remote so I could travel to see her. They agreed. I eventually moved to DC and became 100% remote, which they were cool with. I even got to where I had hired a few people part-time on my own to help out with the work. They thought it was great. I actually loved it, for a time. Someone in the company got a bug up their ass about me having my own employees. I guess someone parallel to the guy I reported to got the owner convinced that it was a "security risk" or something. I don't know what, all I know is that this other guy took over and I got slowly squeezed until there was only enough work for myself. I was right back into having a bad boss again, so I was on the lookout for an exit.
I think one of the highest-value things that is in progress right now is the work Apple is doing to combine feature detection and location. They get your rough location with GPS, stream a feature-point cloud to your device, then figure out your precise location based on the camera view. Just having a reliable, to the centimeter, position and orientation of a user in the full, real world is going to be a huge enabling factor for AR applications.
Without it, the best thing you could do is turn-by-turn directions, because GPS precision and drift prevent you from doing anything in close proximity. But if it's not in close proximity, then you're really limited in the detail you can provide people looking through their phones.
With it, you can start to do a lot more stuff. Art installations, public information kiosks, event-based things. And it's out in the world where other people will see people doing it. Exactly like how people got interested in Pokemon Go because they saw other people in the street playing it.
I think we're still a long way off from useful object recognition. I've seen a lot of concepts around brands wanting their appliances detected and giving users instruction manuals, repair manuals, or value-added services. The problem is, state-of-the-art object recognition can fairly reliably tell you "I see a refrigerator", but it can't tell you "this is a Whirlpool refrigerator", let alone the specific model. So that kind of goes back to the funding problem. Whirlpool, GE, Frigidaire, they all want AR, but not if it's going to work with other brands. Same with basically every other product on the market. So object recognition is kind of at the same point that location tracking is with regards to detail. It's going to either take an unrelated company going out on a limb to support multiple brands without the brands' involvement, or it's going to take an unlikely development in object recognition that can reliably detect brands and models.
Things that were just coming along when I stopped doing full-time AR work (hopefully they've gotten better in the last year):
Microsoft and IBM were doing a lot of great work on improving object detection, plus providing it as a service to be used in applications. That's another problem: most of the stuff you see is so research-grade that it's still years away from being productized, if it doesn't get canned first. But at least some of the high-level object recognition stuff is productized right now. It's slow, though, so you have to be smart in your UX about how you manage the queries. A neural network can tell you that it saw a cat in a picture you sent it a full second ago, but it can't tell you that it sees a cat right now in your video stream. But if you can work with detecting things in still images, it could be usable.
PTC was doing some much simpler, but still very interesting, work with their Vuforia system, pivoting to value-added services on top of native AR subsystems rather than just providing the AR subsystem. Vuforia was great 3-5 years ago when we didn't have any AR subsystem, but Google and Apple have basically pulled the rug out from under them. Image-target tracking is both terrible and great. It's terrible because it's not very flexible. But it's great because it tells you something contextual about the user: they have my image target in view. They also have a live-annotation system for spatial drawing in 3D that's really interesting for teleconferencing, except Vuforia is positioning it for industrial repair. Again, "who is paying for this" gets in the way.
If appliance brands could get their collective heads out of their asses and accept that maybe a new paradigm requires a little flex and adaptation on their own part, I could see new products being developed that are visually easier to detect.
I also think all the work going into speech recognition and semantic understanding of text is super important for AR and VR. It's not just about making user interfaces for people to use these systems hands-free (though that's important, too, because there are a lot of scenarios where a user might not have free use of their hands). Having reliable, contextual information about what people are talking about in, say, a meeting, could enable virtual assistant technologies that aren't dumpster fires.
Similarly, reliable facial recognition would be a huge help for AR systems, in a lot of very obvious ways.
But facial recognition leads me to the unfortunate thought that we're going to run into some intractable problems in machine learning that are going to prevent the full, perfect future of AR. Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point, to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.
Bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference lies in four areas: how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results can be.
Things like product suggestions or voice dictation systems work because, when we get a bad result, we can easily recognize and correct for it, often by just retrying. And part of why we can tell that there is a problem is that the results link back to some notion of an objective reality. In contrast, a NN that dreams up photos of dogs melting into a landscape has no impact on reality.
But facial recognition runs into so many problems here. If you're trying to detect a particular person's face, they don't have another face you can try to see if you get better results. If you don't know who the person is that you're trying to detect (e.g. identifying a person from a photo), then you don't even know when the results are wrong so that you could try for a different answer. And because you're bringing these results back to an action in objective reality, the consequences of wrong answers have real impact on real people (e.g. identifying suspects from security camera photos).
So yeah, I'm not too hopeful for AR. The tech is cool. I certainly want to be able to have good AR tech. But some of the farther afield ideas on how the tech might enhance semantic understanding of the world... I think a lot of it is a pipe-dream. I suspect the actually achievable maximum is strictly limited to entertainment and productivity.
VR is better because you can fabricate entire worlds and spaces for any task. Instantly useful. And if you really needed to combine the real world with generated content, you could theoretically do it by just overlaying content onto a video feed in VR.
I played with the Hololens 2 last year and I certainly can see the benefits. Especially when communicating with someone else.
However, for $6000 that device is not going to happen for me ;) AR needs an "Oculus Quest" approach to be useful to consumers first.
I do love VR as well, but I do think there is a big use case for AR once it becomes good and affordable at the same time.
“Finally, holographic lenses can be constructed so that the lens profiles can be independently controlled for each of the three color primaries, giving more degrees of freedom than refractive designs. When used with the requisite laser illumination, the displays also have a very large color gamut.”
They have full colour working, and apparently well, in the benchtop prototype.
There’s discussion towards the end as to options for implementing full colour in the HMD prototype.
Digilens is a company that's been around for 15 years doing all kinds of eyewear stuff.
I suppose you could incorporate both.
Facebook definitely takes a platform approach with Oculus. The Oculus Quest is a great device at a great price. It has done the most to make VR a mainstream accessible thing. And yet, because of Facebook's behavior and what we know about Facebook as a company in general, we really, really must not let it become the dominant system. Hopefully, other companies will improve their user experiences to match (because Oculus certainly doesn't have a hardware advantage; everyone is pretty much running the exact same hardware profile with only minor differences).
TBH, this sort of thing happens. Look at GOG vs. Steam vs. <shall not be named>. This strategy doesn't always pay off, though: consider the Nintendo console lineup (curated, small, but higher quality) compared to Sony's PlayStation. In the end Sony won that one, and my own belief is that the reason is that less curation let through some indie (not developed by Sony) gems.
And you're still stuck using Oculus' APIs. There are no open-source APIs for accessing the device sensor data or rendering. Regular Android apps (minus Google Play Services) can run on the devices as tiny windows, and then you can get some super-hacky input as touch events on the app view, but it's really not usable. I think it even forces software rendering, because I've seen some really bad performance out of it (I found out about it after uploading a misconfigured app during my own development). It's basically there as a fallback for Android's default Settings view, which you use to configure developer settings like enabling USB debugging.