Holographic optics for thin and lightweight virtual reality (fb.com)
344 points by onurcel on June 30, 2020 | 104 comments

For analysis of new AR/VR technology I highly recommend Karl Guttag's blog: https://www.kguttag.com/

He's an engineer who understands the limitations of physics, especially when it comes to optics and light. He is very good at parsing marketing hype vs. reality. (He called BS on Magic Leap years before it launched.)

His posts are exceptionally well researched and explained, even if you don't have a background in physics or optics.

He recently did an analysis of the Apple Glass leaks. I expect he'll post his thoughts on this new technology from FB soon.

For someone who claims to be so concerned about display quality, he makes some really, really shitty choices in font colors when annotating images.

I've been reading Karl's takedowns for a few years now and, while there is never anything technically wrong with what he says, it's also just not as important as he thinks it is.

Yes, waveguide optics aren't the best possible visual experience one could have. But does that matter all that much? I think Karl's terrible diagrams point out why he makes this mistake. He doesn't understand that design is the much more important consideration here.

The technical limitations of the display technologies that we have today are not impediments to product development. Good software can be designed to work around these issues. Can't render black? Don't design around dark themes. Have a narrow field of view? Don't require people to try to keep mental track of things around them by vision alone.

Karl looks at the HoloLens, sees the waveguides, and misses all the amazing operating system features. Speech recognition, spatialized audio, a fully spatialized desktop metaphor. These things are important and they go a long way towards the usability of the system.

And that was why the Magic Leap failed. Not because the displays were crap, but because the entire system was crap. It was basically "just a" stock Android system with super flaky WiFi and no systems view on delivering a unified product. The entire product was fundamentally mismanaged. The hardware was slightly better than the first HoloLens, but you were far more limited in making good software for the Magic Leap than you were for the HoloLens.

I second Karl's blog. I've been following the ML journey through his eyes from the beginning, and his insights couldn't have been more accurate, despite all the slack Magic Leap fanboys gave him.


Nitpick: giving slack usually means being less strict, perhaps you meant flak as in excessive and abusive criticism.

I'd be careful about statements about 'limitations of physics'. Yes, there are actual physical limitations, but we frequently have beliefs about the limitations of physics that are not actually limitations of physics.

To strengthen your argument, here's an example where a believed limitation was circumvented (and resulted in a Nobel Prize): see https://en.wikipedia.org/wiki/STED_microscopy and the Abbe limit.

I suppose the parent was downvoted because some thought he was questioning the "hard" laws of physics. But perhaps he could correct this by traveling back in time and changing his wording ;-)

Like what? This is a weaselly statement that makes no actual claim. Things like thermodynamics, the speed of light, etc. are so fundamental to how we build machines (that work) that most of the limitations we accept as limitations are highly unlikely to be wrong.

Fundamental skepticism is nice in a thought experiment, but what value are you trying to add with such a vague statement?

There's nothing on this website about OP's news.

Noted in the comment you are replying to:

> I expect he'll post his thoughts on this new technology from FB soon.

I'll say this from a consumer/product point of view and not a technical or engineering one, but WOW! Assuming they could pull off a finished, high-performance product (big if), this could be the form factor that brings VR to dominance. The silly-looking, awkward-to-wear headset is a big hindrance to adoption, because it absolutely matters whether you look silly or cool when it comes to consumer tech.

The fact that these glasses are clearly opaque reminds me of Marty's reaction when he sees Doc's brown futuristic glasses in "Back to the Future". They do look a bit silly, but at the same time they look useful, and not heavy and encumbering like most VR headsets.

There's that word again. "Heavy." Why are things so heavy in the future? Is there a problem with the Earth's gravitational pull?

To be fair, they wouldn't look weird if you just held a cane while wearing them. They could probably even fit some electronics into a cane form factor.

Would make for some excellent ad hoc light saber games too!

Batteries would fit perfectly in the cane, and possibly acoustic proximity sensors like those on cars.

I'd argue motion sickness is a bigger existential threat to VR than appearance, although weight is definitely a factor.

I just sold my Oculus Quest after a couple of months. For me the down sides were:

1. Weight and overall comfort. Pressure on the forehead; sweating when you're playing physical games in a warm room, which can fog the lenses; even the rubbery-plasticky smell of it. It's just not fun to use physically.

2. Resolution/graphics. I just wish it had higher resolution so I wouldn't see the pixels. That would make everything much more immersive.

3. Usefulness/content. It's cool but it gets old pretty fast. I just found myself barely playing after a couple of weeks of the initial excitement.

Motion sickness wasn't a problem for most games, which was a surprise since I get game sickness from FP games on a normal screen and can't play them more than a couple of minutes.

So I guess 1 and 2 can be solved by this technology.

For most people, motion sickness goes away fairly quickly as they "get their VR legs" and get used to VR.

Dude looks like an emoji with those glasses on.

I've been curious about something in AR for a while, and I can't seem to find the right terms to query this. Why can't ambient light be used for illumination, and an LCD + polarizer to darken each pixel? If you can approximate the illumination of each point of light going through the LCD (for example, with an outward-facing camera above each lens capturing a full-color image and interpolating it down to where it would be overlayed on top of the LCD) you would know how much you would need to darken each sub-pixel's color to compensate for the light coming in. Then it would be super low-power since there would be no backlight. Also, if you wanted "Transparent" mode you could make it fairly clear. I'm sure there's some reason this is untenable but I'm not really sure what it is - perhaps the inability to be accurate with the compensation?
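The compensation idea described above can be sketched as a simple subtractive calculation. Here is a toy numpy illustration (the function name and values are made up for illustration): since the panel can only attenuate incoming light, any target color brighter than the ambient light at that pixel is unreachable, which limits contrast.

```python
import numpy as np

def subtractive_lcd_attenuation(ambient, target):
    """Per-subpixel LCD transmission such that ambient * transmission ~= target.

    Both inputs are RGB values in [0, 1]. A subtractive panel can only
    remove light, so any target brighter than the ambient light at that
    subpixel saturates at full transmission (1.0).
    """
    ambient = np.asarray(ambient, dtype=float)
    target = np.asarray(target, dtype=float)
    # Avoid division by zero where the scene is already black.
    transmission = np.divide(target, ambient,
                             out=np.ones_like(target),
                             where=ambient > 0)
    return np.clip(transmission, 0.0, 1.0)

# A bright ambient pixel can be dimmed down to the target color...
print(subtractive_lcd_attenuation([0.8, 0.8, 0.8], [0.4, 0.2, 0.0]))
# ...but a dark ambient pixel cannot be brightened: transmission saturates at 1.
print(subtractive_lcd_attenuation([0.1, 0.1, 0.1], [0.5, 0.5, 0.5]))
```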

If you can only subtract light the contrast of what you're viewing against the background will always be poor, almost by definition. Also doesn't handle color.

Why could it not handle color if it had R/G/B subpixels and adapted the intensity of each pixel based on the color of the light passing through that area of the screen?

The intensity of light is quite high outside, could an LCD not darken it enough to have sufficient contrast if you were OK with it only working outside?

Just do a quick search for transparent LCDs, I think it will answer your question. (Not enough light gets through, the result looks more like high resolution stained glass than a Retina display)

LCDs can't darken enough, aren't that transparent in the first place, and also can't focus without microlenses, and you won't be able to look through microlenses. If you still think it's viable, you can prototype the optics you described using old Nokia phone LCDs or some later Game Boy models that have easily removable backlights.

Are you being specific about the word darken? Because LCDs are designed to either allow or block light, so technically an LCD can "darken" incoming light. But I think inconsistent lighting from the environment would be the biggest challenge.

The best example of a transparent display I've seen in production is: https://glas.johnsoncontrols.com/

But the material on that seems darker and I believe they use an edge light to provide higher contrast.

I think the distance from the eye would also be difficult if you aren't doing some type of projection, like what is used in the article.

Darken as in response to “can’t LCD darken enough” in parent.

Anyway, any LCD can be a transparent LCD if you peel off the backlight layers, if you want to experiment. Search for "DIY transparent LCD" or "LCD side panel casemod".

How could you determine the color of the light passing through that area of the screen?

I don't know too much about optics and VR/AR, but can this technology be adapted for AR applications (a la Google/Apple Glass)? Personally, I think AR applications are much more interesting than VR (especially in the short term)...

That is what HoloLens 2, Magic Leap, and some other products are. These are "cool" but have not set the world on fire in terms of product-market fit.

If you are building an AR system there is always an awkward balance between "letting the environment shine through" and "having projected items be bright enough to be visible". If you put something black in front of the holograms at least now you have just one problem instead of two problems.

So you think that instead of something black, if something clear was kept in front, it would technically be an AR system? Of course, then the question is how to make sure the projected items are bright enough with the environment shining through.

Yep. Also that the grooves in the hologram don't screw up the light that passes in through the front. For instance you might get "lens flare" effects which could get obnoxious.

It is a lot of details to work out, which is why current AR headsets are still at the bridesmaid and not the bride phase.

Oh okay very interesting. I didn't realize that AR headsets were significantly behind compared to VR headsets. But the question is how behind is it, because clearly Apple is able to somehow pull it off, right?

Well, to "pull it off" involves multiple levels of development.

VR headsets are workable, but somewhat expensive, and content is lacking.

AR headsets are very expensive, have poor image quality, and even less content. It seems every defense contractor and electronics conglomerate got patents for holographic waveguides in the 1990s when the F-35 was under development; that headset is not so bad, but it costs $250,000 and the original version was heavy enough to break your neck when the ejection seat fires.

Apple may be working on an AR headset, it may be a big hit in the end, but I will believe in product-market fit when I see it.

I suspect there's a fundamental reason the backstop is opaque. Keep in mind, to scale the size down, they may have had to cram some stuff directly behind the lenses.

I suspect they've tried to augment the device for AR already since many of the forward looking trends focus on AR.

True... It could also be likely that they haven't published the AR version of the technology, if it's possible...

> it would technically be an AR system

I think the head tracking is not as forgiving for an AR system.

For sure, but it's technically an AR system, right? ;)

Good AR is much harder than VR. So it makes sense to tackle VR first.

Thing is we already have fairly good VR but acceptable AR is yet to be seen.

Are there any AR/VR technologies addressing software developers? I would very much like to replace my shitty monitor with AR/VR glasses for development.

I know this may not seem like the "intended use case", but the developer experience could use some innovation for a change. It's also one way to bring these technologies closer to developers.

Not for coding. The resolution just isn't there yet.

Roughly, per-eye resolution is in the same ballpark as HD displays, but stretched over a 90+ degree field of view. Fonts need to be very large to be legible. You can create a theater sized virtual monitor, but it's just taxing to use. Aliasing artifacts make it worse.

At least for text-focused tasks, I'd take virtually any display built in the past 40 years over a modern VR headset.
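To put rough numbers on that claim (a back-of-envelope sketch; the specific resolutions, field of view, and viewing distance are illustrative assumptions, not measurements):

```python
# Angular resolution in pixels per degree (PPD), a rough proxy for text legibility.
def pixels_per_degree(horizontal_pixels, fov_degrees):
    return horizontal_pixels / fov_degrees

# A Quest-class headset: roughly 1440 horizontal pixels per eye over ~90 degrees.
headset_ppd = pixels_per_degree(1440, 90)

# A 24-inch 1080p monitor viewed from ~60 cm subtends roughly 45 degrees.
monitor_ppd = pixels_per_degree(1920, 45)

# The monitor packs well over twice the pixels into each degree of view,
# which is why small fonts stay readable on it but not in the headset.
print(headset_ppd, monitor_ppd)
```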

> Not for coding. The resolution just isn't there yet.

Spoken like someone who hasn't programmed in VR yet.

I don't think you'll be programming any operating systems in VR anytime soon, but there is still a lot of programming, specifically object scripting, that could be done in VR. A number of people--including myself--have built demos that prove out the concept.

One of the reasons is that text legibility is not strictly about display resolution. Motion within the view improves legibility significantly. Yes, the fonts render to very large pixels, but the specific pixels they render to are constantly changing, and your brain fuses those images over time. I'm not able to find the paper right now, but the US Navy did a study showing that pilot visual acuity improved in dynamic scenarios. The study performed a visual acuity test where pilots had to identify letters in view from within a flight simulator. One group had full use of the simulator in motion; the other was told the simulator's motion systems were broken, but still sat in it to perform the same test rendered on the same screen.

And as you said, larger fonts are easier to read. There is a lot of spatial resolution in VR that is not used very often. You're used to thinking about organizing your code on a 2D display, but you have an entire 3D environment around you. That environment could be a zoomable interface where code editors are linked to live objects. Use individual editors for individual code units. Organize them in a tree structure linked to the object. Tree organizers are a lot easier to navigate in 3D than on a 2D screen, especially if you eliminate window scrolling.

Window scrolling was created to account for the limited spatial resolution of 2D displays. But in the process, you lose spatial memory of where things are located. Things like windows and tabs and desktop workspaces were invented to try to wrangle that problem more, but they are not as good as a real, spatial filing system.

Think about it. You probably know exactly where your favorite book is on your bookshelf. You could probably walk over to it and pick it off the shelf without even opening your eyes. But there is very little chance you can pick any particular file you want in a 2D GUI system, specifically because of the absence of spatial relationships.

So a combination of "text legibility is not as bad as you think it is" and "code could be a lot more organized than it is on 2D displays" means that programming in VR is a lot better than you're making it out to be.

That's the intended use case for Arcan [0] iirc

[0] https://arcan-fe.com/

You can use Virtual Desktop or ImmersedVR. The resolution of the Quest is good enough to work on code.

I'm working on this: http://github.com/manugill/eye. It's not anywhere near ready for testing yet, though; I'm aiming for the end of this year for v0.1.

Hi there. I don't see any code in your repo. What were you considering for the text editor component? I wrote a text editor that renders to HTML5 canvas elements specifically for use as textures on WebGL meshes. I'm not working on VR programming environments anymore, but I recently did a complete refresh on all the code https://www.primrosevr.com/

There is code if you go to packages/webxr. I only have code for the browser (functional but needs perf improvements) and the terminal (should be done in a few weeks).

Oh dude, thank you, I was aware of your project 6ish months ago (there's only like 3-4 canvas text editors so I try to follow them all, most are abandoned though) but what you're doing now fits perfectly with what I'm looking for. Thank you, I'll make sure to use it in v0.1. I couldn't get the webgl demo working though. https://github.com/capnmidnight/Primrose/blob/master/demo3d.... returns a 404.

But my long term goal is to actually 3d render the text completely using SDF or MSDF techniques.

Oh, I must have moved the code and forgot to update the README. That file is here https://github.com/capnmidnight/Primrose/blob/master/js/demo...

You can see it running, sans webxr here: https://www.primrosevr.com/demo3d.html

SDF came up as an issue. It's not straight forward: https://github.com/capnmidnight/Primrose/issues/162

Nice thanks (I previously opened it in Safari and assumed it was broken).

SDF would definitely be tricky, but I ideally want to render text in 3D space without having a mesh in between. I do realise optimising an approach like that would be extremely hard, but in terms of UX it'd be better than anything possible with meshes and would enable a few interesting possibilities.

Just going from your username, if you're actually available, I'd love to hire you for a bit of your help/expertise on my project. Like I'd like to render Primrose with a transparent/translucent background without the text being constrained inside a height limited viewport (render all lines at once) and implement scrolling on the mesh itself (also improve scrolling performance), and just support all keyboard + mouse shortcuts.

My email is me@manuis.in, please message me there if you're interested.

LOL, I created my account 11 years ago. I have a job I really like now.

Thought so. But even so, I'd love to get advice from you regarding the points I raised above.

My advice is to save your eyes and put the money you’d spend on VR over the next decade and buy a very nice monitor now instead. Every aspect of the experience will be superior, and if you’re a developer the payback time will be relatively short.

If anyone here is in the know with the AR/VR scene, could they shed some light on why AR hasn't taken off yet?

VR is cool, but it's seems like such a more useful concept to me to have actual reality with enhanced information.

Imagine wearing glasses and looking at a plate of food then having it estimate + track calories and macros, or paint GPS direction arrows on surfaces realtime, or put people's names you've met before above their head so you can avoid awkwardly admitting you've forgotten it.

Is it technical limitations, or cost?


Edit: Many people replied with really informative answers to this already. I genuinely appreciate your time and insight, thank you :)

It's in progress, currently limited by both hardware and cost.

There used to be a good blog post from Michael Abrash when he was at Valve that also talked about two main issues. Latency, and drawing black effectively.

Latency is critical since low latency is a requirement for things looking real (since humans have fast visual systems), but that's ultimately a hardware problem that should get solved in time.

Drawing black is harder because AR uses ambient light and putting a black line on the screen in front of your face doesn't work for focus.

Unfortunately it looks like Valve killed their blog, but the way back machine has it: https://web.archive.org/web/20200503055607/http://blogs.valv...
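The latency point is easy to quantify: motion-to-photon delay turns head motion directly into angular registration error for world-locked content. A back-of-envelope sketch (the head speed and latency figures are illustrative assumptions):

```python
# Angular error of world-locked content caused by motion-to-photon latency.
def angular_error_degrees(head_speed_dps, latency_ms):
    # Error = how far the head rotates before the display catches up.
    return head_speed_dps * latency_ms / 1000.0

# A brisk head turn is easily 100 deg/s; human vision notices registration
# errors well under a degree, which is why very low latency is the usual target.
for latency_ms in (5, 20, 50):
    print(f"{latency_ms} ms -> {angular_error_degrees(100, latency_ms)} deg of error")
```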

My bet is that Apple will pull it off Apple watch style with front facing Lidar: https://www.youtube.com/watch?v=r5J_6oMMG7Y

Probably at first they will be mostly for notifications and interacting with apps in a window in your visual field, getting most of the power from the phone. Things like looking at food for calories and names, etc. will come later when a front facing camera is acceptable and there's existing UI in place.

I think this is probably the next platform after mobile devices, looking at little glass displays is a lot worse than having a UI in your visual field (if it can be done well).

[Edit]: A more recent blog post from Abrash on this topic https://www.oculus.com/blog/inventing-the-future/

Note you can "paint" black by putting an LCD at a focal plane, then you can black out a light source that is in focus.

Of course you need a lot of space to route incoming light through that focal plane and then to your eyes.

Does AR require ambient light? Couldn't you use a VR approach where everything is drawn in, and reality is derived from cameras?

I’d argue that’s not really AR then (though no need to dispute definitions [0]), the blog post talks about that too - whatever that is, it’s not really a satisfying approach and wouldn’t be the next platform.

You want to be able to use the full power of human vision when looking at the world, not literally be looking at some subset in a display right next to your face all the time.

[0]: https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-...

TBH though, your link seems to describe situations where people leverage the confusion to win arguments. But point taken (thanks for the link, BTW; interesting article, and I added my own comment).

It seems it might be a useful distinction though. I always took AR to be a distinction of interface: reality plus augmentation. But it might also be a tech type: augmenting normal vision versus "virtual" AR, or AR in VR.

That said, I don't understand your comment about "using the full power of human vision"; if VR headsets improve to the point that VR environments are as detailed (wrt human perception) as reality, then virtualised AR shouldn't differ either.

TBH, my own concern is how hard VR is to use while it blocks you from your surroundings: noticing when people approach, handling headset/controllers/keyboard, etc. I can't replace my monitors with VR because in VR I cannot see my keyboard or my coffee mug, or notice when people approach so that I don't jump every time someone taps my shoulder. VR needs to be partially augmented with my true surroundings just to operate in a normal space.

Presently, doing the AR processing to extract features from a space uses either cameras or some kind of LIDAR or time-of-flight scheme to get a point cloud of the objects around you. The data from a single ~1-2 Mpixel camera (say a 1280x800 RGB camera) at ~30 fps is only processable on an SoC or an ASIC. If you choose to process the raw image locally, then you need feature detection, extraction, etc. algorithms that can run locally on that processor.

For AR, a more realistic approach to a true all-in-one solution that sits on your face is to create an ASIC that simply gobbles up raw camera data, compresses it with H.265, then sends it over a low-latency link to the phone sitting in your pocket. But when you consider that the chip also needs to drive one or two displays, you realize you must also have a way to receive video data back (so potentially an H.265 decoder as well), a way to display that video stream, and a way to do some fix-up on it (at least some kind of GPU). Now you need a relatively capable SoC running on the AR device on your head, so you need a cable, battery pack, etc., even if the phone in your pocket is still the main thing running the show.

So the combination of needing to use cameras, drive displays, and receive data wirelessly means you have some fixed costs, power needs, and a limited set of SoCs to choose from at present. It's doable; it's just early days for the hardware to support it.
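The bandwidth arithmetic behind that comment is easy to check (a rough sketch; the 100:1 compression ratio is a loose assumption for this kind of content):

```python
# Raw data rate of a 1280x800 RGB camera at 30 fps.
width, height, fps = 1280, 800, 30
bytes_per_pixel = 3  # 8-bit RGB

raw_bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"raw: {raw_bits_per_second / 1e6:.0f} Mbit/s")  # ~737 Mbit/s

# H.265 at a (loosely assumed) ~100:1 ratio brings the stream down to
# something a low-latency radio link to a phone can plausibly carry.
compressed_bits_per_second = raw_bits_per_second / 100
print(f"compressed: {compressed_bits_per_second / 1e6:.1f} Mbit/s")
```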

> If anyone here is in the know with the AR/VR scene, could they shed some light on why AR hasn't taken off yet?

Because demand is too low, and prior attempts at hyping it up with marketing ended like google glass?

I work at an engineering consultancy that has done a dozen VR/AR toys over the last 5 years.

Some quite big brands use our tech and engineering, though NDAs, NDAs, NDAs...

There is no magic trick behind any product on the market. The physics and optics of AR/VR glasses are very simple, high-school-level simple. Too many companies just want to add "smoke and mirrors" into the optical scheme...

Making AR/VR goggles power-efficient and lightweight enough for daily use is possible even with current-day tech. It's not much of a secret now that there is an IP blocker on a critical technology, owned by Beijing University, that shuts everything down.

Microsoft, Facebook, Apple experimenting with lasers now is all about them trying to work around that blocker.

Any chance on expanding on what the critical blocking tech is?

Very interesting!

> a solid state microled chip with microlenses

For someone with no knowledge about AR hardware, how is this different (from the perspective of the end user) than a traditional hololens style display?

That display type can be made bright enough to be visible outdoors while still keeping power consumption at sane levels.

The biggest problem of all waveguide systems is that they are freaking inefficient, with optics consuming 50-90%+ of all light.

And it is the same problem with pretty much all complex optical systems in AR/VR glasses.

This is why I am a proponent of using mirror optics in this application.
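That efficiency gap translates directly into panel brightness requirements. A rough sketch (the target luminance and the loss figures are illustrative assumptions, not measured values):

```python
# Panel luminance needed to reach a target at-eye luminance through lossy optics.
def required_panel_nits(target_nits_at_eye, optical_efficiency):
    return target_nits_at_eye / optical_efficiency

# Outdoor-visible AR is often said to need thousands of nits at the eye.
target = 3000.0

# Compare a lower-loss mirror path with a lossy waveguide path.
for name, efficiency in [("mirror optics, ~50% loss", 0.5),
                         ("waveguide, ~90% loss", 0.1)]:
    print(f"{name}: panel must emit {required_panel_nits(target, efficiency):,.0f} nits")
```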

I've wondered why Google Glass was canned, and whether there were worries about liability for eyesight problems. I can imagine poorly matched AR could damage your eyes over time, in the way I suspect monitors currently do.

Lots of reasons: no big company has pushed it yet; battery constraints; processing constraints; space constraints (no one wants to look like a cyborg). The software to make it useful is itself a large undertaking. Making AR react to its surroundings requires really good machine perception, which requires good sensors, large batteries, and powerful processors.

It's coming but everyone is waiting for the tech to be ready.

Microsoft has pushed it, but it costs too much for Pokemon Go.

Everyone remembers the "glasshole" problem with Google Glass.

It is not taking off in the consumer space because of cost. HoloLens 2 is the best headset available today that has a relevant software platform, and it isn't even sold to end customers anymore, just to enterprises.

What keeps costs up are the technical limitations. Microsoft and Magic Leap each invested more than a billion to solve the tech challenges, but the truth is that the displays are not good enough. The framerate is too low (compare with what would be necessary for VR), the FoV is still rather limiting, colors and blacks are still too faint for many environmental lighting conditions, room scanning is still too inaccurate, and battery life is too short.

This is not to say there hasn't been great progress. HoloLens 2 solves the comfort problem and is a nice step forward in resolution and FoV. They just messed up the image quality (color banding/rainbows are a big issue).

Lots of opinions here, but it's simpler than most people are saying. The display technology doesn't exist yet. It can't succeed in the mass market until it looks like regular non-dorky eyeglasses and has specs exceeding current VR displays plus transparency and sunlight-level brightness. To a first approximation nothing else matters.

I think the deluge of information would get old really quickly. Two-dimensional flashing lights really pale in comparison to the real world. And I think having that in your face all the time diminishes the experience of the real world instead of augmenting it.

Just think of how it feels to look at your phone halfway through a long hike. I always feel it lessens the magic, and introduces low grade anxiety.

Three reasons:

The hardware is significantly harder to get right. VR devices work well specifically because they don't care about your environment. AR devices, to be any good, need to have a semantic understanding of your environment. That's very hard to do, especially on the power and compute budgets that mobile devices allow. And an AR device that isn't mobile is a stupid AR device.

The software is significantly harder to get right. It's a lot easier to model a static scene and make some physics-based interactions in it than it is to try to figure out how to make overlays that react to a real environment.

But I think, much more importantly, the people producing most software in the immersive software space just don't care about your immediate environment. They care a lot more about giving you a canned experience. It's hard to find funding for anything that isn't some sort of media consumption. This is true across both VR and AR. In VR it's ok, because there is no environmental context to exploit anyway. But on AR devices, you just end up with a bad VR app: all the hardware limitations of a mobile AR device with none of the differentiating features. So because you don't get software that cares about your environment, you don't get good user experiences on the hardware.

Open up your iPad or HoloLens or Magic Leap app stores and take a survey of the apps that are there. How many of them have any understanding of your environment? There are a lot that don't even take into account the "room mesh", the solid surfaces that the device can see, to say nothing of what those solid surfaces represent! I'd estimate it's upwards of 50% on AR headsets and maybe 30% on iPad that do absolutely nothing with any surfaces beyond asking you to find a flat space on the floor. That's just crappy VR. As for the ones that attempt to understand what is in your room? Vanishingly few.

You can make a pretty good consulting career out of making what is largely just a PowerPoint presentation in 3D: a collection of canned elements where the user can click buttons to get scene transitions and animations, all with a directed narrative that is trying to tell you something. Advertisers want it. Media companies want it. A lot of big-industry companies completely unrelated to media want it just to show off at conferences to "prove" they are "forward looking".

And you'll get a lot of those clients asking--even demanding--you make that as an AR app, especially on iPads. But it sucks. It's just not anything about what's good in immersive experiences. It fits a little better in VR. It still sucks in VR. But it comes around from backwards priorities. These companies start from wanting VR/AR and work backwards to a use-case. And often they lack any sort of experience or even actual interest in immersive design. What they want is to just do a marketing piece. There are very few companies that start with a use case and then find out whether or not VR or AR is the right solution.

But that's where the bread-and-butter money is. And it sucks the air out of the room. It leaves the real, good, immersive experience development to people who are independently wealthy enough to do it on their own, or to hobbyists hacking it together in their spare time.

This was insightful - thanks!

Do you have any people or communities or reading materials you'd recommend to get more familiar with AR in industry?

I've been working on VR and AR stuff for about 8 years now. I started doing it full time about 4 years ago. A lot of what I know has been from just being in the zeitgeist as things develop, so unfortunately I don't have a "reading list" to get people started.

I have been trying to start a reading list, but it's woefully incomplete. I'll copy the content here (I don't have it publicly online yet). I'm primarily centered on VR, but I've also done a lot of AR work. I think good application design is very similar in both. Or rather, all the "bad" apps I talked about are similarly bad in their failure to take the immersiveness of the experience into account. But I do think there is a lot of overlap in terms of needing to move away from a traditional, compartmentalized application mindset and start thinking about immersive software as more akin to clothing: overlays on top of the world.

Kent Bye's "Voices of VR" Podcast (Site: https://voicesofvr.com/, Twitter: https://twitter.com/kentbye?s=20). Kent Bye has been a singular voice in the VR and AR community for the entirety of the contemporary VR movement. He brings a philosophy and social impact perspective. I think a lot of application design--immersive or otherwise--doesn't take human factors into account often enough.

Road to VR (Site: https://www.roadtovr.com/, Twitter: https://twitter.com/RtoVR?s=20) is also a very long-standing blog on all things AR and VR. They are more focused on gaming, but they also cover industry trends, new hardware, and companies.

John Palmer has a few blog posts covering spatial interfaces that are very insightful (https://darkblueheaven.com/) https://darkblueheaven.com/spatialinterfaces/ https://darkblueheaven.com/spatialsoftware/

This Medium post by Douglas Rushkoff talks about some of the problems with digital media, which I believe VR can help solve (https://medium.com/team-human/digital-media-still-isnt-very-...)

Jaron Lanier is one of the "fathers" of VR, part of the "first-wave" of VR work in the late-80s/early-90s. The Verge did an excellent interview with him (https://www.theverge.com/2017/12/8/16751596/jaron-lanier-daw...). You can get to his books from his website (http://www.jaronlanier.com/), which are all excellent treatises on humanity's relationship to technology.

Incidentally, here's Kent Bye interviewing Jaron Lanier (http://voicesofvr.com/600-jaron-laniers-journey-into-vr-dawn...)

Liv Erickson has an excellent blog on technology that covers a lot of issues in VR, accessibility, and machine learning (https://livierickson.com/blog/). In particular, "6 Questions to Ask Before Diving Into VR Development" is a great primer on VR concepts (https://livierickson.com/blog/6-questions-to-ask-before-divi...)

Tom Forsyth (Twitter: https://twitter.com/tom_forsyth) has an excellent blog post about different technical aspects of the optics in VR systems (http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BVR%20op...)

Jesse Schell's article "Making Great VR: Six Lessons Learned From I Expect You To Die" is a little old but still excellent (https://www.gamasutra.com/blogs/JesseSchell/20150626/247113/)

This is an excellent article on the importance of audio in immersive applications (https://arinsider.co/2019/10/02/sound-ars-unsung-modality/)

This is an interesting video made by a man who spent a whole week in VR, eating, sleeping, working, and living with a VR headset on 24/7 (https://www.youtube.com/watch?v=BGRY14znFxY)

This is fantastic - thank you.

How did you find yourself in this field?

I just tried to reply but HN complained that my comment was too long. Never seen that one before. I will need to reply in two parts.

Part 2:

That was right about when Google Cardboard hit. That was 2014? I saw it during a Google I/O livestream and just so happened to have a few lenses left over from some still-photography experiments involving lasers and... never mind, another thing that went nowhere, I just had some lenses around. I quit watching the Google I/O stream and immediately hacked together a new cardboard box viewer with the lenses.

Saw Versailles, which I have never been to, but I have been to Linderhof Palace in Bavaria, which is modelled after Versailles. Saw the Galapagos, where my wife and I had just spent our honeymoon about a year or so before. I immediately saw the experience was closer on the spectrum to really being somewhere than it was to seeing it on TV.

So I'm sitting there, this cardboard box in my hand, this powerful smartphone in my hand, thinking to myself "I have to do this, I have to make these things". But all I knew was C# and JavaScript. Unity was garbage at the time. It's still garbage, but also back then. I had tried my hand at native app development and just didn't have the patience for Xcode or Android Studio. So I started hacking with Three.js and JavaScript. I ended up making the first JavaScript API for VR, releasing the first version about a month before the first announcements of WebVR.

Then I saw RiftSketch (https://www.youtube.com/watch?v=db-7J5OaSag). It blew my mind. Started chatting with people. Brian Pieris talked about wishing he could get syntax highlighting into the app, and complained about how he had to use CSS 3D transforms to position the box on top of his WebGL view. At the time, I recognized how early everything was and how primitive the tools were. So I thought, if I could make developer-oriented tools that made making VR easier, I could make something out of that. I thought Brian could be my first user. So I made a RiftSketch clone, added it into my WebVR framework, and that became what was eventually called Primrose.

I got a small amount of internet fame out of Primrose. People started recognizing me at conferences. A "startup" hired me to be their head of VR. That turned out to be a different kind of hell. It crashed and burned after about a year. We had a kid and another one due any day, and we were completely out of money. I thought the VR dream was finally over. Started applying to jobs back in web and DB work.

In the meantime, I had just started working in Unity at the startup, I had all this time on my hands, and Unity was offering full access to their learning materials for the first month for new customers. It was clearly designed to go through in 3 months, but I churned through all of it in a month. Then one of the folks that I had hired on at the startup to work on VR stuff made a connection for me at a gigantic, multinational consultancy, for their Unity dev team. He thought I was pretty good even before I learned Unity properly, so I breezed through the interview. It was also the most money I'd ever been paid. Seemed like a huge win!

Then the reality of giganto-consultingware companies set in. You've not seen office politics until you've worked in an organization that runs under a partnership model. It wasn't exactly the worst job experience I've ever had, but it definitely ranks up there. But it basically got me a ton of Unity development experience. They mismanaged the hell out of the team and eventually had to lay off half of everyone. I also ended up meeting a few folks along the way who got me an interview at my current place.

I'm now the head of VR at a foreign language instruction company. Our main client is the military's foreign language school, the very same place where my parents met and then shortly thereafter had me. Things are going well. The company is stable. It's on its second owner, who brought it back from bankruptcy in the early 2000s. I report directly to the president of the company. He thinks the world of me and lets me do whatever I think is best. We have weekly meetings where we geek out about video games and VR. I just got to hire my first employee to work on the project with me. On a weekly, sometimes even daily basis, the company does something that proves "our employees are our biggest asset" isn't just a platitude for them. It's amazing. I've never worked anywhere this nice before.

Sorry I never saw this. This both was captivating and inspiring!

Thank you for sharing your story, it was quite the journey and it is uplifting that you were eventually able to connect all the dots of the things you've worked on out of passion into something that you're now sought after for!

Part 1:

That's a funny wording, because it was actually going through the process of "finding myself" that I ended up in immersive software. It simultaneously feels like it came out of nowhere, but also that my entire life prepared me for it.

I'd always had a fascination with 3D imaging as a kid, both stereo graphics and holographics. I don't know what was going on in the late 90s, but there were a lot of cyan-magenta anaglyph comic books at the time, and a lot of Marvel comics were doing holographic overlays for special edition covers. I read every book I could get out of the local library on optics and holographics.

Through college, I studied a lot of computer graphics. It was fascinating to see how much of the optics stuff I learned as a kid applied to my studies in college. But I also learned I hated 3D modelling. This was also around the time the first stories started coming out about how terribly exploitative the game development industry is of its employees. And it was also around the end of the dotCom bust, though as I was graduating we didn't know it was the end yet. So I ended up going into web and database development, because one of the other things I had done as a kid was learn HTML and JavaScript.

My first job was in Geospatial Information Systems. I did a lot of work in map rendering and web-based rendering. A lot of study of projection systems. This was pre-HTML5, so I made a lot of filthy hax in JavaScript to make DIV tags look like pixels. Bresenham's Line Algorithm, which everyone else in college thought was a waste of time to study, was my savior. I ended up making a JavaScript library that was essentially Java's Graphics2D API, rendered in DIV tags.
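For anyone unfamiliar with it, here's a minimal sketch of Bresenham's Line Algorithm and why it was a lifesaver for DIV-as-pixel rendering: it rasterizes a line using only integer addition and comparison, no floating point. (The DOM usage at the bottom is an illustrative reconstruction of the pre-HTML5 trick described above, not the actual library.)

```javascript
// Bresenham's line algorithm: returns the integer pixel coordinates
// between (x0, y0) and (x1, y1), using only integer arithmetic.
function bresenhamLine(x0, y0, x1, y1) {
  const points = [];
  const dx = Math.abs(x1 - x0);
  const dy = Math.abs(y1 - y0);
  const sx = x0 < x1 ? 1 : -1;  // step direction in x
  const sy = y0 < y1 ? 1 : -1;  // step direction in y
  let err = dx - dy;            // running error term

  while (true) {
    points.push([x0, y0]);
    if (x0 === x1 && y0 === y1) break;
    const e2 = 2 * err;
    if (e2 > -dy) { err -= dy; x0 += sx; }
    if (e2 < dx)  { err += dx; y0 += sy; }
  }
  return points;
}

// The pre-HTML5 trick: each point becomes one absolutely-positioned
// 1x1 DIV "pixel" (browser-only, shown here for illustration):
//
// for (const [x, y] of bresenhamLine(0, 0, 5, 3)) {
//   const px = document.createElement("div");
//   px.style.cssText = `position:absolute;left:${x}px;top:${y}px;` +
//                      `width:1px;height:1px;background:#000`;
//   container.appendChild(px);
// }
```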

I had a lot of trouble with my relationship to work. My first job wanted to move me out to the middle of nowhere, gave me very little notice on the decision. It was terrible, so I also quit without notice. My next job was OK, but the work was mind-numbingly boring and I wanted to move away from home. Found a job near Philadelphia, near some friends. That job was at a company that was basically a cult, though there was one highlight where code I wrote helped catch a criminal by proving there was no way they were delivering the products they said they were because the timestamps between stops were too short for the distances they had to travel. My next job, for a company that makes used car database websites (so already soul-sucking to start), had all the anti-corporate, pro-developer surface details I thought were the problem with my previous jobs. I hated it even more, the constant Nerf gun battles and the lack of focus, at the same time as being constantly brow-beaten for not getting anything done in that environment. I quit after 3 months.

I thought the problem was consulting. I wanted to get out of consulting and into product development. A friend got me an interview at a company working in home-automation devices. It was a huge paycut, but I was supposedly "getting in on the ground floor" of a "hot startup". Turned out the "small startup" was actually "a poorly managed company that couldn't find a market fit and did some shady deals to rebrand every 3 years to escape their reputation". I ended up right back in web and database consulting work there. The systems were terrible. Most of my work was manual data entry and fixing stupid timing bugs in the device configuration tool. I fixed everything, made tools to automate the data entry, made simulators of devices to speed up testing of the configuration tools, fixed all the stupid code that made bad assumptions that led to comms race conditions. I got fired because I refused to work overtime. I refused to work overtime (unpaid, mind you!) because there was no overtime work to do. I got told that I needed to log 60 hours a week no matter what. I told my boss that if she wanted to lie about the work I was doing, she could fudge the invoices to the client herself and leave me out of it.

I thought maybe I hated programming. I eventually learned that I didn't hate programming, I just hated the people I was working for. I decided I was not going to look for a "job" and I was going to stick to being independent for as long as possible. I had joined a hackerspace when I first moved to Philly, started using it as my office. I tried starting a t-shirt printing business. I tried selling photography prints. I built a couple of museum installations on contract, these Arduino-based things. I tried making music-teaching toys. I did pyrotechnics on an indie film. It was all over the place.

One of those things I tried to build at the hackerspace was basically Google Cardboard, 4 years before it was a thing. When smartphones first came out and started to show promise with 3D rendering and high performance motion sensing, I built my first "phone in a cardboard box" stereo viewer. It... worked... for values of "worked" that include massive headaches. I didn't have lenses, the box was relatively long to be able to focus and fuse the images, but seeing stereo animation of my own making for the first time was amazing. This was around 2010 or so.

At the same time, I also started playing with different camera-oriented apps that took motion sensing into account. Built a few different apps that could help you take stereoscopic photo pairs and render them in side-by-side or color anaglyph. Tried to build a turn-by-turn direction app and a point-of-interest discovering app using Google Maps data. That sort of stuff. Discovered sensor data and rendering were still not quite good enough to make a good experience.

Eventually, a friend from the hackerspace got me an interview at a company that makes tilt sensors. I thought the job sounded boring, but I needed the money. They hired me on a 3-month contract-to-hire, and when the 3 months were up, I asked them if they would be willing to keep me as a freelancer. They agreed. I spent 3 years there, the longest I had spent anywhere up to that point. I met a girl. She lived in DC, so I started pushing to let me work remote so I could travel to see her. They agreed. I eventually moved to DC and became 100% remote, which they were cool with. I even got to where I had hired a few people part-time on my own to help out with the work. They thought it was great. I actually loved it, for a time. Someone in the company got a bug up their ass about me having my own employees. I guess someone parallel to the guy I reported to got the owner convinced that it was a "security risk" or something. I don't know what, all I know is that this other guy took over and I got slowly squeezed until there was only enough work for myself. I was right back into having a bad boss again, so I was on the lookout for an exit.

So who's doing interesting applied work in scene understanding for AR?

I've not paid close attention to AR in the last year. I almost exclusively did AR at my previous job, but I always really only wanted to do VR. My current job is 100% VR focused again, so I'm not completely up to date. But I do still see stuff in passing on Twitter and such.

I think one of the highest-value things that is in progress right now is the work Apple is doing to combine feature detection and location. They get your rough location with GPS, stream a feature-point cloud to your device, then figure out your precise location based on the camera view. Just having a reliable, to the centimeter, position and orientation of a user in the full, real world is going to be a huge enabling factor for AR applications.

Without it, the best thing you could do is turn-by-turn directions, because GPS precision and drift prevent you from doing anything in close proximity. But if it's not in close proximity, then you're really limited in the detail you can provide people looking through their phone.

With it, you can start to do a lot more stuff. Art installations, public information kiosks, event-based things. And it's out in the world where other people will see people doing it. Exactly like how people got interested in Pokemon Go because they saw other people in the street playing it.

I think we're still a long way off from useful object recognition. I've seen a lot of concepts around brands wanting their appliances detected and giving users instruction manuals, repair manuals, or value-added services. The problem is, state-of-the-art object recognition can fairly reliably tell you "I see a refrigerator", it just can't tell you "this is a Whirlpool refrigerator", to say nothing of the specific model. So that kind of goes back into the funding problem. Whirlpool, GE, Frigidaire, they all want AR, but not if it's going to work with other brands. Same with basically every other product on the market. So object recognition is kind of at the same point that location tracking is, with regards to detail. It's going to either take an unrelated company going out on a limb to support multiple brands without the brands' involvement, or it's going to take an unlikely development in object recognition that can reliably detect brands and models.

Things that were just coming along when I stopped doing fulltime AR work (hopefully they've gotten better in the last year):

Microsoft and IBM were doing a lot of great work in improving object detection, plus providing it as a service to be utilized in applications. That's another problem, most of the stuff you see is so research-grade that it's still years away from being productized, if it doesn't get canned first. But at least some of the high-level object recognition stuff is productized right now. It's slow, though, so you have to be smart in your UX about how you manage the queries. A neural network can tell you that it saw a cat in one picture you sent it a full second ago, but it can't tell you that it sees a cat right now in your video stream. But if you can work with detecting things in still images, it could be usable.
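The "be smart in your UX about how you manage the queries" point can be sketched as a latest-frame, single-request-in-flight pattern: while one slow cloud round-trip is pending, newer frames simply replace each other, so you never build up a backlog of stale video. This is a minimal sketch assuming a hypothetical async `detect(frame)` cloud call, not any specific vendor's API:

```javascript
// Wraps a slow cloud detection call so only one request is ever in
// flight. Frames submitted while waiting overwrite each other; when
// the response arrives, the freshest pending frame is sent next.
function makeDetector(detect, onResult) {
  let inFlight = false;
  let pendingFrame = null;

  async function pump(frame) {
    inFlight = true;
    try {
      const result = await detect(frame); // ~1s cloud round-trip
      onResult(frame, result);            // result is already stale
    } finally {
      inFlight = false;
      if (pendingFrame !== null) {
        const next = pendingFrame;
        pendingFrame = null;
        pump(next);                       // send the newest frame
      }
    }
  }

  return function submitFrame(frame) {
    if (inFlight) {
      pendingFrame = frame; // drop any older queued frame
    } else {
      pump(frame);
    }
  };
}
```

The result callback receives the frame that was actually analyzed, so the UI can anchor "I saw a cat" to the image it came from rather than pretending it applies to the live view.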

PTC was doing some much simpler, but still very interesting work with their Vuforia system, pivoting to value-added services on top of native AR subsystems rather than just providing the AR subsystem. Vuforia was great 3 - 5 years ago when we didn't have any AR subsystem, but Google and Apple have basically pulled the rug out from under them. Image-target tracking is both terrible and great. It's terrible because it's not very flexible. But it's great because it tells you something contextual about the user: they have my image target in view. They have a live-annotation system for spatial drawing in 3D that's really interesting for teleconferencing. Except Vuforia is positioning it for industrial repair. Again, "who is paying for this" gets in the way.

If appliance brands could get their collective heads out of their asses and accept that maybe a new paradigm requires a little flex and adaptation on their own part, I could see new products being developed that are visually easier to detect.

I also think all the work going into speech recognition and semantic understanding of text is super important for AR and VR. It's not just about making user interfaces for people to use these systems hands-free (though that's important, too, because there are a lot of scenarios where a user might not have free use of their hands). Having reliable, contextual information about what people are talking about in, say, a meeting, could enable virtual assistant technologies that aren't dumpster fires.

Similarly, reliable facial recognition would be a huge help for AR systems, in a lot of very obvious ways.

But facial recognition leads me to the unfortunate thought that we're going to run into some intractable problems in machine learning that are going to prevent the full, perfect future of AR. Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point, to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.

Bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference lies in four areas: how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results may be.

Things like product suggestions or voice dictation systems work because, when we get a bad result, we can easily recognize and correct for it, often by just retrying. And part of why we can tell that there is a problem is that the results link back to some notion of objective reality. In contrast, a NN that dreams up photos of dogs melting into a landscape has no impact on reality.

But facial recognition runs into so many problems here. If you're trying to detect a particular person's face, they don't have another face you can try to see if you get better results. If you don't know who the person is that you're trying to detect (e.g. identifying a person from a photo), then you don't even know when the results are wrong, so you can't try to get a different answer. And because you're bringing these results back to an action in objective reality, the consequences of wrong answers have real impact on real people (e.g. identifying suspects from security camera photos).

So yeah, I'm not too hopeful for AR. The tech is cool. I certainly want to be able to have good AR tech. But some of the farther afield ideas on how the tech might enhance semantic understanding of the world... I think a lot of it is a pipe-dream. I suspect the actually achievable maxima is strictly limited to entertainment and productivity.

Short answer (as others have said): It's harder to make AR than VR. There are a couple of reasons, but most importantly it's because of the way optics work. You can read more on my post here, where I've also linked some further resources that go deeper into the issues: https://shafyy.com/post/ar-vr-two-sides-of-the-same-coin/

It’s nice to disconnect from the wall of numbers and just eat the food.

AR is shit. Garbage technology, useless features. Requires too many coincidental factors. That about sums it up.

VR is better because you can fabricate entire worlds and spaces for any task. Instantly useful. And if you really needed to combine the real world with generated content, you could theoretically do it by just overlaying content onto a video feed in VR.

Well I think it's mainly because nobody has done AR well yet for the mainstream. The only one I can think of is the Vuzix Blade but that isn't really AR. It doesn't recognise your environment, it just projects a picture in your eye in the same spot wherever you look.

I played with the Hololens 2 last year and I certainly can see the benefits. Especially when communicating with someone else.

However for $6000 that device is not going to happen for me ;) AR needs an "Oculus Quest" approach to be useful to consumers first.

I do love VR as well but I do think there is a big use case for AR once it becomes good and affordable at the same time.

Impressive, but I doubt even FRL can overcome these design limitations. Digilens has been trying to do that forever, and when you introduce more colors you will face the same kind of problems seen in Hololens 2.

There’s a host of further info in the paper itself:


“Finally, holographic lenses can be constructed so that the lens profiles can be independently controlled for each of the three color primaries, giving more degrees of freedom than refractive designs. When used with the requisite laser illumination, the displays also have a very large color gamut.”

They have full colour working, and apparently well, in the benchtop prototype.

There’s discussion towards the end as to options for implementing full colour in the HMD prototype.

Would you mind elaborating on (or provide a link to) some of the problems you mentioned Hololens 2 having? Also, I've never heard of Digilens; do you have a preferred source for learning more about that device and what its limitations are?


This article does a good job: https://www.kguttag.com/2019/12/18/hololens-2-not-a-pretty-p...

Digilens is a company that's been around for 15 years doing all kinds of eyewear stuff.

Can you expand on what you mean by "more colors" ? Toward the end of the article they show a multi-color image from the larger benchtop version of the prototype. Do you mean that adding colors doesn't scale down beyond a certain size?

To be completely honest, I would settle for an early monochrome version - I'm sure people can still make some great games/features like this. I think the form factor (and hopefully price) really is so much more important at this point.

Launch day Virtual Boy ports.

I remember reading Maimone's original Pinlight display concept [1]. It blew my mind.

1. https://www.researchgate.net/publication/266659406_Pinlight_...

"The First" series on Hulu reminded me of this in terms of the glasses technology they use quite frequently but coupled nicely with voice and gestures. I was watching this yesterday then saw this thinking this could be just around the corner. Maybe Apple is doing something similar?

Which has a better chance of miniaturization: a full-lens screen like this, or a projector in something like the HoloLens? The projector has the benefit of supporting passive pass-through AR with a semi-reflective mirror.

I suppose you could incorporate both.

We're getting closer and closer to the virtual light glasses from William Gibson's novel of the same name.

I think Oculus is pretty cool, but I don't really see the synergy with facebook here.

the end goal involves wearing a device that enables Facebook content in your field-of-view 12-15 hours a day

VR and AR are the next computing platform and Facebook wants to own the hardware.

surely they are just the VDU, not the platform?

That's what most of us independent developers in the industry had hoped would happen, but unfortunately it's not happening. Facebook locks the Oculus app store down harder than Apple does theirs. They spend a lot of development effort on Oculus-exclusive social integration features, like avatars, lipsyncing of avatars to user speech, and a bunch of other stuff that nobody ever asked for and nobody is using, yet they still keep pushing it. And they're spending a lot of investment dollars to buy up independent studios and lock them into platform exclusives.

Facebook definitely takes a platform approach with Oculus. The Oculus Quest is a great device at a great price. It has done the most to make VR a mainstream accessible thing. And yet because of Facebook's behavior and what we know about Facebook as a company in general, we really, really must not let it become the dominant system. Hopefully, other companies will improve their user experiences to match (because Oculus certainly doesn't have a hardware advantage, everyone is pretty much running the exact, same hardware profile with only minor differences).

I don't know much about Oculus - is it not possible to use alt app stores?

TBH, this thing happens. Look at GOG vs Steam vs <shall not be named>; this strategy doesn't always pay off though. Consider the Nintendo console lineup (curated, small but higher quality) compared to Sony PlayStation - in the end Sony won that one, and my own belief is that the reason was less curation, which let through some indie (not developed by Sony) gems.

There is one project called SideQuest that--if you enable developer options on your Go or Quest, which requires a developer account with Oculus--uses Android side-loading from a PC to get apps onto the Quest. When you side-load an app on the Go, you also have to have a key file embedded in it that was generated from the device's own serial number, so SideQuest has to have you generate an OSIG for your device, upload it to your profile, then hack your OSIG into any APKs you download from their store. Quest doesn't have the OSIG requirement, but you still need the developer account to enable developer options.

And you're still stuck using Oculus' APIs. There are no open source APIs for accessing the device sensor data or rendering. Regular Android apps (minus Google Play Services) can run on the devices as tiny windows, and then you can get some super-hacky input as touch events on the app view, but it's really not usable. I think it even forces software rendering, because I've seen some really bad performance out of it (I found out about it after having an app configured incorrectly before uploading it for my own development). It's basically there as a fallback for Android's default Settings view, which you use to configure the developer settings like enabling USB debugging.

So true.
