It's in progress, currently limited by both hardware and cost.

There used to be a good blog post from Michael Abrash, from when he was at Valve, that also talked about two main issues: latency, and drawing black effectively.

Latency is critical, since low latency is a requirement for things looking real (human visual systems are very fast at picking up misregistration), but that's ultimately a hardware problem that should get solved in time.
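
To put rough numbers on it (all values here are illustrative assumptions, not measurements from any headset):

    # Back-of-the-envelope: how far a world-locked virtual object "swims"
    # during a head turn, for a given motion-to-photon latency.
    head_speed_deg_s = 120.0   # moderate head turn, degrees/second (assumed)
    latency_ms = 20.0          # motion-to-photon latency (assumed)
    pixels_per_degree = 30.0   # rough angular resolution of current HMDs (assumed)

    error_deg = head_speed_deg_s * (latency_ms / 1000.0)
    error_px = error_deg * pixels_per_degree
    print(f"{error_deg:.1f} deg (~{error_px:.0f} px) of misregistration")
    # -> 2.4 deg (~72 px)

Even tens of milliseconds translate into a very visible error, which is why the latency budget is so brutal.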

Drawing black is harder because a see-through AR display only adds light on top of the ambient scene, and physically blacking out a region on a screen right in front of your face doesn't work, since something that close is hopelessly out of focus.
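
A toy model of why (purely illustrative, not how any particular headset works): an additive display can only add light to what already comes through the optics, never subtract it.

    import numpy as np

    ambient = np.array([0.6, 0.6, 0.6])   # light gray wall seen through the glasses
    target  = np.array([0.0, 0.0, 0.0])   # we want a black pixel here

    emitted = np.clip(target - ambient, 0.0, None)  # panel output can't be negative
    perceived = ambient + emitted

    print(perceived)  # [0.6 0.6 0.6] -- the "black" pixel is still a gray wall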

Unfortunately it looks like Valve killed their blog, but the Wayback Machine has it: https://web.archive.org/web/20200503055607/http://blogs.valv...

My bet is that Apple will pull it off, Apple Watch style, with front-facing LiDAR: https://www.youtube.com/watch?v=r5J_6oMMG7Y

Probably at first they will mostly be for notifications and interacting with apps in a window in your visual field, getting most of their power from the phone. Things like looking at food for calories and names, etc. will come later, when a front-facing camera is acceptable and there's existing UI in place.

I think this is probably the next platform after mobile devices: looking at little glass displays is a lot worse than having a UI in your visual field (if it can be done well).

[Edit]: A more recent blog post from Abrash on this topic: https://www.oculus.com/blog/inventing-the-future/


Note you can "paint" black by putting an LCD at a focal plane; then you can black out a light source that is in focus.

Of course you need a lot of space to route incoming light through that focal plane and then to your eyes.
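
A sketch of the resulting compositing math, assuming an idealized per-pixel occlusion mask (continuing the toy model above):

    import numpy as np

    ambient = np.array([0.6, 0.6, 0.6])   # world light entering the optics
    target  = np.array([0.0, 0.0, 0.0])   # desired black pixel

    transmittance = 0.0                    # occlusion LCD fully blocks this pixel
    blocked = ambient * transmittance      # world light left after the mask
    emitted = np.clip(target - blocked, 0.0, None)
    perceived = blocked + emitted

    print(perceived)  # [0. 0. 0.] -- black works where the mask sits at a focal plane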


Does AR require ambient light? Couldn't you use a VR approach where everything is drawn in, and reality is derived from cameras?

I’d argue that’s not really AR then (though no need to dispute definitions [0]). The blog post talks about that too: whatever that is, it’s not really a satisfying approach and wouldn’t be the next platform.

You want to be able to use the full power of human vision when looking at the world, not literally be looking at some subset in a display right next to your face all the time.

[0]: https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-...


TBH though, your link seems to describe situations where people leverage the confusion to win arguments. But point taken (thanks for the link btw, interesting article; I added my own comment).

It seems like it might be a useful distinction though. I always took AR to be a distinction of interface: reality plus augmentation. But it might also be a distinction of tech type: augmenting normal vision versus "virtual" AR, or AR in VR.

That said, I don't understand your comment about "use the full power of human vision"; if VR headsets improve to the point where VR environments are as detailed (w.r.t. human perception) as reality, then virtualised AR shouldn't differ either.

TBH my own concerns are about how hard VR is to use while it blocks you off from your surroundings: noticing when people approach, handling the headset/controllers/keyboard, etc. I can't replace my monitors with VR because in VR I can't see my keyboard or my coffee mug, or notice when people approach (so I jump every time someone taps my shoulder); VR needs to be partially augmented with my true surroundings just to operate within a normal space.



