Another heavily-funded AR headset startup is shutting down (techcrunch.com)
125 points by mathattack 33 days ago | 116 comments



I was at Daqri from employee ~12 to ~300. It was a wild ride and I've got some really amazing memories with only a bit of scarring to show for it :)

Daqri was really good at recruiting amazing people, and really bad at turning them into a functioning company. Communication between teams (and particularly offices) was really lacking; I remember back when the helmet was built on Android (in LA) and all the vision scientists were working on iOS (in Sunnyvale) - they had this amazing world-tracking demo that we just couldn't use.

The Two Trees photonics tech is absolutely amazing, check it out if you get a chance - field-programmable holograms. Point the thing at any glass surface and you get a hologram "in" it. Blew my mind the first time I saw it! Dunno if it ever made it into headwear.


Where do you feel the fault was, wrt communication problems and leadership?

Was it because of incentives being wrong in the company, the wrong people in charge, something else?


Senior leadership didn't talk to individual contributors enough. They'd try all sorts of things to set us up for success, but it was generally the wrong thing, or at the wrong time, or the right thing in the wrong way. Re-orgs were really common but really ineffective; I'd find myself on a new team with a new boss, and a week later I'd be working with the original team and the original boss again, because those were the people I needed to work with to do what I needed to do, not the new team and the new boss. I could generally see what they were trying to accomplish with such re-orgs, but not why they thought a re-org would actually accomplish it.

The last re-org I was a part of before I left, our team was dissolved and I was re-assigned to a new boss I'd had a pretty terrible time with in the past, to do exactly what I'd been doing on the dissolved team, minus many of the best parts. He was absolutely surprised that I was not excited about this change, and could not answer when I asked what the org change was supposed to fix and/or improve. If they'd actually talked to any of us - including my direct manager at the time - prior to springing the change on us, they could've probably gotten us on board and aligned, but that wasn't a thing that usually happened.

Incentives and having the right people in charge are a structure to facilitate good communication; they are not a surrogate for that communication.

Thanks for the question!


It didn't look like any one person's fault. Building a viable AR headset is a Tony Stark type of affair. The complexity is on par with autonomous vehicles.


PS - I can't actually offer an AMA - there's things I can't or shouldn't talk about - but I am absolutely up for questions :)


Are there any videos of the Two Trees tech, or does that not do it justice?

How far out do you think consumer AR is realistically?


I don't know either way, but it's supposed to be in every 201x Land Rover, I think? If you see a hologram projected onto a car windshield for the driver, that might be the tech.

3-5 years, would be my optimistic guess? The tech is getting there (Magic Leap and Hololens), but it's not there yet, and it's not going to seriously accelerate until there's a really compelling consumer use-case / app to go with it.


I believe it should be essentially what Envisics has, which is probably best represented by the videos below.

What we are used to: https://youtu.be/p-lH4mufXHw?t=44

What they have: https://youtu.be/p-lH4mufXHw?t=63


The company I work for is trialing AR solutions for remote support purposes. We've tried the glasses, and customers hate them because they literally make them motion sick.

Our current thing is phone/tablet based: you can basically stream real-time video to our support staff, each party can draw on the screen, and whatever they draw stays where it is in physical space.
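
For the curious, the spatial-anchoring part of this is pretty approachable now. Here's a rough sketch of the idea on the customer's phone, assuming an ARKit-based iOS client (the ARKit/SceneKit calls are real; the class name and placeAnnotation helper are made up for illustration, and the video streaming / drawing sync is omitted):

    import ARKit
    import SceneKit
    import UIKit

    // Sketch of the customer-side piece: `sceneView` is an ARSCNView that is
    // already running a world-tracking session and showing the camera feed
    // being streamed to support. A tap/draw from either party lands here.
    class AnnotationPlacer: NSObject, ARSCNViewDelegate {
        let sceneView: ARSCNView

        init(sceneView: ARSCNView) {
            self.sceneView = sceneView
            super.init()
            sceneView.delegate = self
        }

        // Drop an annotation at a screen point so it stays put in the room.
        func placeAnnotation(at point: CGPoint) {
            // Raycast from the screen point onto whatever real surface ARKit
            // has estimated there (a table, a machine panel, etc.).
            guard let query = sceneView.raycastQuery(from: point,
                                                     allowing: .estimatedPlane,
                                                     alignment: .any),
                  let hit = sceneView.session.raycast(query).first else { return }

            // The anchor lives in world coordinates, so the mark stays where
            // it was drawn even as the phone moves around.
            sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
        }

        // ARKit calls this once the anchor is tracked; attach visible geometry.
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            node.addChildNode(SCNNode(geometry: SCNSphere(radius: 0.01)))  // ~1 cm dot
        }
    }

That's obviously a toy compared to a real product, but it's the same basic mechanism: hit-test the real world once, then let the tracking keep the annotation pinned.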

Customers love it, it works great.

Nobody wants dorky glasses that make you sick.


> and whatever they draw stays where it is in physical space.

Wow... that's a great way to utilize AR.


I've seen demos of this with Hololens. It really opens up new interactions. Neat stuff.

Disclaimer: I work at Microsoft, but not on Hololens. All opinions my own, etc.


At Daqri we called this "remote expert", and it was always one of the potential products with the highest demand.

My favorite use-case is that in nuclear power plants, they have two people do everything - one to do, one to watch (not in a big-brother sense, but more in a pair-programming sense) - but this means two people in an irradiated environment. With the AR tech, the second person can be in another room.

(Although you run into other issues, like no wifi (or power!) inside reactors - though apparently leaky coax is a really effective solution to the first.)


Leaky coax is a perfect attack vector for bad actors.


Were you working with any human factors or related teams in the nuclear industry?


A couple of times, although I don't remember / know if it was "human factors" or otherwise. Both cases were primarily about letting someone see something where it was dangerous to be, although the first was about the head-mounted cameras and the second about the head-mounted displays.


Yeah, our customers find great value in it. It's very nice for our support folks to be able to draw on something and say "try adjusting this control" and things of that nature.


What's the company?


The company with the software we're trying?

It's PTC's Vuforia Chalk: https://chalk.vuforia.com/


What's the company you work for?


Sorry, I'm not comfortable with sharing that. I'll just say that it's a company that manufactures high-end industrial converting equipment.


Looks like https://szy.io/


Well, that's my own little dealio where I make/sell electronics kits and resell used equipment. That's not involved in AR at all and it's just a little side-thing I do.

I have a separate full time job and I'm not totally comfortable with sharing who I work for, my apologies.


Sounds similar to: https://www.streem.pro/


Nice, I developed something similar but for the medical field and HoloLens!


I spent 3 months working on an AR project on the Magic Leap (the best funded AR startup, I think?). My conclusion was that the hardware just isn't there yet.

VR tech feels impressive because the hardware works well. AR tech, on the other hand, still feels very much like a prototype, and far from real production use. Some of the main deal-breaker issues are heat, discomfort, dizziness, and software instability (I wrote more about the specific issues I encountered with the Magic Leap in a past thread [1]). To me it felt significantly worse than the Oculus Rift Development Kit 2 (the second public release of a VR device, back in 2014). When I got that version back then, because I was super hyped for VR, I thought "Woah, this clearly needs more work, but it's amazing". With AR, my experience thus far has sadly been "Oh, this is it?".

A lot of it is presumably because AR is much harder to do well. A very simplified way of thinking about VR: it's basically a screen close to your face that can track its own pitch/yaw/roll and small amounts of movement, and redraw accordingly. This is tricky, but totally doable (and thus, it was done). Smartphones have basically been able to do most of this for years (not so much the movement, but that's also a pretty new thing in VR).
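
To make that "track the head pose, redraw accordingly" loop concrete, here's a toy per-frame step, assuming the tracker hands you an orientation quaternion and a position (simd only; the function name and the commented-out renderScene call are made up):

    import simd

    // Hypothetical per-frame step of the simplified VR model above: the headset
    // reports its orientation (pitch/yaw/roll as a quaternion) plus a small
    // positional offset, and the renderer rebuilds the view matrix from that
    // pose and redraws the all-virtual scene.
    func viewMatrix(headOrientation: simd_quatf, headPosition: SIMD3<Float>) -> simd_float4x4 {
        // Head pose as a rigid transform in world space...
        var headTransform = simd_float4x4(headOrientation)
        headTransform.columns.3 = SIMD4<Float>(headPosition.x, headPosition.y, headPosition.z, 1)
        // ...and the view matrix is simply its inverse.
        return headTransform.inverse
    }

    // Each frame: pose in, fresh image out. Everything shown is virtual, so
    // there is no real-world object the renderer has to keep up with.
    // renderScene(viewMatrix(headOrientation: pose.rotation, headPosition: pose.position))

The genuinely hard parts of VR are in the displays, optics, and low-latency tracking hardware; the rendering model itself really is about that simple.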

AR actually has to interact with the real world. If you have a virtual image anchored to, say, a pendulum, that's fairly non-trivial to recalculate, computation-wise, without prior knowledge of the pendulum's movements. It's basically object detection, which ML models can do fairly well, but they do it with latency. And when the latency is in something your brain is trying to process as the real world, it's nauseating. Add the fact that inference takes a lot of energy, and that AR devices need to be standalone devices you can take anywhere, and you also have a serious heat problem: the compute needed to constantly analyze the real world means you just have a hot brick on your head or in your pocket.
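
As a rough sketch of where that latency creeps in, imagine naively running a detector on every ARKit frame (the ARKit/Vision calls are real; the detector is a stand-in rectangle detector rather than a trained model, and the overlay update is left as a hypothetical comment):

    import ARKit
    import Vision
    import Foundation

    // Sketch of the "detect a real object, attach a virtual image to it" loop.
    // Detection runs asynchronously, so by the time results arrive the pendulum
    // has already moved; the overlay is anchored to where the object *was*,
    // and that lag is exactly what the brain notices.
    class ObjectTrackingDelegate: NSObject, ARSessionDelegate {
        private let detectionQueue = DispatchQueue(label: "detection")

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            let pixelBuffer = frame.capturedImage
            let capturedAt = frame.timestamp

            detectionQueue.async {
                // Stand-in detector; a real app would run a CoreML model here,
                // which is slower and hungrier still.
                let request = VNDetectRectanglesRequest { request, _ in
                    guard let box = (request.results?.first as? VNRectangleObservation)?.boundingBox
                    else { return }
                    // `box` describes the world as it looked at `capturedAt`,
                    // tens of milliseconds ago. updateOverlay(...) is hypothetical.
                    // updateOverlay(box, capturedAt: capturedAt)
                    _ = (box, capturedAt)
                }
                try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
            }
        }
    }

Every stage (camera exposure, inference, render) adds milliseconds, and unlike VR there's a real, moving reference sitting right behind the overlay to compare against.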

I think it's going to take a lot more time to get it right. It feels like a hard enough problem that I think the first truly production-grade AR device will have to come from one of the big companies, possibly Microsoft with HoloLens.

Of course, I write all this with the definition of AR as "something you wear on your head that overlays your full field-of-view with virtual graphics". Other forms of AR like Snapchat filters or camera overlays are probably less challenging.

[1] https://news.ycombinator.com/item?id=20253177


I managed to try out Microsoft's HoloLens 2 at MS Build this year in Seattle. My impressions were that it's a very cool, very impressive prototype. FOV is still quite low, although I hear it's significantly improved from v1. Framerate felt too low for real-world stuff; as you were alluding to, our brains are used to screens operating at 60fps, but there's something about putting the screen right up next to your eyeballs and adding tracking (whether VR or AR) that kills that illusion and mandates at least 90fps, and very smooth. I'm not surprised by the reports of people getting motion sick.

Having said that, the fundamental interactions are very cool. I had slight difficulty figuring out where the tracking was actually registering (e.g., how far do I have to move my finger to 'tap' a button), but once I figured that out it worked just like you expected it to.


I tried out the original. My impression was that the small FoV helped with motion sickness & immersion. I could see the objects in the room (table, wall, etc.) with infinite frame rate, and when I looked directly at them the augmentation (poster/tv on wall, something on the table) was where I expected it.

I always felt the small FoV was a complaint from those who couldn't see the bigger picture (haha)... at least for the current-gen tech, which has built-in latency and where a smaller FoV might actually be a benefit.

Anyways, my own thoughts and ramblings (I work at MS, but not on Hololens).


If history is good at predicting the future, I will guess that Microsoft will get it almost right, just like with Kinect. Somewhere along the way it will be mismanaged enough for an Amazon or Google to make a similar system that just works. Hopefully I'm wrong, since Nadella is at the wheel now and he's done things very differently from his predecessor.


I would be happy with simple text tags over things I see, for location-based knowledge. Like proper nouns, life pro tips, distance, reminders, object relationships, adverts (need to pay for it somehow), snapshots (upload photo), etc. AR doesn't need to involve graphics (besides the recognition aspect).


The most widespread, successful AR use case I can see currently is AR filters, i.e. Snapchat filters and those who copied them.

The reason, IMO, is that most of the people who use Snapchat filters wouldn't even know what augmented reality is. It's that kind of intuitive use case that can get high tech widespread consumer adoption.

Looking at the rate at which limited-attention, instant-gratification apps are conquering the market, demand for AR filters isn't going to decline anytime soon, and those who offer AR as a service will have plenty of buyers in the form of copycat apps wanting AR filters.

In fact, Snapchat itself supposedly acquired its AR filter tech from a Ukrainian company called Looksery in 2015.


This isn't a disagreement, but I was thinking the other day that the most widespread practical (non-entertainment) AR app has got to be backup cameras and displays in cars.

Modern ones show both the actual live view and a composited graphic of where your current backup path will take you based on the steering angle, etc.
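
As a back-of-the-envelope sketch, those guide lines fall out of the classic bicycle model: turning radius R = wheelbase / tan(steering angle), swept backwards and then projected into the camera image. Purely illustrative Swift, with a made-up function name and default values:

    import Foundation

    // Predicted reversing path from the current steering angle. Returns (x, y)
    // points in metres relative to the rear axle; the head unit would project
    // these into the camera image and draw them as the curved guide lines.
    func predictedReversePath(steeringAngleRad: Double,
                              wheelbaseM: Double = 2.9,
                              pathLengthM: Double = 5.0,
                              steps: Int = 50) -> [(x: Double, y: Double)] {
        // Near-zero steering: the path is just a straight line backwards.
        guard abs(steeringAngleRad) > 1e-3 else {
            return (0...steps).map { i -> (x: Double, y: Double) in
                (x: 0.0, y: -pathLengthM * Double(i) / Double(steps))
            }
        }
        let radius = wheelbaseM / tan(steeringAngleRad)   // bicycle-model turning radius
        return (0...steps).map { i -> (x: Double, y: Double) in
            // Angle swept around the turning centre after this much arc length.
            let theta = (pathLengthM * Double(i) / Double(steps)) / radius
            return (x: radius * (1 - cos(theta)), y: -radius * sin(theta))
        }
    }

No depth sensing or scene understanding needed: it's just geometry driven by the steering-angle sensor, which is part of why it ships in ordinary cars.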


I wouldn't have categorized this as AR, but perhaps you're right. Would this instead be called a "heads up display"? I guess not, since it's not something that's shown on the windows you're looking through.


I would agree this is AR. It's similar to the AR introduced in sports broadcasts that overlaid first-down lines or field-goal distance records onto football fields, hockey puck trails for speed indication, and so on...


I'd say it's more of a HUD. If there was a 3D render of where walls might be or bumps might be, then I'd consider that AR.


But it's the literal opposite of heads-up; you lower your view and stare at the radio to see the camera. If you were looking at the mirror and it was adding to the normal reflection you're already seeing, that would be heads-up.


Technically, a wrist watch is AR since it is augmenting your reality with information.


This wouldn't fall into the category of AR because the primary defining technologies across AR use cases aren't involved. For example you don't need any kind of SLAM, Visual Odometry or otherwise for backup cameras to work seamlessly. It's not 6DOF. There is no 3D rendering. Camera localization isn't required etc...

So calling this AR means you're casting too wide of a net from a complexity perspective.


According to Steve Mann, the father of AR, any wearable compute is AR.


Last I checked, a backup camera isn't wearable.

Also, just because Steve Mann said it doesn't make it canon, and he didn't invent AR - arguably Louis Rosenberg built the first functional AR system.


Consumer AR is very, very different from industrial AR. The business case for industrial is huge, I have no idea what the business case is for consumer :P


The problem with industrial AR is that most use cases I've seen (maintenance, instruction manuals, support, etc.) are cost centers, not profit centers. It will generally cost more to produce than the current methods, so business investments will be made only where the benefits are an order of magnitude greater than the "paper or iPad manual" solution. I'm sure there will be a few use cases, but not as many as most people seem to expect.


Instead of content being constrained to a tiny 2D rectangle, it can now encompass your entire field of view. The world will be overlaid with holograms that everyone can see and interact with.


This video always comes to mind:

https://www.youtube.com/watch?v=YJg02ivYzSs


I can see why other people want me to have that experience, but not why I would want to have that experience (as a consumer). Then again, I'm not exactly a representative sample.


AR seems like one of those things that needs to be executed "perfectly" to work. VR can benefit a bit from suspension of disbelief etc. ...but with AR the real world is harsh :)

But I bet we'll have "the iPhone" of AR doing it right and popping the market open! Someone will surely get lucky betting on it!


Apple has the resources, the install base, and (crucially) the data to make it happen.

The "iPhone of AR" is probably going to be an iPhone.


It always seems like incumbents have it all (resources, distribution, talent, etc.) until they don't. For example, the "iPhone" theoretically should have come from Nokia by that logic. AR particularly requires truly novel innovations for it to work in the mass market. There might be niche cases that work for AR (studio/arcade games, sports tech) before the mass-market use-cases.


Apple is well documented (for it being a secret project) as throwing tons of resources into compelling AR products.

I'm not saying they're going to succeed, but they're certainly trying - unlike Nokia, who was in the middle of a massive internal war over Symbian when the first iPhone hit.


Apple is (or used to be) very good at waiting for the right time when all necessary components for a good product are ready or close to ready. There is a lot of value in watching other companies fail and learn from them.


Apple has ARKit on iPhones, and it's usable right now.

Apple has the built-in "measure" app, though that can only measure things on a tabletop, and the Mac Pro table-top demo (https://www.apple.com/mac-pro/ in Safari on iOS). The iOS game Egg, Inc. has a mode where you can see your farm projected on top of a nearby surface. It's kind of neat to see the little chickens run around on my kitchen table.
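
For a sense of how low the barrier already is, a minimal "ARKit is usable right now" session looks roughly like this (real ARKit calls; the view controller itself is just a made-up example):

    import ARKit
    import SceneKit
    import UIKit

    // About the smallest possible ARKit app: a full-screen ARSCNView running
    // world tracking with plane detection. Measure, the Mac Pro demo, and Egg,
    // Inc.'s farm mode are all content layered on top of a session like this.
    class MinimalARViewController: UIViewController {
        let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            view.addSubview(sceneView)
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            let config = ARWorldTrackingConfiguration()
            config.planeDetection = [.horizontal, .vertical]   // find tabletops and walls
            sceneView.session.run(config)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }
    }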

What's holding AR back (on the iPhone) then, is a killer app.

Vuforia Chalk is one, for remote support. See what the supportee sees, and point out things, visually, to them. Circle which lever to pull, and using Apple's ARKit [0] means the circle stays on the right lever, even when the phone is jostled. It's a cool use case, though their UX is horrible.

IMO the problem is a hand-held smartphone is an awkward AR platform.

[0] Note that there's some vendor lock-in by Apple here. The Vuforia Chalk feature I mentioned above is image stabilization and an overlay. Image stabilization (and face detection) is an old problem in computer vision; it's commonly used to add silly hats to faces in video calls.


> IMO the problem is a hand-held smartphone is an awkward AR platform

...doooh! ARKit might be good, but it needs to be coupled with a Google Glass successor. Imo GG was the "Nokia 900" of its era: proof that it can be done and that there's sort of a market, but worthless without something to make it worth the inconveniences for regular users.

To be honest I'd bet on a social-media-experienced company like Facebook, or on whoever does a good job of partnering with a social media giant, because the only AR applications I can imagine being compelling enough for "average people" will involve social things: walking around a bar and seeing the FB, Twitter, and maybe Tinder profiles of the people around you floating near their heads like game stats, sourced via facial recognition from people who set themselves to "open". It will be creepy beyond belief, but I'd bet on a mix of social, mildly sexual, with a dash of gaming - like a cross between Tinder and Pokemon Go. Now, god knows who has the chops to pull that one off without getting too creepy...


It's really important to understand that funding for AR startups cooled off when Apple and Google decided to go hard into it in 2017.

Almost across the board every major VC got spooked, then when Magic Leap turned out to be a turd the bottom fell out.

I firmly believe that Apple and Google successfully took the entirety of the market opportunity for a startup to win the AR game long term.

You could argue about whether this is good or whatever, but at the end of the day AR isn't going to be the platform a startup uses to knock FAANG off their perch, and one or all of them will own the next hardware platform.


Can someone explain to me why sufficiently good VR can't function as AR?

Just to lay out my point: AR has to have extremely good detail, framerate, and even depth of field to work well. It has to track everything perfectly and blend the two lightfields seamlessly, or the immersion is broken and you risk nausea, or at least annoyance.

But if you have a high framerate, the ability to simulate depth of field, high knowledge of the environment (enough to overlay seamlessly), etc., then couldn't you just... operate purely in VR with reality streamed in via lightfield cameras? That would allow you to do things that are impossible in AR, like displaying darker overlays on lighter surfaces, and everything would be at the same framerate, so it'd be easier to seamlessly integrate everything.

So what does really, really good AR get you that really, really good VR with a reality overlay does not? Seems like you could avoid a lot of super-hard-to-effectively-impossible hardware optics problems by just doing things with an opaque visor...

I'm not an expert in this field, so I'm probably using terminology incorrectly.


With sufficiently good cameras you can already pass through video. Setting up boundaries on the Oculus Quest has you do exactly that.

The current issues are:

* A limited field of view. AR headsets are more limited but don’t interfere as much with normal vision.

* Latency of video display causing a noticeable mismatch to head movement.

* Shutting the rest of the world out. No one can see your face properly so it’s much more invasive than AR might be.


Good point about others (not wearing headsets) not being able to see your face properly. I hadn't considered that.


I would like to note that the shortcomings mentioned above can be fixed:

- A VR passthrough can achieve a much wider FOV than any AR headset.

- Latency issues are debatable; I would say it's a non-issue.

- Eye-tracking and front-facing displays solve gaze opaqueness. We already have cut-out camera holes in consumer-grade displays.

I would say that THE current issue with passthrough VR is solving the convergence problem. Varifocal lenses are not consumer-level technology yet.


VR passthrough can get a wider FOV than an AR display, but it's still a lot less wide than natural vision. Which was my point.

Latency issues might be subjective but there are clearly going to be population trends. I imagine these will be very similar to latency trends with VR and as such latency will be a significant issue.

Front facing displays are even weirder than someone with a headset on. I’m confused that anyone thinks sticking a view of someone’s eyes on a flat screen is going to be anything other than weird. It’s diving head first into the uncanny valley.


A number of challenges exist for pass-through AR. In short, the camera system that you need does not exist.

Cameras add latency, which causes nausea. Camera resolution is low; when the screen is less than 10 cm from your eyes, the grain is very noticeable. Cameras have only a limited focal range. Camera dynamic range is low; in many settings it is hard to capture images that your eye could handle.


Thank you. That makes sense. I know some crazy high resolution lightfield cameras exist, but the hardware is so heavy there's no chance of getting it on your head. (In principle you could address this with a huge array of tiny cameras on a headset, I suppose? Like this: https://www.deccanchronicle.com/150421/technology-mobiles-an... or this: https://newsbeezer.com/austriaeng/lg-patent-shows-smartphone... )

Apparently display technology is also a limitation.

Here's one attempt to pass-through AR (thank you for that vocabulary, BTW): https://www.youtube.com/watch?v=hNbTCpURpQs

Are you aware of any who do it better?


Retinal projection works for display tech. https://www.bynorth.com/focals


Pretty cool, but could you project a 3D object on these?


> the camera system that you need does not exist

VR can work at 90Hz. If you spend money on a quality camera and a quality screen, you can manage low enough end-to-end latency for AR. Focal range will be fine if you have a good sensor. Grain will be fine if you have a good screen. Compressing the dynamic range won't break anything. It doesn't have to work in every setting.


I work in VR and my personal theory is the version you outline above is the long term model for immersive computing. Most people I talk to disagree, but as you mention my argument is that in the limit a system which allows full photonic control will be the final mechanic. Incidentally it’ll maybe happen sooner anyway, since viable VR pass through tech is arguably software gated today on a platform like the Quest. The chicken and egg problem is who is a) going to get it working well and b) have the guts to start wearing a eye covering headset for more and more % of their day. It may be a long iteration cycle and transparent AR may rule the day for a few generations until the world is used to the idea first.


Potential legal minefield, too. Take drone flying, for example. It requires that you maintain visual line of sight (most of the time, without special exceptions). But as a drone flyer in a commercial setting, you frequently want to see through the camera as well. Given that, AR suits nicely, since you can have both at once. VR doesn't, because it blocks your true visual line of sight.

In a perfect world, I agree with you. VR would be nicer, especially once the cameras include thermal/night vision/other modes.


Good point about the thermal/night vision thing. We effectively do this already with nightvision goggles. It'd be super amazing to use pass-through stereographic ultrasound images for diving in murky waters (or smoke?). A potential game changer in some instances.

Imagine directly overlaying a "4D" ultrasound in realtime with careful sensor position registration... it'd make all manner of medical procedures a lot clearer. You could immediately see which way a baby was turned (without training, even), for instance. I guess in those cases you wouldn't even need conceptually seamless immersion, just precision sufficient for eye-hand coordination.


>But, as a drone flyer in a commercial setting, you frequently want to see through the camera also

Commercial drone photo/videographers should be using a pilot to control the drone, while they themselves control the camera.


I think the idea is that even the pilot may want to see through the camera, i.e. first person flying.


That's what Varjo is betting on: https://varjo.com/xr-1/


Tried the XR-1 at SVVR. It's very impressive. The camera is very high rez and low latency. If I look for something to complain about, the colors are not very rich, it doesn't handle glaring light well, and your perspective is shifted a bit forward of your face because of the physical position of the cameras.

But, still... I'm nitpicking an awesome piece of kit.


My uneducated take: AR in a stationary setting is almost useless except in a few industrial applications. You want AR to be super mobile, ideally. VR with reality piped in is a decent way to get what you can get out of stationary AR, but it's not quite as useful.


Part of the magic of VR is the immersion. AR doesn't appeal to me, it seems more of a gimmick.


Personally, I feel the exact opposite.

I don't see VR providing a whole lot outside the area of entertainment. How often do you want to be completely removed from the physical world on a regular basis? Even when you do want immersion, a good story or a game with good mechanics is far more important in entertainment than the medium it's delivered through. This is the reason people still read books, why retro video games are still played, and why 3D continually fails. A high-tech medium doesn't improve bad entertainment, but great entertainment can be delivered through a low-tech medium. I'm not saying VR has no value, but I don't see it being some giant revolution.

On the flip side, AR doesn't provide immersion, but rather enhances the world around you. AR allows pretty much any information you currently get from your phone or computer to be instantly hands-free and always available. If I told you I had a pill that allowed you to remember 20% more than you currently do, I can't think of many industries that wouldn't be improved by giving it to every employee. Once headsets become light enough and cheap enough, that's what AR provides: instant access to relevant information without diverting focus.

VR seems to me like an extension of 3D technology, AR seems like an extension of the smartphone.


I'd love it. If I could go to a space where I'm immersed in code, data, paint or sound and could interact with it to create, I'd be all sold on VR. It would provide me with a whole new medium to transfer my thoughts into. But AR? What can I get with it that I can't with a phone? Information already reaches me about as fast as I need it to as a human. And what can I get with it that won't be ads?


> If I could go to a space where I'm immersed in code, data, paint or sound and could interact with it

You can code in VR, but it's crap.

VR is great, but trying to use VR as a replacement for classic computers isn't the right way; it's an entirely new paradigm. Take Tilt Brush for example - you're creating virtual 3D art which can only be viewed in a headset, and that's how I'd view content creation in VR.

No, right now VR's best market is games and storytelling, that's what will really push the technology forward and eventually make the business cases a possibility as the tech becomes viable.


> If I could go to a space where I'm immersed in code, data, paint or sound and could interact with it to create, I'd be all sold on VR.

I'm not saying that wouldn't be cool, but using coding as an example, how much more immersed are you going to be coding in VR as opposed to going into a quiet room with multiple monitors where no one will disturb you? The extra immersion is cool, but I don't think being slightly more immersed in the way VR provides will lead to any sort of technological revolution.

> But AR? What can I get with it that I can't with a phone?

It's not about getting new information, it's about not needing the phone to provide that information. Think about a surgeon having the parts of the heart overlaid on the body while doing surgery. Think about firefighters having the blueprints of the building they are entering in front of their eyes.

The value of AR isn't providing new information; it's providing an alternative to relying on memory for that information.


> The extra immersion is cool, but I don't think being slightly more immersed in the way VR provides will lead to any sort of technological revolution.

Well, having access to as many monitors of any size you want without taking up any space in your home or office is a good enough reason for many.


> and why 3D continually fails

I must be misunderstanding what you mean by "3D continually fails". 3D titles are the majority of the $140 billion game business. Titles like Fortnite, Minecraft, League of Legends, and Call of Duty, to name a few billion-dollar titles, are all 3D. I don't know if there is a single billion-dollar 2D title. Maybe the top 3 mobile games?


Generally when people refer to 3D they mean each eye receiving different images as opposed to 3D models being shown on a 2D screen.


I think you're mixing up 3D like you get with stereoscopic glasses and the freedom to move in 3D (versus, say, 2D Donkey Kong or Pac Man).

Fortnite, Minecraft, Call of Duty, etc. are (largely) presented on two-dimensional screens.


In the particular sentence I was referring to, the context seemed to be retro 2D games vs 3D games (from Doom to Fortnite). It was not 3D games on 2D screens vs 3D games in VR, or at least that's how I took it.

Even if it was about 3D on 2D screens vs 3D in VR, VR wins in almost every case (except market share, haha). Put a non-gamer in a well-designed 6DOF VR game with hand controllers and they'll pick it up immediately, because instead of having to learn how to control the camera they just look, and instead of having to learn which of the 8 buttons on their Xbox/PS4 controller to press to do anything, they just reach out and touch stuff.

Anecdata: My mother (77) and my step-father (82) have both never played a video game in their lives. I'm not sure my mom has even played Pong or Pac-Man or Tetris, but I brought an Oculus Rift home 2 years ago to show them, and they were both enamored and wanted to know how much it cost and where they could buy one. I advised them not to do it since it would likely be used for 2-3 days and then not touched (not enough content), but still, it was neat to see them pick it up easily.

Of course there are poorly designed VR experiences. The ones that use fewer controller buttons and more in-world UIs arguably work best.


What about a headset (or glasses, contacts, whatever) that can provide anything from AR-style world annotations to a full VR experience, or anything in between? I agree that AR is something you'd use all the time, like a smartphone, but I do think the distinction between AR and VR is a bit artificial and due to what is technologically possible right now. There are many current VR experiences that would port surprisingly easily to AR (e.g., Tilt Brush, Oculus Medium, table tennis games, etc.) if there were AR hardware that was currently good enough.


If you have to read something, then your focus most certainly will be diverted, even if it’s relevant to what you were looking at or thinking about.

Therein lies the problem - if information pops up that's relevant to our conversation, you're still getting pulled out of the moment in an asymmetric way. That has negative consequences for human interaction.

Plus don’t forget — popping up the right information at the right time is an extraordinarily hard problem to solve.


> if information pops up that’s relevant to our conversation, you’re still getting pulled out of the moment in an asymmetric way.

I agree with you there, but I'd argue it's less distracting than trying to pull something not readily available from memory. Sure, having to read something is distracting for a second, but it's far less distracting than trying to remember something that's just on the tip of your tongue, or having to pull out your phone.

> popping up the right information at the right time is an extraordinarily hard problem to solve.

It is for a generic solution, but I feel like custom-built solutions will be what really revolutionizes the workplace. If I'm a UPS driver, I can have a custom UPS app that allows me to look into my truck and instantly see the package ID of each box overlaid on the box. That's not a hard problem to solve with the proper AR tools, and it instantly provides value.


Except for the fact you spend most of your time in the real world.

I can probably think of a million different uses for an AR display.

What’s really missing is a good way to interact with that display. Voice recognition and hand gesture recognition need to improve.


Yes, I spend most of my time in real life, which is why the modern equivalent of a pop-up book isn't that interesting to me. AR needs to have its "killer app" to draw me in. It's not there yet.

Meanwhile, VR can take me to another world. Games like Thumper, Audioshield and Beat Saber are enough to sell me on a crazy musical experience.


The cool thing about AR is you can think of any single app and it could be a killer app people would pay for.

Imagine eye glasses that allowed you to just "zoom in" by clicking your fingers. Or glasses that seamlessly translate written text into your native language. Or highlighted the path you should take whilst walking to work using GPS navigation. Any of those alone would be compelling products, the cool thing is these are just possible "apps" for an AR device.


Once you reach an actual "eye glasses" form factor that is capable of general-purpose use, a lot more things become viable.

But the "killer apps" you mention are all just slightly more convenient versions of existing products (smart phones, binoculars) and are thus very dependent on a small enough form factor to realize that convenience.

Not having to carry around and pull out a laptop to view live Google Maps directions is a killer app. Not having to pull my phone out is a much smaller convenience. It only becomes a killer app when the convenience of wearing smart glasses surpasses that of having a smartphone in your pocket (which is a long way off).

I do think there are killer apps that are in the nearer future. Anything where the rapidity of access to information conveys significant advantages is far less dependent on the form factor for success. You see this in the existing success of AR for business, where those efficiencies can boost productivity.


I'd argue the opposite is true. Yes, immersion is fun/cool... briefly. I liken VR to a rollercoaster - it's very fun but very intense, and therefore I don't want it every day. Maybe once a year or so is enough. Also don't forget that a significant portion of the population will experience motion sickness, and there's nothing you can do to prevent that (there are many techniques that have successfully minimized it, but there still remains a portion of the population that cannot handle it, period).

AR promises to be much more of a light touch in mixing digital content with the physical. You're still grounded in reality, so the motion sickness goes away (though if it's not well calibrated/tracked, and is overused, it can still be annoying).

That said, the total number of applications where this is desirable is way smaller than predicted by those who want a big new product category. You're still wearing a screen on your face, in a world that's largely starting to rebel against screen culture. Everyday use has a lot of problems in terms of attention management, leading to dystopia if not done correctly. There are targeted everyday applications that can be useful, although many are hard to justify over the cheaper 2D screen you already have in your pocket and certainly won't get rid of, simply because of both the keyboard and the cognitively grounded value of direct manipulation.


When the resolution gets high enough, AR could eventually replace every screen (phone, TV, computer).


It will still need gesture and voice recognition to reach a reliable level.

Also, how would I text privately with someone? One theory I have is that we will eventually just carry around phone-like slates which AR would display normal phone stuff on.


I imagine you could just carry around a little keyboard or simply type into any flat surface just as you do now with a phone screen.


Without tactile feedback the hologram keyboards you see in sci-fi movies will feel terrible to use.


I remember when the industry spent 6 years sweating over how to make real buttons emerge from the smartphone display because tapping on a flat glass screen wasn't tactile enough. Eventually someone turned that into a new, faster means of input - swipe style keyboards. Maybe we'll see something similar with AR.


Typing without tactile feedback is how the majority of people type out text messages already. In my opinion it's actually easier to type on a phone screen than on a tiny smartphone keyboard (of which I had many).

But I agree. I never said AR would replace keyboards -- just screens.


I'm suddenly feeling super behind on the news. While I believe I've heard rumors about Apple working on augmented reality, I can't think of any such product announcements coming down the pipeline.

> While Apple and Microsoft strain to sell augmented reality as the next major computing platform,

Is that accurate? Is Apple putting a lot of effort into selling AR?


Well you have the references to some kind of Glass-esque product in Xcode, and ARKit has been a part of iPhones since the X. It's interesting stuff but Apple doesn't seem to be interested in the fully immersive headsets so many other companies are. Microsoft seems to be doing a bit of both? But they aren't super focused on it.

Frankly I think the use of the word "strain" in that quote is a strain itself. They're definitely working on it, and it's there, but I wouldn't call it a feature they expect to shift units.


Not that I've heard. They have ARKit, which is getting decent updates, but it seems to be limited to phone use-cases like Pokemon GO or the AR measurement tool that ships with iOS. Not one of their top priorities afaik.


Industrial VR/AR is about the only place it makes sense, and it's failing there already. Imagine what the future holds, then, for the far less compelling home-use situation.

Like many others, I would like to be able to keep an eye on my kids and kiss my wife while I watch TV.

I know, how disgustingly human of me, right?


Brief mention of Meta in that article. I had missed that they shut down, though I can't say I am surprised. Their first headset was uncomfortable and lacked the field of view or functionality of the Hololens. It was a cabled only experience.

They were secretive and lacked decent SDK support.

When we tried to buy the second one it was delayed again and again by an insistence on US manufacturing and an inability to scale.

When they finally announced that our preorder was available 18 months later they wanted me to create a completely new order since my credit card was no longer valid (I decided not to waste my time or money).


Magic Leap next?


What have they achieved? Has anyone been blown away by them? For me, I'm just getting into VR thanks to the Oculus Quest, which I have to say is an amazing product for the price.


The AR hype bubble is collapsing now. The not-actually-tech bubble (WeWork, Uber) is next. I think it'll be AI after that (is that bubble even still around?). Great times to be picking up cheap office furniture!


ML is unlikely to bust; it actually works well and is integrated into most services any of us use regularly. Just because you don't feel it or see it advertised openly doesn't mean it isn't needed. I'd rather assume it's going to accelerate toward even more (cognitive) automation, unlike AR/VR/BTC, etc.


I tried their demo at CVPR a few months ago. It was super immersive and crisp. You interact with an AI "guide" who shows you how to look around, pick things up, stick them on the wall, etc. It was surprisingly unnerving to look the AI in the eyes and have her react to my gestures. Beyond a sweet demo, not sure what the value in the product would be for a general consumer option, but cool technology for sure.


Interesting considering this comment from about the same time frame.

https://news.ycombinator.com/item?id=20253177


They have a demo now???


Had to wait forever to get a time slot. They do 10-15 minute demos in a temporary room they put up on the expo floor.


We collaborated with them to build a demo for visualizing Google Cloud infrastructure thru their system: it was super impressive and fun, made me think of building a whole AR vNOC. Wild times!


I've heard it's pretty meh, not the revolution they promised - Hololens 2 is supposed to be better.


I don't like the clumsy look of Magic Leap or Hololens 2. Nreal has created glasses which look like sunglasses. That's the future. I could wear them in public. https://www.theverge.com/circuitbreaker/2019/1/9/18176083/nr...


Solution looking for a problem to solve? I have seen VR demos at SIGGRAPH and film festivals for the past 20 years without seeing the 'killer app' yet.


Shhh! There are still fools with too much money looking to invest!


Is AR this generation's VR? Full of promises and fantasy, but all of it far from being usable or practical at any mainstream level.


Less practical than VR. VR has a niche.


It's a very small niche, and it peaked around the 2017 holiday season. The VR game thing didn't sell. Except for Beat Saber.

John Carmack, in a recent talk, doesn't seem to consider VR as a way to visit a virtual environment any more. It's just a way to get a wider field of view of screen space in a mobile-sized package. Watching Netflix with VR goggles emulating a wide screen, for people who don't have enough space for a wide screen, is seen as a primary use.


Innovation has always had many victims.



