Daqri was really good at recruiting amazing people, and really bad at turning them into a functioning company. Communication between teams (and particularly offices) was really lacking; I remember back when the helmet was built on Android (in LA) and all the vision scientists were working on iOS (in Sunnyvale), they had this amazing world-tracking demo that we just couldn't use.
The Two Trees photonics tech is absolutely amazing, check it out if you get a chance - field programmable holograms. Point the thing at any glass surface and you get a hologram "in" it. Blew my mind the first time I saw it! Dunno if it ever made it into headwear.
Was it because of incentives being wrong in the company, the wrong people in charge, something else?
The last re-org I was a part of before I left, our team was dissolved and I was re-assigned to a new boss I'd had a pretty terrible time with in the past, to do exactly what I'd been doing on the dissolved team, minus many of the best parts. He was absolutely surprised that I was not excited at this change, and could not answer when I asked what the org change was supposed to fix or improve. If they'd actually talked to any of us - including my direct manager at the time - prior to springing the change on us, they could've probably gotten us on board and aligned, but that wasn't a thing that usually happened.
Incentives and having the right people in charge are a structure to facilitate good communication, they are not a surrogate for that communication.
Thanks for the question!
How far out do you think consumer AR is realistically?
3-5 years, would be my optimistic guess? The tech is getting there (Magic Leap and Hololens), but it's not there yet, and it's not going to seriously accelerate until there's a really compelling consumer use-case / app to go with it.
What we are used to:
What they have:
Our current thing is phone/tablet-based: you can basically stream real-time video to our support staff, each party can draw on the screen, and whatever they draw stays where it is in physical space.
Customers love it, it works great.
Nobody wants dorky glasses that make you sick.
Wow... that's a great way to utilize AR.
Disclaimer: I work at Microsoft, but not on Hololens. All opinions my own, etc.
My favorite use-case is that in nuclear power plants, they have two people do everything - one to do, one to watch (not in a big brother sense, but more in like a pair programming sense), but this means two people in an irradiated environment. With the AR tech, the second person can be in another room.
(Although you run into other issues, like no wifi (or power!) inside reactors; although apparently leaky coax is a really effective solution to the first)
It's PTC's Vuforia Chalk: https://chalk.vuforia.com/
I have a separate full time job and I'm not totally comfortable with sharing who I work for, my apologies.
VR tech feels impressive because the hardware works well. AR tech, on the other hand, still feels very much like a prototype, far from real production use. Some of the main deal-breaker issues are heat, discomfort, dizziness, and software instability (I wrote more about the specific issues I encountered with the Magic Leap in a past thread). To me it felt significantly worse than the Oculus Rift Development Kit 2 (the second public release of a VR device, back in 2014). When I got that device, because I was super hyped for VR, I thought "Woah, clearly needs more work, but this is amazing". With AR, my experience thus far has sadly been "Oh, this is it?".
A lot of it is presumably because AR is much harder to do well. A very simplified way of thinking about VR: it's basically a screen close to your face that can track its own pitch/yaw/roll (and small amounts of movement) and redraw accordingly. This is tricky, but totally doable (and thus, it was done). Smartphones have been able to do most of this for years (not so much the movement tracking, but that's also a pretty new thing in VR).
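That "screen that tracks its own rotation" model is concrete enough to sketch: compose yaw/pitch/roll into a rotation matrix and re-project the scene every frame. A toy version in pure-stdlib Python (the angles and the rotation-order convention here are my own assumptions, just to show it's basic linear algebra):

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Compose yaw (about Y), pitch (about X), roll (about Z) into one 3x3 matrix."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    # R = Ry(yaw) @ Rx(pitch) @ Rz(roll) -- one of several valid conventions
    return matmul(matmul(ry, rx), rz)

def rotate(r, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(r[i][k] * v[k] for k in range(3)) for i in range(3)]

# A point one unit straight ahead of the viewer (-Z forward here); yaw the
# head 90 degrees and the point swings to the side.
p = rotate(rotation_matrix(math.pi / 2, 0, 0), [0.0, 0.0, -1.0])
```

That's the whole trick a VR headset plays per frame (plus lens distortion correction and positional tracking); there's no scene understanding involved.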
AR actually has to interact with the real world. If you have a virtual image anchored to, say, a pendulum, that's fairly non-trivial to recompute without prior knowledge of the pendulum's movement. It's basically object detection, which ML models can do fairly well - but they do it with latency. And when the latency is in something your brain is trying to process as the real world, it's nauseating. Add on the fact that it takes a lot of energy to do inference, and that AR devices need to be standalone devices that can be taken anywhere by themselves, and you also have a serious heat problem: the compute needed to constantly analyze the real world means you just have a hot brick on your head or in your pocket.
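To put rough numbers on the latency point - every figure below is an illustrative assumption, not a measurement - a commonly cited motion-to-photon comfort budget is on the order of 20 ms, and a camera-plus-detection pipeline burns through that fast:

```python
# Back-of-the-envelope motion-to-photon budget for an AR pipeline.
# All stage timings are assumed ballpark values for the sake of the arithmetic.
BUDGET_MS = 20.0  # often-quoted comfort threshold, order of magnitude only

pipeline_ms = {
    "camera exposure + readout": 8.0,
    "object detection inference": 15.0,  # even a "fast" model on mobile silicon
    "render + display scanout": 6.0,
}

total = sum(pipeline_ms.values())
print(f"total: {total:.1f} ms vs budget {BUDGET_MS:.1f} ms "
      f"({'over' if total > BUDGET_MS else 'under'} by {abs(total - BUDGET_MS):.1f} ms)")
```

With those (made-up but not crazy) numbers the pipeline is already well over budget before it does any scene understanding beyond a single detection pass, which is why real systems lean so hard on prediction and reprojection tricks.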
I think it's going to take a lot more time to get it right. It feels like a hard enough problem that I think the first truly production-grade AR device will have to come from one of the big companies, possibly Microsoft with HoloLens.
Of course, I write all this with the definition of AR as "something you wear on your head that overlays your full field-of-view with virtual graphics". Other forms of AR like Snapchat filters or camera overlays are probably less challenging.
Having said that, the fundamental interactions are very cool. I had slight difficulty figuring out where the tracking was actually registering (e.g., how far do I have to move my finger to 'tap' a button), but once I figured that out it worked just like you expected it to.
I always felt the small-FoV complaint came from people who couldn't see the bigger picture (haha)... at least for the current-gen tech with its built-in latency, where a smaller FoV might actually be a benefit.
Anyways, my own thoughts and ramblings (I work at MS, but not on Hololens).
The reason, IMO, is that most of the users who use Snapchat filters wouldn't even know what augmented reality is. It's that kind of intuitive use case that gets high tech into widespread consumer adoption.
Looking at the rate at which limited-attention, instant-gratification apps are conquering the market, the demand for AR filters isn't going to decline anytime soon, and those who offer AR as a service will have enough buyers in the form of copycat apps wanting AR filters.
In fact, Snapchat itself supposedly acquired its AR filter tech from a Ukrainian company called Looksery in 2015.
Modern ones show the actual live view plus an overlaid graphic of where your current backup line will take you based on the tire angle, etc.
So calling this AR means you're casting too wide of a net from a complexity perspective.
Also, just because Steve Mann said it doesn't make it canon, and he didn't invent AR - arguably Louis Rosenberg built the first functional AR system.
But I bet we'll have "the iPhone" of AR doing it right and popping the market open! Someone will surely get lucky betting on it!
The "iPhone of AR" is probably going to be an iPhone.
I'm not saying they're going to succeed, but they're certainly trying - unlike Nokia, who was in the middle of a massive internal war over Symbian when the first iPhone hit.
Apple has the built-in "measure" app, though that can only measure things on a tabletop, and the Mac Pro table-top demo (https://www.apple.com/mac-pro/ in Safari on iOS). The iOS game Egg, Inc. has a mode where you can see your farm projected on top of a nearby surface. It's kind of neat to see the little chickens run around on my kitchen table.
What's holding AR back (on the iPhone) then, is a killer app.
Vuforia Chalk is one for remote support. See what the supportee sees, and point out things, visually, to them. Circle which lever to pull, and using Apple's ARKit means the circle stays on the right lever, even when the phone is jostled. It's a cool use case, though their UX is horrible.
IMO the problem is a hand-held smartphone is an awkward AR platform.
Note that there's some vendor lock-in by Apple here. The Vuforia Chalk feature I mentioned above is image stabilization plus an overlay. Image stabilization (and face detection) is an old problem in computer vision; it's commonly used to add silly hats to faces in video calls.
...doooh! ARKit might be good, but it needs to be coupled with a Google-Glass successor. IMO, GG was the "Nokia 900" of its era: proof that it can be done and that there's sort of a market, but worthless without something to make it worth the inconveniences for regular users.
To be honest, I'd bet on a social-media-experienced company like Facebook, or on whoever does a good job of partnering with a social media giant, because the only AR applications I can imagine being compelling enough for "average people" will involve social things. Picture walking around a bar and seeing FB, Twitter, and maybe Tinder profiles floating near people's heads like game stats, sourced via facial recognition from people who set themselves to "open". It will be creepy beyond belief, but I'd bet on a mix of social, mildly sexual, with a dash of gaming - like a cross between Tinder and Pokemon Go. Now, god knows who has the chops to pull that off without getting too creepy...
Almost across the board every major VC got spooked, then when Magic Leap turned out to be a turd the bottom fell out.
I firmly believe that Apple and Google successfully took the entirety of the market opportunity for a startup to win the AR game long term.
You could argue why this is good or whatever, but at the end of the day AR isn't going to be the platform a startup could knock FAANG off of their perch, and one or all of them will own the next hardware platform.
Just to lay out my point: AR has to have extremely good detail, framerate, and even depth of field to work well. It has to track everything perfectly and blend the two lightfields seamlessly, or the immersion is broken and you risk nausea, or at least annoyance.
But if you have high framerate, the ability to simulate depth of field, high knowledge of the environment (enough to overlay seamlessly), etc., then couldn't you just... operate purely in VR with reality streamed in via lightfield cameras? That would allow you to do things that are impossible in AR, like display darker overlays on lighter surfaces, and everything would be at the same framerate, so it'd be easier to seamlessly integrate everything.
So what does really, really good AR get you that really, really good VR with a reality overlay does not? It seems like you can avoid a lot of hardware optics problems - ranging from super hard to effectively impossible - by just doing everything behind an opaque visor...
I'm not an expert in this field, so I'm probably using terminology incorrectly.
The current issues are:
* A limited field of view. AR headsets are more limited but don’t interfere as much with normal vision.
* Latency of video display causing a noticeable mismatch to head movement.
* Shutting the rest of the world out. No one can see your face properly so it’s much more invasive than AR might be.
- A VR passthrough can achieve a much wider FOV than any AR headset.
- Latency issues are debatable; I would say it's a non-issue.
- Eye-tracking and front-facing displays solve gaze opaqueness. We already have cut-out camera holes in consumer-grade displays.
I would say that THE current issue with passthrough VR is convergence: varifocal lenses are not consumer-level technology yet.
Latency issues might be subjective but there are clearly going to be population trends. I imagine these will be very similar to latency trends with VR and as such latency will be a significant issue.
Front facing displays are even weirder than someone with a headset on. I’m confused that anyone thinks sticking a view of someone’s eyes on a flat screen is going to be anything other than weird. It’s diving head first into the uncanny valley.
Cameras add latency. This causes nausea.
Camera resolution is low. When the screen is less than 10 cm from your eyes the grain is very noticeable.
Cameras only have limited focal range.
Camera dynamic range is low. In many settings it is hard to capture images that your eye could handle.
Apparently display technology is also a limitation.
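On the dynamic-range point, the gap is easy to quantify in stops, where each stop doubles the range of light handled (the stop counts below are rough assumed figures I've seen quoted, not spec-sheet values):

```python
# Rough comparison of the scene contrast a camera sensor vs. an adapting eye
# can span. Stop counts are ballpark assumptions: good sensors are often
# quoted around ~12 stops; the eye with adaptation spans ~20 or more.
camera_stops = 12
eye_stops = 20

camera_contrast = 2 ** camera_stops  # contrast ratio the sensor can span
eye_contrast = 2 ** eye_stops        # contrast ratio the adapting eye can span
print(f"camera ~{camera_contrast:,}:1 vs eye ~{eye_contrast:,}:1 "
      f"-> the eye handles ~{eye_contrast // camera_contrast}x more range")
```

Even granting the camera a generous stop count, every extra stop the eye has is a doubling, so the mismatch compounds fast in scenes like a dim room with a bright window.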
Here's one attempt at pass-through AR (thank you for that vocabulary, BTW):
Are you aware of any who do it better?
VR can work at 90Hz. If you spend money on a quality camera and a quality screen you can manage less end-to-end latency for AR. Focal range will be fine if you have a good sensor. Grain will be fine if you have a good screen. Compressing the dynamic range won't break anything. It doesn't have to work in every setting.
In a perfect world, I agree with you. VR would be nicer, especially once the Cameras include thermal/night vision/other.
Imagine directly overlaying a "4D" ultrasound in realtime with careful sensor position registration... it'd make all manner of medical procedures a lot clearer. You could immediately see which way a baby was turned (without training, even), for instance. I guess in those cases, you wouldn't even need conceptually seamless immersion, just precision sufficient for hand-eye coordination.
Commercial drone photo/videographers should be using a pilot to control the drone, while they themselves control the camera.
But, still... I'm nitpicking an awesome piece of kit.
I don't see VR providing a whole lot outside the area of entertainment. How often do you want to be completely removed from the physical world on a regular basis? Even when you do want immersion, a good story or a game with good mechanics is far more important in entertainment than the medium it's delivered through. This is the reason people still read books, why retro video games are still played, and why 3D continually fails. A high-tech medium doesn't improve bad entertainment, but great entertainment can be delivered through a low-tech medium. I'm not saying VR has no value, but I don't see it being some giant revolution.
On the flip side, AR doesn't provide immersion, but rather enhances the world around you. AR allows pretty much any information you get from your phone or computer currently to be instantly hands free and always available. If I told you I had a pill that allowed you to remember 20% more than you currently do, I can't think of many industries that wouldn't be improved by giving that to every employee. Once headsets become light enough and cheap enough, that's what AR provides. Instant access to relevant information without diverting focus.
VR seems to me like an extension of 3D technology, AR seems like an extension of the smartphone.
You can code in VR, but it's crap.
VR is great, but trying to use VR as a replacement for classic computers isn't the right way. It's an entirely new paradigm. Take Tilt Brush, for example - you're creating virtual 3D art which can only be viewed in a headset, and that's how I'd view content creation in VR.
No, right now VR's best market is games and storytelling, that's what will really push the technology forward and eventually make the business cases a possibility as the tech becomes viable.
I'm not saying that wouldn't be cool, but using coding as an example, how much more immersed are you going to be coding in VR as opposed to going into a quiet room with multiple monitors where no one will disturb you? The extra immersion is cool, but I don't think being slightly more immersed in the way VR provides will lead to any sort of technological revolution.
> But AR? What can I get with it that I can't with a phone?
It's not about getting new information, it's about not needing the phone to provide that information. Think about a surgeon having the parts of the heart overlaid on the body while doing surgery. Think about firefighters having the blueprints of the building they are entering in front of their eyes.
The value of AR isn't providing new information, it provides an alternative to relying on memory for that information.
Well, having access to as many monitors of any size you want without taking up any space in your home or office is a good enough reason for many.
I must be misunderstanding what you mean by 3D continually failing. 3D titles are the majority of the $140 billion game business. Titles like Fortnite, Minecraft, League of Legends, and Call of Duty - to name a few billion-dollar titles - are all 3D. I don't know if there is a single billion-dollar 2D title. Maybe the top 3 mobile games?
Fortnite, Minecraft, Call of Duty, etc. are (largely) presented on two-dimensional screens.
Even if it was about 3D on 2D screens vs 3D in VR, VR wins in almost every case (except market share, haha). Put a non-gamer in a well designed 6DOF VR game with hand controllers and they'll pick it up immediately because instead of having to learn how to control the camera they just look, and instead of having to learn which of the 8 buttons to press on their Xbox/PS4 controller to do anything they just reach out and touch stuff.
Anecdata: My mother (77) and my stepfather (82) have both never played a video game in their lives. I'm not sure my mom has even played Pong or Pac-Man or Tetris, but I brought an Oculus Rift home 2 years ago to show them, and they were both enamored and wanted to know how much it cost and where they could buy one. I advised them not to, since it would likely be used for 2-3 days and then not touched (not enough content), but still, it was neat to see them pick it up easily.
Of course there are poorly designed VR experiences. The ones that use fewer controller buttons and more in-world UIs arguably work best.
Therein lies the problem: if information pops up that's relevant to our conversation, you're still getting pulled out of the moment in an asymmetric way. That has negative consequences for human interaction.
Plus don’t forget — popping up the right information at the right time is an extraordinarily hard problem to solve.
I agree with you there, but I'd argue it's less distracting than trying to pull something not readily available from memory. Sure, having to read something is distracting for a second, but it's far less distracting than trying to remember something that's just on the tip of your tongue, or having to pull out your phone.
> popping up the right information at the right time is an extraordinarily hard problem to solve.
It is for a generic solution, but I feel like custom-built solutions will be what really revolutionizes the workplace. If I'm a UPS driver, I can have a custom UPS app that lets me look into my truck and instantly see the package ID of each box overlaid on the box. That's not a hard problem to solve with the proper AR tools, and it instantly provides value.
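The "package ID overlaid on the box" part is, at its core, just projecting a known 3D point into the camera image once the AR toolkit has given you the box's position. A toy pinhole-projection sketch (the intrinsics and the box position are invented numbers, not from any real device):

```python
# Toy pinhole projection: where on screen to draw a label for a box whose 3D
# position (in camera coordinates, meters, z pointing forward) the AR toolkit
# has already estimated. FX/FY/CX/CY are made-up intrinsics for a 1920x1080 feed.
FX, FY = 1400.0, 1400.0  # focal lengths in pixels
CX, CY = 960.0, 540.0    # principal point (image center)

def project(x, y, z):
    """Project a camera-space 3D point to pixel coordinates (u, v)."""
    assert z > 0, "point must be in front of the camera"
    u = CX + FX * x / z
    v = CY + FY * y / z
    return u, v

# A box 2 m ahead and 0.5 m to the right of the camera: its label lands
# right of the image center, at the same height as the center.
u, v = project(0.5, 0.0, 2.0)
```

Everything hard (pose tracking, detection, depth) lives upstream of this step, which is why "proper AR tools" that hand you the 3D position make the app itself feel easy.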
I can probably think of a million different uses for an AR display.
What’s really missing is a good way to interact with that display. Voice recognition and hand gesture recognition need to improve.
Meanwhile, VR can take me to another world. Games like Thumper, Audioshield and Beat Saber are enough to sell me on a crazy musical experience.
Imagine eye glasses that allowed you to just "zoom in" by clicking your fingers. Or glasses that seamlessly translate written text into your native language. Or highlighted the path you should take whilst walking to work using GPS navigation. Any of those alone would be compelling products, the cool thing is these are just possible "apps" for an AR device.
But the "killer apps" you mention are all just slightly more convenient versions of existing products (smart phones, binoculars) and are thus very dependent on a small enough form factor to realize that convenience.
Not having to carry and pull a laptop out to view live Google Maps directions is a killer app. Not having to pull my phone out is a much smaller convenience. It only becomes a killer app when the convenience of wearing smart glasses surpasses that of having a smartphone in your pocket (which is a long way off).
I do think there are killer apps in the nearer future. Anything where the rapidity of access to information conveys significant advantages is far less dependent on the form factor for success. You see this in the existing success of AR for business, where those efficiencies can boost productivity.
AR promises to be much more of a light touch in mixing digital content with the physical. You’re still grounded in reality so the motion sickness goes away (but if not well calibrated/tracked and is overused can still be annoying).
That said, the total number of applications where this is desirable is way less than predicted by those who want a big new product category. You're still wearing a screen on your face, in a world that's largely starting to rebel against screen culture. Everyday use has a lot of problems in terms of attention management, leading to dystopia if not done correctly. There are targeted everyday applications that can be useful, although many are hard to justify over the cheaper 2D screen you already have in your pocket and certainly won't get rid of, simply because of both the keyboard and the cognitively grounded value of direct manipulation.
Also how would I text privately with someone? One theory I have is that we will eventually just carry around phone like slates which AR would display normal phone stuff on.
But I agree. I never said AR would replace keyboards -- just screens.
> While Apple and Microsoft strain to sell augmented reality as the next major computing platform,
Is that accurate? Is Apple putting a lot of effort into selling AR?
Frankly I think the use of the word "strain" in that quote is a strain itself. They're definitely working on it, and it's there, but I wouldn't call it a feature they expect to shift units.
Like many others, I would like to be able to keep an eye on my kids and kiss my wife while I watch TV.
I know, how disgustingly human of me, right?
They were secretive and lacked decent SDK support.
When we tried to buy the second one it was delayed again and again by an insistence on US manufacturing and an inability to scale.
When they finally announced that our preorder was available 18 months later they wanted me to create a completely new order since my credit card was no longer valid (I decided not to waste my time or money).
John Carmack, in a recent talk, doesn't seem to consider VR as a way to visit a virtual environment any more. It's just a way to get a wider field of view of screen space in a mobile-sized package. Watching Netflix with VR goggles emulating a wide screen, for people who don't have enough space for a wide screen, is seen as a primary use.