I'm co-founder and research director of a mobile AR software company, Zappar.
I blogged on Magic Leap a few weeks ago, and came to some of the same conclusions as the author of the Gizmodo piece - namely that a device that can be like Google Glass and Oculus and everything in between could be a real game changer.
Edit: I also talk a bit about the requirements for an AR experience, existing approaches to AR HMDs, and the big problems to be tackled on the road to consumer adoption. I've got a PhD on the software side of AR and am not an expert on the hardware, but hopefully I know enough to offer an interesting perspective.
I'm new to HN too, so let me know if I should be posting this as a separate item or not!
It's more about capturing the spectrum from AR to VR in a single device. With controllable opacity (which I argue is required for a truly useful AR display) you also have the option to deliver purely VR experiences by blocking out the real world completely.
I bet some of it is also about expectations. If you just say AR, people think of low-poly characters in a webcam on top of a black and white checkerboard. They're not interested.
And from the video linked in the Gizmodo article, Graeme Devine claims they don't call it AR because "that is just a 2D HUD", and that's why they insist on the weird marketing phrase "Cinematic Reality".
To me, and to most people who have been around AR for years, it is very clear that registration of objects in 3D is a core part of a lot of AR experiences. That was the part of the video that struck me as most weird: that Magic Leap would make a lot of noise around AR and yet have a Creative Content VP seemingly so unaware of the existing AR field.
By far the best novel I've seen that looks at a near-future (2025) world where most people spend most of their time using augmented reality is Vernor Vinge's Rainbows End.
I loved Rainbows End. Recently I read Daemon/FreedomTM, and it paints a very different picture. But this technology could be used to create the Darknet, the alternative web/economy in the books.
> That book (along with a small handful of others) made near-future scifi my favorite genre. Any other suggestions?
Which others?
If you've read Daemon and Freedom (TM), maybe you've already read Kill Decision, also by Daniel Suarez. It's about autonomous, swarming drones used in the military. Daemon is a better book though IMO.
There's Hieroglyph: Stories & Visions for a Better Future, a collection of short stories in the near-future sci-fi genre. Neal Stephenson founded Project Hieroglyph with the intent of advancing positive (non-dystopian) near-future sci-fi, and thus spurring on the next generation of scientists and engineers to go ahead and create it.
Peter Watts's Blindsight and Echopraxia are also good (I'm still reading the latter). They have scientifically plausible zombies and vampires, so that's something.
Ramez Naam's Nexus has very interesting ideas on what happens to the world with very good brain-computer interfaces. It's set in the near future, but I do think the tech is implausible for so soon.
William Hertling's Avogadro Corp is about a paperclip maximizer built by mistake by a Google-like company, with a Gmail-like service that is manipulated by said paperclip maximizer.
The Martian (Andy Weir), Mind's Eye (Douglas Richards), and Wool/Shift/Dust (Hugh Howey; set quite a bit in the future, but the story starts, chronologically, with near-future sci-fi).
I haven't read Kill Decision yet, I've heard mixed reviews. The thing I loved most about Daemon and The Martian is that it's all completely plausible that these things could happen in my lifetime. Hell, they could happen right now if someone was willing to put forth the money. Nothing is particularly outlandish or impossible.
I'll have to check out your suggestions, thanks. It's my favorite genre, but I struggle to find worthwhile books in it.
I really enjoyed Accelerando by Charles Stross. The book spans from present-day to post-singularity, and plays with the idea of what AI is along the singularity curve.
There's also the even-closer-future of REAMDE by Neal Stephenson.
I think that the HUDs in Daemon/FreedomTM are the least necessary part of the possible future Suarez paints. Being able to see virtual stuff is impressive, but what really matters is having a stable digital currency, an actually democratic political system, the manufacturing technology required to create self-sustaining local polises, etc., etc.
Interesting! The plot device reminds me of a dystopian short story in which the perfectly ordered world the protagonist inhabits ultimately transpires to be a retinally projected image seen through government-prescribed and mandated glasses that are permanently fixed to all citizens from birth.
Only when he fails to heed warnings to visit his optometrist to have them adjusted does the reality come to light.
Probably that all hardware has to be based on a government controlled "Secure Hardware Environment" (SHE). Security is a big part of the book - as you might expect from something that has key characters called Alice and Bob.
Remember when they bring down the central bank and every computer, every car, everything stops working because the certificate authority chain has failed?
Whenever I try to get motivated to work on VR, CAVE and similar projects I reread "Ready Player One" and secretly that's the world I'm building towards :)
I couldn't get into Snow Crash because it seemed very overdone, seemed forced (the hero/protagonist's name is Hiro Protagonist for crying out loud). Would I be able to get into Ready Player One?
Haven't read Snow Crash, but really enjoyed RP1. Though it's set in the semi-near future, it's got a plot device that means the (very well constructed) story is packed with pop culture references from the early 80s to early 90s. If you're the right age to have had an Atari, you're likely to enjoy it more than most.
That said, Palmer Luckey is too young to have grown up with this stuff, but is apparently such a big fan of the book that he asks all new hires at Oculus to read it before their first day on the job.
I read Ready Player One first and found it really compelling and interesting, I highly recommend it. Then I picked up Snow Crash and found it too preposterous in its delivery, physics, and libertarian social narrative to make it more than a few chapters - and I'm someone who loves reading Heinlein.
Great read. Love the Oasis, although if you want a more in depth read you should check out the Otherland series. Very long, but great representation of what a fully immersive VR world would be like.
People often talked about Steve Jobs's reality distortion field, but Apple shipped real products that did change the world. I will believe in a magic leap when I see it appear, not when I hear all the hype machine output.
I think it's odd that they're doing this kind of PR push when their product is so tenuous. If you look at Carmack's keynote and the low-level detail of how Oculus is struggling with latency and precision in head tracking, the idea that these guys will tackle a whole swag bag of deep problems in some kind of predictable time frame is pretty ridiculous. My guess is they've made promising inroads in the core display technology (LCD occlusion plus fiber-optic scan projection) that is very impressive on its own, but everything else they've got is merely state of the art.
PR push? This article doesn't seem like a PR push at all. It seems like someone scraping as much information out of as many sources as possible.
If it was a PR piece, they'd have animations and pretty renders about how all the things work and where they're going. Instead there are patent diagrams and quotes from job listings.
Yes! People's take on Magic Leap is so credulous. It's like they believe that this company is going to produce something with all the virtues of the Oculus Rift + Google Glass and none of the problems with either on a fraction of the budget and in a fraction of the time.
Agreed on some level. It leaves me with: either they have some awesome breakthroughs at each point along the critical path, or they're close enough that it's possible they'll overcome whatever is left.
Comparing it with Apple is totally unfair of course.
Even Apple does pre-announcements (e.g. for the iPhone and WATCH) when it makes sense. That said, given that they have patents on some of this stuff, you'd think they'd at least show some demos when they're trying to get developers interested in the technology (see the Unite 2014 presentation).
Personally I think published demonstrations of working products should be a requirement for obtaining patents (and, where appropriate, source code should be placed in escrow and released when the patent expires).
We are talking about something new in the works. Just like everyone else besides Apple, they need to get the word out before the product to get funding etc. I don't see how Magic Leap deserves a remark alluding to vaporware.
Yeah, I got a really weird feeling in the video of Graeme Devine from the bottom of the article. He introduced himself as being from magic leap, and kind of paused, as if waiting for people to break out into applause for the company that no one really knows anything about.
It sounds like they've got some trick that gives some experience of real objects - probably far from perfect (hi-latency, lo-res, etc); but that's a good thing for a company. It means they have somewhere to go when competitors catch up.
Maybe opacity and depth of field really fool your eyes?
As a comparison, consider how surprising it is that animation works at all: first, line-drawn cartoons look like real objects, animals, etc.; second, if you just change the images at a mere 24 fps, they come to life (the meaning of "animation" at the time).
The threshold is lower - a hell of a lot of hand-drawn animation is done 'on twos', with each frame shot twice, for an effective framerate of 12fps. 'Going on ones' is mostly reserved for motion that has to match a panning background (panning the whole screen at 12fps looks terrible, and so does someone whose feet are sliding back and forth with respect to the ground), or for really fast motion. Or occasionally for really smooth, subtle motions.
In my experience 10fps is when cartoon images start to break down into "a sequence of drawings" rather than "moving images". This is right on the line for me; I'm not sure how common that is for people who haven't trained their eyes to the point where they can pick up on a single frame being shot 3x instead of 2x.
I mean, yeah, 24fps looks better, 30 or 60 fps is even better, and the maximum frame rate the human eye can perceive seems to be anywhere between 255fps and 1kfps (I'm getting a lot of conflicting info from a quick google), but 12fps is pretty solidly acceptable for moving drawings.
Maybe very low-quality AR can be convincing, provided you're stationary (i.e. no panning-like effect)?
Relatedly, Carmack found some serious latency problems with Oculus Rift, but only with some particular combinations of object movement and changing viewing direction. He's thinking of fast video games with lots of movement. Just having a dragon floating in front of you is quite a lot more constrained.
Though I meant it as a general observation, i.e. animation is surprisingly effective if you do it the right way; perhaps AR will be too, if it's done the right way.
I am a bit puzzled by the names of Austin Grossman, Dave Gibbons and Andy Lanning being cited as employees (whatever that means).
Are they so confident in the technology that they have started to heavily invest in the content? Is this a way to position themselves as an entertainment company rather than a tech company?
1. An elephant on a flower isn't going to get someone to pay $500-$1000 for this hardware. You need very high quality content to get lots of consumers on board.
2. Moving from current cinema production to VR production is going to be a huge learning curve for a lot of artists. You need to create best practices and examples of how it is done by artists.
Those sound like real obstacles. Content is always a big problem, but because of the amount of money that has been poured into this and the people/companies backing it up, I'm pretty sure that they already have that one solved.
I agree with this. On top of already doing unbelievably disruptive technology, they are also doing several video games, feature films and "being in the same room as your favorite band" content?
Trying to do everything themselves seems overly ambitious. Then again I doubt they get funding like this without something provably awesome and a solid plan to move forward.
How can we be "ahead of schedule" when we still haven't nailed the first couple of tasks from a decade ago? Dictation, for example, is pretty good but it's not a solved problem.
* Translating telephones allow people to speak to each other in different languages.
* Machines designed to transcribe speech into computer text allow deaf people to understand spoken words
>> Translating telephones allow people to speak to each other in different languages.
The technology is there to build this kind of device; I agree it's not 100% accurate, but it's still practical enough to be used in real life. http://en.wikipedia.org/wiki/IraqComm
>> Machines designed to transcribe speech into computer text allow deaf people to understand spoken words
This too: the technology is there to build such a device. I would say we have much more powerful devices, in the form of smartphones, than ones that just do one or two things.
The Talko app on iPhone is actually much more than just a speech-to-text converter. http://www.talko.com/
I would also like to say that Singularity predictions don't imply that the technology will be so mainstream that every person will be using it. Even if we can only build a technology in some lab, it still counts as an event on the Singularity timetable.
I've yet to see any off-the-mark prediction from the Singularity timeline.
Pretty interesting quotes from a guy like Brian Cox:
"It's the premiere of a technology that allows you to put digital images into your field of vision directly," he said. "I saw the prototype in Miami a few months ago and it's stunning."
"It is going to be transformative technology, there's no doubt about that."
The experience will "disturb" audiences and put them "off balance", he predicted. "That's what it did when I saw it demonstrated."
The article also says that a demonstration will be made as a part of the Manchester International Festival next July.
On top of all the real people walking there, you'll have virtual people as well, who are all in their living rooms yet still interacting and conversing with everyone as if they were really there.
Everyone's Magic Leaps would be synced to show the same composite reality.
You could have virtual assistants to show you around the city, explaining the various attractions or just giving you some company.
Of course you'd need a gigantic network of overlapping 3D cameras to support such a feat, but we won't let concerns like feasibility get in our way. I dreamt of this once and now I'm really excited that I might get to see it within my lifetime.
$542 million! I'm used to big numbers from following the tech startup world but this seems huge. How many other half-a-billion dollar investments in tech are there?
Doing some Google research, I found some lists of top VC deals.
If this technology is as powerful as it claims to be, I could easily see Magic Leap eclipsing Apple as the most valuable company in the world as long as they hold those patents.
I suspect their strategy for finding investors is like this:
1. Potential investor is incredulous about the company's value
2. Investor gets a demo of their technology, and it's mind-blowingly amazing.
3. Investor asks them to name any price and gets out their checkbook, knowing that this has the potential to be the biggest invention of the century.
This seems like the future for Google Glass -- those articles a few days back about how the program is waning / losing steam seem to overlook Magic Leap, which feels more like Google doubling down.
I think Google Glass (aka notifications on your eyeglasses with a camera) and augmented reality are apples and oranges.
For notifications and wearables, glasses were an idea that has proven to be a bad choice for social as well as practical reasons.
Augmented reality is, well, adding to our reality, and I believe it's what people really want, NOT virtual reality. I can imagine an Amazon Echo with a butler image would be a HUGE hit as opposed to the audio-only option.
By all accounts, they are working on some sort of device that projects a light field onto the retina. But that is strictly additive. You can't darken anything. This image you see of the elephant in your hand - no way to do that. At best it'd be rather transparent.
Unless there is some aspect of vision that somebody here would like to chime in on [locality on the retinal sensor allowing to perceive darkness by lightening up other areas?], color me deeply skeptical. It seems like "fullscreen" Google Glass, at best.
This patent describes the "occlusion mask device" which is supposed to do exactly this - darken light coming from certain angles to grant the perception of blackness.
A big unanswered question I have is what the level of transparency is going to be when the display is inactive. For me it needs to be basically 100% to enable an "always worn" piece of hardware. The patents around the lightfield waveguides suggest polarisation is used to allow independent optical paths for rays from the real world vs rays coming from the virtual image. Again, I don't know much about optics so wonder if there's an electronic way to have that polarisation turned off for areas without any virtual content, or if there's always going to be a darkening effect on the real world.
Given they have hired creative writers, filmmakers, etc, I don't think they want to create an "always on" device but rather an entertainment device you use at home. See also the BBC article about Brian Cox describing something which sounds like a planetarium-style experience.
My main point is that making the ideal AR display is hard. To get anywhere close needs controllable opacity and virtual focus planes. From the patents that's definitely the direction Magic Leap are heading in.
If such a device exists and is 100% transparent when not in use then there are reasons why you might want to wear it all the time - think notifications like Google Glass but that can be embiggened and responded to on demand in a much larger portion of your field of view.
Early hardware is probably not quite going to be there in terms of portability, battery life, fashion, etc. So perhaps it makes sense to focus on in-home use cases first. However in the home-use scenario the value of AR is less clear; an Oculus planetarium may well be better than an AR one in my living room - I don't really want to limit the awesomeness of the size of the universe by the small size of my flat.
A lot of the cases where AR adds value are small but deeply context-relevant experiences. The consumer value equation for those experiences only really makes sense if the device is always on. That's the future I think we're heading towards.
Yeah, the controllable opacity is really important to the utility of the device IMHO.
I've yet to study the linked patent so can't say if that solves it but it's not as simple as just a layer of LCD pixels, as they would not be in focus.
It's sort of the same as the light-field generation problem - to generate images at arbitrary depths you need to control both the colour and direction of light rays. To generate sharp occlusions at arbitrary depths using a single 2D "masking" layer will require the layer selectively passing or blocking rays depending on their direction.
The patent mentions variable diffraction gratings, which I suspect might be part of the solution to controllable opacity based on ray directions, but I don't know enough about optics really to say for sure.
To expand upon what the problem is: imagine rays of light reflecting from a point going in all directions (essentially a sphere). A cone of those rays make it into your eye through the pupil. Putting a 2D layer just near to the eye means light from that one point is actually spread across a circle on that 2D layer, with diameter close to the pupil size (getting on for 4mm in indoor lighting). There is therefore significant overlap between the circles from distinct 3D points - so to block out just one of the points you need to selectively block the rays over that entire circle, but only those in the direction of the point we wish to block.
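The similar-triangles arithmetic here is easy to sketch. A rough back-of-envelope (the point and mask distances below are just illustrative numbers, and the eye's own optics are ignored entirely):

```python
def mask_footprint_mm(pupil_diam_mm, point_dist_mm, mask_dist_mm):
    # Diameter of the disc on a masking layer, mask_dist_mm in front of
    # the pupil, that is crossed by all rays from a point point_dist_mm
    # away that make it into a pupil pupil_diam_mm wide.
    # Pure similar triangles; no lens model.
    return pupil_diam_mm * (point_dist_mm - mask_dist_mm) / point_dist_mm

# A point 2 m away, mask 20 mm in front of a 4 mm pupil:
print(round(mask_footprint_mm(4.0, 2000.0, 20.0), 2))  # 3.96
```

So the footprint stays nearly pupil-sized for anything beyond arm's length, which is why a simple in-focus LCD pixel can't cleanly occlude a single 3D point - you have to gate rays by direction across that whole disc.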
Well that's not fair. The "occlusion mask" is referenced in a patent application, but is basically just an LCD sitting in front of your eye that blocks out light. re: fullscreen google glass, at best.
Well, that's exactly the point: an anti-display which lets through what parts of reality the system wants you to see, and blocks light from what it wants you to not see - so another display can render something to fill in the blocked-out areas. Rather than the "ghosting" of putting an elephant over your hand, with the light of both being added, the LCD acts as a dynamic precision cutout shutter, blocking the light from where the elephant would be and letting another display fill in _all_ the light you'd expect from that visual addition.
Does this mean that people looking at me using my magic leap will be able to see an outline of shapes which I'm projecting? That's going to be awkward.
\tangent I've always loved the idea of eye-tracking. It helps the whole graphics stack, because you don't need to render the whole "screen", just the exact part you're looking at. And you only need to render the center of it in sharp detail (what impinges on your hi-res fovea); the surroundings can be blurry and even monochrome (no cones outside the fovea, only rods!).
Unfortunately, because eyes saccade (twitch) extremely fast, latency must be ridiculously low or it would send you insane (figuratively).
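To put rough numbers on the foveated-rendering win: the 5° sharp region and 110° field of view below are my own assumed figures, and real acuity falls off gradually rather than at a hard edge, but even so the full-resolution region is tiny.

```python
def sharp_fraction(fovea_deg=5.0, fov_deg=110.0):
    # Fraction of display area needing full-resolution rendering if only
    # a region fovea_deg across must be sharp, treating the display as
    # flat and both regions as similar squares. Very crude.
    return (fovea_deg / fov_deg) ** 2

print(f"{sharp_fraction():.2%}")  # roughly 0.21%
```

Under those assumptions, well under 1% of the pixels need to be rendered at full resolution - which is exactly why the saccade latency problem is worth solving.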
It's the inverse of the camera demo on the page though, so that means they have figured out how to project light with the lasers down an optical fiber. Stick that nearly transparent fiber pointing straight at your eye, in your eyeglasses, say, and you have a way to paint on your visual input areas.
If it works, Magic Leap will mark an entirely new era in machine-human interface. It could eliminate the need for screens, keyboards, and even other input devices like light switches and thermostats by turning any surface into a tactile interface. Why actually attach a piece of hardware to the wall when I can just render it there whenever I look in that direction? Why carry along a physical keyboard when I can just 'project' one on any desktop at hand?
There's also an interesting health & wellness aspect to this -- could this be a way to free people working digitally from sitting at desks in front of screens all day?
This is, in my opinion, the most exciting technology in the popular press right now.
A lot of old technologies tend to hang on within a niche they are optimal for. We still have radio, cinema, and telephones even when each of those technologies has a better, more modern equivalent. I don't doubt that AR will find a niche, but I doubt it will be replacing $10 keyboards or 50c dials on thermostats. It is fantastic that Magic Leap has moved beyond the horribly mundane "helping people find a restaurant in San Francisco" model that Glass seemed to adopt. Fantasy AR overlays sound ridiculously addictive, as they would allow constant enveloping escapism.
I'm not sure if Fantasy AR overlays are really all that compelling. Unless they make significant progress in AI as well, most likely those will be entirely predictable and boring after a while. Although who knows maybe there are enough people around who want to have a virtual pet dragon. Most of the real world outside is entirely unsuitable as a canvas for any kind of interesting movie like experience. Traditional movie story telling artfully uses both cuts in time and transitions in space to tell a compelling story, neither of which is readily available.
> There's also an interesting health & wellness aspect to this -- could this be a way to free people working digitally from sitting at desks in front of screens all day?
Probably to the extent technology has freed people from commuting to the office every day. This is more a matter of culture than of technology alone. If we don't make a push to fix broken habits and change detrimental work culture these things will never really change.
> There's also an interesting health & wellness aspect to this -- could this be a way to free people working digitally from sitting at desks in front of screens all day?
I dreamed of a future where I could have magic leap contact lenses while hiking in the mountains. My brain being quite active when walking or running, that would be an awesome way to do productive and/or creative work.
Yes, because on a touch screen the keyboard is on the screen where you're looking anyway. I would never in my life use a projected keyboard on my desktop/laptop. Touch typing and tactile feedback.
I wonder if anyone has done work on some kind of chorded keyboard input just from touching fingers to your thumb - that could be done with some kind of sensor gloves and seems a pretty natural motion.
Might be quite tricky to learn though!
Edit: I see that Magic Leap are working on sensor gloves.
Good point, but some touch typing does happen with the keyboard in the field of vision, just not as the focal point -- when typing on a laptop and looking at the screen, for instance.
Can't believe no one here mentioned this, so I'll be the immature one. Imagine porn on this device. I mean just, you know, all the Scarletts and Jessicas.
It's not immature to make the observation. Porn has been a driver of new media technology throughout history - Magic Leap reminds me of nothing so much as a portable kinema viewer.
I wonder how such a "display" of the Magic Leap is controlled. Could it be that it's no longer a rasterized display? Maybe rather than coloring pixels on a plane, the developer is required to send display information of a volumetric object like in a volumetric display: http://en.wikipedia.org/wiki/Volumetric_display
In general you just provide the depth map too. You're still only rendering for one viewpoint (well OK, one for each eye), so the display just needs to simulate the depth and colour of the point nearest to you along any ray.
...distinct from volumetric displays, which support multiple simultaneous viewers and so need to actually render in terms of voxels.
Combining that with tracking depth cameras and sharing the map of the environment over the network would let multiple co-located users see and interact with the same 3D object (much like with a volumetric display) but the actual rendering of each user's view is decoupled.
Combining this with eye-tracking, as hyp0 mentions, takes this to the limit - you don't need to construct a "holographic display" that renders super-high-resolution light fields with rays in controllable directions to all the infinite possible viewpoints around the display. Instead just stick it on your head, locate the users in a shared depth map, and render only the slice their eyes need to see right now. In terms of CPU/GPU/bandwidth it's much more plausible with today's technologies. The display side is still challenging!
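That per-ray "nearest point wins" rule is just a z-buffer test against the shared depth map. A toy sketch - all the names and values here are illustrative, not anything from Magic Leap's actual pipeline:

```python
def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    # Per-ray composite of virtual content into a view of the real
    # world, using depth maps: along each ray keep whichever point
    # is nearer to the viewer.
    out = []
    for rr, rd, vr, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        out.append(vr if vd < rd else rr)  # virtual wins only when in front
    return out

real_colours = ["wall", "wall", "hand"]
real_depths  = [3.0,    3.0,    0.4]      # metres along each ray
virt_colours = ["dragon", "dragon", "dragon"]
virt_depths  = [1.5,      1.5,      1.5]
print(composite(real_colours, real_depths, virt_colours, virt_depths))
# ['dragon', 'dragon', 'hand']  <- the nearer hand occludes the dragon
```

Each user runs this against their own viewpoint, so the shared scene only needs to be agreed on once, in the depth map.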
This. I refuse to click on gawker links but I would love to read the article.
It would be nice with a pastebin / imgur link when people post things from gawker.
If a "lightfield" is projected into your eye, how do you cope with (additive) light input from the real world?
If it is real, will there be a law to tag 'virtual' things so I don't accidentally kill a real person when I thought they were part of my game? You know how Congress or the Senate would design that one ;)
Sounds awesome, but if it is as gizmodo describes it feels like one of Google's moon shots. We are just barely starting to see good VR... combining a real and virtual world together is another order of magnitude of difficulty.
I'd love to be proven wrong but don't see this for many years.
That sounds awesome. Also much less tangible than the Rift, obviously. But with that kind of funding, obviously some smart people believe they've got real potential. Colour me excited.
In the land of "not really good enough". Additively mixed at a single focus depth plane, not dissimilar to the hacks people have been using in research for AR for a long time.
It does however really annoy me the amount of press Magic Leap have managed to get without saying anything public at all. For me it's an opportunity to think and talk about the properties of an ideal AR HMD and why it might be an interesting device. I have no idea how close Magic Leap are going to get to that ideal.
I chose to call the concept "Controllable Reality" (I make no claim that I've invented the concept or the term!): http://www.zappar.com/blog/google-glass-magic-leap-and-the-i...
Simon