Magic Leap Announces Its Augmented Reality Developer Platform (techcrunch.com)
90 points by xasos on June 2, 2015 | 51 comments



> This chip powers its augmented reality headset that works by shooting light directly onto your eye, rather than sticking a screen in front of it.

I'm curious to know exactly what they mean by shooting light directly into your eye. Wouldn't you see the source? That is basically what a screen does, though less direct. It makes light, some of which enters your eyes. You see more light when you look at the source, the screen.


Every single display works by projecting photons into the eye.


If that were the case, only one person could see your monitor at a time. Instead, monitors send light across a very wide viewing angle.


Light is made of photons. Every single display shoots photons into your eyes. Traditional displays send photons into everybody's eyes.


Keyword is directly, to contrast it with, say, a page of [e]paper that reflects light into your eyes.


I'm thinking "looking into a tiny projector focused on your retina", but I'm having a hard time imagining what that would be like, since I've literally never experienced anything like that. Looking into a normal projector wouldn't really be comparable, as the focus for those is a field much larger than your retina. You'd need something with a lens that would focus it down to dime-size or smaller to get a similar effect, and I doubt you'd want to do this with anything more powerful than a pocket projector. Might be a fun weekend project to hack a new lens system into one and see what it can do, if the internals are even hackable enough to make something like that possible (I'm thinking focal lengths and lens types).


I think they are talking about virtual retinal displays? Avegant Glyph is planning to use them in their Kickstarter-funded VR headset.

http://en.wikipedia.org/wiki/Virtual_retinal_display



No, it's absolutely NOT the tech they are using.

They are using this tech: https://www.youtube.com/watch?v=1UF9fJtZHAY

It's like a piezo endoscope run in reverse: instead of recording an image, it projects one onto the eye itself.

The patent can be seen here: https://encrypted.google.com/patents/WO2014113506A1?cl=en

The full term for this tech is: scanning fiber display with piezoelectric actuator


Ah great, I wonder why so many news articles are all "How does it work?! Nobody knows..." when the details are all there.

Btw, in case anyone doesn't want to read it: the scanning fibre endoscope works by using the fibre as a very fast one-pixel camera (send down R, G and B light and record the reflection magnitude). To build an image, the tip is vibrated in a circular pattern of varying diameter using a piezo crystal.

The display version would modulate the strength of the R, G and B lights to transmit an image.
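
For intuition, here's a toy sketch of that scan pattern (plain Python, my own illustration, not taken from the patent): the piezo drives the tip in circles while the radius grows over the frame, so a single modulated light source sweeps out a 2D image area in a spiral.

```python
import math

def spiral_scan_points(n_points, n_turns, max_radius):
    """Sample the spiral path traced by a vibrating fiber tip.

    The radius grows linearly from 0 to max_radius while the tip
    circles, covering the image area in an outward spiral rather
    than a raster scan.
    """
    points = []
    for i in range(n_points):
        t = i / (n_points - 1)                # 0 .. 1 over one scan frame
        angle = 2 * math.pi * n_turns * t
        radius = max_radius * t               # circle diameter varies over time
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

# At each point, the display version would modulate the R, G and B laser
# intensity to the pixel value there; the endoscope version instead
# records the reflected magnitude at that point.
path = spiral_scan_points(n_points=1000, n_turns=50, max_radius=1.0)
```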


I've been following this tech like Sherlock on crack; it's just that exciting!

The idea of a vibrating fibre is just so amazing that I really do think they will pull this off. I'm just curious how large the control box will be and whether they can shrink it down to glasses size.


So this would project an image onto a flat surface; how does it actually project directly onto your retina?


Very interesting, thank you. It seems I was wrong in my assumption, though I'm interested in why they keep using the term 'Lightfield'. This is a better patent claim page for my understanding of how the final product will work: https://www.google.com/patents/WO2013077895A1?cl=en&dq=inass...


I believe they're actually researching both.


Do you have any proof? I thought so as well... but apparently not.


It's been whispered on old blogs that I can no longer find that the founder started with AR software and then stumbled upon this new tech, so you might both be right.


Well, time will tell. Thanks for the heads up.


How does Magic Leap compare with Microsoft's HoloLens? It appears that both are implementing AR in slightly different ways. I was completely blown away, though, by the demo videos of the HoloLens:

https://www.microsoft.com/microsoft-hololens/en-us


Magic Leap are promising MUCH more - realistic occlusion of virtual objects by real ones, dynamic transparency (allowing both AR and fully immersive VR), natural depth of field (refocusing on virtual objects without simulated blur), a scanning fiber light-field display, fully accurate hand tracking, both very near (holding something in your hand) and far (things flying through streets) depth tracking.. you get it. Not to mention, they haven't even said anything about their cloud platform yet, which you can bet Google are heavily involved with.. combining Tango area mapping with Google Maps.. they are trying to solve every problem in AR/Computer Vision, so it's no wonder people are skeptical. But if anyone's going to succeed, they have a pretty impressive team & investor lineup.

Not to mention, Magic Leap are trying to build a whole ecosystem designed for AR experiences, right down to retail stores, whereas Microsoft simply seem to be making 'Windows-AR'.


There's no way to occlude real objects. Think about it, it's just shining more light in your eyes. You can't shine "dark" into someone's eyes.

I don't believe they've promised that anywhere either. In fact where are you getting any of this information?


This is the patent: https://www.google.co.nz/patents/US8950867

Abstract: A system may comprise a selectively transparent projection device for projecting an image toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the projection device from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device; and a zone plate diffraction patterning device interposed between the eye of the viewer and the projection device and configured to cause light from the projection device to pass through a diffraction pattern having a selectable geometry as it travels to the eye.


The OP claims the reverse: "realistic occlusion of virtual objects by real ones"

With a good 3D model of the real world, that is possible (but not easy to get right with a possibly rapidly moving camera)
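
As a toy illustration of that depth-test idea (the function, pixel lists and numbers are all hypothetical, not Magic Leap's actual pipeline): given a per-pixel depth map of the real scene and the rendered depth of a virtual object, you simply drop the virtual pixels that a real surface sits in front of.

```python
def composite(virtual_color, virtual_depth, real_depth):
    """Keep a virtual pixel only where it is closer than the real world."""
    out = []
    for c, vz, rz in zip(virtual_color, virtual_depth, real_depth):
        # Draw the virtual pixel only if no real surface is in front of it;
        # otherwise leave it transparent so the real object shows through.
        out.append(c if vz < rz else None)
    return out

# A virtual cube at 2 m, partially behind a real pillar at 1.5 m:
colors  = ["cube", "cube", "cube", "cube"]
v_depth = [2.0, 2.0, 2.0, 2.0]
r_depth = [1.5, 1.5, 3.0, 3.0]   # the pillar covers the left half
print(composite(colors, v_depth, r_depth))  # [None, None, 'cube', 'cube']
```

The hard part in practice is not this comparison but getting an accurate, up-to-date real-world depth map from a rapidly moving headset.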


And it also claims dynamic transparency. That would be a way to have real objects occluded by virtual ones, depending on what it means.


Yes there is: you render the real environment map as a "cloak." Pretty simple conceptually.


You're going to have to expand on that vague explanation because I still think it is fairly obviously impossible. The best you could do is be much brighter than the background (which may be quite effective in a dimly lit room but isn't true occlusion).


It seems to me that every time anyone tries to do everything at once, they almost always fail. On the other hand I can see how progress is incremental, and we have amazing things after competition and many incremental improvements. Apple is a partial, notable exception, and thus benefited immensely. So, I do hope they succeed, AR would be amazing, but your description of their ambition makes me doubt it.


The "screen-based VR" companies should be downright nervous if Magic Leap manage to accomplish even the basics of those lofty goals. Though, from what has been described by those who have tested the Magic Leap, it is currently a HUGE device. I'm sceptical they can get it down to even Oculus size.


I thought they would be nervous too but there is still a market for screen based VR. Like the holodeck experience. Sometimes you just need to take people to a whole different place, one that doesn't include your current surroundings. #readyplayerone


Magic Leap claim realistic occlusion. That means black can effectively be projected on the screen, and thus VR is possible.


Working out the hardware is the first part. Getting enough developers working on VR is much more important. I think Google is playing it clever here by building up the hype and attracting them early. Also, lots of 'traditional' VR people will switch.


> realistic occlusion of virtual objects by real ones

This - how is this possible?

Also, in the demo I noticed that the robots that landed on the higher floor were placed at the correct height. I know it's a demo and not based on a PoC; but if they deliver that, it will be amazing tech.


Actually, that is their PoC. It actually works. Watch this video by Graeme Devine for more on how they do it https://www.youtube.com/watch?v=oAAqbKSjRrk


This Reddit thread suggests that the video is a concept video, not taken from their PoC: http://www.reddit.com/r/oculus/comments/2zvuoz/occlusion_mas...


Despite there being a lot of journalists who have used HoloLens, it is surprisingly difficult to get an account of how it works or how well it works.

I have, however, read that most of the augmented graphics on the HoloLens are semi-transparent and that the area of the visual field it can augment is limited.

I also wonder how well it calibrates. I saw a presentation the other day about calibrating AR, and the top-of-the-line calibration in academia is not super good.


Oliver Kreylos probably has the best description of the current demo: http://doc-ok.org/?p=1223


Those who have tested the latest version complained a lot about the FOV being very small, thus breaking immersion.

Though it's the first-gen version, so I guess it will become much better.


As someone who is developing in the same space (AR) I am wondering if ML is going to be like the Segway in that there is something interesting there but the reveal is not going to be that much of a breakthrough.

It just has too much backing from people that don't get suckered for that to be the case though, so it is really interesting.


Everything is 'upcoming' with this company.


Yes, and I'd be loath to spend any real time learning the SDK of a device that is so far mostly concept videos (as far as I can tell).

That won't make a whit of difference to them of course, it's not like I'm known in the gaming space anyway...

I'm pessimistically hopeful that they aren't just hype and don't spend so long trying to get it right that they're irrelevant by the time they launch.


They have a ton of very high-profile investors, it can't be purely hype. It's hard to believe Google would put half a billion dollars into a company without seeing something concrete and impressive.


And Bezos and Jobs invested in the Segway, I'm not saying that it's pure hype, just that all they've shown is hype.


To be fair, the Segway did what was promised, but it turned out that it wasn't so useful in practice.


A plausible outcome for this technology as well.


Glass.


This is the future and is going to revolutionize everything. Would be worth it (lucrative) to start now!

I am thinking of abandoning my current, well paying, web dev career just to do these types of games and experiences.


As a musician, I'm curious about the potential for using the VR/AR environment as a production tool. DAWs are pretty static, but giving depth to composition by way of more granular interaction seems possible. With so much digital progress in emulation and networking, it would be very cool to finally have a VR/AR "jam room" that could unite musicians across the country/globe in one room to collaborate. A distant future, sure, but a reasonable application.

As a capitalist, my biggest question is...I'm half joking here, but tongue in cheek serious...How is this eventually going to be used for porn?


Network latency is going to be a significant factor for musicians jamming together. I could see it for musicians living in the same city or maybe a few hundred miles from each other. If they're on different continents, the speed of light will become a limiting factor:

http://royal.pingdom.com/2007/06/01/theoretical-vs-real-worl...

It comes to about 1/10th of a second for people on the opposite side of the globe under ideal network latency. Okay for conversation, but could be a problem for musicians playing together.
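
That 1/10th-of-a-second figure is easy to sanity-check. A back-of-the-envelope sketch, using my own assumed numbers (light in optical fiber at roughly 200,000 km/s, i.e. about 2/3 of c, and half of Earth's ~40,000 km circumference):

```python
# Light-speed floor on latency between near-antipodal points, ignoring
# routing detours, switching and serialization delays entirely.
C_FIBER_KM_S = 200_000           # light in optical fiber: roughly 2/3 of c
HALF_CIRCUMFERENCE_KM = 20_000   # farthest surface distance on Earth

one_way_s = HALF_CIRCUMFERENCE_KM / C_FIBER_KM_S   # ~0.1 s one way
round_trip_ms = 2 * one_way_s * 1000
print(f"one way: {one_way_s} s, round trip: {round_trip_ms:.0f} ms")
```

For ensemble playing, one-way delays much beyond the commonly cited ~25-30 ms threshold are considered disruptive, so the physics alone rules out antipodal jamming regardless of how good the network gets.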

Don't get me wrong, I think there are a lot of interesting uses for this technology if they can get it to work as promised.


Yeah, it's been one of the more prolonged riddles.

However I can say from firsthand experience that the JamKazam platform is usable and functional. Distance and connectivity do matter, that is true. Even on my very limited connection using a relatively modest USB audio device, I've had success on several occasions with one to several participants.


VDK - vapourware development kit.


I'm not really understanding what the announcement is here. They've had a Developers section on their site for almost 6 months?


Still only have his word to go on. Still no hardware actually demonstrated. Won't believe it till I see it.



