> This chip powers its augmented reality headset that works by shooting light directly onto your eye, rather than sticking a screen in front of it.
I'm curious to know exactly what they mean by shooting light directly into your eye. Wouldn't you see the source? That is basically what a screen does, though less direct. It makes light, some of which enters your eyes. You see more light when you look at the source, the screen.
I'm thinking "looking into a tiny projector focused on your retina", but I'm having a hard time imagining what what would be like, since I've literally never experienced anything like that. Looking into a normal projector wouldn't be comparable really, as the focus for those is a field much larger than your retina. You'd need something with a lens that would focus it down to dime-size or smaller to get a similar effect, and I doubt you'd want to do this with anything more powerful than a pocket projector. Might be a fun weekend project to hack a new lens system into one and see what it can do if the internals are even hackable enough to make something like that possible (I'm thinking focal lengths and lens types).
Ah great. I wonder why so many news articles are all "How does it work?! Nobody knows... ooo..." when the details are all there.
Btw, in case nobody wants to read it: the way the scanning fibre endoscope works is that the fibre acts as a very fast one-pixel camera (send down R, G and B light and record the reflection magnitude). To build up an image, the tip is vibrated in a circular pattern of varying diameter (a spiral) using a piezo crystal.
The display version would modulate the strength of the R, G and B lights to transmit an image.
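Here's a toy NumPy sketch of that scan pattern, my own illustration rather than anything from Magic Leap's actual hardware: sample a scene one point at a time along a spiral of growing radius (the camera pass), then write those samples back out along the same spiral (the display pass):

```python
# Toy model of a scanning-fibre camera/display: the fibre tip sweeps a
# circle of steadily growing radius (a spiral), visiting one "pixel"
# position at a time. As a camera it records reflected intensity at each
# position; as a display it modulates the source at each position.
# Purely illustrative; real devices resonate the tip at kHz rates.

import numpy as np

def spiral_positions(n_samples: int, n_turns: int, max_radius: float):
    """Points along an Archimedean spiral of linearly growing radius."""
    t = np.linspace(0, 1, n_samples)
    theta = 2 * np.pi * n_turns * t
    r = max_radius * t              # varying diameter, as the piezo drives it
    return r * np.cos(theta), r * np.sin(theta)

# "Scene" the one-pixel camera looks at: a simple radial gradient.
size = 64
yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
scene = np.clip(1 - np.sqrt(xx**2 + yy**2), 0, 1)

# Camera pass: record one intensity sample per spiral position.
x, y = spiral_positions(n_samples=20_000, n_turns=100, max_radius=1.0)
cols = np.clip(((x + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
rows = np.clip(((y + 1) / 2 * (size - 1)).astype(int), 0, size - 1)
samples = scene[rows, cols]        # sequence of single-pixel readings

# Display pass: writing the samples back out along the same spiral
# rebuilds the image, which is all the display version does with the
# modulated R/G/B sources.
rebuilt = np.zeros_like(scene)
rebuilt[rows, cols] = samples
print("mean reconstruction error:", float(np.abs(scene - rebuilt).mean()))
```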
I've been following this tech like Sherlock on crack, it's just that exciting!
The idea of a vibrating fibre is just so amazing that I really do think they will pull this off. I'm just curious how large the control box will be, and whether they can manage to shrink it down to glasses size.
Very interesting, thank you.
Seems I was wrong in my assumption, though I'm interested in why they keep using the term 'Lightfield'. This is a better patent claim page for my understanding of how the final product will work.
https://www.google.com/patents/WO2013077895A1?cl=en&dq=inass...
It's been whispered on old blogs that I can no longer find that the founder started with AR software and then stumbled upon this new tech - so you might both be right.
How does Magic Leap compare with Microsoft's Hololens? It appears that both are implementing AR in slightly different ways. I was completely blown away, though, by the demo videos of the Hololens -
Magic Leap are promising MUCH more - realistic occlusion of virtual objects by real ones, dynamic transparency (allowing both AR and fully immersive VR), natural depth of field (refocusing on virtual objects without simulated blur), a scanning fiber light-field display, fully accurate hand tracking, both very near (holding something in your hand) and far (things flying through streets) depth tracking... you get it. Not to mention, they haven't even said anything about their cloud platform yet, which you can bet Google are heavily involved with... combining Tango area mapping with Google Maps... they are trying to solve every problem in AR/Computer Vision, so it's no wonder people are skeptical. But if anyone's going to succeed, they have a pretty impressive team & investor lineup.
Not to mention, Magic Leap are trying to build a whole ecosystem designed for AR experiences, right down to retail stores. Whereas Microsoft simply seem to be making 'Windows-AR'.
Abstract: A system may comprise a selectively transparent projection device for projecting an image toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the projection device from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device; and a zone plate diffraction patterning device interposed between the eye of the viewer and the projection device and configured to cause light from the projection device to pass through a diffraction pattern having a selectable geometry as it travels to the eye.
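To make the "occluding pattern correlated with the image" part of that abstract concrete, here's a simplified per-pixel model in Python. The threshold-on-alpha rule and the multiplicative blocking panel are my own assumptions for illustration, not the patent's actual optics:

```python
# Sketch of an occlusion mask correlated with the projected image:
# wherever the virtual image is opaque, a transmissive panel behind the
# projector blocks the real-world light so the virtual object doesn't
# look ghostly. Simplified model, not the patent's zone-plate optics.

import numpy as np

def occlusion_mask(virtual_rgba: np.ndarray, threshold: float = 0.5):
    """Per-pixel transmission for the blocking panel:
    1.0 = pass real light through, 0.0 = block it."""
    alpha = virtual_rgba[..., 3]
    return (alpha < threshold).astype(float)

def composite(real: np.ndarray, virtual_rgba: np.ndarray) -> np.ndarray:
    """What the eye sees: real light gated by the occlusion mask,
    plus the projected virtual image weighted by its alpha."""
    mask = occlusion_mask(virtual_rgba)[..., None]
    alpha = virtual_rgba[..., 3:4]
    return real * mask + virtual_rgba[..., :3] * alpha

# Tiny demo: a bright real-world background and an opaque virtual square.
real = np.ones((8, 8, 3)) * 0.9
virtual = np.zeros((8, 8, 4))
virtual[2:6, 2:6] = [1.0, 0.0, 0.0, 1.0]   # opaque red square
out = composite(real, virtual)
print(out[4, 4], out[0, 0])  # red where occluded, background elsewhere
```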
You're going to have to expand on that vague explanation because I still think it is fairly obviously impossible. The best you could do is be much brighter than the background (which may be quite effective in a dimly lit room but isn't true occlusion).
It seems to me that whenever anyone tries to do everything at once, they almost always fail. On the other hand, I can see how progress is incremental, and we have amazing things thanks to competition and many incremental improvements. Apple is a partial, notable exception, and has benefited immensely from it. So I do hope they succeed, AR would be amazing, but your description of their ambition makes me doubt it.
The "screen based VR" companies should be right out nervous if they manage to accomplish even the basics of those lofty goals. Though, from what have been described from those who have tested the Magic Leap, it is currently a HUGE device. I'm sceptical if they can get it down to even Oculus size.
I thought they would be nervous too but there is still a market for screen based VR. Like the holodeck experience. Sometimes you just need to take people to a whole different place, one that doesn't include your current surroundings. #readyplayerone
Working out the hardware is the first part. Getting enough developers working on VR is much more important. I think Google is playing it clever here by building up the hype and attracting them early. Also, lots of 'traditional' VR people will switch.
> realistic occlusion of virtual objects by real ones
This - how is this possible?
Also, in the demo I noticed that the robots that landed on the higher floor were placed at the correct height. I know it's a demo and not based on a PoC; but if they deliver that, it will be amazing tech.
Despite there being a lot of journalists who have used Hololens, it is surprisingly difficult to get an account of how it works or how well it works.
I have, however, read that most of the augmented graphics on the Hololens are semi-transparent, and that the area of the visual field it can augment is limited.
I also wonder how well it calibrates. I saw a presentation the other day about calibrating AR, and the top-of-the-line calibration in academia is not super good.
As someone who is developing in the same space (AR), I wonder whether Magic Leap is going to be like the Segway: there's something interesting there, but the reveal is not going to be that much of a breakthrough.
It just has too much backing from people who don't get suckered for that to be the case, though, so it is really interesting.
They have a ton of very high-profile investors, it can't be purely hype. It's hard to believe Google would put half a billion dollars into a company without seeing something concrete and impressive.
As a musician, I'm curious about the potential for using the VR/AR environment as a production tool. DAWs are pretty static, but giving depth to composition by way of more granular interaction seems possible. With so much digital progress in emulation and networking, it would be very cool to finally have a VR/AR "jam room" that could unite musicians across the country/globe in one room to collaborate. A distant future, sure, but a reasonable application.
As a capitalist, my biggest question is...I'm half joking here, but tongue in cheek serious...How is this eventually going to be used for porn?
Network latency is going to be a significant factor for musicians jamming together. I could see it working for musicians living in the same city, or maybe a few hundred miles from each other. If they're on different continents, the speed of light itself becomes a limiting factor:
It comes to about 1/10th of a second for people on opposite sides of the globe, even under ideal network conditions. Okay for conversation, but could be a problem for musicians playing together.
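The arithmetic, for anyone checking: light in optical fibre travels at roughly two-thirds of c, and the antipodal great-circle distance is about 20,000 km, so the one-way floor really is on the order of 100 ms:

```python
# One-way latency floor for antipodal endpoints over ideal fibre.
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3             # light in glass travels at ~2/3 c
HALF_CIRCUMFERENCE_KM = 20_037   # half of Earth's ~40,075 km circumference

one_way_s = HALF_CIRCUMFERENCE_KM / (C_VACUUM_KM_S * FIBRE_FACTOR)
print(f"one-way: {one_way_s * 1000:.0f} ms")  # ~100 ms, i.e. ~1/10 s
```

And that's before adding router hops, buffering, and audio interface latency, all of which stack on top of the physical limit.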
Don't get me wrong, I think there are a lot of interesting uses for this technology if they can get it to work as promised.
Yeah, it's been one of the more prolonged riddles.
However I can say from firsthand experience that the JamKazam platform is usable and functional. Distance and connectivity do matter, that is true. Even on my very limited connection using a relatively modest USB audio device, I've had success on several occasions with one to several participants.