The sad thing is that the hardware actually looks pretty neat. This device should be cool enough that a realistic demo could easily sell it without misleading people. I hope Kickstarter starts cracking down on projects using pie-in-the-sky concept videos to raise expectations that they can't possibly deliver on.
- The demo included glowing hands of ice and fire. I did see my own hand with ice and fire glowing from the fingertips. I'm fairly sure I saw it blended around my fingers, rather than as an overlay on top.
- There was no perceptible latency; he let me type some stuff out on the virtual keyboard and the feedback was instant.
My only problem was it kept falling off my nose :)
Looks like it's an Intel depth camera (http://goo.gl/ivrwY) glued to some Epson glasses. With the depth map from the camera, occluding virtual objects behind real ones is possible.
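A minimal sketch of how that per-pixel occlusion test could work, assuming you already have the camera's depth map registered to the rendered virtual object's depth buffer (the array names here are made up for illustration):

    import numpy as np

    def composite_with_occlusion(camera_rgb, camera_depth, virtual_rgb, virtual_depth):
        # Show a virtual pixel only where the virtual surface is closer than
        # the real surface measured by the depth camera.
        # Depths in metres; 0 / NaN means "no reading", so treat it as far away.
        real = np.where(camera_depth > 0, camera_depth, np.inf)
        virt = np.where(virtual_depth > 0, virtual_depth, np.inf)
        visible = virt < real                  # the virtual object wins the depth test
        out = camera_rgb.copy()
        out[visible] = virtual_rgb[visible]    # a real hand in front still hides the object
        return out

That's all the depth map buys you; the hard part is keeping the two depth images registered while your head moves.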
Theoretically, it should be possible to overpower the background light and have an object appear to be totally opaque, which is similar to the principle behind one-way mirrors. That amount of light might be blinding in certain circumstances, though.
In practice, overpowering background light is not possible with current displays. They're simply not bright enough.
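A toy illustration of why: a see-through combiner can only add light on top of the scene, never subtract it (the numbers below are illustrative assumptions, not measurements):

    def perceived(scene_lum, display_lum):
        # Additive see-through optics: the eye sees scene light plus display light.
        return scene_lum + display_lum

    scene = 250.0                      # assume ~250 cd/m^2 of room light reaching the eye
    print(perceived(scene, 0.0))       # a "black" virtual pixel adds nothing and vanishes
    print(10 * scene)                  # to read as opaque, the display has to dominate the
                                       # background by something like 10:1, i.e. thousands of
                                       # cd/m^2 at the eye -- more than these optics deliver

So an object only looks solid when the display light swamps whatever is behind it, hence the one-way mirror comparison and the "might be blinding" caveat.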
I think I'm going to have to rewatch the promo video in depth to see if they have any unrealistic demos with glasses-wearing users.
I still believe this technology would be really useful for doing Real Work. You could get around the shadow problem by working on a black surface. I'm imagining that an interface built around a black drafting desk, a small Bluetooth keyboard, and a 3D-tracked pen, combined with this AR display, could be absolutely revolutionary for certain engineering disciplines.
I was thinking the same just based on the repeated "there is nothing like this being developed", which I might have believed if I hadn't come across the Oculus Rift just a few hours prior.
The difference in experience is that in VR the whole world lags together, so an added object is always in the right place relative to the scene. With hard AR, the added object is always trying to catch up as the real image moves.
From what I gather about Google Glass, they mostly aim for soft AR. This is where you add information on top of the real world but it doesn't map directly to a location in the real world: if you move your head, the added image stays the same. This is much more resistant to delays and could be usable for extended periods of time without discomfort.
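To make the hard-AR lag concrete, a back-of-envelope with assumed numbers (both the latency and the head speed are illustrative):

    import math

    latency_s       = 0.050   # assume 50 ms camera + tracking + render + display pipeline
    head_turn_deg_s = 120.0   # an ordinary glance to the side

    error_deg = latency_s * head_turn_deg_s            # 6 degrees of misregistration
    slip_m = 0.6 * math.tan(math.radians(error_deg))   # ~6 cm of slip at arm's length
    print(error_deg, slip_m)

That slip is exactly what soft AR sidesteps by not pinning content to the world, and what VR hides by lagging the whole scene together.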
For a better and more technical writeup:
Maybe it's meant to include themselves?
Why not use something like the Oculus Rift? Instead of projecting new objects over the top of existing ones, replace the user's field of vision completely.
I'd love that.
One of the guys in our hackerspace (plug: HeatSync Labs, Phoenix AZ) got an Oculus, and we've been talking about how cool it would be to build a "virtual office" of sorts. Sit down with an Oculus and some noise-cancelling headphones, and have an infinitely large workspace.
Two monitors? Or 1,000? It doesn't matter, because your entire field of vision (or your entire environment) is being rendered for you.
I think people are very attached to the idea of your eyes seeing the "real" world instead of a re-displayed one. I understand that, but I think that ideology is going to hold AR back for a while.
It will be a while before the VR experience becomes seamless. Resolution and dynamic range of VR still lag the real world by an order of magnitude. Latency is extremely touchy for scenes of any complexity, especially given hardware that is designed to be wearable. Even if these issues only manifest subconsciously, the differences add up: the human eye has ~100-megapixel resolution and hardwired motion tracking so detailed that the information has to be lossily compressed roughly 100:1 before being shipped across the optic nerve (the first level of feature extraction happens in your eye). Those compression "algorithms" suffer under VR.
People will probably be willing to put up with the quality drop to play games because games are viscerally engaging in a way that allows you to push through sensory obstacles. Workplaces, however, lack that energy and are subject to the constraint of productivity, which I think will make them much harder to penetrate with VR: tiny psychological stresses quickly become intolerable if you're already frustrated by dependency management / tedious busywork / whatever, while the virtual world doesn't actually allow you to do all that much that you couldn't do before. Pro apps will probably take 5-10 years to adjust in a meaningful way, and I doubt that more than a handful of industries will actually be revolutionized by the difference.
In the meantime, AR dramatically helps the smoothness problem while still enabling almost all of the gee-whiz features of VR that would have relevance in a professional setting. Military applications are the obvious exception (full VR potentially offers protection from lasers/irritants/bullets to the face) but even military funding won't be enough to close the gap between VR and AR instantaneously.
In the long term, you're right, VR will win, but given that the "short term" will probably last for decades, I think you're jumping the gun on VR.
My idea was to make a window compositor for X: all open windows would be scattered around the inside of a sphere, hemisphere, or perhaps hypercone, with the user's head at the center.
You turn to face a window directly, and it blows up and moves into focus. A shake of the head dismisses the window, and a sharp snap to either side cycles through open windows.
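A rough sketch of the focus-picking geometry (ignoring all the X plumbing), assuming each window is pinned at a fixed yaw/pitch on the sphere and the tracker gives you a head orientation:

    import math

    windows = {"editor": (0.0, 0.0), "browser": (40.0, 5.0), "irc": (-35.0, -10.0)}

    def direction(yaw_deg, pitch_deg):
        # Unit vector for a yaw/pitch pair (degrees).
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        return (math.cos(pitch) * math.sin(yaw),
                math.sin(pitch),
                math.cos(pitch) * math.cos(yaw))

    def focused_window(head_yaw_deg, head_pitch_deg):
        # The window whose direction lines up best with the gaze gets blown up into focus.
        gaze = direction(head_yaw_deg, head_pitch_deg)
        def alignment(item):
            w = direction(*item[1])
            return sum(a * b for a, b in zip(gaze, w))   # dot product, 1.0 = dead ahead
        return max(windows.items(), key=alignment)[0]

    print(focused_window(38.0, 3.0))   # -> "browser"

The shake/snap gestures would just be thresholds on yaw velocity from the same tracker.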
Shoot me an email if you'd like to talk more.
EDIT: X programming is fucking arcane. :(
Take that and extrapolate. It doesn't seem that "crazy", really. Compiz and KWin both already have the window animations you're describing. There was even a Wii head-tracking plugin for Compiz that gave your desktop a third, perspective dimension (moving your head would reveal the edges of windows underneath higher ones).
It'd be a neat demo, but it sounds like a nightmare for daily usability, etc.
(I reserve the right to eat my words and recant this misguided decision later.)
I don't think it's fair to call it "ideology" when the fact is that screens still suck, and they suck hard. Dynamic range, refresh rate, color reproduction, and depth perception are all big shortcomings that haven't been solved yet.
Just look at digital cameras: if EVFs aren't even ready to replace conventional optical viewfinders yet, it's utopian to ask people to strap one to their faces 24/7.
In this age of nanotechnology I find it entirely ridiculous that manufacturers are cost-cutting by multiplexing column and row transistors. LCDs are such awesome tech that you don't even need a refresh rate. It might increase initial complexity, but in the long run it would be worth it.
Imagine a virtual office with a thousand monitors that is your apartment that has been plopped down in the middle of a forest with the walls knocked out.
While the Oculus Rift is cool, I think you're missing the point. VR has been around for quite some time, and while it has its applications, AR, IMHO, is more immediately applicable today. Even if you are selectively reproducing pieces of reality in your VR, that's going to take extra processing power and it's not going to be quite realistic (or real-time) for a long time. AR, on the other hand, is rife with applications that we can use now. The classic one I always think of is helping someone with Alzheimer's get home by "painting" a path of footsteps on the ground. Try that with an Oculus Rift, or any other VR setup; it's almost guaranteed to be cumbersome and dangerous, if it can even be used outside a lab at all.
2) FWIW, Word Lens is described as augmented reality. Also, it has a much narrower FOV, probably doesn't update in real time (and probably doesn't need to), doesn't require wearing a bulky headset, doesn't occlude your entire FOV, and in any case, is not connected to an Oculus Rift.
And bringing it up was to address your claim that it was computationally infeasible to retransmit video with some of the content changed.
And "practically real time" isn't going to cut it for something which occludes your view, that was why I brought that, and my other points up. I didn't claim it was infeasible; I claimed it was currently infeasible to do it for the entire FOV of a human, and with that being true, it was impractical to assume that VR and AR can currently be applied to the same problems. In the future, the distinction between AR and VR may not need to be made, but before that can happen, VR (as it currently is) has a long way to go.
I've never considered this. As many terminals as the eye can fit, with a living rainforest as a backdrop. If it doesn't seriously hurt the eyes to do it for programming-session durations, then I'd gladly pay good money for that.
And yes, the ARI comparisons are not lost on me.
Filming stereoscopic video through the glasses is very complicated which is why we used concept visualisation to give an idea how it will feel. We're working on a rig to film through the glasses, but right now there's no way we'll be able to convey how impressive it is in person through live video.
A couple of components that are not going to work:
1. See-through glasses with the field of view shown in the video just don't exist today. The model they are going to use is more like a ‘tiny TV screen floating in your view’ and not even close to the visualization they created.
2. Real-time 3D gesture recognition from point-cloud data on ARM (plus overhead for applications and games, all at low latency).
3. Real-time 3D environment reconstruction from moving point-cloud data (requires something like a quad-core i7 + 32 GB RAM + desktop-class GPU processing; see the back-of-envelope sketch after this list).
They want to achieve it on an ARM running from tiny batteries!!!
+ On top of this would come the whole application / game experience, something they seem to be concentrating on, instead of getting the basics right.
4. Then there is latency, which is just not going to be solved for the next 5, more probably 10, years; just read Michael Abrash's blog about the reality of augmented reality glasses (http://blogs.valvesoftware.com/abrash/).
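To put point 3 in perspective, a rough back-of-envelope for naive volumetric fusion of the depth stream (every number here is an assumption for illustration):

    # Depth stream from the sensor: 320x240 points at 30 fps.
    points_per_s = 320 * 240 * 30                  # ~2.3 million points/s to integrate

    # Naive TSDF-style fusion: a 4 m cube at 1 cm voxels.
    voxels = 400 ** 3                              # 64 million voxels resident in memory
    bytes_per_voxel = 4 + 2                        # float distance + weight (illustrative)
    volume_mb = voxels * bytes_per_voxel / 1e6     # ~384 MB just for the volume

    print(points_per_s, voxels, volume_mb)

Even touching a fraction of that volume 30 times a second means hundreds of MB/s of memory traffic plus per-point ray updates: fine for a desktop GPU, a tall order for a battery-powered ARM SoC.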
To be clear, I'm not saying that they won't be able to make what they promise; I'm saying that not even Google, or anyone else, will be anywhere near able to achieve it for at least 5 years, and everyone who is even a little bit into augmented reality knows this.
So personally I find this to be a fake campaign that is just bending the rules of Kickstarter, which require a real-world hardware prototype. So they made a glued-together prototype with a fake visualization, and built the whole campaign around the video.
Nonetheless, the campaign has a chance of being a massive hit, because every sci-fi fan has been dreaming about this for decades and is willing to back it if they have the funds. In that case, it has a chance of becoming one of the biggest Kickstarter failures of all time. The best case for them would be a quick Google acquisition and integration into the Glass team.
* The Moverio consists of two parts, the glasses and the control box. The two connect via a seemingly proprietary connector. The control box runs Android 2.2, archaic by today's standards. USB host mode was introduced in Android 3.1, so there would be no straightforward way to feed the depth camera's information into the control box.
* Unity3D, which Meta's software stack claims to be using, does run on the control box once you export your Unity project as an Android application. For the app to run, I had to tweak the build settings to support both ARMv6 and ARMv7 (the app failed to start when built for ARMv7 only). This was doable in Unity 3.5.x. However, Unity 4 removes ARMv6 support.
So I'm full of question marks:
* Did the Meta team somehow obtain/reverse-engineer the specifications for the Moverio glasses' connector, plug it into a more powerful device, and ditch the control box?
* Did the Meta team replace the Moverio control box's OS with a more modern version?
* Is Meta 1 stuck with the older Unity 3.5.x?
* Or am I doing it wrong, and is it indeed possible to run Unity 4-built apps on the Moverio control box?
Also, others have mentioned this, but the field of view is disappointingly small with this device - just a small window in the middle of your view.
Did you guys receive the specs from Epson directly, receive dev glasses which doesn't use that connector, or reverse-engineer it yourselves? Is that information confidential or under NDA?
I strongly feel that ancient control box is holding back the full potential of the glasses and would love to connect different hardware to it. The connector spec/protocol is the only thing preventing that. I'm not very hopeful but if there's any information on it I would love to know.
This is actually pretty exciting tech, but it's going to be absolutely nothing like what they have to show in the pitch video.
The first screenshot from the second video shows exactly what their gesture tracking looks like. When we did the perceptual challenge, this was mainly the stuff we were thinking of as applications for the hardware; it's funny to see someone now take it and simply mount it on glasses.
I'm mostly thinking about ARQuake and the like, where the AR objects are walking around the room or hallways rather than being confined to a table in front of you.
The Kinect is perfectly capable of creating sets of objects based on its depth map. Normally it's a "skeleton" of the whole body, but it can just as easily make a skeleton of your hand, with the fingertips automatically tracked and used as activation points.
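A minimal sketch of that idea on a raw depth frame, assuming the hand is the nearest thing in view (purely illustrative, not Kinect SDK code):

    import numpy as np
    from scipy import ndimage

    def fingertip_candidates(depth_mm, near_mm=600, k=5):
        # Segment everything closer than ~60 cm and keep the largest blob (the hand).
        near = (depth_mm > 0) & (depth_mm < near_mm)
        labels, n = ndimage.label(near)
        if n == 0:
            return []
        sizes = ndimage.sum(near, labels, range(1, n + 1))
        hand = labels == (np.argmax(sizes) + 1)
        ys, xs = np.nonzero(hand)
        cy, cx = ys.mean(), xs.mean()
        # Pixels farthest from the palm centroid are crude fingertip candidates.
        far = np.argsort((ys - cy) ** 2 + (xs - cx) ** 2)[::-1][:k]
        return list(zip(xs[far], ys[far]))

In practice you'd cluster those candidates (they can all land on one finger) and track them across frames, which is roughly what the skeleton middleware does for you.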
Looks really promising and they are not asking for much.
First thing I've ever backed on Kickstarter, too.
Sounds like a great recipe for disappointment. They should have at least posted a real-life video (or a realistic rendition).
I have to feel v2 will, given the application possibilities demoed here.
3d input is still useful, but less fun.
In principle (assuming some off-device computation), you could run SLAM and get a decent estimate of head orientation to then do AR with Google's glasses. Unlikely in the near future, but inevitable at some point.
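In outline (illustrative only, not Glass's actual API): once SLAM hands you a head pose, pinning content to the world is just an ordinary pinhole projection:

    import numpy as np

    def project(point_world, R, t, fx, fy, cx, cy):
        # World-anchored point -> display pixel, given the SLAM pose (R, t)
        # and the display's intrinsics.
        p = R @ (point_world - t)          # world frame -> head/camera frame
        if p[2] <= 0:
            return None                    # behind the viewer, don't draw it
        return fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy

    # e.g. a label pinned 2 m in front of where the user stood when placing it:
    anchor = np.array([0.0, 0.0, 2.0])
    R, t = np.eye(3), np.zeros(3)          # pose straight from the (assumed) SLAM output
    print(project(anchor, R, t, fx=500, fy=500, cx=320, cy=240))   # centre of a 640x480 view

The hard part is getting that pose at low enough latency; offloading the SLAM just moves some of that latency into the radio link.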
Look at the quick "From the labs" video down the page. It's grainy, messy, and raw. There's simply too much noise for it to look good. Without a significant bump in sensor resolution (it's only 320x240! https://forum.libcinder.org/topic/future-is-here-time-of-fli...), it's not possible to get results that are both smooth and performant.
I have also seen the Intel depth camera in action first-hand and that alone is a very promising piece of hardware.
Give me something like this that I can run through my MC worlds on and I'm paying.
Upvoted, busy threads float nearer me.
Flame-fests shrink back from me.