I certainly don't want to badmouth a guy who builds something cool, but there's a significant difference between this and Project Glass. He states in his blog post (http://www.willpowell.co.uk/blog/?p=210): "The Vuzix glasses are driven by stereoscopic feeds, which are fed by the HD cameras."
What he's seeing is coming through the cameras. He's not looking at reality with an overlaid display; the glasses he's using aren't see-through, they're displays. That's a very different experience from augmented reality, and it caps your view of the world at HD camera quality.
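The distinction is easy to sketch in code (a toy illustration with made-up values, not anything from Powell's actual setup): in a passthrough display, overlay pixels simply replace camera pixels, so every photon the wearer sees has gone through the camera first.

```python
import numpy as np

# Toy model of a video-passthrough "AR" display: the wearer never sees
# the world directly. Every pixel is either camera feed or HUD overlay,
# and the overlay REPLACES the camera pixels beneath it.
camera_frame = np.full((4, 4), 100, dtype=np.uint8)  # stand-in for the HD feed
overlay = np.zeros((4, 4), dtype=np.uint8)
overlay_mask = np.zeros((4, 4), dtype=bool)
overlay[1, 1] = 255        # one bright HUD pixel
overlay_mask[1, 1] = True

# What the wearer sees: HUD where there's UI, camera feed everywhere else.
seen = np.where(overlay_mask, overlay, camera_frame)
print(seen[1, 1], seen[0, 0])  # 255 100
```

True optical see-through AR is different: the background light reaches the eye directly, and the display can only add to it (see the additive-projection point further down the thread).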
That also explains why at the beginning of the video we see him put on the glasses and there's no transition to seeing what he sees through them.
This got me thinking about whether "fly-by-wire" vision will ever be a viable (or even preferable) alternative to natural vision, and how various sight impairments would be handled by such a system. Myopia would certainly be a trivial fix, since the image could be placed directly in front of the eye, but how would you correct for hyperopia without a convex lens between the eye and the display?
Partially transparent consumer head-mounted displays are sort of available. SiliconMicroDisplay has one that is bulky and not really transparent enough to walk around in, but it could be used to approximate Project Glass: http://www.siliconmicrodisplay.com/st1080.html
A useful wearable augmented reality experience would require a retinal scanning display, which is presumably what Google's prototypes use. Brother has been showing off a product like this, but it has been nothing more than expo fodder for the last few years: http://www.brother.com/en/news/2010/airscouter/
More importantly, it explains why both the world and the overlays appear in focus. I have yet to see an explanation for how Project Glass does the same. Apparently it's based on the same technology that went into Babak Parviz's contact lenses, which use Fresnel lenses to make the overlay appear in focus when your eye focuses on real-world objects several feet in front of you. Fresnel lenses also impair image quality. And you'd need another lens on the opposite side to undo the projection on light coming in from the world around you.
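For a rough sense of the optics involved, here's a back-of-the-envelope thin-lens calculation (my own illustrative numbers, not anything published about Glass): to make a microdisplay sitting 2 cm from the eye appear at the same focal distance as objects 2 m away, the collimating optic would need a focal length just slightly longer than the display distance.

```python
# Thin-lens sketch: 1/f = 1/d_display - 1/d_virtual for a virtual image
# formed on the same side as the display. All distances are made up.
d_display = 0.02   # metres: lens-to-display distance
d_virtual = 2.00   # metres: where the overlay should appear to float
f = 1 / (1 / d_display - 1 / d_virtual)
print(round(f * 100, 2))  # focal length in cm, ≈ 2.02
```

The point of the sketch is just that near-eye displays need optics to push the apparent focal plane out to where real-world objects are; Fresnel lenses do this compactly, at a cost in image quality.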
I'm wondering whether the reason that Google's glasses only use a small display in the corner of one eye is that the see-through image quality is simply not good enough to justify covering the wearer's entire field of view. If you look at the photos of Sergey Brin wearing them, they are not very transparent, at least from the angle the photos were taken: http://www.theverge.com/2012/4/6/2929927/google-project-glas...
One of the professors at my school, Steve Mann, is well known for having worked on wearable computing devices for three decades now. There's a small group of niche hackers who are into this kind of thing, but it's definitely not a new phenomenon.
If Google used Mann's "diminished reality" idea, they could replace ads we see in everyday life (billboards and the like) with some ratio of information chosen by the user and ads from Google. Thus, using g-Glasses, one might see a net reduction in the amount of advertising one is subjected to. And Google could tell billboard companies to pay a toll if they ever want to get through the filter.
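The replacement step itself is trivial once you have a detected billboard region; the detection is the hard computer-vision part and is just assumed here. A toy sketch (function name, region format, and values are all hypothetical):

```python
import numpy as np

def diminish(frame, region, replacement):
    """Overwrite a detected billboard region with user-chosen content.

    region is (y0, y1, x0, x1); detection of that region is assumed
    to have happened elsewhere. Returns a new frame.
    """
    y0, y1, x0, x1 = region
    out = frame.copy()
    out[y0:y1, x0:x1] = replacement
    return out

street = np.zeros((8, 8), dtype=np.uint8)
street[2:4, 2:6] = 200                    # bright "billboard" pixels
calm = np.full((2, 4), 50, np.uint8)      # user-selected replacement content
filtered = diminish(street, (2, 4, 2, 6), calm)
print(filtered[2, 2], filtered[0, 0])  # 50 0
```

The "ratio of information to ads" idea would just be a policy over what goes into `replacement`; the mechanics don't change.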
To the HNers who lamented that a Google wearable would be plastered with ads: what if it actually reduced the overall number of ads you saw?
Mann is a Media Lab alum. The Media Lab did extensive work on wearable computing a decade ago; Rich DeVaul, another alum who worked on it there, is possibly involved in Project Glass.
I'd love it if, instead of 'close menu', you just said 'thanks' to finalize an action and leave the current context (or hide the entire UI). That seems more natural; it's what I'd do in person if I asked someone for the weather or the time.
The marvel of what Google is trying to accomplish is in making the product small and consumable, which is a challenge the article doesn't address in its 'David vs. Goliath' language. It takes a lot of effort to keep size, capacity, and power consumption in mind for a product with more mass appeal—not to mention the UX and style elements involved.
Exactly this. Everything needed for iPhone/iPad already existed off-the-shelf years prior as well. It takes a lot of oomph to condense all that into a beautiful little device that doesn't suck during real life day-to-day use.
I still think Google made a big mistake showcasing their product a year early. It reminds me of when they showed ChromeOS a year and a half before it actually launched. What's the point? Sure, you might get some benefit from the feedback, but I believe the disadvantages of showing it this early far outweigh the benefits.
And user feedback doesn't even mean that much when you're trying to build an entirely new product category (ok, maybe the basic idea existed for a long time, but I think we can all agree their concept of the AR glasses is a lot more modern and practical, and might actually turn into a popular commercial product if they do it right).
Apple understands this: it doesn't show a new product or concept until it is either unavoidable or available. What the Glass video shows is an idea whose time has come: all the pieces are ready; someone just needs to fuse them together in a robust, affordable package that - and here's the hard part - behaves the way people will want it to when they see it done right. Having released the video prematurely, Google will have everyone jumping on the concept trying to beat it to the punch (hey, all the pieces are out there, ready to assemble). The question is: who is the precognitive telepath able to grok what users will actually want when they're given what they say they want?
I think your statement is not well thought out - and by "your" statement I really mean "almost everyone's"; I just semi-randomly picked yours to comment on because it captures my counter-argument well. You're saying Google released the video prematurely, based on the assumption that Google doesn't want everyone trying to beat it to the punch. But Google is obviously well aware that every piece needed to realize this tech is already out there. What I don't understand is why it isn't immediately obvious to everyone that Google wants this idea out there: for people to experiment with it and to share their thoughts about the UX of such a device, which is exactly what Google needs at this point to develop the product.
The linked video and the comments it generates are an excellent example of this.
These glasses do look a bit silly, given how far they come off his face and the large microphone hanging down, but still: bravo. I love seeing people hack together things like this that large companies tout as the next big thing without releasing any real evidence that they work.
At least it's real, and for something built in a day it's more than sufficient. There's no honest account of how well Google's device even works.
That said, this is a perfect example of how Google screwed up here. This would have been the ultimate keynote reveal at Google I/O. I'm not saying it still won't be, but now competitors have a great idea to build off of. Google should have stayed quiet until this product was at least at a manufacturing stage.
I don't know if you knew this, but Google, Inc. isn't Apple, Inc. Two separate companies! I know it's surprising; it took me a while to figure out myself. You may also be surprised to learn that there are actually more ways to release products than the way Apple does it. In fact, there are many ways, each with various tradeoffs.
The costs of Apple-style secrecy: keeping engineers and suppliers quiet (financial, intellectual, and morale costs), being unable to use real people as beta testers, more difficulty involving outside research centers, and an encouragement of insular culture. "Oh, is that public now? I don't even know anymore."
In the case of something that has a developer culture around it (say, a Chumby or something), you have a higher number of developers ready to go at launch, since they've had time to think about killer apps.
One could point at how early-stage multiplayer videogames are developed for a good counterexample.
Awesome effort. I'm a bit concerned that the reporter effectively says "to all naysayers, Will says it's true so it must be," and I'm slightly skeptical since Will hasn't put up information for others to build their own, or source code, which I'd expect unless he plans to sell the product himself. That said, I believe this is genuine, and full kudos to the guy for his efforts if so.
What he did is very impressive, and I understand he was attempting to replicate what appeared in the Project Glass video, but why does all the graphical interaction have to obscure the user's vision (in both Project Glass and his live demo)? The icons are laid out horizontally, too close to the center of your view. If the whole point is an augmented experience, the user still needs a mostly unobstructed view of their surroundings. For the most part, fake video-game HUDs handle this pretty well; Project Glass just has the interface getting in the way of everything. Why not just sit down somewhere and use another device at that point? You obviously won't be able to do anything else. I'm not walking down the street or through a store with a big icon in the center of my field of vision.
One difference between this and Google Glass is that Glass is monocular. Unless you're blind in one eye, you will be able to see "through" the projection.
I also don't think the representation in the Glass demo video is accurate: I'd expect the images it projects to add to the background rather than replace it entirely. I don't think they could make an "opaque" projection if they wanted to.
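A toy numpy sketch of why that is (made-up values): projected light adds to the light already passing through the combiner, so a dark overlay pixel leaves the background unchanged and a bright one can only wash the background out, never black it out.

```python
import numpy as np

# Additive model of an optical see-through display: the eye receives
# world light PLUS projected light. No overlay value can darken a pixel.
background = np.full((2, 2), 180, dtype=np.uint16)        # light from the world
overlay = np.array([[0, 0], [0, 200]], dtype=np.uint16)   # projected HUD light

seen = np.clip(background + overlay, 0, 255).astype(np.uint8)
print(seen)
```

Every pixel of `seen` is at least as bright as the background, which is why an "opaque" projection isn't physically on the table for a see-through combiner.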
As far as the hardware goes, a lot of this stuff is already out there in different forms: Epson Moverio, Vuzix (as shown in this demo, ...), albeit not in a form factor that allows for mass-market penetration. That's where Google, and hopefully Apple & Microsoft, will do well, allowing app makers to come in and add value.
That being said, there are still a great many computer-vision problems that will need to be solved to successfully implement "strong AR". The video was a nice proof of concept, but that's all the Google video was, too.