On a slightly more serious note, I'm glad to see this since I feared that they might keep the API for later, instead opting to try and make a "product" first (a la iPhone 1 or Google+).
This API is definitely not what I was envisioning though - I expected another API add-on to Android, where you can take over and do what you wish with the display, so perhaps it is a bit like webapps-as-apps on the iPhone 1. I'd be interested to see what early adopters do with it and if they find this API too limited.
The Google Mirror API is for putting stuff in front of the user's eye. As far as I know you cannot get information from Google Glass with this API.
It's worth pondering how significantly new I/O devices change the game -- the first tty, the commercial keyboard & mouse, the touch screen, multi-touch trackpads, and voice activated smartphones.
The still-disappointing Plus API pretty much tipped Google's hand on how flexible they want to be about providing APIs for future products. On top of that, there are some pretty substantial privacy issues with giving developers low-level access to the vast amount of personal data Glass will constantly be collecting. I'm already worried enough about Google having that data that I'm sitting out Glass for the foreseeable future (despite the fact that I suspect it will be useful for a lot of things), but if random third parties could access that data at a low level I'd be even more worried.
This comment actually explains my complete disinterest in Glass :)
I don't see how voice-activated smartphones have changed the game yet, IMHO they're in the same league as Glass - "this might be worth it later" (especially dubious for Siri). I use two touch-screens and a trackpad every day and yet I could probably go back to only a keyboard and die happy.
What has changed my life was connectedness, and Glass does not do more than a smartphone in that area.
Yes it does. It completes the connection between what you see and the rest of your digital world. That's significant.
What do I search for to buy a wearable computer?
My point was, just because something isn't for sale in stores right now doesn't mean it never existed. As far as I know, Steve Mann's EyeTap glasses have never been for sale in stores, but kaolinite's argument was just silly – hence my comment.
If you don't like the 'reel-to-reel tape recorder' example, imagine I said 'enriched plutonium'.
When discussing UX and how this will affect the general population, this technology is basically brand new.
It's a jsfiddle-like sandbox that behaves like a Glass (device) frontend.
Considering the API only allows viewing cards, taking pictures, sending your current location, and taking textual input, there's nothing that prevents them from having a Glass implementation on Android to test things out, other than the time/resources to develop such an implementation.
The whole point of technologies like Glass is that they should be as unobtrusive as possible and just work by themselves when you need to.
The Google Mirror API allows you to build web-based services, called Glassware, that interact with Google Glass. It provides this functionality over a cloud-based API and does not require running code on Glass.
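To make that concrete, here's a minimal sketch of what a Glassware backend does: it POSTs a JSON timeline card to the Mirror API's REST endpoint with an OAuth 2.0 bearer token. The token and card text are placeholders; this just builds the request rather than sending it.

```python
import json
import urllib.request

# Timeline collection endpoint documented for the Mirror API.
TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_card_request(access_token, text):
    """Build (but don't send) an HTTP request that inserts a timeline card."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        TIMELINE_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# "<oauth2-token>" stands in for a real token obtained via the OAuth flow.
req = build_card_request("<oauth2-token>", "Hello from Glassware")
print(req.get_method(), req.full_url)
```

Note that everything happens server-side over HTTPS; nothing here runs on the device itself, which is exactly the limitation people in this thread are reacting to.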
Not entirely; it's obvious that it's just pinging some endpoint for a protocol buffer. The server will return a protocol buffer with a "continue" message, but you can just spoof that.
Also: who will get the first glass xss bounty?
However, the API doesn't seem to offer any way to run code on the actual device aka "apps".
Glass: 640x360, a 25" HD display from 8 ft
Vuzix M100: 400x240, a 4" mobile screen at 14"
If I place a ~4" mobile device 14" from the top right of my field of vision, I think I could live with that amount of obscured vision, but is it feasible to create that with 720p resolution? Why would you want a 25" display 8ft away? That seems like it would just be good for placing display ads and not really for most useful things aside from quick notifications.
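A quick back-of-the-envelope calculation shows those two marketing specs describe roughly the same visual angle. This sketch assumes the diagonals quoted above and derives screen width from each device's stated pixel resolution (16:9 for Glass, 5:3 for the M100's 400x240 panel):

```python
import math

def angular_width_deg(diagonal_in, px_w, px_h, distance_in):
    """Horizontal visual angle (degrees) of a flat screen viewed head-on."""
    # Physical width follows from the diagonal and the pixel aspect ratio.
    width = diagonal_in * px_w / math.hypot(px_w, px_h)
    return math.degrees(2 * math.atan(width / (2 * distance_in)))

# Glass: "640x360, a 25-inch HD display from 8 feet"
glass = angular_width_deg(25, 640, 360, 8 * 12)

# Vuzix M100: "400x240, a 4-inch mobile screen at 14 inches"
vuzix = angular_width_deg(4, 400, 240, 14)

print(f"Glass ~{glass:.1f} deg, M100 ~{vuzix:.1f} deg")  # both land around 13-14 degrees
```

So "25 inches at 8 feet" isn't really a bigger display than "4 inches at 14 inches"; they're just two ways of describing a similar patch of your field of view.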
Send full screen images and video at a 16x9 aspect ratio.
Target a 640x360 pixel resolution.