Why would you ask that? This type of product is so different from what's available on the market that I doubt the Project Glass team would receive any type of suggestions they'd find useful.
That would be like Apple asking people "What do you want the iPhone to do?" before it was first released in 2007.
If you're creating the product, create it. Find the features you want it to have and research the features it needs to have. Asking the public "what do you want it to have" will not do anyone any favors. Decision by committee can be truly atrocious.
That said, perhaps Google doesn't care about releasing perfectly marketed hit products a la Apple. If wearable computers catch on that make it easier to check-in to the places you go and +1 the stuff you like, they'll make tons more money from useful targeted ads than units of hardware shipped, just as they do now.
You know, I have respect for the people Google employs - they are certainly a very talented bunch. But what has Google ever engineered, at least physically, that has set your expectations so high?
There's a huge difference between a research project and a shipping product. I'm most excited for this development as well, but I'll be withholding my acclaim until they actually deliver it. So far, all of this stuff is an elaborate Montessori school science project.
In other words, they could ship "now" if they weren't doing such a revolutionary product. If you make a trivially incremental improvement on an already-existing product (iPad 3), that's a lot easier.
Does a driverless car even count as basic research that they can spin off into a consumer product?
A driverless car is not just another car. It's not a fancy smartphone. It's a paradigm shift that will reshape society and erase tens of thousands of jobs in a relatively short timeframe. And it's one of the rare cases where you can actually predict and quantify these consequences with relatively good accuracy.
I'd assume quite a few parties (read: all of them) will want to have a say in such a rollout, beyond the technical peculiarities. For better or worse.
In the mean time, isn't it prudent to have intermediate products? Products to help people get used to the idea of driverless vehicles? Products to give them experience on how to rollout their bigger vision?
Airplanes fly constantly, because each second they're on the ground is a second they're not paying for their capital cost. Ever see how barren a 6-lane highway is at 3 a.m. on a weeknight?
Like the "paradigm shift" that was to be the Segway, what problem does this solve exactly?
If they can get such vehicles legal and available to the general public in California, it will be legal everywhere else within a year or two at most. California sets, or has had the ability to set, a lot of standards for the country, from vehicle emissions to textbooks to, now, apparently, driverless cars.
You want to pick nits over Google being able to ship?
That's the problem with Google: they still haven't figured out they're a one-hit wonder. Real business is much harder-edged than the "playtime" for kids that goes on there.
You don't consider GMail, their dominance in ads, Android's ubiquity, the YouTube acquisition, etc as successes? Because all of those seem to have improved vastly since Google took control of them.
(Yes, that generates a lot of data, and we do use a lot of that data for things that might not thrill you, like targeting ads. But ultimately, a lot of companies know a lot about you, and few try to be as transparent about it as Google does. At least you can stop giving Google data. Try to opt out of the credit reporting system.)
Take a look at other tech companies; they've created fortunes based on almost nothing:
* Apple: consumer electronics that anyone could do
* Amazon: selling stuff online, which anyone could do
* Oracle: selling databases when their competition includes IBM and even free software
YouTube should have been iTunes, Amazon Instant, Netflix, and Hulu all rolled into one. But it's not, because they don't actually know how to do business - one-hit wonder.
Another example: Google, until recently, had no lobbyists - why on earth would you do that (when you have enough piles of cash to buy countries)? Investors should be filing lawsuits for this reason alone.
Another example of business cluelessness, they're not flexing their patent muscles.
Android ubiquity - Linux is ubiquitous too, far more so. Hint: it's FREE. Not very hard to give away free stuff.
It's worth noting that Google's prospectus as filed with the SEC says that Google will always pursue long-term opportunities even if they hurt Google in the short term. Patents are an example here; Google could sue all of its "competitors" out of existence today, but would that really benefit Google in the long term? Would you work at Google if they were the company that put every tech startup out of business with "method for computing the sum of two integers" patents? Probably not. So aggressive patent lawsuits are not what Google's shareholders want. And doing what the shareholders want is not exactly "business cluelessness" :)
You might want to go review that. Or just stop posting so much about Google altogether.
Further, you don't actually need to respond to every stupid, incorrect statement on the Internet that's written about your new employer. Really.
By the way, his email address is in his profile. If you were really that concerned about "communicating with care" you could be a professional about it and mention it privately to him. Of course, that wouldn't satisfy your need to respond to every stupid, incorrect statement he makes about his new employer, really.
(Ultimately, I see the risk in saying "I think XXX" if Google decides to someday do the opposite of XXX. But there is some limit; I'm still going to say that writing unit tests is good even if Google's official message to investors becomes "testing is a waste of time". That's because I'm me, and I have my own opinions. If Google wants to make an official statement, they have a blog for that. If someone wants to misconstrue a personal opinion of mine as an official statement, well... let's just say that their Pulitzer hopes may be dashed...)
A business with a basic sense of decency... imagine that.
Even when I was working on autonomous vehicles 10+ years ago, the entire setup was basically a "solved problem" minus environmental sensing accurate and reliable enough to be put in the critical path of human life. The state machine for how to behave on paved roads has existed in various forms for a long time.
They have however put money and legal weight behind the problem. If Google deserves credit for anything, it would be getting them allowed on public streets.
"Environmental sensing accurate and reliable enough" isn't the problem either. Although fancy lasers and radars can help, we have had cameras, microphones, and accelerometers superior to human eyes, ears, and vestibular system for years now. The missing piece is the software, which is exactly the piece Google is working on.
Even the early Daimler-Benz projects of the '90s had fully functional software stacks that implemented all the required decision-making logic of driving.
While it is true that we have cameras and microphones that work better than our own eyes and ears, I don't think anyone would dispute the fact that the sensory processing isn't to human levels yet. The advancements that have been made are in different types of sensing devices that eliminate the need for heavy post-processing of data (image pattern recognition, etc) in software.
I do give credit to Google for making it happen, both as an integrator, and in lobbying efforts. But there is a lot of standing on the shoulders of giants, in comparison to pure software fields where they have made huge industry disrupting shifts (like BigTable, GFS, etc).
And that's the exact problem Google is working on, and where all previous efforts have failed miserably, no matter how good their "rules of the road" state machines were in theory. Saying Daimler-Benz solved autonomous driving years ago is like saying SHRDLU solved natural language processing.
The advancements that have been made are in different types of sensing devices that eliminate the need for heavy post-processing of data
There are exactly two new kinds of sensing devices used on the Google cars AFAIK: small automotive radars and the Velodyne spinning LIDAR. The latter owes its existence not to giant defense contractors or car manufacturers but to a small loudspeaker company, of all things. Furthermore, it absolutely does not eliminate the need for heavy post-processing of data.
After all, the 'worst case' in a car is basically solved 99.9% of the time by staying in the correct lane, obeying stop lights/signs and speed limits, and simply hitting the brakes if you're going to hit something. Sure, you could improve on that, but get that to work reliably and you're already doing better than human drivers, who get distracted, drunk, tired, impatient, angry, and just plain overwhelmed.
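For what it's worth, that rule-of-thumb policy is simple enough to sketch in a few lines. Everything below is made up for illustration (hypothetical function, input names, and thresholds); the genuinely hard part is producing these pre-processed sensor readings reliably, not the decision logic itself:

```python
def drive_step(lane_offset_m, light, obstacle_dist_m, speed_mps, speed_limit_mps):
    """One tick of a naive rule-based driving policy (illustrative only).

    Inputs are assumed to be clean, pre-processed sensor readings:
    lateral offset from lane center (m), traffic light state, distance
    to the nearest obstacle in the path (m, or None), current speed and
    the posted limit (m/s). Returns a single action string.
    """
    # Emergency case: obstacle inside a crude stopping envelope -> brake.
    if obstacle_dist_m is not None and obstacle_dist_m < 2.0 * speed_mps:
        return "brake"
    # Obey signals.
    if light in ("red", "yellow"):
        return "brake"
    # Stay under the limit.
    if speed_mps > speed_limit_mps:
        return "coast"
    # Stay in lane: steer back toward center if drifting.
    if abs(lane_offset_m) > 0.3:
        return "steer_left" if lane_offset_m > 0 else "steer_right"
    return "accelerate"
```

The point of the sketch is the same as the comment above: the state machine is trivial; making `obstacle_dist_m` and friends trustworthy enough to put in the critical path of human life is the research problem.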
In all three categories (particularly portable music devices), I would argue that Apple didn't so much make the technology possible as make it _popular_.
Likewise, with an autonomous vehicle - it isn't so much building a car that can drive itself - it's making a car that can drive itself _that I can buy_ that I get excited about.
Of course, Google has yet to deliver on the part about me buying them - but can you identify any other company that has done so much to get them driving on roads _with other people_?
A buddy of mine, when he was a kid, used to tie toilet paper across the road at windshield height then hide in the bushes. A car would come along at ~30 mph, see it, then come to a screeching halt. Some would get out and tear it down, some would stomp the gas. Point is, every human stopped for a single strand of toilet paper blocking the road.
Screeching halts have a different effect at higher speeds in denser traffic. People would be less likely to slam on the brakes for a trash bag on the freeway.
But either way it is a dangerous, controversial, split second, human risk calculation. I can't say how I would handle it in advance, much less how a robot car should be programmed to handle ambiguous inputs at highway speed.
The CEO's job is to allocate resources to initiatives that will make the company money, NOT to fool around with worthless projects like driverless cars and glasses that are just a complete waste of money and resources.
Shouldn't they be spending time and money fixing Android and the Android Market?
Either way, it's not that they are taking money and resources from Android and dedicating them to cars and augmented reality. I don't think more money and more people is what Android needs.
Maybe Google won't license it, maybe Google will give it away like they do Android. Since I don't see them manufacturing cars, those are about the only options other than canceling the effort.
Also, the people working on these projects are scientists at Google X; they are completely different from the Android team. Google has enough resources (people and money) to pursue these long-term research projects without hurting their short-term business too much.
Actually that's not strictly true. The CEO's job is to satisfy the shareholders/board of directors. Normally that coincides with making the company money, but not always - the idea of a legal duty to maximise share value is a widely held myth. In the case of Google, most of the shares are still held by the founders. If they just want the company to do cool and impractical things they have the right and the capacity to make it do so.
I don't think they've developed a physical consumer product that is revolutionary, but I believe they have the talent to do so.
I'm from the home-town of Philips, the company that became big selling light-bulbs but then expanded into a myriad of great technical innovations.
We were always joking however about how they were able to make the greatest products, but were never able to sell them.
There are so many examples of absolutely great products/innovations which were eventually canned or, if they weren't IP protected, were successfully released to market several years later by a competitor.
I really hope Google doesn't fall into this trap, because even the greatest products still need to be sold...
I expect that they have patented "location based XYZ implemented by wearable computing devices" up the wazoo before this announcement. Even if Google Glass doesn't become the market leader, they're set to reap its rewards in more ways than one.
Did you hear about how Google dogfooded Google+ for months, and when it was released to the public, people used it entirely differently than they had used it internally?
It seems to me that Google is so stuck in the mindset of "Release a Beta, assess the adoption, iteratively improve" that they don't know how to make a product that people will actually want from the start.
Chrome took a while to be adopted because everyone was still stuck with useful extensions in other browsers. Wave had neat interaction, but failed the adoption hypothesis. Google+ correctly assumed the need for a sharing-centric network to replace Facebook, but botched the actual social aspect of it.
Google is the polar opposite of Apple in this regard. They need to learn to keep things under wraps and understand the userbase for the product before it actually reaches them. In the case of Google+, for instance, it's impossible to change the sharing mechanism that has been there from the start, as fundamentally flawed as it may be.
You can't always design products from the top down. Sometimes you have to run actual experiments and collect data. The Apple aura that they can sit in an ivory tower and perfectly craft things that everyone wants will fade away the first time they release a product that bombs.
Right now, they are successful, so everyone says go emulate them. That's a shortsighted MBA mentality, that there's this formula for success, and you just need to emulate the practices of others.
But you're right that Apple's way is only one way, and it's non-obvious how to go about replicating their processes even if you wanted to.
If anything should be learned from Apple, it's that every aspect of the experience should be considered part of the product, including anticipation and expectations. That can mean a big media event unveiling a top-secret project, or it can mean launching a developer wiki before the product is even finished. Either way, managing the experiences of your customers, even pre-purchase, is part of the job.
You're already starting to see some of that now. Building excitement is great, but if you overdo it, people will be disappointed.
And you are right about refinements. There is a nice piece just about that: http://tidbits.com/article/12856
Also, nothing they showed was very exotic. Google already has walking navigation. Google already has Hangouts and screen sharing. Google already has Goggles image recognition. The only 'sci-fi' element was the always-on microphone handling Siri-like queries. They could have shown a lot more crazy stuff: face detection, augmented reality tracking, augmented reality games. They didn't show anything that isn't technically possible to achieve. It's rather mundane if you think about the features they showed as applications on your phone instead of displayed on your eyeglasses. The only thing different is the input/output device.
Apple announced the iPhone 6 months before you could buy it. The initial version didn't do much besides fixed functions and Web browsing. It wasn't until the app economy that its potential was unlocked. People speculated for months about app development, only to be told much, much later that there was none. People speculated over Flash. Over Java support. MMS messaging. In fact, I have an old post from 2007 exactly discussing all of this naysaying due to speculation: http://cromwellian.blogspot.com/2007_01_01_archive.html
I think Google has actually put forward a very pragmatic concept video that is fully realizable, albeit with latencies and delays and voice recognition accuracies that don't quite match up to the real world (but even Apple commercials show apps serving up answers faster than they do in reality)
As a developer who is working on a product that's very well suited to take advantage of this one, I'm very ashamed of Google's lack of third-party developer involvement and business vision. Sure, the product looks great, but it's not solely about products; it's about people, about delivering value for customers the world over. And the full value of this product will only be realized if Google takes a platform approach, opens up the ecosystem, and lets everybody in (including, and especially, Facebook).
Summary of the good things I saw in the demo:
- Very clean user interface
- Nice hardware design
- Interesting functionalities
- Nice integration with Google products
- Slick animations
- Seems to be pretty fast
Things that concerned me:
- Video calls in version one? What about battery life? Sometimes it's better to keep some things out of the first iteration
- No hints to integrations with other platforms
- Too much use of voice (we all know the state of the art in voice recognition and how long voice processing currently takes)
- I didn't see a single Web search in the video... how come?
All in all, this is a VERY promising product and a very important one for the whole industry. I hope Google opens up this platform so that it can reach all the momentum and followers it deserves.
I think it might take a while of having people walking around with an initial, closed version for the truly novel and exciting uses to really become obvious.
Connecting to the open Web (with powerful summarization technology) would be a killer feature on this platform. Otherwise we would be reinventing the wheel.
At any rate, the counter argument is that opening it up to 3rd party developers means letting people shit up your field of vision with ads, which might not be so cool.
In different areas, however, it could be a big improvement: if you are repairing something in a dimly lit area with both hands occupied and you need info from a manual, or you want others to see what you see and help you.
While others see only what's wrong with what Google is doing.
Also, I assume this is going to work with a data plan as well? Well at least this time I really hope Google will not give in to carriers, and compromise on nothing. Don't give carriers the power Google. When this product becomes extremely successful, as I believe it will become, they will be begging you to sell it on their network. Although I'd probably prefer something like the former Nexus store, that intended to commoditize carriers, rather than give them exclusives.
You can see it is a fairly short leap to producing all manner of interesting hardware, including AR glasses. I think Google is starting to realise that so many of their ventures are getting stymied by depending on 3rd parties for hardware that it's eventually going to be a threat to their core mission (organizing information, etc.) I bet they've had high level talks with OEMs about things like this and always get frustrated by the low margins they operate on and the need to reap immediate profits for anything to happen.
I honestly have no idea what stage this project is in. Are they in a phase that's more ideation? Are they refining in their design funnel? I think it's a bit curious to point at an example of a product by one company and say essentially 'They did it this way and were successful, clearly this is how all things should be designed'.
I'm not sure this is an issue of them asking a question analogous to "What do you want the iPhone to do?" so much as it's maybe "How annoying are people who aren't researchers and designers going to find wearing HUD glasses for 3 hours a day?" or "Will people feel uncomfortable using them in pedestrian or cycling settings given that they're voice-activated and pervasive?" Granted, there are maybe analogous products on the market that will maybe inform how people will interact with this new medium, but I have a few years' experience interacting with someone using this sort of one-eyed projective display (my former research advisor wore one), and there are some funny things involved in face-to-face interaction with someone using them that you might not anticipate. In fact, he felt that the eyepiece was often enough of a social interaction barrier that he'd take his off and tuck it into his shirt pocket when he was having a conversation.
You're not necessarily going to figure these things out in the product design lab. Maybe it's stupid of Google, but I suspect they're trying to better do design for the wild. Alternately, they're just people who don't know how to make products.
Make Dennou Coil happen.
I agree - make it happen.
So guess they might get a FEW good ideas :-)
* Guy wakes up, puts on glasses. They immediately tell him what brand of breakfast he should be eating today.
* When he looks out the window to see the weather, buildings and places are censored, and advertisements are replaced with Google ads, yet he doesn't care.
* During the text message with his friend, after he says "Strand books", Google suggests Chapters or Noble or another "featured" store, and he has to opt out of the text replacement.
* When the subway is out, advertisements for cab companies come up on his "screen". He must opt out of them to get the walking route. Google begins caching the advertisements he will see superimposed on the scenery along the route.
* The board of posters he steps up to is blurred out and replaced with a different, large "pop" label's ad, not some indie band's.
* In the book store, Google quotes lower prices for each book at "Google Books" and offers free shipping.
* The picture he takes at the end has a superimposed image of "Sponsored by Google!"
* It ends with the sunset having a superimposed text advertisement for vacations.
A less cynical version:
Apologies for getting political. But you paint an all-too-possible picture that's worryingly dystopic.
Why is that? No one's forcing anyone to wear these glasses.
Do you think Adblock plugins should be illegal? If not, should they be illegal if they display a different ad instead? What if they remove all ads, but put a banner at the bottom of each page?
Thinking that government regulation can be healthy is a horribly naive and extremely dangerous idea.
I assume you don't mean for this to apply to others areas, like food and drug inspection, vehicle safety standards, etc.
But these could be useful in situations where you don't have your hands free to hold a smartphone. For example, I would very much like GPS as a heads-up display for cycling around London. And skiers/snowboarders would enjoy them for other information (How many Gs am I pulling in this turn? How fast am I descending? Where am I?)
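Those particular readouts are cheap to derive from a phone-grade IMU and GPS/barometric altitude. A minimal sketch, assuming already-sampled raw values (the function names and inputs here are invented for illustration):

```python
import math

G = 9.81  # standard gravity, m/s^2


def g_force(ax, ay, az):
    """Total load factor in g from raw accelerometer axes (m/s^2)."""
    return math.sqrt(ax * ax + ay * ay + az * az) / G


def descent_rate(alt_prev_m, alt_now_m, dt_s):
    """Vertical speed in m/s (positive = descending) from two
    altitude samples taken dt_s seconds apart."""
    return (alt_prev_m - alt_now_m) / dt_s
```

So a skier sitting still reads 1.0 g, and a drop from 1000 m to 990 m over 5 s reads a 2 m/s descent; the engineering work is in filtering noisy samples, not the arithmetic.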
I'm rather creeped out by this project. In almost all the situations presented, he could have asked a freaking human instead of his glasses:
- bookstore: ask the bookseller, that's what he's there for;
he may even give you better recommendations
- maps in the street: ask someone
- where is your friend? call him
Ask the bookseller where to find a book? I don't do that now unless I've looked for the book myself and failed to find it, because it makes me feel lazy, and because it's not always easy to find someone without going to the cashier.
Ask someone for directions? People suck at giving directions, and they're slow at giving those bad directions. There's a reason people buy GPS units, and it's not because they're anti-social. It's because they work so much better.
Call your friend just to ask where he is when you expect him to arrive any minute? That comes off as impatient. And it's an inconvenience for your friend who has to answer his phone just to tell you he's almost there.
I think that as the way information is handled transforms and evolves the world will have to adapt.
I suspect getting floor plans of each and every store where these glasses operate wouldn't be feasible so having staff around to point people to specific books may still be useful. But I strongly believe that we shouldn't shun a new technology on the basis that it renders a current job useless. Just think of how many jobs that computers destroyed, and made.
I'm always excited when I think of what the future will look like because so much has changed with the Internet, and so much will continue to change as products like smartphones, tablets and AR systems come into play.
Imagine if someone from the 1800's could visit 2050. He might assume we are all extremely high-functioning schizophrenics.
Remember that movie 'Time After Time'? HG Wells and Jack the Ripper both visit the 1980's. Guess who fit in better? :-)
- where is your friend? just wander around. you'll find him.
Anyway, so me too. Once I know I'm gonna be thought weird for asking someone, because most people just use their gadgets to find out, I don't ask, unless I'm not worried about being weird. And I'm just making it worse, of course. Oh dear.
I thought it was going to cost a few thousand, but it's only 300/400, wow.
I can only imagine what a company with the talent and resources of Google could do with more advanced iterations of HUD technology.
I'm an EMT (and Paramedic student). There are plenty of scenarios in emergency medicine where you're following a time-based algorithm. In the example of a cardiac arrest patient, everything revolves around two-minute cycles of CPR, medications, and (if applicable) defibrillation. Even something as simple as a clock that was always superimposed in the corner of my field of vision would be great. If it kept track of upcoming medications and other actions, that would be even better.
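Purely as an illustration of how simple the timing side of that would be for a HUD to drive (this is a sketch, not a medical reference; the 120-second figure comes from the comment above, and the function name is invented):

```python
def cycle_status(elapsed_s, cycle_s=120):
    """Given whole seconds since the code started, return the current
    cycle number (1-based) and whole seconds remaining in it - the two
    values a corner-of-vision timer would display.
    """
    cycle = elapsed_s // cycle_s + 1
    remaining = cycle_s - (elapsed_s % cycle_s)
    return cycle, remaining
```

Everything hard about the feature is in the display hardware and the interaction design, not this arithmetic.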
I think this technology has applications in technical fields (medicine, mechanics, etc), long before it will be a common thing to wear out in 'normal life'.
The military has spent a fair amount of time working on something similar to what you describe for vehicular repair. I think you might find the videos off of this page ( http://graphics.cs.columbia.edu/projects/armar/index.htm ) and this page ( http://singularityhub.com/2010/01/11/augmented-reality-to-he... ) very suggestive of the kind of thing you are looking for in medicine.
I haven't seen the same level of thing yet for medical systems, though it may exist. I have seen some work on using AR for visualization of imaging (e.g. map the CT scan onto the body I am looking at). I think the sort of direct procedural guidance you describe is probably harder for people than it is for vehicular repair. At the core that is probably just a computational problem though.
Edit: To expand a bit, take an autistic adult who wants to do something, like go to the movies. It's simple for us: we wash up, get dressed, go outside, go down the street, hop on the 132 bus for 3 stops, get off, walk 2 blocks, pay for tickets, and go into theater number 5. For someone with autism, they can struggle with things like this. These glasses could provide them with visual cues based upon their location, so when they finally do get to the theater, the glasses can show them what to do next and give them that visual cue.
Currently working on an app for tablets for this sort of thing, but having it work in glasses would be simply amazing. God, what I wouldn't give to be a part of this.
It's also hard not to recognize that consumer technology shifts further towards entertainment than utility. It's bizarre and selfish how consumer technology alerts you immediately about friend requests but not local emergencies. It's bizarre and selfish how your phone can show people your apps, contacts, and calendar, but not emergency medical information. Having apps x, y, and z can somewhat solve this for an individual, but as a society we haven't progressed much with this technology. The idea that if the person beside me had a heart attack, I'd have to navigate their phone to find an ICE contact just baffles the hell out of me.
It shifts towards whatever people will pay for. There are a lot more wallets interested in entertainment than local emergencies.
No idea what the odds are, but you should try to contact google (whatever that means). Seems like they would be all about having their magic glasses helping autistic folks.
Yes, the constant interruptions wouldn't be good, but that doesn't mean we couldn't build something for them using that same technology.
Like the Segway, significant portions of the society might associate negative connotations with the device ("dorky", antisocial, pretentious etc).
I remember when a friend bought a pair of Oakley sunglasses with built in headphones. I think amongst my friends, the almost unanimous consensus was that it was not a good social statement to say the least.
This will be a challenge, but I'm sure there's a solution.
The Segway obviously lacked that practicality, as did the Oakleys. True augmented reality glasses would certainly not, but I don't know how far along they are towards that goal.
But imagine having these glasses on while performing a complicated technical procedure providing you with checklist of activities, enabling you to take pictures as you go.
Or giving a lecture/presentation without having to turn away from the public to peek at your presentation/notes.
Plus, in comparison to the Segway and Bluetooth headphones, these would make you look cool.
It just lacks a Brain Computer Interface.
I want one, of both.
It creates a very clear separation between you and the person you're facing, a complete loss of intimacy and focus, which right now is the last refuge for non-distracted human contact. It's analogous (roughly) to having someone talk to you while looking at their iPhone the whole time. In the personal (ubiquitous) scenario, I can see how glasses like these could be a violent intrusion into our relationships.
*Exceptions apply. At the beach is fine.
That might explain my lack of aversion to the idea, though. Have to look into that...
It did not show them on her face, either. That made me wonder whether it would, at some time, become socially acceptable or even the norm to have a video call where both parties do not see each other, but what the other is looking at. I think it might become the norm soon, if there are no technical challenges (how much stabilization do such images need? How much lag will that stabilization introduce? How will the experience be if the connection isn't good enough?)
That sounds like a fun way to induce vertigo in your conversation partner. I get nauseous watching people play first-person shooters on large televisions; I could not deal with following along with someone's bobbing viewpoint.
If I saw this on the streets right now I'd get a "bluetooth-headset-douchebag"-vibe
While true, don't you think that the world would be a poorer place without the research from Xerox PARC? People who were notoriously bad at shipping, but whose ideas were used to create products elsewhere.
My only point is that shipping isn't everything for research, unlike product development. So the question is really, are they looking to build a product, or research the future of computing? Perhaps both. Having said that, I do wish I could get my hands on one...
Apple's difference isn't that they iterate, it's that they don't talk about their new stuff until it ships.
The Google car won't be truly safe until it has logged 1000x the miles it has now.
Can you imagine the potential liability Google faces if they try to bring this to market without having done due diligence? This is a car, not an iPad.
It's a silly debate anyways considering all the extra hardware you'd need.
The project looks promising but it still has a long way to go.
The more so as it's adversely selecting hazardous conditions for the human driver.
My own stats, space travel excepted, are about a quarter million miles driven: three parking scrapes (insufficient clearance / hitting an unseen object) and two rear-endings (neither with major damage), both times by a trailing car driving too close, too fast.
The article says that they do have the occasional human intervention. So even if no car had been in an accident up to that point, only 7 cars had driven 1,000 miles without any human intervention.
You'd have to have a lot more data (such as a much higher number of miles logged without human intervention and 'all weather' exposure) before you could make such a grand claim.
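The "how much data is enough" point can be roughly quantified with the rule of three: if you observe zero incidents over N miles, an approximate 95% upper confidence bound on the incident rate is 3/N per mile. A minimal sketch (the human-driver baseline figure below is purely illustrative, not from this thread):

```python
def rule_of_three_upper(miles):
    """Approximate 95% upper confidence bound on incidents per mile,
    given zero incidents observed over `miles` miles (rule of three)."""
    return 3.0 / miles

def miles_needed(target_rate):
    """Incident-free miles needed before the 95% upper bound on the
    incident rate drops below `target_rate` incidents per mile."""
    return 3.0 / target_rate

# Illustrative: if human drivers averaged one crash per 500,000 miles,
# you'd need ~1.5 million intervention-free miles before you could even
# claim (at 95% confidence) to match that rate.
print(miles_needed(1 / 500_000))
```

Which is why 7 cars at 1,000 intervention-free miles each says almost nothing yet: a 7,000-mile sample only bounds the rate at roughly one incident per 2,300 miles.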
Siri isn't perfect, but it's a great start. Apple will almost certainly improve Siri in the next iteration of iOS.
I don't know either. It'd be nice for some explanation.
Kidding apart, it's nice to see Google innovating on many fronts :)
What about liability? Say the Google car goes on sale and gets into a ton of wrecks due to a software bug. Who is liable: the owner of the car, or Google?
However the display part doesn't really matter, what really matters is ubiquitous environmental recognition, voice recognition - not just of yourself but of everyone around you, no matter which languages they are currently speaking - and (three dimensional) image recognition. This will completely change how people interact and explore the world and each other.
How long before Google collects and mines all this video like they did with Google Street View, but in real time? Combined with the huge advances in facial recognition, the privacy implications are frightening. It reminds me of the Black Mirror episode "The Entire History of You".
In the future will people who want to opt out of face recognition and tracking have to wear identifiers*, e.g. QR codes, on their person when talking to Google Glasses users, much like Google's wifi policy? 
As for facial recognition, anyone who can write a few lines of Perl could easily scrape social network profile photos and start matching pictures of people on the Internet to names. It's trivial. And doing this sort of thing manually is popular: search for "human flesh search engine".
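To illustrate why the matching step is considered easy, here is a minimal sketch of near-duplicate photo matching using a perceptual "average hash". Everything here is illustrative: it assumes photos have already been scraped and decoded into small grayscale pixel grids (real code would use an image library, and a proper face-recognition model rather than this crude hash):

```python
def average_hash(pixels):
    """Hash an 8x8 grayscale grid: one bit per pixel,
    set when the pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def best_match(query_hash, profiles):
    """Return the profile name whose photo hash is closest to the query.
    `profiles` maps name -> precomputed average_hash of the profile photo."""
    return min(profiles, key=lambda name: hamming(query_hash, profiles[name]))
```

Two photos that differ only slightly produce hashes a few bits apart, so a nearest-hash lookup over a scraped table of profile photos is already enough for crude name matching; that is the sense in which the matching side is trivial.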
Privacy in public just doesn't exist.
Or perhaps a combination - a semi-transparent mask that constantly displays shifting patterns, like Rorschach's mask.
I really want these to work, and I can see how in limited situations (manually turning them on to look something up, etc.) they could. But I really deeply doubt they can work the way people are imagining.
The features and functions shown in that video were basically removing the pain of pulling your phone out of your pocket. It was, essentially, a more elegant system than strapping your phone to your face.
It's a good start for the idea of putting processing power in front of your vision, but I wouldn't buy it until it actually augments my reality. If I look at my friend it should show upcoming appointments between us. If I look at a concert poster it should bring up links for articles about the band, reviews, links to buy tickets. That would be augmentation.
I think most people will want to try this, myself included. But personally I draw the line here and I think many others will. I would never want to own a pair. This will be a niche product for those people who already use Bluetooth headsets, who are a minority.
Unless something significant develops between now and release, I expect the average person's reaction will be to wonder why in the world they would need one. What problem would it solve for them?
That's not to say that that question can't be answered, but I do think it's fair to say that it hasn't been answered, at least not yet.
You could have said the same about any number of technologies that are now ubiquitous (and many people did say those same things).
"Who wants an iPhone? Crappy virtual keyboard and the only apps are from Apple."
"Cell phones will never catch on. Anyone can call you at any time? How obnoxious. And you pay even when you didn't call? Ridiculous."
"No one will pay for TV with ads."
"No one will pay more for TV without ads."
Times change, and the "I'd never buy this" statements become suppressed memories once everyone else owns it and you feel left out. What matters is how much benefit the product delivers. People will get over their hangups if the value is high enough.
I'm not saying that this will be an awesome product (or that it will ever even come to market). But if it is, people will buy it, and they will use it.
I'm not sure which way it'll go, but I can imagine that happening.
But it could also be a major altering of norms. Are women comfortable with the idea that any guy looking at them on the street can capture their image?
It seems like the biggest difference is that if a ton of people were wearing these all the time, police couldn't tell everyone to turn them off. If they looked like normal glasses, they wouldn't even know.
The difference is speed. You'd always have a camera aimed at whatever you were looking at. If you saw a mugger grab a purse, you might not have the presence of mind to fumble with your phone, but you might be able to press a button or say "take picture" (or whatever). I'd imagine that a decrease in response time could be a real deterrent.
I know there've been spy cams forever, but I'd argue that merely owning one (let alone openly displaying it) would mark you for suspicion among a lot of people and I don't think they're widely used. I'm not saying it's the end of the world, but there are some new privacy norms that will be established if these are to take off.
With social networks and the internet, managing one's "online identity" and privacy is increasingly important. Everything you say online might be stored and retrieved later. It's hard enough when you are an adult. But what about a kid or teen, who is still trying to find their identity, and might say things they'll regret later on?
What happens when real life is logged?
While they're asking for public input into what the glasses should be able to do, it's clear the real value is that all Google employees can wear them: snippets from the teams working on other parts of your project can be streamed to you, so you can coordinate with your team. This should really streamline the release process, letting groups of Googlers get far more done than they would working alone and relying on email or other social media tools to coordinate. Dynamic hangouts, streaming the feeds of those around you, collective action against problems, responding immediately and on target with just the right resources. A real breakthrough in organizational efficiency.
The only hangup is that the legendary mobility of people within Google, able to switch jobs almost at will, means that trying to hard-code names into the stream is really time-consuming. The current workaround is to just use numbers for the groups, so if you're in the group adding Picasa support, which is group number 15 on the p pages, you might be referred to as 3 of 15...
Well, the answer's obvious: it's not where the money is. It's sad that actual advancement of our society only gets funding by piggy-backing on our increasingly intrusive means of entertainment.
One thing I did think about after seeing the concept photos on the Google+ page (https://plus.google.com/111626127367496192147/posts) and in this NYT article: what about those of us not gifted with 20/20 vision? But then again, they are probably just concept images and shouldn't be taken as much more than that.
There are reportedly dozens of other shapes and variations of the glasses in the works, some of which can sit over a person’s normal eyeglasses.
The glasses, the way they're designed, are smart: the display is not in between you and the world but in a fixed spot, covering only part of the field of view. I definitely wouldn't want a computer screen between my eyes and "the real world" in normal day-to-day life. Believing what my eyes see is important to me, and with another layer in between you'd never know whether you were reacting to an overlay or the real thing. Especially if the images are created with input from a remote source, which opens all kinds of interesting possibilities: superimpose a picture of a traffic accident on the glasses of someone who is driving and you could very well end up with the real thing.
Seeing is believing they say and the response to visual input can be very reflexive. Audio alone can be distracting enough.
So Google seems to have that part done right; it'd be great to play around with these.
I suspect that this is his profile:
However, there are some clear reasons why they chose this path:
- they only needed to build the display tech; everything else could come later.
- the lifelogging aspect could get hyped, making the whole thing sexy.
But still, I am a bit disappointed. Now let's move on to hand and object recognition!
Put it on your finger (or if a cheap and small RFID chip, glue it onto your fingernail, or even paint each of your fingernails with invisible slightly radioactive nail polish), pair it with your glasses and you can use your finger(s) as a pointing device in the AR overlay.
Depending on input resolution, the movements could be subtle -- you don't have to raise your hands as if conducting an orchestra although that would be fun to see. Wiggle your fingers to control your headset.
Or how about keyboard trousers? Invisible areas on the outside of your trousers that you can tap as a small ten-key keyboard.
If these glasses look even half as good as the prototype and are hackable at all, then the future is near. And looks matter. I don't have a problem with sticking out, but in many situations, having a bunch of wires and contraptions on your head is sticking out too far, especially nowadays, when people think Lite-Brites are WMDs.
Very, very excited.
I also want a version that shoots laser beams into my eye.
Combine it with a chording keyboard (a FrogPad, though they don't seem to be making them anymore, or "The Twiddler") and I can be happy when commuting.
There are very few visible pieces of technology that we carry and display in public (probably only glasses and watches). Visible wearable technology has a long history of being either geeky or douchebaggish (bluetooth headsets). I'm suspecting the current incarnation of Google's glasses would elicit both perceptions.
But I agree that if they got this far already, they can probably make them even more minimalistic in version 2.0 or 3.0. But really, they look very modern already. Kind of like some of the best looking bluetooth headsets out there - just longer.
There were two key problems - one was the weight (resulting in wearer fatigue), and the other was the refresh rate which caused nausea during prolonged use.
Neither of these is a significant barrier, and given Moore's Law I wouldn't be surprised if these things are as ubiquitous as smartphones in 5-10 years.
Tom Furness developed similar devices decades ago: once out of the military lab, this was a key project of the Human Interface Technology Lab at UW in Seattle in the early 1990s.
Updating for smaller chips and components contributes to the current form-factor which includes the front-facing camera not present in the earliest versions.
The upside of course is that this might finally get some traction with Google's budget and efforts.
I might be too old to get excited about stuff like this. I suppose if it helped with my work I would use it, but otherwise I really enjoy looking directly at the world.
One of the things I LOVE about Apple is this: if they released this video, I know I could expect to see the product on store shelves in a month or at most a few months, sometimes even as little as a few weeks.
With this... only God knows when we will ever see it.
I want something like this, maybe in a pair of contacts, but even contacts can be irritating. How about a genetically engineered eyeball?
Regarding genetically engineered eyeballs: what if they get a virus?
Otherwise, are they much more than a head mount display accessory for Android?
Head mount displays for mobile computers have been in industrial use for some time - e.g. http://www.stereo3d.com/hmd.htm
Hopefully what Google announces at I/O will be shockingly advanced.