For geeks, probably not. For normals? It's a pretty big deal. Normals weren't entirely convinced by the mouse. They learned it. Sort of. But many never even grokked the whole 'click vs double-click vs right-click' thing.
And the mouse was at least a fairly consistently-behaved indirect pointing device. Tapping and swiping an inconsistent, indirect touch surface, particularly after having learned direct-manipulation touch-screens, is not going to go over well.
Keep in mind that for a Glass-style wearable to make sense to a casual user, it has to be more efficient than simply taking out a cell phone. And for people for whom HUDs are a natural advantage, it needs to be usable efficiently without requiring their hands.
So seemingly minor annoyances can quickly add up to a verdict of "not worth it". You've probably got about a half-second of grace before people go back to their phone.
> "Once Thalmic's MYO is fully operational, I don't think navigation will be much of a problem."
Hand navigation might be easier, but the social problems will get massively amplified. Nodding/tapping/talking is 'weird' enough. Throw in some finger/hand/arm movements and this thing's never leaving the den of specialized technologists.
And requiring hand gestures can kill its usefulness for the very (hands-full) people who most naturally benefit from a HUD.
To me, Glass is looking more and more like Microsoft's stab at tablets. It's an early attempt that's going to make a class of specialized users very happy. But it's not going to be in casual use on trains, in coffee shops, etc. Its supposed efficiency gains are largely hamstrung by interactivity problems that are just annoying enough to send most users back to the alternative tools.
The real test for Google is whether they address these problems or pretend they don't exist -- as Microsoft did -- until someone else comes along and eats their lunch with a far more modest solution.