Contrast that with a few Christmases ago, when I left my iPad out and found my dad watching aeroplane videos on YouTube. He didn't know what an iPad was (or YouTube, for that matter). Yet he still managed to find something he could appreciate on it with no guidance.
We tech-savvy people couldn't figure out how to turn Google Glass on. Those with glasses could barely see it even when it was.
If I'm going to wear something on my head like a nutcase, I would hope that it has been designed for _usability_, not _discoverability_. Making everything idiot-proof, or at least "didn't bother to read the manual"-proof, leaves us with only devices suited for idiots and people too lazy to read the manual.
EDIT: That said... I'm not going to wear something on my head like a nutcase, because no matter how cool it is, I'm too shy/vain/self-conscious to do so. The only people willing to wear Glass, I would bet, are people who are willing to learn to use something to get the most benefit out of it (I'd bet there's a huge overlap with Emacs and Vim users).
Devices "suited for idiots and people too lazy to read the manual" have, in the past decade, driven massive growth in computing, made it more accessible to more people, changed the landscape of the entire world, and even fueled a few revolutions.
But by all means, when people need to read a book just to use a device, we'll consider it a badge of honor instead of a failure of design.
The touch devices I assume you're referring to are intuitive because as a human you're naturally good at using your hands to make things move around.
Push a button and it acts like a button, push a thing that looks like a sheet of paper and it moves in a way that you would expect a paper on your "real" desk in front of you to move.
They're "idiot proof" because we've been trained to use them most of our lives.
>But by all means, when people need to read a book just to use a device, we'll consider it a badge of honor instead of a failure of design.
Glass is something we interact with through an entirely alien interface, and it's potentially an extremely powerful tool. I'm not talking about a podcast app, but a head-mounted computer you might potentially be using for hours at a time.
Unless you eat a lot of acid, you're probably not familiar with how to interact with imaginary things that float in space before you.
To me, "they had to tell me how to use it" is a much weaker complaint than "it was easy but tedious to use" for something that sits on the side of your head.
This blog post, and the ones it links to, do a much better job of exploring this than I could here:
(somewhat relevant and fun top gear segment on the evolution of car interfaces) http://www.streetfire.net/video/125-top-gear-first-modern-ca...
Given that Glass doesn't have a launcher like most Android devices, the real challenge will be in actively using Glass. Passive use, whereby you are connected to applications using the Mirror API, means that you just receive notifications pushed by remote services, and if you don't respond to a card, it just gets pushed down the stack. You can navigate backwards to find those cards again, but just as you don't respond to every Facebook post, you can just let them flush out and choose to ignore it.
The two problems that have to be resolved are how you control "native" applications built using the GDK, and how you control the flood of extraneous information when you might subscribe to tens or hundreds of services that send you messages through the Mirror API.
While you can control which notifications you receive, it would be very easy to subscribe to "important" services and still feel overwhelmed by notifications that aren't particularly interesting, just to make sure you get the ones you do care about. Culling this stream into something more useful is where some of the AI research may also pay off. It's a difficult balance to strike today, but as this tech evolves, I have no doubt this problem can be solved too.
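To make the culling idea concrete, here's a toy sketch of the kind of scoring filter a client could run over incoming timeline cards. Everything here (the function name, the card shape, the scoring model) is invented for illustration; the actual Mirror API just pushes cards and leaves any prioritization up to the services and the client.

```python
import time

def cull_notifications(cards, source_weights, threshold=0.5, now=None, half_life=3600.0):
    """Return only the cards interesting enough to surface.

    cards: list of dicts with 'source', 'timestamp', and 'urgent' keys.
    source_weights: how much the user cares about each source (0..1).
    Recency decays the score exponentially with the given half-life (seconds).
    """
    now = time.time() if now is None else now
    surfaced = []
    for card in cards:
        weight = source_weights.get(card["source"], 0.1)   # unknown sources score low
        age = max(0.0, now - card["timestamp"])
        recency = 0.5 ** (age / half_life)                  # 1.0 when fresh, halves each hour
        score = weight * recency
        if card.get("urgent"):
            score = max(score, 0.9)                         # urgent cards almost always surface
        if score >= threshold:
            surfaced.append(card)
    return surfaced
```

A real version would presumably learn the weights from which cards you actually open, which is exactly where the AI research mentioned above would come in.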
In retrospect drag and drop, pinching, double clicks, long holds, swipes, acceleration etc are all obvious. But at one point they weren't.
- I disagree that the screen isn't bright enough, and I haven't had the issue where you have to look at something dark to see it.
- I also disagree about "dangerous while driving." In fact, I would argue quite the opposite. It's SO much safer than looking at your phone for directions. The fact that any information I need is a simple, fast glance away is much better than holding up my phone and looking back and forth between that and the road.
(Side note: it's also fantastic for directions while biking. Previously I'd have to do this scary "check my phone while biking" thing, which was no good at all.)
- Looking at the screen has felt incredibly natural to me. You're not looking at the extreme top right. I don't have a lot more to say here other than YMMV, I guess.
As far as what other people can hear: they can definitely hear something, but not super well. It also depends on how quiet it is.
Judging from the preliminary specs and information, it's a much nicer product: full developer docs available, an open system where you can deploy any apps you like, a very nice hardware platform and to top it all off, a much nicer price than Google Glass.
I also find the explicit design much more appealing — they don't try to hide the fact that there is a display on your face.
Still a spiffy idea.
Hardware we could have built for the last 10 years, and really unobtrusive. You'd need better UI and software than with video, since information needs to be more closely tailored for a lower bitrate channel, but I think Google (or a smart startup) could do it.
99% of the reasons I use video are because I need to do the filtering and postprocessing in my head. If I had complete trust in a great software agent, I could just let it tell me what to do, vs. showing me enough data to make a decision. e.g. for driving, you can give directions by voice (if you're a talented codriver) which do NOT require any visual information to the driver during a high-speed rally. Almost no car nav systems are that smart, but if Google can build a self-driving car, they should be able to make an awesome codriver/navigator.
Extends to almost anything. I don't need to see a picture of someone and a dossier; just remind me of the most critical facts as needed, by voice.
This should be just as good as having a clone of yourself, or an entire team of ops people, watching/listening to what you do, and giving you voice prompts, just like a video game, or being the President on TV in a live debate, or whatever.
The hardware is trivial; it's all software and back-end processing, which Apple sucks at and Google/Amazon/Startup should rock at.
To expand: big companies may have a need for very talented individuals, like Peter Norvig, who can drive innovation across big teams and have a very precise mind, and all the humility required. But in the case of Kurzweil, it looks more like he is invited to play the clown and amuse the assistance, than to lead anything concrete.
...it looks more like he is invited
to play the clown and amuse the assistance,
[rather] than to lead [to] anything concrete.
Some great scientists, some ho-hum and some that are there mostly for political or other reasons. I mean Wozniak in 1979? What had he done at that point to be in the same list with Knuth or even Bill Joy?
Since you invoke the list, are we to expect the same great stuff from Kurzweil in Google, as we got from Wozniak post 1979, Richard Stallman past 1990 and Stroustrup past 1993?
Woz was largely responsible for designing the Apple I & Apple II, both of which played a pretty significant part in starting the entire PC industry. Seems like a pretty spectacular achievement to me.
It should support something more reliable than SCO, and an input touchpad like Glass's. It would also make sense to have some of the initial NLP processing in the device, or even a full ARM processor like the one in Glass, running arbitrary applications.
You would be able to make calls by offloading to the cell phone, using a more advanced profile than the SCO headset profile.
If I could remove the device and place it on the car visor to use as a noise-cancelling speakerphone like the Motorola Bluetooth unit I have, even better.
I pitched exactly that idea to Logitech & Inria in Switzerland about 15 years ago and I agree with you, we could have done that long ago and it is one of the best interfaces that you could have. Think 'personal assistant'.
The general idea was to simply open a voice channel to a server farm somewhere that listens to your ambient audio, picks up cues/clicks, and responds with audio.
The project foundered on privacy concerns, and on speech-decoding issues that Inria foresaw would be hard to overcome.
Not sure what to make of this statement. The design is beautiful, but I don't see why that would be an artifact of mass production. The early prototypes for Glass were much sexier in terms of build integrity (optics, for example) than the current mass-produced models. One of the big challenges, in fact, was lowering costs for mass producibility. However, yes, the design is wonderfully intelligent. You'll notice some optical tricks when you look at Glass. For example, it looks as if the frame takes up a portion of the sides, but in reality there is circuitry hidden behind it; by looking at it you'd never know :)
The interface is not intuitive. It is actually very difficult to use the first time, for seemingly no reason. ... I would have expected more design attention to have been spent on interacting with the software.
I really feel this is a non-issue. Yes, you don't know how to use it when you first get it, but after a few instructions from a friend, I navigated the entire interface just fine for about an hour without needing additional help. Mostly, I just had to learn about the well-disguised touchpad. I've used a lot of silly software and hardware, and Glass is not one of them. It's a new product class, and it's going to have a new interface.
Glass doesn't communicate with you very much, and when it does, it doesn't use audio. It makes heavy use of the screen when possible. When navigating Glass, you can rarely speak selections. The only way to fully navigate the interface is to use the touchpad by holding your hand up near your face.
Navigation is definitely an issue. However, the Glass team planned on external devices being used to navigate Glass. Once Thalmic's MYO is fully operational, I don't think navigation will be much of a problem.
The battery life is dreadful. After ten minutes of use, the battery level reported went down by at least 8%. The owner told me that it would probably last about two hours with constant use. (This is hopefully a temporary handicap that will be improved in the future, but I find it hard to consider even this level of battery life good enough for a device that is sold.)
This is supposed to be fixed. Especially if you use video recording, the battery will drain really quickly. The idea is that by the final product, it will last about 12 hours with passive use (kind of like a cell phone).
For geeks, probably not. For normals? It's a pretty big deal. Normals weren't entirely convinced by the mouse. They learned it. Sort of. But many never even grokked the whole 'click vs double-click vs right-click' thing.
And the mouse was at least a fairly consistently-behaved indirect pointing device. Tapping and swiping an inconsistent, indirect touch surface, particularly after having learned direct-manipulation touch-screens, is not going to go over well.
Keep in mind that for a Glass-style wearable to make sense to a casual user, it has to be more efficient than simply taking out their cell phone. And for people for whom HUDs are a natural advantage, it needs to be efficiently usable without requiring the use of their hands.
So seemingly minor annoyances can add up quickly to a determination of "not worth it". You've probably got about a half-second of grace before people go back to their phone.
> "Once Thalmic's MYO is fully operational, I don't think navigation will be much of a problem."
Hand navigation might be easier, but the social problems will get massively amplified. Nodding/tapping/talking is 'weird' enough. Throw in some finger/hand/arm movements and this thing's never leaving the den of specialized technologists.
And requiring hand gestures can kill usefulness for those (hands-full) people that most naturally benefit from a HUD.
To me, Glass is looking more and more like Microsoft's stab at tablets. It's an early attempt that's going to make a class of specialized users very happy. But it's not going to be in casual use on trains, in coffee shops, etc. Its supposed efficiency gains are largely hamstrung by interactivity problems that are just annoying enough to send most users back to the alternative tools.
The real test for Google is whether they address these problems or pretend they don't exist -- as Microsoft did -- until someone else comes along and eats their lunch with a far more modest solution.
I expect high quality from hand produced jewelry, art, etc. Knowing nothing else, I wouldn't expect high build quality and beauty from handmade consumer electronics. I assumed the OP was just saying that it doesn't look like a prototype or hack.
I was able to use a head mounted (and tracking) 3d display with a computer back when ROTT was popular (circa 1994). It's a novelty at best, and a headache inducing nightmare at worst. I didn't see the magic then, and I certainly don't see the magic now.
A seamless HUD, on the other hand, has the potential to do great things for your interaction with the environment. Based on reviews, however, it's still years out.
Actually, no, this one didn't. It was fantastic technology. The resolution was the same as the monitors on the other computers, and the tracking was very fast. The only downside to the technology itself was that it was heavy.
The problem was that it was only a window, and a relatively small window at that. It's a glorified 3d monitor that denies you vision of your surroundings (and works very poorly with anyone with glasses).
Worse? You still need some other sort of controller (which you can't see when you're wearing them), and unless you're standing up, you can only look a limited amount around you. In any environment other than wandering around a virtual world, it's a curiosity at best.
FPS games: the head moves too slowly to be an accurate aiming method (and think of your neck muscles afterwards), so you still need a mouse as your primary aiming device, and a keyboard to move.
MMO games (perhaps the ideal target for these) require extensive use of the keyboard and mouse (neither of which you can see with the device on), and are rarely played in first person view.
I just can't really see a market outside of VR, and then not without a whole new class of controllers and tactile feedback methods. It's the first (and arguably the easiest) part of a new class of technology, which when combined could be interesting. Until then... meh.
The final one will have full HD, but even then I think it needs a quadrupling in res before it's even acceptable.
They're very different devices. The Rift immerses you in 3D worlds. Every way you tilt your head immediately changes your viewpoint. My perception of Glass is that it tries to overlay information on top of reality and add more senses to humans. They are similar means to completely different ends.
I suppose it makes sense, every technology seems to take about 20 years to reach the consumer market. I wonder if it has anything to do with patents lasting for 20 years.
It has the drawback of not integrating very well with most current games, because the game's UI itself needs to be drawn in 3D perspective. For example, if you don't do that, you'll see two crosshairs when you focus your vision in the distance. And if you focus on the crosshair, you'll see everything else in double vision.
But that's minor. You should experience Google Streetview with it. You can google Eiffel Tower, look up, and almost feel like you're there.
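A rough sketch of why the flat crosshair doubles: a 2D overlay drawn identically to both eyes has zero disparity, so it reads as being at optical infinity, while the thing you're aiming at sits at a finite depth. To place the overlay at that depth, each eye's copy has to be shifted inward by a small parallax. The function below is purely illustrative; the names and the focal-length-in-pixels model are assumptions, not anything from the Rift SDK.

```python
import math

def hud_pixel_offset(ipd_m, depth_m, focal_px):
    """Per-eye horizontal shift (in pixels) to make a HUD element appear
    at a given virtual depth, instead of pasting it flat on both eyes.

    ipd_m:    interpupillary distance in meters (~0.063 is typical).
    depth_m:  virtual distance at which the element should appear.
    focal_px: display focal length expressed in pixels (assumed value).
    """
    half_angle = math.atan((ipd_m / 2) / depth_m)  # convergence half-angle per eye
    return focal_px * math.tan(half_angle)         # pixel shift toward the nose
```

The shift shrinks as the virtual depth grows, which is why distant HUD elements need almost no adjustment but a close-up crosshair needs a noticeable one.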
I remember watching members of the sailing world cup wearing glass equivalents like what, 10 years ago?
First Rift equivalents are like 30 years old or older.
Rift is now riding the wave of affordable and good solid state accelerometers or gyros, created by car and cell phone industries.
Turning a novelty that is unappealing to most people into a mass market product is damn well revolutionary. Not many companies are able to pull that off. (Sometimes that transition is more gradual and evolutionary - I would argue that was the case with digital cameras - but sometimes it's actually a single product that makes this transformation happen. I would say the iPad is the prime example for that. Sometimes it's something between evolution and revolution.)
Yes, inventions do matter - but turning mere technology into products people actually want also matters. Both can be evolutionary, both can be revolutionary. Sometimes (though probably rarely) it's possible to do both in one step, often it's not.
If the Rift catches on with more than a handful of gamers and starts a sustained era of affordable, high-quality, low-latency head-tracking 3D HMDs, it will be a revolutionary product - and it doesn't matter even a little bit if something a bit like it existed 30 years ago.
The Rift gives you an instant 3D experience whereas Glass sits there idle most of the time and waits for it to be used as a tool when needed.
So to really assess the value of Glass you would have to use it for a few weeks, while the Rift only has to be used for a few minutes to really experience what it is all about.
I hope Google Glass is an experiment, because voice reco isn't particularly important outside of when you're driving. When I get tired of texting and consider voice reco, two things hit me (and others; I've done solid research on this):
1. Voice reco would be slightly better now that I'm tired of texting, but not much.
2. It is nowhere near socially acceptable to be speaking to no one. That's why you only see old people using Bluetooth headsets - how is that not an unmistakable sign? You can't just be speaking to no one.
The next step in communication has to be either low-input "aware" communication (aka the check-in) or communication that somehow otherwise reads your mind.
Voice control isn't there yet. You could have a card deck in your timeline that consisted of individual steps of a recipe, but you can't use your voice to advance to the next card; you have to swipe. Hopefully that will change.
So I can't see how replacing the phone-and-person with wearable-and-computer-agent is going to move social change along any further or faster.
It is not about the thing in itself. It is about the "f*ck you, my computer is more interesting than you" attitude.
With a tablet or big phone you could integrate everything into a social context. You could search something socially for the entire group to benefit. Show pictures for the entire group. Whatever.
Or you could look like a social retard that uses the computer to isolate herself.
And I find that I am frequently alone - and there you don't have to worry about the taboo. I think that voice interfaces have a lot of potential in non-social settings. Personally I dictate much of my morning emails when out taking the dog on a walk.
This is the "big thing". The battery issues will be a minor footnote if the majority of your use of Glass is it providing information as you need it, rather than you fussing with it trying to get the information you need. Google Now allows for this to happen - the better it becomes, the better Glass becomes.
I've never quite figured out what's up with Google Now—I have it on my phone, and I hear it's supposed to be some sort of (semi) intelligent assistant type of thing, but so far it doesn't seem to do very much at all, much less anything "intelligent."
On my phone, it gives me weather info every day (which is nice), and dutifully tells me how to get to work in the morning and return home at night (thanks, Google Now!) but... other than that, it basically does nothing.
Am I doing something wrong? Is there some setting I need to set ("intelligence: on") that will make Google Now suddenly start telling me what upcoming movies I might like, or interrupting my dinner to tell me about that great TV show I'm about to miss?
There are three points in that sentence that exclude me.
Apple makes sure, before you ever use it, that almost every single thing akin to what Dustin pointed out is a non-issue.
Google gives you an alpha/beta product and wants you to find the flaws for them, and maybe help shape it.
Using bread as an analogy, Apple gives you a beautiful tasty loaf ready to eat; Google gives you some dough and tells you to start kneading if you want some bread. With Apple you know you are getting a fantastic loaf and with Google you get to help bake.
Each has their benefits and drawbacks.
Apple is like one that makes the absolute best vanilla and chocolate ice creams you've ever had. And maybe in the back, they're working on an amazing mint-chocolate-chip, but they haven't quite found just the right supplier of peppermint extract, and until then, nobody gets to try it.
Google is like that crazy experimental ice cream shop that has all sorts of weird flavors available at any time, and sometimes they don't quite work out (like "pancakes") but sometimes they're huge successful breakthroughs (like "bourbon and cornflakes"), and they have no way of knowing which is going to be which until the public starts to taste them.
After all, the flip side is, "or announce that a flavor that you paid $600 for barely 14 months ago is end-of-life and will not be getting any updates".
The Google ice cream shop is actually a vending machine.
But it gives you the opportunity to take photos that you'd have missed with less available hardware.
It does feel like the head mounted camera in Glass is hardware that hasn't found its killer app yet. Yet it is easy to imagine lots of possibilities (auto face recognition so you never forget a name, photo search with commands like: "Identify this leaf" and it takes a picture and tells you the plant name, the input side of a WordLens style app, etc), which I think is exactly why it is early adopters and developers that have them now.
I am much more excited about self-driving cars. Maybe Google could transfer some of the tech to heads-up displays for drivers.
That's what I thought, though oddly, I haven't seen this obvious fact mentioned anywhere else. It can be fixed in future I'm sure - but in the short term it might cause some problems:
"OK Glass. Tweet 'I'm an idiot'." - someone shouting in the subway
Am I just missing something? I suppose Google could be mirroring what Apple did with the first iPhone and letting the tech stack settle down before opening things up to third-party development.
(C'mon, who doesn't want to make a game where you turn your head to aim and say "pew pew pew" to fire?)
I'm not sure what they will do about battery life. Maybe if they use more efficient chips, like the Cortex A7 or Cortex A53 instead of the Cortex A9, battery life would increase. But I think the main reason is the small battery, and the reason they can't increase it much more is that they don't want it to be heavy. So they can only really try to optimize the device with a more efficient chip, more efficient voice recognition, more efficient video recording, and so on.
Not true! Completely wrong! You don't have to focus on the Glass prism itself. I doubt anyone here even can. It would hurt very soon and you would have an extreme cross-eyed look when trying to see the display.
The display is set up for a focal plane somewhere in front of your eyes. Google mentioned it is equivalent to a 25" screen at 8 feet distance.
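For scale, that "25-inch screen at 8 feet" claim works out to a visual angle of roughly 15 degrees across the diagonal. A quick back-of-envelope check (plain trigonometry, nothing Glass-specific):

```python
import math

def angular_size_deg(size, distance):
    """Visual angle (degrees) subtended by an object of a given size at a
    given distance, both in the same units."""
    return math.degrees(2 * math.atan((size / 2) / distance))

# A 25-inch (diagonal) screen seen from 8 feet (96 inches):
diagonal_deg = angular_size_deg(25, 96)
```

That comes out just under 15 degrees, which matches the impression people report: clearly noticeable when you glance up, but nowhere near filling your field of view.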
I'm cautiously optimistic about the product, I just hope that Google can nail these issues before they release it, otherwise wearable tech might get a bit of a bump back in terms of adoption.
You certainly don't focus at the glass cuboid right in front of your eyes.
Yes, that's a problem!
PCs, due to their general purpose nature, cannot be shaped to fit any one situation perfectly. They also require a lot of interaction to tell them which particular function they're supposed to perform in a particular moment.
I'm sure there will still be wearable PCs, but it's going to be transitional.
And not of the "beta coming in Fall" kind, the check-back-in-ten-years kind.
If, say, Apple were also presenting such pre-demo-quality stuff, they would have flooded the media with BS. E.g., they could have shown their iPad prototypes in 2004, in a similar crappy form.
Google only presents this half-baked shit to show us it "innovates". I'd rather know if they can deliver.
I'm not sure Android is a good example of "Google making their money from usage".
For one, have the Android development costs been recouped even? Including the Motorola buyout.
Second, the more successful companies (Amazon with the Kindle, and Samsung, which accounts for 95% of all Android sales IIRC) are either already locking Google out or in the process of doing so. The Fire's already doing it, and Samsung is investigating its own path there too.
And if they could replace the web search with something else, they probably would -- but there's nothing much at the moment, which is why even iOS uses Google as the default.
NB: given that I have never heard of this happening, I suspect there's a hole in my knowledge of optics, but it would be awesome if it were possible.
That last link is a DIY (terrifying) project. Your eyes, your choice. Retinas are delicate.
EDIT: One company doing stuff is "Microvision" - (http://www.microvision.com/index2.html) but they've changed to pico-projectors.
It's not natural, but everyone who already wears glasses knows what this is like. It's just part of my software by now.
If I physically move my head, nothing I am looking at is moving with me, i.e. the computer screen does not move to where I look.
The point is that you look through glasses, yet the Google Glass screen makes you focus on it, not through it, and hence it moves with your general head movement.
Or have I misunderstood what you said?
I still fail to see the point. Maybe because I never try to look at my own glasses...
edit: When I want to look over the rim of my glasses, I still move my head while looking up with my eyes, otherwise I wouldn't be able to look at what I was looking at before.
Would Google have been smarter to not include a camera, at least initially?
Is the tech there to make this a viable product yet?
taken from Google PR?
He just wants readership/attention/validation, hence the opinionated and slightly snarky blog post.
Given confirmation bias, that'd basically skew all reviews about products to the positive - hardly useful or objective.
I've never had any trouble using it in sunlight myself either. I half suspect he just saw the screen was a little faded and went, "woot, another complaint I can make," instead of actually using it. A lot of his other issues are similar. Other people voice-controlling it is almost never an issue, since you are the one who taps it or looks up right beforehand - it isn't on all the time. He doesn't know the shortcuts to skip to Googling or skip to the menu, either.
Who is the only person with the AI and data sets to power things like Google Now and really make form factors like Glass or Watch practical? You don't hear anyone talking about that. They're talking about SDKs or users or app stores or whether or not I can set my own default browser on my phone.
Google Now is the big deal here that everyone glosses over when talking about Glass. They're already putting it into Google's web interface (for the desktop, yeah), Chrome (though it makes more sense in Google.com than in Chrome to me), Glass, and I'm sure more and more services will continue to feed into it. They announce more every few weeks.
"There are some positives in the product. It's probably likely to appeal to certain vertical markets. The likelihood that it has broad appeals is hard to see."
I think that's actually a fairly well-balanced statement. He also says this about glasses and watches:
"Nothing that's going to convince a kid that's never worn glasses or a band or a watch or whatever to wear one. At least I haven't seen it. So there's lots of things to solve in this space."
This isn't saying that Glass will never work or that it's a bad idea, just that as it currently stands it needs a lot of work to have mass appeal, which is probably true. I, for one, am looking forward to seeing what Glass becomes in two or three iterations' time.
"Nothing that's going to convince a kid that's never worn glasses or a band or a watch or whatever to wear one."
I'm kind of embarrassed by Tim's comments. He won't even make a non-pessimistic comment about watches, which - I guess everyone could be wrong, but it seems they'll introduce one soon - makes it seem as if he's already pushing the notion that "the Apple iWatch is magical and people want to wear it, not the three competing Android watches that better integrate with your phone and have more features".
Who knows, it's taken HTC years (and losing tons of market share) to design fashionable looking hardware. Maybe Apple will win purely on design and Tim's comments will be vindicated. Either way, I hope he's not putting all his eggs in that iPhone basket, unless iOS7 just prints money.
with the AI and data sets to power things like Google Now
And I've used Google Now, and I fail to see what makes it so incredibly advanced that no one else could possibly replicate it. Maybe I'm missing something, but it doesn't seem to hold a candle to, say, Watson.
I've used this example before, but Google Now already recommends destinations based on what I search in Google Maps at my desk before I head out. It also knows what is on my "grab when I go by the grocery store" list because I quickly use Google Now and say "Google: note to self, pick up milk" and it's in there.
[soon] So now, when Google's autonomous car is driving me home and I'm watching the road attentively, Google Now pops up in Glass and offers to redirect the car to the grocery store that adds the least detour to my route (or the one that has the best deals listed with Google Shopping, or that has the best coupons, or that Google Wallet has my rewards card for). And it reminds me to grab my milk. [/not yet]
And then on the way out of the store it again offers directions back to my house.
This is all there (except the Keep intuition bit that I sincerely hope is coming and will really prove Google Now). (And sadly, and I'm whining, but I didn't get into the super special Glass program :[)
Sorry, I've rambled, back on point: I don't doubt that those you list have the power and smart people to build the tech, but I don't think they have the data. I think Apple Maps was but a small example of the advantages Google has in areas of data.
What Google is hoping for is to tease us into wanting something that no one has asked for.
Google is not alone in this historically.
Let's step back a decade, when we were all told that Microsoft (by Microsoft) had the biggest R&D budget in the world.
What products came out of that R&D? ZERO.
Apple on the other hand spent years developing the iPhone with iPod profits and never let on what they were going to reveal. What happened? A mobile revolution.
Google is telegraphing R&D to try and leapfrog Apple in the public mind while having no viable consumer product in the pipeline, much less on retail shelves. That's tantamount to product suicide.
I think Glass, or something like Glass, could be important, but I think that Google's approach of pushing alphaware in front of the masses will, if it really is viable, force Google to play catch-up to Apple or someone else once again.
Google has never, not once, in its history, had a successful launch of a hardware product.
Sure, Apple brought us the iPad Mini because the Nexus 7 failed so hard. The Nexus 4 and 10 are also known as big failures that customers hate.
Google has never had a successful launch of a hardware product, not even once.
It was nevertheless a successful hardware product because users and reviewers loved the Nexus devices and they sold everything they produced.
Regardless, it's hardly clear why a community of people intent on founding new businesses would consider a history of successful product launches all that important.
Uhh, what, huh?
You mean to tell me all those movies, cartoons, and thousands of sci-fi books where people explore a future with such objects, all these people exploring and creating, or describing what a future might be like...
... are not actually asking for it at all.
Instead, they're just waiting for what Apple will tell them they wanted all along?
Sorry, even as someone who has an MBP, iPhone, iPad, I find this perspective hilarious and borderline offensive.
The point of research isn't products (and you can ask famous labs like PARC and Bell if you don't believe me), but MSR has done a bunch of stuff: http://en.wikipedia.org/wiki/Microsoft_Research