The other big example is Wii, and now what Microsoft has been demoing in their Natal project (assuming it ships in the near future). In these cases, the key factor is that you are operating a device from a distance. That allows the use of a wider range of human motion in controlling the device, now that the technology to support it exists.
I suppose the lesson is that form factor and innovative interface design cannot be decoupled.
This is actually a favorite nitpick of mine. I don't really buy the argument that established standards dominate even after they've been proven inferior. First of all, the Dvorak vs. Qwerty situation isn't as clear-cut as it's commonly made out to be: while there's lots of anecdotal evidence, the few independent studies (i.e. those not done by Dvorak himself) aren't very conclusive. At the very least, they don't show a 5x improvement.
The cost of switching to the metric system likely isn't as great as commonly believed, and there's no harm in running both side by side for a while; there are many precedents for this in other countries. While, as a European, I consider the imperial system clearly inferior, I think the real reason the U.S. doesn't adopt the metric system has more to do with emotional attachment and xenophobia than with cost. The point here is: if Americans genuinely consider the imperial system to be better, switching would not be an improvement.
The electric plug situation is rapidly improving, in part because of homogenization pressure, in part because electronic devices don't particularly care what kind of voltage you feed into them. (Be careful with adapter plugs and hairdryers though, you might start a fire if the voltage is too high.) Also, this is a case where none of the existing standards is inferior to any other, so it's not even an example of a bad design becoming dominant.
In short, my point is that if a new convention is clearly better, it's usually possible to switch gradually, and that this is usually done. The effect of "dominant design" is greatly exaggerated.
Indeed. Reason Magazine did an in-depth article on this a few years back.
Ultimately, it is indeed not as clear-cut a situation as it's made out to be, either in Dvorak's or Qwerty's favor.
The specific rant was aimed mostly at the desktop. There's lots of energy behind moving iPhone-ish UIs to the desktop, but they will mostly fail there for ergonomic reasons.
Sci-fi movies are horrible predictors of the future, especially when it comes to UI. They're fun and inspiring, which is good, but using them as something to copy is ridiculous.
The third-world argument is a good one. If people can skip past our desktops, they dodge the dominant-design issues. But then it also has to be cheap. Cell phones are a great story: they're the first phones much of the world has ever had, and cell towers are cheaper than landlines.
> jorsh wrote:
> I'm paying attention to the applications,
> not the OS windows.
He's totally right. You rarely need a paradigm shift to achieve whatever it is you imagine the effect of your work to be. That was my point at the end.
Computers now: http://www.digitaltrends.com/wp-content/uploads/2009/11/appl...
Sure, the basics are the same. However, nothing about these changes is "boring." I would argue we have nothing to worry about: in 5, 10, 15 years, we'll have some pretty cool gadgets to play around with.
Take a look at cell phones: everything is starting to use touch screens. Tablets are going to become popular, and eventually that will seep into desktops and laptops.
Don't worry, we'll be fine.
Voice recognition isn't viable either. Do people really want to talk to these machines all day, in offices, subways, and coffee shops? Doubtful.
In other words, we believe there's room for two types of devices: viewing devices with limited input (like tablets) and input devices (like PCs).
To give a personal example, the most obvious use case here is the e-book. If you could have every book you've ever owned at the touch of your finger, that would be worth something even if you were limited to basic input on the device. Now combine e-books with all your movies and music, and you start to see how there's room for devices with limited input.
(For the record, we've been using ELO Touch Screens to test usage scenarios until the perfect tablet shows up.)
On the other hand, e-book readers have found success by doing things laptops can't, like lasting for a week on a charge, being legible in bright sunlight, and being extremely lightweight. I don't see the same happening for movies (laptops are better for movies since you don't have to hold them up) or music (iPhone/Android wins for that).
Also, e-book readers are replacing books, so while you're carrying another device around, it's probably a net reduction in size and weight for most people.
Here's a concept that would sell like hotcakes: a laptop with a dual-mode LCD/e-ink 10", touch-sensitive, swivel display that can be held like a tablet, 24-hour (or more) battery life, half an inch thick, and less than a pound. I'm not sure it's technically possible right now, and I'm sure it'd be quite expensive if it is, but when such a device becomes widely affordable, these niche tablet/e-book use cases will just be part of your laptop.
When I used to use handwriting recognition, back around when OS X first came out, I found it to be far more comfortable than typing, and I'm a fairly decent touch-typist (~100wpm).
You can have tactile feedback with touch screens.
I think the kind of paradigm shift Gruber was getting at is away from things like hierarchical 'file' systems and proprietary, application-centric file formats.
Personally I think the biggest missed-innovation in the iPhone SDK was not having Newton-esque app-agnostic shared data stores.
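For illustration only, here's a minimal sketch of the idea in Python, with hypothetical names that aren't the Newton's actual API: every app writes typed records into one shared pool, and any other app can query them back without knowing which app created them.

```python
from dataclasses import dataclass

@dataclass
class Record:
    kind: str             # e.g. "contact", "note", "appointment"
    data: dict            # schema-light payload, readable by any app
    source_app: str = ""  # informational only; not needed to query

class SharedStore:
    """One pool of records shared by all apps (a Newton-'soup'-like idea)."""

    def __init__(self) -> None:
        self._records: list[Record] = []

    def add(self, record: Record) -> None:
        self._records.append(record)

    def query(self, kind: str, **filters) -> list[Record]:
        """Any app can fetch records by kind, regardless of who wrote them."""
        return [
            r for r in self._records
            if r.kind == kind
            and all(r.data.get(k) == v for k, v in filters.items())
        ]

# A mail app writes a contact; a calendar app can read it back.
store = SharedStore()
store.add(Record("contact", {"name": "Ada", "email": "ada@example.com"}, "Mail"))
print(store.query("contact", name="Ada"))
```

The point isn't the implementation; it's that the store, not the app, owns the data, so a new app can build on records it didn't create.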
Though I still believe there's a strong chance that some unforeseen UI innovation will appear and change the game.
That said, I don't want to discount some new innovation taking over :).
On the other hand, voice recognition for specialized vocabularies is happening. I've noticed a progressive replacement of "push 1 for X, push 2 for Y" with voice interfaces in support calls. My car has voice recognition for the sound system that actually works, e.g. "Play Track Shelter" plays a song of that name. While there are still freaky behaviors, these work well in that they don't waste my time when I'd rather be doing something else -- like having my problem fixed or listening to Icon of Coil. Because they are for specific narrow domains, these applications of voice recognition coexist with my typing, touch, et al.
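As a rough illustration of why narrow domains work so well (a sketch with a made-up command list, not any real recognizer's API): once the grammar is only a handful of commands, even a garbled transcription can be snapped to the nearest valid one.

```python
import difflib
from typing import Optional

# The whole "grammar": the system only ever has to pick among these,
# so even sloppy audio leaves little room for ambiguity.
COMMANDS = [
    "play track shelter",
    "next track",
    "previous track",
    "volume up",
    "volume down",
]

def interpret(transcription: str) -> Optional[str]:
    """Snap a (possibly mis-heard) transcription to the closest known command."""
    matches = difflib.get_close_matches(
        transcription.lower(), COMMANDS, n=1, cutoff=0.6
    )
    return matches[0] if matches else None

print(interpret("play trick shelter"))  # -> "play track shelter"
print(interpret("what's the weather"))  # -> None: outside the domain
```

The same constraint is what makes the phone-tree replacements tolerable: the system isn't transcribing open speech, just classifying it into a short menu.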
Put another way, new UIs and input devices seem to work best when they have a path for incremental adoption. It's rare to see a complete overnight change. Even the iPhone, as amazing and revolutionary as it is, built on pre-existing experience with touch screen phones. (Maybe "reacted against" is a better term than "built on.") Plus the iPhone still has a keyboard, even if it's a neat touchscreen one instead of a hardware one.
Just a simple example: the US had wide usage of 1G mobile phones while China built a huge 2G GSM market; now that the US is going from 1G directly to 3G, China should prepare for 4G.
I don't think the current wave of globalization has facilitated this cycle yet, and peer competition can spur more innovation. It's like a cold war without the hostility.
@IsaacL: Well, future UIs will be like the ones in sci-fi movies, but with usability in mind. I mean, a UI that doesn't let you do what you want to do is pretty useless.
(i.e., did you start designing a UI and end up with something sci-fi-like, or did you take a sci-fi interface and try to make it usable?)
As for the usability question: most of the UIs I design are usable from the start, in the sense that they don't suffer from designer syndrome.
If you want to see a sci-fi concept that suffers on usability, check out Mozilla's Aurora concept: http://www.adaptivepath.com/aurora/
I'd love to see any data at all about this, or even a few non-cherry-picked examples.
You also wrote: "The electric plug situation is rapidly improving".
I have seen zero evidence for this. I've yet to travel to any other continent without this being a problem.
Anyway, as recently as 20 years ago, you'd routinely need adapters when traveling from one European country to the next. That's hardly an issue anymore, and I consider it a vast improvement.
You say you have problems when traveling to "other continents", without saying which continent you start from. I'll assume you're from North America, because then you're indeed a bit out of luck, as this map shows: http://upload.wikimedia.org/wikipedia/commons/d/d7/WorldMap_... However, that's a North American problem, not an international one.
As to your other point, I didn't cherry-pick the examples; I addressed the examples from an article that argues the exact opposite.
No, sir, that won't be distracting in the least.
OS X was candy-colored. A decade ago. The iPhone lost the candy; Mac OS X has some vestigial buttons and scrollbars but is otherwise very much moving towards battleship grey. But its past, present, and future aren't exactly boring.