Hacker News
Gestural interfaces: A step backwards in usability (jnd.org)
43 points by ssp on May 29, 2010 | 20 comments



"But the place for such experimentation is in the lab."

That's a silly, elitist statement. No matter how much testing you do in the lab, the ultimate test is in the hands of end users. Lots of dumb WIMP UI ideas, like modal dialogs, MDI windows, and nested menu hierarchies, survived the labs at Xerox, Apple, and Microsoft, but fell out of favor over years of real user experience. On the other side of the coin, lots of now-popular desktop and web UI elements were conceived by run-of-the-mill developers far from any research campus. As the new-car smell wears off touch devices, plenty of touch UI conventions will get drop-tested in the wild, the stupid ones will get discarded, and the good ones will get copied by everyone and become taken for granted. This will happen far faster in the cutthroat app marketplace, under the eyes of millions of customers, than in any HCI lab.


Judging from the evidence of my browser, and every other common browser out there (including new versions of IE!), I'm not so sure the SDI vs. MDI debate is completely settled. I think a lot of the text editors for the Mac (TextMate, maybe?) have tabbed interfaces, too; vim and emacs certainly do.

(I don't know anything about mobile browsers, so the statement may well be false for them.)


Tabs are still SDI. MDI was a full windowing environment inside the program itself, with icons, sub-windows, and its own separate, distinct notions of maximization, minimization, and window position, and... well, it was as if it were designed to be as confusing as possible, adding a second unnecessary dimension of potential confusion to virtually every core concept in the UI. Tabs are just tabs: nowhere near as complex on any level, while still accomplishing pretty much everything you could want.


I love Don Norman's books, but you have to understand:

He's a consultant. He runs a usability lab. That's his business.

Get it?

This is not a pure critique, although it certainly has valid points. It's a drum-up-money speech. You don't drum up money by saying "Yes, it has these problems, but hey, when personal computers were only 3 years old, we all had to swap floppies to move to the second half of our program!"

Being fair and pragmatic doesn't get you massive clicks and money, unless you are very, very skilled at delivering pragmatic advice in a delicious and entertaining package. (And Norman is not.)

EDIT: And neither does it do to undermine your business by admitting that, regardless of these flaws, people love the crap out of the damn things.


A good first step may be to have a standard way to show the gestures that are currently enabled (e.g. pinch, swipe). That way, you at least know what is significant to the current app in the current mode, and don't have to try a bunch of things.

Though in some respects, it is unfair to criticize touch technology based on its initial use in phones, because those screens are small. Some of the article's goals aren't practical on a small screen, e.g. GUIs have "discoverable" interfaces because of always-visible menus, but a phone can't do this without seriously reducing usable screen space.

On the other hand, the touch screen of an iPad probably should fix more of these problems. For instance, an iPad app can probably have a discoverable "menu bar" like a Mac app does, and not really hurt usable screen space.


A friend worked with a high end image manipulation program that had "pie" menus. You held the menu button down on your tablet, and after a pause a circular menu popped up at the mouse position. You dragged in the direction of your choice. If there were submenus for that choice, another circle popped up and so on.

Because of Fitts's law, this was far more efficient than standard popup menus: the 'target' for any choice was larger and therefore much easier to hit.
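For reference, Fitts's law models pointing time in terms of the distance to the target and the target's width; the constants a and b are device- and user-dependent. This is the standard Shannon formulation, not something from the comment itself:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

A pie menu wins on both terms: the menu pops up at the pointer, so the distance D to every slice is small, and each slice's effective width W is large, whereas a linear menu puts distant items at large D behind small click targets.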

The gestural part was this: If you held the menu button down and immediately started to move the mouse, it would select the choice without bothering to display the pie. Advanced users quickly learned the sequence of zigs and zags required for their most common menu operations and preferred using the pie menus to keyboard shortcuts or standard menus in the window panes.

As the OP points out, this system provided discoverability as well as consistency. The same gesture always produced the same result.
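The drag-to-direction mechanics described above can be sketched in a few lines. This is an illustrative reconstruction; the function name, slice count, and dead-zone radius are all assumptions, not details of the program the comment describes:

```python
import math

def pick_slice(dx, dy, n_slices=8, dead_zone=10.0):
    """Map a drag vector (dx, dy) to a pie-menu slice index.

    Returns None while the pointer is still inside the dead zone,
    i.e. before the user has committed to a direction.
    """
    if math.hypot(dx, dy) < dead_zone:
        return None
    # atan2 gives the drag angle; negate dy because screen y grows downward.
    angle = math.atan2(-dy, dx)
    slice_width = 2 * math.pi / n_slices
    # Rounding (rather than truncating) centers each slice on its direction.
    return round(angle / slice_width) % n_slices

# A drag straight right selects slice 0; straight up selects slice 2
# (with 8 slices). Submenus would simply repeat the same test on the
# next segment of the stroke.
```

The dead zone is what lets the same code serve both novices (pause, see the pie, then drag) and experts (stroke immediately, never see the pie), since the selection logic only depends on the drag direction.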


For ease of searching: this variant of pie menus is called Marking Menus. PDF of the paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93....


That's what I've found too. I'm also using something similar to pie menus for my main project.


That is an incredibly good idea - it deserves to be widely copied.


I had to laugh: when pushing the 'menu' button in Android:

> (The keyboard does not always appear. Despite much experimentation, we are unable to come up with the rules that govern when this will or will not occur.)

So apparently these 'user interface experts' have been unable to figure out what I and everybody else I know managed to understand implicitly without any particular effort: that a long press is a different action from a short press. No wonder they think it is a poor UI - they must be utterly confused all over the place. But honestly, this is a fairly common UI concept in space- and button-limited devices. I somehow doubt they are really as 'expert' as they suggest if they haven't come across this before.
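The short-vs-long distinction the comment describes is just a duration threshold on one physical button. A minimal sketch, not Android's actual implementation; the 0.5 s figure and function name are illustrative:

```python
LONG_PRESS_THRESHOLD = 0.5  # seconds; a common toolkit default is ~500 ms

def classify_press(down_time, up_time):
    """Classify a button press by how long it was held down."""
    held_for = up_time - down_time
    return "long" if held_for >= LONG_PRESS_THRESHOLD else "short"

# The same physical button yields two distinct actions:
# a short press might open the menu, a long press raise the keyboard.
```

The usability argument in the article is precisely that nothing on screen advertises this threshold, so users who never happen to hold the button long enough never discover the second action.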


I've had a N1 for several months now and I didn't know that pressing the menu button for a longer time is different from pressing it briefly. Nor did I notice the same concept for other GUI elements. Sorry, but you are wrong.


Users will have to acquire some basic vocabulary on their own, or the language of interaction just won't be rich enough to talk with them.

They only need a few simple things, like long touch or double click. Anybody born after these conventions are introduced will never remember how they learned them, they'll just be second nature. Early adopters may suffer a bit before they stumble on them.


Do you also claim to be a user interface expert and write articles about it?

This technique is pervasive throughout Android. You should try long pressing on every UI element - especially the home button and the home screen - to see all the different actions you can do.


The point is not "can a user interface expert understand this menu", the point is "can an average user understand this interface".

I agree that once the different behavior emerges, it might not be that hard to find the source. But I am not sure - the keyboard never appeared for me because I never pressed the menu button for a longer time. Now I tried it and it worked, but I already knew why because of the article.

On the other hand, I use my iPod Touch only very rarely. I think there is supposed to be some kind of tab bar (I know this from reading about PhoneGap), and I managed to bring up a menu once in a game I have installed by tapping around on the screen wildly. I was unable to bring it up a second time, so no, it is not always easy to identify the source of some activity.


Which Android version?

Do you mean the "menu" hardware button, or the "home" hardware button?

On Android 2.0.1, holding the menu button does nothing special; long-press only does anything on the home button (-> task switching). Holding the menu button just shows me hotkeys for menu items.


Key criticisms:

"gestures cannot readily be incorporated in menus: So far, nobody has figured out how to inform the person using the app what the alternatives are."

"Accidental activation is common in gestural interfaces, as users happen to touch something they didn't mean to touch. It may not even be obvious what action got you there. If a finger accidentally scrapes an active region, there is almost no way to know why the resulting action took place. The trigger was unintentional and subconscious."

"When a mouse is clicked one pixel outside the icon a user intended to activate, the mouse pointer is visible on the screen so that the user can see that it's a bit off. [On a touch screen] Users frequently touch a control or issue a gestural command but nothing happens because their touch or gesture was a little bit off. Since gestures are invisible, users often don't know that they made these mistakes."

"When users think they did one thing but "really" did something else, they lose their sense of controlling the system because they don't understand the connection between actions and results. The user experience feels random and definitely not empowering."


"Bold explorations should remain inside the company and university research laboratories and not be inflicted on any customers until those recruited to participate in user research have validated the approach."

So much for innovation.


It’s hard to tell what they’re actually saying here other than that a newly popular, relatively open platform has some bad UIs. Why’s it gotta sound so grouchy?



Nice background color. And they presume to talk about usability?



