
You've hit upon one of the fundamental problems of gesture recognition systems: They are not self-revealing, and they don't support reselecting and browsing.

There is no easy way for them to reveal to the user what gestures are possible (short of showing a palette of commands, including animations of gestures that are directionally sensitive), and no clear and wide separation between distinct gestures, so they're difficult to learn and remember, and their ambiguity leads to a high error rate. And they're not suitable for applications where mistakes are hard or costly to undo (like real time games, nuclear power plant control, etc.).

For example, handwriting recognition has a hard time distinguishing between "h", "n", and "u", or "2" and "Z", so systems like Graffiti avoid lower case characters entirely, and force you to write upper case characters in specially contrived, non-standard ways, in order to make them distinct from each other (widely separated in gesture space). It's important for there to be a lot of "gesture space" between each symbol, or else gesture recognition has a high error rate.
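
To make the "gesture space" point concrete, here's a rough sketch of a naive nearest-template stroke matcher (this is NOT Graffiti's actual algorithm, and all the names are hypothetical): it resamples a stroke to a fixed number of points and picks whichever stored template it's closest to.

    import math

    def resample(points, n=32):
        """Resample a stroke (list of (x, y) tuples) to n evenly spaced points."""
        total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
        if total == 0:
            return [points[0]] * n
        step = total / (n - 1)
        out = [points[0]]
        pts = list(points)
        acc = 0.0
        i = 1
        while i < len(pts):
            d = math.dist(pts[i - 1], pts[i])
            if d > 0 and acc + d >= step:
                # Interpolate a new point exactly one 'step' along the path.
                t = (step - acc) / d
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                out.append(q)
                pts.insert(i, q)   # keep measuring from the interpolated point
                acc = 0.0
            else:
                acc += d
            i += 1
        while len(out) < n:        # guard against floating point shortfall
            out.append(pts[-1])
        return out[:n]

    def stroke_distance(a, b):
        """Average point-to-point distance between two resampled strokes."""
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def recognize(stroke, templates):
        """Return the name of the template nearest to the stroke in gesture space."""
        s = resample(stroke)
        return min(templates, key=lambda name: stroke_distance(s, resample(templates[name])))

The smaller the distance between two templates (like "2" and "Z"), the less sloppiness it takes for a stroke to land nearer the wrong one, which is exactly why Graffiti's contrived upper case strokes are designed to be far apart.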

https://en.wikipedia.org/wiki/Graffiti_(Palm_OS)

Graffiti is an essentially single-stroke shorthand handwriting recognition system used in PDAs based on the Palm OS.

https://medium.com/@donhopkins/gesture-space-842e3cdc7102

Gesture Space

The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.
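
In code, one point in that gesture space is just the touch-down position, the path taken (empty for a tap), and the lift-off position; a multi-touch gesture is several of those in parallel. A rough sketch, with hypothetical names:

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Stroke:
        down: tuple[float, float]                      # where the finger / button went down
        path: list[tuple[float, float]] = field(default_factory=list)  # intermediate points; empty for a tap
        up: tuple[float, float] | None = None          # where it was lifted / released

    @dataclass
    class MultiTouchGesture:
        strokes: list[Stroke] = field(default_factory=list)  # parallel strokes, one per finger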

OLPC Sugar Discussion about Pie Menus: Excerpt About Gesture Space

I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
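
The selection rule is simple enough to state in a few lines. This is just a sketch to illustrate the math (a hypothetical function, not any particular toolkit's API): only the angle from the menu center to the pointer matters, so every direction maps to exactly one of the n items and no region of gesture space is left undefined.

    import math

    def pie_item(center, pointer, n_items):
        """Map a pointer position to an item index: 0 = straight up, then clockwise."""
        dx = pointer[0] - center[0]
        dy = pointer[1] - center[1]                        # screen coordinates: y grows downward
        angle = math.degrees(math.atan2(dx, -dy)) % 360.0  # 0 degrees = up, 90 = right
        width = 360.0 / n_items
        # Center each slice on its direction, so "straight up" is the middle of item 0.
        return int(((angle + width / 2.0) % 360.0) // width)

With 8 items, pie_item((0, 0), (100, -100), 8) is 1 (the north-east item), and any pointer position maps to some item; a real pie menu would also treat a small region around the center as "no selection", as sketched further below.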

Pie menus are more predictable, reliable, forgiving, simpler, and easier to learn than gesture recognition, because it's impossible to make a syntax error, it's always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.

[...]

Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.

They also provide the ability of “Reselection” [6], which means that as you’re making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.

Compared to typical gesture recognition systems, like Palm’s Graffiti for example, consider the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and the recognizer only accepts well formed gestures.

There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).

But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), which gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, or move around to correct or change the selection.
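
Here's a hedged sketch of that interaction loop (hypothetical names, no particular GUI toolkit): on every pointer move the highlighted item is recomputed from direction alone, so you can browse and re-select in-flight; releasing inside a small central dead zone cancels, and moving farther out only gives more angular leverage, never a different meaning.

    import math

    CANCEL_RADIUS = 12.0   # pixels; releasing inside this ring means "cancel"

    def track_pie_menu(center, events, n_items):
        """events: iterable of ('move' | 'up', (x, y)) pairs. Returns an item index, or None if cancelled."""
        highlighted = None
        for kind, (x, y) in events:
            dx, dy = x - center[0], y - center[1]
            if math.hypot(dx, dy) < CANCEL_RADIUS:
                highlighted = None                     # back over the center: nothing selected
            else:
                angle = math.degrees(math.atan2(dx, -dy)) % 360.0
                width = 360.0 / n_items
                highlighted = int(((angle + width / 2.0) % 360.0) // width)
            # (a real implementation would redraw the highlighted slice and its label here)
            if kind == 'up':
                return highlighted
        return highlighted

For example, track_pie_menu((100, 100), [('move', (100, 40)), ('move', (160, 40)), ('up', (160, 40))], 8) browses from item 0 to item 1 and returns 1, while releasing back over the center would return None.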



