
Visualizing Fitts's Law (2007) - kevin
http://particletree.com/features/visualizing-fittss-law/
======
oneplane
IIRC this is one of the reasons the UI on the Mac opted for a fixed context-
dependent menu bar at the top of the screen instead of the per-window one used
by Windows (and Java).

It's basically 'fling your pointing device at the top' and 'go left or right
to get the button you want'. Due to the lack of borders/stops, this would be
harder if it was sandwiched between a titlebar and window content.

~~~
205guy
The reasoning is that the mouse stops at the border of the screen no matter
how far it is moved, making the effective target huge (it extends off the
screen), so it's easy and quick to hit. This would hold true even with large
screens, unless you dial down the acceleration of the mouse for fine
control--as others have pointed out.
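
The edge-stop effect can be made concrete with the Shannon form of Fitts's
law, MT = a + b * log2(D/W + 1). A minimal sketch (the constants a and b and
the pixel sizes are illustrative, not measured):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    # Shannon formulation: MT = a + b * log2(D/W + 1).
    # a and b are device/user constants; the values here are placeholders.
    return a + b * math.log2(distance / width + 1)

# A 20 px deep menu bar 800 px away, floating mid-screen:
floating = fitts_time(800, 20)

# The same bar at the screen edge: the cursor clamps at the border, so the
# effective target depth is effectively unbounded (big stand-in value here).
edge = fitts_time(800, 2000)

print(floating > edge)  # prints True: the edge target is predicted faster
```

The distance D is the same in both cases; only the effective width W changes,
which is why slamming the pointer at the edge works at any screen size.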

But the issue is Apple broke the whole mechanism with hot corners. Now if I
move fast anywhere near a hot corner, it gets activated. And now the menu bar
near the corners is tiny and hard to hit with a "huge" hot corner right nearby
(the hot corner gets the benefit of the infinite off-screen target). I find
the same problem with full-size browsers (with tabs along the top); I'm
always hitting the hot corners instead of the top-left and top-right tabs. I
guess I can always change my corner settings.

Additional gripe about the top menu in macOS: the biggest fault I've found is
that it can be active for an app whose windows are hidden or that doesn't
currently have any windows, thus creating a mismatch between what you see
(other windows) and what is active (responding to keyboard shortcuts, for
example).

~~~
lloeki
> But the issue is Apple broke the whole mechanism with hot corners.

Well I always hated hot corners, and anyway by default they're disabled on
macOS.

------
kurthr
The engineer's way of thinking about Fitts's law is as a human control system.
We control the motion of our hands using feedback (visual, tactile,
proprioceptive). The servo time response to a step function (a new location to
click) of that feedback loop depends on the required accuracy and allowed
overshoot. The larger the target, the higher a velocity/acceleration you can
use to hit it without missing. You learn very quickly that large objects (like
the edge of the screen) allow much grosser movements than a single-pixel
target... and the farther you have to go, the longer the time at a given
tracking velocity.
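
That servo view can be sketched as the classic iterative-corrections model:
each feedback cycle closes a fixed fraction of the remaining distance, and
movement stops once you're inside the target. The gain and pixel values below
are arbitrary illustrations:

```python
def corrections_needed(distance, width, gain=0.5):
    # Each closed-loop cycle covers a fixed fraction of the remaining
    # distance; the movement ends once the hand is within the target.
    remaining = distance
    steps = 0
    while remaining > width / 2:
        remaining *= (1 - gain)
        steps += 1
    return steps

# Same 800 px reach: a 200 px target needs fewer correction cycles than a
# 20 px one, which is where the log(D/W) term in Fitts's law comes from.
print(corrections_needed(800, 20), corrections_needed(800, 200))  # 7 3
```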

What is at least as interesting is the cognitive load of tracking/pointing and
clicking/chording. Mental load and apparent time appear to be the reason why
typing can be slower than a menu system yet feel faster. Similarly, people
will report feeling like a trackpoint (the IBM keyboard nipple) takes
longer than a mouse even when they're actually faster at hitting targets.
Presumably, this is because they have to track the cursor to know velocity and
position, while a mouse or touchpad uses your body's knowledge of hand
position/velocity that is missing from a force based input.

What you're used to feels right in any case.

------
ProxCoques
And after you've read that, give yourself The Quiz!

[http://www.asktog.com/columns/022DesignedToGiveFitts.html](http://www.asktog.com/columns/022DesignedToGiveFitts.html)

------
deventis
Laws of UX has a nice overview of the different laws that exist in user
experience! Fitts's law: [https://lawsofux.com/fittss-law](https://lawsofux.com/fittss-law)

~~~
seanmcdirmid
Principles of interaction design has even more entries and is quite detailed.
Unfortunately, it’s a book that isn’t free!

~~~
whoisjuan
I think you meant: "Principles of Interactive Design" by Lisa Graham. Right?

~~~
seanmcdirmid
No. I got the name completely wrong:

[https://books.google.com/books?id=3RFyaF7jCZsC&printsec=fron...](https://books.google.com/books?id=3RFyaF7jCZsC&printsec=frontcover&dq=universal+principles+of+interaction+design&hl=en&sa=X&ved=0ahUKEwjlma7GwfbZAhUSz2MKHWjdA3AQ6AEIJDAA#v=onepage&q=universal%20principles%20of%20interaction%20design&f=false)

Universal Principles of Design. The book is like an encyclopedia of design
principles.

------
todd8
On some early graphical computer user interface, I can’t remember which one,
one could specify that the mouse cursor would “wrap” to the opposite edge. It
was like the ultimate non-Fitts’s-law configuration. I hated it when I tried
it; I would lose the cursor and not be able to find it.

~~~
radiorental
Just to be pedantic: while somewhat related, that's not Fitts's law.

Fitts's law is specifically about targeting.

The 'feature' to wrap the pointer may compound targeting issues, but it's
secondary. Again, Fitts's law is about the distance to and size of a target.

~~~
todd8
You’re right. What I discovered was that I would lose the mouse cursor because
I couldn’t quickly move it to an edge without visually tracking it all the way
there, and once the cursor crossed the edge it broke visual continuity by
jumping to the opposite side. Today’s multimonitor configurations have the
same problem to some extent: they have so much area, with small
discontinuities at the edges where the cursor jumps to a different monitor.

------
macqm
Windows 8's Start UI was designed to take advantage of this. Theoretically it
was great: when you open the Start menu, the mouse pointer is in the
bottom-left corner; tiles close to the pointer are wide and tall, tiles far
from it are smaller--wider at the bottom, narrower at the top. Hot corners
were supposed to be easily accessible (effectively infinite target size). Yet
it was a failure, because users were not familiar with it; it broke their
habits.

~~~
Doxin
The whole problem with the tiles is that they were much too big. Bigger
targets are easier to click, but targets farther away are again harder to
click. The targets in the Start menu were plenty big to begin with anyway.

------
simula67
Previous discussion:
[https://news.ycombinator.com/item?id=11208463](https://news.ycombinator.com/item?id=11208463)

I posted this in the previous discussion too; a much simpler explanation:
[https://www.youtube.com/watch?v=E3gS9tjACwU](https://www.youtube.com/watch?v=E3gS9tjACwU)

------
rwmj
Clicks edge of screen above NetworkManager icon ... Confirms it's still broken
in XFCE 4.

------
walterbell
How does this work with radial menus and touch interfaces?

~~~
DonHopkins
Pie menus benefit from Fitts' Law by minimizing the target distance to a small
constant (the radius of the inactive region in the menu center where the
cursor starts) and maximizing the target area of each item (a wedge shaped
slice that extends to the edge of the screen).

They also have the advantage that you don't need to focus your visual
attention on hitting the target (which linear menus require), because you can
move in any direction into a big slice without looking at the screen (while
parking the cursor in a little rectangle requires visual feedback), and you
can learn to use them with muscle memory, with quick "mouse ahead" gestures.
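
To put rough numbers on that, compare the Fitts index of difficulty,
ID = log2(D/W + 1), for the two layouts. The pixel sizes below are made-up
stand-ins, not measurements from the CHI'88 study:

```python
import math

def index_of_difficulty(distance, width):
    # Fitts index of difficulty (Shannon form), in bits.
    return math.log2(distance / width + 1)

# Linear menu: item k sits roughly (k+1) * 20 px below the cursor, and
# every item is the same ~20 px tall row.
linear = [index_of_difficulty(20 * (k + 1), 20) for k in range(8)]

# Pie menu: every slice is the same short ~40 px hop from the center, and
# a wedge is far wider than a menu row (say ~100 px effective width).
pie = [index_of_difficulty(40, 100) for _ in range(8)]

# Pie items are uniformly easy; linear items get harder down the list.
print(max(pie) < min(linear))  # prints True under these assumptions
```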

[http://www.donhopkins.com/drupal/node/100](http://www.donhopkins.com/drupal/node/100)

An Empirical Comparison of Pie vs. Linear Menus

Jack Callahan, Don Hopkins, Mark Weiser (+) and Ben Shneiderman. Computer
Science Department University of Maryland College Park, Maryland 20742 (+)
Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303. Presented at
ACM CHI'88 Conference, Washington DC, 1988.

Abstract

Menus are largely formatted in a linear fashion listing items from the top to
bottom of the screen or window. Pull down menus are a common example of this
format. Bitmapped computer displays, however, allow greater freedom in the
placement, font, and general presentation of menus. A pie menu is a format
where the items are placed along the circumference of a circle at equal radial
distances from the center. Pie menus gain over traditional linear menus by
reducing target seek time, lowering error rates by fixing the distance factor
and increasing the target size in Fitts's Law, minimizing the drift distance
after target selection, and are, in general, subjectively equivalent to the
linear style.

[http://www.donhopkins.com/drupal/node/98](http://www.donhopkins.com/drupal/node/98)

The Design and Implementation of Pie Menus -- Dr. Dobb's Journal, Dec. 1991

They're Fast, Easy, and Self-Revealing.

Copyright (C) 1991 by Don Hopkins.

Originally published in Dr. Dobb's Journal, Dec. 1991, lead cover story, user
interface issue.

Introduction

Although the computer screen is two-dimensional, today most users of windowing
environments control their systems with a one-dimensional list of choices --
the standard pull-down or drop-down menus such as those found on Microsoft
Windows, Presentation Manager, or the Macintosh.

This article describes an alternative user-interface technique I call "pie"
menus, which is two-dimensional, circular, and in many ways easier to use and
faster than conventional linear menus. Pie menus also work well with
alternative pointing devices such as those found in stylus or pen-based
systems. I developed pie menus at the University of Maryland in 1986 and have
been studying and improving them over the last five years.

During that time, pie menus have been implemented by myself and my colleagues
on four different platforms: X10 with the uwm window manager, SunView, NeWS
with the Lite Toolkit, and OpenWindows with the NeWS Toolkit. Fellow
researchers have conducted both comparison tests between pie menus and linear
menus, and also tests with different kinds of pointing devices, including
mice, pens, and trackballs.

Included with this article are relevant code excerpts from the most recent
NeWS implementation, written in Sun's object-oriented PostScript dialect.

[https://www.youtube.com/watch?v=Jvi98wVUmQA](https://www.youtube.com/watch?v=Jvi98wVUmQA)

Demo of Pie Menus in SimCity for X11. Ported to Unix and demonstrated by Don
Hopkins.

[https://www.youtube.com/watch?v=SG0FAKkaisg](https://www.youtube.com/watch?v=SG0FAKkaisg)

Pet Rock Remote Control: Pie menu remote control touch screen interface for
sending commands to pet rocks.

[https://www.youtube.com/watch?v=2KfeHNIXYUc](https://www.youtube.com/watch?v=2KfeHNIXYUc)

MediaGraph Music Navigation with Pie Menus Prototype developed for Will
Wright's Stupid Fun Club: This is a demo of a user interface research
prototype that I developed for Will Wright at the Stupid Fun Club. It includes
pie menus, an editable map of music interconnected with roads, and cellular
automata.

[https://www.youtube.com/watch?v=-exdu4ETscs](https://www.youtube.com/watch?v=-exdu4ETscs)

The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo:
This is a demonstration of the pie menus, architectural editing tools, and
Edith visual programming tools that I developed for The Sims with Will Wright
at Maxis and Electronic Arts.

~~~
walterbell
Thanks for the references!

Any idea why these are not often used with touchscreen mobile interfaces, e.g.
press for contextual pie menu? Even without OS support, they could be
implemented within apps.

~~~
DonHopkins
There have been various implementations of pie menus for Android [1] and iOS
[2]. And of course there was the Momenta pen computer in 1991 [3], and I
developed a Palm app called ConnectedTV [4] in 2001 with "Finger Pies" (cf
Penny Lane ;). But Apple has lost their way when it comes to user interface
design, and iOS isn't open enough that a third party could add pie menus to
the system the way they've done with Android. But you could still implement
them in individual apps, just not system wide.

Also see my comment above about the problem of non-transparent fingers.

Swiping gestures are essentially like invisible pie menus, but actual pie
menus have the advantage of being "Self Revealing" [5] because they have a way
to prompt and show you what the possible gestures are, and give you feedback
as you make the selection.

They also provide the ability of "Reselection" [6], which means that as you're
making a gesture, you can change it in flight and browse around to any of the
items, in case you need to correct a mistake or change your mind, or just want
to preview the effect or see the description of each item as you browse around
the menu.

Compare typical gesture recognition systems, like Palm's Graffiti: think of
the gesture space as all possible gestures between touching the screen, moving
around through any possible path, and then releasing. Most gestures are
invalid syntax errors, and the recognizer only accepts well-formed gestures.

There is no way to correct or abort a gesture once you start making it (other
than scribbling, but that might be recognized as another undesired gesture!).
Ideally each gesture should be as far away as possible from all other gestures
in gesture space, to minimize the possibility of errors, but in practice they
tend to be clumped (so "2" and "Z" are easily confused, while many other
possible gestures are unused and wasted).

But with pie menus, only the direction between the touch and the release
matters, not the path. All gestures are valid and distinct: there are no
possible syntax errors, so none of the gesture space is wasted. There's a
simple, intuitive mapping of direction to selection that the user can
understand (unlike the mysterious fuzzy black box of a handwriting
recognizer), which gives you the ability to refine your selection by moving
out further (to get more leverage), return to the center to cancel, or move
around to correct and change the selection.
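
That direction-to-selection mapping is simple enough to sketch in a few
lines. The slice count, dead-zone radius, and "item 0 is up" convention here
are arbitrary choices, not taken from any particular implementation:

```python
import math

def pie_select(dx, dy, n_items=8, dead_zone=10.0):
    # Map the drag vector (touch point -> release point) to a slice index.
    # Inside the dead zone nothing is selected, so returning to the center
    # cancels. Item 0 is "up"; items proceed clockwise.
    if math.hypot(dx, dy) < dead_zone:
        return None
    # Screen y grows downward, so negate dy; measure the angle from "up".
    angle = math.degrees(math.atan2(dx, -dy)) % 360
    slice_width = 360 / n_items
    # Center each slice on its direction: "up" is the middle of item 0.
    return int(((angle + slice_width / 2) % 360) // slice_width)

print(pie_select(0, -100))  # 0: straight up
print(pie_select(100, 0))   # 2: due right
print(pie_select(3, 3))     # None: still in the dead zone, i.e. cancel
```

Note that only the endpoint matters, which is exactly why any wiggly path
that ends in the right wedge still selects the right item.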

Pie menus also support "Rehearsal" [7] -- the way a novice uses them is
actually practice for the way an expert uses them, so they have a smooth
learning curve. Contrast this with keyboard accelerators for linear menus: you
pull down a linear menu with the mouse to learn the keyboard accelerators, but
using the keyboard accelerators is a totally different action, so it's not
rehearsal.

Pie menu users tend to learn them in three stages: 1) the novice pops up an
unfamiliar menu, looks at all the items, moves in the direction of the desired
item, and selects it. 2) The intermediate remembers the direction of the item
they want, pops up the menu and moves in that direction without hesitating
(mousing ahead but not selecting), looks at the screen to make sure the
desired item is selected, then clicks to select the item. 3) The expert knows
which direction the item they want is in, and has confidence that they can
reliably select it, so they just flick in the appropriate direction without
even looking at the screen.

I wrote some more stuff about pie menus in the previous discussion of Fitts'
Law. [8]

[1] Android Pie Menus:
[https://play.google.com/store/apps/details?id=com.lazyswipe](https://play.google.com/store/apps/details?id=com.lazyswipe)

[2] iOS Pie Menus: [https://github.com/tapsandswipes/iphone-pie-
menu](https://github.com/tapsandswipes/iphone-pie-menu)

[3] Momenta Pen Pie Menus:
[https://www.microsoft.com/buxtoncollection/detail.aspx?id=17...](https://www.microsoft.com/buxtoncollection/detail.aspx?id=170)

[4] Palm ConnectedTV Finger Pie Menus:
[http://uk.pcmag.com/connectedtv/29965/review/turn-your-
palm-...](http://uk.pcmag.com/connectedtv/29965/review/turn-your-palm-into-a-
tv-remote)

[5] Self Revealing: [http://uxmag.com/sites/default/files/uploads/Brave-NUI-
World...](http://uxmag.com/sites/default/files/uploads/Brave-NUI-World-Sample-
Chapter.pdf)

Self-revealing gestures are a philosophy for design of gestural interfaces
that posits that the only way to see a behavior in your users is to induce it
(afford it, for the Gibsonians among us). Users are presented with an
interface to which their response is gestural input. This approach contradicts
some designers’ apparent assumption that a gesture is some kind of “shortcut”
that is performed in some ephemeral layer hovering above the user interface.
In reality, a successful development of a gestural system requires the
development of a gestural user interface. Objects are shown on the screen to
which the user reacts, instead of somehow intuiting their performance. The
trick, of course, is to not overload the user with UI “chrome” that overly
complicates the UI, but rather to afford as many suitable gestures as possible
with a minimum of extra on-screen graphics. To the user, she is simply
operating your UI, when in reality, she is learning a gesture language.

[6] Reselection:
[https://www.billbuxton.com/PieMenus.html](https://www.billbuxton.com/PieMenus.html)

In general, subjects used approximately straight strokes. No alternate
strategies such as always starting at the top item and then moving to the
correct item were observed. However, there was evidence of reselection from
time to time, where subjects would begin a straight stroke and then change
stroke direction in order to select something different.

Surprisingly, we observed reselection even in the hidden menu groups. This was
especially unexpected in the Marking group since we felt the affordances of
marking do not naturally suggest the possibility of reselection. It was clear
though, that training the subjects in the hidden groups on exposed menus first
made the option of reselection apparent. Clearly many of the subjects in the
Marking group were not thinking of the task as making marks per se, but of
making selections from menus that they had to imagine. This brings into
question our a priori assumption that the Marking group was using a marking
metaphor, while the Hidden group was using a menu selection metaphor. This may
explain why very few behavioral differences were found between the two groups.

Reselection in the hidden groups most likely occurred when subjects began a
selection in error but detected and corrected the error before confirming the
selection. This was even observed in the "easy" 4-slice menu, which supports
the assumption that many of these reselections are due to detected mental
slips as opposed to problems in articulation. There was also evidence of fine
tuning in the hidden cases, where subjects first moved directly to an
approximate area of the screen, and then appeared to adjust between two
adjacent sectors.

[7] Rehearsal:
[https://www.billbuxton.com/MMUserLearn.html](https://www.billbuxton.com/MMUserLearn.html)

Requirement: Novices need to find out what commands are available and how to
invoke the commands. Design feature: pop-up menu.

Requirement: Experts desire fast invocation. Once the user is aware of the
available commands, speed of invocation becomes a priority. Design feature:
easy to draw marks.

Requirement: A user's expertise varies over time and therefore a user must be
able to seamlessly switch between novice and expert behavior. Design feature:
menuing and marking are not mutually exclusive modes. Switching between the
two can be accomplished in the same interaction by pressing-and-waiting or not
waiting.

Our model of user behavior with marking menus is that users start off using
menus but with practice gravitate towards using marks and using a mark is
significantly faster than using a menu. Furthermore, even users that are
expert (i.e., primarily use marks) will occasionally return to using the menu
to remind themselves of the available commands or menu item/mark associations.

[8] TLDR: bla bla bla pie menus bla bla bla. ;)
[https://news.ycombinator.com/item?id=11219792](https://news.ycombinator.com/item?id=11219792)

------
dang
We merged the earlier discussion
([https://news.ycombinator.com/item?id=16610903](https://news.ycombinator.com/item?id=16610903))
into this one.

I invited Kevin to repost his old article that was posted in 2007 but never
got any discussion on Hacker News:
[https://hn.algolia.com/?query=Visualizing%20Fitts%27s%20Law&...](https://hn.algolia.com/?query=Visualizing%20Fitts%27s%20Law&sort=byDate&dateRange=all&type=story&storyText=false&prefix=false&page=0).

