It sounds like at one point pie menus were even being worked on as part of Firefox core (see https://www.extremetech.com/computing/103589-a-walk-down-fir...), but were eventually abandoned, which is sad. (I can understand why, lots of people seem to have trouble wrapping their minds around the idea, but still.)
Pie menus worked really well in HyperTIES, a hypermedia browser we developed in 1988 at HCIL, as well as Emacs, and I miss them, especially when my MacBook Pro overheats and gets really really really slow, as bad as a Sun 3/50 running NeWS!
And there are some big gaps between implementing one particular menu, enabling programmers to implement any menu, enabling designers to implement any menu, enabling users to implement any menu, and finally motivating and educating users to design great menus for themselves.
Bridging the last gap requires providing good defaults and examples, intelligent automatic layout and constraints, and somehow teaching users to intuitively understand Fitts's Law and other important principles of user interface design.
That's what I was getting at with the "Crazy Example: PieCraft".
With games, it's possible to put pressure on users to figure it out for themselves, to award points for good performance, to punish them when they don't perform well (by death, if necessary), and to organically reward them when they do it right (with virtual fame and fortune, not just as a prize, but as the natural result of having well designed menus, and being able to fight and defend themselves better).
But it's not so user friendly for a text editor to challenge, punish and reward the user like that. Still, the lessons users learn playing a game can transfer over to their text editor.
>One idea I’ve had is to develop a game called “PieCraft”, that has user-editable pie menus that are first-order in-game player craftable artifacts. [..]
>You could find empty or pre-populated pie menus in the world, pick them up and make them your own, and edit them by moving items around, putting them inside of other menus, and modifying them as you leveled up and earned more powerful menu editing skills and resources.
>The capacity and layout and allowed contents of some menus could be restricted, to constrain the ways you used them, forcing you to figure out which were your most important items, and giving them front row seats so they were easiest to select.
>To put further pressure on players to design efficient menus, your menus could be vulnerable to attack from warriors or theft from pickpockets while they were popped up, and only be able to take a limited amount of damage (depending on if they were crafted from wood or diamond).
>When hit hard enough, items could break loose and fall on the ground, or a whole slice or menu could break open like a piñata and spill all its items into the world. Then you (and your enemies) would have to scramble to pick up all the pieces!
>The sense of urgency and vulnerability while the menu was popped up would compel you to “mouse ahead” to get through the menus quickly, and arrange your most important items so you could find and select them as quickly as possible.
>It would reward you for successfully “mousing ahead” swiftly during combat, by avoiding damage from attack and loss from thieves, and awarding power-up points and experience.
The "Awesome Example: Monster Hunter: World — Radial Menu Guide" shows the benefits of user created radial menus, and what a profound effect good menu design can have on gameplay.
>Monster Hunter: World is a wonderful example of a game that enables and motivates players to create their own pie menus, that shows how important customizable user defined pie menus are in games and tools.
>Want access to all your items, ammo and gestures at your fingertips? Here’s a quick guide on the Radial Menu.
>With a multitude of items available, it can be challenging to find the exact one you need in the heat of the battle. Thankfully, we’ve got you covered.
>Here’s a guide on radial menus, and how to use them:
>The radial menu allows you to use items at a flick of the right stick.
>There are four menus offering access to eight items each, and you can fully customize them, all to your heart’s content.
>Radial menus are not just limited to item use, however.
>You can use them to craft items, shoot SOS flares, and even use communication features such as stickers and gestures.
But it doesn't go quite as far as to elevate player crafted menus into first class customizable, parameterized in-game artifacts that players can craft, discover in the world, destroy in combat, and buy from other players at the auction house: high quality designer menus, empty or pre-populated with all kinds of sub-menus and other items, like gift baskets of food, magic spells or flower arrangements.
I usually just end up writing a lot of ugly Rube Goldbergesque spaghetti event handling code with lots of global state and flags and modes.
The problem doesn't seem to break down very cleanly into a bunch of nice clean little components that don't know very much about each other, like mouse oriented widgets do, so you need a lot of global event management and state machine code and friendly objects that know about each other, in order to keep track of what's really going on, and to keep from tripping over your own fingers.
Michael Naimark discusses some interesting stuff in his articles "VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste, Mind)" and "VR / AR Fundamentals - 4) Input & Interactivity"! (Read the whole series, it's great!)
I wrote some stuff in the "Gesture Space" article about the problem of multi touch map zoom/pan/rotate tracking, and how it's desirable to have a model that users can easily comprehend what's going on:
>Multitouch Tracking Example
>One interesting example is multitouch tracking for zooming/scaling/rotating a map.
>A lot of iPhone apps just code it up by hand, and get it wrong (or at least not as nicely as Google Maps gets it).
>For example, two fingers enable you to pan, zoom and rotate the map, all at the same time.
>The ideal user model is that during the time one or two fingers are touching the map, there is a correspondence between the locations of the fingers on the screen, and the locations of the map where they first touched. That constraint should be maintained by panning, zooming and rotating the map as necessary.
>The google map app on the iPhone does not support rotating, so it has to throw away one dimension, and project the space of all possible gestures onto the lower dimensional space of strict scaling and panning, without any rotation.
>So the ideal user model for two finger dragging and scaling without rotation is different, because it’s possible for the map to slide out from under your fingers due to finger rotation. So it effectively tracks the point in-between your fingers, whose dragging causes panning, and the distance between your fingers, whose pinching causes zooming. Any finger rotation around the center point is simply ignored. That’s a more complicated, less direct model than panning and scaling with rotation.
>But some other iPhone apps haphazardly only let you zoom or pan but not both at once. Once you start zooming or panning, you are locked into that gesture and can’t combine or switch between them. It’s hard to tell whether this was a conscious decision on the part of the programmer, or whether they didn’t even realize it should be possible to do both at once, because they were using a poorly designed API, or thinking about it in terms of “interpreting mouse gestures” instead of “maintaining constraints”.
>Apple has some gesture recognizers for things like tap, pinch, rotation, swipe, pan and long press. But they’re not easily composable into a nice integrated tracker like you’d need to support panning/zooming/rotating a map all at once. So most well written apps have to write their own special purpose multitouch tracking code (which is pretty complicated stuff, and hard to get right).
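Here's a minimal sketch of the "maintaining constraints" model described above (an illustration, not code from any of the apps mentioned): solve for the similarity transform that keeps the map points grabbed at touch-down pinned under the fingers. Representing 2D points as complex numbers makes the pan/zoom/rotate solution a one-liner.

```python
# A sketch of the constraint-maintaining model for two-finger
# pan/zoom/rotate: given the two map points grabbed at touch-down and the
# fingers' current screen positions, solve for the transform that keeps
# each map point exactly under its finger.

def two_finger_transform(m1, m2, s1, s2):
    """m1, m2: grabbed map points (complex numbers). s1, s2: current
    finger screen positions (complex). Returns (scale, angle, offset) such
    that screen = scale * e^(i*angle) * map + offset maps m1->s1, m2->s2."""
    import cmath
    z = (s2 - s1) / (m2 - m1)   # combined rotation and scale
    scale = abs(z)
    angle = cmath.phase(z)
    offset = s1 - z * m1        # translation that pins the first point
    return scale, angle, offset
```

For the Google-Maps-style model without rotation, you'd project this down a dimension: keep `scale = abs(z)` but force the angle to zero and track the midpoint and finger distance instead, which is exactly the lower-dimensional gesture space the quote describes.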
For example, if one finger drags, and two fingers can scale and rotate, you might want to implement inertia when you let go, so you can drag and release while moving, and the object will flick in the direction of your stroke with the instantaneous velocity of your finger.
But what happens if you release both fingers while rotating? Should that impart rotational inertia? What about if you start spinning with two fingers and then lift one finger -- do you roll back to panning but impart some rotational inertia so you spin around the point you're touching to pan? Should it also impart rotational inertia from the rotation of the iPad in the real world from the gyros, when you release your fingers? It gets messy!
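One common way to implement the flick inertia described above (a hypothetical sketch, not Pantomime's actual code) is to keep a short history of timestamped touch samples and estimate the release velocity from the last fraction of a second, then decay it each frame; the same scheme works for angular velocity:

```python
# Hypothetical flick-inertia sketch: estimate release velocity from the
# last ~100 ms of touch samples, then decay it exponentially after release.

class FlickTracker:
    WINDOW = 0.1    # seconds of history used for the velocity estimate
    FRICTION = 4.0  # exponential decay rate after release (a guess)

    def __init__(self):
        self.samples = []  # list of (time, x, y)

    def move(self, t, x, y):
        self.samples.append((t, x, y))
        # Drop samples older than the window.
        self.samples = [s for s in self.samples if t - s[0] <= self.WINDOW]

    def release_velocity(self):
        """Velocity over the sample window, used when the finger lifts."""
        if len(self.samples) < 2:
            return (0.0, 0.0)
        t0, x0, y0 = self.samples[0]
        t1, x1, y1 = self.samples[-1]
        dt = (t1 - t0) or 1e-6
        return ((x1 - x0) / dt, (y1 - y0) / dt)

    def coast(self, vx, vy, dt):
        """Decay the velocity each frame after release."""
        import math
        k = math.exp(-self.FRICTION * dt)
        return (vx * k, vy * k)
```

The messy questions in the text (rotational inertia, gyro contributions) are policy decisions layered on top; the velocity estimate itself stays the same.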
I implemented some variations of that for Pantomime on Unity for iOS and Android, so you can pan yourself through the virtual world by dragging one finger across the screen, and rotate around the vertical axis through the center of the screen by twisting two fingers around.
Pantomime – Interactive Multiplayer Virtual Reality
For Pantomime, supporting inertia for panning and rotating gestures made sense and was lots of fun, and it also integrated the rotational motion in the real world from the gyros, so you could spin and skate around with your fingers, lift them and continue spinning around while skating too, all the while turning the actual iPad itself around!
Or you could grab an object to twist it with two fingers, then rotate it by rotating the iPad itself instead of dragging your fingers across the screen! (It's actually a lot easier to turn things that way, I think! No friction.) So the tracking needs to happen in 3d space projecting the touch point on the screen into the 3d world, so you can touch the screen with a single finger and drag an object by pointing with the screen instead of dragging your finger, or combine it with dragging your finger for fine positioning.
Another wrinkle is that the user might be holding the iPad in one hand and touching the screen with two fingers of their other hand, to rotate. Or the user might be holding the iPad in two hands like a steering wheel, one at each side, with both thumbs touching opposite sides of the screen.
In the "steering wheel" situation (which is a comfortable way of holding an iPad, controlling it with your thumbs), you might want a totally different tracking behavior than the two finger touch gesture (for example, each thumb controlling an independent vertical slider along the screen edge, instead of two finger scaling), so you have to define a recognizer with a distance threshold or some other way of distinguishing those two gestures.
But when only one thumb has pressed, you don't know which way they're holding it yet, whether to expect the second finger will touch nearby or at the opposite side, so the initial one finger tracking has to be compatible for each way of holding it.
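A hypothetical version of the distance-threshold recognizer described above might classify two touches by edge proximity and separation (the threshold fractions here are guesses, not measured values):

```python
# Hypothetical grip classifier: decide whether two touches are a
# "steering wheel" grip (one thumb near each screen edge) or an ordinary
# two-finger gesture, using edge proximity and horizontal span.

def classify_grip(t1, t2, screen_w, edge=0.15, min_span=0.6):
    """t1, t2: (x, y) touch points in pixels. Returns 'steering_wheel'
    when each touch hugs an opposite edge and they span most of the
    screen width; otherwise 'two_finger'. Thresholds are fractions of
    screen width (assumed values)."""
    x1, x2 = t1[0], t2[0]
    left, right = min(x1, x2), max(x1, x2)
    near_edges = left < edge * screen_w and right > (1 - edge) * screen_w
    wide_span = (right - left) > min_span * screen_w
    return 'steering_wheel' if near_edges and wide_span else 'two_finger'
```

As the text notes, this only resolves once the second touch lands; until then the one-finger tracking has to stay compatible with both interpretations.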
Another approach is instead of the app trying to guess how it's being used, for the app to INSTRUCT the user which way it expects them to operate the device, and how it will interpret the gestures, in a way that the user has control of what mode it's in (like touching the screen or not).
So you could switch between different modes by wielding different tools or weapons, and the user interface overlay changes to show you how to hold and operate the iPad to maintain the illusion of pantomiming walking or paddling.
Pantomime switches between showing two hands holding the screen like a steering wheel (when no fingers are touching, you're walking), and one hand holding it like a paddle (when one finger is touching the screen, you're paddling, pivoting on your elbow by the side of the screen you're touching).
And you can detect when the iPad is sitting flat with the screen facing up, and then you can switch into a different mode with different touch tracking, since you know they're probably not holding it like a steering wheel or waving it around if it's flat and not moving.
Here's a good demo that shows panning, rotating, inertia, walking and paddling, with magic cans of different gravities, explained with in-world Help Monoliths:
Here's a demo with a terrible bug:
Here's a four-year-old playing with Pantomime -- "I'm so good at this!" he says:
You have to think long and hard about how people are going to interact with the device in the real world, because they won't necessarily follow the official operating instructions of your app! There might be two people touching the screen with their fingers near each other. Or it could be a cat swatting or a baby licking the iPad! You can never tell what's going on in the real world.
For Pantomime, I used the TouchScript multitouch tracking library for Unity3D on iOS and Android.
It seemed to be able to handle a certain set of complex gesture situations, but not the complex gesture situations I needed it to handle. But it might work for you, and it's free! I think there are other versions of it on different platforms, too. And it handles proxying events from remote devices (or from Flash to Unity). And it can handle attaching different gesture recognizers to different levels of the transform hierarchy (perhaps to control which colliders detect the touches), but I'm not sure what that's good for.
What I needed to do was full screen multi touch tracking, not tracking multiple gestures on individual objects, so I didn't use everything TouchScript had to offer, and I can't comment on how well that feature works.
It had a separate drag recognizer and rotate recognizer that could be active at the same time, and you can configure different recognizers to be friends or to lock each other out, but still all the different handlers had to know a hell of a lot about each other to be able to roll between them properly with any combination of finger touches and lifts. It was not pretty.
It's free, and it's certainly worth looking at the product description and manual to see which complex gesture situations it can handle, if you're interested.
>TouchScript makes handling complex gesture interactions on any touch surface much easier.
>- TouchScript abstracts touch and gesture logic from input methods and platforms. Your touch-related code will be the same everywhere.
>- TouchScript supports many touch input methods starting from smartphones to giant touch surfaces: mouse, Windows 7/8 touch, mobile (iOS, Android, Windows Store/Windows Phone), TUIO.
>- TouchScript includes common gesture implementations: press, release, tap, long press, flick, pinch/scale/rotate.
>- TouchScript allows you to write your own gestures and custom pointer input logic.
>- TouchScript manages gestures in transform hierarchy and makes sure that the most relevant gesture will receive touch input.
>- TouchScript comes with many examples and is extensively documented.
>- TouchScript makes it easy to test multi-touch gestures without an actual multi-touch device using built-in second touch simulator (activated with Alt + click), TUIOPad on iOS or TUIODroid on Android.
>- It's free and open-source. Licensed under MIT license.
It's not too hard to track full screen gestures, where one object is tracking all the fingers.
The problem is when you have several gestures going on at the same time, or several different objects tracking different gestures.
Are there two objects tracking single finger dragging gestures at the same time, or is one object tracking double finger dragging?
How do you properly roll between one, two and three finger gestures when you raise and lower fingers?
The thing that's frustrating to a programmer used to tracking a mouse is that users can touch and remove their fingers in any order they please, and it's easy not to think things through and figure out how to cover every permutation. They can put down three fingers A B and C one by one, then remove them in a different order, or touch two fingers at once, or almost at once.
So you need to be able to seamlessly transition between 1, 2, 3, etc, finger tracking in any order or several at once.
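One way to handle those transitions (a sketch under assumptions, not TouchScript's implementation) is to re-anchor the gesture whenever the set of touching fingers changes: commit the transform accumulated so far, then start tracking fresh from the current finger positions, so the content never jumps when a finger goes down or up. Here's the idea reduced to just panning:

```python
# Re-anchoring sketch for rolling between 1, 2, 3... finger tracking:
# on every finger down or up, re-anchor at the current positions, so
# transitions between finger counts are seamless (no jump in the content).

class MultiTouchTracker:
    def __init__(self):
        self.anchors = {}         # finger id -> position at last re-anchor
        self.offset = (0.0, 0.0)  # accumulated pan (zoom/rotate omitted)

    def _centroid(self, points):
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def _reanchor(self, touches):
        self.anchors = dict(touches)

    def down(self, fid, pos, touches):
        # touches: dict of all finger ids -> positions, including this one.
        self._reanchor(touches)

    def up(self, fid, touches):
        # touches: the fingers still down after this one lifted.
        self._reanchor(touches)

    def move(self, touches):
        if not self.anchors:
            return self.offset
        old = self._centroid(list(self.anchors.values()))
        new = self._centroid([touches[f] for f in self.anchors])
        self.offset = (self.offset[0] + new[0] - old[0],
                       self.offset[1] + new[1] - old[1])
        self._reanchor(touches)
        return self.offset
```

Because `down` and `up` both just re-anchor, fingers can arrive and leave in any order (the A, B, C permutation problem above) without the tracker caring which permutation happened.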
I also tried implementing web browser pie menus for a gesture tracking library called hammer.js, by making my own pie menu gesture recognizer. Overall hammer was pretty nice for touch screen tracking, but my problem was that at the time (several years ago, I don't know about now) you couldn't make a gesture that tracked while the button wasn't pressed, and mouse based pie menus need to be able to track while they're clicked up. So I needed to do some ugly hack to handle that.
I am guessing hammer.js was designed mainly for touch screen tracking, but not necessarily mouse tracking (since touch screens can't track "pointer position" when no finger is touching the screen). It would be nice if it better supported writing gesture recognizers that seamlessly (or as much as possible) worked with either touch screen or mice. Maybe it's better at that now, though.
It's not hammer.js's fault, but you must beware the minefield of browser/device support:
With a mouse, you can do things like "warping" the mouse pointer to a new location when the user tries to click up a pie menu near the screen edge, but there's no way to forcefully push the user's finger towards the center of the screen.
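The warping trick reduces to clamping the menu center so the whole menu fits on screen, then moving the pointer by the same delta (the actual pointer warp is a platform-specific call, omitted in this sketch):

```python
# Sketch of the pointer-warping calculation: when a pie menu of the given
# radius pops up too close to the screen edge, nudge the menu center back
# on screen and report the delta by which to warp the mouse pointer.

def clamp_menu_center(x, y, radius, screen_w, screen_h):
    """Returns (cx, cy, dx, dy): the clamped menu center and the warp
    delta to apply to the pointer so it stays at the menu center."""
    cx = min(max(x, radius), screen_w - radius)
    cy = min(max(y, radius), screen_h - radius)
    return cx, cy, cx - x, cy - y
```

On a touch screen the delta is useless, as the text says: you can clamp the menu, but you can't push the finger.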
But then again, the amazing Professor Hiroo Iwata has figured out a "heavy handed" approach to solving that problem:
3DOF Multitouch Haptic Interface with Movable Touchscreen
>Shun Takanaka, Hiroaki Yano, Hiroo Iwata, Presented at AsiaHaptics2016.
>This paper reports on the development of a multitouch haptic interface equipped with a movable touchscreen. When the relative position of two of a user’s fingertips is fixed on a touchscreen, the fingers can be considered a hand-shaped rigid object. In such situations, a reaction force can be exerted on each finger using a three degrees of freedom (3DOF) haptic interface. In this study, a prototype 3DOF haptic interface system comprising a touchscreen, a 6-axis force sensor, an X-Y stage, and a capstan drive system was developed. The developed system estimates the input force from fingers using sensor data and each finger’s position. Further, the system generates reaction forces from virtual objects to the user’s fingertips by controlling the static frictional force between each of the user’s fingertips and the screen. The system enables users to perceive the shape of two-dimensional virtual objects displayed on the screen and translate/rotate them with their fingers. Moreover, users can deform elastic virtual objects, and feel their rigidity.
(There is some other seriously weird shit on the AsiaHaptics2016 conference video list -- I'm not even gonna -- oh, all right: relax and tighten, then look for yourself: https://www.youtube.com/channel/UC8qMmIgmWhnQBeABjGlzGbg/vid... ... I can't begin to imagine what the afterparties at that conference were like!)
Don't miss Hiroo Iwata's food simulator!
I haven't tried Unity yet, although it sure looks like fun. So far I feel like 3-D user interfaces have been kind of a mess in terms of usability. Maybe you guys at Pantomime will come up with a solution.
I'd tried Hammer a few years back, and gave up and rolled my own gesture tracking because of kind of similar issues.
In terms of pinch-to-zoom as establishing constraints — have you seen Daniel Vogel's Pinch-to-Zoom Plus? https://www.youtube.com/watch?v=x-hFyzdwoL8 He argues that there's actually a somewhat more usable mode of pinch-zooming which breaks the constraints, but not in the chintzy way you're correctly criticizing.
I'll try to respond in more detail after reading through and watching what you've linked!
Behold: The foot menu!
While coding, they scroll with taps to get some exercise, and then set a breakpoint by using a whole foot right tap.
A kick forward starts a debug session, and they move away from the keyboard to get a break.
While debugging they can step into code with forward toe taps.
Step over code with backward toe taps.
And step out of code with right or left toe taps.
After stepping through the execution like this, they set another breakpoint with a left whole foot tap, and run the code to the end with a forward whole foot tap.
Having got a good break while getting work done, they return to the keyboard to fix the bug.
For example, Android tablets have three or four icons at the bottom, by default. (Minor wars have broken out over the order in which they are displayed.) Back, Home, Menu, Recents.
For a few years now I've been installing LMT https://forum.xda-developers.com/showthread.php?t=1330150 on all my Android devices. LMT implements a radial menu that your finger can slide in from any (selectable) side of the device screen. And, of course, you can customize how many slices, what each slice does, color, translucency, highlight color, and a myriad of other small choices which can make the thing feel exactly right for you.
It frequently confuses other people, and why not? It's not like I'm going to let other people use my phone.
If the setup feature on the iPhone 11 started by teaching you how to use a radial or pie menu system, people would swear it was the best thing Apple ever invented, that it's completely intuitive and toddlers can learn it.
(Part of this is because they say that about every Apple feature except fragile keyboards.)
I'm pretty sure they say that about the keyboards too -- i've seen comments like "the j and k keys are stuck, but i love it so much"
Various layout strategies have been developed for justifying text around the pie menu; some are better than others.
Rotating text is usually pretty terrible and hard to read, because you have to turn your head and it's jaggy. But nowadays, with anti-aliasing, it's not quite as jaggy as PostScript graphics on a bitmap screen were.
The simplest thing to do is to define an inner label radius, and justify against it the center bottom of the top item (if there is one), the center top of the bottom item (if there is one), the left middle of all right items, the right middle of all the left items.
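That justification rule can be sketched as follows (an illustration of the rule, not the original implementation; angle convention assumed to be 0 = up, increasing clockwise, with y growing downward as on most screens):

```python
# Sketch of the simple pie menu label justification rule: pin each label
# against an inner label radius, choosing the attachment point and
# alignment from the item's direction.

import math

def label_anchor(angle_deg, label_radius):
    """Return ((x, y), horizontal_align, vertical_align) for a label
    whose slice points in the direction angle_deg."""
    a = math.radians(angle_deg)
    x = label_radius * math.sin(a)
    y = -label_radius * math.cos(a)
    if abs(x) < 1e-6:               # straight up or down
        h = 'center'
        v = 'bottom' if y < 0 else 'top'   # top item hangs its bottom edge
    elif x > 0:                     # right half: pin the label's left edge
        h, v = 'left', 'middle'
    else:                           # left half: pin the label's right edge
        h, v = 'right', 'middle'
    return (x, y), h, v
```

The dx, dy tweak mentioned below for hinkey-looking icons would just be added to the returned anchor point per item.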
Here's a pie menu with iconic symbols justified that way, and a half pie menu with labels justified that way (just no bottom label since it has an even number of slices, and is half size):
Here's an "eight days a week" menu that shows longer weekday name labels, including ones at the top and bottom -- it looks fine for wide text labels:
And here's a window management menu, that shows a menu with labels, and a grab submenu (for immediately grabbing a corner or edge) with icons.
The top and bottom icons look a little hinkey since their center tops and bottoms are pinned against the inside label radius, so they're pushed out a bit too much. (It may also have to do with the icon metrics.) To handle that, pie menus with automatic layout should let you tweak each item's exact position with a relative dx, dy offset. (I guess I never got around to fixing that grab menu.)
Here's a ringed menu with a shitload of very short single character labels, justified that way. Don't try this at home, kids! I'm not saying it's a good design, but I just had to try to see and feel how it worked. Somebody with better graphical design sense than I could do a lot better!
Another strategy is to use icons, but show a menu description when no item is selected, and the label of the selected item somewhere, like this:
SimCity Tools menu, nothing selected: "Select a SimCity editing tool, or the zone or build submenu."
SimCity Tools menu, bulldozer selected: "Bulldozer editing tool."
Some of these images were from "OLPC Sugar Pie Menu Discussion" where there's lots more discussion about graphical design, layout, and tracking:
The OLPC can get away with using a lot of nice big bold square icons, because it's designed for kids who may not be able to read. But I think it's important to support not just text labels, but also full text descriptions.
Another feature all kinds of menus should really have, is a way for users to discover why the fuck a menu item is disabled, and what they can do to enable it. It's so terribly frustrating that disabled items just ignore you and won't even let you select them, but there they are staring at you, not giving you any help, just watching you suffer, knowing all along what's wrong and why they're disabled, just not allowed to tell you!
At least let users point at menu items that aren't enabled, and tell them why they're not enabled in the help text at the bottom of the menu, and even provide some context sensitive hints describing what you have to do to enable the item, if not offering to do it for you.
I would love a "square pie" menu that combined the improvements of pie menus (click-and-drag to select, infinite size) with a square, table-like layout surrounding the cursor above, below and to the sides of the initial mouse click point.
Something like this (where <¬ is the mouse cursor):
| A | B | C |
| D | <¬ | E |
| F |
| G |
Here is the seminal Google Tech Talk about it:
Here is a demo of using Dasher by an engineer at Google, Ada Majorek, who has ALS and uses Dasher and a Headmouse to program:
Another one of her demonstrating Dasher:
Ada Majorek Introduction - CSUN Dasher
Here’s a more recent presentation about it, that tells all about the latest open source release of Dasher 5:
Dasher - CSUN 2016 - Ada Majorek and Raquel Romano
Here's the github repo:
Dasher Version 4.11
>Dasher is a zooming predictive text entry system, designed for situations where keyboard input is impractical (for instance, accessibility or PDAs). It is usable with highly limited amounts of physical input while still allowing high rates of text entry.
Ada referred me to this mind bending prototype:
D@sher Prototype - An adaptive, hierarchical radial menu.
>( http://www.inference.org.uk/dasher ) - a really neat way to "dive" through a menu hierarchy, or through recursively nested options (to build words, letter by letter, swiftly). D@sher takes Dasher, and gives it a twist, making slightly better use of screen real estate.
>It also "learns" your typical usage, making more frequently selected options larger than sibling options. This makes it faster to use, each time you use it.
>More information here:
Dasher is even a viable way to input text in VR, just by pointing your head, without a special input device!
Text Input with Oculus Rift:
>As part of VR development environment I'm currently writing ( https://github.com/xanxys/construct ), I've implemented dasher ( http://www.inference.org.uk/dasher ) to input text.
One important property of Dasher is that you can pre-train it on a corpus of typical text, and dynamically train it while you use it. It learns the patterns of letters and words you use often, and those become bigger and bigger targets that string together so you can select them even more quickly!
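The core idea can be sketched in a few lines (an illustration of the principle, not Dasher's code): each candidate next letter gets a box whose size is proportional to its probability under the language model, so frequent continuations become big, easy targets — essentially arithmetic coding turned into a display you steer through.

```python
# Dasher-style layout sketch: allocate each candidate next letter a slice
# of the display proportional to its language-model probability, so
# frequently-used letters become larger, faster-to-hit targets.

def layout_letters(probs, top=0.0, bottom=1.0):
    """probs: dict of letter -> probability. Returns a list of
    (letter, box_top, box_bottom) tuples filling [top, bottom] in
    proportion to each letter's probability."""
    total = sum(probs.values())
    boxes, y = [], top
    for letter, p in sorted(probs.items()):
        h = (bottom - top) * p / total
        boxes.append((letter, y, y + h))
        y += h
    return boxes
```

Zooming into a letter's box recursively lays out the next letter's distribution inside it, and retraining the model is just updating `probs` — which is why the dynamic training described above directly translates into bigger targets.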
Ada Majorek has it configured to toggle between English and her native language, so she can switch between writing email to her family abroad and co-workers at Google.
Now think of what you could do with a version of Dasher integrated with a programmer's IDE, that knew the syntax of the programming language you're using, as well as the names of all the variables and functions in scope, plus how often they're used!
Here’s some discussion on hacker news, to which I contributed some comments about Dasher:
A History of Palm, Part 1: Before the PalmPilot (lowendmac.com)
So I've written up a 30 year retrospective:
This article will discuss the history of what’s happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.
Here is the main article, and some other related articles:
Pie Menus: A 30 Year Retrospective.
By Don Hopkins, Ground Up Software, May 15, 2018.
Take a Look and Feel Free!
This is the paper we presented 30 years ago at CHI'88:
An Empirical Comparison of Pie vs. Linear Menus.
Jack Callahan, Don Hopkins, Mark Weiser (*) and Ben Shneiderman.
Computer Science Department University of Maryland College Park, Maryland 20742
(*) Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303.
Presented at ACM CHI’88 Conference, Washington DC, 1988.
Open Sourcing SimCity.
Excerpt from page 289–293 of “Play Design”, a dissertation submitted in partial satisfaction of the requirements for the degree of Doctor in Philosophy in Computer Science by Chaim Gingold.
Recommendation Letter for Krystian Samp’s Thesis: The Design and Evaluation of Graphical Radial Menus.
I am writing this letter to enthusiastically recommend that you consider Krystian Samp’s thesis, “The Design and Evaluation of Graphical Radial Menus”, for the ACM Doctoral Dissertation Award.
Constructionist Educational Open Source SimCity.
Illustrated and edited transcript of the YouTube video playlist: HAR 2009: Lightning talks Friday. Videos of the talk at the end.
How to Choose with Pie Menus — March 1988.
BAYCHI October Meeting Report: Natural Selection: The Evolution of Pie Menus, October 13, 1998.
The Sims Pie Menus.
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo.
The Design and Implementation of Pie Menus.
They’re Fast, Easy, and Self-Revealing.
Originally published in Dr. Dobb’s Journal, Dec. 1991.
Empowered Pie Menu Performance at CHI’90, and Other Weird Stuff.
A live performance of pie menus, the PSIBER Space Deck and the Pseudo Scientific Visualizer at the CHI’90 Empowered show. And other weird stuff inspired by Craig Hubley’s sound advice and vision that it’s possible to empower every user to play around and be an artist with their computer.
OLPC Sugar Pie Menu Discussion
Excerpts from the discussion on the OLPC Sugar developer discussion list about pie menus for PyGTK and OLPC Sugar.
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser.
By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland.
Pie Menu FUD and Misconceptions.
Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989.
Written by Don Hopkins, October 1989.
University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.
The Amazing Shneiderman.
Sung to the tune of “Spiderman”, with apologies to Paul Francis Webster and Robert “Bob” Harris, and with respect to Ben Shneiderman.
And finally this has absolutely nothing to do with pie menus, except for the shape of a pizza pie:
The Story of Sun Microsystems PizzaTool.
How I accidentally ordered my first pizza over the internet.
P.S.: I'm one of its contributors ;-)
- Messagease (which I've been using since Palm Pilot days, http://www.exideas.com/ME/index.php) is pie-menu-like
- Inspired by pie menus and Messagease, I experimented with a touch-screen native calculator (instead of skeuomorphic) back in the Maemo days: https://wiki.maemo.org/Ejpi
cross-posting from reddit: https://www.reddit.com/r/programming/comments/8k9ylk/pie_men...
David Levitt and I developed a Palm app called "ConnectedTV": a universal remote control integrated with a personalized TV guide.
Stroke replaces Poke: ConnectedTV was designed so you can use it with one hand, without a stylus, with your thumb or finger. That's because you always lose your stylus in the dark living room couch cushions. And also because when you're watching TV in the dark, you usually need your other hand for something else like holding a beer or eating popcorn or whatever (perhaps holding another Palm running PalmJoint).
It had "Touch Tuning" to change the TV channel by touching the name of a TV program, and "finger pies" for stroking out up to four different commands plus tapping on any button or TV program.
"Touch Tuning" with ConnectedTV is like speed dialing with the remote: you could forget all those channel numbers, instead you just touch the name of the show you want to watch, and ConnectedTV sent the numbers to change the channel.
ConnectedTV also featured "Finger Pies" which enable you to quickly and reliably select several different commands from one button by stroking in different directions.
You can stroke the buttons with your finger, to invoke different commands in different mnemonic directions.
For example: stroke left and right to page to the previous and next program; stroke up to change the channel to the current program ("send to TV"); stroke down to read more about the current program ("show me more"); stroke the ratings button up ("thumbs up") to add a program to your favorites list; stroke it down ("thumbs down") to add it to your bad programs filter.
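The stroke-to-command mapping described above can be sketched in a few lines. This is my own illustration, not ConnectedTV's actual code: the function name, screen coordinates (y grows downward), and tap threshold are all assumptions.

```python
import math

def classify_stroke(x0, y0, x1, y1, tap_radius=8):
    """Classify a stroke from press (x0, y0) to release (x1, y1)
    as a plain tap or one of four mnemonic directions.
    Screen coordinates: y grows downward."""
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < tap_radius:
        return "tap"                          # ordinary button press
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"  # e.g. next / previous program
    return "down" if dy > 0 else "up"         # e.g. "show me more" / "send to TV"
```

Each button then dispatches on the returned direction, so one thumb-sized target carries five distinct commands.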
ConnectedTV was indispensable if you had hundreds of digital cable or satellite channels, because you could filter out the channels and shows you didn't like, and mark your favorites so they were easy to find whenever they were on.
When working on the SimCity user interface, whenever I considered popping up a window or dialog on top of the map, I thought about how many acres of prime real-estate I was covering up and hiding from the user. Or how many miles I was forcing the user to move their mouse back and forth between the map and the toolbar. ;)
My first encounter with the concept was "Secret of Mana"
One more example for your catalog: http://tvtropes.org/pmwiki/pmwiki.php/Main/RingMenu
Not exactly the same since the input was a directional pad instead of a mouse, but close enough...
There were two buttons, one labeled "Our Web Site", the other labeled "Our Competitor's Web Site".
When you moved the mouse over the "Our Competitor's Web Site" button, it would quickly slide out from under your cursor before you could click it!
Then when you stopped moving your mouse, the "Our Web Site" button would slyly slide right underneath your mouse!
Dammit Microsoft!!! ;)
Edit: oh man, totally misunderstood the distinction. I've actually never thought about these menus before... but now after this post I saw one within five minutes in Fortnite!
But maybe there was a model of iPod or some other device that had a knob that you twist a little bit in either direction, but didn't spin -- I can't quite remember.
What I do have terrible memories of is the misguided cocaine-addled skeuomorphism in Apple's Quicktime player, which earned its eternal place in the User Interface Hall of Shame.
>The QuickTime 4.0 player contains many examples of how the software must adopt the limitations of the physical device it is based on, but the first example the user is likely to discover is the volume control. Since a real-world hand-held electronic device typically employs a thumbwheel to control the volume, the designers concluded that it would work just as well in a software application. What the designers failed to realize is that a thumbwheel is designed to be operated by a thumb, not a mouse. Watching new users try to adjust the volume can be a painful experience. The user invariably tries to carefully place the cursor at the bottom of the exposed portion of the control, then drags it to the top of the control and releases, then carefully positions the cursor again at the bottom of the control, drags upward, and well, you get the picture.
But there's another fundamental difference between pie menus and gestures, something I was trying to get at by coining the term "gesture space", defined here (I'd love to know if other people have come up with a better term or more rigorous definition):
>Gesture Space: The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.
Excerpt from OLPC Sugar Discussion about Pie Menus:
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
Excerpt from a Hacker News discussions:
>Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being "Self Revealing", because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
>They also provide the ability of "Reselection", which means that as you're making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
>Compared to typical gesture recognition systems, like Palm's Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and such systems only recognize well formed gestures.
>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so "2" and "Z" are easily confused, while many other possible gestures are unused and wasted).
>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There's a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
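That direction-to-selection mapping is simple enough to sketch directly. This is my own illustration, not code from any of the systems discussed: only the angle from the menu center matters, every angle maps to some slice (so no gesture space is wasted), and returning to the center cancels.

```python
import math

def pie_selection(cx, cy, x, y, n_items, dead_radius=10):
    """Map the direction from the menu center (cx, cy) to the
    pointer (x, y) onto one of n_items pie slices.
    Returns None inside the central dead zone (cancel).
    Only the direction matters, never the path taken, so every
    possible gesture is either a valid selection or a cancel."""
    dx, dy = x - cx, y - cy
    if math.hypot(dx, dy) < dead_radius:
        return None                       # back to center: cancel
    # Angle measured clockwise from straight up (screen y grows down),
    # so item 0 sits at the top, a common pie menu convention.
    angle = math.atan2(dx, -dy) % (2 * math.pi)
    slice_width = 2 * math.pi / n_items
    # Offset by half a slice so each item is centered on its direction.
    return int((angle + slice_width / 2) / slice_width) % n_items
```

Because the mapping is total, "reselection" falls out for free: recomputing it on every mouse move tracks the currently highlighted item until the button is released.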
>Pie menus also support "Rehearsal"  -- the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it's not rehearsal.
>Pie menu users tend to learn them in three stages: 1) the novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) The intermediate remembers the direction of the item they want, pops up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) The expert knows which direction the item they want is in, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
>Pie menus completely saturate the entire possible gesture space with usable and accessible commands: there is no such thing as a syntax error, and you can always correct any gesture to select what you want, no matter how bad it started out, or cancel the menu, by moving around to the desired item or back to the center to cancel.
>Handwriting and gesture recognition does not have this property, and it can be quite frustrating because you can't correct or cancel mistakes, and dangerous because mistakes can be misinterpreted as the wrong command. Most gestures are syntax errors. Blind gesture recognition doesn't have a good way to prompt and train you with the possible gestures, which only cover a tiny fraction of the possible gesture space. All the rest of the space of possible gestures is wasted, and interpreted as a syntax error (or worse, misinterpreted as the wrong gesture), instead of enabling the user to correct mistakes and reselect different gestures.
>Even "fuzzy matching" of gestures trades off gestural precision against making it even harder to cancel or correct a gesture without accidentally being misinterpreted as the wrong gesture. That's not the kind of interface you would want to use in a mission critical application such as a car or airplane.
It took me a while to learn to spell Shneiderman without the extra c. It helps to have a good mnemonic. Newman is new, and Wiseman is wise! And Weiser is wiser, but spelled wrong.
I heard a funny story that Donald Michie once overheard his secretary telling someone on the phone how to pronounce his name (in a Scottish accent): "It's Donald, as in Duck, and Michie, as in Mouse." He was so pissed he refused to speak to her for a month! ;)
Thank you for making the change and thank you even more for posting all this. What a mess with that patent situation, one more strike against software patents. I really can't stand them!
Poor Donald! I've seen my name spelled in at least 20 different ways, it no longer bothers me. But this also leads to funny situations: when a guy called Jake Matthews was in a group that I was in and I was sure they meant me...
HCIL Demo - HyperTIES Browsing: Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.
A funny story about the demo that has the photo of the three Sun founders whose heads puff up when you point at them:
When you pointed at a head, it would swell up, and when you pressed the button, it would shrink back down until you released the button again.
HyperTIES had a feature that you could click or press and hold on the page background, and it would blink or highlight ALL of the links on the page, either by inverting the brightness of text buttons, or by popping up all the cookie-cut-out picture targets (we called them “embedded menus”) at the same time, which could be quite dramatic with the three Sun founders!
Kind of like what they call “Big Head Mode” these days! https://www.giantbomb.com/big-head-mode/3015-403/
I had a Sun workstation set up on the show floor at Educom in October 1988, and I was giving a rotating demo of NeWS, pie menus, Emacs, and HyperTIES to anyone who happened to walk by. (That was when Steve Jobs came by, saw the demo, and jumped up and down shouting “That sucks! That sucks! Wow, that’s neat. That sucks!”)
The best part of the demo was when I demonstrated popping up all the heads of the Sun founders at once, by holding the optical mouse up to my mouth, and blowing and sucking into the mouse while secretly pressing and releasing the button, so it looked like I was inflating their heads!
One other weird guy hung around through a couple of demos, and by the time I got back around to the Emacs demo, he finally said “Hey, I used to use Emacs on ITS!” I said “Wow cool! So did I! What was your user name?” and he said “WNJ”.
It turns out that I had been giving an Emacs demo to Bill Joy all that time, then popping his head up and down by blowing and sucking into a Sun optical mouse, without even recognizing him, because he had shaved his beard!
He really blindsided me with that comment about using Emacs, because I always thought he was more of a vi guy. ;)
And especially his delightful thesis work:
I really love how the little nubs preview the structure of the sub-menus, and how you can roll back to the parent menu because it reserves a slice in the sub-menu to go back, so you don't need to use another mouse button or shift key to browse the menus.
That looks like a nice visual representation with a way to easily browse all around the tree, into and out of the submenus without clicking! I can't tell from the video if it's based on a click or a timeout. But it looks like it supports browsing and reselection and correcting errors pretty well! (That would be something interesting to measure!)
There's another useful law related to Fitts's law that applies to situations like this, called Steering Law:
The steering law in human–computer interaction and ergonomics is a predictive model of human movement that describes the time required to navigate, or steer, through a 2-dimensional tunnel. The tunnel can be thought of as a path or trajectory on a plane that has an associated thickness or width, where the width can vary along the tunnel. The goal of a steering task is to navigate from one end of the tunnel to the other as quickly as possible, without touching the boundaries of the tunnel. A real-world example that approximates this task is driving a car down a road that may have twists and turns, where the car must navigate the road as quickly as possible without touching the sides of the road. The steering law predicts both the instantaneous speed at which we may navigate the tunnel, and the total time required to navigate the entire tunnel.
The steering law has been independently discovered and studied three times (Rashevsky, 1959; Drury, 1971; Accot and Zhai, 1997). Its most recent discovery has been within the human–computer interaction community, which has resulted in the most general mathematical formulation of the law.
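The law's usual form is T = a + b ∫ ds / W(s): total time grows with the integral of inverse tunnel width along the path. Here is a rough numerical sketch of that integral; the constants a and b and all names are illustrative, not values from any published experiment.

```python
import math

def steering_time(path, width, a=0.0, b=0.1, steps=1000):
    """Accot-Zhai steering law, T = a + b * integral of ds / W(s),
    approximated with a simple Riemann sum along the tunnel.
    `path` maps s in [0, 1] to an (x, y) point on the centerline;
    `width` maps s to the tunnel width at that point.
    a and b are empirically fitted constants (illustrative here)."""
    total = 0.0
    for i in range(steps):
        s0, s1 = i / steps, (i + 1) / steps
        (x0, y0), (x1, y1) = path(s0), path(s1)
        ds = math.hypot(x1 - x0, y1 - y0)
        total += ds / width((s0 + s1) / 2)
    return a + b * total

# A straight 200-unit tunnel of constant width 20: the integral
# reduces to length / width = 10, so halving the width (or doubling
# the length) doubles the predicted steering time.
t = steering_time(lambda s: (200 * s, 0), lambda s: 20.0)
```

This is why narrow linear submenu "tunnels" are slow to traverse, while pie menus reward moving farther out: the effective target only gets wider with distance.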
Also here's some interesting stuff about incompatibility with Wayland, and rewriting Gnome-Pie as an extension to the Gnome shell:
BobTheSCV> Neverwinter Nights also implemented them, and they worked very well. I had just assumed it was patent trolls or something that kept them from being widely adopted.
You are absolutely correct about the patent trolls!
Bill Buxton at Alias and his marketing team spread a bunch of inaccurate FUD about their "marking menu patent", which I accidentally discovered and tried to correct and get him to stop doing decades ago, but he refused, and continued to spread FUD.
So Alias kept advertising their "patented marking menus" for DECADES, purposefully and successfully discouraging their competition 3D Studio Max, AND many other developers of free and proprietary apps as collateral damage, from adopting them.
When I asked Buxton about the "marking menu patent" before it was granted, he lied point blank to me that there was no "marking menu patent", so I couldn't prove to Kinetix that it was OK to use them, or contact the patent office and inform them about the mistakes in their claims about prior art, and the fact that the "overflow" technique they were claiming in the patent was obvious.
The whole story is here:
Pie Menu FUD and Misconceptions: Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.
>There is a financial and institutional incentive to be lazy about researching and less than honest in reporting and describing prior art, in the hopes that it will slip by the patent examiners, which it very often does.
>Unfortunately they were able to successfully deceive the patent reviewers, even though the patent references the Dr. Dobb’s Journal article which clearly describes how pie menu selection and mouse ahead work, contradicting the incorrect claims in the patent. It’s sad that this kind of deception and patent trolling is all too common in the industry, and it causes so many problems.
Even today, long after the patent has expired, Autodesk marketing brochures continue to spread FUD to scare other people away from using marking menus, by bragging that “Patented marking menus let you use context-sensitive gestures to select commands.”
A snapshot of Alias's claim about "Patented marking menus" from one of their brochures that they are still distributing, even years after their bad patent has expired:
>"Marking Menus: Quickly select commands without looking away from the design. Patented marking menus let you use context-sensitive gestures to select commands."
>The Long Tail Consequences of Bad Patents and FUD
>I attended the computer game developer’s conference in the late 90’s, while I was working at Maxis on The Sims. Since we were using 3D Studio Max, I stopped by the Kinetix booth on the trade show floor, and asked them for some advice integrating my existing ActiveX pie menus into their 3D editing tool.
>They told me that Alias had “marking menus” which were like pie menus, and that Kinetix’s customers had been requesting that feature, but since Alias had patented marking menus, they were afraid to use pie menus or anything resembling them for fear of being sued for patent infringement.
>I told them that sounded like bullshit since there was plenty of prior art, so Alias couldn’t get a legitimate patent on “marking menus”.
>The guy from Kinetix told me that if I didn’t believe him, I should walk across the aisle and ask the people at the Alias booth. So I did.
>When I asked one of the Alias sales people if their “marking menus” were patented, he immediately blurted out “of course they are!” So I showed him the ActiveX pie menus on my laptop, and told him that I needed to get in touch with their legal department because they had patented something that I had been working on for many years, and had used in several published products, including The Sims, and I didn’t want them to sue me or EA for patent infringement.
>When I tried to pin down the Alias marketing representative about what exactly it was that Alias had patented, he started weaseling and changing his story several times. He finally told me that Bill Buxton was the one who invented marking menus, that he was the one behind the patent, that he was the senior user interface researcher at SGI/Alias, and that I should talk to him. He never mentioned Gordon Kurtenbach, only Bill Buxton.
>So I called Bill Buxton at Alias, who stonewalled and claimed that there was no patent on marking menus (which turned out to be false, because he was just being coy for legal reasons). He said he felt insulted that I would think he would patent something that we both knew very well was covered by prior art.
At the time I didn't know the term, but that's what we now call "gaslighting": https://en.wikipedia.org/wiki/Gaslighting
Gee, who do we all know who lies and then tries to turn it all around to blame the person who they bullied, and then tries to play the victim themselves? https://en.wikipedia.org/wiki/Donald_Trump
Gordon Kurtenbach, who did the work and got the patent that Alias marketing people were bragging about in Bill Buxton's name agrees:
Gordon> Don, I read and understand your sequence of events. Thanks. It sounds like it was super frustrating, to put it mildly. Also, I know, having read dozens of patents, that patents are the most obtuse and maddening things to read. And yes, the patent lawyers will make the claims as broad as the patent office will allow. So you were right to be concerned. Clearly, marketing is marketing, and love to say in-precise things like “patented marking menus”.
Gordon> At the time Bill or I could have said to you “off the record, its ok, just don’t use the radial/linear combo”. I think this was what Bill was trying to say when he said “there’s no patent on marking menus”. That was factually true. However, given that Max was the main rival, we didn’t want to do them any favors. So those were the circumstances that lead to those events.
What's ironic is that Autodesk now owns both Alias and 3D Studio Max. Gordon confirmed that Alias's FUD did indeed discourage Kinetix from implementing marking menus or pie menus, which were not actually covered by the patent:
Gordon> After Autodesk acquired Alias, I talked to the manager who was interested in getting pie menus in Max. Yes, he said he that the Alias patents discouraged them from implementing pie menus but they didn’t understand the patents in any detail. Had you at the time said “as long we don’t use the overflows we are not infringing” that would have been fine. I remember at the time thinking “they never read the patent claims”.
Don> The 3D Studio Max developers heard about the Alias marking menu patent from Alias marketing long before I heard of it from them on the trade show floor.
Don> The reason I didn’t know the patent only covered overflows was that I had never seen the patent, of course. And when I asked Buxton about it, he lied to me that “there is no marking menu patent”. He was trying to be coy by pretending he didn’t understand which patent I was talking about, but his intent was to deceive and obfuscate in order to do as much harm to Kinetix 3D Studio Max users as possible, and unfortunately he succeeded at his unethical goal.
What's even worse is that in Buxton's zeal to attack 3D Studio Max users, he also attacked users of free software tools like Blender.
>The Alias Marking Menu Patent Discouraged the Open Source Blender Community from Using Pie Menus for Decades
>Here is another example of how that long term marketing FUD succeeded in holding back progress: the Blender community was discussing when the marking menu patent would expire, in anticipation of when they might finally be able to use marking menus in Blender (even though it has always been fine to use pie menus).
>As the following discussion shows, there is a lot of purposefully sown confusion and misunderstanding about the difference between marking menus and pie menus, and what exactly is patented, because of the inconsistent and inaccurate definitions and mistakes in the papers and patents and Alias’s marketing FUD:
>"Hi. In a recently closed topic regarding pie menus, LiquidApe said that marking menus are a patent of Autodesk, a patent that would expire shortly. The question is: When ? When could marking menus be usable in Blender ? I couldn’t find any info on internet, mabie some of you know."
>The good news: Decades late due to patents and FUD, pie menus have finally come to 3D Studio Max just recently (January 2018)!
Radially - Pie menu editor for 3ds Max: https://www.youtube.com/watch?v=sjLYmobb8vI
Possibly interesting links:
Display and control of menus with radial and linear portions
Methods and system of controlling menus with radial and linear portions
Method and apparatus for producing, controlling and displaying menus
Here is the brochure they are still distributing to this day:
Here is an illustration from that brochure:
It says "Marking Menus. Quickly select commands without looking away from the design. Patented marking menus let you use context-sensitive gestures to select commands."
What Bill Buxton told me back then (and still tells me now) directly contradicts the claim in that brochure, and the claims that Alias marketing people were making.
Alias marketing claimed point blank that "of course marking menus are patented!"
That brochure (and many others) claim point blank that "Patented marking menus let you use context-sensitive gestures to select commands."
Bill Buxton claimed back then (and still claims now) point blank that "marking menus are not patented", and that I would be an idiot to believe Alias marketing.
So there is an obvious glaring contradiction here.
At the time he made that claim to me, he understood very well because I explained quite clearly that I was asking about any patent relating to marking menus in general, because I did not believe it was possible for marking menus to be patented, and Alias marketing was lying and invoking his lie to support their false claims.
I thought there was some patent related to marking menus that they were talking about, and exaggerating its claims to spread FUD. And there was. And they were.
But he refused to admit the existence of any "marking menu patents" at that time by any definition of the term, even though we all now know that those patents exist, and Alias marketing brochures specifically refer to "patented marking menus".
My problem is that Alias marketing was spreading FUD about the patents, and when I asked them about it, they specifically mentioned Bill Buxton's name as the person who patented them, yet when I called Buxton within minutes of hearing that ridiculous claim to ask him about it, he denied there was any such thing as a marking menu patent by any definition, and he got quite abusive and demeaning, calling me an idiot for believing them, and claiming to be insulted that I'd think he'd patent marking menus, like his marketing people told me he did. And he flat out refused to ask his marketing people to stop lying in his name on his behalf.
My problem is not Bill Buxton's or Alias marketing's obvious insincerity and lack of credibility (that's their own problem). It's their intention and result of their lies that I'm concerned about: their FUD was meant to and succeeded in preventing Kinetix (and others) from adopting marking menus or pie menus for 3D Studio Max.
And there is evidence that it also discouraged open source applications like Blender from using marking or pie menus, when they had nothing to be afraid of.
Gordon Kurtenbach, who now works at Autodesk, which now ironically owns both Alias and 3D Studio Max, confirmed those facts to me in the exchange quoted above.
But as I said before, the 3D Studio Max developers at Kinetix heard about the Alias marking menu patent from Alias marketing, and made the decision to ignore their users and not support them, long before I heard of it from them on the trade show floor, or read the patent. So of course I couldn't say that at the time.