Pie Menus: A 30-Year Retrospective: Take a Look and Feel Free (medium.com)
130 points by DonHopkins 9 months ago | 49 comments



Many moons ago I had a plugin in my Firefox that made common navigational operations (back, forward, etc.) accessible via pie menus. It was absolutely mind-blowing how much more fluent they made browsing feel. Once you had the muscle memory in place to know where the operations were in the pie without having to look, you would just fly. I'm not sure what happened to that plugin, but I sure miss it.

It sounds like at one point pie menus were even being worked on as part of Firefox core (see https://www.extremetech.com/computing/103589-a-walk-down-fir...), but were eventually abandoned, which is sad. (I can understand why, lots of people seem to have trouble wrapping their minds around the idea, but still.)


Yes indeed, it's sad they abandoned them!

Pie menus worked really well in HyperTIES, a hypermedia browser we developed in 1988 at HCIL, as well as Emacs, and I miss them, especially when my MacBook Pro overheats and gets really really really slow, as bad as a Sun 3/50 running NeWS!

And there are some big gaps from implementing one particular menu, to enabling programmers to implement any menu, to enabling designers to implement any menu, to enabling users to implement any menu, to motivating and educating users how to design great menus for themselves.

Bridging the last gap requires providing good defaults and examples, intelligent automatic layout and constraints, and somehow teaching users to intuitively understand Fitts's Law and other important principles of user interface design.
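For reference (the formula isn't spelled out here, but this is the standard Shannon formulation), Fitts's Law predicts the time to hit a target of width W at distance D:

  T = a + b \log_2(D / W + 1)

Pie menus exploit it twice: every item starts out adjacent to the cursor (small D), and each slice is a wedge that gets wider the further out you move (effectively a bigger W).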

That's what I was getting at with the "Crazy Example: PieCraft".

With games, it's possible to put pressure on users to figure it out for themselves, to award points for good performance, to punish them when they don't perform well (by death, if necessary), and to organically reward them when they do it right (with virtual fame and fortune, not just as a prize, but as the natural result of having well designed menus, and being able to fight and defend themselves better).

But it's not so user friendly for a text editor to challenge, punish and reward the user like that. Still, the lessons users learn playing a game can transfer over to their text editor.

>One idea I’ve had is to develop a game called “PieCraft”, that has user-editable pie menus that are first-order in-game player craftable artifacts. [..]

>You could find empty or pre-populated pie menus in the world, pick them up and make them your own, and edit them by moving items around, putting them inside of other menus, and modifying them as you leveled up and earned more powerful menu editing skills and resources.

>The capacity and layout and allowed contents of some menus could be restricted, to constrain the ways you used them, forcing you to figure out which were your most important items, and giving them front row seats so they were easiest to select.

>To put further pressure on players to design efficient menus, your menus could be vulnerable to attack from warriors or theft from pickpockets while they were popped up, and only be able to take a limited amount of damage (depending on if they were crafted from wood or diamond).

>When hit hard enough, items could break loose and fall on the ground, or a whole slice or menu could break open like a piñata and spill all its items into the world. Then you (and your enemies) would have to scramble to pick up all the pieces!

>The sense of urgency and vulnerability while the menu was popped up would compel you to “mouse ahead” to get through the menus quickly, and arrange your most important items so you could find and select them as quickly as possible.

>It would reward you for successfully “mousing ahead” swiftly during combat, by avoiding damage from attack and loss from thieves, and awarding power-up points and experience.

The "Awesome Example: Monster Hunter: World — Radial Menu Guide" shows the benefits of user created radial menus, and what a profound effect good menu design can have on gameplay.

>Monster Hunter: World is a wonderful example of a game that enables and motivates players to create their own pie menus, and it shows how important customizable, user defined pie menus are in games and tools.

>Want access to all your items, ammo and gestures at your fingertips? Here’s a quick guide on the Radial Menu.

>With a multitude of items available, it can be challenging to find the exact one you need in the heat of the battle. Thankfully, we’ve got you covered.

>Here’s a guide on radial menus, and how to use them:

>The radial menu allows you to use items at a flick of the right stick.

>There are four menus offering access to eight items each, and you can fully customize them, all to your heart’s content.

>Radial menus are not just limited to item use, however.

>You can use them to craft items, shoot SOS flares, and even use communication features such as stickers and gestures.

But it doesn't go quite as far as to elevate player crafted menus into first class customizable parameterized in-game artifacts that players can craft, discover in the world, destroy in combat, and buy from other players at the auction house as high quality designer menus, empty or pre-populated with all kinds of sub-menus and other items, like gift baskets of food, magic spells or flower arrangements.



Hey, Don, what do you think we should be doing with multitouch? I'm guessing you have some better ideas than "scroll a linear menu and tap to pick from it" or "emulate a mouse, then add some undiscoverable chording gestures," and I have some ideas I'd be happy to tell you about for hours, but I'd love to hear what you've been thinking.


Good questions! It's a very difficult problem, and I don't know of a universal solution. I haven't been very happy with any of the higher level multitouch tracking APIs that I've used.

I usually just end up writing a lot of ugly Rube Goldbergesque spaghetti event handling code with lots of global state and flags and modes.

The problem doesn't seem to break down very cleanly into a bunch of nice clean little components that don't know very much about each other, the way mouse oriented widgets do, so you need a lot of global event management and state machine code, and friendly objects that know about each other, in order to keep track of what's really going on, and to keep from tripping over your own fingers.

Michael Naimark discusses some interesting stuff in his articles "VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste, Mind)" and "VR / AR Fundamentals - 4) Input & Interactivity"! (Read the whole series, it's great!)

https://medium.com/@michaelnaimark/vr-ar-fundamentals-3-othe...

https://medium.com/@michaelnaimark/vr-ar-fundamentals-4-inpu...

I wrote some stuff in the "Gesture Space" article about the problem of multi touch map zoom/pan/rotate tracking, and how it's desirable to have a model where users can easily comprehend what's going on:

Gesture Space

https://medium.com/@donhopkins/gesture-space-842e3cdc7102

>Multitouch Tracking Example

>One interesting example is multitouch tracking for zooming/scaling/rotating a map.

>A lot of iPhone apps just code it up by hand, and get it wrong (or at least not as nicely as Google Maps gets it).

>For example, two fingers enable you to pan, zoom and rotate the map, all at the same time.

>The ideal user model is that during the time one or two fingers are touching the map, there is a correspondence between the locations of the fingers on the screen, and the locations of the map where they first touched. That constraint should be maintained by panning, zooming and rotating the map as necessary.

>The google map app on the iPhone does not support rotating, so it has to throw away one dimension, and project the space of all possible gestures onto the lower dimensional space of strict scaling and panning, without any rotation.

>So the ideal user model for two finger dragging and scaling without rotation is different, because it’s possible for the map to slide out from under your fingers due to finger rotation. So it effectively tracks the point in-between your fingers, whose dragging causes panning, and the distance between your fingers, whose pinching causes zooming. Any finger rotation around the center point is simply ignored. That’s a more complicated, less direct model than panning and scaling with rotation.

>But some other iPhone apps haphazardly only let you zoom or pan but not both at once. Once you start zooming or panning, you are locked into that gesture and can’t combine or switch between them. Perhaps this was a conscious decision on the part of the programmer, or perhaps they didn’t even realize it should be possible to do both at once, because they were using a poorly designed API, or thinking about it in terms of “interpreting mouse gestures” instead of “maintaining constraints”.

>Apple has some gesture recognizers for things like tap, pinch, rotation, swipe, pan and long press. But they’re not easily composable into a nice integrated tracker like you’d need to support panning/zooming/rotating a map all at once. So most well written apps have to write their own special purpose multitouch tracking code (which is pretty complicated stuff, and hard to get right).
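To make the "maintaining constraints" model concrete, here's a minimal TypeScript sketch (my own reconstruction, not code from the article; all names are made up) that solves for the pan/zoom/rotate transform pinning each finger to the map point it first touched:

  interface Vec { x: number; y: number; }

  // Complex-style multiply: applies the scale+rotation encoded in r to v.
  const cmul = (r: Vec, v: Vec): Vec =>
    ({ x: r.x * v.x - r.y * v.y, y: r.x * v.y + r.y * v.x });

  // a0, a1: map-space points under the fingers at touch-down.
  // p0, p1: current finger positions in screen space.
  // Solves screen = cmul(r, map) + t so both constraints hold exactly.
  function solveTwoFingerTransform(a0: Vec, a1: Vec, p0: Vec, p1: Vec) {
    const va = { x: a1.x - a0.x, y: a1.y - a0.y }; // map-space baseline
    const vp = { x: p1.x - p0.x, y: p1.y - p0.y }; // screen-space baseline
    const d = va.x * va.x + va.y * va.y;
    if (d === 0) throw new Error("fingers grabbed the same map point");
    // r = vp / va (complex division): scale and rotation in one value
    const r = { x: (vp.x * va.x + vp.y * va.y) / d,
                y: (vp.y * va.x - vp.x * va.y) / d };
    const ra0 = cmul(r, a0);
    const t = { x: p0.x - ra0.x, y: p0.y - ra0.y }; // translation
    return { r, t, scale: Math.hypot(r.x, r.y), angle: Math.atan2(r.y, r.x) };
  }

For the rotation-free Google Maps style model, you'd instead track the midpoint between the fingers for panning and the distance between them for zooming, discarding the angle.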

For example, if one finger drags, and two fingers can scale and rotate, you might want to implement inertia when you let go, so you can drag and release while moving, and the object will flick in the direction of your stroke with the instantaneous velocity of your finger.

But what happens if you release both fingers while rotating? Should that impart rotational inertia? What about if you start spinning with two fingers and then lift one finger -- do you roll back to panning but impart some rotational inertia so you spin around the point you're touching to pan? Should it also impart rotational inertia from the rotation of the iPad in the real world from the gyros, when you release your fingers? It gets messy!
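Here's a sketch of one common way to do flick inertia (my own assumptions, not Pantomime's actual code): keep a short sliding window of timestamped samples, seed a velocity on release, and decay it every frame:

  interface Sample { x: number; y: number; t: number; } // t in seconds

  class InertiaTracker {
    private samples: Sample[] = [];
    vx = 0; vy = 0; // current velocity in pixels/second

    move(x: number, y: number, t: number) {
      this.samples.push({ x, y, t });
      // keep only the last ~100ms so the velocity is "instantaneous"
      while (this.samples.length > 1 && t - this.samples[0].t > 0.1)
        this.samples.shift();
    }

    release() { // seed inertia from the recent motion
      const n = this.samples.length;
      if (n < 2) { this.vx = this.vy = 0; return; }
      const a = this.samples[0], b = this.samples[n - 1];
      const dt = (b.t - a.t) || 1e-6;
      this.vx = (b.x - a.x) / dt;
      this.vy = (b.y - a.y) / dt;
    }

    step(dt: number, friction = 4) { // call once per frame after release
      const dx = this.vx * dt, dy = this.vy * dt;
      const decay = Math.exp(-friction * dt); // exponential slow-down
      this.vx *= decay; this.vy *= decay;
      return { dx, dy }; // apply to the pan each frame
    }
  }

Rotational inertia works exactly the same way, just with angle samples instead of positions.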

I implemented some variations of that for Pantomime on Unity for iOS and Android, so you can pan yourself through the virtual world by dragging one finger across the screen, and rotate around the vertical axis through the center of the screen by twisting two fingers around.

Pantomime – Interactive Multiplayer Virtual Reality https://www.youtube.com/watch?v=T43b5ywnYpo

For Pantomime, supporting inertia for panning and rotating gestures made sense and was lots of fun, and it also integrated the rotational motion in the real world from the gyros, so you could spin and skate around with your fingers, lift them and continue spinning around while skating too, all the while turning the actual iPad itself around!

Or you could grab an object to twist it with two fingers, then rotate it by rotating the iPad itself instead of dragging your fingers across the screen! (It's actually a lot easier to turn things that way, I think! No friction.) So the tracking needs to happen in 3D space, projecting the touch point on the screen into the 3D world, so you can touch the screen with a single finger and drag an object by pointing with the screen instead of dragging your finger, or combine the two for fine positioning.

Another wrinkle is that the user might be holding the iPad in one hand and touching the screen with two fingers of their other hand, to rotate. Or the user might be holding the iPad in two hands like a steering wheel, one at each side, with both thumbs touching opposite sides of the screen.

In the "steering wheel" situation (which is a comfortable way of holding an iPad, that you control it with your thumbs), you might want to have a totally different tracking behavior than the two finger touch gesture (like each thumb controls an independent vertical slider along the screen edge, instead of two finger scaling, for example), so you have to define a recognizer with a distance threshold or a way of distinguishing those two gestures.

But when only one thumb has pressed, you don't know which way they're holding it yet -- whether to expect the second finger to touch nearby or at the opposite side -- so the initial one finger tracking has to be compatible with each way of holding it.

Another approach is, instead of the app trying to guess how it's being used, for the app to INSTRUCT the user which way it expects them to operate the device, and how it will interpret the gestures, in a way that gives the user control of what mode it's in (like touching the screen or not).

So you could switch between different modes by wielding different tools or weapons, and the user interface overlay changes to show you how to hold and operate the iPad to maintain the illusion of pantomiming walking or paddling.

Pantomime switches between showing two hands holding the screen like a steering wheel (when no fingers are touching, you're walking), and one hand holding it like a paddle (when one finger is touching the screen, you're paddling, pivoting on your elbow by the side of the screen you're touching).

And you can detect when the iPad is sitting flat with the screen facing up, and then you can switch into a different mode with different touch tracking, since you know they're probably not holding it like a steering wheel or waving it around if it's flat and not moving.

Here's a good demo that shows panning, rotating, inertia, walking and paddling, with magic cans of different gravities, explained with in-world Help Monoliths:

https://www.youtube.com/watch?v=ma9CsOLnux0

Here's a demo with a terrible bug:

https://www.youtube.com/watch?v=4rBuRDq7pMo

Here's a four-year-old playing with Pantomime -- "I'm so good at this!" he says:

https://www.youtube.com/watch?v=3ilhH2hDyQc

You have to think long and hard about how people are going to interact with the device in the real world, and not assume they'll follow the official operating instructions of your app! There might be two people touching the screen with their fingers near each other. Or it could be a cat swatting or a baby licking the iPad! You can never tell what's going on in the real world.

For Pantomime, I used the TouchScript multitouch tracking library for Unity3D on iOS and Android.

https://assetstore.unity.com/packages/tools/input-management...

It seemed to be able to handle a certain set of complex gesture situations, but not the complex gesture situations I needed it to handle. But it might work for you, and it's free! I think there are other versions of it on different platforms, too. And it handles proxying events from remote devices (or from Flash to Unity). And it can handle attaching different gesture recognizers to different levels of the transform hierarchy (perhaps to control which colliders detect the touches), but I'm not sure what that's good for.

What I needed to do was full screen multitouch tracking, not tracking multiple gestures on individual objects, so I didn't use everything TouchScript had to offer, and I can't comment on how well that feature works.

It had a separate drag recognizer and rotate recognizer that could be active at the same time, and you can configure different recognizers to be friends or to lock each other out, but still all the different handlers had to know a hell of a lot about each other to be able to roll between them properly with any combination of finger touches and lifts. It was not pretty.

It's free, and it's certainly worth looking at the product description and manual to see which complex gesture situations it can handle, if you're interested.

>TouchScript makes handling complex gesture interactions on any touch surface much easier.

>Why TouchScript?

>- TouchScript abstracts touch and gesture logic from input methods and platforms. Your touch-related code will be the same everywhere.

>- TouchScript supports many touch input methods starting from smartphones to giant touch surfaces: mouse, Windows 7/8 touch, mobile (iOS, Android, Windows Store/Windows Phone), TUIO.

>- TouchScript includes common gesture implementations: press, release, tap, long press, flick, pinch/scale/rotate.

>- TouchScript allows you to write your own gestures and custom pointer input logic.

>- TouchScript manages gestures in transform hierarchy and makes sure that the most relevant gesture will receive touch input.

>- TouchScript comes with many examples and is extensively documented.

>- TouchScript makes it easy to test multi-touch gestures without an actual multi-touch device using built-in second touch simulator (activated with Alt + click), TUIOPad on iOS or TUIODroid on Android.

>- It's free and open-source. Licensed under MIT license.

It's not too hard to track full screen gestures, where one object is tracking all the fingers.

The problem is when you have several gestures going on at the same time, or several different objects tracking different gestures.

Are there two objects tracking single finger dragging gestures at the same time, or is one object tracking double finger dragging?

How do you properly roll between one, two and three finger gestures when you raise and lower fingers?

The thing that's frustrating to a programmer used to tracking a mouse is that users can touch and remove their fingers in any order they please, and it's easy not to think things through and figure out how to cover every permutation. They can put down three fingers A B and C one by one, then remove them in a different order, or touch two fingers at once, or almost at once.

So you need to be able to seamlessly transition between 1, 2, 3, etc, finger tracking in any order or several at once.
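One pattern that helps (a sketch of the approach, with hypothetical names): let a single full-screen tracker own all the touches, and re-anchor the gesture from the current finger positions whenever any finger goes down or up, so the transform never jumps no matter what order the fingers arrive and leave in:

  type TouchId = number;
  interface Point { x: number; y: number; }

  class FullScreenTracker {
    private touches = new Map<TouchId, Point>();

    down(id: TouchId, p: Point) { this.touches.set(id, p); this.reanchor(); }
    up(id: TouchId)             { this.touches.delete(id); this.reanchor(); }
    move(id: TouchId, p: Point) { this.touches.set(id, p); this.track(); }

    // Whenever the finger count changes, restart the gesture from the
    // current positions (e.g. recompute the anchor points a0, a1 from the
    // transform sketch above), so pan/zoom/rotate stays continuous.
    private reanchor() { /* snapshot current positions as gesture origin */ }

    private track() {
      switch (this.touches.size) {
        case 1: /* pan with the single finger */ break;
        case 2: /* pan + zoom + rotate from both fingers */ break;
        default: /* 3+ fingers: ignore extras, or switch modes */ break;
      }
    }
  }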

I also tried implementing web browser pie menus for a gesture tracking library called hammer.js, by making my own pie menu gesture recognizer. Overall hammer was pretty nice for touch screen tracking, but my problem was that at the time (several years ago, I don't know about now) you couldn't make a gesture that tracked while the button wasn't pressed, and mouse based pie menus need to be able to track while they're clicked up. So I needed to do some ugly hack to handle that.

https://hammerjs.github.io/

I am guessing hammer.js was designed mainly for touch screen tracking, not necessarily mouse tracking (since touch screens can't track the "pointer position" when no finger is touching the screen). It would be nice if it better supported writing gesture recognizers that work as seamlessly as possible with either touch screens or mice. Maybe it's better at that now, though.

It's not hammer.js's fault, but you must beware the minefield of browser/device support:

http://hammerjs.github.io/browser-support/

With a mouse, you can do things like "warping" the mouse pointer to a new location when the user tries to click up a pie menu near the screen edge, but there's no way to forcefully push the user's finger towards the center of the screen.
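The warping trick itself is simple to sketch (hypothetical names; on X11 the actual warp would be done with something like XWarpPointer): clamp the menu center so the whole menu fits on screen, then move the pointer by the same offset so it still sits in the center:

  function clampMenuCenter(click: { x: number; y: number },
                           menuRadius: number,
                           screen: { w: number; h: number }) {
    const cx = Math.min(Math.max(click.x, menuRadius), screen.w - menuRadius);
    const cy = Math.min(Math.max(click.y, menuRadius), screen.h - menuRadius);
    // Warp the pointer by (cx - click.x, cy - click.y) so the cursor still
    // sits at the menu center. On a touch screen there's no equivalent:
    // you can't forcefully move the user's finger.
    return { center: { x: cx, y: cy },
             warp: { dx: cx - click.x, dy: cy - click.y } };
  }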

But then again, the amazing Professor Hiroo Iwata has figured out a "heavy handed" approach to solving that problem:

3DOF Multitouch Haptic Interface with Movable Touchscreen

https://www.youtube.com/watch?v=YCZPmj7NtSQ

>Shun Takanaka, Hiroaki Yano, Hiroo Iwata, Presented at AsiaHaptics2016. This paper reports on the development of a multitouch haptic interface equipped with a movable touchscreen. When the relative position of two of a user’s fingertips is fixed on a touchscreen, the fingers can be considered a hand-shaped rigid object. In such situations, a reaction force can be exerted on each finger using a three degrees of freedom (3DOF) haptic interface. In this study, a prototype 3DOF haptic interface system comprising a touchscreen, a 6-axis force sensor, an X-Y stage, and a capstan drive system was developed. The developed system estimates the input force from fingers using sensor data and each finger’s position. Further, the system generates reaction forces from virtual objects to the user’s fingertips by controlling the static frictional force between each of the user’s fingertips and the screen. The system enables users to perceive the shape of two-dimensional virtual objects displayed on the screen and translate/rotate them with their fingers. Moreover, users can deform elastic virtual objects, and feel their rigidity.

https://link.springer.com/chapter/10.1007/978-981-10-4157-0_...

(There is some other seriously weird shit on the AsiaHaptics2016 conference video list -- I'm not even gonna -- oh, all right: relax and tighten, then look for yourself: https://www.youtube.com/channel/UC8qMmIgmWhnQBeABjGlzGbg/vid... ... I can't begin to imagine what the afterparties at that conference were like!)

Don't miss Hiroo Iwata's food simulator!

https://www.wired.com/2003/08/slideshow-wonders-aplenty-at-s...

http://www.frontier.kyoto-u.ac.jp/te03/member/iwata/index.ht...


Thanks! I hadn't even thought about it from the programmer's-model point of view! I was just thinking about what kinds of user-interface idioms would turn out to be usable — although in the end maybe the user model needs to be the programmer model, especially to support end-user programming.

I haven't tried Unity yet, although it sure looks like fun. So far I feel like 3-D user interfaces have been kind of a mess in terms of usability. Maybe you guys at Pantomime will come up with a solution.

I'd tried Hammer a few years back, and gave up and rolled my own gesture tracking because of kind of similar issues.

In terms of pinch-to-zoom as establishing constraints — have you seen Daniel Vogel's Pinch-to-Zoom Plus? https://www.youtube.com/watch?v=x-hFyzdwoL8 He argues that there's actually a somewhat more usable mode of pinch-zooming which breaks the constraints, but not in the chintzy way you're correctly criticizing.

I'll try to respond in more detail after reading through and watching what you've linked!


Wow, Daniel Vogel kicks ass:

Behold: The foot menu!

https://www.youtube.com/watch?v=pqycjWHoI2w

While coding, they scroll with taps to get some exercise, and then set a breakpoint by using a whole foot right tap.

A kick forward starts a debug session, and they move away from the keyboard to get a break.

While debugging they can step into code with forward toe taps.

Step over code with backward toe taps.

And step out of code with right or left toe taps.

After stepping through the execution like this, they set another breakpoint with a left whole foot tap, and run the code to the end with a forward whole foot tap.

Having got a good break while getting work done, they return to the keyboard to fix the bug.


Isn't the issue with pie menus that they need a set of icons that require no further explanation for all users? Linear menus may be slower, but they have better usability for new or infrequent users, because they allow text descriptions and are easier to scan quickly.


Well, if you already know the icons...

For example, Android tablets have three or four icons at the bottom, by default. (Minor wars have broken out over the order in which they are displayed.) Back, Home, Menu, Recents.

For a few years now I've been installing LMT https://forum.xda-developers.com/showthread.php?t=1330150 on all my Android devices. LMT implements a radial menu that your finger can slide in from any (selectable) side of the device screen. And, of course, you can customize how many slices, what each slice does, color, translucency, highlight color, and a myriad of other small choices which can make the thing feel exactly right for you.

It frequently confuses other people, and why not? It's not like I'm going to let other people use my phone.


Which makes radial menus a good customizable feature but not a great default feature unless you are in a context where you expect users to achieve mastery with your UI (such as in a video game).


People master their phones pretty quickly.

If the setup feature on the iPhone 11 started by teaching you how to use a radial or pie menu system, people would swear it was the best thing Apple ever invented, that it's completely intuitive and toddlers can learn it.

(Part of this is because they say that about every Apple feature except fragile keyboards.)


> (Part of this is because they say that about every Apple feature except fragile keyboards.)

I'm pretty sure they say that about the keyboards too -- i've seen comments like "the j and k keys are stuck, but i love it so much"


I knew someone who enjoyed configuring the X11 "piewm" window manager so it didn't have any frames or borders at all, and popped up pie menus with one of the modifier keys. They enjoyed watching other people suffer when trying to mess around with their computer!

https://www.gsp.com/cgi-bin/man.cgi?section=1&topic=piewm


They don't require icons, but icons certainly fit a lot better. But that's true for anything, including toolbars and linear menus.

Various layout strategies have been developed for justifying text around the pie menu, some are better than others.

Rotating text is usually pretty terrible and hard to read, because you have to turn your head, and it's jaggy -- though nowadays, with anti-aliasing, it's not quite as jaggy as PostScript graphics on a bitmap screen were.

The simplest thing to do is to define an inner label radius, and justify against it: the center bottom of the top item (if there is one), the center top of the bottom item (if there is one), the left middle of all the right items, and the right middle of all the left items.
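Here's a small TypeScript sketch of that justification rule (my reconstruction, not the original code; item 0 at the top, going clockwise, with screen y growing downward):

  interface Size { w: number; h: number; } // measured label size

  // Returns the top-left position of the label for item i of n.
  function placeLabel(i: number, n: number, cx: number, cy: number,
                      labelRadius: number, label: Size) {
    const angle = (2 * Math.PI * i) / n - Math.PI / 2; // item 0 points up
    const px = cx + labelRadius * Math.cos(angle); // point on label circle
    const py = cy + labelRadius * Math.sin(angle);
    const eps = 1e-6;
    if (Math.abs(Math.cos(angle)) < eps) {
      // top or bottom item: center horizontally, pin bottom or top edge
      return { x: px - label.w / 2,
               y: Math.sin(angle) < 0 ? py - label.h : py };
    } else if (Math.cos(angle) > 0) {
      // right-side item: pin the left middle against the radius
      return { x: px, y: py - label.h / 2 };
    } else {
      // left-side item: pin the right middle against the radius
      return { x: px - label.w, y: py - label.h / 2 };
    }
  }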

Here's a pie menu with iconic symbols justified that way, and a half pie menu with labels justified that way (just no bottom label since it has an even number of slices, and is half size):

https://cdn-images-1.medium.com/max/600/1*bK8djI5foFRHOjzCQc...

Here's an "eight days a week" menu that shows longer weekday name labels, including ones at the top and bottom -- it looks fine for wide text labels:

https://cdn-images-1.medium.com/max/450/0*4KdSEWpcoP4TM5Hy.g...

And here's a window management menu, that shows a menu with labels, and a grab submenu (for immediately grabbing a corner or edge) with icons.

https://cdn-images-1.medium.com/max/450/0*KkzlkTw5YG8zWJBu.g...

The top and bottom icons look a little hinkey, since their center tops and bottoms are pinned against the inside label radius, so they're pushed out a bit too much. (It may also have to do with the icon metrics.) To handle that, pie menus with automatic layout should let you tweak each item's exact position with a relative dx, dy offset. (I guess I never got around to fixing that grab menu.)

Here's a ringed menu with a shitload of very short single character labels, justified that way. Don't try this at home, kids! I'm not saying it's a good design, but I just had to try it, to see and feel how it worked. Somebody with a better graphical design sense than I have could do a lot better!

https://cdn-images-1.medium.com/max/300/0*e6Vki12PZpXhNUCF.p...

https://cdn-images-1.medium.com/max/300/0*Bp6mF8xuFoEXccz6.p...

https://cdn-images-1.medium.com/max/300/0*nS1dBKIaUJWAYGRK.p...

Another strategy is to use icons, but show a menu description when no item is selected, and the label of the selected item somewhere, like this:

SimCity Tools menu, nothing selected: "Select a SimCity editing tool, or the zone or build submenu."

https://cdn-images-1.medium.com/max/450/0*M4lyjaseU39Tvl9k.p...

SimCity Tools menu, bulldozer selected: "Bulldozer editing tool."

https://cdn-images-1.medium.com/max/450/0*ugqLx0GcRlCt7e_R.p...

Some of these images were from "OLPC Sugar Pie Menu Discussion" where there's lots more discussion about graphical design, layout, and tracking:

https://medium.com/@donhopkins/olpc-sugar-pie-menu-discussio...

The OLPC can get away with using a lot of nice big bold square icons, because it's designed for kids who may not be able to read. But I think it's important to support not just text labels, but also full text descriptions.

Another feature all kinds of menus should really have is a way for users to discover why the fuck a menu item is disabled, and what they can do to enable it. It's so terribly frustrating that disabled items just ignore you and won't even let you select them, but there they are, staring at you, not giving you any help, just watching you suffer, knowing all along what's wrong and why they're disabled, just not allowed to tell you!

At least let users point at menu items that aren't enabled, and tell them why they're not enabled in the help text at the bottom of the menu, and even provide some context sensitive hints describing what you have to do to enable the item, if not offering to do it for you.


Wacom’s drivers for their drawing tablets include the ability to make a custom radial menu pop up under your cursor. Now and then I think about trying them out, but the idea of organizing every menu item I’m likely to use in Illustrator into eight broad categories feels like more work than it’s worth, especially when I have a ton of custom keyboard shortcuts at hand.


The touch keyboard in Windows 8/8.1 and Windows 10 versions 1507/1511/1607/1703 had radial menus for selecting alternate and secondary characters, they were pretty nice and quick and became even quicker when I ended up memorizing a lot of the gestures. Sadly they're gone since Windows 10 version 1709, replaced with something much slower.


Does anyone other than me have a cognitive problem with pie menus? For some reason I have to slow down and think about what I should do to use them. I have tried and tried and it doesn't get easy. I use autodesk daily and it offers pie menus everywhere but I avoid them.


For me at least, I believe part of the reason is the circular visual layout of the menu, which makes it hard to scan. With traditional context menus, I usually develop muscle memory that helps me aim at the desired option. With pie menus this is not needed, but scanning the available options feels weird, as there's no clear left-to-right reading sequence.

I would love a "square pie" menu that combined the improvements of pie menus (click-and-drag to select, effectively infinite target size) with a square, table-like layout surrounding the cursor above, below and to the sides of the initial mouse click point.

Something like this (where <¬ is the mouse cursor):

   ___ ___ ___
  | A | B | C |
  |___|___|___|
  | D |<¬ | E |
  |___|___|___|
  |     F     |
  |___________|
  |     G     |
  |___________|


Xmonad has something called grid select, which works kind of like a combination of pie menus and a grid, while also being keyboard navigable (I use the vim hjkl keybindings).

http://maskray.me/static/xmonad-grid-select.jpg

https://hackage.haskell.org/package/xmonad-contrib-0.13/docs...


This is a step in the right direction, although still a little bit too scary for normal users accustomed to classic context menus:

https://addons.mozilla.org/mn/firefox/addon/radialcontext-mz...


Also Crysis. And when it comes to interesting input interfaces, does anyone remember Dasher, which was once also part of GNOME?

http://www.inference.org.uk/dasher


Dasher is fantastic, because it's based on rock solid information theory, designed by the late David MacKay.

Here is the seminal Google Tech Talk about it:

https://www.youtube.com/watch?v=wpOxbesRNBc

Here is a demo of using Dasher by an engineer at Google, Ada Majorek, who has ALS and uses Dasher and a Headmouse to program:

https://www.youtube.com/watch?v=LvHQ83pMLQQ

Another one of her demonstrating Dasher:

Ada Majorek Introduction - CSUN Dasher

https://www.youtube.com/watch?v=SvsSrClBwPM

Here’s a more recent presentation about it, that tells all about the latest open source release of Dasher 5:

Dasher - CSUN 2016 - Ada Majorek and Raquel Romano

https://www.youtube.com/watch?v=qFlkM_e-sDg

Here's the github repo:

Dasher Version 4.11

https://github.com/GNOME/dasher

>Dasher is a zooming predictive text entry system, designed for situations where keyboard input is impractical (for instance, accessibility or PDAs). It is usable with highly limited amounts of physical input while still allowing high rates of text entry.

Ada referred me to this mind bending prototype:

D@sher Prototype - An adaptive, hierarchical radial menu.

https://www.youtube.com/watch?v=5oSfEM8XpH4

>( http://www.inference.org.uk/dasher ) - a really neat way to "dive" through a menu hierarchy, or through recursively nested options (to build words, letter by letter, swiftly). D@sher takes Dasher, and gives it a twist, making slightly better use of screen real estate.

>It also "learns" your typical useage, making more frequently selected options larger than sibling options. This makes it faster to use, each time you use it.

>More information here: http://beznesstime.blogspot.com and here: https://forums.tigsource.com/index.php?topic=960

Dasher is even a viable way to input text in VR, just by pointing your head, without a special input device!

Text Input with Oculus Rift:

https://www.youtube.com/watch?v=FFQgluUwV2U

>As part of VR development environment I'm currently writing ( https://github.com/xanxys/construct ), I've implemented dasher ( http://www.inference.org.uk/dasher ) to input text.

One important property of Dasher is that you can pre-train it on a corpus of typical text, and dynamically train it while you use it. It learns the patterns of letters and words you use often, and those become bigger and bigger targets that string together so you can select them even more quickly!
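The core idea is easy to sketch (a toy version; real Dasher uses an adaptive PPM language model conditioned on the preceding context, not raw letter frequencies): each symbol's share of the zooming canvas is proportional to its probability, so frequent letters become big, easy targets:

  // Toy model: letter probabilities from a training corpus become
  // box heights that tile the [0, 1] canvas.
  function letterTargets(corpus: string): Map<string, number> {
    const counts = new Map<string, number>();
    let total = 0;
    for (const ch of corpus.toLowerCase()) {
      counts.set(ch, (counts.get(ch) ?? 0) + 1);
      total++;
    }
    const heights = new Map<string, number>();
    for (const [ch, c] of counts) heights.set(ch, c / total);
    return heights; // bigger box = faster to steer into
  }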

Ada Majorek has it configured to toggle between English and her native language, so she can switch between writing email to her family abroad and to her co-workers at Google.

Now think of what you could do with a version of Dasher integrated with a programmer's IDE, that knew the syntax of the programming language you're using, as well as the names of all the variables and functions in scope, plus how often they're used!

I have a long term pie in the sky “grand plan” about developing a JavaScript based programmable accessibility system I call “aQuery”, like “jQuery” for accessibility. It would be a great way to deeply integrate Dasher with different input devices and applications across platforms, and make them accessible to people with limited motion, as well as users of VR and AR and mobile devices.

http://donhopkins.com/mediawiki/index.php/AQuery

Here’s some discussion on hacker news, to which I contributed some comments about Dasher:

A History of Palm, Part 1: Before the PalmPilot (lowendmac.com)

https://news.ycombinator.com/item?id=12306377


Dasher is really impressive. I really like the idea of bringing it into VR, and maybe it can be taken even further. If you turn the exploration into a 3D graph of X/Y options in the Z direction, as opposed to a 2D graph of Y options in the X direction, and combine it with the eye tracking of newer VR headsets, you should be able to get a decent improvement in accuracy, and be able to increase the speed.


Imagine how safe democracy would be if only voting machines used pie menus:

https://medium.com/@donhopkins/dumbold-voting-machine-for-th...


It's the 30 year anniversary of CHI’88 (May 15–19, 1988), where Jack Callahan, Ben Shneiderman, Mark Weiser and I (Don Hopkins) presented our paper “An Empirical Comparison of Pie vs. Linear Menus”. We found pie menus to be about 15% faster and with a significantly lower error rate than linear menus!

So I've written up a 30 year retrospective:

This article will discuss the history of what’s happened with pie menus over the last 30 years (and more), present both good and bad examples, including ideas half baked, experiments performed, problems discovered, solutions attempted, alternatives explored, progress made, software freed, products shipped, as well as setbacks and impediments to their widespread adoption.

Here is the main article, and some other related articles:

Pie Menus: A 30 Year Retrospective. By Don Hopkins, Ground Up Software, May 15, 2018. Take a Look and Feel Free!

https://medium.com/@donhopkins/pie-menus-936fed383ff1

This is the paper we presented 30 years ago at CHI'88:

An Empirical Comparison of Pie vs. Linear Menus. Jack Callahan, Don Hopkins, Mark Weiser (*) and Ben Shneiderman. Computer Science Department, University of Maryland, College Park, Maryland 20742. (*) Computer Science Laboratory, Xerox PARC, Palo Alto, Calif. 94303. Presented at ACM CHI’88 Conference, Washington DC, 1988.

https://medium.com/@donhopkins/an-empirical-comparison-of-pi...

Open Sourcing SimCity. Excerpt from page 289–293 of “Play Design”, a dissertation submitted in partial satisfaction of the requirements for the degree of Doctor in Philosophy in Computer Science by Chaim Gingold.

https://medium.com/@donhopkins/open-sourcing-simcity-58470a2...

Recommendation Letter for Krystian Samp’s Thesis: The Design and Evaluation of Graphical Radial Menus. I am writing this letter to enthusiastically recommend that you consider Krystian Samp’s thesis, “The Design and Evaluation of Graphical Radial Menus”, for the ACM Doctoral Dissertation Award.

https://medium.com/@donhopkins/don-hopkins-october-31-2012-e...

Constructionist Educational Open Source SimCity. Illustrated and edited transcript of the YouTube video playlist: HAR 2009: Lightning talks Friday. Videos of the talk at the end.

https://medium.com/@donhopkins/har-2009-lightning-talk-trans...

How to Choose with Pie Menus — March 1988.

https://medium.com/@donhopkins/how-to-choose-with-pie-menus-...

BAYCHI October Meeting Report: Natural Selection: The Evolution of Pie Menus, October 13, 1998.

https://medium.com/@donhopkins/baychi-october-meeting-report...

The Sims Pie Menus. The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo.

https://medium.com/@donhopkins/the-sims-pie-menus-49ca02a74d...

The Design and Implementation of Pie Menus. They’re Fast, Easy, and Self-Revealing. Originally published in Dr. Dobb’s Journal, Dec. 1991.

https://medium.com/@donhopkins/the-design-and-implementation...

Gesture Space.

https://medium.com/@donhopkins/gesture-space-842e3cdc7102

Empowered Pie Menu Performance at CHI’90, and Other Weird Stuff. A live performance of pie menus, the PSIBER Space Deck and the Pseudo Scientific Visualizer at the CHI’90 Empowered show. And other weird stuff inspired by Craig Hubley’s sound advice and vision that it’s possible to empower every user to play around and be an artist with their computer.

https://medium.com/@donhopkins/empowered-pie-menu-performanc...

OLPC Sugar Pie Menu Discussion Excerpts from the discussion on the OLPC Sugar developer discussion list about pie menus for PyGTK and OLPC Sugar.

https://medium.com/@donhopkins/olpc-sugar-pie-menu-discussio...

Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser. By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland.

https://medium.com/@donhopkins/designing-to-facilitate-brows...

Pie Menu FUD and Misconceptions. Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.

https://medium.com/@donhopkins/pie-menu-fud-and-misconceptio...

The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989. Written by Don Hopkins, October 1989. University of Maryland Human-Computer Interaction Lab, Computer Science Department, College Park, Maryland 20742.

https://medium.com/@donhopkins/the-shape-of-psiber-space-oct...

The Amazing Shneiderman. Sung to the tune of “Spiderman”, with apologies to Paul Francis Webster and Robert “Bob” Harris, and with respect to Ben Shneiderman.

https://medium.com/@donhopkins/the-amazing-schneiderman-9df9...

And finally this has absolutely nothing to do with pie menus, except for the shape of a pizza pie:

The Story of Sun Microsystems PizzaTool. How I accidentally ordered my first pizza over the internet.

https://medium.com/@donhopkins/the-story-of-sun-microsystems...


Here is one more example [0, 1] of a "pie menu" used in the GUI of OpenOrienteering Mapper [2] (a cartographic DTP app for orienteering [3]) since 2012.

P.S.: I'm one of its contributors ;-)

[0] http://www.openorienteering.org/assets/2012/mapper4_screensh...

[1] http://www.openorienteering.org/news/2012/openorienteering-m...

[2] http://github.com/openorienteering/mapper

[3] http://orienteering.org/resources/mapping/


More examples of things like pie menus:

- Messagease, which I've been using since the Palm Pilot days (http://www.exideas.com/ME/index.php), is pie-menu-like

- Inspired by pie menus and Messagease, I experimented with a touch-screen native calculator (instead of a skeuomorphic one) back in the Maemo days: https://wiki.maemo.org/Ejpi

cross-posting from reddit: https://www.reddit.com/r/programming/comments/8k9ylk/pie_men...


Oh wow, who remembers when everybody was carrying Palm Pilots around, beaming their contacts to each other over IR! It was such a "thing" back in its day.

David Levitt and I developed a Palm app called "ConnectedTV": a universal remote control integrated with a personalized TV guide.

https://cdn-images-1.medium.com/max/600/1*wJL5642D03MaoyBlni...

Stroke replaces Poke: ConnectedTV was designed so you can use it with one hand, without a stylus, with your thumb or finger. That's because you always lose your stylus in the dark living room couch cushions. And also because when you're watching TV in the dark, you usually need your other hand for something else like holding a beer or eating popcorn or whatever (perhaps holding another Palm running PalmJoint).

https://cdn-images-1.medium.com/max/600/1*C8_ZNQRAaTUuwYssmN...

It had "Touch Tuning" to change the TV channel by touching the name of a TV program, and "finger pies" for stroking out up to four different commands plus tapping on any button or TV program.

"Touch Tuning" with ConnectedTV is like speed dialing with the remote: you could forget all those channel numbers, instead you just touch the name of the show you want to watch, and ConnectedTV sent the numbers to change the channel.

ConnectedTV also featured "Finger Pies" which enable you to quickly and reliably select several different commands from one button by stroking in different directions.

You can stroke the buttons with your finger, to invoke different commands in different mnemonic directions.

For example: stroke left and right to page to the previous and next program; stroke up to change the channel to the current program ("send to TV"); stroke down to read more about the current program ("show me more"); stroke the ratings button up ("thumbs up") to add a program to your favorites list; stroke it down ("thumbs down") to add it to your bad programs filter.

ConnectedTV was indispensable if you had hundreds of digital cable or satellite channels, because you can filter out the channels and shows you don't like, and mark your favorites so they're easy to find whenever they're on.

https://www.pcmag.com/article2/0,2817,770400,00.asp

https://www.facebook.com/note.php?note_id=106220169912


Map editing is an excellent application of pie menus, to save you the many trips back and forth to a tool palette.

When working on the SimCity user interface, whenever I considered popping up a window or dialog on top of the map, I thought about how many acres of prime real estate I was covering up and hiding from the user. Or how many miles I was forcing the user to move their mouse back and forth between the map and the toolbar. ;)


DonHopkins:

My first encounter with the concept was "Secret of Mana"

One more example for your catalog: http://tvtropes.org/pmwiki/pmwiki.php/Main/RingMenu

Not exactly the same since the input was a directional pad instead of a mouse, but close enough...


Speaking of menus: on my first attempt to favourite your comment, I'd instead flagged it in error...


Reminds me of one of Microsoft's first Dynamic HTML demos:

There were two buttons, one labeled "Our Web Site", the other labeled "Our Competitor's Web Site".

When you moved the mouse over the "Our Competitor's Web Site" button, it would quickly slide out from under your cursor before you could click it!

Then when you stopped moving your mouse, the "Our Web Site" button would slyly slide right underneath your mouse!

Dammit Microsoft!!! ;)


Back to pie menus: I come across them frequently, as Obreey's Pocketbook uses them. That's an ebook reader for Android and iOS. Screenshot here:

http://obreey-products.com/projects/mobile-web/pocketbook-en


Interesting that Steve Jobs was not a fan. Didn’t the original iPod use a physical pie menu?

Edit: oh man, I totally misunderstood the distinction. I've actually never thought about these menus before... but now, after this post, I saw one within five minutes in Fortnite!


It had a little round rocker switch that acted kind of like a d-pad, but didn't turn, I think.

But maybe there was a model of iPod or some other device that had a knob you could twist a little bit in either direction, but that didn't spin -- I can't quite remember.

What I do have terrible memories of is the misguided cocaine-addled skeuomorphism in Apple's Quicktime player, which earned its eternal place in the User Interface Hall of Shame.

http://hallofshame.gp.co.at/qtime.htm

>the quicktime 4.0 player contains many examples of how the software must adopt the limitations of the physical device it is based on, but the first example the user is likely to discover is the volume control. since a real-world hand-held electronic device typically employs a thumbwheel to control the volume, the designers concluded that it would work just as well in a software application. what the designers failed to realize is that a thumbwheel is designed to be operated by a thumb, not a mouse. watching new users try to adjust the volume can be a painful experience. the user invariably tries to carefully place the cursor at the bottom of the exposed portion of the control, then drags it to the top of the control and releases, then carefully positions the cursor again at the bottom of the control, drags upward, and well, you get the picture.

http://hallofshame.gp.co.at/images/qtime/qtvol.gif


The iPad stroking gestures are kind of like pie menus in how you use them, but unlike pie menus, they aren't "self revealing": they have no way to show you what gestures are possible and what they do (i.e. a pop-up menu).

But there's another fundamental difference between pie menus and gestures, something I was trying to get at by coining the term "gesture space", defined here (I'd love to know if other people have come up with a better term or more rigorous definition):

https://medium.com/@donhopkins/gesture-space-842e3cdc7102

>Gesture Space: The space of all possible gestures, between touching the screen / pressing the button, moving along an arbitrary path (or not, in the case of a tap), and lifting your finger / releasing the button. It gets a lot more complex with multi touch gestures, but it’s the same basic idea, just multiple gestures in parallel.

Excerpt from OLPC Sugar Discussion about Pie Menus:

https://medium.com/@donhopkins/olpc-sugar-pie-menu-discussio...

>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.

>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
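The angle-only selection rule is tiny to implement, which is part of its charm. A sketch (hypothetical names; item 0 straight up, slices centered on their directions, with a small dead zone in the middle for cancelling):

  // Returns the selected item index, or null if the cursor is in the
  // center dead zone (cancel). Only the direction matters, never the path.
  function pieSelection(center: { x: number; y: number },
                        cursor: { x: number; y: number },
                        nItems: number, deadZone = 8): number | null {
    const dx = cursor.x - center.x, dy = cursor.y - center.y;
    if (Math.hypot(dx, dy) < deadZone) return null;
    const slice = (2 * Math.PI) / nItems;
    let theta = Math.atan2(dx, -dy); // 0 = straight up, clockwise positive
    if (theta < 0) theta += 2 * Math.PI;
    return Math.round(theta / slice) % nItems;
  }

Since every direction maps to some item, and the selection can always be changed by moving somewhere else before releasing, there's no such thing as a syntax error.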

Excerpts from some Hacker News discussions:

https://news.ycombinator.com/item?id=16615023

>Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being "Self Revealing" [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.

>They also provide the ability of "Reselection" [6], which means that as you're making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.

>Compared to typical gesture recognition systems, like Palm's graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and they only recognize well formed gestures.

>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so "2" and "Z" are easily confused, while many other possible gestures are unused and wasted).

>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There's a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.

>Pie menus also support "Rehearsal" [7] -- the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it's not rehearsal.

>Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pop up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.

https://news.ycombinator.com/item?id=7262849

>Pie menus completely saturate the entire possible gesture space with usable and accessible commands: there is no such thing as a syntax error, and you can always correct any gesture to select what you want, no matter how bad it started out, or cancel the menu, by moving around to the desired item or back to the center to cancel.

>Handwriting and gesture recognition does not have this property, and it can be quite frustrating because you can't correct or cancel mistakes, and dangerous because mistakes can be misinterpreted as the wrong command. Most gestures are syntax errors. Blind gesture recognition doesn't have a good way to prompt and train you with the possible gestures, which only cover a tiny fraction of the possible gesture space. All the rest of the space of possible gestures is wasted, and interpreted as a syntax error (or worse, misinterpreted as the wrong gesture), instead of enabling the user to correct mistakes and reselect different gestures.

>Even "fuzzy matching" of gestures trades off gestural precision with making it even harder to cancel or correct a gesture, without accidentally being misinterpreted as the wrong gesture. That's not the kind of an interface you would want to use in a mission critical application such as a car or airplane.


Pretty much all the recent apps I'm involved in use pie menus. They are generally very well received. Example: https://www.bigburgh.com


Probably not the sort of software the author has had a chance to look at (beyond, perhaps, Monster Hunter) but these are very common and often configurable in games these days. There are games (Crysis 2, I think?) where Pie vs Linear is a configurable preference. In a somewhat different but also gamey direction, there's also https://www.amazon.com/Razer-Naga-MOBA-Gaming-Mouse/dp/B006W...


Hey Don, could you please change Neuman into 'Newman'?


Oops, good catch! So it's not like Alfred E... Sorry about that, chief. Fixed.

It took me a while to learn to spell Shneiderman without the extra c. It helps to have a good mnemonic. Newman is new, and Wiseman is wise! And Weiser is wiser, but spelled wrong.

I heard a funny story that Donald Michie once overheard his secretary telling someone on the phone how to pronounce his name (in a Scottish accent): "It's Donald, as in Duck, and Michie, as in Mouse." He was so pissed he refused to speak to her for a month! ;)

https://en.wikipedia.org/wiki/Donald_Michie


I still have the pink book sitting on my desk so that was a pretty easy catch for me :)

Thank you for making the change and thank you even more for posting all this. What a mess with that patent situation, one more strike against software patents. I really can't stand them!

Poor Donald! I've seen my name spelled in at least 20 different ways, it no longer bothers me. But this also leads to funny situations: when a guy called Jake Matthews was in a group that I was in and I was sure they meant me...


Here’s a demo of HyperTIES with pop-out embedded menus:

HCIL Demo - HyperTIES Browsing: Demo of NeWS based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.

https://www.youtube.com/watch?v=fZi4gUjaGAM

A funny story about the demo that has the photo of the three Sun founders whose heads puff up when you point at them:

When you pointed at a head, it would swell up, and when you pressed the button, it would shrink back down until you released the button again.

HyperTIES had a feature that you could click or press and hold on the page background, and it would blink or highlight ALL of the links on the page, either by inverting the brightness of text buttons, or by popping up all the cookie-cut-out picture targets (we called them “embedded menus”) at the same time, which could be quite dramatic with the three Sun founders!

Kind of like what they call “Big Head Mode” these days! https://www.giantbomb.com/big-head-mode/3015-403/

I had a Sun workstation set up on the show floor at Educom in October 1988, and I was giving a rotating demo of NeWS, pie menus, Emacs, and HyperTIES to anyone who happened to walk by. (That was when Steve Jobs came by, saw the demo, and jumped up and down shouting “That sucks! That sucks! Wow, that’s neat. That sucks!”)

The best part of the demo was when I demonstrated popping up all the heads of the Sun founders at once, by holding the optical mouse up to my mouth, and blowing and sucking into the mouse while secretly pressing and releasing the button, so it looked like I was inflating their heads!

One other weird guy hung around through a couple of demos, and by the time I got back around to the Emacs demo, he finally said “Hey, I used to use Emacs on ITS!” I said “Wow, cool! So did I! What was your user name?” and he said “WNJ”.

It turns out that I had been giving an Emacs demo to Bill Joy all that time, then popping his head up and down by blowing and sucking into a Sun optical mouse, without even recognizing him, because he had shaved his beard!

He really blindsided me with that comment about using Emacs, because I always thought he was more of a vi guy. ;)


The example of Neverwinter Nights was already mentioned, but my favorite example of radial menus is from a similar game, The Temple of Elemental Evil, also based on DnD but this time fully turn based. Beautiful menus, entirely text based if I remember correctly.


I'm very impressed by Simon Schneegans' work on Gnome-Pie:

http://simmesimme.github.io/gnome-pie.html

And especially his delightful thesis work:

Trace-Menu:

https://vimeo.com/51073078

I really love how the little nubs preview the structure of the sub-menus, and how you can roll back to the parent menu because it reserves a slice in the sub-menu to go back, so you don't need to use another mouse button or shift key to browse the menus.
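For what it's worth, here's a little Python sketch (my own guess at the geometry, not Schneegans' actual code) of how you might lay out a sub-menu that reserves one slice pointing back at the parent, so rolling straight back returns you to the parent menu:

    def layout_submenu(items, entry_angle):
        """Divide the circle into len(items)+1 equal slices, reserving
        one '<back>' slice centered on entry_angle (the direction from
        the sub-menu's center back toward the parent menu's center).
        Returns (label, center_angle_in_degrees) pairs."""
        labels = ["<back>"] + list(items)
        step = 360.0 / len(labels)
        return [(label, (entry_angle + i * step) % 360.0)
                for i, label in enumerate(labels)]

    # Example: entered moving east, so the parent lies to the west (180).
    print(layout_submenu(["cut", "copy", "paste"], entry_angle=180.0))
    # [('<back>', 180.0), ('cut', 270.0), ('copy', 0.0), ('paste', 90.0)]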

Coral-Menu:

https://vimeo.com/51072812

That looks like a nice visual representation with a way to easily browse all around the tree, into and out of the submenus without clicking! I can't tell from the video if it's based on a click or a timeout. But it looks like it supports browsing and reselection and correcting errors pretty well! (That would be something interesting to measure!)

There's another useful law related to Fitts's law that applies to situations like this, called the Steering Law:

https://en.wikipedia.org/wiki/Steering_law

>The steering law in human–computer interaction and ergonomics is a predictive model of human movement that describes the time required to navigate, or steer, through a 2-dimensional tunnel. The tunnel can be thought of as a path or trajectory on a plane that has an associated thickness or width, where the width can vary along the tunnel. The goal of a steering task is to navigate from one end of the tunnel to the other as quickly as possible, without touching the boundaries of the tunnel. A real-world example that approximates this task is driving a car down a road that may have twists and turns, where the car must navigate the road as quickly as possible without touching the sides of the road. The steering law predicts both the instantaneous speed at which we may navigate the tunnel, and the total time required to navigate the entire tunnel.

>The steering law has been independently discovered and studied three times (Rashevsky, 1959; Drury, 1971; Accot and Zhai, 1997). Its most recent discovery has been within the human–computer interaction community, which has resulted in the most general mathematical formulation of the law.
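To make that concrete, here's a quick Python sketch of the steering law's prediction, T = a + b * integral of ds / W(s), where W(s) is the tunnel width at arc length s, and a and b are empirically fit constants (the values below are just placeholders, not real measured constants):

    import math

    def steering_time(path, widths, a=0.0, b=0.1):
        """path: list of (x, y) points along the tunnel's centerline;
        widths: tunnel width at each of those points.
        Numerically integrates ds / W(s) with the trapezoidal rule and
        returns the predicted steering time T = a + b * integral."""
        difficulty = 0.0
        for (x0, y0), (x1, y1), w0, w1 in zip(path, path[1:], widths, widths[1:]):
            ds = math.hypot(x1 - x0, y1 - y0)
            difficulty += ds * 0.5 * (1.0 / w0 + 1.0 / w1)
        return a + b * difficulty

    # A straight 400-pixel tunnel, 20 pixels wide: difficulty = 400/20 = 20
    print(steering_time([(0, 0), (400, 0)], [20.0, 20.0]))  # 0.0 + 0.1 * 20 = 2.0

For a straight tunnel of constant width W and length L, the integral reduces to L/W, giving the familiar simple form T = a + b(L/W).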

Also here's some interesting stuff about incompatibility with Wayland, and rewriting Gnome-Pie as an extension to the Gnome shell:

http://simmesimme.github.io/news/2017/07/09/gnome-pie-071


Somebody raised an interesting point on reddit about patent trolls:

BobTheSCV> Neverwinter Nights also implemented them, and they worked very well. I had just assumed it was patent trolls or something that kept them from being widely adopted.

I replied:

You are absolutely correct about the patent trolls!

Bill Buxton at Alias and his marketing team spread a bunch of inaccurate FUD about their "marking menu patent", which I accidentally discovered and tried to correct and get him to stop doing decades ago, but he refused, and continued to spread FUD.

So Alias kept advertising their "patented marking menus" for DECADES, purposefully and successfully discouraging their competitor, 3D Studio Max, AND many other developers of free and proprietary apps as collateral damage, from adopting them.

When I asked Buxton about the "marking menu patent" before it was granted, he lied point blank to me that there was no "marking menu patent", so I couldn't prove to Kinetix that it was OK to use them, or contact the patent office and inform them about the mistakes in their claims about prior art, and the fact that the "overflow" technique they were claiming in the patent was obvious.

The whole story is here:

Pie Menu FUD and Misconceptions: Dispelling the fear, uncertainty, doubt and misconceptions about pie menus.

https://medium.com/@donhopkins/pie-menu-fud-and-misconceptio...

Some excerpts:

>There is a financial and institutional incentive to be lazy about researching and less than honest in reporting and describing prior art, in the hopes that it will slip by the patent examiners, which it very often does.

>Unfortunately they were able to successfully deceive the patent reviewers, even though the patent references the Dr. Dobb’s Journal article which clearly describes how pie menu selection and mouse ahead work, contradicting the incorrect claims in the patent. It’s sad that this kind of deception and patent trolling is all too common in the industry, and it causes so many problems.

Even today, long after the patent has expired, Autodesk marketing brochures continue to spread FUD to scare other people away from using marking menus, by bragging that “Patented marking menus let you use context-sensitive gestures to select commands.”

A snapshot of Alias's claim about "Patented marking menus" from one of their brochures that they are still distributing, even years after their bad patent has expired:

https://cdn-images-1.medium.com/max/450/1*3C79dFnlhN__OJ3XmE...

>"Marking Menus: Quickly select commands without looking away from the design. Patented marking menus let you use context-sensitive gestures to select commands."

http://images.autodesk.com/adsk/files/aliasdesign10_detail_b...

>The Long Tail Consequences of Bad Patents and FUD

>I attended the Computer Game Developers Conference in the late '90s, while I was working at Maxis on The Sims. Since we were using 3D Studio Max, I stopped by the Kinetix booth on the trade show floor, and asked them for some advice integrating my existing ActiveX pie menus into their 3D editing tool.

>They told me that Alias had “marking menus” which were like pie menus, and that Kinetix’s customers had been requesting that feature, but since Alias had patented marking menus, they were afraid to use pie menus or anything resembling them for fear of being sued for patent infringement.

>I told them that sounded like bullshit since there was plenty of prior art, so Alias couldn’t get a legitimate patent on “marking menus”.

>The guy from Kinetix told me that if I didn’t believe him, I should walk across the aisle and ask the people at the Alias booth. So I did.

>When I asked one of the Alias sales people if their “marking menus” were patented, he immediately blurted out “of course they are!” So I showed him the ActiveX pie menus on my laptop, and told him that I needed to get in touch with their legal department because they had patented something that I had been working on for many years, and had used in several published products, including The Sims, and I didn’t want them to sue me or EA for patent infringement.

>When I tried to pin down the Alias marketing representative about what exactly it was that Alias had patented, he started weaseling and changing his story several times. He finally told me that Bill Buxton was the one who invented marking menus, that he was the one behind the patent, that he was the senior user interface researcher at SGI/Alias, and that I should talk to him. He never mentioned Gordon Kurtenbach, only Bill Buxton.

>So I called Bill Buxton at Alias, who stonewalled and claimed that there was no patent on marking menus (which turned out to be false, because he was just being coy for legal reasons). He said he felt insulted that I would think he would patent something that we both knew very well was covered by prior art.

At the time I didn't know the term, but that's what we now call "gaslighting": https://en.wikipedia.org/wiki/Gaslighting

Gee, who do we all know who lies and then tries to turn it all around to blame the person who they bullied, and then tries to play the victim themselves? https://en.wikipedia.org/wiki/Donald_Trump

Gordon Kurtenbach, who did the work and got the patent that Alias marketing people were bragging about in Bill Buxton's name, agrees:

Gordon> Don, I read and understand your sequence of events. Thanks. It sounds like it was super frustrating, to put it mildly. Also, I know, having read dozens of patents, that patents are the most obtuse and maddening things to read. And yes, the patent lawyers will make the claims as broad as the patent office will allow. So you were right to be concerned. Clearly, marketing is marketing, and love to say in-precise things like “patented marking menus”.

Gordon> At the time Bill or I could have said to you “off the record, its ok, just don’t use the radial/linear combo”. I think this was what Bill was trying to say when he said “there’s no patent on marking menus”. That was factually true. However, given that Max was the main rival, we didn’t want to do them any favors. So those were the circumstances that lead to those events.

What's ironic is that Autodesk now owns both Alias and 3D Studio Max. Gordon confirmed that Alias's FUD did indeed discourage Kinetix from implementing marking menus or pie menus, which were not actually covered by the patent:

Gordon> After Autodesk acquired Alias, I talked to the manager who was interested in getting pie menus in Max. Yes, he said he that the Alias patents discouraged them from implementing pie menus but they didn’t understand the patents in any detail. Had you at the time said “as long we don’t use the overflows we are not infringing” that would have been fine. I remember at the time thinking “they never read the patent claims”.

Don> The 3D Studio Max developers heard about the Alias marking menu patent from Alias marketing long before I heard of it from them on the trade show floor.

Don> The reason I didn’t know the patent only covered overflows was that I had never seen the patent, of course. And when I asked Buxton about it, he lied to me that “there is no marking menu patent”. He was trying to be coy by pretending he didn’t understand which patent I was talking about, but his intent was to deceive and obfuscate in order to do as much harm to Kinetix 3D Studio Max users as possible, and unfortunately he succeeded at his unethical goal.

What's even worse is that in Buxton's zeal to attack 3D Studio Max users, he also attacked users of free software tools like Blender.

>The Alias Marking Menu Patent Discouraged the Open Source Blender Community from Using Pie Menus for Decades

>Here is another example of how that long-term marketing FUD succeeded in holding back progress: the Blender community was discussing when the marking menu patent would expire, in anticipation of when they might finally be able to use marking menus in Blender (even though it has always been fine to use pie menus).

https://blenderartists.org/t/when-will-marking-menu-patent-e...

>As the following discussion shows, there is a lot of purposefully sown confusion and misunderstanding about the difference between marking menus and pie menus, and what exactly is patented, because of the inconsistent and inaccurate definitions and mistakes in the papers and patents and Alias’s marketing FUD:

>"Hi. In a recently closed topic regarding pie menus, LiquidApe said that marking menus are a patent of Autodesk, a patent that would expire shortly. The question is: When ? When could marking menus be usable in Blender ? I couldn’t find any info on internet, mabie some of you know."

>The good news: Decades late due to patents and FUD, pie menus have finally come to 3D Studio Max just recently (January 2018)!

Radially - Pie menu editor for 3ds Max: https://www.youtube.com/watch?v=sjLYmobb8vI


Thank you for the writeup.

Possibly interesting links:

https://www.autodeskresearch.com/people/gord-kurtenbach#pate...

Display and control of menus with radial and linear portions https://patents.google.com/patent/US5926178

Methods and system of controlling menus with radial and linear portions https://patents.google.com/patent/US5689667A

Method and apparatus for producing, controlling and displaying menus https://patents.google.com/patent/US6618063B1


Yes, those are the patents that Alias marketing people and product advertisement brochures were referring to as the "marking menu patents".

Here is the brochure they are still distributing to this day:

http://images.autodesk.com/adsk/files/aliasdesign10_detail_b...

Here is an illustration from that brochure:

https://cdn-images-1.medium.com/max/450/1*3C79dFnlhN__OJ3XmE...

It says "Marking Menus. Quickly select commands without looking away from the design. Patented marking menus let you use context-sensitive gestures to select commands."

What Bill Buxton told me back then (and still tells me now) directly contradicts the claim in that brochure, and the claims that Alias marketing people were making.

Alias marketing claimed point blank that "of course marking menus are patented!"

That brochure (and many others) claim point blank that "Patented marking menus let you use context-sensitive gestures to select commands."

Bill Buxton claimed back then (and still claims now) point blank that "marking menus are not patented", and that I would be an idiot to believe Alias marketing.

So there is an obvious glaring contradiction here.

At the time he made that claim to me, he understood very well what I was asking, because I explained quite clearly that I meant any patent relating to marking menus in general. I did not believe it was possible for marking menus to be patented, and Alias marketing was lying and invoking his name to support their false claims.

I thought there was some patent related to marking menus that they were talking about, and exaggerating its claims to spread FUD. And there was. And they were.

But he refused to admit the existence of any "marking menu patents" at that time by any definition of the term, even though we all now know that those patents exist, and Alias marketing brochures specifically refer to "patented marking menus".

My problem is that Alias marketing was spreading FUD about the patents, and when I asked them about it, they specifically mentioned Bill Buxton's name as the person who patented them. Yet when I called Buxton within minutes of hearing that ridiculous claim to ask him about it, he denied there was any such thing as a marking menu patent by any definition, and he got quite abusive and demeaning, calling me an idiot for believing them, and claiming to be insulted that I'd think he'd patent marking menus, like his marketing people told me he did. And he flat out refused to ask his marketing people to stop lying in his name on his behalf.

My problem is not Bill Buxton's or Alias marketing's obvious insincerity and lack of credibility (that's their own problem). It's the intention and result of their lies that concerns me: their FUD was meant to prevent, and succeeded in preventing, Kinetix (and others) from adopting marking menus or pie menus for 3D Studio Max.

And there is evidence that it also discouraged open source applications like Blender from using marking or pie menus, when they had nothing to be afraid of.

https://blenderartists.org/t/when-will-marking-menu-patent-e...

Gordon Kurtenbach, who now works at Autodesk, which now ironically owns both Alias and 3D Studio Max, confirmed those facts to me:

>Don, I read and understand your sequence of events. Thanks. It sounds like it was super frustrating, to put it mildly. Also, I know, having read dozens of patents, that patents are the most obtuse and maddening things to read. And yes, the patent lawyers will make the claims as broad as the patent office will allow. So you were right to be concerned. Clearly, marketing is marketing, and love to say in-precise things like “patented marking menus”.

>At the time Bill or I could have said to you “off the record, its ok, just don’t use the radial/linear combo”. I think this was what Bill was trying to say when he said “there’s no patent on marking menus”. That was factually true. However, given that Max was the main rival, we didn’t want to do them any favors. So those were the circumstances that lead to those events.

>After Autodesk acquired Alias, I talked to the manager who was interested in getting pie menus in Max. Yes, he said he that the Alias patents discouraged them from implementing pie menus but they didn’t understand the patents in any detail. Had you at the time said “as long we don’t use the overflows we are not infringing” that would have been fine. I remember at the time thinking “they never read the patent claims”.

But as I said before, the 3D Studio Max developers at Kinetix heard about the Alias marking menu patent from Alias marketing, and made the decision to ignore their users and not support pie menus, long before I heard of it from them on the trade show floor, or read the patent. So of course I couldn't have said that at the time.


Given that, is the caption "Emulating Alias Marketing Menus" a Freudian slip?


You got me there!



