
Apple studied this back in the 1980s; it's why the menu bar is at the top of the screen. What they found was that people could pick the menu item they wanted much more quickly and accurately, because they didn't have to be as precise in their targeting when the menu was at the top of the screen as when it was on the window.

If you've grown up with poor copies of the Mac UI (like Windows) then you're used to targeting menu items on a window, and you don't realize that you're slower at it.

But you are.

It's a lot like the one button mouse issue. People think it's worse when they've always had to suffer with 2 button mice.



Apple studied this on a 512x342 1-bit display, on a machine architecturally incapable of running more than one app at a time. It's just not relevant anymore, sorry.

The ease of aiming is a real effect, but lost on a 1920x1080 display. And the usability disaster of trying to figure out which of the three "main" windows is in the foreground (and thus owns the menu) is something that was not studied, nor accounted for by Apple in the 80's.


The effect is not lost at all. It's Fitts's Law: http://en.wikipedia.org/wiki/Fitts%27s_law -- the Mile High Menu Bar is still a mile high on a 1920x1080 display. http://joelonsoftware.com/uibook/chapters/fog0000000063.html
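For the curious, the quantitative claim is easy to play with. Here's a rough sketch in Python (the speed constants in Fitts's law are device-dependent, so this only compares the index of difficulty; the 20 px menu-item height and the distances are made-up examples):

```python
import math

def fitts_id(distance_px, width_px):
    """Shannon form of Fitts's index of difficulty, in bits:
    ID = log2(D/W + 1). Higher ID means a slower, harder target acquisition."""
    return math.log2(distance_px / width_px + 1)

# A 20 px tall in-window menu item gets harder as it gets farther away...
for d in (100, 400, 1600):
    print(f"in-window menu, D={d:>4}px: ID = {fitts_id(d, 20):.2f} bits")

# ...while a screen-edge menu bar's effective height is enormous (you
# can't overshoot the edge), so even a long throw has a tiny ID.
print(f"edge menu bar, D=1600px: ID = {fitts_id(1600, 10000):.2f} bits")
```

The point being: distance only enters logarithmically, and the effectively unbounded target height of an edge-pinned bar swamps it.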


Sure, though in the interests of full disclosure, on a large screen the menu bar is actually "a mile high, and a mile away". Still, the global menu on the Mac is probably at least a wash for all but the most expansive desktops.

With Unity, though, the global menu bar on a large desktop is "a mile high, a mile away, and invisible". You get to guess where you should be aiming.

Are you really arguing that hitting a target you can't see is easier than a target you can see?

All that said, I do prefer Unity over other options on my netbook. The savings in real estate is well worth the marginal cost in usability. I don't use the menu bar much, anyway.

That last bit is probably Unity's saving grace: most apps used by most people are no longer designed with the menus as the primary interface.

Since they're less used, it makes sense to optimize screen use by tucking them away at the top of the screen.

I wouldn't at all mind if the menu bar had an auto-hide mode, though. Actually, what I'd like is a partial auto-hide. Tuck away over to the top left, with only the window widgets and the indicators showing. I'd gain a line or two in any editor or term on the right side of the screen.


No, it's not "a mile away", it's a few inches away. You're performing thought experiments instead of actually measuring, and that's a no-no. Your intuition is no good; the scientific method is. If you can prove by a controlled experiment that the menu bar is hard to hit because it is as far away as it is tall, then by all means publish your work in peer-reviewed journals. A lot of professional HCI researchers will be astonished, and you will be very famous for proving something so counter-intuitive that it breaks Fitts's law and flies in the face of all the other studies that have been done, and the hands-on experience of millions of Mac users.

To test your intuition (by quoting one of Tog's favorite puzzles): Name the five points on the screen that are the easiest to hit with the mouse.


It's a mile and four intervening windows all activated via focus-follows-mouse away.

On Macs, fer love of Pete, the Mile High Menu ... is on the other display.

Menus just f@cking suck anyway. I've canned my browser menus via Vimperator (on Firefox / Iceweasel). Sure, I'm a power user and I know what I want to do and I've got finger memory five miles deep (plus command completion). So suck on that teat.

Fitts's Law optimizes for one case: mouse navigation. Sure, it's nice to have a big fat landing zone when you need it. But often you don't, and the optimization unambiguously and indisputably breaks numerous other optimizations, which frankly I care a whole f@ck of a lot more about.

We're talking about desktop (or large laptop) displays here. For tablets and small-factor handhelds, there are other considerations. Which is why UI design is complicated, and a task and discipline worthy of research and nuanced understanding.

The 1980s were 30 years ago. Go ahead and pop up a 512x342 window on your desktop. On my not-extravagant dual-head display, I can stack those up 6.5 across and three high. With window decorations.

Y'know, I credit Jobs with some good stuff, and he was nothing if not persistent in believing what he believed in. But some things really have to go.


Actually, with pie menus, it's quite easy to hit a target you can't see, because you can "mouse ahead" and be very sure of the direction you're moving, enough that you can reliably select between 8 different directions without looking at the screen. With four items it's almost impossible to make a mistake, unless you're holding the mouse sideways or upside down.


And no, I'm not arguing about Unity's invisible menu bar, or whatever it has. I haven't used Unity, and I have no plans on using Unity, because all of the X11 based Unix and Linux desktops have always sucked, and they always will.


The bar is a mile high, but each menu column is still only an inch wide -- so distance matters, even if you have mouse acceleration so that a large vertical movement is painless.


Even with inch wide menu bar items, the fact that the menu bar is a mile high still completely overwhelms the cost of the distance of moving the cursor to the menu bar. And you can move back and forth while still moving up, to switch between menu bar items without losing the menu bar target. Do the math. Do the experiments. Measure the results. Read the papers. Thought experiments are no good.


"Do the math. Do the experiments. Measure the results. Read the papers. Thought experiments are no good."

Use the software.


Fitts's law (often cited as Fitts' law) is a model of human movement, primarily used in human-computer interaction and ergonomics, that predicts that the time required to rapidly move to a target area is a function of the distance to the target and the size of the target.

How does that do anything but confirm the GP's point, that it's dumb to place a menu bar potentially thousands of pixels away from the window it applies to?


Because Fitts's law relates both the target size and the target distance to the speed and accuracy of hitting the target. Not just the target distance. You can move the mouse very quickly to cover the large distance, without worrying about the accuracy, thus reducing the negative contribution of the distance, because the target size is practically infinite. The target area of the menu bar at the top of the screen extends infinitely up above the screen, because when your mouse hits the edge, it stops moving and stays in the target. Try it yourself. It's EXTREMELY easy to move the cursor to the top of the screen, no matter how far away it is. The distance doesn't matter, because the target size overwhelms it. That's what is meant by the "Mile High Menu Bar" -- calling it a mile high is an understatement!

This is also why pie menus have improved time and error rates over linear menus: linear menu targets are very small, and increasing distances away from the cursor, but the pie menu items all start out adjacent to the cursor, and extend all the way out to the screen edge, so you can trade off increased distance of movement for increased target size. The target area of the pie-slice shaped wedges get wider and wider as you move out further away from the menu center. (I don't mean they dynamically change size as you move, I mean that as you move out, you're in a much larger part of the slice. So with a four-item pie menu, each target area gets about 1/4 of the screen real estate, and you can keep moving the mouse even further when you hit the edge and still be in the same target slice.) Pie menus also minimize the distance, but around the center, the targets are at their smallest, but you can always move out further to get more "leverage" and directional accuracy.
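If it helps, the geometry of that trade-off is just arc length. A little sketch (the item counts and radii are arbitrary examples):

```python
import math

def slice_width(n_items, radius_px):
    """Arc length of one slice of an n-item pie menu at a given radius:
    the usable target keeps widening as you move away from the center."""
    return radius_px * (2 * math.pi / n_items)

# An 8-item pie menu: near the center the slices are narrow,
# but a longer stroke buys you an ever-wider target.
for r in (10, 50, 200):
    print(f"r={r:>3}px: slice is about {slice_width(8, r):.0f}px wide")
```

So with 8 items a 200 px stroke gives you a target over 150 px wide, and with 4 items twice that; that's the "leverage" being described.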


> How does that do anything but confirm the GP's point, that it's dumb to place a menu bar potentially thousands of pixels away from the window it applies to?

That menu bar is effectively a billion pixels tall. You can throw the mouse pointer to the top of the screen and only concentrate on accurate horizontal positioning, since the mouse will not leave the top edge.

Putting the menu bar at the top sells out the distance side of the function to dramatically increase the target size.
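That "effectively a billion pixels tall" claim is just the cursor-clamping behavior at the screen edge. A toy sketch (screen height and bar height are example numbers):

```python
def clamp_y(y, screen_height=1080):
    """The OS pins the cursor inside the screen, so any upward overshoot
    still lands on row 0 -- inside a menu bar that sits at the top edge."""
    return min(max(y, 0), screen_height - 1)

MENU_BAR_ROWS = range(0, 22)  # a 22 px menu bar along the top edge

# Wildly imprecise upward flicks are all absorbed by the edge:
for overshoot in (-3, -500, -10**9):
    assert clamp_y(overshoot) in MENU_BAR_ROWS
print("every overshoot still hits the menu bar")
```

Vertical precision becomes free, which is exactly why only the horizontal positioning is left to aim.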


So now that you've thrown the mouse to that easy to find top of the screen mile high menu bar, and completed your mouse action, don't you now have to find your teeny weeny window over in some far off portion of the screen and move that mouse back into the window you're actually working in?


That is correct, and it is one of the most consistent usability complaints from new OS X (and, before that, Mac OS) users.

On multiple monitors, the menu can be 1 or more monitors away from your app window. It might not just be up at the top of the screen, it might be up at the top of 1 screen over and two up.


Fitts' Law is certainly not lost on large displays: set up Exposé to use hot corners and you'll be hitting infinite-width targets hundreds of times per day.

Same for screen edges: Taskbar buttons or the tabs in Chrome on Windows are awesome, extremely easy to target, because they're on the edge of the screen. (The New Tab Button was recently made Fitts'ier as well: http://code.google.com/p/chromium/issues/detail?id=48727 )

The issue with the Menu Bar is that it's the wrong thing to be putting on the edge of the screen. The menu items are still extremely easy to hit... it's just that they're no longer useful. No need to have them any more. GUI advances such as the Ribbon or just the concept of removing useless features are leading to apps like Chrome having few menu items at all.

On modern apps, the cognitive load of the "which app is in focus?" question is the main problem, which you mentioned. But that's a ding against the usefulness of the menu bar, not Fitts' Law in general.


> Apple studied this on a 512x342 1-bit display, on a machine architecturally incapable of multitasking more than one app at a time. It's just not relevant anymore, sorry.

... how do these technology changes render their results irrelevant? The menu bar problem sits at the point where a user's ability to aim the mouse in physical space on the screen interacts with how the menu is represented in physical space on the screen. I don't see how any of the changes you've listed affect that problem, except for screen resolution, which only seems to make it worse by shrinking the physical size of click targets.


It's now more of an argument over the usefulness of the menubar, not which is easier to hit. Of course the Mac menu bar is easier to hit. The question to ask is why do we need a menubar at all?

By not having a global menubar, Windows has permitted the development of infinite-width tabs-on-top in Chrome, the Office Ribbon, getting rid of the menubar entirely in Windows Explorer, etc. It's impossible to do things like that in OS X, because we're stuck with a menubar from the 1980s.


If anything, technological changes have made it easier to hit the target, because mice no longer have physical balls that get jammed with dirt, and they are much more accurate. The kid you're replying to has probably never had to pick the black ring of scum off of the wheel inside a Mac mouse, when it stops responding to movements.


I have had to do this. Although I'm absolutely thankful that those days are over, I'm not convinced higher precision mouse control alleviates enough of the problem.


Can you explain why the one button mouse is better than the 2 button mouse? One of my pet peeves with the Mac is that my Magic Mouse takes 2 clicks and a mouse movement to open a new tab in my browser, as opposed to 1 click of the scroll wheel on a PC. (Plus I have to waste a mental thread wondering if it's going to register as a primary or secondary click.)

Usually when I find something on the Mac annoying or unusable, I blame it on industrial design. As in, 'boy, the Magic Mouse really hurts my hand after a while and the secondary click is difficult to use, but it really looks great (the finger scroll is really why I use it, fyi)' or 'boy, my MacBook cuts sharply into my wrists in a just-about-but-not-quite painful way, but damn that unibody is sleek'.

I always just assumed the 1 button mouse (specifically the magic mouse) came about because it just looks so good.


You can configure a Magic Mouse to emulate middle buttons using MagicPrefs. It's a pretty useful add-on.


I will still take the tactile feedback of real buttons.


Cmd+Click. You can even do it one handed on a trackpad. BTW that's what I do on PCs too since the third button is so unreliable across both software and hardware.


Like many of those 20-year-old interface assumptions, things like the one button mouse are more or less gone from Appleland, although they really, really want you to believe it's still there. Try two-finger tapping on a pad; on the Magic Mouse, a right click is just like a right click on a regular mouse (if you turn on right clicks, or secondary clicks, or whatever, in System Prefs).


Apple does studies for these things. They studied multi-button mice and found they slowed people down. The reason is that users clicked on the wrong button some percentage of the time.

What throws people off is that when you make a mistake with a GUI, you correct it and forget about it. You don't account for your time because your mind is focused on the task. But an independent observer, who watches you do the same task in both situations, will have a stopwatch and recognize objectively which takes longer.

In some cases -- such as people's preference for the keyboard over the mouse -- they perceive that they are faster accomplishing tasks with the keyboard. They think this because they press keys quickly, and the brain counts each key press as a bit of an accomplishment... but the stopwatch shows differently.

It's the same thing with the two-button mouse.

Trackpads have changed the situation: as a more direct form of control, two- and three-finger gestures take less cognition. And of course, millions of people prefer 2 button mice, and don't care if they lose a minute or two each day as a result.

So, while there is a reason for the choices Apple makes, those reasons can be less significant over time.

But my point was, there is always a reason, and you shouldn't try to cut new ground unless you understand the reason for the original method.


I agree that finger gestures make up for whatever I have lost from multiple buttons. But I still lose a lot of time every day trying to use the right-click on my mouse. Opening new tabs is a pain, and opening new instances of an application is a pain when I go to right-click the dock (yes, I know you can Cmd-click or Cmd-N, but that's not always an option).

edit: one last point before this thread ends: trackpads are awesome; computer interaction without one has become much more difficult for me. However, they are much too modal. It's fine to make them simple for beginning users who misclick a bunch, but throw a bone to the experienced users who want some power in their interface.


On my Windows laptop, I have configured a 1+1 click to act as a middle click. It is quite convenient: I move the pointer around by touching with one finger, and when I want to open a link, I just tap with another, which makes for a quite pleasing workflow.

Perhaps you can set up the same gesture on a Mac?


>It's a lot like the one button mouse issue. People think it's worse when they've always had to suffer with 2 button mice.

It is objectively worse. Cmd-clicking or two-finger clicking is a pain in the ass. Many things become more annoying on OS X simply because of the one-button policy.

OSX is generally nicer to use than windows, but it's hardly perfect and there are some things it just plain gets wrong.


Both of your examples are definitely worse on OS X than on basically every other OS in existence. And as proof, OS X is slowly but surely moving away from at least one of those, and I'd bet that within two or three revs it will have moved away from both.

Menus just aren't that hard to hit; otherwise, all clickable items in a program would be on a screen edge. In fact, according to Fitts's law, they should be jammed into the corners of the screen, since that's even easier to hit than a screen edge. But they aren't, because it's not that hard for anybody who's bothered to use a pointing device in the last 20 years. More important, trackpads are becoming the de facto pointing device on Apple computers, and Fitts's law works differently on a pad vs. a moving device like a mouse. The edge of the touch surface is the infinite target, not the screen. Since touch devices are not 1:1 mapped to screen area, all clickable interfaces should be at the edge of the pad relative to wherever the cursor starts.

Likewise, a second mouse button turns out to have been a great idea -- so great that decades later Apple not only supports right mouse buttons, but their default mouse ships with support for them, they've even managed to cram a touch surface into it, and their trackpad recognizes a two-finger tap as a right click. Why? Because decades into the great GUI experiment, it finally dawned on somebody that interface complexity requires more than one button; otherwise half of your interface gets buried behind a modifier key (or two or three) or a pile of menus and your sole button.

I've watched many dozens of users move to OS X, and one of the first things they ask is "why is the menu bar way the hell over there?" -- while pointing to the top of the screen (or to an entirely different monitor), usually preceded by a series of questions about how to do some function that is clearly on the menu bar. Since it's not coupled to the actual program window, they assume it has a function decoupled from the program, and don't realize that what they are looking for is there.

It's an embarrassingly repeatable user interface experiment that's left me convinced that the only reason it's still part of the OS is to differentiate OS X from Windows.

Physically decoupling a software interface from the software is almost always a bad GUI idea if you can help it. It repeatedly confuses users, particularly new users. It's like putting the steering wheel of your car in your house, and the gas pedal in your back yard shed.

Everything from MDI to full-screen apps is now slowly creeping into OS X, because time and time again it's shown that users find those alternatives more usable than the old Apple standby.

Let's stop tooting this "everything Apple does in UI is best" horn. Lots of stuff Apple does in UI is great, some of it is even the best, but these things simply are not.


It isn't the 80's anymore. People often have more than a single window visible on the screen. Refocusing to that window before hitting the menu is a waste of time, and especially aggravating and confusing if you forget.

But again, my main gripe is simply that, right or wrong, having the broad functionality of the window embedded in the window is what people are used to, and what a designer will have in his mind when putting it together, and old habits die hard.

I suspect that if you were to conduct that same experiment with current and prospective users of Unity, you would not get the same results.


If you've grown up using the poorly crafted Mac OS interface, you probably don't realize that almost everything else is slower, despite that one really fast menu access (by mouse only).

However, I can Alt-Tab to an app and bring up its main menu (and hunt through the menu) with only the keyboard. You just can't do that on the Mac. There's a reason most other desktop systems copy the much more logical Windows style instead of the illogical Mac OS style.


On the Mac, you can Cmd-Tab to an app and then hit Ctrl-F2 to navigate the menu from the keyboard: http://support.apple.com/kb/ht1343

I concede that this doesn't address other objections to a single menu bar, such as identifying the active window / application.


The reason most desktop systems copy Windows is simply because Windows is the dominant desktop OS. Similarly, most desktop GUIs ape Windows' white cursor rather than the Mac's black cursor and tile desktop icons from the left rather than the right.


Because the Windows GUI is based on intuition, and the Mac GUI is based on science?


The mile-high menu bar thing is right: a target at the side/corner of the mouse catchment area is easier to hit than a target you have to navigate to and stop inside. But this is only right because no usability metric is attached at this point.

If the mouse-button story were just that it takes some milliseconds longer to click a control when there are two, then yes, likely it does. But you go further, attaching usability claims to it, and that's where you go wrong.

For one, the one-button mouse thing was tested on people who aren't used to it. Maybe it takes a few days to get the hang of, but it's not the kind of thing where someone who knows what they're doing hits the wrong button.

And even if it were more than microseconds slower for the base physical action, it gets more done in that time. You have to set down your drink and reach for the keyboard with your other hand, and I've already right-clicked and am done. Without knowing our workloads you can't make blanket statements like that. I think almost anyone could benefit from using more complex controls, macroing, etc., but only once they have a solid understanding of the thing they're trying to do.

"Had to suffer with 2-button mice." Hilarious. Stockholm syndrome.



