So many people say this but I fundamentally disagree.
Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.
Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced. It needs to work with mouse and touch and a screenreader. And so on ad infinitum.
Relative to their complexity, applications are doing just fine today I think. (Also don't forget there were so many terribly designed applications in the 90's. It's not like everybody was even remotely following established UX guidelines.)
The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.
But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.
> Applications are harder to use, yes, but because they do so much more
I use Microsoft Office 2000, because since then, no new features have been added to Word or Excel that I care about. In fact, I couldn't even name a single feature added since then. What they did add is the ribbon instead of the toolbar, which makes it impossible to find the things you need, and a whole lot of bloat.
On modern machines, office 2000 opens faster than I can release my mouse button from clicking its icon.
That is to say, I entirely disagree with your statement.
Me too! With the exception of Outlook, I stick with Office 2003. I'm so much faster and more productive with it.
The problem with the ribbon is I find myself constantly having to click back and forth between different tabs. It's annoying, and takes twice as many clicks to get things done. Microsoft lost sight of the purpose of a toolbar: to make commonly used functions ONE click away.
When the rest of the industry followed suit and emulated them, the result was a tragic loss of precious vertical pixel space for the content I actually cared about: whatever I was working on.
I also miss the elegant discoverability of classic menu bars. I loved being able to open a program and quickly become familiar with what tools are available.
They did a great job surfacing keyboard shortcuts. Hints were right there beside the menu items, subtly advertised every time you clicked them. You naturally learned the ones you used most. I worked alongside a younger guy for a few months who was blown away by how quickly I navigated around my PC and got work done, for many sequences using like 90% keyboard and 10% mouse.
Exactly. Modern UX seems to split everything into two categories: things that happen automatically, and things that take ten times as many inputs as they did two decades ago. For example, it's nice that modern Windows automatically switches between wifi networks. It's not nice that instead of being a checkbox on a settings menu, turning off the superfluous Lock Screen requires creating registry keys.
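For the curious, here's roughly what "creating registry keys" means in practice. A minimal sketch only: the NoLockScreen policy value under the Personalization key is what I remember working, but it may depend on your Windows edition, and it needs an elevated shell.

    // Sketch only: disable the Windows lock screen by setting the
    // NoLockScreen policy value (assumed to apply to your Windows edition).
    // Run from an elevated (administrator) shell.
    import { execSync } from "node:child_process";

    execSync(
      'reg add "HKLM\\SOFTWARE\\Policies\\Microsoft\\Windows\\Personalization"' +
      " /v NoLockScreen /t REG_DWORD /d 1 /f",
      { stdio: "inherit" }
    );

Compare that with what it replaced: a single checkbox in a settings dialog.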
Menu bars are still alive and well on the Mac, but only as a design holdover from a previous era. Really appreciate how the menu bars on the Mac centralize all application functionality, while also surfacing keyboard shortcuts.
Would love to see menu bars make a return to the iPad and mobile, such as in this concept. You can really get a sense for how quickly a touch-based UI can be navigated using menus and toolbars: https://twitter.com/stroughtonsmith/status/12339911770803609...
I just found out yesterday that you can import a picture into Word and then remove the background, à la the selection tool in Photoshop. Now, while I'm sure most technical people would use Photoshop or GIMP or Paint.NET, the fact that there's an easy way for the other 99% of people to quickly edit photos is pretty incredible.
I always say MS Office reached its peak with Office 2003, right before they introduced the ribbon. Since then I haven't seen many interesting new features. They just keep moving stuff around.
There are tons of users who do require critical new features like cloud integration. And the ribbon was designed specifically because more people find it easier to use, as Microsoft's user research showed.
Office 2000 may very well be better for you. But it certainly isn't for everybody.
> And the ribbon was designed specifically because more people find it easier to use, as Microsoft's user research showed.
Not exactly. What Microsoft's user research showed was that people who were unfamiliar with Office found the Ribbon easier to use than the traditional menu bar. They did not test whether the same held true for experienced users. They also did not test whether the Ribbon had a higher "skill ceiling" than a traditional menu bar (i.e. if you take two users, one who is proficient with the Ribbon and one who is proficient with the traditional menu bar, and ask them to complete the same task, who is faster?).
Most of the complaints about the Ribbon came from the audiences that Microsoft failed to adequately test the Ribbon on. Microsoft, as far as I can tell, figured that experienced users would quickly get used to the new UI paradigm and adapt. That was not the case, due to the aforementioned tendency for the Ribbon to "helpfully" move things around in order to put the most recently used tools front and center. This broke many people's muscle memory and, more importantly, inhibited the formation of new muscle memory. It's the latter that was especially galling for experienced users. Changing the UI is bad enough. But changing the UI and replacing it with a constantly shifting toolbar that gives the user no indication as to where their controls are, nor any consistency with regard to their positioning, is intolerable.
Imagine if your car radio shifted its buttons around every time you started it to put the last selected station in the first position.
Their research probably found people LIKED it more, not that it performed better. Computers went mainstream and functionality became secondary to seeming high tech.
I don't understand what you gain by asking people questions about how they think usability has improved. Users are notoriously bad at actually knowing what they want. If I were testing this sort of thing I'd give them tasks to do and watch what they do, when they look frustrated etc.
There was a lot of real-world usability research that went into the Office 2007 ribbon-- both qualitative and quantitative. Jensen Harris went into some of this in his blog; see, for example, https://docs.microsoft.com/en-us/archive/blogs/jensenh/more-... .
Recent Excel has a much more convenient interface for formatting pretty graphs than I remember from ~10 years ago. The available options are mostly the same (there really aren't any new features in that area), but it has live preview and more "visual" ways to modify the graph, compared to the gruesome modal interfaces I remember, where you had no idea what would happen until you hit "OK".
> But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.
Hamburger menus are literal garbage with a little bit of everything and zero organization. Give me a menu bar instead. Swipe-to-refresh is completely useless for well-behaving software and pinch-to-zoom often activates when I wanted to press a button instead.
Mobile device features were shoved into desktop UI without regard for desktop users. Desktop users' productivity has suffered as a consequence.
And you will have to click on that hamburger menu, since even on a device with a keyboard there will be no shortcut for using it. It's not merely nostalgia that has me pining for those bygone days of being able to drive almost every feature of every application from the keyboard, and having a clear culture of right-click-gives-a-context-menu ... now on a Mac I'm resigned to apathetically pulling down menus with a mouse and playing guess-which-keyboard-modifiers-combine-for-this-menu-option.
At least on macOS, if you don't like a particular combination, you can reassign it — for any or all application(s).
Also, what stops you right-clicking (or in Apple parlance, secondary clicking) on macOS? Context menus are plentiful, either by:
- Holding Ctrl and clicking with the primary mouse button
- On the Mighty Mouse and Magic Mouse, enabling secondary click in System Preferences
- On the Magic Trackpad, enabling secondary click as either a click in one of the lower corners or two-finger tap in System Preferences
It's not about right-clicking. It's about not having to use the mouse in the first place. Learn to use your keyboard more instead of your mouse. Tabbing around, for example, is a hell of a lot faster than moving your mouse across all the fields to type in. But many hamburger menus can't be tabbed into nor are they brought down with the alt key (like on Windows) nor are they available by the context key (since... if they were, they could be tabbed into).
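For what it's worth, making that kind of menu keyboard-reachable costs almost nothing. A minimal sketch (the element ids are made up, not taken from any particular app): use a real <button>, which is tabbable and activates on Enter/Space by default, and expose its state for screenreaders.

    // Sketch: a hamburger toggle that sits in the normal Tab order.
    // A real <button> gets keyboard focus and Enter/Space activation for free;
    // aria-expanded tells assistive tech whether the menu is open.
    const toggle = document.querySelector<HTMLButtonElement>("#menu-toggle")!;
    const menu = document.querySelector<HTMLElement>("#menu")!;

    toggle.addEventListener("click", () => {
      const wasOpen = toggle.getAttribute("aria-expanded") === "true";
      toggle.setAttribute("aria-expanded", String(!wasOpen));
      menu.hidden = wasOpen; // hide it if it was open, show it otherwise
    });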
At least on macOS, that's because hamburger menus should be an accelerator; the functionality that they expose should also be available via the global menu bar.
It's badly-designed apps, likely cross-platform or designed by people who don't know the macOS HIG, that bring mobile-style hamburger menus to macOS, not the hamburger itself.
> Applications are so much more complex today, supporting more combinations of OS and input method and data storage and accessibility and display modes and whatnot.
But do they need to be that complex? Oftentimes we are solving the same problems (most CRUD apps aren't doing anything we didn't do in the late 90's), but devs have convinced themselves that all of this abstraction and overly engineered layering is necessary. It's often not.
Nobody asked for applications to support "more combinations of OS, input methods and data storage and accessibility and whatever else"; developers decided to shove all that in because reasons.
If applications are worse because they are doing so much more then they should stop doing that "much more", focus on doing one thing and leave the rest to other applications.
> Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.
That isn't a great example of best practice IMO, since the only expectation I have about hamburger menus is for them to die in a fire.
> Nobody asked for applications to support "more combinations of OS, input methods and data storage and accessibility and whatever else"
Really?! Just look at all the personal time people have spent contributing to projects like WINE for Linux, for example. MANY many people want cross-compatibility. I for one have certainly done a lot of waiting for things to become available for Linux, and very much appreciate all of the cross-platform frameworks that exist and have existed in the past (WINE, Adobe AIR, Electron, Cordova, etc.) to enable cross-platform software. I think they have been GREAT for the whole ecosystem, both for consumers and developers.
Making computers more accessible for the visually or motor impaired has also been one of the great victories of modern software... and certainly not because developers have shoved it down people's throats. In fact, governments have had to REGULATE it in order to drag developers kicking and screaming into building accessible software.
As for input methods, that is just a natural evolution required from the proliferation of devices (desktops, smartphones, etc).
So sorry to say I don't think your statement is rooted in any kind of reality beyond what exists in your head.
Wine is a compatibility layer, and in fact it supports what I was saying on two fronts: first, it is something developers decided to do; and second, the Windows application does not need to care about the 1% that uses Linux on the desktop or force cross-platform complexity and bloat on Windows users. Instead, the Linux users (who, as you seem to imply, do not mind that extra bloat) can use Wine to run the same application on their systems.
Accessibility is something that can be necessary, but certainly not in all situations. Though of all things, it is the one bit that the OS itself can support the most and relieve applications from having to do it themselves (of course this assumes that applications do not try to use the lowest common denominator of OS functionality in order to be cross platform and actually take advantage of the functionality their OS provides).
About the proliferation of devices, this is a great case where what I wrote about leaving things to other applications applies: instead of having a single application try to cater to a bunch of different devices and input methods (and thus provide a mediocre experience on all of them), it is better to have one application tailored for each device and input method.
And note that I'm not writing about what is being done but about what should be done. Of course this is in my head; otherwise I would say that things are actually nice now.
If things are so complex now, why does Google Maps constantly shift buttons and menus around without offering new functionality? To me it seems designers are just spinning their wheels. The whole data-driven UX thing reminds me a little of Agile with its story points and velocity charts. It looks "scientific", but if you take a closer look it's just BS.
Also, Google Maps' interface for EDITING routes, its main purpose, is utterly broken. If you misclick, you must start over. Most late-90's offline map editors were orders of magnitude better.
But these cool kids will never understand functionality vs ubiquity.
I really hate how the order of "Images Video Maps .." changes on Google depending on the query. The UI shuffling around unpredictably makes it that much harder to find anything from positional memory.
It seems like a lot of changes in software now are made just for the sake of change.
But in a way it makes sense for the developers: if there is nothing left to change or work on, there is no reason for them to still have their jobs.
Why fix bugs when instead you can just move things around and make things more flashy, in an attempt to make management think you are making the app more 'responsive' or 'increasing user engagement'?
"Applications are harder to use, yes, but because they do so much more."
But do they? The move to web browser apps and the loss of rich native desktop functionality mean that many web apps offer far less functionality than native desktop apps. The companies that offer these web apps sell them on their easy sharing capability and collaboration features.
An example: thirty years ago (or more), you could use any desktop word processor and perform basic tasks like spell check, change the colour of text, choose fonts and change their size.
Or today, in 2020, you can use Dropbox Paper, with no spell check, no way to change the colour of text, and no ability to choose fonts or even alter their size. But it does run in a web browser. This is apparently progress.
I don't know why you're using Paper as an example. In 2020, you can use Google Docs, which has spell check, text color, the entire collection of fonts at fonts.google.com available for instant selection, and so on. And you can collaborate instantly, too.
> Applications are harder to use, yes, but because they do so much more. Your interface has to work and be responsive whether your file sits on a local disk or in the cloud, or maybe has to be synced.
Why? Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? The only reason for this that I can discern is that it's an easy way to lock users into paying a monthly or annual recurring fee, rather than a one-time fee for the software.
Users themselves are not asking for cloud connectivity. People understand files. They can save files, copy files to a thumbdrive (or Dropbox), and e-mail files as attachments. Files are an interface that people have figured out. We don't need to reinvent that wheel.
> It needs to work with mouse and touch and a screenreader.
In my experience, older applications are far more screenreader friendly than new applications. Moreover, not all visually impaired people are so visually impaired as to require screenreaders, and the more skeuomorphic designs that were favored in the '90s and 2000s were far easier for them to use than today's flat designs where one can't tell what is and is not a button. Heck, even I get confused sometimes on Android UIs and don't notice what is a plain text label and what is an element that I can interact with. I can only think that it's far worse for people who have sensory and cognitive deficits.
As for "it needs to work with a mouse and touch", my answer is once again, "No it does not." Mouse and touch are different enough that trying to handle both in one app is a fool's errand. Mice and trackpads are far more precise than touch, and any interface that attempts to both mouse and touch with a single UI ends up being scaled for the lower precision input (touch), which results in acres of wasted space in the desktop UI.
> The same kind of clear UX standards just don't exist anymore because there are so many different apps that do so many different things, and there's no obvious best answer.
Of course there's no obvious best answer if you're trying to support everything from a smartwatch to a 4k monitor with a single app. So why are you trying to do that? Make separate UIs! Refactor your code into shared libraries and use it from multiple UIs, rather than attempting to make a single mediocre UI for every interface.
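A rough sketch of what I mean (the names are entirely made up): keep the domain logic in a shared module with no UI assumptions, and let each front end be a thin adapter over it.

    // core.ts: shared library, no knowledge of any particular UI.
    export interface Note { id: string; text: string; }

    export function addNote(notes: Note[], text: string): Note[] {
      return [...notes, { id: crypto.randomUUID(), text }];
    }

    // desktop.ts: one thin front end over the same core. A phone or watch UI
    // would import the identical functions but lay them out for its own
    // input method and screen size.
    import { addNote, type Note } from "./core";

    let notes: Note[] = [];
    document.querySelector<HTMLButtonElement>("#add")!.addEventListener("click", () => {
      const input = document.querySelector<HTMLInputElement>("#note-text")!;
      notes = addNote(notes, input.value);
    });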
> But the good news is that applications do slowly converge on best practices. Think of how things like hamburger menus or swipe-to-refresh or pinch-to-zoom have become expected standards.
The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them. Swipe to refresh is much less discoverable than a refresh button, and much more difficult to invoke with a mouse. Pinch-to-zoom is impossible to invoke with a mouse. Hamburger menus are far more difficult to navigate than a traditional menu bar.
When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.
Mouse and touch seem like completely different things, but they're more similar when you consider pen/stylus input. 2-in-1 devices running a proper desktop-grade OS[0] are amazing devices, and one thing they're missing is properly designed apps, which are few and far between. 2-in-1s made me actually appreciate the ribbon a bit more - though an overall regression in UX, it shines with touch/pen devices, which I'm guessing was MS's intention all along[1]. 2-in-1s with pen are really magical things; I use one (a Dell Latitude) as my sidearm, and started to prefer it over my main Linux desktop on the grounds of convenience and versatility.
The best pen-oriented apps actually allow you to use keyboard + finger touch + pen simultaneously. You use pen for precise input (e.g. drawing, scaling, selecting), fingers for imprecise input (e.g. panning/rotating/scaling, manipulating support tools like rulers) and keyboard for function selection (e.g. picking the tool you'll use with the stylus).
--
[0] - Read: MS Surface and its clones.
[1] - For instance, Windows Explorer would be near-unusable as a touch app without a pen, if not for the ribbon that makes necessary functions very convenient to access using finger touch.
I agree with your caveat, but reply with a caveat of my own. The key difference between a mouse and pen/touch is the ability to hover. With a mouse, I can put the cursor over a UI element without "clicking" or otherwise interacting with it. That's difficult to do with a pen and impossible to do with touch. The key use case that hover enables is the ability to preview changes by hovering over a UI control and confirming changes by clicking. A pen/touch UI would have to handle that interaction differently.
Thank you for your caveat to my caveat, and let me add a caveat to your caveat to my caveat: while you're spot on with the hover feature being an important differentiator, it's not in any way difficult with a pen. It works very well in practice. On Windows, even with old/pen-oblivious applications, it works just like moving the mouse - you gain access to tooltips and it reveals interactive elements of the UI. That's another reason I prefer pens over fingers.
(Tooltips usually show when you hold the mouse pointer stationary over a UI element. With a pen, it's somewhat harder to do unless you're in a position that stabilizes your forearm, but there's an alternative trick: you keep the pen a little further from the screen than usual and, once over an element you want to see the tooltip for, you pull the pen back a little, so that it goes out of hover detection range. It's simpler than it sounds and it's something you stop thinking about once you get used to it.)
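On the web side, incidentally, the Pointer Events API exposes exactly this distinction, so an app can treat a hovering pen like a hovering mouse while ignoring touch. A minimal sketch (the element id and tooltip helper are made up):

    // Sketch: show a tooltip on pen or mouse hover, but not for touch.
    // event.pointerType is "mouse", "pen" or "touch"; a hovering pen fires
    // pointermove with no buttons pressed, just like a mouse does.
    function showTooltipAt(x: number, y: number): void {
      console.log(`tooltip at ${x}, ${y}`); // stand-in for real tooltip logic
    }

    const target = document.querySelector<HTMLElement>("#canvas")!;
    target.addEventListener("pointermove", (e: PointerEvent) => {
      const hovering = e.buttons === 0; // no button or pen tip pressed
      if (hovering && (e.pointerType === "pen" || e.pointerType === "mouse")) {
        showTooltipAt(e.clientX, e.clientY);
      }
    });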
That's very interesting! My only experience with a pen UI is from using the iPad with an Apple Pencil, which doesn't seem to work the same way. As far as I could tell, the Pencil works like a very precise finger. It's great for writing, but I didn't notice any further enhancements beyond that. Of course, it might just have been that the app I was testing it with (Microsoft OneNote) didn't fully support the Pencil at that time.
On a Windows 2-in-1 it works more like a mouse with pressure sensitivity. It maintains its separate pointer (which is only shown when the pen is near the screen), so apps that aren't designed around a pen just behave as if you worked with a regular mouse.
> Why does every application need to be "cloud connected"? What's wrong with having a normal desktop application that saves files to the filesystem like every application did for thirty-odd years? ... Users themselves are not asking for cloud connectivity.
Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity. I simply won't use an app if it doesn't have it. Your argument makes as much sense as "why does every skyscraper have to have elevators? Users aren't asking for anything more than stairs!"
> Mouse and touch are different enough that trying to handle both in one app is a fool's errand.
Except you don't have a choice. Many apps these days are webapps, and absolutely require both interfaces to work. Many laptops also support both. That's just how it is.
> The problem is that all of these new "best practices" are far worse, from a usability perspective, than the WIMP (windows, icons, menus, pointer) paradigm that preceded them... When today's best practices are worse than yesterday's best practices, I think it is fair to say that applications are getting worse.
Except WIMP doesn't work on mobile. So it's an apples-to-oranges comparison.
> Of course they absolutely are. I keep literally all my documents in the cloud. I'm constantly editing my documents from different devices -- my phone, my laptop, my tablet. Users like myself are absolutely asking for cloud connectivity.
If by "cloud" you mean a filesystem-like abstraction that's synchronized across multiple systems (e.g. Dropbox or OneDrive), I have no objection to that. Heck, I even called out Dropbox as a viable alternative to "cloud connectivity". What I am objecting to is the tendency that many apps (especially mobile apps) have of locking your data away in their cloud, making it impossible to get at your data, back it up, or share it with a different application.
> Many apps these days are webapps, and absolutely require both interfaces to work.
That's a non sequitur. It's entirely possible to detect the size and capabilities of the device the user is using and display a UI that's appropriate to that device. What I'm militating against is the lazy approach of designing the UI for mobile first, and then using CSS media queries to scale it up to fit a desktop viewport. That results in acres of wasted space and a poor user experience, because the user doesn't have the same interaction expectations that they would have if they were using the UI on a mobile/touch device.
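Concretely, the platform already reports the signals you need for that. A rough sketch using standard media-query features from script (the breakpoint is just an example value):

    // Sketch: choose a layout from what the device actually reports, instead
    // of scaling one mobile layout up. "pointer: coarse" means the primary
    // input is imprecise (a finger); "hover: hover" means it can hover.
    const coarsePointer = window.matchMedia("(pointer: coarse)").matches;
    const canHover = window.matchMedia("(hover: hover)").matches;
    const wideViewport = window.matchMedia("(min-width: 1024px)").matches; // example breakpoint

    const layout = wideViewport && canHover && !coarsePointer
      ? "desktop" // dense toolbars, menu bar, hover previews
      : "touch";  // large targets, gesture navigation

    document.body.dataset.layout = layout; // stylesheets can branch on [data-layout]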
> Except WIMP doesn't work on mobile.
And mobile UIs don't work on desktop. Trying to make a one-size-fits-all UI is a fool's errand. Much better to design each UI for the platform that it will be displayed on (laptop, tablet, phone, smartwatch, etc) than trying to scale a single UI across multiple devices.