But the real problem is that most shops can't measure usability in a fine-grained way. And yet everyone does have some idea that it's really important. Consequently they are bamboozled by anyone who purports to have such expertise.
For the most part, so-called usability experts are really just graphic design snobs. They have a certain aesthetic they like, which is arguably "cleaner" and has more white space. So the page might be better organized but that's nowhere near what true usability is.
Real usability work is about trying to understand how the user is thinking. What their mental model is. What their goals are. In what context they are trying to achieve a task. Not just how their eye is scanning for information.
Anyway my point is, when it comes to real usability work, programmers are just as (un)qualified as graphic designers, marketers, or any other kind of "product" person. Arguably the designer is supposed to have a larger toolbox when it comes to possible graphic treatments, but the programmer also has a toolbox of quantitative methods, experimentation, and a deep understanding of how the system could work.
* Minimise the total number of clicks needed to access the sum of the program's functionality.
* Make frequently-used functionality easier to reach than rarely-used functionality.
* Don't have more elements in the UI than is easy to parse quickly.
* No surprises.
However, after reading this article, I get the feeling that it's a more arcane art, which not many people know. What are some simple rules about good UI design, in your opinion?
Your first three principles are about efficiency. That's a big part of usability. When you reduce clutter, you make it easier to use by clarifying the next action the user should take.
But "No surprises" is rather complex. It means that you not only have to explain what you are doing, but also work with the user's mental model, which might be totally wrong.
Here's an example from my current project. We are designing a web form to upload images and video to Wikipedia. In 2010, the users' model of media uploads comes from Facebook, ImageBucket, and maybe Flickr. Even after you make them click on a box that says they give up their own rights before proceeding, they don't understand what they did. Apparently people's mental model just continues on blithely until it smacks against a major contradiction.
So we've tried various metaphors. "License your work" was replaced by "Release rights", and the next iteration is going to be "Donate your work". That wording was the suggestion of a usability testing and analysis firm we hired, by the way. So it does take a certain amount of inspiration to come up with alternatives, but recognizing where the problem areas are is just a matter of measurement.
Inspiration is aided by just having a bigger bag of tricks, too -- knowing common problems and solutions. For instance, if there is so much to do on one page that users are finding it overwhelming, it is helpful to break it into multiple steps (a "wizard"). That's another approach we have taken.
Anyway, I'm not sure if my expertise with usability is so deep that I could pronounce just a few principles. Maybe "reduce cognitive load to the minimum" is one, which has been more memorably distilled by Steve Krug as "Don't Make Me Think". Since we want to avoid onerous information transfer and cognition, there are some models of communication that might be helpful here; a classic text on human communication is The Responsive Chord, by Tony Schwartz. He argued that most communication is about activating stuff you already know (at the right moment), which you can see at play in the example I gave.
But ultimately it's about motivating human behavior, so it's only about as complex as the human heart. ;)
Personally, I spend a great deal of time putting myself in the mind of the user, trying to figure out how we as humans visualize things and simplifying the actual underlying task at hand.
The list you summed up does work for 95% of all cases - especially if you're just thinking about a "website". However, when you're trying to create something new or working with some new concepts, you have to imagine how the user pictures things so that it matches their thought pattern as much as possible.
This also includes placing things where the user is most likely to look for them, etc.
So you're right about the number of clicks, but you also have to make sure that the user can complete the task (effectiveness) and that they are happy (satisfaction). From our own research, the three (effectiveness, efficiency, and satisfaction) are very highly correlated.
Usability is much more about mental models than it is about efficiency, and much more about helping the user feel confident and safe than about a lack of surprises. Sensible hierarchy and visual relationships are a big thing that most people seem to forget.
Workflow, too, is critical. I've used a lot of software where every screen is mathematically, scientifically "usable" but the software overall was crummy because it duplicated a workflow from another piece of software which had got it wrong initially.
For example: 100% of email clients. Feed readers. Project management tools. They're all the same.
Surprises are actually very good in many cases. Even "ease of use" is something you don't actually always want.
My interfaces break all sorts of rules like "use standard widgets" and "conform to expectations" and "no surprises" -- but that surprise then results in an "AHA!", great enjoyment and power, and that's why people write love letters to our customer support account.
For example, in the information retrieval community, companies like Google continue to employ a lot of people to mark search results as relevant or not. Even with all this data, it's still hard for us to quantify what is "relevant" to a user. Likewise, it is hard for us to quantify what is "usable."
But even though the programmer may own this toolbox of quantitative methods and whatnot, most of them do not use it when it comes to usability. Why? Because the toolbox is quite empty. The advantage that these "snobby" designers (just like how programmers are "snobby" about their choice of programming languages and tools) have over programmers is that designers have more empathy towards the everyday man. So there is some emotion involved. Sure, there might be a science behind all of this, but since no one has figured it out yet, we'll have to use our intuition of what a good design is.
In the future I imagine there will be plenty of UX consultants who, rather than having a strong education in the psychology of websites, are instead experienced with the tools necessary to evaluate a design and make recommendations. Then there will be graphic designers, marketers, etc. who are actually qualified to work with usability.
Once you do start analyzing, what do you do with the results? You can try to make small tweaks -- it can be worthwhile if what you have is already good. If what you have is bad however, no amount of rigorous analysis is going to solve the artistic problem of coming up with a cohesive vision of something better.
The way I describe this to people is: "Usability science can only test a hypothesis. It doesn't give you a hypothesis to test."
Not to mention the problem of hill-climbing: you may think your test results are amazing, but you can't tell if you're really at a peak or in a valley with a lot of mist obscuring a potentially even higher peak.
UX designer = "mental space"
User Interface Designer = "white space"
User Experience Designer = "mental space"
Let me suggest an improvement
Mediocre Designer = "white space"
Good Designer = "white space and mental space"
90% of everything is crap.
Therefore, if handed ten UIs designed by programmers, nine will be crap. If handed ten UIs designed by marketers, nine will be crap. Perhaps there is a characteristic way in which the nine programmer crap UIs are crap, but the observation that most programmer UIs are crap is not insightful and doesn't magically justify the idea of turning UI design over to product management.
That, and the attitudes range from "who cares" to the downright hostile "if you can't understand it, you're stupid". I say this as a programmer (one who spends a lot of time on the user-facing components), not as a business person.
That attitude has to change - I don't care that the business people think hackers are Eloi fit for the slaughter while they are the Morlocks - the intelligent ones. Good business people don't have that attitude, good business and product people do care about scalability, supportability and quality. Those that don't will fail.
On the other hand, the constant refrain I hear from people in my own community and profession about making something usable and intuitive just makes me upset. I can't change the broken business people - I can try to make my corner of the world better.
May I attempt to find common ground between our points of view by suggesting that 90% of programmers have this attitude? And that by the same token, 90% of business people are busy demanding flash web site intros and banners that SCREAM without doing the A/B testing or other quantified analysis to know whether they are generating higher conversions?
But your last statement is absolutely spot on: making what we do better is what matters most. Upmod gladly given.
Regardless, I concur with you - I can't speak to business people directly as it's not my particular domain, but yes - I've seen the same behavior that you have.
Now that I think about it, what was valuable then that isn't valuable now? I just rejoined!
Very well written; and some great advice. I much prefer this to ranty-zed :)
The Long Beards, (Latin Langobardi, later "Lombards") were a tribe of Germanic "barbarians" who settled in Northern Italy in the late 500s AD. They were primarily responsible for thwarting the Byzantine Emperor Justinian's campaign to reconquer Italy and re-establish the Western Empire.
Today Lombardy is the most populous of Italy's 20 regions and Milan, the capital, is Italy's leading financial center.
I'm sure Apple values systems programmers. And Google. And Oracle (at least they certainly should). Microsoft could not possibly build their products without systems programmers. Amazon could not have created its cloud products without systems programmers.
In Apple's case, their products would be impossible without the capabilities that the systems people build. Without an operating system that they could strip down and fit on a phone, while keeping much of its core functionality, no iPhone. Likewise if they didn't have the technology underlying Safari under their control.
Much of the eye candy that wowed people in Mac OS X came as a thin layer on top of the technology built by systems programmers. Like the graphics APIs underlying Expose. And the fast, system wide indexing behind Spotlight.
So, if you want to be appreciated as someone who "makes products for people who make products," work for a software company that makes products requiring innovation in systems software.
I'm so annoyed with people who make open source projects that don't comply to the principles that Zed listed. (Clear code, documentation, easy to get started, reasonable defaults, support, etc.) It happens so often that I interact with such OSS projects and it's so hard to work with them.
I feel bad for all the honest effort that these people put into the core parts of their OSS projects, without realizing that if they concentrated a bit more on the things that Zed mentioned, their project would become so much easier and fun to work with, and they'd have a bigger healthier community around it.
<Sociopathic rant> Sometimes when I see one of these projects (let's call it, as an example, project FooBar, and say that it converts between image formats), I fantasize about forking the project, fixing up all the documentation, making a good UI, providing installers, making a nice logo, etc. Then I'd rename it, sell it as a commercial product, and become a millionaire, without crediting the authors of FooBar. Then I will drive my fancy car to visit the core developers, and I will show them a wad of $100 notes and tell them: "You see this? This could have been yours. This could have been yours if only you had documented your freaking code. You worked so hard on this code, and if you had only taken care of the things around it, like documentation and binaries and design and UX, you could have made a lot of money. But you didn't. You suck." Then maybe for their next project they'll write freaking documentation. </Sociopathic rant>
Amen to that. I'm working on a project to boost a new product line for my megacorp that competes against the established leader. So I asked to really improve the UI. And it was approved, provided, of course, that I made it myself :( . All my pleas for a designer went unheard or were shrugged off.
With some help from the Internet, I've come up with a design that doesn't suck, but it's still a far cry from what a designer would have done in a tenth of the time (in effect, they hired me as a crappy designer and spent far too much of my time aka their money, but they don't see it that way)
As a hacker, I enjoy having large blocks of time to create what I think is good and not coding some autocratic vision of a product. As I've grown older though, I have come to appreciate and value the interaction between MBA types and myself. In one particular project an MBA type brought me valuable information about what we were doing well, how we compared against competitors, and what he thought our customers wanted. He then told me to run with it and stepped out of the way. I thought that was pretty neat.
Another thought - A quote from the article:
"A sudden rise in Long Beards simply copying Product People Products but doing it cheaper using their cost reducing backend skills."
reminded me of the Paul Graham essay "Copy What You Like"
... and copying something you like is probably a good way to bridge the gap between back end "long beard" and product designer.
One of the founding principles of Y Combinator is to make the Hackers the product people.
I end up with things like "arrive at site", "create objectX", "view & edit objectY" as the next-level nodes and keep going down levels from there. Since a mind map is a tree structure, it really helps to visually map out the decisions a user will make. Following a branch is the user deciding what to spend their limited attention on. If you are improperly distributing the number of decisions a user must juggle at a particular node, or repeating a lot of functionality in different areas, you will instantly see it visually.
Then when you start translating to code, if it's an MVC architecture like most webapps, the first-level nodes become the names of your controller functions.
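The tree-and-branching idea above can be sketched in a few lines. This is my own illustrative sketch, not a tool the comment describes; the node names and the seven-choice threshold are assumptions:

```python
# Model each screen/decision point as a tree node, then flag any node
# whose branching factor asks the user to juggle too many decisions.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

def overloaded(node, max_choices=7, path=""):
    """Return the paths of nodes offering more choices than the threshold."""
    here = f"{path}/{node.name}"
    flagged = [here] if len(node.children) > max_choices else []
    for child in node.children:
        flagged += overloaded(child, max_choices, here)
    return flagged

site = Node("arrive at site", [
    Node("create objectX"),
    Node("view & edit objectY", [Node(f"action {i}") for i in range(9)]),
])

print(overloaded(site))  # the objectY node offers 9 choices, so it gets flagged
```

Walking the tree like this is the programmatic version of "instantly seeing it" on the mind map.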
What methodologies do other people use?
Folks interested in a look into his more current thinking might want to take a look at his keynote for Agile 2008 http://www.cooper.com/journal/2008/08/alans_keynote_at_agile..., and his keynote at Interaction 08 http://interaction08.ixda.org/Alan_Cooper.php.
And from his involvement in things like the http://www.andersramsay.com/2010/02/04/notes-from-the-agile-... things are moving on from there too.
There's a large chunk of the UX community that's moving away from the developers-as-enemy attitude. And vice versa for the developer community I hope :-)
Most programmers are aware of that, and are willing to put the time and effort in to making what's required. But often the product people, besides deciding what things should look like overall, are also in control of the budget and schedule, and they expect all those kinds of things to just magically appear.
We have a situation at work just like this: the inherited backend code is dreadful, full of WTFs due to poor implementation, lack of knowledge and lack of planning and design. It's eating away our profit margins. New features and customization take far longer than they should, and time estimates are impossible. Bugs, from simple UI stuff to server issues, are numerous and end up adding to customer support costs, not to mention reputation.
Does management understand this? Nope, they just consider these costs the normal cost of doing business in this sector. We have pleaded time and again to rewrite and refactor the code, but we are never given the time or leeway because the sales team wants the latest shiny feature to sell to the client.
The irony is that we have a healthy and growing user base and top-rank clients. An outside competitor (of which the developers are aware, but about which the management are oblivious) are about to eat our lunch because they can simply execute better and faster.
A 'revolutionary' design may be awesome, and may increase the productivity or ease of use of your specific application, while causing serious friction between your app and its environment in the user's mind.
This applies just as much to infrastructure as it does to anything customer-facing. To a non-programmer, for example, sendmail, nginx, and mongrel2 may all look like user-hostile crap. To a developer or administrator, it's easier to discern the differences in the design of this part, which is itself a major part of the usability of a webserver.
I'm almost certain when he says he can "make a web server", he means that he can sit down with a text editor, a compiler, and a standard C library, and produce a program that will serve HTTP requests. There's some gray area beyond this, but he definitely does not mean that he can download, configure, and run a premade piece of software.
Is this what you mean too? Is there really a population of self-described "product people" who have this knowledge? Or did you interpret his statement differently? I certainly agree that such people exist, but I would be surprised that their luxurious gray beards don't blow their cover. Which is to say, do they really work as product managers?
And I second the stuff about parsers in the section about making usable infrastructure software. Every time I peek at the mongrel2 source, there's one or two more Ragel files in there, some of them generating token streams to be consumed by a Lemon parser. I figured there was a good reason for that (although I shot a quizzical glance at the one that just parses "--a --b b" CLI args into key/value pairs). What I didn't realize was how much that's an integral tenet of his software design philosophy.
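For anyone curious what that "--a --b b" parsing amounts to, here is my own rough sketch of flag-to-key/value parsing in Python (it is not mongrel2's Ragel grammar, just an illustration of the behavior described):

```python
def parse_flags(argv):
    # Parse "--a --b b"-style arguments into key/value pairs. A flag
    # followed by a bare word takes it as its value; a flag followed by
    # another flag (or nothing) is treated as a boolean switch.
    result = {}
    i = 0
    while i < len(argv):
        if argv[i].startswith("--"):
            key = argv[i][2:]
            if i + 1 < len(argv) and not argv[i + 1].startswith("--"):
                result[key] = argv[i + 1]
                i += 2
            else:
                result[key] = True
                i += 1
        else:
            i += 1  # this sketch ignores stray positional words
    return result

print(parse_flags(["--a", "--b", "b"]))  # {'a': True, 'b': 'b'}
```

The point of using a real grammar even for something this small is consistency: every input the program accepts is specified, not accreted.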
Click some of the links at the bottom. It's so gorgeous.
How many unusable software products came about because of Visual Basic? I can rant and rant about Cooper.
Then maybe I can stop writing stupid shell script wrappers.
The part about linguistic experience was especially good. I have thought for some time now that all configuration files, command-line options etc. should have simple and clearly defined syntax. Some already do. My dislike of weird, inconsistent and complex syntax (and semantics) is so severe that I've semi-intentionally not learnt shell-scripting, Perl or Ruby.
Sadly, there are very few systems that I feel I get a good linguistic experience(TM)(R) out of. I have come to think that GUIs as we know them are based on fundamentally flawed ideas - pointing and grunting - instead of using a language, the feature that separates humans from other animals.
I really wish good textual UIs (there aren't many that I know of, but I'm using one to write this comment: GNU/Emacs) became the norm, instead of the randomness of contemporary GUIs or the bass-ackwardness of shell-like languages. I think UIs should aim to become Turing-complete, completely self-documenting, and reflective. The distinction between a function, a program, and a user interface is in my opinion mostly arbitrary and harmful, creating an unnecessary wall of misunderstandings between the author of a program and its users.
The dilemma is that human language is staggeringly ambiguous. This works because human beings have an extremely sophisticated model of other human beings in their head and a really good pattern matching facility, which makes us really good at selecting the intended interpretation without even noticing the ambiguities. Presented with the languages we use to instruct computers, designed to be perfectly unambiguous, normal humans quickly get upset that the computer doesn't understand them if they veer the slightest bit from the tightrope of a language they're given to use.
In other words, if you have a computer that faithfully and consistently passes the Turing Test, you may have something. Otherwise, pointing and grunting might be less frustrating for normal people.
Another aside: pointing or at least gesturing is much better than even the perfect language interface for editing a photo or playing a video game. So it depends on the application.
Also, editing a photo is a perfect example of where language interface would be useful: instead of hunting through Photoshop's millions of menus for the graphic effect you want, you could just say/type things like:
Scale to 200%
Increase contrast 10%
Resize to 400 by 600
The other thing is that we've somehow locked onto the idea that the computer should be able to understand everything immediately or else it's not even worth it. But in the real world humans constantly miscommunicate, and we solve that problem very simply: ask more questions. So if the user says something like:
Adjust contrast by 10%
Do you want to increase by 10% or decrease by 10%?
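That ask-a-question loop can be sketched in a few lines. The commands, phrasings, and the `handle` function are invented for illustration, not from any real product:

```python
import re

# A tiny command handler: execute unambiguous commands, and ask a
# follow-up question when the verb doesn't specify a direction.

def handle(command):
    m = re.match(r"(increase|decrease|adjust) contrast(?: by)? (\d+)%", command.lower())
    if not m:
        return "Sorry, I didn't understand that."
    verb, amount = m.groups()
    if verb == "adjust":  # ambiguous: mirror the question back to the user
        return f"Do you want to increase or decrease by {amount}%?"
    return f"Contrast {verb}d by {amount}%"

print(handle("Adjust contrast by 10%"))  # asks a clarifying question
print(handle("Increase contrast 10%"))   # unambiguous, just does it
```

The interface stays forgiving precisely because it is allowed to ask back instead of failing on the first ambiguity.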
You're right it can be easier once you are familiar with it, but there's nothing to suggest how/if "up" works if you are not.
Every educated software engineer knows that by respecting and following fundamental programming methodologies, such as data abstraction, modularity, encapsulation of details, and other basic techniques for managing the complexity of large systems, you get maintainability and programmer-friendliness for free.
The problem is that most in-house back-end developers, and their public teachers, don't have even a basic engineering education.
People who have managed to understand the ideas on which the SICP book is based, let alone MIT graduates, never write such long but empty posts.
"couldn't care less", please. How can you be hackers if you can't get some basic logic right?
"The Inmates are Running the Asylum argues that, despite appearances, business executives are simply not the ones in control of the high-tech industry. They have inadvertently put programmers and engineers in charge, leading to products and processes that waste money, squander customer loyalty, and erode competitive advantage. Business executives have let the inmates run the asylum!"
Don't get much clearer than that, but let's talk about the damage the book caused. For a book that purports to try to improve things for humans it sure caused a ton of damage to humans. It was lacking in almost any real substance and was effectively a thinly disguised rant about coders. It was used to fire programmers, boost fake ass business guys, and created the plague of crappily implemented products that may look slick, but fail in horrible ways.
In my own personal experience with the book I have had numerous executives and biz people toss it on my desk and say they read it and they agree. I've had friends fired or demoted because of this book. It also spawned whole generations of "UX consultants" who fleeced people using the phrasing and words in the book. I personally had to deal with consultant after consultant who would come in, looked at the best we could do, and then insult us. And any time a programmer tried to show who was really in charge and to blame, out came this damn book to prove us wrong.
I could go on, but no matter what little snippets of text you remember, the impact of the book the way I remember it is one of negativity toward a group of people who were not to blame: programmers.
Cooper's point was that, sans any other guidance, programmers naturally tend to create interfaces that reflect the structure of the code (since that is what they are immersed in) rather than the structure of their user's goals. And the reason they don't have any other guidance is that people running technology companies didn't have any kind of design phase other than sending salesmen out to get pointless checklists of features.
If anything, the book is a criticism of the way tech companies are (or at least used to be) run rather than a criticism of diligent programmers doing their best with incomplete information. In fact Cooper says this several times during the book. Unfortunately, it seems easier for a lot of business guys to use the book to transfer blame to programmers rather than implement what it actually says.
Really bad Amazon summary too.
It was lacking in almost any real substance and was
effectively a thinly disguised rant about coders.
It is sad and weird that many programmers don't like the book or, like you, see it as an attack on them. That's far from the truth. The book gives a very accurate description of how our (programmers') way of thinking differs from that of "ordinary" people. If you are a programmer writing software for other programmers, it may be OK not to care how Joe User thinks, but if you are making a product for a general audience you had better pay attention to what Cooper says. I see a lot of his ideas in many successful products: Google Search, GMail, Apple products.
Just recall all the comments after Apple introduced the iPad; look at the report that the iPad is now the highest-scoring product that a leading consumer satisfaction index has ever tracked; and then go read chapter 7, "Homo Logicus". After that it should be clear how a device so lame and purposeless in the eyes of geeks can be such a wildly successful product among consumers.
For me the takeaway from the book was not that "programmers are to blame" but "design for your user, not for you".
That is, I assumed it means the following: when you're building software, you have to accept that the users are the ones "living" in it. Even if the users are bat-crazy, you need to build them an asylum that makes them feel like they're running the show, rather than treat them like inmates to be subdued.
> then he is just unable to get what it's all about (like many other nerds
Extremely flamebaity, abusive 'nerds,' and No True Scotsman-ish.
> I hope he sticks to building awesome web servers nobody needs.