Ask HN: What are some rigorous ways to review and test usability of web apps?
58 points by altsyset on Nov 19, 2017 | 19 comments
I am looking for better and more rigorous ways of reviewing and testing web applications' usability. I use some of these criteria as a measure, but I am not satisfied; I believe even these criteria are vague and can be simplified. What are some you use? Or how can we simplify these points, maybe break them down into something specific?

1. Consistent theme - patterns, layout, colours, icons and fonts

2. Simplicity and minimalism (avoids unnecessary repetition)

3. Consistent messaging and communication

4. User assistance and user guidance, with data. Expecting less from the user.

5. Inline help and other documentation

6. Error handling and communication of errors

7. Error prevention and fallbacks

Are there any that I should include? Is there any way to make them more specific and dead simple?




Random thoughts from the internet.

I think the most rigorous test is deploying in front of paying customers. Usability is not tidy and theoretical. It is practical and messy. The core of usability is functional usefulness, not design aesthetics. At the scale of Google, Material Design may make an absolute difference of a million users. At the scale of the average app, it probably makes close to zero difference. Apple's iOS skeuomorphism wasn't driving iPhone sales; people used it because the iPhone was useful. Emacs is still going strong after nearly forty years. People use Vim.

Good luck.


Except that I am not planning to test an app I built, but an app my clients are planning to use/buy. And I would have to use these methods even though they are subjective. One way to make it less subjective is what @Msurrow suggested, which I think is very valuable.


My follow-up to @Msurrow's point:

> This is subdivided into the user's skill with technology in general and the user's domain expertise level. Is the end user a novice or an expert with tech? Is the user (or rather the primary use-case(s)) for domain experts or domain novices?

When I set out to design and implement anything, I always try to imagine the least computer/tech-savvy potential users as well as the most snobby and tech-savvy potential users. After I've figured out who those two extreme groups of users tend to be, I set out to make my designs as "hand-holding" as possible for the less tech-savvy group while making sure, as best I can, that the way I cater to those less tech-savvy users won't offend or turn off the most tech-savvy users.

While I find it to be one of the most difficult parts of executing and ultimately deciding on a final iteration for release, I also find nailing that middle ground to be one of the most satisfying rewards of the work I do. If I can make something that guides clueless computer users through a series of tasks without hang-ups or significant drop-off, while at the same time the most tech-savvy users have no annoyances or glaring issues to gripe about, then I view my job as well done.

I try my best to get the most tech-savvy users to acknowledge the detail and care that went into UX / UI / usability. If I can't show a new or innovative approach, a less redundant approach, or a more streamlined way of guiding a user through the process, then my main goal is to create an experience where those two ends of the spectrum both walk away feeling there were no annoyances, hurdles, or progress-halting steps in the process / story / guide / purchase & checkout / etc.

Everything should be in its correct place. Everything should be labeled as needed, and those labels shouldn't hog valuable screen space. Errors, notifications, validation checks, and any other mid-experience changes should be clear and obviously placed, with color-coded nuances. Erroneous use shouldn't trash users' inputs, EVER!

If you know you will have lots of non-tech-savvy users, you will have to strategically weigh innovative UI / UX against the added instructions and newly introduced errors / validation issues that come along with it. Is it worth it to innovate on elements and functions that already exist in standardized, well-known ways? Will going those extra miles raise the rate at which users interact / check out / continue reading / commit to desired actions?

I think it's easier said than done to set out and then execute this strategy, but I also find that it's my personal favorite way of positioning myself mentally when it comes time to make decisions about UX / UI / usability. Beyond that, nothing beats sitting with users as they use your product / app / service / website. I don't know if you have access to potential testing users, but I'm sure you could dig up some family and friends to help you run your project through the rigors of real-life use.
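On the "erroneous use shouldn't trash users' inputs" point above, here is a minimal sketch of the server-side half of that rule. Flask and the signup.html template are assumptions for illustration, not anything from this thread, and the validation is deliberately trivial.

    # Hypothetical Flask view: on a validation error, re-render the form with
    # the user's submitted values instead of discarding them.
    from flask import Flask, render_template, request

    app = Flask(__name__)

    @app.route("/signup", methods=["GET", "POST"])
    def signup():
        form = {"name": "", "email": ""}
        errors = {}
        if request.method == "POST":
            form = {key: request.form.get(key, "").strip() for key in form}
            if not form["name"]:
                errors["name"] = "Please tell us your name."
            if "@" not in form["email"]:
                errors["email"] = "That doesn't look like an email address."
            if not errors:
                # ... create the account, then redirect ...
                return "ok"
        # signup.html echoes form["name"] / form["email"] back into the inputs,
        # so a failed submit never wipes what the user already typed.
        return render_template("signup.html", form=form, errors=errors)

The same rule applies client-side: keep form state around (e.g. in localStorage) so a reload or an error never throws away what was typed.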


Just do usability testing with real target users. Expert evaluation does not reveal any surprises, so it’s limited in what it can do for you.

An aside - I find it quite strange that HN has such a blind spot for user research. I've been in the UX/user research industry since the mid 2000s. It's gone from being a niche thing (it was a struggle to find any employers in London who understood what user research was when I started out) to being standard practice in big tech companies. Apple, MS, Facebook, Spotify... They're all at it.

It’s something you really need to know about if your work involves making any kind of decisions that impact your users.


You see, HN's and Reddit's UIs are what make me wonder about all of this. I can argue that both of these sites would score low on many of our criteria, yet they thrive, even in 2017.


Check out various articles online about heuristic evaluations (https://en.m.wikipedia.org/wiki/Heuristic_evaluation - as a starting point). They will give you checklists that people use for general purpose assessments.

As a rule of thumb, it's a good idea to conduct a heuristic evaluation as an initial assessment of usability, make changes as needed, and then conduct an actual usability test to ensure you hit the points you need and your users can accomplish key tasks in your app...
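One way to make that initial heuristic evaluation less ad hoc is to score each finding on Nielsen's 0-4 severity scale and aggregate per heuristic. A rough sketch; the screens and scores below are made-up placeholders, not from the thread:

    # Heuristic evaluation scoring sheet: severity 0 (not a problem) .. 4
    # (usability catastrophe) per heuristic, per screen.
    from statistics import mean

    HEURISTICS = [
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    ]

    # findings[screen][heuristic] = worst severity observed on that screen
    findings = {
        "checkout": {"Error prevention": 3, "Visibility of system status": 2},
        "signup": {"Help and documentation": 1},
    }

    for heuristic in HEURISTICS:
        scores = [screen.get(heuristic, 0) for screen in findings.values()]
        print(f"{heuristic:<55} worst={max(scores)} mean={mean(scores):.1f}")

Sorting by the worst severity gives a repeatable way to decide what to fix before running the actual usability test.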


Another good checklist: https://stayintech.com/info/UX


Very helpful article; I have seen their blog but not this wiki entry. Thanks. Anyway, I am going to test other people's web apps using these techniques. Wish me luck.


Adding to the list that already exists:

- task-oriented: don't build a complex do-it-all UI; split it into tasks, one at a time

- use simple, clear language to support what you want the user to do

- have a flow that guides the user through the task, so that the user does not need to backtrack or jump all over the page

- do the "mother test": show it to your mom; if she understands it, you are heading in the right direction, otherwise ask her what she gets and what she does not get from your app. Repeat until almost anyone can use your app


I love the mother test! Even though it takes us back to the point made by @Msurrow and @scoggs.


According to Wikipedia, a web app is:

In computing, a web application or web app is a client–server computer program in which the client (including the user interface and client-side logic) runs in a web browser.

The last two words "web browser" are very important. Are you testing your app using a variety of browsers? Or, perhaps, you're requiring everyone to use IE 6? :) I'm surprised that such a fundamental issue was left unstated.

Once you test various browsers, be sure to go in and set the minimum font size to a reasonably big number. At least 18. Maybe larger. Why not 24? I don't care if the resulting appearance on screen looks like crap, as long as everything is functional. Far too many web pages and web apps obscure large amounts of their functionality once the font size is increased.

Ask me how I know that. :)
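A rough way to automate that check, as a sketch only: Playwright, the URL and the selectors are assumptions, not anything from this thread. It opens the page in three engines, forces a large minimum font size, and reports whether a few key controls are still visible.

    # Open the app in Chromium, Firefox and WebKit, bump every font to 24px,
    # and check that the controls you care about are still on screen.
    from playwright.sync_api import sync_playwright

    URL = "https://example.com/app"              # placeholder
    KEY_CONTROLS = ["#submit", "#nav-settings"]  # placeholder selectors

    with sync_playwright() as p:
        for browser_type in (p.chromium, p.firefox, p.webkit):
            browser = browser_type.launch()
            page = browser.new_page()
            page.goto(URL)
            # Blunt approximation of a user raising the minimum font size.
            page.add_style_tag(content="* { font-size: 24px !important; }")
            for selector in KEY_CONTROLS:
                visible = page.locator(selector).is_visible()
                print(f"{browser_type.name:10} {selector:20} visible={visible}")
            browser.close()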


The only truly rigorous way is to sit users in front of the app and watch them. Makers are blind to what they don't know users don't know.


Wouldn't that transfer the subjective nature of this challenge from you to the users? For example, how do you think an iPhone user will do on an Android phone, or vice versa? Or a Windows desktop app user in a web app?


If you work in an office with non-devs -- like QAs, analysts, regular folks -- ask them. And then listen to what they say.

As a QA, I frequently have the following experience:

me: "This won't make sense to people."

dev: "Oh, but it has to be this way b/c {reasons}"

me: "People are going to get confused. They're going to use it wrong and make errors."

dev: "No, our users are smarter than that."

{6 months later}

dev: "We're redesigning our interface because the users found it too confusing."


> As a QA, I frequently have the following experience:

thank you for that impartial, completely balanced anecdote...


For apps that are already in production (and being used), Hotjar and competitors can be very useful for seeing what users are actually focused on. My last company went through 4 different iterations of their home page in a year, with the last two being redesigns based on insights from Hotjar.


Great question! I think this is a very tough subject to work with because much of "user experience" is subjective, or at least not quantifiable and thus hard to measure and compare. Also, very few good standards for this exist (afaik).

I have a background in UX/HCI but haven't worked with it for ages, so no good examples, just some general thoughts.

I think you have some good ones on your list, but here are a few you may consider adding to it:

a) Your criteria should take into account who the end user is, because that will define a lot of what counts as good or bad design/solutions in relation to your points 1-7. This is subdivided into the user's skill with technology in general and the user's domain expertise level. Is the end user a novice or an expert with tech? Is the user (or rather the primary use-case(s)) for domain experts or domain novices? I think tech ability is addressed fine by your 7 bullets (at least off the top of my head). The last one, about domain expertise, is the most interesting one to consider imho.

Here is an example: when we say that a user interface has to be user-friendly or have good usability or whatever we call it, we typically picture a GUI with a visually nice design, well-organised information, not too much, not too little, where it is easy to do stuff, and so on. But what about the Linux sysadmin - is a good interface for him a consistent theme, layout and coloured buttons etc.? No, he just wants a terminal interface, and great usability to him is if the darn thing is scriptable and supports Unix piping of input/output (or whatever is sexy to sysadmins).

In general you can say that if the user is a domain expert and the use-cases for the application are specialist work, then you want a user interface that is more complex and requires more time to learn -- and that is okay, because the expert user can invest 5-10 hours (or more) in learning the complex interface if it makes him e.g. 25% more productive across all of his tasks for the rest of the years he'll use the system. On the other hand, if the system is a landing page for a SaaS startup, you need to design something that is understandable and usable by the target user in about 20-30 seconds, otherwise he has clicked away. The last example here is a variant of the "domain novice" situation, and note that it changes little whether the user is a tech novice or a tech guru.

NB: The same user will be a different kind of user depending on what part of his job he is carrying out (this also goes for non-work scenarios) - the engineer may be happy with a very complex application for his engineering tasks, but the system for handling expenses should be simple (b/c he knows nothing about accounting).

The crucial point is that good usability is very dependent on who the user is and what the use-cases are. And you need to build that into your criteria somehow.

b) I think you may be able to add a criterion on "expectations": does the system (e.g. a button) do what the user expects it to do? This can come down to colouring and icons: the accept button should be green and not red, the floppy-disk icon should save the document, not close it, and so on. But there are also more complex things, like what the system indicates will happen when the last page of a registration wizard is completed. This is basically the Donald Norman kind of stuff I'm thinking about (affordances is the concept he talks about; it sadly became a bit overhyped amongst UX designers, but the ideas and principles are great).

c) A last thing, also very important, is to consider the context of the system (as in the IRL context). If, for example, you are designing a system for doctors or first responders, a key part of "good usability" is that the user can be interrupted and turn away from the screen a lot, and still quickly get back to the task in the system. The obvious example is being automatically logged out after 1 minute of inactivity: if the natural workflow that the system is part of simply entails that the user will get interrupted often, getting logged out all the time will make the user stop using the system. In that case the system just needs to adapt to the user's context (so that's part of evaluating whether the user interface is good or not).

This was a simple example, but the point is to consider the context of the user and the system, because that will tell you a great deal about how the system needs to work to _support_ the user, and that in turn will tell you a lot about the user interface. Context is one thing, but the system's part of the workflow (i.e. the entire job/situation) and things like tacit knowledge quickly become very important when looking at this level of usability (and evaluation/review). But those are whole research areas of their own (see e.g. Participatory Design, Computer-Supported Cooperative Work, Activity Theory, and much more. I'm getting out of touch with academia in this area, so these pointers are probably outdated by now).
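A minimal sketch of the auto-logout point: treat the idle timeout as a deliberate, context-driven decision rather than a framework default. Flask and the eight-hour value are assumptions for illustration only; for genuinely sensitive systems you would pair a long session with other safeguards (a lock screen, re-authentication for risky actions).

    # Sliding session window sized to the real workflow (e.g. a shift), so a
    # doctor or first responder who is interrupted mid-task isn't logged out.
    from datetime import timedelta
    from flask import Flask, session

    app = Flask(__name__)
    app.secret_key = "replace-me"  # placeholder

    # Idle window chosen from the users' context, not left at a default.
    app.config["PERMANENT_SESSION_LIFETIME"] = timedelta(hours=8)

    @app.before_request
    def sliding_session():
        # Permanent sessions use PERMANENT_SESSION_LIFETIME, and Flask refreshes
        # the cookie on each request, so activity keeps extending the window.
        session.permanent = True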


This is by far the deepest assessment of the situation, and it totally makes sense to consider the user and the situation. In fact, building a matrix of criteria against user/situation will benefit the whole process and everyone involved. Thank you very much! But regardless of the kind of UI, user, situation, etc., almost everything here is subjective; what do you think could be done to avoid the subjective nature of this process?


Sit at least four or five users[1] in front of the app and let them use it for a real exercise, asking them what they see (but without leading questions that would give hints a user would not get in real use).

No matter how much you analyze and measure your design, there will be expressions that they understand with the wrong meaning, buttons they should use that they can't see, and ways to think about the app that you couldn't possibly have anticipated.

So see what those problems are that you couldn't have found in advance, fix them, and put the app in front of some more users again to test whether the fixes really solve the problems.

1. https://www.nngroup.com/articles/why-you-only-need-to-test-w...



