iPad Usability: Year One (useit.com)
51 points by joshuacc on May 23, 2011 | hide | past | favorite | 22 comments



For those not familiar with Jakob Nielsen, take a look at his article, "Flash: 99% Bad":

http://www.useit.com/alertbox/20001029.html

It was written in 2000, at the height of the Flash craze. He was right then, and he is right today. UI/UX designers should pay close attention to this man's analysis.


I've seen people reading more now that they have an iPad. Content websites should consider creating a usable experience for the iPad. It's a focused and regular audience.


For an article on usability, that page was terribly hard to look at.


Why?

The font size is 1em – whatever you picked as your default font size, respecting your wishes. The line length is set in a way that it fits about 100 or so characters. That’s in my opinion a few too many (I would go for 60 to 70) but it’s certainly a very reasonable number. You should also note that the line length can be reduced by making your window narrower.

Line height is at 130%. That’s miles ahead of the awfully cramped default (look at this comment to see the awful default in action). I would add quite a bit more – mostly for aesthetic reasons – but 130% is, once again, a very reasonable number.

The text provides a summary and has an ample amount of subheadings. Paragraphs are used to give the text room to breathe.

I’m not a huge fan of the aesthetics (I would go for a serif font – not because it’s more readable, it’s not, but because it’s prettier – and I would decrease the line length and increase the line height) but I can’t find any faults with the text’s readability.


Don't conflate pretty with usable. Useit.com is one of the best sites there is about usability, and it is backed up with real data. While it may not be pretty, it is eminently usable.

Edit: fix typo


Useit.com is neither pretty nor usable. The content is good, the presentation is subpar (and I remember when it was even worse). Yes, it is accessible, and yes, you can read the content. It is still a far cry from usability.


"real data"

That's a stretch. Look how many people are in some of these studies. Well, you could look if he bothered to tell you, which he rarely does, because I'm guessing these aren't actually "studies" but interviews with some sound bites extracted to back up preconceived notions.

I'm not saying he's always wrong (that would entail gathering and presenting actual data), but he's never given me any good reason to think he's right rather than just a pundit with first-mover advantage.


Not many people are needed to identify usability problems. If you want to quantify the effects you indeed need many, many people, but Nielsen doesn’t actually do that. Three, four, five people are plenty for those kinds of studies.
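Nielsen's small-sample argument rests on his published problem-discovery model: if each test user independently uncovers a given problem with probability p, then n users are expected to find 1 − (1 − p)^n of the problems. A minimal sketch, assuming the p ≈ 0.31 average that Nielsen reports from his own projects (not a universal constant):

```python
# Nielsen & Landauer's problem-discovery model: the expected share of
# usability problems found by n test users is 1 - (1 - p)^n, where p is
# the probability that a single user uncovers a given problem.
# p = 0.31 is the average Nielsen reports, not a law; real values vary.

def problems_found(n_users, p=0.31):
    """Expected fraction of usability problems uncovered by n_users."""
    return 1 - (1 - p) ** n_users

if __name__ == "__main__":
    for n in (1, 3, 5, 15):
        print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
```

Under this model a handful of users already surfaces most problems, which is why small qualitative sessions can be defensible for *finding* issues, even though they say nothing about quantifying their effects.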


Not many people are needed to test _specific_ problems with _specific_ environments/programs/features. 24 of 26 people not being able to figure out how to sign up for Hacker News would be a clearly identified problem. Drawing overarching conclusions about the entire iPad ecosystem from 16 out of 50 million users, on 26 apps out of ~100,000, while providing no methodology and no data? That's not enough to distill into "iPad usability" for an entire year.

He'd have far more credibility if he simply said "I don't like this feature, and after asking several people, I know I'm not the only one. Here's why I don't like it, and here's what I think can be done about it."


He sits them in front of a computer and watches what they are doing. That would be a terrible way of finding specific problems. It’s a great way of searching for problems. I suggest you look into the method Nielsen actually uses.


I know how he does it, that's my point.

Would you be able to make useful, high-level recommendations to the auto industry after riding in the back seat with 26 drivers for an hour or so? Do you think it matters what kind of car you're driving? What kind of roads you're driving on? What time of day it is? What state or country you're in? Of course it does. If your conclusion is that "driving on the PCH in a convertible is fun" then, no, you don't need many participants or any scientific rigor. But if you're saying things like "the roads aren't wide enough" or "radios are too loud" or "signs aren't big enough" then you're going to need to back that up with some context and some data.

My beef here is that people who don't want to really think about this stuff but want to seem like they do will read articles like his, which draw conclusions of the latter type described above, and spout them off as authoritative science, because that's how it's presented by Jakob Nielsen, Ph.D. This confuses clients and teams and ends up having people defend their decisions against a ghost of a misinterpretation of a supposed rule, and ultimately yields an inferior product. I've seen his articles quoted verbatim by designers and clients alike, almost always solely to justify their own opinion, but with the weight of his authority.


Would you be able to make useful, high-level recommendations to the auto industry after riding in the back seat of 26 drivers for an hour or so?

I certainly do believe that to be possible, yes.


Interesting, any ideas what those might be? I'm genuinely curious because I actually can't think of any.


Do you have sites/people you prefer to useit? I'm always looking for good UX info, and would love to find gems I'm missing.


Not really, I'm not sure what I would even classify as "good UX info". There are things that work, and things that don't. Things that look good and things that don't. Things that feel good and things that don't. Things that communicate themselves well and things that don't. UX is supposedly looking for the intersection of all of those axes but they're all very subjective.

If you want to be good at UX, never ever ever go to sites like useit; you won't learn anything helpful, and you'll probably learn something harmful. You just need to get out there and experience what's available and think very hard about what's right and wrong with it. There are no rules, no laws, and the lessons you learn are difficult to translate to anything universal. The moment you start thinking about "85% of iPad users like blue buttons" or "people can only remember 7 things" you lose. Tomorrow's successful UI is going to break today's rules.

EDIT: I should clarify, I don't mean to say that I don't peruse portfolio sites and have a number of blog subscriptions, but none of them I'd feel confident recommending. The quality of such sites varies widely over time so you kind of have to just let it all meld together and draw your own conclusions.


I have never said this on Hacker News, but it definitely still follows the "to your face" rule: this is terrible, terrible advice.

A lot of Nielsen's advice certainly isn't universal, but the main thing to take away from Nielsen is not the conclusions, it's the methodology behind them. UX is not voodoo. There are certainly parts of it that are more subjective than others, but if 5 out of 10 people have a problem performing an action in your app, then you have a problem.

Nielsen provides good methodologies with which you can start performing these tests yourself.


This is the first comment I've seen that really suffers from the lack of visible points, because it is SO correct and refutes what I agree is really bad advice. But I don't think the GP should be voted into oblivion for giving bad advice (and I really don't want to turn this into another of those threads ;).

The only way to really know where the problems in your app are is by watching users use it wherever they actually use it. Without that, you will never know about the little cheat sheet they reference every single time they use your app, thanks to the "amazingly simple" navigation structure you were positive made your app easier to use (an experience I've had).

You can get some information about what questions to ask from statistical monitoring (why does this step take four times the amount of time of any other?) but you can't find the answers without real users.


I'm assuming the bad advice is the bit about not reading sites like useit, rather than the "get out there and think" part. But I guess I see it as good advice that was communicated poorly, since I agree with your response here.

Yes, you absolutely have to watch people use your site. You need to watch them use your competitor's sites. You need to watch them use sites that come up in conversations about your products. You do not need hundreds of people to make a difference. Get people off craigslist and other classifieds for $25 and have everyone that touches the product watch them use it (as video, after the fact). This input is hugely valuable. Metrics are also very important. If a button is on a page in three places, track which one they click. Measure response times, page flows, all that good stuff (although be wary of blindly conflating marketing success with usability).
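The "track which one they click" idea can be sketched as a tiny per-placement click tally. The names here (log_click, the placement labels) are illustrative, not from any real analytics API:

```python
# Minimal sketch of per-placement click tracking: the same signup button
# appears in three places on a page, and each click is logged with a
# placement tag so the placements can be compared afterwards.
from collections import Counter

clicks = Counter()

def log_click(placement):
    """Record one click on the signup button at the given placement."""
    clicks[placement] += 1

# Simulated traffic (made-up data; in practice these come from real events).
for placement in ["header", "header", "sidebar", "footer", "header"]:
    log_click(placement)

# The placement with the most clicks in this sample.
best_placement = clicks.most_common(1)[0][0]
```

The point isn't the code, which is trivial, but that the measurement is tied to your page, your users, and your button, rather than to a general rule about where buttons belong.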

If these are the lessons you've learned from Nielsen, I can only say I'm surprised, because I see his work as the opposite approach. Perhaps because I've seen him cited so often over the years by people who don't think they need to do this because they think Nielsen already answered that question. Nobody I know who actually embraces this kind of hands-on testing has any respect for his "studies".

What I'm trying to say is: don't try to distill universal rules from this narrowly-scoped input. Don't ask a small number of people about a small number of experiences that have no logical connection, and try to infer some wisdom from it that you can apply to another group of people in another experience that may not be at all similar. I've had a case where adding clearer text to a vaguely worded button killed performance. Others where making a button look considerably "worse" improved success dramatically. Sometimes you need to make the font bigger, or add an icon, or remove an icon, or have a popup, or add two pages to your wizard, or not even have a wizard, or whatever. Sometimes you need to do things the wrong way because you lose from inconsistency what you gained from local quality improvements. But usually you just need to do things the way that makes sense to you, learn from your mistakes because you fully understand the context, and do it better next time.


Not sure what the "to your face" rule is?

His advice is almost _all_ conclusions phrased as if to be applied universally. For example, from the PDF: "If you have a lot of content (such as product information) to display, use a separate page rather than a modal view". That is not a friendly tip or even a very constrained piece of advice. It doesn't say "consider using" or "try", it says "use". If you think I'm being nitpicky, you haven't been subjected to project owners sending you emails with quips like that, expecting you to defend yourself against the laws set down by the experts. It may actually make sense in many cases, but there are certainly cases where it's not true.

As for methodology, he offers very little insight and no new ideas. You can spend an hour reading Steve Krug's book and understand the mechanics of usability testing. The rest is common sense and experience.

UX is not voodoo, it's an art. Usability is a holistic quality, not an aggregation like uptime or code coverage or conversion rates. It's pretty widely accepted that the TiVo is a hallmark of usability. Why? Can you answer that only in statistics? Can you then justify why you picked those particular statistics and why they determine "usability"? Are your results reproducible? I'm not saying you actually need to run through the full scientific method on usability; I'm just saying that running through only parts of it to sound fancy benefits nobody. You need to learn when things are just better, and you need to admit that it's ultimately an opinion, an idea, an art form.


The "to your face" rule is "don't put anything in a comment you wouldn't say to a person's face".


In this case he has been very explicit about the number and types of people in the study.

From the article: "In total, 16 iPad users participated in the new study. Half were men, half were women. The age distribution was fairly even for 14 users between the ages of 21–50 years; we also had 2 users older than 50. Occupations spanned the gamut, from personal chef to realtor to vice president of human resources. "

I think the key is not to think in terms of 'statistically significant' as in a formal scientific study of a single fact, but in terms of 'useful feedback from real users.'


In what way? I found it very readable.



