User testing in the wild: he has never used a computer (jboriss.wordpress.com)
513 points by tbassetto on July 7, 2011 | 199 comments

Sometimes I think technology just leaves people behind, with no hope of catching up.

About 10 years ago, one weekend I drove up to an ATM in Upstate NY, and found an elderly couple standing there, looking confused. After a minute, I stopped the car and got out. Both were well-dressed (probably coming home from church); the gentleman was about 90; his wife was a similar age.

"Is everything OK?" I asked.

"Could you help us?", she said (he was too proud to ask for help, I guess).

"Sure! What can I do for you?" I replied.

"You see, the bank sent us this card and said we should be using it, instead of going to the teller inside. But we don't know what to do with it."

Wow. I was shocked. The bank (HSBC) had not even told them how to use the card. And I was surprised that there existed people who didn't know how to use an ATM in this country!

So I showed them where to insert the card and told him to enter his PIN (while I looked away so I wouldn't see it). I advised them about the security implications, and about always collecting the card when you're done (those ATMs kept the card for the duration, unlike newer ones where you just swipe).

I was left with a sad feeling after that, which I can still remember. If I'm ever designing software for general use, I always think about that couple and try to see it from their eyes.

My brother and I were talking about technology gaps today, specifically about how some older people get stuck day after day in front of their TVs watching trash because they never picked up anything past the VCR, or even the VCR at all (in the case of my grandparents).

We talked about a simplified Netflix-type account/device that would make suggestions, be dead simple to use, and allow an 80-90 year old to re-watch old classics, view documentaries and the like without commercials, repeats, confusion, etc. If something reasonable existed, who wouldn't shout their elderly parents or grandparents something like this for $xx/mo? Ideally, older generations would have a diverse range of friends and social activities to while away their hours, but often this just isn't the case: they are stubborn, face language barriers if they're immigrants, etc.

I think this would be a good idea -- though keeping a lid on the feature creep would be a challenge.

OTOH it wouldn't help everyone: my grandmother treats anything electronic as 'too complicated' without even trying at this point, though the TV remote is the conspicuous exception to the rule.

This. Also, equally important is the usability of the hardware. E.g. compared to the dot-matrix printers of the 90s, it is far easier to feed paper into laser printers.

Children normally find it easier to adapt to new things. E.g. "Hole in the Wall" experiments: http://www.greenstar.org/butterflies/Hole-in-the-Wall.htm

Not sure if this is slightly off-topic, but sometimes users ask pretty interesting questions. Circa 1997, I visited a computer training institute to meet one of my friends, who was a lab technician. There was a batch of interesting people: only one of them would sit in front of a desktop while all 38 would line up around him, despite terminals being available for all of them. Now, this guy is trying to type "dir /s", and while hitting the Spacebar a question pops into his mind: "Why do you need to hit the Spacebar with your thumb?" I started thinking about how it is more efficient to hit the Spacebar with the thumb, the notion of finger travel in miles per annum, etc., while my friend replied curtly to that 55-year-old student: "Sir, do you start your scooter by hitting the `kick' with your foot or with your palm?" He gulped down the tobacco he was chewing.

During that year, I rolled out a desktop application in a smallish organization and was astonished to meet users who were typists-turned-computer-operators. I couldn't grasp why they found my code so user-friendly. Turned out that a key decision at the beginning of the project helped me: all labels in the application were in Marathi.

Re: the parent's post, in the case of the ATM, a microphone disguised as a telephone handset connected to the machine itself, driven by an AI program (without the "press 1 for foo, press 2 for bar" type of idiocy), might be of great help.

Children normally find it easier to adapt to new things. E.g. "Hole in the Wall" experiments: http://www.greenstar.org/butterflies/Hole-in-the-Wall.htm

Related TED talk: http://www.ted.com/talks/sugata_mitra_the_child_driven_educa...

There are 50 or 60 million cable-TV connections in India at this point in time. The guys who set up the meters, splice the coaxial cables, make the connection to the house, etc., are very similar to these kids. They don't know what they're doing. They only know that if you do these things, you'll get the cable channel. And they've managed to [install] 60 million cable connections so far.

Not so different from many of us (like myself) who learned programming functionally. Very inspiring article, thanks.

Thank you for this link! I read it years ago but could never remember what the kids called the mouse pointer ("sui", Hindi for "needle"). I always wanted to say kudzu for some reason.

Thanks for the link to that Indian experiment.

Our ATMs (Portugal) are very explicit: http://casinoonline.co.pt/images/thumbs/casinomultibanco.gif

The real version is animated (you see the card entering the slot). EDIT: Someone made a video... https://www.youtube.com/watch?v=-zzEXfX_Otk

I've always wondered why newer ATMs like to retain the card until after the transaction. I've left my card in an ATM two or three times while in a rush on account of that.

Newer/better ones will force you to take your card back before dispensing cash, which I think is an elegant solution. Keeping your card in the first place is, I suspect, a security thing.

I have taken back my card and walked away forgetting to take my cash on two occasions when using such systems. The stupidity of users knows no bounds.

There's a definite trade-off here: on the one hand, people are less likely to forget to take their cash; on the other hand, people who find a forgotten card in an ATM are unlikely to abuse it, but if they find cash, they'll probably just pocket it.

All the bank ATMs I've encountered here in the UK won't dispense cash until you've taken your card back out of the slot.

The first ATM I used in Thailand a few years ago dispensed the cash and then gave the card back. Needless to say, I walked off with my money and no card. That turned into a fun holiday....

Most US ATMs can detain cards. This can be done in response to a request from the issuing bank, or if the card is left in the slot after the transaction as a safeguard.

I'd suspect that they want to keep the opportunity to seize the card until the last possible moment.

I've heard that with some of these ATMs, if you enter an invalid PIN too many times, it'll kick you out and eat the card. I've never tried this, personally.

It's happened to me.

Definitely happened to me too. Also, I've noticed the ATMs at convenience stores, bars, on street corners, etc. are usually the swipe ones, and the ones at banks are usually the keep-your-card ones, for some reason. To me, the ones that give my card back right away are much, much nicer: it's much harder for me to forget my cash than my card, and it means I only have one thing (which is my goal anyway) to get from the machine. It's nice that the ones around the places I wind up when I'm tipsy in the middle of the night are the ones set up so they're hardest for me to mess up.

I'm not sure if it's simply coincidence, but I've noticed that while most new ATMs (in the US) are swipe only, it seems that on ATMs that allow deposits the machine retains the card. I'm not sure if there's a reason the card is retained for deposits. I also could just be tricking myself with anecdotal evidence, but I've definitely seen new ATMs side by side with a swipe withdrawal-only and a retaining withdrawal-and-deposit.

It works well for me. I always remember I need to get my card back, but more than once I have almost walked away from the kind that return your card first without taking my cash out of the slot because I was in a hurry.

I believe this is a US-only problem. I haven't seen that sort of ATM in the last ten years where I'm from (India), and I was pretty shocked to see them in the US.

First, I don't believe it's a problem, but merely a different procedure. Second, in France (and most of Western Europe, I think), all ATMs keep your card for the simple reason that the magnetic stripe (the part of the card that's in contact with the ATM when you swipe) is never used, to my knowledge. It's the chip that is used to verify the identity, locally record the transaction, and more.

Is that the reason? The chip on the card is positioned so that readers can be designed to only need the card inserted about halfway.

I think it has more to do with giving the machine the ability to not give the card back. :-)

What possible use case exists for that ability? The card's generally blocked after a set number of incorrect attempts anyway, so it's not as if it could be used anywhere else.

ATMs that keep your card for the duration of the transaction only serve to decrease trust in the mechanism.

The card's generally blocked after a set number of incorrect attempts anyway, so it's not as if it could be used anywhere else.

Remember, credit cards originated before there was universal, ubiquitous connectivity. It's still no guarantee (think of a small merchant at some outdoor festival). EMV (Chip and PIN) cards have an offline mode which a bad guy can use. http://www.cl.cam.ac.uk/~mkb23/interceptor/

ATMs that keep your card for the duration of the transaction only serve to decrease trust in the mechanism.

In the past, there were modes that allowed you to overdraw your account with an ATM. I imagine this was done in consideration for unreliable communications links or banks that needed downtime in their account balances for batch transaction processing.

"Trust" is a deep and strange concept, but at the end of the day US ATM cards are only a mag stripe and a 4-digit PIN. We'd best not expect too much from them. :-)

Dip ATMs (where you insert your card then pull it out) wouldn't have any trouble with chips either. I can see why it'd be an issue with swipe ATMs though.

I still think ATMs that eat your card up are terrible.

If you don't have a chip on your card, an ATM must ask the issuing bank to validate your PIN each time you type it in. Almost all communicate using the ISO-8583 protocol.

ISO-8583 defines two response codes that cause an ATM to retain your card:

41 - Pickup card (lost card)
43 - Pickup card (stolen card)

It is up to the issuing bank to decide whether to return those codes, or not.
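To make the flow concrete, here is a minimal sketch of how an ATM might branch on those response codes. Real ISO-8583 messages are bitmap-encoded binary, not dictionaries; the field layout here is simplified purely for illustration, and the action strings are made up.

```python
# Hypothetical sketch: an ATM acting on the ISO-8583 response code
# (data element 39) returned by the issuing bank. Real messages are
# bitmap-encoded; a plain dict stands in for the parsed message here.

RETAIN_CODES = {
    "41": "pickup card (lost card)",
    "43": "pickup card (stolen card)",
}

def handle_authorization_response(response: dict) -> str:
    """Decide what the ATM does based on field 39 (response code)."""
    code = response.get("39", "")
    if code == "00":
        return "approve: dispense cash, return card"
    if code in RETAIN_CODES:
        return f"retain card: {RETAIN_CODES[code]}"
    return "decline: return card"

print(handle_authorization_response({"39": "41"}))
# retain card: pickup card (lost card)
```

The key point survives the simplification: the machine never decides on its own to eat your card; it only carries out the verdict the issuing bank sends back.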

Having had a machine in Romania keep (and supposedly destroy) my card in the first week of a 10-week visit in 2004, I can attest that this is not a US-only thing.

I really wish all software engineers could have an experience like that, because it almost always makes them more empathetic, which allows them to make better software.

My experience first came when I was helping my dad get a webcam working and I've gone looking for them since, and I'm sure that has gone a long way toward making me a better engineer and a better product manager.

The company I work for produces software for users with zero computer literacy. We're a very small outfit serving a very specific niche. We produce new and highly specialized/distinct programs at a rate of about one or two per week. Our users are often subjected to very tight time requirements, so we can't expect them to develop familiarity over a gradual learning curve. They must immediately know what to do, and our software must support them, computer skills be damned.

Almost ironically, I struggle to hold our team to our own UI best practices; to ensure our interfaces and content are approachable. We're always crushed by deadlines — the time it takes to be sure we haven't made any literacy assumptions is time we don't have. It's my job to make developing this sort of UI/UX absolutely painless. Unfortunately, I get almost no opportunities to actually test with our zero-experience users. This article was a wonderful read, and gave me a very specific type of knowing satisfaction. It's a rare opportunity indeed.

As for those best practices, here are a few examples:

• Emphasize actionable items with animations, and textual and pictorial descriptions of the actions that must be taken. Instead of saying "right click", we might show a picture of the mouse with the right button highlighted, with a "clicking" animation indicating the action to be performed.

• Use iconography and terminology derived from the subject matter or real-world objects, instead of common "abstract" UI elements. For instance, a light switch (indicated as actionable, of course) instead of a check box.

• Simplify user interactions and interfaces to the absolute minimum. Reduce the actions the user must perform in order to be satisfied. Reduce the number of options presented to the user at any one time.

In many ways, designing an interface for a zero-experience user is like choosing a programming language: You want the language that lets you describe exactly the program you need to make in as few instructions as possible. Likewise, you want an interface that lets your users describe exactly the action they need the program to perform in as few interactions as possible.

Some fascinating insights. I know you and your team are busy, but it'd be great to read more of your learnings. Do you have a company blog or anything where you document stuff like this?

We have a blog, but it's for our customers. We use it to give away some of our generic training material as a loss-leader and attract clients to our premium, customized software. It doesn't cover our development stories. I'd find it fun to do such a blog, but I'm not sure the rest of the team would be willing to put in the time necessary to make it rewarding to read.

You and your company almost certainly have some incredible insights into "The Other Half". I would love reading any sort of blog you have.

We have this blog, but it's all domain-specific knowledge offered as a loss-leader for the benefit of our prospective customers: http://www.carldyke.com/newsletters-that-teach

We don't have any sort of outlet for our experiences as developers in this market / with this audience. If we ever start one, I'll be sure to post about it here on HN.

I wish you'd start one now. There is a huge population of developers out there who are not only oblivious to usability issues, but even get offended at the notion that unique insights are required to make something usable rather than just useful.

On a related note, I'd like to point out that your blog is, for lack of a better term, scary. The huge-ass fixed-position header gets in the way, the line spacing is way too tight, and the colors are too garish and uncoordinated. My first reaction when I saw the page was "augh!", which then faded as my brain slowly peeled away all the visual clutter and started noticing the content.

Yes, the blog is scary. We're doing a site-wide restyle in the next month or so. But thanks for the additional motivation, in both paragraphs. I'm seriously thinking about starting such a blog, thanks to all the requests here. There are a number of factors limiting my inclination, but it would be a great way to keep my team focussed on these practices.

That reminds me of one naive user I was helping who had serious trouble relating the 2D movements of the mouse to the 2D movements of the on-screen cursor. That relationship seems like it would be very challenging to explain if someone's not getting it and there's no teacher available.

It is surprisingly challenging. The mouse doesn't input position, it inputs movement, and not all movement, but movement of a specific point on a plane aligned to itself (not the screen, not the hand) and only if you slide it against a surface, and not your exact movement, due to acceleration and stopping at screen edges. And then it has buttons! And a wheel! And an ominous red light!

A good start is to just put their hand on the mouse, put your hand over theirs, and tell them to watch the cursor.

Hand-over-hand is a well-known learning technique for children as well as learning-disabled folks, as it emphasizes muscle memory.

It took my mother quite a while to fully understand why I said I was "scrolling down" when the text on the screen was moving upward.

Yeah, we encounter this situation an awful lot. But this isn't just a problem for zero-experience users. In 3D video games, there is often a setting to invert Y-axis input. In some third-person perspective games you can set your Y-axis mapping for character input (aiming) and camera/viewport rotation separately. Very helpful, and one of those silently critical features that is never lusted after (it's not a "bullet point on the box"), but can make a tremendous difference in user experience.

I think the main differentiator for inverted Y is whether you map the motion of the mouse into the vertical plane (i.e. directing your gaze, non-inverted) or treat it as a flightstick for your neck (i.e. directing forward/backward tilt, inverted Y).
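In code, that distinction is literally a single sign flip. A rough sketch (the function name, sensitivity value, and sign conventions are all made up for illustration; engines differ on which direction is positive):

```python
def apply_look_input(pitch: float, dy: float, invert_y: bool,
                     sensitivity: float = 0.1) -> float:
    """Update camera pitch (degrees) from vertical mouse movement dy.

    Non-inverted: mouse motion maps directly into the vertical plane,
    so dy moves the gaze the same way the hand moved. Inverted
    ("flightstick for your neck"): the sign is flipped, so pushing
    forward tilts the view down.
    """
    delta = dy * sensitivity
    if invert_y:
        delta = -delta
    # Clamp so the camera can't flip over backwards.
    return max(-90.0, min(90.0, pitch + delta))
```

Everything else about the control scheme is identical between the two camps, which is why the toggle is so cheap to offer and so costly to omit.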

Unfortunately, I get almost no opportunities to actually test with our zero-experience users.

Seems like you just have to hit the mall. What kind of software are you guys making?

We make interactive training materials for people working in the heavy industries (Mining, Oil, Forestry, Pulp & Paper, etc). A typical user is male, in his 30s or 40s, with no interest in technology, a short attention span and temper, low self-esteem with respect to their work, and a preference for the "physical" over anything "abstract". We train them to effectively operate and (especially) maintain/repair the equipment they use in their work.

This equipment is the heavy machinery they tell you not to operate when you take medication, or the sort you see on the Discovery Channel when they talk about the world's biggest construction projects. Machines described by schematics with thousands of components, that come in manuals thousands of pages long. These machines have hydraulic, electrical, and process (computational) systems that all interact in a complex ballet of physical forces and remarkable engineering. And they all break down — constantly.

When the equipment fails, these lumberjack-types have to look up the schematics and deduce exactly what broke on their machine. An hour of downtime can cost tens of thousands of dollars, so they're under tremendous pressure to get repairs done quickly. However, they aren't analytical thinkers (in the typical case), so they aren't great at deducing the root cause of failure. They resort to "part-swapping", where they replace components until the problem goes away. Some of these components cost upwards of a hundred thousand dollars and can take 8 hours to install.

This is where we come in. By commission, we take the manuals for a machine, and compile the anecdotal experiences of its operators and mechanics. We then produce a highly-interactive, true-to-life simulation of the machine, with animated schematics that show off exactly what's happening while it's running, in both normal and faulted states. With these tools, they can actually see how the "damn thing" works, top-to-bottom, inside-and-out. This helps them operate in a way that will avoid driving the machines to fail, and grok the machine well-enough to isolate root causes when it does fail.

I could go on and on about this... another time, perhaps.

Alas, we don't have a blog about what we make. Perhaps I should start one.

Sounds fascinating.

Seems like there is an inefficiency in the system somewhere, though -- if downtime is so expensive, why are they not paying better-trained people to be mechanics? At $10K/hour, a skilled mechanic/engineer would only have to save an hour or two a month to justify having them sit around idle most of the time.

You could say the same thing about RIM, Nokia, or Microsoft. These companies are no doubt full of brilliant people, and they have the finances to hire more, but the ability of your workforce doesn't solve every problem.

A lot of these industrial companies do have very bright engineers on staff, and they regularly hire consultants and analysts, and they have the money to pay companies like mine to create highly-specific simulations. I'm not sure where the balance is struck, or why they have the organization that they do, but they're suffering on many levels without the insight to address it.

For my own professional development, I would like to better understand the motivations of these industrial mega-corps. I get the sense that they often don't understand themselves, and suffer from a sort of cultural poisoning.

The "meta-game" of my business is figuring out how each of these companies think. Ultimately, we're hired to solve problems, and the software we make is but a tool to this end. You'd be surprised by how often we're commissioned for a tremendous project, with specific requests for software, which is completed to the great stated satisfaction of the client... only to discover that they haven't actually used what we've built.

The mechanical failure of equipment and the hardship of the repair people is a symptom. Training is a remedy, but sometimes it's for the wrong illness.

That sounds awesome. Seriously - I absolutely love what you are doing. Having a father and brother in construction (and having worked construction myself - although not at the scale you describe) I'm particularly sensitive to these kinds of issues.

You really should start a blog, I would read it.

Thank you for your detailed description, it was fascinating. Is your market under-served compared to consumer facing apps, and etc.?

Yes, but that doesn't give us much of an advantage. Most of the companies we sell to are monolithic organizations that are very set in their ways, despite knowing through-and-through that they have serious deficiencies in their workforce and that computer-based training simulations like ours are extremely effective. They cry to the heavens about their need for fundamentally better training, but it's hard to get them to adopt anything new or unfamiliar.

I imagine it is in some ways similar to the experience of B2B vendors working at the mega-enterprise level, except in our case we're dealing with organizations that are almost universally resistant to computers themselves, in addition to the usual bureaucracy. Fascinating market, really; fundamentally different from the other markets I've worked in, both in terms of development and business constraints.

Joe knew nothing about computers, so he focused on the only item he recognized: text. Icons, buttons, and interface elements Joe ignored completely

Reminds me of my mother. She had a stand-alone word processor back in the day, w/ keyboard, monochrome screen, floppy drive, and daisy wheel. All commands were performed through the keyboard, with prompts on the bottom two lines of the screen, ala emacs.

She was real pro w/ this thing. She taught herself how to use it, and never needed any help from me.

Fast forward to her first personal computer, w/ mouse and icons, and she didn't get it. 10 years later and she still doesn't get it. In fact, it is a major source of anxiety. She doesn't explore the interface because she might get lost and won't be able to find her way back to where she was. What do all those icons do? Who knows. She follows a very narrow course through the 4 or 5 tasks that she's familiar with, and that's it. She's almost superstitious about it.

None of this would be terribly remarkable if I didn't know she'd been such an expert user of the old standalone word processor. But she was. And whatever it was about the old machine that worked, it didn't carry over to the new age of mice & GUIs.

[for a similar perspective, see the discussion over at metafilter: http://www.metafilter.com/105309/You-have-to-click-on-the-te...]

I personally suspect that some of the confidence gained on the word processor might have been because it was more of an appliance, and didn't have a persistent state. When powered on, it was already set to work on a specific task, and anything you changed in its functions or settings would be reset if you pulled the plug or switched it off.

Like an old game console. Perhaps she'd like a computer with an OS closer to that, like an iPad?

Very interesting experience. The real issue is deciding what assumptions you want to make about your audience. Even people who are not completely new to computers may act like people who have never used one before, so making as few assumptions as possible is probably best unless you have a strong degree of confidence.

A few weeks ago, I visited a cousin who had recently bought a new fridge with a "high-tech" water dispenser. I'm a big water drinker. When I went to the fridge to get some cold water, I felt really stupid for not knowing how to work a freaking water dispenser. It had a ton of buttons, and the real button to make it give you water looked like it was part of the door, not a pressable thing. The design could have been done better.

How many times have you been at someone's house and needed to use their microwave and it takes more time figuring out how to make the thing go than to actually heat the food? And even if you've used the exact same microwave 4 weeks ago, it still feels brand new and you feel just as lost.

Whether it's directions to a location or how to use an app/device, it may not stick.

Designing for computer-illiterate but decently intelligent people seems like a worthy ideal to strive for. You may never get to a point where someone brand new to computers will feel comfortable with your app, but keeping it in mind might help guide us into making small tweaks that can add up to a big difference.

One example: With WordPress, you can add a search element to your site/blog. The default language of the label is "To search, type and hit enter." This is probably more clear than "Search this site" or plain "Search" but maybe not as clear as "To search, click here, type and hit enter."

Yes, it's more verbose. No, I'm not suggesting that WP should change the default text.

You have to strike a balance between being clear and not boring your audience by telling them what they already know but when in doubt, err on the side of clarity.

We have two microwaves at the office. One has two knobs you can turn: one for intensity, one for time. The other has a ton of features, multiple buttons, and only one turnable knob. Pressing the buttons puts the single knob into different modes and thus allows you to adjust time and intensity and lots of other things.

Guess which interface is universally preferred? (And I work in the company of lots of smart people here in Cambridge.)

I think the ideal microwave would have 2 buttons: one for start and the other for switching between microwave and defrost. The start button would give you 30 second increments of time. What more do you need?

But then you also need a display for how much time you selected, and how much is left. And perhaps some system to decrease the selected time.

A simple knob gives you all that, and no button required at all.

I like this, but sometimes you want to defrost something for 15 minutes. I think a [plus 5 min] button would be nicer than hitting a button 30 times

I agree that you seldom want more than two different settings for the power level. But for the timing, knobs are good. They combine data presentation with input.

For power, perhaps a two-state switch would work nicely.
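The two-button proposal is small enough to model in a few lines of state. A toy sketch (class name, 30-second step, and mode names are all invented; a real oven would also need a display and a timer loop):

```python
class TwoButtonMicrowave:
    """Toy model of the two-button microwave discussed above:
    one button adds 30 seconds of cook time, the other toggles
    between full power and defrost."""

    STEP = 30  # seconds added per press of the start button

    def __init__(self):
        self.seconds_left = 0
        self.mode = "microwave"

    def press_start(self):
        # Each press extends (or begins) the cook time.
        self.seconds_left += self.STEP

    def press_mode(self):
        # Toggle between the two power settings.
        self.mode = "defrost" if self.mode == "microwave" else "microwave"

oven = TwoButtonMicrowave()
oven.press_mode()        # switch to defrost
for _ in range(3):
    oven.press_start()   # three presses -> 90 seconds
print(oven.mode, oven.seconds_left)
# defrost 90
```

Writing it out this way also exposes the objection raised above: with only additive presses, a 15-minute defrost takes 30 presses, which is exactly the gap a knob (or a [plus 5 min] button) closes.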

A person in 2011 who has never used a computer has far more serious issues (or "differences") to the way they approach life than you can solve by redesigning menus and layout.

This would be like approaching someone in their teens who still cannot read and trying to design a book you are writing to be more accessible to them instead of your general readership.

It's simply a matter of hands-on use, over time. I've converted a few AOL users over the years to Firefox, etc., and there is absolutely nothing you can do to improve their experience beyond a very basic level. They just have to use it for a year and go through their own trial-and-error of learning.

His point was hardly that we should design interfaces for computer illiterates, but that a person who has never used a computer may provide valuable information on how humans try to approach new interfaces in general. This information can help us put emphasis on the right things: Instead of a bunch of buttons, concise text that leads the user. I imagine something like "Your first time using Internet Explorer? Click -here- to get more information." would have helped him greatly. (not in this particular situation, but if he were at home with his newly bought computer and properly invested in his pursuit of email, then yes.)

Exactly. I've, perhaps foolishly, designed mock-ups and interfaces before where I've used icons instead of text explaining what it is, and I'd often think to myself "It's ok - people will probably click around to learn what that does". That's probably wishful thinking on my part.

Although I am all for user experience testing, I also agree with you on this point. To be perfectly honest, my first thought when reading this was "well, that's not really fair; imagine the sorts of things that user would have learned before they even got to the browser". That is to say, if they were brand new to the computer, they would have had to learn enough to find the browser and run it before the point where this test started. The things the user learned may very well have changed their reactions inside the browser.

Even so, it's still a great article, as I think sometimes that 'first-time user experience' can also translate to 'new to English user experience', too.

The way he floundered looking for somewhere to get started was quite instructive. I've used several programs over the years that would have really benefitted from a UI element that indicated 'click here if you have no idea how this program works', and that took you to useful help.

Not every aspect of a program has to be designed for beginners. But having just one clear item to get people started might be a boon. It doesn't have to assume no knowledge of computers; perhaps just no knowledge of the problem domain or the particular program's workflow.

Well, there are millions of such people in India, and they are a big market.

No they're not, because they're the people who don't have a cent to spend, and if they do they'll spend it on food.

There are millions who do have some money, but those already know how phones, computers etc. work.

People are not upright beasts whose only needs are food and shelter.

People also might be perfectly well fed and healthy, but live in rural areas, or be elderly, or have learning disabilities.

People who were once very poor can start receiving extra income (and that is particularly true in the BRIC countries) and gain access to computers.

People interact with new interfaces all the time, and not all user interaction occurs with a computer.

So, there definitely is a market for those complete novices. Regardless, understanding how people interact with new interfaces for the first time is interesting. Joe's case is only a single instance of this situation (and perhaps extreme, given the context). Every time a new interaction model is developed, the insights from such an experiment might prove useful.

Your argument is logical, but, you see, you are wrong. There is something about human beings that makes them act irrationally. Many Indians who live on less than $2 a day own mobile phones.

Coca-Cola and Pepsi realized that they needed to tap this market, so they came up with very small bottles that cost around half the price of a normal bottle, and this strategy got them millions.

Similarly for stuff like shampoo, ice cream and so on. The same thing can be applied to electronic items as well.

This is not necessarily irrational, but a different expression of relative preference compared to the "most sensible" case.

The reason I say this is because you cannot define a "rational" set of relative preferences. It is hard to say whether or not someone insanely frugal like Sam Walton is acting rationally or not, for example.

Irrational behavior, in this context, could be that someone does value food over phones, but somehow ends up constantly buying phones to his misfortune. Not the same as someone who actually values phones over food.

Some people just don't have the need for a computer. I don't see why that would be a problem if they are happy with how their life is going without it.

My grandfather bought a computer in 2000 to replace his ancient word processor. When I visited him, he said he'd been using it a lot with Microsoft Works. But I couldn't find any documents on the hard drive. It was only when I found a huge drawer full of used floppies and remembered what the "save" button looked like that comprehension dawned...

Many icon sets (on Linux, at least) use a picture of a hard drive instead of a floppy [1]. Which, though more correct, still worries me. Anybody who recognizes the icon as a picture of a hard drive isn't going to be confused by the concept of saving...do we need a similar icon that uses a picture of an entire computer instead? Should it be a laptop or desktop?

[1] - http://cshared.com/wp-content/uploads/2011/03/Screenshot-xam...

Why do we still need to save explicitly?

1) because people tend to like clicking save or pressing ctrl-s.

2) to complement save-as (aka, poor-man's branch)

to complement save-as (aka, poor-man's branch)

Exactly. Today we use save and save-as as crippled forms of versioning. We can do better.

Save-as also provides the function of being able to save to different media/locations. Furthermore, it is easy. Power users already have their more powerful alternatives and non-power users are used to what they know.

If it ain't broke...

It is broke, though.

How many times have you heard someone complain they failed to save their work? It just shouldn't happen: the computer knows exactly what was input, so why should it ever lose track of it?

The "Save" function is a carry-over from the time when storage was expensive and slow, and humans had to make decisions about what was worth saving. Now computers generate far more useless log files, saved forever, than a human could possibly produce in a word processor, and yet we are still asked to decide whether we really meant to put our inputs into the computer.

That shouldn't happen any more, and with good software it doesn't.

In Google Docs, for example, there is no "Save" function: it happens every time you press a button. There is no "Save As"; instead there are "Rename..", "Make a Copy.." and "Download As..", which perform those distinct functions rather than overloading "Save As..".
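
The model described here (persist on every edit, keep versions instead of overloading "Save As") can be sketched in a few lines. This is only a toy illustration; the `AutoSaver` class and the on-disk layout are invented for the example, not any shipping editor's design:

```python
import os
import tempfile
from pathlib import Path

class AutoSaver:
    """Toy "no Save button" model: every change is written to disk
    immediately, and each write also keeps a numbered snapshot, so the
    user never loses work. Hypothetical helper, not a real editor's API."""

    def __init__(self, path):
        self.path = Path(path)
        # Snapshots live in a sibling directory, e.g. doc.txt.history/
        self.history = Path(str(self.path) + ".history")
        self.history.mkdir(exist_ok=True)
        self.revision = 0

    def update(self, text):
        self.path.write_text(text)           # persist on every edit
        self.revision += 1                   # cheap linear version history
        (self.history / ("%06d.txt" % self.revision)).write_text(text)

# Demo: two edits, zero clicks on "Save".
doc = Path(tempfile.mkdtemp()) / "doc.txt"
saver = AutoSaver(doc)
saver.update("hello")
saver.update("hello world")
latest = doc.read_text()
snapshots = sorted(os.listdir(saver.history))
```

Note the two concerns are separate, as the thread argues: the main file always reflects the latest state, while the snapshot directory plays the role of "poor-man's branch" that "Save As" used to fill.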

Save-as also provides the function of being able to save to different media/locations.

Yeah, that too. Duplicate, or export, or copy.

Power users already have their more powerful alternatives and non-power users are used to what they know.

What?! Why should only “power users” be allowed the luxury of never losing important data? We can make that easy too.

I have no idea what point you are attempting to make. Non-power users don't need full-blown version control; there are plenty of more straightforward ways of making sure users don't lose their data than that. Save-as is not mutually exclusive with automated backup systems, or undo systems.

Furthermore, nobody is forbidding them from using the tools "power-users" use. The only difference between power users and regular users is what tools they choose to use.

I have no idea what point you are attempting to make.

Do you believe the solutions we have today are decent enough and cannot be improved further? I wish handling files was so simple that even those you call “non power-users” (i.e. pretty much everyone) could work with versioned files.

Note that I am neither saying it's easy nor that it is appropriate for every program. But it is definitely possible: http://www.apple.com/macosx/whats-new/auto-save.html

"I wish handling files was so simple that even those you call “non power-users” (i.e. pretty much everyone) could work with versioned files."

We have that already, it's called persistent undo and/or save as. Stunningly, a versioning system designed for the technologically illiterate doesn't measure up to one designed for coders.

Auto-save is a separate issue, but we have that as well...

This came up elsewhere in the thread, see my response there: http://news.ycombinator.com/item?id=2738851

Glad that OSX Lion will do away with the need for this annoying "feature" completely.

I've got my parents using Spotlight and they just dump files into the Documents folder. They are much happier with find > file than doing browse > navigate > ... > file.

If they never had to select to save files to begin with, it would be even easier.

She should be commended for her patience and willingness to help this man. Several years ago I spent about 2 hours helping an elderly woman do pretty much the same thing, who did not get nearly this far and she gave up. The concept of a mouse moving the cursor on the screen was beyond her comprehension, yet otherwise she was a perfectly intelligent, well-read woman. It may have been my fault for not explaining it properly, but it is certainly true that modern interface design obviously takes for granted that we already know the basics. Who would have thought that 'help' offers none? And 'suggested sites' gives some bizarre privacy warning?

I wonder, however, how Joe would have fared with an iPad?

Edit: oops on the gender.

I gave my grandma (who’d never used a computer before) an iPad for Christmas. She still doesn’t use it much (just email, really) but she grasped the direct touch interaction straight away. Swipes were fine as well; there’s a direct feedback model where if you start dragging a webpage it animates during the process.

Where she has difficulty is understanding what you’d use the net for (“You can get recipes on the internet?”). She needed it though - her current way of getting information is Teletext (UK-wide information system built into analogue TV broadcasts). That gives her weather, stocks+shares, TV listings and news, but is being turned off next year when they turn off analogue broadcasts to free up spectrum space. Digital terrestrial TV has the 'red button' info system but it contains dramatically less information.

It is funny you mention Teletext, my grandpa uses that almost exclusively for almost everything, especially flight status when he is coming to pick someone up from the airport.

It is also generally presented in a much less confusing way than any other format, and doesn't have thousands of distracting ads all over the place.

Better move him onto something else this year then. The last transmitter is being switched off in April 2012:


This is in The Netherlands, not sure if they have the same plans on switching it off.

Terrestrial TV in the Netherlands apparently completed the digital transition in 2006:


Same article says though that 90% of the Netherlands uses cable TV and that’s mostly still analogue (guess that’s how he gets Teletext?)

Yep, definitely over the cable not over the air.

Jumping on this iPad bandwagon. My 90-year-old grandpa hasn't ever used a computer; he used an old Microsoft WebTV since about 1998 so that he could send and receive email and pictures, and the only other things he knew how to do with it were check stocks and weather.

Apparently the last remaining WebTV portal server at Microsoft is getting dusty and flaking out sometimes, so we bought him an iPad (his vision is pretty good.) We preloaded it with bookmarks to local news, stocks, weather, email, a few little reference sorts of apps, and a few games (e.g. he really likes Cribbage.)

At first, he was pretty skeptical, but we spent a few hours showing him all the stuff he could do with it, and he figured it out pretty easily. Now he's perfectly happy with it and likes it a lot better than his WebTV. I'm not sure if he really has a conceptual model of the Internet or anything, but he can do all the stuff he cares about easily.

The most amazing thing to him is Google Maps. He could hardly believe that he could just flick around and see a picture of his house and our houses and all the houses of his friends. I'm not sure he actually bought it when I told him that they drove a car around every street and took pictures.

A couple of years ago, I set my 73yr old father up with his first ever computer. It was a frustrating yet fascinating experience to see how someone who had never interacted with a computer actually did so. It certainly made me aware of usability issues that I would (as someone that sits at a computer every day) have never considered to be anything other than obvious.

I was going to say the same thing. Touch seems so much more intuitive. I'll be in the Facebook web app looking at a photo album and I absently flick and it doesn't work... Those YouTube videos of a 2 year old using an iPad, too...

I have seen a 3 year-old failing to grasp how to use the arrow keys to control a character in a game, but swipe to review pictures on an iPhone spontaneously. There is one indirection less, which probably makes all the difference.

(This does not mean that touch interfaces are better, though -- just that they are more intuitive.)

My two year old managed to figure out the iPad, far better than I would have guessed. He can find games and load them himself, and uses the safety of the Home button to get out of confusing situations. He has also attempted to translate the interaction method to the PC, without much success (he recently pushed the round "off" button on a monitor, because it looks like the iPad home button).

My grandmother also figured out the iPad's UI quickly, although she has almost zero experience using computers, and struggles to work her phone. I didn't explore how much more she could do with it though.

Surprisingly though, my father in law, a doctor with some computer experience, struggled a little with his iPad, and only uses it for online banking. I think it had to do with un-learning his old habits.

What I guess from this very small sample is that a touch interface is more intuitive for novices, but could be troublesome for those with some, limited computer experience.

I'm sure there's tons of research being done into this stuff... hopefully it will be distilled into an accessible form soon [an updated Design of Everyday Things, say].

My three year old figured out how to move around in my Nook by himself. I opened up a 'read-to-me' book and he grabbed the Nook from me and just started swiping. It was pretty neat to see how rapidly he picked up on what to do. He is now a devoted Angry Birds player on it too.

Those youtube videos of a 2 year old using an iPad also..

My 2-yr old has gleefully wiped my PIN-locked iPhone several times, as it was synced to corporate Exchange (Here Daddy! sigh Thank you, dear).

Her name is Jenny Boriss :)

Funny thing about the Firefox 4 release is that it automatically hides the menu bar, which I thought was stupid. That's right, even Help is hidden unless the user presses Alt or right-clicks to enable the menu (even as experienced as I am, it took me time to understand why I couldn't find it).

However, if the menu bar is enabled, the user can go Help->Firefox Help, and just at the bottom of the screen, without scrolling, is 'Getting started with Firefox'.

Unfortunately the first video talks about bookmarking Facebook and doing random searches, keeps talking about how awesome Firefox is, and is all fast-paced, not slow at all.

Firefox "hides" the menu in the glowing orange box in the upper-left corner.

Would a new user know that there is supposed to be a menu? Why would they click the glowing orange box? Why even think it's clickable? Why is it glowing? Is that normal? They'll think: it doesn't seem to be related to what I'm looking for, so I'll keep looking elsewhere.

One thing that I have always noticed to be true (and frustrating) is that new users try to do as little as possible. They try to stay as safe as possible. They want to do one thing, and once they find a solution - any solution - that works, that's what they keep doing. They don't click on strange icons or try new things just because they can. They don't explore because they don't understand, and in my experience they usually think that something will go wrong and they will break the computer if they click somewhere they aren't supposed to. It's hard for us to imagine someone going in with no prior knowledge of how a program is supposed to behave. We have certain minimum expectations, (e.g. there should be a menu bar somewhere in the top left of the screen) and knowledge (you can't do much of anything without clicking the mouse) that they lack that makes what is painfully obvious to us confusing and unintuitive to them.

In these cases I've often thought it would be useful to have an "interactive" character guiding you around the application, such as a dog, or maybe a paperclip for a word-processing program.

It looks like you're writing a comment on Hacker News. Would you like help?

[x] Get upvote for the comment

[_] Just type comment without help

[_] Don't show me this tip again

You forgot clippy.

I used to sell computers at Best Buy in an area that was predominantly elderly, and let me say it is quite a difficult procedure to walk someone through using a computer. Every day someone would come in that said they wanted a computer but had never used one before. I would spend hours showing people how to do things, writing up lists on how each task is performed, etc. It can be very challenging.

I found the best way to help these people is just to put them in front of a computer and walk them through the basics. Also metaphors are huge. It is always easier to understand something when you can relate to it. For example: My documents is like your filing cabinet. Everything is organized into folders so you can find where it is.

The most interesting thing was when the iPad came out. We were seeing people who barely knew what the internet was trying to buy and use one of these things. Try explaining 3G to someone who doesn't have a cell phone. Almost impossible. I remember one man who was so frustrated that he couldn't set up email that he announced he would give $20 to the person who would do it for him.

>so frustrated that he couldn't set up email that he announced that he would give $20 to the person who would do it for him

// Not that frustrated then. $200 now that's more frustrated!

Haha that is a good point. I guess it was more the look of total despair on his face.

I agree with what I think Boriss' point is – that our interfaces are not natural. They require us to build a new system of patterns to match with encountered interfaces of a similar kind, in order to know what we're doing. Same is easily said of learning any new language, visual, written, or spoken.

But to suggest that iconography and buttons in general are unique to digital interfaces is inaccurate, and recounting one man's first interaction with a computer as "user testing of browsers" comes across as a sensational misrepresentation of what user testing is, and what education-by-interface should be.

Let's not show an entrenched English speaker Japanese and claim that it is the language's responsibility to immediately map to his mental model of English. Learning falls on a motivated student (which Joe, with self-proclaimed "no excuse[s]" for never using a computer, was not) matched with an expert evangelist like Jennifer Boriss, in the event of a total failure of comprehension.

Oh, but I should add that it is cool to see something headlining as "user testing" making its way up the HN ranks.

This is interesting but entirely useless. Though he's never used a computer, someone already has used this one and changed some things about it.

#1 IE defaults to a blank page in what appears to be private browsing mode, rather than Bing as it does on any new windows install. He's somehow running IE on a Mac which may be why.

#2. Someone checked out San Francisco Yelp on Chrome and as a result there's a link for it.

#3. "We shouldn’t assume that new users will inquisitively try and discover how new software works by clicking buttons and trying things out." That's probably misleading for 99.99999% of your audience. Yeah, the one adult on the planet who never used a computer before might be scared to death to play around with it, but you shouldn't design with that in mind (unless you're somehow aiming a product at them, in which case good luck with that sir). Put a 7 year old on a computer and he'll figure out how to find a restaurant in 10 minutes.

*but you shouldn't design with that in mind (unless you're somehow aiming a product at them, in which case good luck with that sir). Put a 7 year old on a computer and he'll figure out how to find a restaurant in 10 minutes.*

To a point, maybe. Could that 7 year old figure out how to install lynx and find a restaurant at a blank console by reading man pages? Probably not.

Try to remember your first time using a computer. My early years were spent using a C64 to play games off of floppy disks, and I still remember the explicit steps: LOAD "*",8,1. I have no idea what that did, and treated it like a black box. Do not stray off the beaten path, for here be dragons.

Actually, I find that quite a lot of people who are even moderately unfamiliar with computers (by HN standards) tend not to explore or experiment with new tools or UI paradigms. It might be because they're afraid to break things, but I think it's just because their unfamiliarity gives them anxiety, and for them it's easier and more comforting to simply turn the damn thing off and forget about it rather than plow forward and get even more lost and confused. Kids tend to be more curious, and more used to being unfamiliar with things. I wouldn't take them as representative of techno-illiterate adults.

I think that it's more of a generational gap. People from previous generations were raised with the "measure twice, cut once" mentality, because when you're cutting or drilling things, mistakes are irreversible. It's very hard for these people to get used to a "Ctrl-Z" mentality. In contrast, people from the 1990+ generation have grown up with the trial-and-error mentality.

I agree on #1 and #2 but not on #3. I used to run computer training for people who'd never used a computer before, and I'd say the majority were not very adventurous (a few dived in). I think there's a sense that computers are valuable and can 'crash', so they try not to do anything to break the machine.

It seems to me that the question of the day is: what IS "intuition"? Most UI designers will create an interface that relies on typical design standards that any competent user would understand, such as home icons, menu bars, banners, footers, thumbnails etc. There are good reasons for this, namely that many interfaces target a market of experienced users.

Designing an "intuitive" interface true to the word is almost impossible. Experienced users can look at a new UI developed with standard design principles and navigate through it without frustration. But this is not called "intuition"; this is called past experience.

Now, rollover pop-up text that describes functionality may add a level of intuitiveness to a system. But why create a help menu for how to use the mouse or how to single click in a text field before you can type in it? Many of those users who aren't experienced are left behind only because it is usually not economical to market to them.

Here I want to ask a question: did any of the HN users ever get helped by anything put into the "Help" menu in an application?

Yes, although not directly: on OSX, the Help menu has a search box. This searches not only various topics but submenu items, and when selecting a match it opens the menu recursively until it can highlight the item.

In software with big, deep menus with an unclear logic/organization, I've used this quite a bit.
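
The behavior described here, searching nested menus and opening the chain of parents down to the match, reduces to a short recursive walk. The data model below (menus as nested dicts) is invented for illustration and is nothing like Apple's actual implementation:

```python
def find_menu_item(menus, query, path=()):
    """Depth-first search through nested menus (submenu -> dict,
    command -> anything else), returning the chain of menu names to
    open so the UI can highlight the match. Toy sketch of the idea
    behind the OS X Help search, not a real API."""
    for name, value in menus.items():
        if isinstance(value, dict):            # submenu: recurse into it
            hit = find_menu_item(value, query, path + (name,))
            if hit:
                return hit
        elif query.lower() in name.lower():    # command: match by name
            return path + (name,)
    return None

# Toy menu bar for a hypothetical editor.
menus = {
    "File": {
        "Open...": "open",
        "Export": {"Export as PDF...": "pdf"},
    },
    "Edit": {"Undo": "undo"},
}
result = find_menu_item(menus, "pdf")
```

The returned path, here ("File", "Export", "Export as PDF..."), is exactly what the UI needs in order to open each parent menu in turn and drop the marker on the item.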

The OSX help menu is pretty cool. A friend was puzzled yesterday because she couldn't figure out how to delete someone from her Skype contacts. I am still on 2.8; she upgraded to the stupid 5.x version, so I don't know if the UI changed or she is just blind. "Just right-click and choose delete." "But there is no delete, only block!"

She was just about to google for a solution when I suggested using the help menu, which indeed helped her. She had used it before but typed in "remove" instead of "delete" (or vice versa; in reality it was entfernen vs. löschen).

In the last decade, I think I've only used it to find version numbers. Here's an embarrassing(?) example, though:

I was typing a response in this post and I wanted to italicize a word. I know different systems have different ways to do that, so I just guessed and tried <i></i> like in HTML, which I learned in fourth or fifth grade. No go, which didn't surprise me because nothing takes HTML input anymore. So I tried [i][/i] as if typing a message on a phpBB forum. No go. So I clicked to edit the post again and just surrounded the word with /slashes/ to indicate it was italicized in my mind. And then I noticed the little tiny help link next to the text box and clicked on it. Help, to use italics? Lame. But it worked (see what I did there?). Here is something interesting: that help link doesn't show up for an initial comment, only if you go in to edit it.

Rarely, except for finding out the version of the software. Also there is nothing worse on a windows machine than hitting F1 mistakenly when going for F2, and having the gargantuan and useless Windows Help spin up.

Now that I think about it, in the last ten years, after ubiquitous internet access and Google, I don't think I've ever clicked on the Help link in any app. It never even occurred to me to do so until you asked just now.

I do recall using it a long time ago, say 15 years back, to find my way around Excel and Word, but not in the last decade.

Sure, e.g. in Adobe applications it points to the manual, which is always really detailed, logical and comprehensive. But you're right, developers often just put garbage there, items they can't find room for in other menus.

See "Check for updates" and "About".

Especially "Check for updates", which is the worst thing you can do to a user who is looking for help.

I do sometimes resort to help - it seems like when I want some very task-local information, e.g. What's the syntax for this textbox? Recently I just needed the Exchange recipient policy codes and the help from that dialog box goes right there.

But normally help doesn't help - it asks me for a foo-binding-string, I go to the help to see what one should look like for this program, the help says nothing more than "enter your foo-binding-string here".

If I can't find a menu item but I'm pretty sure it exists/should exist I type in versions of it into that help text area (on OSX) to find it. It works probably 60% of the time, the other 40% being that it didn't exist in the first place, or I was completely wrong about what it should be called.

For people who don’t use OS X, every application has a search box in the Help menu (as seen in the 5th image in the original article). If you type in it the computer will search through all of the available commands in the menus and submenus, then open the menu parents of the most likely result and place a big floating marker next to it.

It also highlights the item found, so you can just press [Return] to activate it.

It's basically spotlight/windows search for menu items.

I've only ever been helped by something in the "Help" menu through the search bar on a Mac (which I switched to about 2 years ago) - because it actually highlights with a massive arrow what you need. I don't remember finding anything useful with the Help nav on Windows.

Having said that - the HN audience is most likely a massively different audience to the sort of people that would ordinarily rely on the Help button - people like Joe.

Yes, I'm currently doing some consulting on a Lotus Notes application, and the help contents in the Domino Designer IDE have a lot of useful documentation.

And note that there's a feedback option in every entry in Designer Help -- if something is unclear or misleading, you can point it out and/or suggest a fix. Lotus does pay attention to the Help feedback, and needed fixes are usually incorporated into the next point release. And since the Help file is just another NSF, you can edit your own useful changes, notes and examples into your own copy (or the copy that's usually installed on the server for the edification of others).

Yes, the About dialogs are usually there and are a convenient way to figure out what version of a program you're running.

Yes, to get the version number. And the help in Microsoft Office is quite good, I got help there a couple of times too.

Yes, to access the documentation.

It used to be standard to have your "About x" submenu(?) there, so whenever I had to file a bug report I'd click through help. Actually, on OSX Chrome has the bug report link there, too. I wish I knew that earlier.

Yes, generally when "Help" opens the documentation for all the features of the program. Last time I used it was circa 2000, in Microsoft Word, I think.

I now use man pages and google mostly.

Yes, quite often. Usually it has a link to the manual which is what one is really looking for.

Since the advent of fast internet and search engines, only for version numbers.

Hell yeah, I use it to access the manual for Xcode.

Yes, games, office and even problem solving all had help mentioned.

The funny thing is that as an "advanced" user I use the terminal all day long, yet we assume the mouse cursor is easier for inexperienced users. But something about that old DOS prompt was not so bad: you just learn the commands, type them, and see the result. In a lot of ways it's a more straightforward interaction than mousing around.

As a big CLI fan, I mostly agree, especially with respect to launching programs. But as far as I can see there are two big obstacles: 1) Many names are somewhat obscure in the interest of shortness, and are therefore a bit harder to remember. Compare "ls" with "list-files". 2) Many programs (especially interactive ones) -- text editors, office programs, file managers, web browsers -- are, I think, more intuitive with a mouse. Want to open that link or edit that textbox? Click it. Want to move that file? Drag it.

I once tutored a blind fellow who thought much the same.

This reminds me of my first experience when I started to use Linux (SSH, terminal only) and Vim.

The default text editor of one of my first servers was emacs. I had no idea how to exit, no idea how to access help, and no idea what program I was in! I called my nerdier brother and he figured it out and told me how to quit.

People generally say "time cures these problems" but I'm starting to wonder. Things change so rapidly, someone who is 60 now and is learning to use a mouse and keyboard might be utterly lost in 20-30 years with whatever advances have been made.

When I'm 60 in ~30 years maybe I'll be clinging to ancient things like notebooks and tablets because I know how to use them, and kids will point and laugh while they use the modern stuff.

At 60 you will be able to cope with new technologies almost as easily as a 15 year old. The reason being you have had some interaction with technology over the years, so you have something to refer to. The 70 year old of today probably never got to play with a computer.

Yeah, but his point is that something totally, completely new, like say, a direct brain interface, is going to be so radically different that he'll have a hard time becoming acquainted with it given his existing experience.

In my interactions with older relatives, the thing I find that they have the biggest problem with is that they don't try to explore how the computer behaves because they are afraid of breaking something.

So, personally, I think that Apple should additionally advertise its guest mode for OS X as a "Learn New Things" mode, where they very clearly explain that any changes you make are not permanent and that by shutting off the computer, everything goes back to normal (maybe with a simulated web as well). Then there could be a series of tutorials on making significant, (normally) permanent changes to the computer's behavior.

Subsequent advances in technologies always require a bridge to the past in order to be successful. Understanding how to use today's technology is very useful for understanding tomorrow's, no matter how different. A button will likely continue to be a button, whether you press return, click, tap, or telepathically hit it.

The 70 year old of today did see a dizzying array of new technologies arrive before the PC or mobile phone, though and many of them learned to use VCRs, camcorders, color TVs, compact disc and cassette players, etc. without a problem. Are computers fundamentally different due to their abstraction and lack of physicality? What if the technology of 2050 is fundamentally different in a way we can't even understand?

Good point. I like to think I adapt well enough, I quickly learn to use most new things simply by prodding at them and observing the results. However as I get older I get slower and less patient with that sort of thing, and I'm only 28. Hopefully I was just extra awesome before and not actually slow now ;-)

This sounds like how I helped my grandma with her Outlook/phone/digital photo woes. With some things (email) she was extremely successful; with others (digital photos, saving attached photos from emails) she remained lost forever.

She could have benefited from a dumbed-down interface for the things she struggled with. Picasa was as close as I could get to simple photo management, but she needed something even more seamless and simple.

It makes me wonder why there aren't more companies creating products like the Jitterbug ( http://www.greatcall.com/ )--for those of you unaware, the Jitterbug is a cellphone with very large buttons and a presumably easy-to-navigate menu system, with concierge operator service for remedial support tasks ("I need to check my voicemail").

Maybe we have tried making extremely simplified UIs for common computing tasks, and failed (Clippy/MS Bob)? Is there an extremely simple photo management app (or something on the Mac) that I'm unaware of?

The world is full of people who have never used a computer. They are called children.

It's not really the same: they don't have anxiety like "what happens if I click here?" and stuff like that. They don't have any preconceptions about how things work or don't work, or about what their supposed limitations and abilities are. Nowadays children seem to see computers and other hi-tech devices as kinda "normal".

A recent example with my little step-brother: he had no problem whatsoever using the trackpad and click buttons of my laptop to choose another Thomas or Beyblade episode on YouTube. Three years old. And he had only seen me do it a few times; then I wanted to read my book without being interrupted every 5 minutes, so I told him "use the trackpad, it's this thing here; move your finger on it and it moves the cursor here [pointing my finger at the cursor on the screen]. When the cursor is on the image of the video you want to see next, you click this button [pointing my finger at the left-click button]". He didn't even ask any other questions. He still interrupted me every 5 minutes, but to show me that he had just launched a new video by himself... :-).

To emphasize my point: it's not that he is particularly able; his older brother, who is five years old now, was exactly the same at that age.

Also video games. My elderly parents have a hard time figuring out a new TV remote, and computer stuff is overwhelming. They have no concept of a "menu" or "selections" or "radio buttons". That is a huge conceptual hurdle. But since the 80s, kids have grown up with Atari, Nintendo, PlayStation, etc., and learned through button mashing to figure out complex UIs (every game behaves and looks different).

Children are people for whom everything is new. Their brains are geared towards rapid absorption of new experiences and learning by exploration.

Older folks are better at analysis and strategy but have a tendency to see things in terms of what they already know. After all, for most of human history, once you learned how the world worked it didn't really change much for the rest of your life.

Not anymore, they start out young nowadays. My 2 year old niece is a pro with an iPhone.

Actually, outside of us and other developed countries, there are millions, if not billions, of people who have never used computers. I was one till age 13.

And I was a pro with a mouse (Apple Mac Plus) at 2. Young minds are very good at learning how things work.

Well then, they're called babies.

Not even then. My kid was 6 weeks old when I first put the iPad in front of him. Babies get visual feedback right away, especially when it happens right at the point they're touching. It took maybe 30 seconds of training before he was pounding on the screen to make stuff happen.

This was even before he figured out real world things like stuffed animals.

No search engine that I could find [Google, Bing, Duck Duck Go, Dogpile, Ask - though the logo is close] specifically prompts you on how to use it. For example, they could say "What are you looking for?" for new users.


I don't understand this kind of criticism.

I get into my car. The accelerator doesn't have "Press here to move more quickly" written on it. The brake doesn't have "Press here to stop". The horn doesn't have "Press me if the big light outside is green but the aluminium box with wheels in front of you is stationary"

If you lower your standards to silly degrees, people will mill around that lower standard. I've seen tertiary students reach for the calculator to find out what 3 x 0.2 is, simply because they can get away with not putting in any effort. Novices who need more help should get specialised help to get them over the hump - normal users shouldn't have to deal with UX chaff just on the off chance some random person might choose one day to pick up a computer and try to learn it without asking anyone for any help.

No-one learns to drive all by themselves. Or cook. Or read. Or dress themselves. Would we really want pants to come with a permanently attached set of instructions on how to wear them, just in case someone who always wore skirts might one day try pants on a whim?

A UI should be constructed for a userbase, and the "have never used a computer before, ever, but am trying right now and will only do this on my own" demographic for search engines is minuscule. One wonders how such a person can get to the search engine in the first place - it's certainly not like Duck Duck Go is the default homepage for any browser.

Yet our current UIs assume you are familiar with the previous UI, and that one depended upon the one before.

Early automobile-carriages (cars) had reins! So you could operate them like a horse! But fortunately smarter people invented more direct controls suitable for operating engines.

Example: the Save button looks like a floppy. WTF?

I think there is room for a UI that is direct, not a baroque collage of every UI that came before.

Example: the Save button looks like a floppy. WTF?

A bit off-topic: the real WTF for me is that there is such a thing as a save button. I cannot fathom that most software requires us to perform an ancient ritual lest it throw away our hard work.

Both with essays and coding, there've been times when I was very grateful that there was a fixed moment in the document's life that I could easily go back to. Sometimes I start making changes and then think better of it[1]. Sometimes I edit something accidentally, and am very relieved that I'm asked whether I want to save my changes upon exiting the program. A possible solution is to have both normal saving and continuous autosaving to a backup file (as Word, Vim, etc. all do), but in practice that's a bit clunky.

[1] - Yes, I could hit the undo button repeatedly, since any editor worth its salt will have a virtually-unlimited undo. But it's reassuring to know that you've undo-ed to exactly the right point--in Vim, at least, the "+" indicating a modified file disappears once you've undone all your changes.
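A middle ground the thread keeps circling is to keep the explicit save as the checkpoint while a timer mirrors the live buffer to a sidecar file, roughly what Vim's swap file and Word's AutoRecover do. A minimal sketch in Python (the `.autosave` suffix and the 30-second interval are arbitrary assumptions for illustration, not any real editor's scheme):

```python
import threading
from pathlib import Path

def start_autosave(path, get_text, interval=30):
    """Mirror the live buffer to a sidecar file every `interval`
    seconds, leaving the explicitly saved file untouched."""
    backup = Path(str(path) + ".autosave")

    def tick():
        backup.write_text(get_text())  # snapshot the current buffer
        timer = threading.Timer(interval, tick)
        timer.daemon = True            # don't keep the process alive on exit
        timer.start()

    tick()  # first snapshot immediately, then on a timer
    return backup
```

An explicit Save would then just write `path` itself and delete the sidecar; after a crash, the app finds a leftover `.autosave` newer than the file and offers to restore it.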

Yeah, working with code is interesting because a source file is rarely in a “stable” state when it is being edited. The save button in this case serves as a checkpoint marker: “this here is good.” It's like doing mini-commits.

If we were to do away with the save button, the rest of the “working with files” story would have to be thoroughly rethought.

You can take away my save button, as long as you make it easy to discard my current work as well. The version control like systems we have today are ok (Time Machine, dropbox, Windows previous versions), but not ideal.

If I'm using a word processor that is automatically saving, why should I have to then explore my hard drive to find the file and examine previous versions? The app I am using needs to present this interface (even if it is ultimately supplied by the OS, like save dialogues).
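If the app is doing the saving anyway, it can keep the history itself and surface it in its own UI instead of sending the user off to explore the filesystem. A hedged sketch of the idea (the `.versions` directory name and timestamp format are invented for illustration):

```python
import time
from pathlib import Path

def save_with_snapshot(path, text, history_dir=".versions"):
    """Save the document and keep a timestamped copy that the app
    itself can list, so the user never digs through the hard drive."""
    path = Path(path)
    versions = path.parent / history_dir
    versions.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    (versions / f"{path.name}.{stamp}").write_text(text)  # snapshot
    path.write_text(text)                                 # current version

def list_versions(path, history_dir=".versions"):
    """Previous versions, newest first, ready for an in-app picker."""
    versions = Path(path).parent / history_dir
    return sorted(versions.glob(Path(path).name + ".*"), reverse=True)
```

Time Machine and Dropbox do essentially this at the filesystem level; the complaint above is that the word processor, not the OS, should present the picker.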

You are totally right! After using my iPod Touch for a while my computer seems so archaic, forcing me to do its work, when presumably I got it to automate routine tasks...

Some save buttons look like a 3.5" floppy disk, which you could argue an even smaller percentage of current users has ever seen :)

Don't get me wrong, I'm not against change in UIs. I am against designing specifically to cater for the lowest common denominator in general use applications if it means more chaff for the user with average skills.

Is this a criticism or just an observation?

Clearly the search engines have all weighed the tradeoffs of appearing too helpful for first-time users and cluttering the screen for regular users.

It's offering advice for change or improvement - it's criticism. Criticism doesn't have to be negative.

30.2% of the world has internet access, so it's fair to assume many have not used a computer and will not know anyone who has when they start, especially in Africa, where penetration is only 11.4% (http://www.internetworldstats.com/stats.htm)

The car analogy is poor because you are required by law to take lessons in most (all?) countries. It's quite common for people to struggle to afford driving lessons, and that slows down the adoption of cars. Are you implying that the same should be true for computers?

The search engine was the default page in the Firefox screenshot so it's safe to assume some will see it.

UI should be constructed for a userbase; however, technically literate westerners who have spent most of their adult life with easy access to technology are not the only userbase, or even the majority, in the world.

I think you missed my point. But, let's refute anyway:

30.2% of the world has internet access, so it's fair to assume many have not used a computer and will not know anyone who has when they start

Just like many "technically literate" westerners 15 years ago.

The car analogy is poor because you are required by law to take lessons in most (all?) countries.

Please, learn to understand metaphors - I was illustrating a point, not making a mathematical proof. Also, I think you mean you are required by law to pass a test, not take lessons. Certainly when I did my driver's test, they didn't grill me on the amount of time spent learning, but on how I actually drove.

The search engine was the default page in the Firefox screenshot so it's safe to assume some will see it.

Yes, Firefox's default homepage is a custom page that is a front-end for Google. It is not Google.

UI should be constructed for a userbase, however, technically literate, westerners who have spent most of their adult life with easy access to technology, are not the only userbase or even the majority in the world.

If you're honestly suggesting a single, globally accessible webpage, you're going to run into i18n trouble long before you have to worry about alienating unmotivated users. I find this point of yours weird, given that I thought I was fairly clearly indicating that I was talking about current search engine userbases, not 'all possible userbases that might ever be'.

And if most of the world is not technically literate, so? They're not going to become any more so because of a simple friendly sentence on a search engine that they can't get to by themselves anyway. The same thing will happen in the developing world as it did here in the west: the motivated and eager blaze the way, and the knowledge filters down to everyone else through them.

Twenty years ago we didn't even have Mosaic. Everything the laypeople of the west have learned about using the internet has been learned in a mere 15 or so years. Pretty amazing especially considering that the internet itself was still figuring out what it was useful for during that time.

Before the 90s, laypeople in the west had access to technology in the form of TVs, remote controls, and microwaves. Anywhere there is electricity, people have access to the same kind of technology - physical buttons control functions. If there's electricity, folks will have at least a similar level of understanding as pre-internet westerners (no point in providing electricity to people who don't have lights or appliances, after all). If there's no electricity... well... computers are going to have a hard time running.

In the U.K., at least, you are required to have an experienced driver in the car every time you drive until you pass your test; this in effect is a lesson, because they are required to pay attention and point out your mistakes.

If every new computer user was required to spend every minute they used a computer with an experienced person until they took a test, then I would agree that computers do not need to be easier to use.

In Australia that's only the case on public roads. You don't require a license or a second person in the car on private land (such as a farm). There's no legal block to someone teaching themselves the art of driving on private land without input from anyone else; they'll get their license as long as they pass their test.

Computers, on the other hand, don't have the potential to maim or kill people in the hands of a naive user. Mistakes are free of cost (well, apart from a few seconds of time). And even to use said search engine, not only must you have been able to navigate to it, you also have to have an idea of using a mouse and a keyboard. The Enter key is not necessarily obvious. Neither are the control or function keys. Should computer cases be engraved with instructions on every step of the way? Apple isn't going to like having to engrave what the option key does, just on the off chance an utter naive should choose to use one of their computers. I guess what I'm getting at here is: how low do you want the bar?

What of toasters? I assume that in the U.K. you do not need any sort of certification whatsoever to use a toaster. Yet nobody suggests that we go out of our way to design toasters for the portion of the population for which toasted bread is alien.

The problem is thinking that the metaphors we use are actually descriptive to outsiders.

There is no such thing as an intuitive interface that isn't learned.

I had the same encounter with my mom and although she had used a computer before the experience was just as crazy.


Here is an excerpt from the conversation.

Good story. And guiding another person per phone around a computer is incredibly frustrating.

But it shows why the Macintosh of the 80s/90s had no hidden context menus and only one mouse button (which is still mocked today). The difference between right and left click (which is really primary and secondary click) was genuinely difficult to grasp.

I remember my parents' early days with computers:

"Click on the ok button" "I don't see it (x5)" "There is a gray rectangle with OK written on it, is that the OK button?"

"Double click on the icon... no click twice... yeah, like this but faster... faster... no slower than that..."

Really, it's like learning a new programming language: you just need a motivator. My mom learned it to build a genealogy tree, and my dad now plays card games online.

Try to explain to them that the first click didn't count because the form didn't have focus.

Would you treat a customer this way?

During the interview Joe was "stressed," "taxed," "frustrated," and "confused." He was asked to try and fail to do things the author already knew he wouldn't be able to do. Repeatedly. The author was surprised that he would take failures personally, and described ending the interview as "cut[ting] Joe a break."

Really very generous, to cut Joe a break from the interview he volunteered for.

There is a serious lack of empathy on display in this post. Joe deserves an apology.

There is a serious lack of empathy on display in this post.

What are you talking about? Everything you know about Joe, you are learning from this post. If the author has no empathy, how come she can describe Joe's emotional state so accurately that it practically moves you to tears?

And I think you need to make allowances for the genre. This essay is a bit impersonal because it's a designer reporting to an audience of designers. The tone is necessarily a bit cold and analytical. I wouldn't assume that the actual interview went down like this, just as doctors don't talk to cancer patients the way they talk to other doctors, engineers don't talk to civilians the way they talk to other engineers, and sausage makers have secret ingredients because... you really don't want to know what they are. Just enjoy the flavor.

As for whether it is cruel to cause someone stress by asking them to use a UX that frustrates them: Well, maybe. But if you have ever shipped a product, you have caused such pain. Not one customer at a time, but wholesale: Dozens, hundreds, thousands or even millions of people have cursed your product. Such is the tragedy of the mass market: You can make the delight scale, but the frustration also scales. The authors of, say, Word didn't want to frustrate anyone to the point of tears, but every day thousands of people are so frustrated, because bugs are inevitable, and because no design can do everything, and because people don't always understand the mismatch between your product and their goals, and because computers are less personal than the least personable of persons.

He wasn't working with a child, he was interviewing an adult who was free to leave at any time. Learning new things can be difficult. If anything Joe probably came away from this having learned something new (and is now able to receive discounts from one of his favorite restaurants). There's no reason to infantilize Joe.

I'm saying that if someone volunteers their time to help you, they deserve the same respect that you would give a client. I don't think that's infantilizing anyone.

But you're ignoring the most important part of my post.

If Joe had hired you to do an important job, and you needed the information the OP collected to get that job done, would you have gotten that information the way the OP did?


As is usual when performing user testing, the OP rewarded Joe in the end. They taught him how to use email and registered his bakery card. Joe volunteered and could have said "enough!" at any time.

The ones to blame for Joe's stress and frustration are us software authors. We are already treating our customers that way.

We software authors certainly deserve the great majority of the blame.

But this user test, while fascinating, is somewhat artificial and it's probably not helpful to exaggerate its importance.

Joe was not a clean slate, he clearly was bringing in all kinds of baggage. He was ashamed he didn't already know. He'd already experienced failure with the discount card email experience. He had someone literally looking over his shoulder and taking notes on his failure. I'd feel kind of stressed out about it too!

Furthermore, he was given a specific task (locating a restaurant) with a completely general-purpose computer interface and absolutely zero help in figuring it out. Certainly this situation does occur in real life, but is it common enough that it should drive the design of the user interface?

The mall probably already has a kiosk for those users with a touchscreen UI. It would be silly to expect every user to use nothing but a touchscreen interface. Oh wait a minute...

What would you have had the tester do differently?

Dismiss the test subject right away since he had never used a computer and found trying to figure tasks out too difficult?

Suppose Joe had been cut off from civilization for the last 35 years and the two of you have agreed on a very large fee that he will pay you if you can get him back up to speed on what he'd missed.

Would you treat him the way the OP did?

Would you ever even use the phrase "too difficult" ?

If I have yet to learn long division well, the division 129328 / 2983 might very well be too difficult for me. That isn't an insult.

I think there are two lessons to take away from this.

First, the mall can be a great place to do user testing, if you want to reach a broad audience. Except that I would imagine you get more male participants, because they are waiting for their female partners more than the other way around.

Second, you need to test with your actual target audience. For Mozilla that is at least people who know how to use a computer to some degree.

This reminds me of those Burger King "Whopper Virgins" commercials where they tested BK and McDonald's hamburgers in remote places where no one had ever heard of either restaurant before.

It's entertaining and an interesting experiment, but I'm not sure how much it can really teach us about design for 99% of people who will be using the product.

For some statistics: 2,095,006,005 people, or 30.2% of the world, are Internet users.


I couldn't find any statistics on computer use, so the number is likely higher.

"penetration", as I understand it, merely means the number of people who can access the internet if they so choose. For example, have a phone line through which they have access to dial-up. I don't believe it actually indicates the number of users.

Reminds me of http://xkcd.com/627/

I applaud the author for doing this sort of user testing. Instead of getting a focus group to come to you, bring your test to them. Besides the mall, you might have fun trying this out at your local museum as well.

Reminds of finding a tribe that has never been in contact with other humans. The importance of the internet really becomes clear when reading about people who've never tried it.

It strikes me that my parents could make an absolute mint volunteering for user testing.

Now I went looking for the "Nightly Help" item in my browser, but I guess it only applies if you have installed a nightly build.

I have several projects atm and feel a bit stressed. In my experience, laughter will help in these situations and reading about Joe's first experience with IE really made my day:

Joe: “I don’t know what anything means.” (Joe reads the text on IE and clicks on “Suggested Sites”)

Me: “Why did you click on that?”

Joe: “I don’t really know what to do, so I thought this would suggest something to me.”

(Joe reads a notification that there are no suggestions because the current site is private)

Joe: “I guess not.”


To clarify: I was laughing at our industry as a whole, which expects users to be computer literate to the point where people like Joe have to struggle really hard to get even the most basic thing done.
