The Distribution of Users’ Computer Skills: Worse Than You Think (2016) (nngroup.com)
275 points by mooreds on Sept 22, 2019 | 271 comments



And this is why software sucks. We should focus on educating users, not stooping to their level

GUIs of old had a specific design language. Like: File, Edit, Window, Help in the menubar -- always consistent, always in the same order, and always containing the same thing. Min/max/close buttons in the upper-right. And so on. If one knows the GUI language they can use almost any application for many solutions at a basic level. This knowledge is getting forgotten as "designers" take creative steps to eliminate seemingly all thought that goes along with software usage. Nowadays "user-friendly" software has 100 different ideas about how to accomplish a task and it almost always involves hiding scary things from stupid users. Which also means that when something goes wrong you have no control over fixing it -- you're simply fucked. Unlike the past when there was usually an option checkbox buried in a comprehensive Preferences window, and you could disable/modify the unwanted behavior.

Ironically seemingly every app has to insult me with oftentimes unskippable tutorials and announcements just to reveal features that should have been easy to discover, but aren't, because the design language and UI standards have been forgotten and/or ignored.

Extensibility and control should be WAY more important than catering to the most ignorant users.


Also, in really well-designed UIs of the past, every menu option had a hint in the status bar and every GUI element had a tooltip that explained what would happen when you clicked on it.

I find modern applications (especially mobile ones) much harder to discover and use. A teen has plenty of time to experiment, but as an adult, I just want to get the work done.

Yeah, you could say that I don't want to learn... but I just want to learn the bare minimum to get the job done. The application itself doesn't have to be minimalistic.


The original user interface components had a lot of detail, and focused on learnability.

Look at the original scroll bars; they do quadruple duty: they tell you how long the document is, show where you are in the document, let you jump to the area of the document you want, and let you scroll slowly. Modern scrollbars are a tiny strip that you have to keep poking at to make them visible; how is anyone going to learn to use that unless they already know about them?

So many products have a short lifespan now, apps and websites, that it's not worth the time making them amazing. Anything good required someone to improve it over and over. I save links of cool projects I see on Hacker News and half of them stop working after a year. Almost none of them have their full original functionality after 5 years, and are full of broken links, crashing backends, and incompatibilities.


I've read somewhere that to some degree apps do that on purpose. The users feel a sense of "pride" when they figure everything out, which reduces the incentive to switch to a competitor app. Also, users can bond over the app while they explain to each other how it works and what little tricks there are. I think Snapchat was the example used in the article that I read.


That's an interesting take.

It would certainly lessen the incentive to switch apps if it was difficult to learn the one you know.


Everything is prettier looking, but pretty icons without text are often meaningless.


I think a major difference is also that for the teens of today, these apps are intuitive, if they are well designed. They take like 20 seconds to figure everything out. It is just us oldies who grew up with mouse and keyboard who have these problems.


I think it's just dumbing down the options.

It's recognition over recollection.

It is well known that recognition is easier, and that's why consistency helps so much.

But used everywhere, as if it were the only available tool, it makes exploring less safe, and hence reduces discoverability.

If street signs are well positioned I don't need to know the route, if there are no signs and I don't recognize the place, I'm doomed.

If an app does one thing, it's easy to remember how to do that one thing.

But if another app does the same thing in a different way, now I have to remember two things and recall from memory which one uses which way.


It's always been recognition over recollection. Users don't want to learn how the computer works. They just want to get to their desired solution as quickly as possible. So they'll pay attention long enough to learn the correct sequence of buttons to push that achieves it, then stop there. Any change to that mechanical process, no matter how innate to the UI it is, will be seen as a disruption.

I used to think that UX inertia was a trait of old people not comfortable with computers. I thought that a younger generation which grew up using computers would be more comfortable and not show the same monkey-see-monkey-do habits. I was wrong. I watched a teenager consistently prefer to delete by right-clicking and selecting "Cut" instead of pressing the delete key, even though he knew it was the same thing and would occasionally use the keyboard. But because the "Cut" command was the first thing he learned that made an item disappear, that's what he does. This application has different actions if you drag with left-click or right-click; he frequently confuses them. He didn't know that you can select multiple items by holding down the shift key. This is an honors student who has been using computers his whole life.

Other than students pursuing a computer career, this is the typical teenage computer user. It's not that the UI is too difficult or opaque. They simply don't care to learn more than a surface level understanding.

The objective isn't to create a world of gurus. It's to improve quality of life for everyone. I'd certainly appreciate a computer that allows me to geek out to my heart's content, but I am not the target audience. And the thing about a well-designed simplified interface is that it's simpler and more efficient for everyone, including the gurus. The catch is a lot of simplified interfaces aren't so well designed.


This may be of interest, basically the same sentiment: http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-co...


I mean yes. But no.

A lot of what he says resonates with me. Except the bit about computers needing to be difficult to encourage learning. That's the adversity-builds-character mindset that I disagree with. The computer I learned to hack on was a Macintosh IIsi. It wasn't bare metal like an 8-bit computer that booted into BASIC. Nor finicky with lots of arcane configuration like a Windows 3.1 machine (and I've used both of those as well). It had all the "it just works" plug-and-play that he laments in Windows 7, but as a curious kid with a hacker mentality it served me just fine. Enough that by the time I graduated from it I was modifying the System with hooks written in 680x0 assembly.

But the author really goes wrong, I believe, in his conclusion that the cause of digital literacy is served by forcing users to be gurus. This is entirely wrong and is the crux of the observation I made above. There is a myth in UX development that programmers are targeting a conceptual user that exists somewhere on a continuous slope between novice and power user. The article ends by suggesting that computers be taught by setting people onto this theoretical path. (And to be honest, I think he's mistaken: this is what we've already been doing, and it has been shown not to be effective.) But what if that path doesn't exist? Then trying to teach anyone that way will be fruitless.

My thought is that there is no gradual progression. Most people are not going to be able to learn computers the way that you and I have. So they need different ways of using computers than what we are used to. And that will mean a lot of disruptive ideas, which we may be suspicious of and call it "dumbing down". But what we should be more skeptical of is our own assumptions.

So if average people want to learn computers differently than nerds do, what we certainly don't want is to cut off the paths of discovery that lead to the next generation of super users, hackers, and developers. Both ways need to be preserved and encouraged to allow average users to get the most benefit of technology, while encouraging the more in-depth learning by those who want to be more deeply involved.


But they aren't. There is a language nowadays, but it isn't nearly as consistent, and it's corrupted by so many dark patterns. Hamburger menu or three dots? Is this circle with the color filled in activated or disabled? Is that a button/link, or is it just flavor?


I learned very thoroughly that the software version details were under "Help -> About". If it's not there, I'm not only stumped; I'm genuinely angry. That doesn't have to be logical; emotional reactions happen by themselves. Making me angry with your software is not good for sales.

Running on my desktop this very second: Postman. "Help" has no About. It has a "Check for Updates", but that doesn't tell me my current version, only what new version is available. My response: a genuine feeling of frustration and anger.


Newer versions of MS Office (with that ribbon) have the same problem. There is an About dialog, but getting to it is a process that I still can't remember after having to do it a few times. Yet Help->About is still in my mind after all these years, despite not using it all that much.

The idiotic full screen save menu that forces you to click a few more times to save on the computer instead of a cloud service (I'm sure that's some sort of dark pattern...) is even worse, however.

Edit: here's a good rant about that latter one: https://what.thedailywtf.com/topic/11058/save-me-from-shit-u...


You should be glad that you still have any save menu, as they wanted to scrap that in favor of autosave.

Everything coming out of MS the last 5 years has been thoroughly underwhelming, but the UI development and user infantilization takes the cake.

Just imagine everyone in your company automatically saving any document they open. What would these look like after 1-2 weeks?

The decision processes at Microsoft cannot be explained by cocaine alone anymore...

Sorry for the rant, let's just say I understand your frustration...


Autosave is a safe and recommended UI practice; it has been widely implemented and validated by mobile UIs, where whatever you're doing is recorded when you close the app, without you having to explicitly save the document (a half-written WhatsApp message, a mail draft...).

> Just imagine everyone in your company automatically saving any document they open. What would these look like after 1-2 weeks?

The essence of using autosave is separating storing from sharing the information. In an office suite, autosave should save your changes locally, so that a power outage won't wipe your edits.

This doesn't mean that other people will see those changes as soon as they're made; that's why you also create a "share" or "publish" command that will make them available. Developers got it and enjoy it in their version control systems, so why is it so difficult for them to understand that end users would benefit from the same safeguards? The first commandment of UI design is "treat all user input as sacred"[1], yet so few developers respect it.
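
A rough sketch of what I mean, in Python (the paths and function names here are hypothetical, just to illustrate the split between storing and sharing):

    # Minimal sketch: "store locally on every change, share only on demand".
    # DRAFT_DIR, SHARED_DIR and the function names are made up for illustration.
    import shutil
    from pathlib import Path

    DRAFT_DIR = Path.home() / ".drafts"        # private, autosaved copies
    SHARED_DIR = Path("/mnt/fileserver/docs")  # assumed to exist; what colleagues see

    def autosave(doc_name: str, contents: str) -> None:
        # Called on every edit: user input is never lost, nothing is published.
        DRAFT_DIR.mkdir(parents=True, exist_ok=True)
        (DRAFT_DIR / doc_name).write_text(contents)

    def publish(doc_name: str) -> None:
        # Called only when the user explicitly decides to share the document.
        shutil.copy(DRAFT_DIR / doc_name, SHARED_DIR / doc_name)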

[1] http://quoteitup.com/quote/beUH/jef-raskin-the-system-should...


Autosave is a good feature, and not just since mobile UIs. But not the way it is currently implemented, where any change in the document is immediately committed to the original file unless you preemptively made a copy. That's totally different from any version control system.

For example, we have a travel expense document that is exported as a PDF after people enter their data. For quite some time now, people have been able to see what the last employee has been up to. Don't get me wrong, that can get quite funny. Possibly a breach of confidential information, but meh...

We have no backend to publish to for these documents. It is just the boring file server.

There are certainly weaknesses in this particular workflow, but I would argue that it is quite common.


Committing instantly to the file is perfectly fine, as long as you have infinite undo (preferably with version history) and local drafts - i.e. exactly what VCS provide.

A shared folder in a server doesn't separate saving from sharing, but that's a fault of shared folders, not autosave. The popular storage models for end users are utterly broken, but developers don't care because they don't need to endure it, they've got their superior tools that they keep to themselves.


I don't really know if that is in the interest of the user. Just copying a file universally works with any file format and is a concept that users can understand.

A file has a name and a path, analogous to conventional storage systems. Putting it in a proprietary system like SharePoint or a Teams channel, to stay in the MS world, has other disadvantages.

I am giving them the benefit of the doubt, but sometimes I think the main goal for these "improvements" is vendor lock in, especially if the topic is around blurring file locations or URLs.

Access rights and relevancy of files is indeed something most alternatives organize better, but content tagging doesn't really work in my experience, because users just don't do it. So the only information about a file's content remains the filename and the path.

Since those are fundamental concepts for data organization in most computer systems, it is advantageous to convey these ideas to the user and from my experience this is not that much of a hurdle.

Sure, a network share has disadvantages and without a competent clean up crew it quickly gets convoluted. But that is a problem all other systems have as well. So to improve the current situation, solutions should build upon file systems.

And asking the user if he really wants to save the file is probably just a good idea. If he ignores the message and throws away some work, he has learned a valuable lesson. Users on every knowledge level go through that experience at some point anyway. I think users suffer far more from most proprietary solutions for data exchange that is distinct from copying some files from A to B.


Copying a file so as to get intermediate drafts is an inferior solution to proper versioning, as any developer using a version control system will tell you (more so if, like me, they started programming before VCSs were a widely used industry standard).

I agree proprietary lock-in is a pain and a huge risk, but I'm talking about fundamental storage models here, not particular implementations (where the problem is lack of interoperability, not the storage model; a proprietary file system doesn't create lock-in, because there are open standards to get your data out of it).

The best content storage model for end users I've seen is the wiki, where you get content versioning for free, although it doesn't separate storage from publication. A wiki where you could keep local separate branches of content would be ideal, but AFAIK the only wiki system using that model (shareable local copies) is the Smallest Federated Wiki[1], and it hasn't gained much traction.

[1] https://github.com/WardCunningham/Smallest-Federated-Wiki

> And asking the user if he really wants to save the file is probably just a good idea. If he ignores the message and throws away some work, he has learned a valuable lesson

See, that's the kind of comment explaining why users secretly (or openly!) hate computers and think computers are out to get them. Too many developers creating their software are actively disrespecting users' work and their time, even decades after human-computer interaction specialists found the best practices for caring about the user and making their life easier.


No. Autosave is the correct approach.

We computer people invented the concept of saving as an abstraction over the fact that unless what is in RAM gets written to disk, it is at risk of being lost once the computer is turned off or the program closes. The end user should never, ever have to worry about what is in RAM or on disk. The end user should expect computer documents to behave the way physical documents behave. Physical documents, and the changes people make to them, persist even after you stop working with them. We should go one better than the real world and autosave not just the document, but a complete undo history for that document.

It's not infantilizing to make your program conform to the expectations of real-world logic. It's bewildering to the user when your program does not.


It's at File->Account->About <program>. Because it totally makes sense to put the about box under Account...

I should also mention that there isn't even a Help menu anymore, because nobody would have a need for help with anything in Microsoft Office, right? You can still get help if you know about F1 though.


Office left all the keyboard shortcuts the same, at least in Excel. Some of the combos hardwired into my fingers are based on menus that no longer exist, but they still work. I don't know if that was good design or not, but I like it.


Thanks for the link! Completely agree. I strongly recommend LibreOffice from a design perspective.


On a related note, on Windows, double-clicking the top-left corner of a window used to close it (same behavior as the top-right X, but sometimes closer to the cursor) even if no icon was displayed. It gets annoying when nothing happens because the developer thought it would be nice to replace the default frame.


Yeah, Postman is the worst. I hate that pos with a passion.


I didn't notice how unhappy I was with Postman until I switched to Insomnia https://insomnia.rest/


I can't imagine how infuriating Postman must be, as I am already unhappy with some of Insomnia's UI quirks.

Although maybe it's fixed now.


Paw is a good alternative.


A personal pet peeve of mine is the Visual Studio Code and Windows Terminal Preview configuration.

The first one stores settings in a flat JSON file, and you have to dig through the documentation or compare against the original file to change something (this level of discoverability is akin to reading the source code, but it's an Electron app, not dwm). Later on they bolted some kind of semi-automatic UI on top of it, but it doesn't help me figure out shortcuts or whatever settings I want to change/find.

The Windows Terminal Preview config opens another flat JSON file in Notepad. At least in VS I can use the editor to navigate the file :D.

I thought the Preferences menu being under the Edit menu vs. under the Tools menu on Linux/Windows was ridiculous, but those two examples are worse.


I absolutely love the VSC way of doing things; it means you can back up all your settings in a single JSON file. The UI they've built on top of it seems to work well enough too and is quickly searchable.
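
For example (just a few illustrative keys and placeholder values, not a canonical list), a settings.json might look like:

    {
        "editor.fontSize": 14,
        "editor.tabSize": 2,
        "editor.wordWrap": "on",
        "files.autoSave": "afterDelay"
    }

One file to back up, diff, and sync between machines.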


Strongly disagree. I don't want to have to think about structured data or syntactic validity, or trust a magical autocomplete to show me what options are available (and if it doesn't, does that mean it's a freetext preference? Or the autocomplete is broken? Or I invalidated the syntax somewhere else in the file and it no longer knows what I'm trying to do?) when I'm trying to configure software.

Configuration is not like building things. Unless I'm truly tinkering for fun, I usually have a goal in mind. Like, sure, I have to mess around with program code and structured data formats all the time. I'm pretty good at it. But when I want to [re]configure something about my IDE or terminal, I want to be in and out as fast as possible and basically not have to think about it; configuration is a brief diversion not an end in itself.

I'd be less entitled and ranty if these were CLI/unix internal tools we're talking about. The convention with that ecosystem is config files and manpages to learn how to use them. Given that convention and the resources of the OSS community it would be silly to insist on GUIs there (though some do exist!). But Windows Terminal and VS are graphical programs that are made by a GUI-first software company with basically unlimited resources. Come on.

Also, re: backup and restore, "export" or "store configuration at so-and-so path" are far better options--see iTerm2's config export/sync-via-dropbox-or-whatever option; coupled with that terminal's excellent preference UI, it makes config management a breeze, and that software was mostly developed by one person!

/rant


You should check out VS Code. They take the JSON config files and create a GUI from them. It's far from ideal, but I wouldn't be surprised if they're working it out before adding it to VS.


> coupled with that terminal's excellent preference UI

I love iTerm and use it daily, but I would never call their checkbox soup "excellent".


Well first of all, I’d say that good modern interfaces don’t have any of those problems you’ve described. I completely don’t get this romanticization of old apps. Sure there’s plenty of shitty software made today. But complex and inscrutable interfaces were the norm not so long ago, and nobody even had much of a reference point to describe how bad they actually were. Perhaps you just think they were better because you had enough exposure to understand all the peculiar complexities and anti-patterns that were popular back then?

But more importantly, educating users is not the job of app developers, and any attempt to make it so will fail miserably. If you want to make an app that people will use, then you need to make one that suits your customers' needs. The real ones, who already exist. You can't make an app that tries to transform customers into something that you want them to be, and expect it to succeed.


I really think that while interfaces are less complex these days, they're much more inscrutable, especially with the advent of flat interfaces. You can't tell what's a control or action item anymore.


> And this is why software sucks. We should focus on educating users, not stooping to their level

Some users just don't want to learn. It's the reason why I always get a phone call when [Thing I've tried to sit down and teach to grandma] needs to be done.

You're on to something when talking about consistency and following a design language. Building a good UI takes skill and thought to make the design intuitive enough that, if it comes to it, anyone can figure it out.

I think you can have a UI that is usable for both sides of the bell curve; it just takes skill and testing.


> Some users just don't want to learn.

100% agree. It took my grandma about two weeks with a tablet to figure out by herself how to look up oil prices in the web browser. It was the first device she ever used that was more powerful than an old Nokia mobile phone and her first contact with the internet. We never taught her what a web browser was or how to use it.

Meanwhile my mother used a laptop and smartphone for years with everything that goes with it, and uses these devices mainly to complain about how they don't do exactly what she wants. It just has to work on the first try or the computer is broken; there's no in-between for her.


> ... how to look up oil prices in the web browser.

Grandma usually trades oil contracts by phone? Or Grandma heats with oil and needs a refill? Or Grandma has a salad dressing factory? Or...


I was also wondering that! I'm imagining my own grandmother: "No, I said 3,000 barrels of West Texas Intermediate, dammit! Make it happen or it'll be your ass!" *slams down rotary phone*


So many possibilities!


Some users also really want to learn, but can't, for some reason. My grandmother used a computer for years, I spent countless hours explaining it to her, but she never managed to be confident enough to break out of very detailed "scripts" (to open a document, read/write an email, etc) and try and discover things by herself. Perhaps this is an age thing; at some point, some people can't integrate a new paradigm?


One of my granddads struggles to deal with Word; the other installed Skype and found grey-market dealers for some goods.


The solution isn't to put things in preference windows either, because even smart users will find that cumbersome. There's an interesting pair of videos (if you have some time) that go over the UIs of two competing music scoring programs and why one is bad and the other good.

https://www.youtube.com/watch?v=dKx1wnXClcI

https://www.youtube.com/watch?v=4hZxo96x48A


Interesting, though of course partially subjective.

I switched to LilyPond for typesetting music. I guess it may not be the right option for those who actually compose; I mostly typeset classical pieces to add fingering and obtain better quality than the cheaply laid-out scores that are published today. Unfortunately, guitar sheet music is way behind the standards set in classical piano music, and just retyping things in LilyPond improves the quality by A LOT.


A video to teach me about UI! It is well known that text is a lot more effective than watching a video.


I have tried for years to educate users on basic tasks. A small percentage get it and use it; some basically get it and use it sometimes, forgetting other times; but some obviously pretend to watch the demonstration while actively refusing to think about and commit to memory what they have just seen.

In some cases I have worked with the same people for years, guiding them (IT-wise), being friends with them, watching them do domain-specific things that require good mental abilities... but also watching them misuse or fail to use software despite my repeated efforts to train them.

If you want to reach the greatest audience and also reduce support time/cost, then we _should_ be catering to the most ignorant users. But as I say elsewhere, we should have two interfaces - one for normal/advanced users, and one for the really low performing users.


> some obviously pretend to watch the demonstration

It’s easy to watch a lesson or demonstration and think you get it, only to discover some time later that you didn’t really understand what was being shown. A much better approach is to walk people through performing the task themselves rather than have them passively watch it being done. They will be much more likely to understand the task and to remember how to carry it out – and to retain that knowledge for a longer time.


I'd agree with you if we're just talking about business users, or even home users with high-educational attainment. But this article is specifically about the broader population.

In that context, I think your assessment that 'old GUIs' were more user-friendly is very rose-tinted. Back in the early 90s when those GUIs were everywhere, personal computers came with documentation, usually still on paper, because the devices weren't even user friendly enough that a typical purchaser could power on the computer and view documentation without help. And even then, these purchasers were a tiny subset of the population, typically with high educational achievement.

This was a time when public opinion hadn't even yet tilted in the direction of considering computer literacy to be a skill useful outside of an office building. Computer skills were definitely (and to some degree still are) something people would list on a resume.

Consider the introduction of iOS. Windows Mobile had more features, more applications, was available on cheaper phones, and was entrenched in the business market at the time. It even followed traditional Windows design patterns, and applications often had a menu bar, in line with your preference. They had plenty of documentation, and a link for 'Help' was always just a click away inside the Start Menu.

So, why was iOS so successful? It wasn't because power-users like you and I switched over. Many power-users mocked the device. It wasn't successful for including user documentation, it didn't and still doesn't. iOS was successful because it could be used by people who were unable or unwilling to read and follow documentation before using such a device.

Computer literacy is certainly a good academic initiative, but it won't help you make successful software for the mass market. The vast majority of casual computer users won't pursue educational endeavors to use your application. Most will opt for an alternative that doesn't require it, or they won't use it at all.

You should always consider the intended audience for your application. In our information bubble we sometimes lose an accurate perspective of the skills of the general public. This is of little consequence for many applications, but if your application is truly intended for the general public, like a government website, it's something that might need to be highly prioritized.


Agree, but the current fashion is unstructured, non-hierarchical UI. Amazon Music and iTunes on Windows are the poster children, where it is always unclear to me how I got to a screen and how to get back there if I want to.


This is why I love macOS. Every application has the menubar at the top with everything in the right place. Apple's design guidelines also dictate that every action should be available as a menu item. Preferences can always be summoned using the same keyboard shortcut, and File open/save dialogs behave the same. It just works. Except for those few applications that are plainly ported from Windows or Linux without a thought about the native platform they run on. They just don't fit in.


Then you want to disable mouse acceleration, and you're humped. Apple's approach to simplicity is just to extinguish user choice. That's not a great approach.


The trick to disabling mouse acceleration is not to move the slider all the way to the left. I believe it’s the position two notches to the right of the lowest setting which gives a linear response. Above and below that point you’re flipping the concavity of the curve.


Funnily enough, Office for Mac still has those classic menus with the same layout/options, whereas Office for Windows took them out.


"We should focus on educating users, not stooping to their level"

Educating users costs money as well. From a business perspective, no one is going to spend that money.


> GUIs of old had a specific design language. Like: File, Edit, Window, Help in the menubar -- always consistent, always in the same order, and always containing the same thing. Min/max/close buttons in the upper-right.

Mistake #1: Windows != GUI

Apple has/had another design language. But Apple got it more right. Ironically, Steve Jobs fulfilled Bill Gates' mission (roughly, a computer in every household) with the iPhone.

Mistake #2: "If one knows the GUI language they can use almost any application for many solutions at a basic level."

No and no. It becomes clearer that this is exactly the hard part: figuring out how to provide value to users without telling them they are too stupid to use our software and that we need to educate them first. If you had built the iPhone, it really would be just another Windows device. ;)

> Which also means that when something goes wrong you have no control over fixing it -- you're simply fucked

You are not. In the end, all a user tries to do is change a value in a database.


A Linux/Windows computer in a household is a device that allows both consuming and creating. Many, if not most of us got into creating using a computer just by having access to one and being curious.

An iPhone or an iPad is just a device for consumption. It completely hides, if not locks you out of, the underlying workings of the device, meaning that you learn nothing and you create nothing.


> An iPhone or an iPad is just a device for consumption.

Absurd. There are a huge number of tasks for which, if I'm only allowed one device to accomplish it, I'd definitely take an iPad over a laptop. Practically anything involving photography or recording video or audio, for instance. Many sorts of art. Music, depending on the task. If allowed an external keyboard it's better than a laptop for writing in some environments—the decoupled device & keyboard allow for more flexibility in arranging print outs, books, and any other stuff you need.

For some of it an iPhone, even, is better than a laptop or desktop. And it fits in my pocket. A laptop doesn't.

Slab-of-glass type computers are worse at some things than traditional desktops and laptops but wildly better than a "normal" computer at a ton of others, especially given space or budget constraints. The statement that tablets and phones are consumption devices to a greater degree than other computers is very wrong, and I can't believe it keeps popping up so often, after all this time.

[EDIT] I also object to dividing consumption and creation so neatly. Consumption is part of all sorts of creation tasks. Maybe most. When you look something up on SO, has a switch flipped so that now you're "consuming" rather than "creating"? If I'm reading a recipe on my tablet is that consumption? What if I'm cooking at the same time? Scanning back through a video on some home improvement project to check my work while I'm working? For that matter, add "basically anything that involves doing things in the real world" to the list of creative activities for which I'd take a phone or tablet to assist me, both as a relevant-media display device and as a tool itself, over a laptop. I use my iPhone a ton just about any time I do actual doing-real-world-stuff work.


Every 'creative' app I've used on iOS has been disappointing. Even if the GUI is decent, touch detection is so imprecise that it's always super easy to fat-finger things. FL Studio vs FL Studio Mobile, for example.

When I was younger I thought it would be cool to DJ with a three-iPad setup -- two iPads acting like a turntable/CDJ and a third acting as a mixer. Nowadays, putting on my performer hat, I would hate that setup, as I feel like I would be fighting with the touch interface. I'll take real buttons and knobs on dedicated hardware any day of the week.


Well yeah, of course dedicated pro hardware's gonna be better. Physical knobs and buttons are awesome. Most'd probably take those over a PC keyboard (the qwerty type, not like a midi keyboard), too, for real work in music, if not allowed both. I'm not claiming iOS devices do everything really well, but if you just want a portable all-in-one creation device for a wide variety of needs and don't care about programming very much there's a good chance a tablet's gonna serve you quite well.

I find the iPad great for drawing (with Pencil) and anywhere from pretty good to great for all kinds of other creative work, especially compared to a PC/laptop with either having only its out-of-the-box equipment (no big add-ons), which is to say it's excellent for hobbyists, small-timers or newbies to a whole bunch of creative fields. But of course I can also watch Netflix on it, and it's nearly useless for dev work except as a terminal device, so... that makes it consumption-focused while PCs are creation focused I guess? I mean I've been using PCs since the DOS days so I'm not some touchscreen native, but the assertion just doesn't make any sense to me, yet it keeps coming up in developer circles. I don't see them (primarily-touchscreen portables) in my life and the way I see them used by others as any more consumption-focused than PCs, really, they're just good at a bunch of stuff PCs aren't so great at, and bad at some stuff PCs are good at. I do not understand the source of that claim, a single bit, unless "consumption" is "anything that's not computer programming".


You should have a play with Shortcuts on the iPhone - or Swift Playgrounds, or any other of the plethora of 3rd-party coding platforms that run on iOS.


By being given an iPhone/iPad and clicking around, you wouldn't even learn about what a filesystem is, let alone about anything else in terms of how your device works.

On a PC, just by being bored you can learn quite a bit about how the OS works; you can access devtools from a browser; you can learn tons and tons by trying to get some old/obscure game to work; you can Google "how to make a game", which would probably lead you to a download page for Python, which you could install and start writing some scripts; you could easily stumble upon Blender and some tutorials and start 3D modelling.



Most of these are for education and not real work.


You have a very narrow idea of what creating is. Non-programming tasks don't require low-level system access.


Realtime, low-latency audio does, though


iPhones and iPads have plenty of creation tools available. What you say is more true of Android, which is fundamentally an advertisement delivery platform.


Nit pick: Bill Gates achieved his mission of a computer in every household long before iPhone. Jobs just changed the paradigm to a computer in every pocket.


Mistake #1: Windows != GUI

I think that's only true in a literal sense. For many people, it's untrue in a practical sense, and those people live in a practical world.


It frustrates me that the takeaway of this post, and the implicit conclusion of many software designers, is "keep it extremely simple, or two thirds of the population can’t use your design."

I think a better conclusion is "if these skills are important, design software which aids in and rewards the development of them."

If your software is designed with some UX which is hyper-optimized to get users to do some small handful of tasks you thought of, instead of designed as a simple tool they can understand, then it shouldn't be surprising if users don't learn to do anything except those tasks.


Conclusions made by design folks are difficult for me to take seriously, because the journey mapping and design process is usually based on consensus on desired behaviors. Design isn’t science, it’s usually a process framework.

Best case it’s an 80/20 situation. Few design teams have a mandate or desire to dive into the 20%. They love to remove “distractions” and usually don’t understand that tasks that happen once a quarter are often as important as tasks that are done 3x per day.

I’ll use iMovie as an example. It’s a well-designed, great app and incredibly intuitive... if you are on a path/experience that was designed for you. My then-5 year old could make shockingly well done little movies without the ability to meaningfully read. But... if you need to step off the path, it’s a cliff.

When I was in college, in the 90s, I worked at a big box retailer selling computers and similar gear. Their ERP was an AS/400 text-based system that was not at all intuitive, but with about 30 minutes of training it was usable. After a few weeks it was a trusty tool that could be adeptly and efficiently wielded. The difference was that it was a tool, and the practitioners of the tool could make changes to how the tools were used to get shit done.


I’ve got 20 years in tech and I find products like Snapchat extremely difficult to use. It’s NOT intuitive at all. Is it just me, or is it terrible?


I think by most standards Snapchat’s UX would be judged pretty harshly. But it has a strong following because at its core it’s a power user app. Making features/actions discoverable takes up UI space that Snapchat almost entirely hides behind swipes or unlabeled buttons.

Snapchat’s philosophy is interesting because it’s designed for people who already know how to use Snapchat. It actually increased the social aspect a bit because most everyone probably has a story or two of being shown how to do something by someone else.


That’s a good way to describe it. It’s interesting that an anti-pattern (bad UI/UX) can be an asset like that. Snapchat is the Bloomberg Terminal for teens.


>Snapchat’s philosophy is interesting because it’s designed for people who already know how to use Snapchat. It actually increased the social aspect a bit because most everyone probably has a story or two of being shown how to do something by someone else.

Very true. Just last night a person I've been on a few dates with showed me that they apparently added multiplayer games to Snapchat quite a while ago, and I had no idea they existed til then.


Mobile app discoverability is terrible, it’s not just you. I guess you’re just supposed to monkey around with it and hope you find new features.


Mobile app discoverability could be great, but we're currently going through this excruciatingly hostile minimalist design fad where every feature needs to be hidden away out of sight, with barely understandable affordances to get at them.

> [recommendations in the article]

> * Little or no navigation required to access the information or commands required to solve the problem

> * Few steps and a minimal number of operators

> * Problem resolution requiring the respondent to apply explicit criteria only (no implicit criteria)

I think this is telling us what to do. Stop hiding things away behind hamburger menus and replacing interactive, rich dialogs with a chain of single-choice dialogs. Don't hide context or make assumptions. I've heard designers say things like "you should have a maximum of one decision on the screen at once" and "you should provide a maximum of two choices for each decision." But why? It's just more steps, more hidden assumptions, longer dialog chains for the user to navigate and more complicated logic (e.g. handling "go back" in the middle of one of these chains). But hey, I don't have a fine art degree so my input about a good user experience is worthless.


This is the information age. Just Google it. I don't mean that flippantly. One of their best features, the Snap Map, is nigh undiscoverable. But after I saw someone else use it and couldn't figure out how to access it, Google was able to tell me. It's absolutely a failure of their UX, but say, what's the discoverability of regexps with and without Google? Jira also suffers from the same problem - it's often far easier and far faster to just Google how to do something, rather than figure out e.g. which particular permissions system you want to be in, in order to be allowed to do a particular action.


My point isn’t that I can’t figure it out, it’s that it’s painful to use. Someone is paid to be in charge of the UI/UX; their objective is mysterious because it’s not usability.


Usually with Snapchat, new features spread organically. Teens show other teens how it works, etc..


It's hilarious watching engineers give up on a mobile app and then go right back to fixing bugs in complicated code


Snapchat is very opaque, but greatly encourages experimentation. It’s got a learning curve, but fits a tremendous amount into a very sparse interface. In some ways it is like vim.

I’m not sure if that is praise or criticism.


Snapchat is a weird edge case because they've claimed they intentionally made some features difficult to find in the UX to encourage conversation about the app.

Don't know if it actually pays off positively for them, because I also find it confusing. But there have definitely been many moments of "Whoa, you can add a friend that way? You can do this by holding that button down?" etc.


I find Snapchat impossible too, but I get the feeling that to an extent it's deliberate gatekeeping to keep us old folk out.


Which is fair, we already ruined Facebook.


It is terrible.

But it's a nice counterargument to all the 'make the UI for the minimum viable user' thinking. It's a demonstration that if people care even a little bit, they can figure out even terrible, undiscoverable UI like Snapchat's.


I've been using snapchat since it came out, and it's definitely gotten worse like a lot of users have said. It used to be that you'd have your list of recent snaps and a camera—that's it.

Then they added stories and a contact list, and then they had to make money with the platform and things truly went to shit. That being said, it's not nearly as clunky as Instagram's clone. I'd say regular users have gotten used to the death by a thousand cuts at this point, like Latin evolving into Vulgar Latin and then the Romance languages, but it's totally incomprehensible to someone who's just downloaded the app.


My understanding was that Snapchat is like that on purpose. Facebook became uncool with younger people because it became cool with older people. Snapchat wants to avoid that, so the overuse of hidden gestures acts like a filter. I found it almost impenetrable (I'm over 30) and had to ask my friend's 18-year-old sister how to use it.


Should I also learn how to repair my car instead of just taking it to the dealer? It’s called specialization of labor. Most of us make an above average wage dealing with computers and pay other people to know things we don’t know.


I see the car analogy used regularly and I think it’s completely off base. A much better analogy for computer skills is literacy and numeracy. We don’t tell people “don’t bother learning how to read, you can listen to audio books on audible!” just as we shouldn’t tell people that computer programming is only for professional software developers (or anything else involving intensive skill mastery related to computers).


Being able to use a computer is like being able to drive a car. After a little practice and learning a few rules most cars are pretty easy to just get in and drive. It takes a little more effort to learn to drive stick, and formula one cars are only driven by professionals.

Computer programming is like building, modding, or repairing your car — you can learn but it’s an activity done basically only by hobbyists and professionals.

Troubleshooting computer issues is like car maintenance. Everyone can fill the tank with gas, most people can change a tire, fewer can change their oil, and fewer still can jump a battery.


Those are really poor analogies. What you call "using a computer" means using a pre-made application that turns the computer into an appliance. Instead of making inappropriate analogies (IMO usually a redundant phrase), look at where the tasks fit into Piaget's model of cognitive development.¹ Driving a car is the sensorimotor stage. Proficient computer use requires a high capability of the formal operational stage.

The bad news is that a lot of people will never attain the formal operational stage, and what is even worse is that the percentage of people that do is falling.² The results of this study are not surprising.

The take-away should be that it is pointless and counter-productive to try to dumb down the user interface for applications in domains that require a high level of cognitive development. No amount of picture icons and hamburger menus is going to make up for an inability to reason abstractly. What it will do is make it impossible for anyone to ever use the application efficiently.

¹ https://en.wikipedia.org/wiki/Piaget%27s_theory_of_cognitive...
² https://oneofus.la/files/flynn2018.pdf


Pretty much right. Yes, it's useful for almost anyone to know programming, but we can't learn everything in the world that's useful. A carpenter is probably thinking it's crazy that I can't build my own bench and have to pay loads more for someone else to do it.


You can make a bench with 4 cinder blocks and a plank. Now a good bench ...

I guess the equivalent in computing would be spreadsheets? You can automate and build awesome things but god help you if you wanna move it somewhere else


As long as such a bench gets the job done, the person making it wins.

Similarly with spreadsheets. Sure, they get ugly. But they allow you to get work done better and faster than it would be otherwise (if it would be done at all). The alternative usually involves finding a ready-made domain-specific application and adjusting your (or your company's) workflow to it, or pestering someone to make it for you, both of which are frequently bad trade-offs. It takes time to arrange, often silos you off from your data, gives you lots of irrelevant stuff to learn besides your task, and prevents you from iterating on your workflow. I'd say the last one - iteration on workflow - is the biggest source of Excel (ab)use. It's hard to lock down your process in a domain-specific tool if the details of that process are fuzzy and subject to change.


The idea of a house full of hacked together cinder block and chipboard furniture doesn't thrill me. It might work for a workshop but I think I will just pay for the expert made stuff in my house.


And car mechanics would look at you just as derisively for not knowing how to fix your own car. They would also say that fixing a car isn’t just for professionals. The average person doesn’t need to know how to program any more than the average programmer needs to know assembly (I do). Everyone is ignorant to someone who knows something that they don’t.


Is it fair to say that because there will always be more specialized skills to be learned, none should be learned?

Everyone may not need to know how to repair their car, but to perform basic, routine maintenance on it and to learn driving technique as intended is something that, when neglected, can cause major inefficiencies.


Specialization says just the opposite: that everyone should specialize in a combination of what they are good at and what there is demand for, because it is much more efficient, and trade the goods and services that they specialize in for goods and services that they don’t. In the modern era, instead of trading directly, we use money as an intermediary.

Economics 101 says just the opposite, that you create inefficiencies when you don’t specialize. Why is car maintenance something that everyone should know, and not plumbing, electrical work, or carpentry?


> Economics 101 says just the opposite, that you create inefficiencies when you don’t specialize.

That's simply bullshit. If you specialize in only installing tires on cars, but not removing them, you have specialized more than a business that swaps your tires, but you have created massive inefficiency by requiring your customers to somehow move around vehicles without tires for you to install tires for them.

There are particular circumstances where specialization increases efficiency, and there are (obviously) other circumstances where specialization decreases efficiency, so it's nonsensical to just say that specializing is always the more efficient choice, which is why all your analogies fail: You use an example where specialization (arguably) increases efficiency, then you completely fail to explain how computer skills fall into the same category as that example, and then you conclude that therefore it is in the same category.


And you left out the part where I said “and there is a demand”. There is no demand for someone who can remove a tire but not install it....

Every specialization is about knowing the level of integration and specialization.


> And you left out the part where I said “and there is a demand”. There is no demand for someone who can remove a tire but not install it....

Well, then that's simply the claim that markets solve all problems optimally ... which is equally bullshit?

> Every specialization is about knowing the level of integration and specialization.

So ... specialization is always better, except when it's not? Yeah, duh!? How exactly does that help us with determining what the right level of integration and specialization is?


On a personal level, the “right level” depends on your disposable income and your talents. I have the disposable income to throw money at a lot of things that I don’t want to do - not bragging, almost any software engineer in the US should be in the top quintile of earners for their local market.

On a broader scale that’s the entire idea of the value chain.


If you know nothing whatsoever about cars or maintenance thereof, you will get taken by the salesman, then you will run your cars into the ground burning money, and then get ripped off every time you need someone to maintain or fix your car, because you don't know enough to know when someone is bullshitting you.

People are expected to know SOMETHING about cars because they are frequently a huge expense that you are required to expend to be able to exist in a lot of places.

Econ 101 in this case assumes that time and money are fungible in any particular increments and that the money they earn doing whatever they are optimal at is greater than the cost of the specialist's services.

Example: someone making $20 an hour needs a professional service that requires 3 hours of work for a professional, at a cost of $900. Learning to do it inefficiently over 5 hours and then spending 6 hours doing it seems like a huge waste, but consider:

- Just because their labor is worth $20 an hour doesn't mean that they can trivially, in the context of their current obligations, convert a day off into extra pay right when they need it.

- 11 hours x $20/hour is $220, only about 1/4 of the money required. Even given an immediate way to convert time into extra pay, covering the $900 would require 45 hours of additional work.

Incidentally if you own a home you probably ought to learn at least enough basic maintenance to fix simple things.

This is often efficient in practice.


Yes, there are transaction costs to doing anything that you pay someone else for. In the IT industry it’s just like deciding to build versus buy, or to use managed services. Setting up a few VMs on Linode and hosting all of your own databases, queueing systems, etc. is much cheaper than buying the same from AWS, yet still organizations pay more for AWS every day; why is that?

Every time you go out to eat, you are paying a markup over something you can do yourself - do you go out to eat?

Would it be more efficient for me to cut my own grass and maintain my yard on the weekend than pay someone else since I can’t convert that time I save on the weekend to cash - of course. But that’s time I can spend with my wife or relaxing. I also haven’t washed my own car, preferring to go to the car wash since I got my first real job out of college.

My maternal grandfather was a “man’s man”: he built his own house, could fix cars, took his pigs to the slaughterhouse, and had a ranch with cows that he maintained until close to the time he died. On the other hand, my father isn’t as mechanically inclined, always looked up to his father-in-law, and it took him years of convincing that it wasn’t emasculating to pay someone to do something that you’re not good at.


You're ignoring transaction costs.

Doing a bit of maintenance can take you 2 minutes; having it done by an expert could take 2-3 man-hours once all the inefficiencies are considered (getting to the mechanic's shop, setting a price, waiting for things to be done, all the overhead of running and advertising the shop).

Based on my experience with talking to clients and observing them while doing B2B, if the average office worker had decent Excel and Googling skills (let alone the skills of the average vim user) they'd save a couple of hundred hours a year.


People do disregard numeracy, if all the "LOL, your teacher said you wouldn't always have a calculator in your pocket" memes are anything to go by.


I guess a lot of those people end up in poverty, though probably not as many as it would seem they should, because society does a lot of work to protect unwillingly innumerate people from exploitation, which makes it easier for lazy people to disregard numeracy without consequences.

Still, numeracy is not being able to multiply a bunch of 3-digit numbers on a piece of paper. Numeracy is primarily knowing which numbers to multiply, or add together, or subtract from each other to learn something you need, and that you can do this in the first place.


Maybe you don't need to know how to repair your car, but it's easily arguable that every car driver should recognize the things happening in their car; otherwise, they are an unsafe driver.

On the simple side of things: is your check engine light on, are you dragging your air dams, are your lights on, do your tires have any tread?

Slightly more complex: recognizing that your car is overheating, recognizing burning oil, brakes on the wear pad.

More complex: is the suspension connected to the body?

What's scary is the number of ride-share drivers (and taxi cabs) that continue to (are allowed to?) drive with some of these serious issues happening, even in states like California with vehicle inspections.


If my check engine light is on, it shows up right on the panel, I take it to the car mechanic. How much do you have to “know” to see a light?

Most problems with cars (except maybe tread), you get some combination of an indicator light, something smells funny, sounds funny, or your car drives differently. In either case, you either take it to a mechanic or if it can’t be driven, you get it towed.


To see a light and to take appropriate action when you see a light are 2 different things.

A single light can mean anything from losing 10% fuel efficiency to the engine seizing and catching fire if you keep driving for 10 minutes.

Relevant fact: there are on average 152,000 car fires each year in the U.S., and most crashes do not result in fire.


Take a look at the top posts on /r/justrolledintotheshop and you will see plenty of examples of people who drive with their fingers in their ears and the dashboard lit up like a Christmas tree. And those are only the cars that people begrudgingly have brought in; probably 10 more like that on the road for every one in the shop.


These people know something is wrong with their car and probably couldn’t afford to get it fixed. No one chooses to not get their car fixed if they have money in the bank to take it to the car dealership.


> These people know something is wrong with their car and probably couldn’t afford to get it fixed.

But, without cursory knowledge of car issues, that decision is unwise, negligent at worst. A car breaking down in the center of traffic has negative externalities. A car fire (152k/year) has negative externalities. A tire or brake failure resulting in an accident has negative externalities.

Even the seemingly innocuous check engine light being on from emissions equipment has negative externalities.


I’m not saying it is a wise decision. No one would say not paying your rent and being evicted is wise either. Poverty forces you to make bad choices sometimes.


So this is where the analogy works: it's wise for the average person to have moderate computer skills, just as it's wise for anybody with a car to know enough about cars to accurately assess risk.


What specialist does the labor get offloaded to, if someone doesn't know how to keep track of their data, doesn't know how to do simple research on the Internet, or doesn't understand how to secure their computer? Instead, people just lose their data, believe false things, and get pwned.


You pay someone to do it. My uncle has owned a car repair shop for 30+ years and he pays someone to do all of the office work.

Your expertise is someone else’s “undifferentiated heavy lifting” no matter what field you’re in.


Well that person doing the repair work isn’t going to use some dumbed down tools. Somebody has to specialize.

Plus the point is that as cars became more ubiquitous, once esoteric skills became the norm, such as checking oil, filling up gas. At some point the baseline has to advance.


> Plus the point is that as cars became more ubiquitous, once esoteric skills became the norm, such as checking oil, filling up gas. At some point the baseline has to advance.

Yes and no. On the one hand, as early cars become more common, more people had to deal with them. But on the other hand, cars became simpler to use over time.

Tinkering with your car used to be a hobby of many people and a necessity. These days, there's not much user servicing left. (I don't even know whether you still have to manually check the level of oil? Don't they have sensors for that these days?)

Gas station pumps have also become much easier to use. It requires some intention and a minimum amount of skill to overfill your tank these days. Similar considerations apply to using computers.

Starting your spreadsheet by typing something at the DOS prompt used to be how things were done. These days, vastly more people are expected to be at least somewhat familiar with spreadsheets--it's no longer something to highlight in your CV--but starting the program you use to do so has become simpler.

Similar countervailing developments have been common since at least the industrial revolution. See https://en.wikipedia.org/wiki/Deskilling


> (I don't even know whether you still have to manually check the level of oil? Don't they have sensors for that these days?)

Most cars do have sensors for this, but they often fail, and even if they're working fine, tell you at a warning level, not at a full level. If you want to be a prepared driver you'd check your fluids regularly, but especially before long trips; The bonus is that the sensors can't tell you that your oil is prematurely dirty, so just by looking at it, you get more information.

This reminds me of Obama recommending people check their tires; yes, modern vehicles have TPMS, but even if drivers heed the warning, it often doesn't come on until a tire is significantly low, around 20% under-inflated.


There are so many mechanics that will do free or low cost inspections for things like that, why wouldn’t I do that before going on a long trip?

Speaking of which, that's yet another thing we pay other people to do. If it's a trip longer than about five hours we fly, or, if we are being cheap and it's not too far (around 6 to 7 hours) and it's possible, we take the MegaBus.


Going to a mechanic to have your oil level and tire pressure checked sounds like you would spend more time driving than just doing it.

Then there is the fact that about half those places specialize in making up stuff that is wrong with dumb people's cars to rip them off.


We also fly out of Atlanta - the world's busiest airport - and live an hour or more away depending on traffic. The calculus of whether it's worth the hassle of flying, driving, or taking the MegaBus changes when you consider that the ceremony of getting on a plane can add 2-3 hours with travel, security, and getting there early.


It would be great if everybody did this!

Or even rental car companies.


Interesting. Do you know why the sensors seem to have a binary output only, as opposed to checking for full level and dirt?

If we put our minds to it, these days we can often make sensors that are much better than humans for these kinds of applications.

Btw, you are talking about the former US president Obama? Seems a bit weird for him to care about car maintenance?


It was during the 2008 campaign, with high fuel prices; he was ridiculed for the simplicity by the right, but the NHTSA estimated that 25% of American tires are under-inflated, at a cost of 2.8 billion gallons of gasoline every year.


> Btw, you are talking about the former US president Obama? Seems a bit weird for him to care about car maintenance?

Why wouldn't the President care about the safety of millions of potential death traps driving to work daily?


More topical was the fuel efficiency at the time

https://www.politifact.com/truth-o-meter/statements/2008/aug...

Estimates were that 25% of American tires were underinflated, wasting 2.8 billion gallons of gasoline every year.


Yes, unpopular opinion these days, but car owners should really know how to do at least the bare minimum of basic car maintenance. I'm not talking about replacing the suspension or re-building the transmission, but things like changing oil, brake pads, and refilling washer fluid. The very basic basics. Not only does it save you money (who wants to have a mechanic do something for $200 that you can do for $30 + a little time) but knowledge of a tool you use every day is valuable in and of itself.

Wow, -2 in 40 minutes. I knew unpopular but is it that controversial a view??


I live in an apartment complex and park my car on the premises. I'm not allowed to change my oil on the property due to the landlord's fear of oil left on the parking surface, don't have anywhere to safely store tools or jacks to change a brake pad, and have no where to keep a half-used container of washer fluid outside of my living space (and I'd rather not keep it in my living space).

That said, I know how to do all of these things, I actually service my car and a motorcycle myself despite the apartment complex's "rules", but I think for a lot of car owners they're not in "safe" space to do their own maintenance. There's also the up front cost of buying tools. I think a lot of car owners in similar situations are relegated to taking the car to a shop for simple repairs as a result. Any advice?


Here we have self-service maintenance/cleaning places for cars (often at gas stations). So if you don't have your own driveway/garage you can rent some space and have access to tools and a place to do maintenance.


I’m sure if you find a nice farmer you can also pick up a chicken, manually snap its neck (seen it done before, my grandfather had a farm), and cook it yourself...


I was providing an example of what you can do if you want to service your car yourself but don't have the space where you live. Nobody says you have to if you prefer someone else do it for you.


There are a lot of things I could save money on, but I make a good living developing software so I can pay other people to fix my car, press my clothes, mow my lawn, etc. I work during the week, money gets deposited in my account so I can enjoy the weekend doing things I want. We even spend extra going to restaurants so we don’t have to cook!


I guess I'm just with Heinlein on this one: Specialization is for insects. Interdependence is great if you can [currently] afford it. I sometimes pay someone to do something that I know I can do myself. I consider it a guilty luxury. But what's the plan if you run into hard times? When money stops magically appearing? Stop driving because you don't know how to keep your car functioning? Wear dirty, wrinkled clothing because you can't do laundry? Stop eating because you can't cook?

Being able to transform money into convenience is a great benefit of living in a modern inter-connected society. But it's only great while it lasts, and relying on always having this ability is a risky move. When my parents ran into hard times they raised and slaughtered hogs. When things got better we stopped. I don't think I'd be alive today if they didn't have that know how.


If I am running into hard times I’m probably unemployed, the little money I save by pressing my own clothes isn’t going to make a dent in my inability to pay our bills - and if I am not working, I won’t be wearing clothes that need pressing.

If the hard time is temporary, that’s what credit is for. If it is systemic, I’ve got bigger fish to fry - downsizing our house, getting job training, moving near a bus line to save money on car insurance, etc.

It’s just like the BS “Latte Factor”. People are not unable to save for retirement because of a $6/day latte. It’s usually because they have high fixed costs relative to their income.


80-90% of people earn less


The average car is almost 12 years old (https://www.cnet.com/roadshow/news/average-vehicle-age-incre...) and most people aren’t fixing their own car. That means most people can afford the maintenance needed to keep a car on the road.

As many dry cleaners that I see even in the less affluent parts of the city, people are able to afford to do that too.

Most less affluent people who do own their own home (as well as many who could afford it) do cut their own grass.


I'm perpetually wondering about this. As I do a lot of maintenance myself (in the house and on my bicycles) and as I derive satisfaction from it I understand what you're saying. It does feel liberating to have agency in these matters!

On the other hand these jobs do take space in my life and in my mind. It is questionable whether we should "force" people to do simple car maintenance. Our society runs on division of labour and everybody places their cutover to experts somewhere else. You say they could benefit a lot from gaining some mechanical understanding of their car's operation. And I feel the same. But they may wonder why they should learn a skill that takes time away from other things important to them.

Sure I may question how they spend their time. But now we've moved away from how knowing basic maintenance is good for you to a more general outlook on life. Watching a tutorial will improve one's future life. But we can't say that watching a car maintenance tutorial will be the right thing. Maybe it's a tutorial on Excel? Which would be more in line with the topic of the article anyway.

So I agree with you in general but not in specifics :-)


Brake pads are not “the very basics”; they require owning a set of tools.


I don't think having a jack stand and a 40x10x10cm toolbox squirreled away in a closet is that big of a hurdle. The tools required for all the examples I mentioned are extremely minimal, and the skills can be picked up by watching 10 minutes of YouTube.


Or I can, drop my car off on the way to work, take the shuttle to work, take Uber home, or bring my laptop with me and earn my living doing what I’ve done for 30 years and pay someone else to do it...


You are a rich person with money to burn and a job that offers you time liquidity. Most people have neither, which is why - and not because of their irrationality - arguments from opportunity cost don't work as well as one would expect.


Seeing that “most people” are paying someone else to fix their car, that means that most people find a way to pay other people to do it instead of learning how to do it themselves.


Nobody is arguing with you about what YOU ought to be doing. I personally disagree with this being presented as good general advice to most people who make a lot less.


> Nobody is arguing with you about what YOU ought to be doing

Current subthread started with this: "Yes, unpopular opinion these days, but car owners should really know how to do at least the bare minimum of basic car maintenance"


Car owners are a class of people that ranges from people living in said cars to the hyper-rich.

It's probable that Bezos has to give zero fucks beyond what brand he prefers. People living in their car are probably just praying it doesn't break down. In between there is a giant middle ground that composes 80% of the car-owning population, whose benefit derived from car know-how is inversely proportional to their wealth.

Most car owners should learn the basics of how their car works and how it is maintained, even if they intend to outsource that labor to others, just so they aren't trivially taken advantage of. If we assume that MOST people aren't born into wealth, most people could stand to learn the basics during the time when they have more time than money, and then they could make productive use of this know-how later.

I'm not sure how you can even argue this point.


It’s called an existence proof. Most people don’t know anything about car repairs, yet they manage to go through life fine letting other people repair their cars.


There is an entire industry devoted to turning small regular maintenance tasks into big paydays based entirely on lies to people who don't know anything.

I don't think your existence proof is a good one. It's like proving cigarettes are OK by noting that billions use them and only millions die.


Repairing a car is comparable to debugging/writing software. You should know how to operate the car, park on a hill, know what all the controls and indicators do, etc. A power user should know how to check fluid levels and tire pressure, and be able to comprehend the maintenance schedule and have it performed.

Most Americans are as knowledgeable about how to operate their automobile as they are about software. That is, not very.


Yes, you absolutely should. And they should make cars easier for everyone to repair too.


Should I also know how to do my own plumbing and electrical work? Carpentry? Go out and hunt and grow a garden?


Absolutely. We should strive to master our surroundings. One thing is choosing not to repair your car today, because you have other things that give you greater value; another thing is not having the choice to repair it because you don't know how to. At a minimum, a human should know the basic operating principles, repair, and maintenance of their machines and buildings. Unfortunately, as most people choose to remain ignorant of their surroundings, manufacturers get away with making devices less and less serviceable.


At what level should I know it? There is a famous essay, “I, Pencil”, that posits no one person knows how to create a pencil (http://www.joshharness.com/2012/10/nobody-knows-how-to-make-...)


I can tell you how to make a year's supply of charcoal in a couple of hours. Or, how to use a knife to make a writing quill from a feather, and how to make a lifetime supply of ink from household materials.

Making a pencil is a highly fragmented industrial process, even at the 19th century level. Just making the "lead" is quite sophisticated.

There is much satisfaction in being a maker - or at least being more than just a consumer.


Some (many) users actively refuse to learn. There appears to be some psychological barrier that prevents them from even trying (perhaps similar to how some people panic at seeing any math beyond simple arithmetic). That's not to say the people are stupid - many of them are relative experts in some other domain, but when a computer interface is presented, they shut off their brains.

While I find "wizard" interfaces frustratingly slow to use, I think they are a better approach for many users. Here's a silly but not so terrible idea for an email interface, wizard-style:

1. "You have a new email from bob@example.com. Would you like to open it? [Open] [Save for Later]"

2. (opened) (display email). "Do you want to reply to Bob now? [Reply] [Do Something Else]"

etc. etc.

This kind of interface could be handled by anyone, and it would probably work well for people with different physical or visual challenges.

Anything more complicated than this is just beyond the willingness or capability of most people to handle. What's really scary is that some people operate their cars much like they operate a computer - doing unnecessary ritualistic things that have no benefit or produce the desired outcome as a side-effect, leaving some systems in a permanent state when they should be enabled/disabled at different times, and so on. The tendency is to think that people are just generally incredibly stupid (unwilling to learn, not incapable); but perhaps it's just that our UI approach needs to actually be two approaches - one for users, and one for refusers.
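
For concreteness, here's a minimal sketch of that wizard-style flow as a toy Python console script; the inbox contents and prompts are made up purely for illustration:

  # Toy wizard-style flow; the inbox contents and prompts are invented.
  inbox = [
      {"from": "bob@example.com", "subject": "Lunch?", "body": "Are you free at noon?"},
  ]

  def ask(question, options):
      # Present one decision at a time; keep asking until we get a valid answer.
      prompt = f"{question} [{'/'.join(options)}] "
      while True:
          answer = input(prompt).strip().lower()
          if answer in options:
              return answer

  for message in inbox:
      if ask(f"You have a new email from {message['from']}. Open it?", ["open", "later"]) == "later":
          continue
      print(f"\nSubject: {message['subject']}\n{message['body']}\n")
      if ask(f"Do you want to reply to {message['from']} now?", ["reply", "skip"]) == "reply":
          reply = input("Type your reply and press Enter: ")
          print(f"(pretending to send {reply!r} to {message['from']})")

The only point is the shape of the interaction: one decision at a time, with the next step always spelled out.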


Users don't read so I don't know how well this is going to pan out.

You're making the mistake of thinking users are carefully operating their machines with a purpose. It's more like being surprised by a spider: clicking on anything that moves, and if it starts bonging, clicking shut down and trying again later before you delete your life.

/12 years tech support and have a family.


This brings back memories of doing parental tech support over the phone.

"can you read what is on the screen right now"

"There is nothing on the screen!"

"there is a box on the center of your screen with words, please read them"

"OH that box.."


That's arguably a consequence of the discussion being over the phone, not of it being about computers.

In my experience, any kind of task which involves giving people instructions is extremely frustrating when done over the phone; there's just too much context missing, so you have to start being extra precise to compensate - but then, most regular people aren't used to being talked to with precision, so they start getting annoyed and push back.

The most common example I can think of is trying to explain to someone over the phone how to get somewhere, when you know the lay of the land but they don't. A lot of frustration and useless info gets exchanged before they're even able to communicate to you where they are.


Shoulda just FaceTime'd...


I think this type of approach doesn't really help anyone, it's just condescending. As a user I don't want to learn, I actively refuse to. I want to do what ever I need to do and move on. Unless I use your application professionally, I don't want to learn it.

I think the crux of the problem is secretly hidden here, though. Many creators overestimate how interested other people are in what they made. The complexity of using your solution is always at odds with how painful the problem I have is. If your solution isn't easy to figure out, it can be easier to just solve the problem in another way.


It's obvious to me that this is because of the bubble that I ostensibly live in, but this is still simply shocking data. Only 5% of people can perform more complex tasks than finding an email with a particular sender, subject and date? I suppose I severely take for granted an upbringing that surrounded and supported me in learning to adapt to new tech.

It does make me wonder about the sustainability of technological growth however. If there is such a small portion of folks that can use all of this new tech, how far reaching can it really be? Will this divide begin to shrink? Or stay static as one small portion of the population drifts further and further away from the other...


When this came out I was skeptical and looked at transactions that I had metrics for to corroborate with some colleagues. Not at liberty to discuss specifics, but we all rejected the proportions that the study used.

End of the day, complex processes get completed when offered to the public, and computer-based processes are generally the norm. People somehow get hunting licenses, navigate complex DMV processes, book transportation, and get passports, even when they are not of above-average IQ.

I think if you asked these questions based on end goal rather than method, you’d see smaller numbers of level 0/1 people.

It sounded to me like the assessment was framed around PC use — most people don’t use PCs! I’ve seen social services scenarios (where education, literacy, and language competency are an issue) in which getting complex workflows completed on mobile (which is ubiquitous and often more complex) goes significantly better than a task of similar complexity (travel vouchers) given to a professional audience on PC.

If less than 5% of the population is capable of performing complex tasks, society would break down. Something is wrong with the assessment of complexity or questions asked.


There's some technical blindness that comes with PC use. While people can navigate complex situations fine by talking to humans, put them in front of a user interface and they can't tell an affordance from a hole in the ground. They're not dumb people, they just spent all their time interfacing with humans instead of with computers, so when faced with a technical challenge they don't really know how to start.

(I'm sure you've met people who are the inverse.)


I think part of that contrast is in face to face situations, the person on the other side knows inside and out what needs to be done as part of their job. You can call a hotel with zero knowledge on how to book a room, and the concierge will hold your hand and walk you through the process until you have a room booked.

Even online chat support is pretty lackluster and puts a lot of work on the user. At an old job of mine, IT wouldn't even mess around with trying to troubleshoot a ticket going back and forth chatting or emailing. They'd just remote in to your workstation and fix the problem, same thing a concierge at a hotel would do with their keyboard and mouse rather than you struggling through booking on hotels.com and managing an online chat.


There are, but based on the limited information, I remain skeptical of the study.

The example of a difficult task was to “schedule a meeting room in a scheduling application, using information contained in several email messages”. Many of the other examples given are email-related tasks.

If you meter and study Office app use, you’ll find that features like this are rarely invoked. For the example above, where I have studied behavior in significant user populations, I would guess that 5% of users book a meeting room resource in any 30-day period, and only 2-3% do so more than once in a 30-day period. The numbers will vary in some populations, but usually you see 80% of the booking done by a small number of people (managers and admins).

The problem with this is that the study is looking at a narrow range of office worker tasks and applying them to the public at large. It’s an assessment of MS Office skill sets across the population, not ability to perform complex tasks.

There may be more depth in the paywalled study, but this article didn’t surface it.


Right, but if you give that task to pretty much anyone here, they'd nail it no prob. Even if they had to do it in Thunderbird or GCal instead of Outlook. Heck, I'm sure I could manage it in Lotus 1-2-3. I don't think reliance on office software or oddness of the task is in question -- and if you're saying it's due to unfamiliarity with the software, well, that's still very bad news for all the software writers out there.


I sometimes go to "kaiten sushi" (sushi comes around on a conveyor belt and you can pick what you like) with my mother-in-law. There is an order system where you can use a touch panel to order something specific if you want it. Granted my mother-in-law is 86 and hasn't really used computers before, but it always amazes me that she can't figure out "buttons" on the screen. For her, they are just text with coloured backgrounds. She tries reading the text, but it just doesn't make sense to her because it is essentially splatted randomly around the screen. Eventually she realised that text with a red background is a button and so now she thinks that all buttons are red! She has not been able to jump to the idea that these rectangular areas of the screen with the same background colour are buttons. I've tried explaining in the past, but she's completely uninterested and so never listens to me. In the end, watching her struggle is so interesting that I've stopped trying to help her. We take a lot of things for granted when we think about UX.


> I've tried explaining in the past, but she's completely uninterested and so never listens to me.

All it takes is one of her friends or influencers to explain it, and voila, no longer allergic to technology.


I don't think this is a new feature of humanity. Consider the people you know who don't care to learn about, or pay attention to: Food (cooking, sourcing ingredients), cars, health care (e.g. what drugs do what and how; what makes your body+mind feel good or bad in the long term), electrical wiring, plumbing, "nature", etc. Computer technology just seems like another aspect of the world which many people are happy to ignore. Also... consider the number of professional computer programmers you know who don't read the readme+API dox but jump straight to Stack Overflow.


Wow, I regard all of those to be interesting subjects, and consider that an adult should have at least a cursory knowledge of all of them. However, you are right, plenty of people just don't want to know anything about any of the above.


Consider this fact: 70% of the adult population of the United States is overweight or obese.¹ An uncharitable interpretation would be that 70% of adults in the United States cannot manage to feed themselves appropriately.

¹ https://www.niddk.nih.gov/health-information/health-statisti...


...or that they really really love to eat.


I have a cursory knowledge of how a car works but if there was a problem where it couldn't start and the problem wasn't that the battery is flat or out of fuel then I would have no idea what to check.

I also don't drive a car so this skill is mostly useless to me much like some people don't need any more computer skill than just being able to send emails and read websites.


Reminds me of "Profession", a great short story by Isaac Asimov:

https://www.abelard.org/asimov.php


That site has a horrible design that hurts my eyes so much. I've resorted to just copy/pasting the text into my preferred editor to read it. That said, I did enjoy the story, so thank you.


Firefox's reading mode solved this problem for me.


But, it cuts off the story at the word "Always.", just before the image of the planet on the original page.


100% agree that the site was subpar. Plus, the URL was confusing. Oh well, worth every cent I paid for it :)


While that is true, I think it (the site) gets something of a pass for being almost 20 years old.


Maybe it's time we stop catering to users' inabilities, and start making software interfaces that make sense, and expect the users to adapt if they wish to benefit. That was the expectation when I started using computers, and it was reasonable. It not only made me faster, better, stronger, it also meant that after passing a certain threshold, back then, I "got it": I didn't just get that one piece of software, I had some intuition, an instinct, about how "everything" worked. That made all software immensely more useful to me.

That instinct is getting less useful as every application tries hard to abandon basic concepts in favor of being intuitive, it may lower the bar somewhat (I don't believe that), but it definitely makes applications harder to actually use for people who used to "get it".


I believe that Rich Hickey’s mantra “simple is better than easy” applies very well to UIs.

A lot of the times UIs try to hide complexity by making it “easy” to do common tasks, but the underlying complexity is still there and rears its ugly head when you inevitably bump into an edge case, then you have to learn all that complexity (or as is often the case - rage quit).

A good example is Amazon software in general - from their shopping software to AWS console - all great if you’ve done it before and follow their guided path, but if you want to understand what will _actually_ happen or to do a specific edge case well then you better have a lot of nerves and patience.

Seen a person actually brought to tears by a shopping experience where she fell outside of the happy one-click sale path and had to make some amendments, with some significant money on the line. She just felt powerless because suddenly an “easy” experience turned bad and now she had to understand _all_ the minutiae around sellers, shipping, and refund policies.

Not a pretty sight.

Had the underlying principles been a little less convoluted, and actually exposed, then a user could successfully navigate even some edge cases, as he would then have at least some tools at his disposal.

Right now UIs tend to drift into the “click and pray” category, where you identify the several common scenarios and just cover those.


The happy path is one solution in a labyrinth. Once you’re off the happy path you are navigating the labyrinth in the dark. It is a very frustrating experience. It would be useful to have a process map, similar to the site maps of the internet 2.0 times.


Strongly, 100% agreed. Especially that "catering to users inabilities" leads to dumbed down software sucking out oxygen from any problem space, leading to frustration and inefficiency for those users who have jobs to do in it. This is becoming more of a problem now that a lot of software is collaborative in nature, meaning you're stuck with whatever your team or company uses.

The way I see it, we suffer from a bad case of "worse is better" here. The amount of utility a piece of software provides can, to first order, be expressed as (number of users * average productivity * average value of the results of using the software). Unfortunately, the revenue of a typical SaaS vendor is proportional only to the number of users. Dumbing down the interface and trimming features increases the number of users, but kills the right tail of the productivity and result-value distributions. In other words: more people doing less with worse tools.
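
To make that trade-off concrete, here's a back-of-the-envelope sketch in Python with entirely made-up numbers for a hypothetical power tool versus a dumbed-down competitor:

  # Invented numbers: utility ~ users * average productivity * average value of results,
  # while a SaaS vendor's revenue tracks only the user count.
  def utility(users, productivity, value_per_result):
      return users * productivity * value_per_result

  power_tool  = utility(users=10_000, productivity=1.0, value_per_result=1.0)  # 10,000
  dumbed_down = utility(users=50_000, productivity=0.3, value_per_result=0.5)  #  7,500

  print(power_tool, dumbed_down)  # five times the users (and revenue), less total utility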


So 2 tier applications?


In the old days it would just be 2 separate applications. Unfortunately, with so much work being done with SaaS that suck in collaborative features (that should ideally be handled on a different layer, outside of the software), it has to be one app, so I guess it should be done in 2 tiers to be useful.


Interesting to see that Japan both has the most "can't use computers" and "strong" users...

I've personally observed (and had to help) many times where an otherwise extremely intelligent person seems to completely lose his/her brain upon being seated in front of a computer, and wonder why that is; these people can be experts in various other fields involving significant mental ability (e.g. maths, physics, chemistry), but when given the simplest of tasks to perform on the computer, like those in the article, all common sense and rational thinking seemingly disappears completely. Most people in the advanced category probably know that a computer is "a stupid machine", but could it be the "magical" nature of computing, and how it's often sold as, that completely destroys their expectations?

Relatedly, I'm pretty sure the stories at http://rinkworks.com/stupid/ are all true.


Mainstream computing is utterly nonsensical. Even developer tools are rife with ad-hoc conventions and arbitrary signs.

I saw this with a college student who had straight As in every class but failed the programming exam... because of a gcc flag. The student was just not aware of some syntax and got stuck.


Peruse the posts in this thread and I think you can see your answer. Many people seem to be literally happy to accept ignorance in fields outside their expertise. There's one exchange near the top of this thread that's been repeated multiple times with the specifics swapped. A user suggested that people would benefit from learning the basics of car maintenance - changing fluids, brake pads, etc. It was downvoted, with numerous comments essentially arguing 'why, when I can pay somebody else to do it?'

Those are trivial skills that take a matter of minutes to learn, and for which there are countless step by step guides available online. I'm certain that car mechanics have had this exact same conversation with the nouns changed about. How can somebody who is an expert in e.g. computer science, not manage to take a few minutes to learn how to change their fluids or pads? Skills that would not only improve their independence and knowledge but also save them thousands of dollars over time? And indeed even when given illustration of how to do so, some people just seem to be arguably voluntarily incompetent. I think it's the same thing with computers as I've observed the exact same thing, as I imagine everybody has. Some people just don't want to learn things outside their domain, and seem to become voluntarily incompetent when facing those problems.


First of all, the user didn't suggest that people would benefit from learning the basics of car maintenance, but instead said that car owners should learn how to do it.

But to get to the real point, you are talking about two different issues as if they are one and the same. The car discussion is about not being interested in learning some things because you can just pay someone to do it while you do something else that you enjoy more. That it is easy to learn has nothing to do with it.

The parent poster here is talking about intelligent people being unable to complete basic tasks on a computer, even though they are sincerely trying to learn how to do them and sometimes even having them explained in detail.


Brake pads? A garage can lift my car up, take all four wheels off with a pneumatic driver, and be mostly done in a few minutes; for me, that's a hand-jack lift of each corner of the car one at a time, with a hand and foot lug wrench to remove and re-fit 20 wheel nuts, then a trip to dispose of the brake pads, and several hours.

Framing it as "ignorance" and "voluntarily incompetent" is incorrect; I don't reject the knowledge of brake pad changing, I desire to avoid the work of brake pad changing. It doesn't need doing often, and I don't have the tools and experience to do a good, quick job (pneumatic wheel nut removers, or a car lift). I already have to get my car inspected annually (UK), and that's a perfect time for a garage to change things. Worse, if I screw something up by being inexperienced, I'm in trouble - I can't drive my car to a garage, and a tow would cost more than the work. Assuming I can afford it and am not on a tight budget, then where's the compelling reason to do it?

> improve their independence

An illusion. I can't legally run a car without a garage doing an annual inspection, I can't make my own brake pads. I don't become more independent by removing and replacing 20 wheel nuts.

> [improve their] knowledge

Brake pads need changing every ~24,000 miles, that's every 2 years of average mileage, or ~25 times you will use this knowledge in your adult motoring life. Rough numbers. (Electric cars with regenerative braking will reduce wear on brake discs and pads). Is that a priority thing to learn? The best available thing to learn?

But my point is not that it's a bad trade, my point is that framing it as "the only imaginable reason is wilful ignorance" is little more than a putdown, and it's wrong. There are other reasons. You're selling the Heinlein-ian "if you can't do Everything yourself, then you're not a Real Man(tm)" worldview with a slightly different wrapper on it.


With Japan, they've got a bunch of older folks that aren't computer literate. The anecdotes I hear about non-tech Japanese companies trying to use email are pretty disheartening. Japan also has a thriving tech industry, with robotics in particular shining. This study is still a bit surprising, but also believable.


> Interesting to see that Japan both has the most "can't use computers" and "strong" users...

There was a good post about this from foobiekr a couple of months ago: https://news.ycombinator.com/item?id=20571481


I don't find this even slightly surprising.

I did a further degree (not tech) just a few years back at my local uni which put me more in touch with regular, non-techy young folk than I'd had for years. I was assured by everyone (despite their knowing that I'm a programmer) that the digital natives would run rings around me with their mad internet & computing skills. I've also done a fair bit of "Code Club" style volunteer teaching at a local library.

What I find is cohorts of young people most of whom have desktops covered in files not because they're intrinsically messy people, but because they have no idea how to manage files elsewhere (and would lose them if they tried). And whose internet-based research skills are no different from my 83 year old mother's. They're more confident, and hence more willing to try things, but no more knowledgeable. What intuitions growing up with computers has equipped them with mostly relates to a few narrow & specialised corporate interfaces (Facebook, Instagram et al).

Our educational institutions (here in Aus anyway) do a lousy job of teaching people to use computers. And our computing industry's approach to human factors is an unmitigated shitshow.


What percentage fits into, "Tries right clicking things when the buttons on top of the app don't have the immediate function"?

It's my observation that people who don't right click can't be taught to program. (half serious)


>Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer.

I wonder if this really lowered all the scores, because the test UI may have been very non-standard/outdated/etc. "Simulated software" could be literally anything and could have caused a bunch of confusion among people who are used to certain icons and usage patterns.


I wonder if this really lowered all the scores, because the test UI may have been very non-standard/outdated/etc.

...or, as may as likely be the case, very "modern". I'm confident to say that I'm far in the "advanced user" category, also having been a developer for a few decades, and yet I still get perplexed often by UIs with lots of hidden functionality and confusing hieroglyphics instead of descriptive text.


I find touchscreen UIs to be particularly bad at this. Icons only have meaning if they're familiar, and beyond simple point, drag, tap and pinch, touchscreens have devolved into an often-arbitrary muddle of secret handshakes.

Example (as a recent iPhone user): If I want to open an SMS from the lock screen notification I swipe right, if I want to clear or respond to the notification I swipe left then select from the options that appear, but if I swipe left from a little further across on the screen it opens the camera, from whence swiping right again switches to video or slow-mo instead of going back to the lock screen. Swiping up from the bottom of the screen gets a neat little toolbox thing with commonly used options, whether or not the phone is locked. Swiping down from the top brings up notifications if the phone is unlocked but doesn't seem to do anything on the lock screen (I guess because it already shows notifications?). Double-pressing the 'home' button gets you a list of open and recently-used apps, but double-tapping the same button without triggering the physical button gets you some empty tray thing at the top of the screen which I'm still not sure the function of (maybe makes it easier to swipe down and get to notifications?)

This is random and fiddly enough for someone who writes software as a day job. I can see how it would be confusing as hell to anyone who doesn't usually use computers.


> but double-tapping the same button without triggering the physical button gets you some empty tray thing at the top of the screen which I'm still not sure the function of (maybe makes it easier to swipe down and get to notifications?)

This is correct. Soft double tapping the home button cuts the display window in half, to make it easier to operate one-handed.


It's also interesting how much of what we consider an "easy to use UI" actually just means "it's what I already know".

I have been using Linux with GNOME for years and I recently had to use a MacBook. I wanted to change the sound settings, so I opened the global app search thing and typed "Sound": nothing. So I searched "Audio": still nothing. So I tried to find the settings app and typed "Settings": still nothing, until I remembered the settings app is called "Preferences".

The OSX UI is not hard to use; it's just not the same as what I had been using for years, so even simple interactions seem difficult.


How long ago did you try this? I just tested and searching for "sound" gets me the system preferences sound window. Searching "settings" returns system preferences.

There was a time where apple and windows both seemed pretty stubborn with their semantics (trash vs. recycle bin, both do the same thing, solely for brand identity), but I think that must have died a couple versions ago.


This was last week, but I haven't used this MacBook since about 2017, so it's quite out of date.


Jef Raskin wrote about this 25 years ago: https://www.asktog.com/papers/raskinintuit.html


Thank you for posting that, it's nice to see an "authority" validating something that I've often felt the need to point out: people tend to misattribute to ease of use what is merely familiarity.


> UIs with lots of hidden functionality and confusing hieroglyphics

A simple example would be the Keybase Android app, which actually has a hamburger icon (a picture of a burger) rather than the 3 lines that we devs call 'a hamburger', which in itself is supposed to represent a 'menu' of some sort. Classic.

As a dev, I loved it. As a UI for the masses...probably not the best idea.


This shouldn't matter. If the task wasn't timed, then computer literate people would still be able to find the required features, while the illiterate people wouldn't feel comfortable clicking through menus and trying to figure out how to use the program in hand.

If you want to be pedantic about this, then using Gmail or Outlook would probably have increased the scores somewhat, because a lot of people already know how to use them, but that probably doesn't answer the questions posed in the study, since even the most computer-illiterate user can perform complicated tasks if they were shown how to do it (assuming they are of about average intelligence).


I see a bunch of "this is why software UI sucks" things in the comments here but did anyone read the example tasks?

> An example of level-2 task is “You want to find a sustainability-related document that was sent to you by John Smith in October last year.”

This varies from easy to really, really hard depending on the specifics. I know that to do this in my own setup I've had to do work to make it easier, and for my academic paper collection it still doesn't work consistently because Google OCR sometimes fails on older papers.

> The meeting room task described above requires level-3 skills. Another example of level-3 task is “You want to know what percentage of the emails sent by John Smith last month were about sustainability.”

Has anyone here ever, EVER asked this question about a peer? Or even a client? I work with computers for a living and off the top of my head I'd say, "This is incredibly annoying to do using normal tools at my disposal."

I can't help but feel like a great deal of bias went into these categorization metrics, especially past Level 1, which artificially deflate and cluster the resulting users into a group with skills in data processing and analysis.

I'd surely believe that cohort is rare.


> I work with computers for a living and off the top of my head I'd say, "This is incredibly annoying to do using normal tools at my disposal."

I think you're overthinking this. They aren't looking for a NN based solution with sentiment analysis.

from:John Smith sustainability -> results: 300

from:John Smith -> results: 800

37.5%
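
For what it's worth, that back-of-the-envelope check can even be scripted; here's a rough Python sketch using imaplib (the host, account, and dates are hypothetical; FROM/TEXT/SINCE/BEFORE are standard IMAP search keys):

  import imaplib

  # Rough sketch; the host, account, and dates below are made up.
  M = imaplib.IMAP4_SSL("imap.example.com")
  M.login("me@example.com", "app-password")
  M.select("INBOX", readonly=True)

  def count(criteria):
      # SEARCH returns a space-separated list of matching message numbers.
      typ, data = M.search(None, *criteria)
      return len(data[0].split())

  base = ['FROM', '"John Smith"', 'SINCE', '01-Aug-2019', 'BEFORE', '01-Sep-2019']
  total = count(base)
  about = count(base + ['TEXT', '"sustainability"'])
  print(f"{about} of {total} = {about / max(total, 1):.1%}")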


Not only are those numbers not always right (they can contain replies), but also that's not the question. If the questions are about keyword presence, it's easier but it depends a lot on stuff like, "do we count attachment content?"


I have to agree with the other commenter, I think you're definitely overthinking it. A keyword search is clearly what they meant for the user to do.

They're not expecting 5% of the population to train a word2vec model and produce prediction interval estimates. Just a simple back-of-the-envelope check.


Maybe they should word the question that way then? No one is asking them to use widely criticized ML tooling.

The addition of the word "estimate" doesn't break the word budget and opens a lot of flexibility when it comes to unfamiliar tooling. Even if we ignore that, I reiterate: who has ever asked that question? The outlandishness of that question surely has at least as much responsibility as the obscurity of the UX in question or the general population's lack of education.

Personally I think most of the actual tasks have so little bearing on the reality of the average computer user's needs I can't say I'm surprised folks couldn't succeed. I also think this is a cognitive trap for a lot of folks here used to being skilled computer users; becoming eager to pontificate on why our unique insight is what's needed to "fix" the problem.


I just think that you're focusing on the wrong things here. Towards the top of the article they say:

> One of the difficult tasks was to schedule a meeting room in a scheduling application, using information contained in several email messages. This was difficult because the problem statement was implicit and involved multiple steps and multiple constraints. It would have been much easier to solve the explicitly stated problem of booking room A for Wednesday at 3pm, but having to determine the ultimate need based on piecing together many pieces of info from across separate applications made this a difficult job for many users.

This task isn't overly complicated. It's a task that computer-savvy people do on a regular basis, but it's complex enough that it might trip up someone who can't use a computer well.


It's also a task people screw up routinely. Reliably getting a room for a meeting in a busy building is not a solvable problem. And again, it's a problem the vast majority of computer users simply do not and will never have.

The self-similar biasing in this study is almost ridiculous.


Even with corporate training, the users I worked with couldn't figure out how to do things despite a FAQ, a HOWTO, F1 help, and a help desk. Web and desktop apps. Plus, they requested that features which were already in the programs be added in the next release, which means they didn't know how to access them.


Someone I know well cannot find and use the volume-control on an iMac despite the icons being noted right on the damned keyboard.

I've spent upwards of a quarter hour in support calls or installfests trying to get someone to open a terminal window and issue 2-3 commands.

I routinely get asked for assistance with such technical tasks as: deleting spam email, restoring sidebars and menus, resetting font sizes, locating a window that's been hidden behind another foreground window. This from otherwise nominally self-sufficient, educated, professional adults.

Though in some cases there are sensory and motor impairments involved.

The users themselves consider themselves proficient. Dunning-Kruger strikes again.


Shit, I run our developer services group at my company, meaning we design and manage tools for our own developers.

I am always shocked at how many people respond to emails asking questions that are answered IN BOLD in the first paragraph of the email announcing a new tool or service.

The smartest people can sometimes miss the simplest of details.


That might actually be "banner blindness". If you make something stand out too much, especially in strategic places like the very top of the message, people will subconsciously interpret it as an advertisement, and filter it out before it reaches their awareness.


Something I'm left wondering about is whether there is a difference in proficiency between laptop/desktop-based apps and mobile apps. This study seems to define a computer as a laptop or desktop machine which may've made sense given the study started in 2011 when mobile computing was still relatively nascent, but the time span of the study from 2011-2015 encompasses a period of pretty expansive mobile adoption (IIRC I didn't even get my first smartphone until 2012). Moreover, my impression is mobile penetration outstrips that of sit-down machines (someone please correct me if I'm mistaken) so for some demographics their primary understanding of a "computer" could be from the perspective of a mobile device. If that is the case then perhaps that introduces a caveat to the scoring -- mobile and desktop experiences are quite different and someone coming to a desktop-based interface from a mobile-first background might score more poorly on the test than their actual level of technical proficiency would suggest.

Additionally, it seems to me that a lot of more recent interface designs seem to converge in the overall UX. Perhaps it's a result of standards like Material Design/Apple Guidelines and frameworks like Bootstrap and Semantic UI being published, but whatever the reason I think it has the benefit of reducing the learning curve for new products and making them easier to navigate. Even though this study concluded only 4 years ago in 2015, tech trends move fast and I personally think its measure of users' computer skills may be a bit dated in the context of today's tech landscape. This could just as well be my perspective from inside the bubble though, so here's your grain of salt with all of the above


Why is the result of the email test surprising? The popular meme that “email is for old people -lol” (https://www.huffpost.com/entry/email-is-for-old-people-l_b_3...) is true. If you don’t work in an office environment, why would most people use an email with a reply to all or schedule a meeting on a desktop app?

People use, Slack, Facebook messenger, iMessage, AirDrop etc. Even kids often use DropBox to turn in work or whatever Google Docs uses.

For day to day use, my wife, son and I have a shared calendar that we update on our phone. If someone texts me a date, I just click on it to add it to a calendar.

Even when people do respond or send an email to people in their personal life, it’s usually to one person not a group. For groups, they will usually use a messaging app.


There is a pretty steep cliff in offices too.

Even in modern Outlook on the Web, resource booking is buried in the UI — people don’t do it.


DOS had it right all along with its one-application-at-a-time model when it comes to ease of use by the general public, who are likely to think in one app at a time anyway. The text UIs were actually fine. It is no coincidence that it was as successful as it was.

On top of that there were clear things such as menus, status bar tips, shortcuts and help docs that were to be found in the same place across programs, one would have to understand the system once and they would be generally fine across various programs.

Now, of course, I am not suggesting we go back to DOS, but I am suggesting we take the good principles out of it and consider standardization. Windows had it right as well, up until Windows 8, when they made a mess with the tiles and pushed everything towards a confusing jumble (to this day, the Settings app still doesn't have proper navigation).


eh. that’s about what i think. and not just about computer skills.

driving skills

basic reading comprehension

the ability to think more than one step ahead

i know it’s not going to be a popular opinion, but yes most people are stupid.


> i know it’s not going to be a popular opinion, but yes most people are stupid.

No, most people are not stupid. Ignorant or unskilled about many subjects? Sure, but we all are.

Consider any speciality you like. There are those who are Masters in it, fluent in its nuance and implication. And yet those same Masters are novices in most other domains. Are they stupid as well?

To put a fine point on it, consider this haiku:

  Fishermen can fish,
  But if you want a new roof,
  Hire a carpenter.


I wish the issue was that we all just have different specialties. I'm afraid that some people just struggle with everything.


> I'm afraid that some people just struggle with everything.

  Human progress is neither
  automatic nor inevitable...
  Every step toward the goal
  of justice requires
  sacrifice, suffering, and
  struggle; the tireless
  exertions and passionate
  concern of dedicated
  individuals.[0]
HTH

0 - https://www.brainyquote.com/quotes/martin_luther_king_jr_164...


They might not be stupid, but at any given time people might be any mixture of: distracted, anxious, in a hurry. I pay full attention when I’m using work software during a 2-hour block of focused time, but I’m really not doing that when ordering a passport online while trying to get on the train to make it to the restaurant on time.


Half of all people are below average...


no.. half of people are below the median.. I know it's a bit pedantic but in a thread making fun of stupid people lol


Median is an average.

There are three generally used measures of central tendency:

- The mean: the "arithmetic average", sum of observations divided by count.

- The median: the central measure. The single value (or averaged pair) at the center point of an ordered listing.

- The mode: the most frequently occurring measure. Sort a set of values by frequency, and pick the most frequent instance.
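
To make the distinction concrete, a quick check with Python's statistics module on a made-up, skewed sample:

  import statistics

  # Made-up, right-skewed "income" sample; the single outlier drags the mean up.
  incomes = [28, 30, 31, 33, 35, 35, 38, 40, 42, 250]  # in $k

  print(statistics.mean(incomes))    # 56.2 -> pulled up by the outlier
  print(statistics.median(incomes))  # 35.0 -> the "typical" value
  print(statistics.mode(incomes))    # 35   -> the most frequent value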

Most people intend "mean" as "average", but that's not strictly the case.

https://en.wikipedia.org/wiki/Average

For our next pedant's corner post, we'll take on the distinction between descriptivist and prescriptivist definitions.


I have never, ever heard anyone colloquially use the word 'average' and mean anything except 'mean'.


I have heard it all the time for median. Average height, average wealth, etc.


But people usually are expecting mean in your two examples.


I think they mean the median. Median income in a group of people is usually smaller than the mean because of extremely wealthy outliers.

If someone says something about the "average person", they are not talking about the mean, they are talking about the median.


It's possible that median is a better measure of central tendency for those things. It also sounds like you're interpreting their use of 'average' as meaning median when you think that would make more sense, rather than asking them (and in fairness, I don't usually grill people on which measure they're thinking of when they say 'average' either). In my experience, people who actually understand the different measures of central tendency usually (on average, heheh) specify one explicitly. The rest of the population is probably only aware of arithmetic mean.


Intelligence is approximately a normal distribution so the mean and median are almost exactly the same.
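
A quick, purely illustrative check in Python (assuming a normal distribution with IQ-like parameters, mean 100 and standard deviation 15):

  import numpy as np

  rng = np.random.default_rng(0)
  # Hypothetical scores drawn from a symmetric (normal) distribution.
  scores = rng.normal(loc=100, scale=15, size=1_000_000)

  print(np.mean(scores))    # approximately 100
  print(np.median(scores))  # also approximately 100, since the distribution is symmetric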


Ha! It is, but “median” just doesn’t pack the same snarky punch.


> but yes most people are stupid

Which is why democracy is so terrifying sometimes


It's not that it's an unpopular opinion; "hurr, everyone (except me) is dumb, lol" is extremely popular.

But it isn't a good, useful, insightful comment.


The challenge for the UI and software designers is to create interfaces that are effective, easily learnable or intuitive, and, if possible, similar to some interface the user is already familiar with, regardless of their skills.


These results seem a bit implausible

There is no obvious link to the method of the study, but the text of the study seems to imply that these tasks are some variation of "How well can you perform a task on a slightly outdated version of Microsoft Office", which these days is not widely used outside of bureaucracies, and is therefore likely to be unfamiliar to most people (even to *nix-based programmers such as myself).

I suspect if you were to give users relatively advanced tasks related to social media, online shopping, or device configuration you would get much more positive results.


There’s lots of discussion about UI design and intuitive interfaces in here, but not much about training.

Computers are, by far, the most complicated machines that people are expected to be able to operate without any training whatsoever. Is that actually reasonable?

It really sticks out for me in the workplace. So many jobs require computer use. So many of the people in those jobs have no idea how to use them. They muddle through on a combination of habit and tribal knowledge, and call for help the moment anything different happens.

Consider jobs that require operating a car. It’s expected that someone hired for this job already knows how to drive. If they don’t, they either won’t be hired or they’ll be required to complete training to learn how to operate this equipment.

Yet if you replace “drive a car” with “operate a computer,” usually they don’t care. Just try your best, and call IT for help.

Making intuitive interfaces that anyone can pick up and use is great, don’t get me wrong. But I feel like we try to take it to impossible places.


You would think it would follow a Pareto distribution, so how is this 'worse than you think'?

Now, I do get that the article itself is more about the level of computer literacy than about the distribution, and that things like the mean skill level might be way below where you think they are.

One of my pet peeves about 'modern' systems is their undiscoverability. There was a glorious time in between the magic-incantation command lines and obscure multi-key keyboard presses of old, and the whiz-bang of today's secret hand and finger gestures, where there was a universal, compact meta-system of menus and dialog boxes that, while it could be extensive, allowed for browsing through all the command options in a discoverable way (and yes, I am over-glorifying).

Somewhere, we seem to have lost this. It is not about the command prompt vs. menus vs. gestures; it is about having a clear and universal road to discovery when you do not know what commands exist or how they can be given.


I'd be cautious of this sort of study. Just as many of us Google how to do things, people can often figure out enough to get things done with the help of those around them. The staff at fast food restaurants and call centers generally seem to muddle through.

Similarly, kids seem to figure out video game problems that would stump many adults.


Video games disprove a lot of UI/UX "wisdom" by demonstrating that people actually do learn - if they care. It's often the context that makes people not care. Do they get any personal value from caring? Can they get away with not caring and still get the same (or same enough) results? For video games, the answers are, in order, "yes" and "no", and so players easily learn to operate absurdly complex interfaces, and to do it at speed.


No, all you need to disprove this is to look at the achievement rates for beating games. When barely a quarter of players finish a game, which is presumably what the developer wants them to do, the idea that players muddle through is a bit much.


I think I've finished like 5% of the games I've played. Even counting ones on which I spent more than 10 hours, I don't think it would be more than 10%. Hell, I'm 241 hours into Stellaris and haven't even won one game yet; my Kerbal Space Program hour count is in the four digits, and you can't even win that game :).

Point being, people don't finish games because they get their fill of enjoyment before then. Many games I know start to feel like a chore after a time; some early on, others near the end. Unless there's a story you desperately want to see resolved - and many games these days ship without one - there's little reason to play a game once you've mastered all the different mechanics it offered and seen most of the variety it featured. And then there's multiplayer, which wouldn't even count under this metric.

I propose a different one: people who play a game with nontrivial UI for more than 2 hours are evidence in favor. If the UI was a problem, they'd drop out sooner. After all, unlike software used at work, nobody forces you to play a game.


”only 5% of the population has high computer-related abilities”

Of course. If 50% could do them, they wouldn’t be called “high computer-related skills”. Only 5% of the population has high athletics/cooking/etc. skills, too.

”and only a third of people can complete medium-complexity tasks.”

Seems like the definition of “medium-complexity” needs a small adjustment.

The message of this is “the medium is lower than you think it is”.

The good news is that what is considered medium goes up all the time. Transferring files between computers used to require meddling with ‘standard’ RS-232 cables and Kermit. Nowadays, 3-year olds regularly do it. Retouching photos used to be something for the 1% (and that likely is rounding up significantly). Now, 5-year olds can do it in seconds (taste still may need development at that age). Printing a map with the driving route from A to B? Child’s play (but they would wonder why you would take the trouble)


I know a surprising number of people 10-20 years older than me who regularly use desktop computers but do not understand what a window is, e.g. that windows can cover each other up, that there are special places you must grab or click to either resize or move them, that you can shrink them into a dock without closing the application, etc.

Rather ironic considering that the operating system they all use is actually named after this, for them, inscrutable abstraction.

Those guys at Apple knew what they were on to when they created iOS with its one-app-at-a-time paradigm, even if it is slowly adding all that complexity back in with more recent versions.


The data, conclusions, and implications of this are huge, and go several ways.

First: virtually nobody using information technology has any idea of what it's really doing, or how to do anything beyond a very narrow bound of tasks. This includes the apparently proficient, and it's almost always amusing to discover the bounds and limits in knowledge and use of computers by the highly capable. Children, often described as "digital natives", are better described as "digitally fearless": they're unaware of any possible consequences, and tend to plunge in where adults are more reticent. Actual capability is generally very superficial (with notable exceptions, of course).

Second: if you're building for mass market, you've got to keep things exceedingly and painfully simple. Though this can be frustrating (keep reading), there's absolutely a place for this, and for systems that are used by millions to billions (think: lifts, fuel pumps, information kiosks), keeping controls, options, and presentation to the absolute minimum and clearest possible matters.

Third: Looking into the psychological foundations of intellectual capabilities and capacity is a fascinating (and frequently fraught) domain. Direct experience with blood relatives suggests any possible genetic contribution is dwarfed by experiential and environmental factors. Jean Piaget's work, and subsequent, makes for hugely instructive reading.

Fourth: If you are building for general use, keep your UI and conceptual architecture as stable as possible. There simply is NOT a big win in UI innovation, a lesson Mozilla's jwz noted years ago. (Safe archive link: https://web.archive.org/web/20120511115213/https://www.jwz.o...) Apple's Mac has seen two variants of its UI in over 35 years, and the current OSX / MacOS variant is now older than the classic Mac UI was when OSX was introduced. Food for thought and humble pie for GUI tweakers.

Fifth: if you're building for domain experts, or are an expert user forced to contend with consumer-grade / general-market tools, you're going to get hit by this. The expert market is tiny. It's also subject to its own realms of bullshit (look into the audiophile market, as an example). This is much of the impetus behind my "Tyranny of the Minimum Viable User", based in part on the OECD study and citing the Nielsen Norman Group's article:

https://old.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...

(I've been engaged in a decades-long love-hate, and increasingly the latter, battle with information technology.)

Absent some certification or requirements floor (think commercial and general aviation as examples), technical products are displaced, and general-market wants will swamp technical users' needs and interests.


> Second: if you're building for mass market, you've got to keep things exceedingly and painfully simple.

Not the primary reason, but still one of the reasons Google+ failed.


There are plenty to choose from.

It helps to remember that, as a rule, all but one mass-market social networking platform will fail. "Success" means "claiming the most mindshare". And if you're in the mass market, that means there's only one brass ring.

Any roadblocks or injuries, self-inflicted or otherwise, toward attaining that goal will not help you. Google+ certainly had much help inside and out in failing to attain that mark.

Paradoxically, this is why aiming for a specific niche can be a success, at least on its own terms. Reddit, Twitter, and Hacker News qualify on this basis.

Facebook is now actively competing against not only other comers, but itself (WhatsApp, etc.) in various guises. A battle it will all but certainly, eventually, lose.


So this is sub-level 1:

> An example of task at this level is “Delete this email message” in an email app.

But it's probably at least level 3 to realize that actually deleting email messages is impossible.


What do they mean by "sort function"?


This reminds me of an oft-referenced quote:

  Any sufficiently advanced technology
  is indistinguishable from magic.[0]
What may not be immediately obvious from Arthur C. Clarke's wisdom is that, given enough technology, all of us will eventually find a point where we believe magic truly exists.

0 - https://www.brainyquote.com/quotes/arthur_c_clarke_101182


I'll see that, and raise you a quote: "Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things." --Douglas Adams


I see your Douglas Adams quote and raise you an Albert Einstein quote:

  Common sense is the collection of
  prejudices acquired by age eighteen.[0]
:-D

0 - https://www.brainyquote.com/quotes/albert_einstein_125365


Based on how this comment is resonating with HN, I had no idea that suggesting fallibility would result in such disdain.

No matter.


This is where open source is an unsung hero continually advancing not just the computer skills but the really advanced stuff. This study finds only 5% of adults can do what they call complex tasks like navigating and filling out forms in a browser, while 69% could barely work for Uber, barely use Uber, or not even figure out Uber. Open source is sitting there teaching and training people how to create Uber.


I fail to see what open-source software has to do with this data. If someone is unable to move their mouse pointer to the Delete icon of an email client, the license of the software likely has no effect.


Learning how to use, configure, and create open source is definitely one of the highest-proficiency forms of computer usage. Especially creating, but even using, open source requires a bevy of advanced skills: identifying appropriate software and getting it to run. The strongest 5% of computer users they found can barely search email, while some fraction of people are pursuing incredibly sophisticated skills through open source, setting up their own email server or writing one. That's the relevance.

This study basically stopped counting at the 3rd level, with multipage forms... what are a Linux terminal, Git, Swift, or JSON/YAML configuration files if not highly advanced computer usage? 3rd, 4th, 5th, 100th: somewhere up there is a proficiency level where you build your own computer in Minecraft for the fun of it and write languages and operating systems, and these skills are learned and shared through open source more than anything.

I think if they had kept tracking the higher levels of proficiency, by the time they got to open source users and developers there would be a chasm between their ability to use computers to solve problems and that of the 3rd-level person: data munging, web scraping, scripting, machine learning, advanced internet searching, bespoke software...


> Learning how to use, configure and create open source is definitely one of the highest-proficiency forms of computer usage.

Some of the most successful accountants, CFOs, and stakeholders I have met would beg to differ.

> I think if they had kept tracking the higher levels of proficiency by the time they got to open source users and developers there is a chasm between their ability to use computers to solve problems vs the 3rd-level person ...

This is a straw man argument[0]. OSS != "higher levels of [computer] proficiency".

0 - https://en.wikipedia.org/wiki/Straw_man


Look at the steps to set up WordPress. These steps can impart a lot of knowledge to the people following them. There are classical models of learning that seem very applicable: learning by doing, experiential learning, and rote learning. The article is very clear that society doesn't make highly proficient computer users, so if using open source doesn't create them, where could they possibly be coming from?

- domain registration

- web hosting

- put WordPress on the hosting

- configure WordPress

https://www.shivarweb.com/website-setup


Solving problems can be done in many ways. I reach first for OSS myself, but that's just me. Others solve the problems they must with tools which are not OSS.

What matters most is solving the problem at hand, not what tools are used.

I'm just a random Internet account, trying to impart a bit of perspective. Do with it what you shall.


> using open source requires a bevy of advanced skills: identifying appropriate software and getting it to run.

Are you taking the position that open source is heroic because it has garbage user experience?


I'm taking the position that open source is how advanced computer skills are learned today: through trying to use it, configure it, host it, maintain it, write it, etc.

Just look at the steps for setting up WordPress; it's a 101-level project in internet and web hosting. Reading and writing open source is a vital part of how people learn to use highly advanced tools. I think this education is a really overlooked benefit of open source.

https://www.shivarweb.com/website-setup


Free Software doesn't have to compete on market share. It may well benefit in terms of mindshare by favouring advanced users. Taking Linux as an example, with a still single-digit user share among operating systems, there is sufficient share among potential contributors to support the several thousand kernel contributors, as well as contributions to a wide range of system, userland, and GUI tools.

Given the focus drift that's accumulated from even modest popularity among some Linux distros (meant not as a critical note, just a set of observations over my own several decades' experience), the more technical Linux and BSD variants tend toward more technical users, whilst the more user-friendly distros (Mandrake, back in the day; Ubuntu, Mint, Cinnamon, etc.) tend to show a greater aversion to CLI and system tools, in favour of GUIs and the like.

Because commercial software does see massive network effects and attraction to mass markets, it tends to fall far lower on the user-capability curve.

This can also hit mass-market free software, with Chromium and Android (setting aside their status as truly Free Software / Open Source) being cases that readily come to mind. The mechanism here is that these are aimed at advertising-promotion platforms, and hence mass markets (Android devices number in the billions). Which is to say, the polar opposite of intellectually rewarding power-user tools.

It's not that the users' skills depend on the software licence; it's that licensing determines the business interests and user focus of the software developer.


Now let's talk about the 5% of the population who realize that losing tons of money for 10 years in a row is not a sustainable business plan.


And the one percent of the five percent that understand that open source is not a business model.


How is Open Source sitting there teaching people how to use large amounts of investor money to perform regulatory arbitrage at international scale, breaking laws in country after country and getting away with it because you move fast and break things (and are based in a country surrounded by nuclear aircraft carriers)?

Oh wait, you mean create an app for a taxi network?


Open source is only useful if you can make use of source code, which is already far beyond what these users can do.



