Hacker News
The Distribution of Users’ Computer Skills: Worse Than You Think (nngroup.com)
512 points by hug on Dec 6, 2016 | 325 comments

I think we underestimate how much time is required to be really good with computers. I'm very knowledgeable about computers, but I've been using them heavily for 22 years.

We have this innate comfort and familiarity with using computers, but as hackers it's a huge part of our lives. People have other ways of life than us, and have other expertise. We shouldn't dismiss their computer illiteracy as stupidity; it's just as bad as them writing us off as "computer nerds".

There's also this tendency for non-computer-literate people to be overly self-deprecating. I noticed this with mathematics when I tutored people at uni, and with computers it's the same shit. I constantly hear "yeah, I'm awful with computers, it's all black magic to me". But the person saying that is often not even trying, which is frustrating. It's like a helpless excuse to be lazy and let someone else do the work, which is often not that complicated.

I don't know about that. There seems to be something innate about one's ability to use computers.

My father is 85 and is quite good with computers; he uses email constantly, writes documents and books using word processors and understands the concept of files, reads on a Kindle, etc. But he had never touched a computer before he was maybe 70 (let alone own one), and before that he wasn't technical at all. He was trained as a Latin teacher (!) and spent his professional life in politics, never having to type anything by himself.

My father-in-law is very good at many manual tasks including woodworking, electricity, and electronics, and was an officer in the French army for many years, where he was responsible for maintaining highly technical weapons. He too is 85, and he can't seem to understand computers, although I have known him for 15 years and have been trying to help him for that long. He seems to want to learn but isn't really interested in what one would call "the big picture"; he just wants a list of actions that will get him where he wants to go. Of course that approach fails miserably whenever something happens that's not on the list.

I have two friends who have been to top engineering schools who, I discovered recently, type whole URLs not into the browser's address bar but into Google's search form. My eldest son is 11 and doesn't confuse the two. My younger son is 7 and doesn't seem much interested.

I agree with this assessment. It has a lot to do with whether the person has the basic ability to understand computers and engineering-like things. If it's not taught at an early age it's tough to pick up this skill later. My parents are perfect examples of this.

My mother is a graphic designer, old enough to have done page layout with a knife and glue. She learned Quark, then Freehand, then In-Design and still uses the stuff to this day. She was not raised with a computer, but her father was an engineer and she has that engineering mind, where she can follow something back to its root and figure it out.

My father, on the other hand, is a Dale Carnegie teacher. He's a people person. He's had a computer since we got an Atari ST in 1985. He's been on Macs since 1987. He STILL cannot use the thing properly. Sending an email is hard for him. Using his iPad is tough. Facebook enrages him because "I don't know any of these people, how do I get rid of them, I didn't ask to see their pictures!" My dad was raised by a photographer and a nurse. He never learned how to troubleshoot. He cannot find a root cause.

But he can hold a room full of people captivated and make them laugh, something my mother would NEVER want to do. So, it's those people skills versus those tech skills, as ever it was.

Aptitude, yes. Some of it is what you learn first, and therefore what you must unlearn. Ditto terrible mental models. Ditto tools poorly designed for the work. Etc, etc.

All this has been exhaustively hashed and rehashed for decades by the UI/UX, cognitive psychology, design, and SIGCHI communities.

We just don't listen to them.

Typing URLs into Google's search bar is very common. I see it all the time. If Google is your default new-tab page there's really nothing wrong with it. One extra click, maybe.

It does imply that the user is not (sufficiently) aware of the difference between a browser and a for-profit search engine. It's a type of behaviour that makes people dependent on a service where no dependency is needed. Also, how many of these users are both aware that they are sharing large parts of their browsing behaviour with Google and are okay with that?

Well, most browsers are for-profit too. Typing an address into Chrome's address bar will still send it to Google so they can give suggestions back, I think. (E.g., if I type "http://www.abc" into the bar, it suggests an autocomplete to "www.abcnews.com", which I have never visited. Doing the same in a 'Private Browsing' window turns off this autocomplete.) I'm not convinced that distinguishing between the address bar and the search field is either a necessary or a sufficient condition for understanding Internet privacy.

You get a search result page instead of going directly to the site (that's how I noticed they were doing it).

It does have the nice feature of spell checking for you :-) It's a good bit safer to search for your bank site, for example, since there're large incentives for phishing sites to sit on common misspellings of bank domains.

I actually do this, but just for financial sites I don't have bookmarked. It's faster if I mistype, and vastly safer.

Search engines have conflicting incentives here (Yahoo seems much worse than Google about this), since they allow paid results for phishing sites to appear above the real result. I can often tell exactly which phishing site a user is being directed to instead of the real domain they want.
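To make the misspelling risk concrete, here's a rough sketch (the domain names are invented for illustration, and this is nowhere near a real anti-phishing check) of how one might flag a domain that is suspiciously close to, but not exactly, a known banking domain, using only Python's standard library:

```python
import difflib

# Hypothetical list of legitimate domains to protect; not a real bank.
KNOWN = ["mybank.com"]

def looks_like_typo(domain, known=KNOWN, threshold=0.8):
    """Return the known domain this one resembles, or None.

    A near-match that isn't an exact match is the kind of name
    phishers squat on (e.g. a transposed pair of letters).
    """
    for real in known:
        ratio = difflib.SequenceMatcher(None, domain, real).ratio()
        if domain != real and ratio >= threshold:
            return real
    return None

print(looks_like_typo("mybnak.com"))   # → mybank.com (transposed letters)
print(looks_like_typo("example.org"))  # → None (not similar at all)
```

The 0.8 threshold is an arbitrary choice for the sketch; searching instead of typing sidesteps the whole problem, which is the point the parent comments are making.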

Innate, yes: not being stupid helps tremendously.

Not stupid + put in the time = "good" with tech.

I would say innate interest in the field rather than intelligence vs stupidity.

I know intelligent people that spend time on it and fail because of lack of interest.

Innate: inborn, natural

I seem to have an innate talent for computers. Given a "clue" I can usually extrapolate most of what I need to know. I may be "smart", but my memory recall is horrible -- making connections is very difficult for me given too much time displacement. The extrapolation is mostly automatic, without actually "thinking".

You just described me: my memory recall is horrible, and I'm very curious to find an explanation.

My memory is poor and seems to only get worse and I'm not even in my 30s yet.

I was just thinking about this recently while installing something with radio buttoned install options:

Standard Install (Recommended)

Custom Install (Advanced Users)

Yet the custom/"advanced" option was just a couple checkboxes that let you exclude extra bloatware during installation.

I wonder how many users, upon seeing both options, tell themselves they are not "advanced" computer users and just go with option 1 without even looking at option 2. Had they bothered to look, many of them would have understood to uncheck the bloatware they know they don't want. If that was too much, there was a Back button. But the wording "Advanced Users" scares many people away from even looking into that option, and so they end up with some Yahoo! search toolbar forever after...

Self-deprecating computer users are one issue. The extent to which some software companies take advantage of it is another.

I'm constantly annoyed by "Advanced user" installation options.

They're occasionally accurate - I can think of a few programs where the "advanced" option demanded that I go and read a bunch of documentation - but mostly it's either laziness or malice. I don't think they've been a reasonable choice since storage was so limited that 'advanced' meant screwing around with DirectX directories and partial installs.

In 90%+ of cases, basic/advanced could be better addressed with a handful of simple-English checkboxes for what to install. One is locked-active, for the core bundle. The others have one-line descriptions of what they do and possibly what other products make them redundant, maybe with a "more info" button.

And since evil bloatware isn't going anywhere, we could at least set a norm that real menus give you options, and anyone trying to prevent you from engaging your brain is pushing crapware.

After being burned by ATI and Nvidia driver installs gone bad I'm reluctant to use anything but the defaults. Software SDKs too seem especially brittle when installed apart from the recommended options.

So it's not only casual users who prefer the default.

I would probably make an exception for driver installs, which definitely get into that "actually broken" territory. I'm also ok with "detect my device settings automatically".

I was mostly thinking about the program installs where 'advanced' is a list of different things you might want to install. The worst case there is pretty much just "you didn't install the thing you wanted", versus the "good luck buddy" worst case of tools and drivers.

Seems to me that you're describing the exact sort of psychological tactic that software vendors favor to get people to install the bloatware in the first place.

Sometimes called "Dark Patterns" http://darkpatterns.org

Perhaps one should really call them what they are, scams.

Please (re)read Norman's Design of Everyday Things, for starters. TLDR: Blame the designer (of the tool), not the user.

I taught 100s of people how to use CAD. Tough going. Technophile me thought "Of course it's better, easier, more powerful." Most of my students blamed themselves. But I knew they were smart, knew what they wanted to accomplish, and just couldn't figure out how to do their jobs with CAD.

You could say "square peg, round hole", or "like pushing rope". I preferred "CAD is an angry 800lb gorilla sitting between people and their work".

Being kinda thick, I finally figured out the tools suck, not the people. Specifically with CAD, worse than the terrible UI, they had terrible mental models, completely divorced from how the users see their own work.

In conclusion, blaming the user is not very humanist. I rage at computers all the time, and I supposedly know what I'm doing. I have deep empathy for anyone not chin-deep in this crap.

That's definitely the right approach most of the time. But sometimes people are just beyond any sort of help. When a literate, educated user cannot even read a message presented on the screen (and I don't mean understand or act upon the message, I merely mean read it), or when they don't understand that their electrically operated computer will not function until the building's electricity is turned back on, I think it's fair to blame the user.

Sometimes people really do have the necessary knowledge and skills, they just refuse to apply them to computers. Certainly, many computer people are too quick to blame the user when it's really a design problem, but blaming the user isn't always wrong.

I sometimes wonder if this is the result of people falling into one of two categories of how they learned to see the world, deeply or shallowly. I think of myself as looking at the world deeply. I'm often less interested in the immediate response to my action than I am as to why that was the response. I want to understand the cause of things. I've met plenty of people that don't care about this at all, they care that action X yields response Y, and (for the most part, barring specialties) go through life with that as their model of the world. Sometimes I envy these people, as their heuristic approach often yields better projections than I'm able to come up with in certain areas (such as how a person may respond to a complex situation).

Unfortunately, I think the shallow mental model has problems when presented with large, foreign systems of behavior as a whole. Much like the average person dropped into a foreign culture with a different language, someone with a shallow mental model of the world may find a computer, with its decades of layered systems and skeuomorphic metaphors (if the interface is even attempting those in the specific instance), too daunting to attempt on their own. At least that's my pop-psychology theory of the day.

I think there's a correlation here, but I'm not sure that someone's weltanschauung would be causative of digital literacy.

Rather, I think the divide is "I can" vs. "I can't", and the latter limited thinking leads to the sort of limited world view you describe.

It's something that fascinates me: so many lead lives that are essentially prescribed, viewing reality through a pre-fabricated toy lens, and when confronted with an individual who has chosen something other than the default options in life, they react vigorously and vociferously. I see the same in computer literacy and literacy in general: the "I can't" stemming from a refusal to accept new information which may conflict with preconceived notions.

If anything, I think it's fear of the potential unhappiness that knowledge brings: adding something to the picture can make the overall scene so large and intimidating that many shy away rather than attempting the daunting task of filling the gaps.

I mean, it is daunting. Once you have climbed a mountain of knowledge you can see the whole, the foothills, how it joins in coherence, but at the bottom all you can see is an almighty sheer face. I have learned to be comfortable with the knowledge that I know nothing, despite knowing far more than my fellow man. Knowledge requires humility, as you swiftly learn that you couldn't be further from the centre of the universe or less significant.

I think there's a point early in life when this bifurcation happens. Perhaps it's down to whether parents attempt to answer a child's questions about the world or simply reply with "stop asking stupid questions". Perhaps it's down to egoism: you either have to be humble or unspeakably arrogant to pursue knowledge, and the middle is intimidated.

That's my pop-psychology theory of the day :)

You should read "Thinking, Fast and Slow", it will change your life. Hint: you're not too far off base

> When a literate, educated user cannot even read a message presented on the screen (and I don't mean understand or act upon the message, I merely mean read it)...

This is usually trained into users by non-existent or awful error handling and messages in the majority of software. How the industry addresses errors is haphazard at best, and I see lots of evidence of what I call "checkbox/boilerplate project management" in lots of software products. The classic symptom is an error throws a message along the lines of "Foo Error: there was a foo type of error"; it is an error message, so I guess it got marked as a successful part of somebody's sprint by its mere existence, and not evaluated on its utility.

The industry should be hiring legions of editors with red markers to up our communications game when it comes to error messages.
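As a sketch of the difference (the function names and the file-saving scenario are invented for illustration), compare a checkbox-grade error message with one a user could actually act on:

```python
def save_report_bad(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError:
        # The "Foo Error" pattern: technically an error message, practically useless.
        raise RuntimeError("Save Error: there was a save type of error")

def save_report_good(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError as e:
        # Say what failed, why, and what the user can try next.
        raise RuntimeError(
            f"Couldn't save the report to {path!r}: {e.strerror}. "
            "Check that the folder exists and that you can write to it."
        ) from e
```

Both catch the same failure; only the second one gives the red-marker editor, or the user, anything to work with.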

To an extent, yes. I can understand why users will ignore error messages when they're in the middle of doing something else. I can even understand why they might ignore them when things are going wrong. But where it crosses the line for me is when an expert is helping them and asks them, "Can you read me the error message?" And they plead computer-ignorance and refuse. (Yes, this really does happen.)

Learned helplessness seems like a perfectly reasonable reaction to the terribly designed products we are all subjected to.

I've seen people who can't understand the interface to an elevator: that the up light means it's going up and the down light means it's going down. That's for a single-purpose tool that people use every day. How do you propose we design intuitive UIs for those people?

And aren't CAD tools power tools? They are designed to be easy to use for the experienced, not easy to pick up. There is a trade-off between power and ease of use.

> It's like a helpless excuse to be lazy and let someone else do the work

In my experience, more often than not the 'not even trying' has nothing to do with wanting someone else to do it, but rather with being scared to mess up, precisely because they feel they are not knowledgeable. That's pretty instinctive, I think (e.g. the first time I used a chainsaw I was also rather hesitant, as I basically knew sh*t about it), but it's sometimes also driven by the abundance of horror stories heard about 'losing all mail/photos/....' (which, given the person's skills, is usually actually very hard for them to do).

I agree. I felt that with maths many times - it was fear caused by lack of understanding. "I don't know what it is, I don't know how to bite it, I don't feel the problem, help!".

I have that with software too, sometimes - again, due to lack of understanding. For most of my Java experience, Maven POM files for me were "that fucking pile of shitty XML I'm not touching with a ten foot pole, because it doesn't make sense". I didn't know why it looks the way it does, what any change will do, how to make anything work. I eventually bit the bullet and spent a day reading up on the docs and now I'm not afraid anymore. But it took me some years to first stop avoiding the topic altogether.

> 'losing all mail/photos/....' (which is, given the person's skills, usually actually very hard to do for them).

The thing is, they don't know that. They have no intuition for the range of operations and capabilities of the piece of software in front of them. You and I both know that the program they're using probably won't even be able to find the user's mail or photos by itself, much less delete anything. But regular people don't know that, and I think the common factor here is not having an intuition about the limits of the space you're moving in.

Don't blame yourself.

XML obfuscation layer, semi-declarative runtime that can't be debugged (no breakpoints), weird rules, unpredictable failure modes, arcane config & installation, etc.

Maven is utterly broken and irredeemable. That you rose to the occasion to master it is a testimony to your adaptability and pluck.

I'm not proud of my adaptability; I'm ashamed it took me so long to bite the bullet and read the docs.

I'm actually working on turning it into an instinctual habit: instead of trying to power through the unknown, take a break and read a goddamn book about it. It makes you less productive for a few hours or days, but then it quickly pays significant dividends. Sharpening the axe, and all that.

I brought up the story to highlight the fact that it was so easy, so natural for me to avoid dealing with Maven altogether for years. If, as a technical person, this is my default reaction to a confusing piece of technology, then how can I expect more from normal people? Conversely, their fears could probably be alleviated with as little as a few hours' worth of education, or at least a good explanation. The trick is to make normal people willing to invest those few hours when their own curiosity isn't enough.

What's better than Maven to solve the problems Maven is used for?

Yeah, I see people happily surf all kinds of sites, but once they need to deal with local files they lock up in fear of making an irrecoverable mistake.

The benefit of youth is that one does not have prior memories of failure and punishment, which allows one to mess up. One of the first things I did after getting a computer was to accidentally blank the HDD. But after a friend helped me reconstruct the environment, I learned how to do the same and more. And these days I am the ad-hoc support line for family and friends.

I wonder if the whole GUI is not really helping here. Sure, it allows things like browsers to exist, etc. But at the same time, all the locations of files and such are mental rather than physical.

Back in the day, if I wanted to run a spreadsheet program I had to locate the floppy it was stored on. Now it is stored in one of the many sub-folders that blanket my 1TB+ HDD.

A whole lot of problems with home computers seem to stem from them doing a number of things in the background without the user being aware.

I can't help wondering what would come to be if one were to go back to the C64 or similar, where if we wanted it to do anything we had to start it ourselves.

BTW, there is an old OSNews article[0] from someone teaching basic computer usage to older people. And he claims that he had a fair bit more success with the CLI than with the GUI. This is because not only was it clear when the computer was doing something (one could not input more commands), but it also gave a history of previous actions, and the ability to set aside tasks for later (with a warning about them on exit).

I can't help wondering if the whole desktop metaphor and WIMP, while making things more "friendly" at first glance (vs. that blinking prompt), have in the end just added a whole lot of conceptual thicket to cut through when trying to translate goals into actions.

All in all, I wonder what we would get if we had a CLI with some GUI niceties, like being able to add file names to a command by clicking them in the history of a recent ls/dir command.
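As a toy sketch of that idea (the numbered-listing behavior is my invention, not an existing shell feature), a listing could hand out short numeric handles that a follow-up command resolves back to file names, so the user never retypes one:

```python
import os

def numbered_listing(path="."):
    # Print directory entries with numeric handles, like an ls
    # whose output you can refer back to.
    entries = sorted(os.listdir(path))
    for i, name in enumerate(entries, 1):
        print(f"[{i}] {name}")
    return entries

def resolve(entries, token):
    # A follow-up command like "open 2" resolves "2" against the last
    # listing; anything non-numeric is treated as a literal file name.
    return entries[int(token) - 1] if token.isdigit() else token
```

A real version would be an interactive loop (or clickable output, as suggested above); this only shows the mapping from handle back to name.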


> This because not only was it clear when the computer was doing something (one could not input more commands)

Maybe old GUIs were better. You had N windows on screen, meaning N tasks being done. Now you also have to track minimized windows. And tray icons. And programs that run in the background without showing any window or icon at all. And system services, which are different kind of background programs. And I'm probably forgetting something.

Right now, in Windows, I believe the Task Manager is a fundamental tool to understand the runtime state of your system, and it should be advertised as a "normal-user tool", not "expert tool". And I'm talking here about detailed process view[0], and not the dumbed-down view from Windows 8/10[1]. Users can't really hurt themselves with Task Manager anyway (you don't get to kill a system process that easily), and it's fundamental to getting a feel for what your computer does.

Tangentially, the "dumbed-down view" of Task Manager is an example of what I believe is the biggest sin of modern UIs - lying to people. There's hardly a better way to confuse the shit out of users' minds than presenting a limited and inconsistent view of the internal state.

> the ability to set aside tasks for later (with a warning about that on exit).

I wonder how this feature was designed - both in GUI and CLI - in such a way that almost nobody (except sysadmins) uses it? I mean, I personally discovered that Windows can do it only very late in my life, and I don't trust UNIX cron. Most people I know - including programmers - don't know about either. Yet delaying tasks seems like a useful feature. How come it's not exposed in OSes (and it seems not to exist on mobile at all, from the user's POV)?

[0] - http://www.howtogeek.com/wp-content/uploads/2012/03/650x593x...

[1] - http://www.howtogeek.com/wp-content/uploads/2012/03/411x406x...
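For what it's worth, the bones of "set a task aside for later" fit in a few lines of Python's standard library (the reminder text is made up; real systems would reach for cron or Task Scheduler):

```python
import sched
import time

# Build a scheduler that measures time with time.time and waits with time.sleep.
s = sched.scheduler(time.time, time.sleep)

def remind(message):
    print(message)

# Queue the task to fire about one second from now (priority 1).
s.enter(1, 1, remind, argument=("time to back up your photos",))
s.run()  # blocks until every queued task has fired
```

That the primitive is this small makes it all the stranger that no mainstream OS surfaces it to ordinary users.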

Sorry, I was not talking about cron or similar, but simply ctrl+z. Keep in mind that while on the surface that is similar to having multiple windows open, the process is suspended and out of the way. The user can then actively tell the process to resume in the background, or just leave it there until later. And if one attempts to log out while having such a process waiting, there will be a warning.

All in all, this is perhaps a closer approximation of a physical workflow than the desktop metaphor.

Ah, found the article finally.


I guess I should have tried harder before hitting reply on my initial comment.

"Task Manager is a fundamental tool to understand the runtime state"

IIRC, the taskbar and tray are meant to communicate that at all times. But trays were easily overrun as people installed more and more background software on a single PC. Modern taskbars too have become so crowded that there are groupings, and virtual taskbars for the virtual desktops.

One cause is software unnecessarily running at all times. Another is people do more with their PCs than in times past.

I don't have a good response yet, but your comment made me think about this and try to explore the problem, so thanks for that!

No problem. I rewrote the comment some 3-4 times and still it came out long and rambling.

> But the person saying that is often not even trying, which is frustrating.

This reminds me of all the anecdotes I've heard about people who immediately click through all alert boxes, to the point of being nigh-incapable of passing on an error message unless someone is standing over their shoulder forcing them to read it.

Oh god, my mother. What she needed to do was written in plain language right in front of her, and it was nothing technical but something simple like "you should fill out that text field". I had to install antivirus software that would not nag her about installing the for-pay version, because no matter what I said, she would inevitably end up with the for-pay version and its useless features installed (but only the trial version; she's smart enough not to actually pay). My mother is pretty intelligent and far from senile, just around 65; it wasn't that long ago that she was a busy executive in a major corporation who had to manage not only people but also her Excel sheets and MS Office at the very least. I still remember, many many years ago, when she first got a computer with a mouse. She moved the mouse carefully between the tips of only two fingers, taking her hand off of it between movements to stop and look...

The first time my mother used a mouse was in 1989. She called me on the phone and said "the mouse is not working".

- What do you mean it's not working, does it move on the screen or not?

- Well, it moves but just a little.

- ??? Tell me more?!?

- When I do big movements it moves a little, but when I do small movements it doesn't move at all. I can't get it to cross the screen, even with momentum.

It took me a while to understand she was moving the mouse in the air; in those times mice had a ball that would indeed register movement if you shook the object hard enough.

This was fun and all, but in the end my mother learned to use computers and is now very good at it.

I was certain you were going to conclude with coaching her on how to remove the ball and clean the oil/dirt/hair off the rollers with a q-tip.

It doesn't help that 'the Windows trick', UI-wise, became popups. Popup after popup, almost all of them useless things, or worse, annoying ads popping up on whatever page you were on.

There is a LOT of noise to that signal. Contrast that with more traditional (teletype-inspired) interfaces, which reported only failures, and warnings only if asked for, unless you requested something extremely specific.

This UI hostility also burns people who aren't experienced with hotkeys. An awful lot of malware-serving webpages pop up duplicate header bars as part of an image, in hopes of catching people who click on the wrong X (much like ads that consist of fake download links).

That's a non-event for an experienced user, who (if an ad blocker didn't disable the page) probably resorts to the close-tab hotkey by sheer habit. But it's an appreciable threat to someone who's trying to rely on a predictable screen UI for their browsing.

The weirdest thing I have seen was some ad-delivered malware that would pop up a full-screen image of an XP desktop, complete with some popup claiming to be some error or other.

The whole thing was clickable, and it fooled me the first time (I really should have picked up on the pointer being a hand rather than an arrow), never mind my less knowledgeable relatives.

That's a new one on me, too. I'm not sure I would have caught it.

One useful trick is right-clicking first, which shouldn't trigger the link and will be very different on an outbound link than a blank webpage or desktop.

Of course, mobile is where I get the most trouble from this - I don't have any hotkeys, and real page-loading is so bouncy and unpredictable that it's particularly easy to get caught by a fake top bar. It's made even worse by pages that scroll away the top bar as you move down the page, and then bring back a fake one.

With deep pain, I have to admit that I even know some IT people who don't read error messages. (I suspect part of the problem here is the effort it takes to read them.)

Not reading error popups is the norm around me, and I work in IT support. It has nothing to do with the message but with the medium: a message popup disrupts the user's workflow, and more often than not for no good reason. All people who work with computers very quickly get into the habit of dismissing flow-breaking popups. I'd say the problem is not with the people.

It's like the ER doctors who miss crucial "Patient is dying!" alarms. They're not just lazy or careless, they work in an environment where every device is screaming "I am here!" every minute of the day.

It's both an important work skill and an unavoidable physiological response to tune out obnoxious and unimportant stimuli, to the point where the people abusing those signals ought to be accountable.

There was something similar that resulted in a mid-Atlantic plane crash some years ago. The crew got so blanketed by automated warnings, and had no visual cues thanks to it being a night flight, that they basically flew the plane into the ocean.

This same topic is taught in flight training.

Pilots are repeatedly taught to never "silence" the annoying warning horns that come on for different reasons (slow speed, wrong landing gear configuration). The idea being that mindlessly dismissing alarms will build a bad habit pattern. Especially when you're actually in trouble.

I suspect this is why Microsoft introduced the notification center, and seems hell bent on redirecting popups there.

I don't read them either it turns out, even though I would assume I'm the kind of person to do so. I particularly notice this when say in a video game you need to hit one button to exit and then it asks you to confirm. I can find myself going through the exit, cancel exit, exit again loop multiple times before my brain catches up to what's happening.

In the case of IT, you also have to factor in that sometimes people dismiss error messages because they already know what the messages say. Back in my C++ days, I would often not bother looking at the error output at all, because the moment I saw the compiler starting to spew out text, I immediately realized where I'd made a typo.

Then again, compiler errors can be nigh incomprehensible.

> it's the same shit. I constantly hear "yeah, I'm awful with computers, it's all black magic to me". But the person saying that is often not even trying, which is frustrating.

It's the difference between a Growth Mindset and a Fixed Mindset: http://www.aaronsw.com/weblog/dweck

"For twenty years, my research has shown that the view you adopt for yourself profoundly affects the way you lead your life. It can determine whether you become the person you want to be and whether you accomplish the things you value" (https://www.brainpickings.org/2014/01/29/carol-dweck-mindset...).

Link to Fullsize Image: http://www.gridgit.com/postpic/2013/10/fixed-mindset-vs-grow...

Very interesting. Thanks for the info.

Now does everyone just tend toward fixed mindsets as they get old? Or is that just a popular misconception or some sort of selection bias?

I feel like I ought to offer this link: http://slatestarcodex.com/2015/04/08/no-clarity-around-growt...

There are several more pieces in that series, on that site, if you care to look.

The reality and strength of the Growth Mindset theory is fairly unclear, despite a lot of books pushing it as gospel. It's also unclear whether it's accurate even if it is effective - there's an open possibility that people who are basically delusional about their own potential practice harder than the realists, and a handful of them who had high starting potential get great results.

So... age-related degeneration is real, neuroplasticity drops over time, but I wouldn't want to connect that to growth mindset because I'm not convinced that's even a real effect.

I think the incentives for having a growth mindset are diminished. Once people are comfortable with what they have in life - job, spouse, children - what benefit does learning, say, poetry or programming have to their daily life? I feel like at a certain point you have to feel it enhances your life beyond allowing you to acquire the standard metrics of success that we adopt into our cultural consciousness.

It's a very large and fuzzy topic. It depends on what computer system you run. Being able to use Windows is not a skill; it's extremely unconceptual and ad hoc, mostly cold rote learning. I'm sure older systems, with fewer layers and less cruft (think a 70s box), taught you more, giving you simpler tools and more ways to solve problems on your own. Nowadays it's mostly fads.

You're right about self-deprecation. Society holds up computer usage as a value even if it's mostly absurd in reality (because I knew Excel, people would watch me doing basic tasks and say they couldn't do it). It affects many people, and it's very, very hard to reverse. It's indeed very similar to math anxiety. Reminds me of childhood friends who dropped out of school in junior high and couldn't do the "simplest ratio", but were doing ratios on the fly with custom units to sell weed - I couldn't follow them. It's all ceremony, sadly.

ps: one last thing - I wonder how computer-illiterate people would feel if given a system like NixOS or GuixSD, where almost the full system is rollbackable, meaning they could mess with it without negative emotions and with almost no superstition.

Laziness and time efficiency trump experimentation and curiosity. Now, if you exposed kids to those systems...

Even in programming schools I and others felt that we should have old machines to program most of the time. It makes you think differently, or just more.

Also, fewer distractions.

With a full screen cli, all you see is what you are trying to do in this moment.

With a WIMP interface you have the taskbar and whatnot that may at any time try to grab your attention away.

Never mind all those background processes a modern OS runs, for various reasons.

When I push this train of thought further, I sometimes believe the slow turnaround of mainframe-style programming (plan, reflect, write, wait for the test, correct) was great for this.

Sure, exploratory programming, which I used to love, has value, but I think a lot of development in past days came from the technological wonder of "can we do it this way? Cool".

In the post-mainframe golden age, before UNIX and C came to screw everything up, people were using highly interactive programming environments with REPLs, system-wide debuggers, ability to work directly on running processes, etc. I think having access to such tools does not diminish your ability to think (I'm judging by my experience with those tools which somewhat survived in the Lisp world). But the issue with distracting OS seems more real. Back in the 70s, they didn't have a task bar with IM, music player and web browser waiting for you there. It was you and your dev environment.

the person saying that is often not even trying, which is frustrating. It's like a helpless excuse to be lazy

I suspect some of that might be due to Learned Helplessness. That is something that has to be worked on:



not just computers. whenever i visit home and my mom or sister want to watch netflix i have to be the one to plug in the hdmi cord from the ps4 for them. i've shown them how to do it dozens of times but they somehow zone it out and would rather bother me and wait than figure it out.

Then again, HDMI is laden with HDCP, and I seem to recall more than one story about how you had to turn on devices in just the right order to get something other than a black screen or an error because HDCP tripped up.

Digital signals are in theory a boon. but digital signals wrapped in DRM is a hot mess best avoided.

why should they learn, when you do it every time?

sounds like a great bargain for them.

Reminds me of my cats. One of them learned to open doors by jumping on the door handle. The other didn't, and when he needed some door open, he would wait for the first one to come and help.

You: "Seriously? It's about 5% more complicated than plugging in the vacuum cleaner. I'm in the middle of something. One last time and that's it." Voila!

I couldn't agree more with your last paragraph - I see this all the time. If these people approached other skills like driving a car or using a washing machine the same way they approach computers, they'd walk around in dirty clothes. As you mentioned it's easy to dismiss computer wizardry as black magic. This is the lazy approach in my opinion. This is simply a question of attitude towards problem solving.

On the other hand, I could redirect most adults who have some computer/mobile/IT problem to a bright fifth grader.

I analyzed technology proficiency level vs. adult literacy level, listing the equivalent literacy level with a relatively equal % of the population. I.e., level 3 proficiency is 5% of the population; above a college reading level is 5% of the adult population:

  Can't use computers ~ Under 4th grade level including illiterate
  Below level 1 ~ 4-6th grade reading level
  Level 1 ~ 6-10th grade reading level
  Level 2 ~ 10th-college reading level
  Level 3 ~ College+ reading level
Literacy %'s from https://contently.com/strategist/2015/01/28/this-surprising-...

The literacy idea is a good one. I was mystified by the description of the tasks: basically an IQ test, or a cognition test, or a math-and-logic test only tangentially involving a computer as a topic. And, shockingly, the predictable IQ-test results were taken to mean that computer user interfaces suck, which is pretty hilarious.

"I failed my standardized tests math section because the user interface of a #2 graphite pencil is poor." No, its because you're not good at math, the problem isn't the pencil. Half the population has to be below the median, after all, and we design civilization to make sure evolution doesn't cull anyone, so ...

I am kinda old and I can assure you that plenty of people were completely incompetent at using a physical card catalog (kind of like a printed out google search form, where an old fashioned library used to kinda be a printed out internet, back before libraries became internet cafes without coffee and also turned into daycares). Inability to search for things intelligently is not a recent invention of computation, any more than the inability to write essays is a result of word processing software or century old typewriter or the ballpoint pen.

It's a truism throughout history that as soon as the first tool was invented, some dude without enough mental horsepower to use it immediately started attacking the tool as being poorly designed. Fire, wheels, carpentry tools, farm machinery, computers - same old story. Some folks can't think, and giving them a tool and then asking them to perform a thoughtful task is doomed to fail, with the tool catching most of the blame. Why, if only wheels were better designed to be round in all three dimensions instead of just two, then that pesky problem of axle placement wouldn't be so stressful for end users, etc. etc.

The fact of the matter is the median human is barely at a level of boolean algebra basics and arithmetic and following instructions on a packaged cake mix box. Giving someone innumerate a calculator doesn't mean they can now launch the space shuttle, any more than giving someone an anvil means they can shoe horses or handing me a basketball means I automatically have a career in the NBA.

> No, it's because you're not good at math; the problem isn't the pencil. Half the population has to be below the median, after all, [...]

This is hilarious: talking about a hypothetical somebody sucking at math while repeating the popular incorrect opinion about the median in almost the same sentence xD

There is a quite simple distribution, over a population of just five, that has neither half of the population below the median nor half above it.

I must be missing something here. Surely the definition of median is that half the population is above, and half below?

No. Median is a value of a specimen such that at least half of the population is not smaller than it and at least half is not greater than it. In typical cases of large(ish) populations with a range of values, the notion that "precisely half of the population is smaller" is just a mental model good enough for a general understanding of what the median is about.

Now imagine the degenerate case of a population where every member has the same score. Surely there is a median, but surely none of the specimens is smaller than the median. A slightly less trivial case is a population of five with one value below, one value above, and three values in the middle.
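A quick sketch of that five-element case, using Python's standard library, shows how the "half below, half above" intuition fails:

```python
from statistics import median

# Population of five with three identical middle values:
scores = [1, 2, 2, 2, 3]
print(median(scores))              # 2
print(sum(s < 2 for s in scores))  # 1 -- only one of five is below the median

# With an even count, the median need not equal any specimen at all:
print(median([1, 2, 3, 4]))        # 2.5
```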

> Median is a value of a specimen that...

Correction: median doesn't have to be a value of a specimen.

This is not adding up... 32% of the US population has a Bachelor's degree, but only 5% has college-level reading skills? I think the original survey probably has some fatal flaws. Maybe they should re-publish the results after only taking into account individuals aged 25 years or more who actually responded.


The disconnect is that the definition of "college-level reading skills" is normative, not empirical. It refers to what a college-educated person "should" be able to read, based on the materials they see in coursework, not on what an actual college graduate can in practice read. Similarly, you'll often see stats showing that the average 10th-grader in so-and-so slice of the student population reads at a 3rd-grade level - meaning that their reading skills are really only up to school materials taught in 3rd grade.

I would say the term "college-level reading skills" is poorly defined if it's based on some idealized college graduate rather than real ones.

The phrase "college-level reading skills" has always been well-defined as a prescribed threshold of literacy rather than describe any college student with any level of reading.

If we used your suggested 2nd definition of "real students", the label would be circular and meaningless. (E.g. prerequisites reworded in a bizarre way such as, "Remedial Reading courses support students who do not demonstrate non-idealized reading skill proficiency that real students have."[1],[2])



The alternative definition "any college student with any level of reading" certainly strikes me as a straw man (granted, an unintentional one). How about "the average reading level of a college student"? Or "the average needed to graduate college"? (I'm not saying that is the definition, but I think it's closer to what laymen assume "reading level" refers to.)

> How about "average reading level of a college student"?

This is a form of normed referencing, and ultimately it becomes a (sometimes rapidly) moving target that makes comparisons difficult.

The standard used in the field is criterion referencing (similar to "prescriptive" used by 'jasode). This allows for easier apples-to-apples comparisons.

You are correct that the layman understanding is closer to your definition (at least in my experience). The layman understanding that you describe is not the standard in the field.

That seems reasonable - anecdotally, I know many graduates from both sides of the pond who have never read a book they didn't have to. For instance, in the UK, Fifty Shades was broadly heralded as the first book many had read in their adult lives.

From within our intellectual-elite bubble it's easy to go "surely it ain't so", but there it is. Hell, my ex-business partner, a highly educated and intelligent man, doesn't own a single book and hasn't read one since "Animal Farm" at school.

The sad reality is that the vast majority of people have absolutely no interest in reading. It's not a question of access, rather perceived effort.

I think school is partly responsible for this. I had to read many terrible books in school. French is my mother tongue, and I had to read stuff written in ancient French where a third of the page is definitions so you can understand it. Most of the time the book is just uninteresting. In my 5 years of high school, there are only 3-4 books out of the 20+ that I consider worth reading. And the worst thing for me is the analysis that comes out of nowhere. I actually liked to read when I was a kid, but now when I see a book I just get French-class flashbacks and a feeling of disgust. People tell me it goes away after a while. I hope it's true.

Anecdata: I know a lot of well educated people who don't read. At all.

What reading level would you expect for someone who got a degree in their early twenties and then didn't read anything more complicated than People magazine for decades after?

Level 1 or 2. They may have never been functional at level 3 (not that there is anything wrong with that).

Level 3 means that someone can consistently read and comprehend the full meaning of college-level texts. In the US, one does not need to read at this level to graduate from college (or even many grad schools).

There are many people in the world who are smart and/or very knowledgeable who don't consistently read and comprehend at level 3 across the range of topics covered in a typical college curriculum.

Source: Me. I have trained people in both reading literacy and tech literacy. My experience is both theoretical and practical. Feel free to ask follow-up questions if you have any.

Anecdata: I know a lot of well educated people who don't read. At all.

Our current President-Elect (US) reportedly falls into this category (or pretty darn close to it). That's a scary thought.

They probably did the survey at my university

That's a neat analysis -- thanks for posting!

Even most programmers are distributed like this. My team was recently doing usability tests of a networking library that I work on. We screen-recorded a few random programmers trying to install it and then make a simple app with it, giving them access to our tutorial and Hello World demo.

It was painful to watch them stumble about, trying to debug installation errors whose causes seem obvious to us. Just like watching a lay person try to use a computer, watching these other people made me want to blurt out the answer.

Any of the lessons that you take away from designing simpler UI for lay people applies just as much to professionals. Write a better API, a better library, and better documentation.

> and better documentation.

This. A whole lot of this.

Case in point: when Java Swing was popular, noobs, including me, were looking at Graphics2D and Image when we first wanted to insert a picture.

The correct solution was JLabel.

Since then I have learned a lot but man pages and docs without examples are still annoying. (BTW: used rsync manually for the first time in a while yesterday and the man page was good, src and dst syntax was described. : )

The problem there being that few people are going to see JLabel in a class list and think "oh, that must be something to do with my image-related needs!" Especially if they've used other GUI toolkits, which as far as my experience goes use "label" exclusively to mean "some static text".

Well, except WPF, where you can re-template anything you like, but even there, the default Label is just a text container.

Or web developers, who know that the obvious way to insert an image is to take a block element originally used for text and style it with a background image...

The bane of high-contrast users everywhere

The problem there being that few people are going to see JLabel in a class list and think "oh, that must be something to do with my image-related needs!" Especially if they've used other GUI toolkits, which as far as my experience goes use "label" exclusively to mean "some static text".

In Tcl/Tk you would also use a label to display your image with. Tk was one of the most popular toolkits at the time Swing was designed.

After moving from Swing to WPF I started deriding myself for not having done so earlier. WPF is a joy compared to most of the Java GUI toolkits.

Being blunt, it's also possible that your library was garbage from a usability perspective. It's nearly impossible to recognize how painful it is to work with your own code since you are so intimately involved in writing it.

That's most likely true. Most libraries are horrible to install and configure and only make sense to the people who wrote them.

I gave up on learning PHP and Laravel recently when I went to install Laravel and they recommend Homestead. So I went to install Homestead, but they require Vagrant. Of course Vagrant requires VirtualBox, and VB can't find my kernel source code on RHEL 6.5. Okay, so let's skip Vagrant... Okay, now I need to install Composer instead of Homestead... what's this? SSL operation failed... failed to enable crypto... operation failed in command line code...

Yeah I give up. Yeah, I could take the time to work through this nonsense, but I shouldn't have to. Getting multiple layers deep of having to install dependency A to satisfy dependency B which satisfies dependency C is not my idea of fun.

Both can be true. Figuring out poorly explained technologies is still indicative of a certain kind of aptitude or proficiency.

> It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us. Just like trying to watch a lay person try to use a computer, watching these other people made me want to blurt out the answer.

I see that all the time. Programmers aren't necessarily expert computer users.

More than half the people on my team don't use, say, WebStorm's regular-expression search & replace to quickly refactor, or Find Usages, or anything in the Refactor menu, or even the npm scripts I made to quickly update the project - any of the less obvious features that would make their own lives easier.

And they also like to work on one monitor, maybe two.

Just watching them try to debug something with their cargo cult methodology is frustrating.
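For what it's worth, the kind of regex-driven rename being skipped here is tiny to demonstrate. A sketch in Python (the function names are made up for illustration):

```python
import re

# Hypothetical refactor: rename every occurrence of getUserData to
# fetchUserData, the same transform a regex-capable IDE search &
# replace (or sed) would apply across a project's files.
source = "const a = getUserData(id); const b = getUserData(otherId);"
refactored = re.sub(r"\bgetUserData\b", "fetchUserData", source)
print(refactored)  # const a = fetchUserData(id); const b = fetchUserData(otherId);
```

The `\b` word boundaries keep the replacement from touching longer identifiers that merely contain the old name.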

Shit, I have people on my team who click Edit -> Copy and Edit -> Paste. Sometimes they might right click and paste, but keyboard shortcuts are foreign to them.

Isn't it common knowledge, though, that the setup is a really big chunk of the work?

And the problems encountered are probably not all too exotic, so you'll be much better after doing it a few times. However this is not a task you do often (at most once or twice per project). It's no surprise to me that years of practice work wonders on reducing the setup time.

If you add in researching suitable tools, it probably becomes even more dramatic (finding, installing, testing, uninstalling; repeat until you think you have the best or at least a suitable solution).

More and more, "usability" seems to be about prior knowledge and/or context.

One is basically trying to distill many hundreds of hours of accumulated experience into a few buttons and prompts.

The iPhone home button being one example of a context heavy control

> It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us

At least they were trying and did not dismiss the error messages as soon as they appeared.

It's hard to make good UIs.

Be it in the classical sense or in an API/docs sense.

I can do it, but it doesn't work for my own code, haha

Links like this illustrate what a distorted bubble the typical HNer lives in.

I've lost track of how many times I've seen a product or company posted here and HNers will say "I can already do that myself." For instance, I still chuckle at the "you can already build such a system yourself quite trivially" comment in the Dropbox HN post nearly a decade ago.

Even within HN there are bubbles inside the bubbles.

I keep seeing _sec people make statements that only make sense if one is the administrator/dictator of a company network or server farm, and that are basically impossible to implement on a personal computer without effectively treating the user as an attacker.

> Even within HN there are bubbles inside the bubbles.

sup dawg...

makes me wonder how many business opportunities we are missing out on.

we could possibly be making billions selling illiterates a usb-pluggable physical button that opens a browser window to gmail.

ye deities, the idiocracy creeps ever closer...


Ha! I referenced that very comment here just yesterday.


They should have included a category 'below 0': "User attempts to complete task by using the interface provided to him. Gives up in disgust after 15 minutes. Spends the next 8 hours to (in this order) use common automation tools to reach his goals, write small scripts, reverse engineer the data and network formats, and disassemble the executable to inject his own hooks. After still failing to accomplish the task, user tries to gradually replace parts of the provided software with his own versions; first using quick glue languages, eventually having to resort to rewriting large core parts in C just to have enough control of all components to make them interoperate. After 4 weeks of caffeine-fueled 16-hour days, user declares war on the vendor and starts writing his own version of the software and ecosystem to free humanity of the plague that is this god-forsaken piece of crapware. After 8 months of battling 30 years of legacy interoperability, data exchange formats and interfaces, and the 8th question from Suzy in Admin about where the button that made the one thing with the blue border go away and then made the green other thing go bleep bleep has gone, user flees to an off-grid cabin in the mountains with a beard below his collarbone and a crazed look in his eyes, while muttering 'they drew first blood. They drew first blood.' The end."

(bonus upvote for the first person to recognize the movie reference)

Sounds like a normal day. Rambo: First Blood reference also :)

I don't know where this is from but it literally made me laugh out loud, so thank you.

It's [OC] so thank you - and I'm not ashamed to admit that during the first few hours after I posted it, it was up/downvoted a bit and it even got to -1 for some time; and that at that point I cried a tiny little tear inside, filled with essence of the programmer's existentialist anxiety...

I would like to think about the set of skills that don't revolve entirely around coincidental interfaces made by a few mega corporations.

Are there computer skills more general than the ability to use iMessage or Windows Live Mail? Because if using iMessage means you now know how to chat, then what happens when NewChat(tm) comes out with a new brand experience?

What happens when Microsoft goes Metro or Windows 10?

As a society, should we be paying for user training for specific familiarity with proprietary interfaces? Shouldn't Microsoft or Apple be paying for this stuff? This is why I'm generally very suspicious of what people mean by "computer skills" in schools. They generally mean user training for Apple, Microsoft, or Google.

The generalized skill is the ability to recognize and categorize the metaphors in an interface. Once you've used a few different systems you start to gain an intuition that is more generally useful. But most users don't use a lot of different interfaces; they have experience with just one. I think that's the primary difference between people who work in tech and UX and the general user: one group has developed an intuitive sense for metaphors and the other group has memorized at most a couple of sets of metaphors.

This is exactly how it seems to work to me but I've never seen it put so concisely.

To me, computer skills means two things: the rate at which one can learn new technologies, and one's understanding of the basic components that comprise the machine.

Programming fits both of those categories: consider the number of programming languages in vogue at the moment. These languages however are composed of the same components which comprise the machine.

Programming languages are software, no differently than iMessage or Windows Live Mail. It's just another way to control the machine.

Well, I think you have to learn a particular system and then generalize.

NewChat(tm)'s best bet is to have the new interface mimic and borrow the metaphors of the older interface.

In a sense, what makes an interface "good" is that it uses what is already known.

These are called interface standards, and applications should follow the standards defined by the ecosystem in which they reside. This is why we adhere to them: to ensure that the user base has as much baseline familiarity as can be assumed.

As for the original question regarding the distribution of users' "computer skills": if you can figure out how designers think about how you should use systems, then it's much less about having "computer skills" than about having the capacity to figure out the psychology of the application designer. If you can type and use a mouse, and can either Google or read a book describing the paradigms of an application or ecosystem and figure out what they mean, you've got all the computer skills you need.

As programmers, I think we tend to forget what exactly it is we do... we don't just program, we figure shit out at the most technical levels. Other people are quite adequate at this too - we have lawyers, doctors, surgeons, mechanics and any number of other professions that solve way more complex problems than we do - we're not islands of all the world's intelligence. If people appear to lack "computer skills" it's because the paradigms we use to define our user interfaces and solve the problems we are attempting to solve are shit - and I say this as a pretty well seasoned computer programmer who spends most of my life trying to figure out how to get other people's shit user interfaces to get my own shit done.

So... that's really what I've got to say about that. People will figure out what they have time and inclination to figure out. If they don't have the time or inclination to figure it out, then it's your problem: Your software isn't solving big enough pain points for the learning curve involved. This means that either this user is not your target audience, or you don't understand your target audience well enough.

Either way, it's not on the end user, it's on you as the software designer.

>> NewChat(tm)'s best bet is to have the new interface mimic and borrow the metaphors of the older interface.


> Are there computer skills more general than the ability to use iMessage or Windows Live Mail?

wouldn't that be knowing how to program?

I think there is a set of basic computer skills that can help one manage their personal finances and basic administration, safely maintain (back up) a collection of important digital documents (including personal photos, but also things like scans of diplomas and such), and author neat letters and a resume for formal correspondence or job hunting.

Regardless of their future career, people should probably be taught how to use a word processor (use styles, hierarchically outline your document, basic sensible typography) and spreadsheet software at school, along with basic internet skills (particularly with regards to finding reliable information and understanding the pros and cons of social media).

None of these skills require any programming knowledge.

>> I would like to think about the set of skills that don't revolve entirely around coincidental interfaces made by a few mega corporations.

> wouldn't that be knowing how to program?

Nearly all programming languages or instruction sets for assembly languages are also coincidental interfaces (this time for programming), often (though not always) at least originally designed by some "megacorp":

- Assembly languages: typically the respective processor vendor

- C: Bell Labs (AT&T)

- C#: Microsoft

- Go: Google

- Java: Sun (now Oracle)

- Javascript: Netscape

- Objective-C: NeXT (now Apple)

- R: A reimplementation of S (originally by Bell Labs; later commercially offered by TIBCO Software (another megacorp) under the name "S-PLUS").

- Swift: Apple

So what I would consider as more general computer skills is rather the mathematical or the electrical engineering side of computing.

"Nearly all programming languages or instruction sets for assembly languages are also coincidental interfaces (this time for programming), often (though not always) at least originally designed by some "megacorp":"

Nearly all the mainstream, massively successful ones, yes. But the vast bulk of languages are not, and there is little evidence that there is some sort of massive advantage to be gained by "normal users" from any of them. There are better ones than those for some purposes, yes, but after many thousands of languages it's not like there's one that is clearly better for non-programmers, and this after at least several dozen and probably several hundred attempts explicitly aimed at that purpose.

Asking for people to understand the math is an even larger ask than asking them to understand the programming, which I base on the fact you can still find a very significant percentage of professional programmers, possibly even the majority, who have disdain for the mathematical elements of computing.

This is a not-very-generous reading of "most", because while I read that as "most programming languages (weighed by how often they are used)" you can also read that as "most programming languages (weighed by counting them)" if you want to prove some point.

In the context of discussing whether or not there is a more non-programmer friendly programming language, the fact that there are dozens or hundreds of attempts at that specific problem, which have largely failed, is relevant. The programming world is not defined by the top 10ish commercial languages; they're merely "very important".

It is also not an unfriendly reading when my very first words (which I have not edited) acknowledge the alternative readings.

I might just be confused, but I'm having a hard time understanding what point you're trying to make. I've read your comments and the context a couple times now and now I'm only sure that you're reading something in other comments that I'm simply not reading.

Yes, there are programming languages aimed at non-programmers. Many of these are also "commercial" languages, or at least were commercial in origin (like C). Smalltalk (Xerox), Hypertalk (Apple), the efforts towards 5GLs in the 1980s, Visual Basic (Microsoft), etc. We agree that these are relevant, I'm just not sure what point you're trying to make with that relevance, and I'd love it if you could elaborate.

>, I'm just not sure what point you're trying to make with that relevance,

I'll make an attempt to explain why jerf's response looks out of place.

Basically, wolfgke and jerf looked at the 2 groupings of programming languages (mainstream megacorps vs unknown) as evidence for 2 different goals.

wolfgke: generalization path -- dominance of megacorps languages means you must keep going one level lower in hierarchy of computer science concepts to learn universal concepts instead of proprietary syntax.

jerf: pedagogy ease of use -- megacorps languages are not provably any harder to learn than specialized toy languages

How did those two end up talking about 2 different things?!?

If you look at the comment chain from posters threatofrain-->bryanrasmussen-->wolfgke ... they started a dialogue that keeps diving lower and lower in underlying principles. It's a variation of the XKCD comic about "purity".[1]

jerf's response doesn't continue that purity dissection. Instead, his emphasis on pedagogy seems to point back to the original article by Jakob Nielsen which prompted this thread. That article says that elite users (like HN readers) can use complicated computer software and we forget that most others can't. The communication breakdown was assuming that wolfgke listed the megacorps language as a (ease-of-use) response to J Nielsen instead of a specific (purity) reply to bryanrasmussen.

But then again, I might have misunderstood everybody and I have no idea what people were trying to say.


> wolfgke: generalization path -- dominance of megacorps languages means you must keep going one level lower in hierarchy of computer science concepts to learn universal concepts instead of proprietary syntax.

Correct, with the additional (IMHO important) fact that threatofrain criticized "coincidental interfaces made by a few mega corporations" and looked for more general skills. bryanrasmussen suggested that "knowing how to program" is such a skill, but I argued that the most popular programming languages are also just "coincidental interfaces made by a few mega corporations" (this time for programming instead of general use), so we have gained nothing with respect to the original problem of "coincidental interfaces made by a few mega corporations" - we are just some layers deeper. So I suggested that if you are looking for more general skills, you probably have to look even deeper, into the mathematical or electrical engineering side of computing.

Between those two ends of the spectrum lie abilities like being able to read and reason through operating system menus.

I would have liked to see screenshots of the UIs they used, but they seem to be absent from the OECD report.

A lot of the standard tasks we do with office/business software are unintuitive, but easily learned once someone shows you or you Google it. Even as a designer or developer, it's not always clear what each button does.

>participants were asked to perform 14 computer-based tasks. Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer. This allowed the researchers to make sure that all participants were confronted with the same level of difficulty across the years and enabled controlled translations of the user interfaces into each country’s local language.

Same with the language used in software: sometimes translating UI buttons actually makes usability worse, because now you have to learn the shared language all over again. The descriptions on UI elements are often useless, unless you already know what the buttons do.

I've been telling my SO to "google it" for years. She won't do it unless I ask her to. Then when she does, I do this -> 0_o because, really, why are you typing that? Do you not know how to google?

Anyway, all this misses the point. If you have to google how to use a GUI the software has failed at a fundamental level.

You can take the technology part of the test here:


I have to admit I found it somewhat difficult, it's not surprising that most people performed poorly on it.

"You must use Firefox 10 or higher."

Unexpected impasse. Multiple steps. Requires navigation across multiple pages and applications.

Just taking the test is a Level 3 task.

The link took me to a 404 page.

One of the hardest computer tests I've ever taken.

I took the test (demo test at least).

Only 2 out of 6 questions had anything to do with UI/UX - the rest were either reading comprehension and/or basic math.

From the 2 that dealt with UI/UX, both were extremely hard, in one of them I had to send an email, get a token and navigate the site to send another email.

If this is what the results are based on, I wouldn't base any UI/UX decisions for web products on the results of this test.

The simulated environment also makes it more difficult. I accidentally tried to go back in the real browser instead of the simulated browser and broke the test. The simulated environment is also very slow.

And I requested the code 2 or 3 times until I realized there was an email tab at the bottom.

yeah, one of them told me to highlight a passage of text in some dense report about education effects. WTF does that have to do with "computer literacy"?!

Yeah isn't that just reading comprehension?

The user interface for that test is horrendous. I felt dumber just looking at it. The flow is quite rigid.

For example... at the start of the test, there's a "Please press 'ENTER' to continue"...

A) My keyboard has a return key, but no enter key.

B) It's a button. Has the test started yet, because I'm not sure if you're talking about the button, or the non-existent key on my keyboard.

C) You're making me think, and the test may or may not have started yet.

Maybe my reading comprehension skills are lower than I'd like to admit. Or maybe your UI makes smart people stupid.

> To begin, enter the Authorization Code provided by your administrator into the box on the left and click "Submit."

Am I missing something?

/am probably more computer illiterate than I realized

You need to choose the "Demo Test" link.

Meta-UI problems

"Page not found." Or is that a trick question?

Looks like they took it down -- archive.org found it again at http://esonlineenu.startpractice.com/

It's very concerning to me that they didn't include screenshots of this simulation site in their report. The pages and pages of data are meaningless without knowing what they were actually measuring.

I'm so computer illiterate I can't even click a link and get to the right page.

One of the questions for the test in English is “when did you first come to the United States”, but I have never been to the United States...

Incidentally, while I got every question right, I could easily imagine myself getting one or two of them wrong. Not out of stupidity, just bad luck. The question asking about the impact of educational attainment had at least 2 sentences that plausibly answered it, yet the correct answer was to highlight just 1 of those sentences.

"Press 'Control' + 'Alt' + 'Backspace' to log out of the browser for Linux."

I did the demo test tasks. I think their test UI is pretty bad. But I feel like anybody who's used email could probably figure most of it out.

I see the argument of "not enough hours of practice" in some comments. I disagree. That would be true if there were a direct correlation between hours spent and moderate proficiency in computers (e.g., the ability to independently reinstall your machine completely and diagnose which hardware component to replace after a failure).

The unmentioned problem here is that people are actively reluctant (this is tested, too) to learn the skills that would let them be more independent. I doubt this is entirely related to the total amount of hours spent at a computer.

In some ways, computer literacy seems to be a scenario similar to cars: homo simplex doesn't want to understand how it works, he/she just wants it to work, enjoying the luxury of not needing to understand it while posting duck faces on Instagram.

Let's face it, computer illiterates understand less, pay more and end up having lower purchasing power. Considering that this is entirely under their control, I find this quite fair (I have different opinions about Seniors, but that's another discussion).

To be honest, I can find only one valid reason for this to be an issue: civil liberties and human rights.

I'm intimately and increasingly concerned by how many civil liberties have been taken from citizens in "developed" countries in recent years, and I am under the impression that this is a direct effect of computer illiteracy spread across all levels of the population.

Citizens are being asked to vote on laws relying on technological concepts they don't understand, which end up stripping them of their rights.

That's what computer illiteracy is about and that's what (I think) the article should talk about... :(

My father is a professor and has been asking my mom the same questions about Microsoft Word for the past 20 years, never learning it. He spends a lot of time on it, too.

I understand that some software really sucks.

For example, word processors: I really hate them with all my heart, but to get my degree I had to write a lot. So I sat down, learned the core functions of LibreOffice Writer, and it made my life better.

Even my profs were amazed at how good my papers looked. Not because they were really good, but because none of my fellow computer science students bothered to learn the software and make theirs look good.

You wrote a scientific paper in LibreOffice?

Not LaTeX?

I thought it would take too long to learn LaTeX.

Writer had a nice DSL for math terms, so I had everything I needed.

(Yeah, alright, I have written a scientific paper in LibreOffice as well. But it was one that was started by someone else, I just finished it. And I didn't inhale.)

> has been asking my mom the same questions about microsoft word for the past 20 years

That's because the answers change with each new release :)

Wow, there's a lot to unpack there. We went from not being able to schedule a meeting between four people in a calendar app, to lamenting the lack of computer repair skills, to denigrating people as somehow less than human ("homo simplex") because they have better things to do with their lives than to devote them to your hobby, to the rise of mass surveillance.

There's no shared thread among any of these things, except that they all contain a silicon wafer somewhere. They also all contain plastic, but I don't see anyone lamenting the lack of knowledge about polymer chemistry among the populace.

The calendar problem isn't necessarily a "computer skills" problem, as much as it could be simply a lack of familiarity (solved by time) or a UX problem (making it literally not the user's problem). You want to know a secret? I don't know how to use GMail. Seriously. I don't.[0] I had to get someone to come over and show me how to simply reply to an email, because every time I clicked what looked like a reply arrow, the damn thing went back to the list. I have no idea how to sort the inbox, or even whether you can. Perhaps the most popular email client in the world, and I don't know how to use it. Am I illiterate? Sure looks it. Yet, I can send an email using telnet.
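For anyone curious, "sending an email using telnet" just means typing the raw SMTP protocol (RFC 5321) by hand at the mail server. A rough sketch of the client-side dialogue (the hostname and addresses here are made up for illustration):

```python
# The client-side lines of a minimal raw SMTP session, the kind you
# would type by hand after `telnet mailserver 25`.
def smtp_dialogue(sender, recipient, body):
    """Return the lines a client sends in a bare-bones SMTP session."""
    return [
        "HELO example.org",
        "MAIL FROM:<%s>" % sender,
        "RCPT TO:<%s>" % recipient,
        "DATA",
        body,
        ".",      # a line containing only a dot ends the message body
        "QUIT",
    ]

for line in smtp_dialogue("me@example.org", "you@example.org", "Hello"):
    print(line)
```

The server answers each command with a numeric status code (220, 250, 354, ...), which is what makes the protocol usable interactively.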

Personally, given that people find word problems complicated, can't calculate restaurant tips in their head, and are prone to falling for arguments that support their personal biases, I think the failure to schedule a meeting is more of a general problem than a computer one.

You're elevating "computer literacy" as some sort of giant lift in the economic chain. Really? Sure, software engineering pays well, but it's also completely unrelated to sending out a calendar invite. If that were the metric, then why aren't secretaries paid $100k? Similarly, you can swap out software engineering with any other high-earning field, say hedge fund managing. Arguably, that's a better field, and you don't have to know how floating point numbers are represented, nor do you have to send your own calendar invites.

Finally, yes, mass surveillance is a real issue, and yes, more education about what is being done, what is possible, and at what scale these things are happening might help. It might not, as well. These are much more political, legal, and philosophical questions than computing questions. After all, I don't need to understand how carbon and iron bond to understand the ethics of stabbing someone in the throat with a knife.

[0] Before anyone chimes in: I don't really care how to use GMail. It sucks.

I've had the same issue with Gmail. The back and reply buttons are nearly identical; the reply one is just a bit curvier. I don't know why Google has a reputation for good UIs; most of the ones I've used have varied from decent to awful.

Who told you Google had a reputation for good UIs? Every Google UI is atrocious. Gmail uses UI concepts that were tried and rejected by Microsoft back in 2000. (Drop-down menus that obscure less-used options under expandos.)

YouTube's ContentID UI deserves a special mention, it's not just awful, it's actively sabotaging people questioning ContentID strikes (90% of which are fraud.)

Or try the new Google Hangouts desktop "app". (Not really an app any longer -- now only the Chrome add-in is available.) It's hilariously awful. It's almost as if they're trying to sabotage their own product. Don't take my word for it, read literally any of the reviews on their own app site:


Sorry. I could rant about how terrible Google's UI/UX is for days.

Maps has gotten really terrible. I can never find anything I want to do on there.

Google does UX so badly that other google employees blog about it https://medium.freecodecamp.com/stop-the-overuse-of-overflow...

This reminds me of a UX lecture I attended in undergrad in the UK.

The lecturer flashes up a picture and says "who is this?". There is a sea of blank faces as a reply. The picture was of Roy Chubby Brown [1] who by some measures was the most popular comedian in Britain at the time. Out of nearly 100 people no one had any idea.

You are not average. Not even close.

Random trivia: the fictional town in the cult TV series "The League of Gentlemen" is named after Roy Chubby Brown:


But how do you know the "average" person knows the most popular comedian in the first place?

There's one day of the year which is the "most popular" birthday for a population, but that doesn't mean a huge percentage were born in it.

This would have been surprising until a week ago when a friend of mine sent me this image of course offerings for those signing up for unemployment: https://imgur.com/gallery/tCG6J

Side note: If anyone is looking for an experienced technical sales person in Boston let me know! They have experience with startups, IPOs, large corporations, channel sales, and has the sales gift in a non sleazy way.

I'd like to see the data for people below 25. It's unclear how much of that distribution is due to adoption lag.

The same group did some research into that [1][2]. TLDR: there are differences in behaviour but error levels are similar and it makes no difference for the UI guidelines you should apply when designing.

[1] https://www.nngroup.com/search/?q=age [2] https://www.nngroup.com/articles/young-adults-ux/

Kids are worse with computers than you'd think. Volunteer in a library for example and you'll be helping 16 year olds with printing double sided.

All kids have been "experts on computers" for about three generations now. It's no longer a valid meme.

There are kids now who don't know what a VCR was, but you can assure them that their grandpa was the only person in the household who could figure out how to program the timer.

The idea that kids or young people are better with computers is not actually true. Yes they can use apps, some of them can install them, but for the most part they are clueless when it comes to actually fixing a problem with a computer.


In fact everybody who seems to be a 'computer expert' is probably comfortable at just one layer or two. Layers include

   running applications
   installing applications
   writing applications
   installing services and drivers
   writing services and drivers
   installing operating systems
   writing code for operating systems
   writing kernel code
   designing hardware
   designing chips
   designing chip technologies
   chemistry and physics

The clear message of the article is about increasing awareness of users' limitations, but I'm also wondering what can be done to increase people's computer literacy. When we see statistics about functional illiteracy in the traditional print sense, we might think about ways of minimizing demands for people to read things, but we might also wonder how we can help improve literacy.

So, believing in the analogy between print literacy and computer literacy, I want to ask that question here too. What helps or doesn't help, and what can change? How early would different interventions have to start to be helpful?

In my kid's UK primary school, ICT (Info & Comp. Tech) amounts to playing games -- not even computing-focused ones, just crappy Flash games.

It's like teaching literacy by giving them only picture books. They'd be able to handle books, turn the pages, etc., just not actually use them properly.

With games like lightbot, apps like turtles and Scratch, and sites like code.org I can't really see the excuse for this.

At the very least, give them DOSBox and make them enable extended memory.

Your comment caught me off guard and made me laugh out loud.

There were so many things about those early systems which were not universal skills at all, but simply hardware limitations (or holdovers from previous hardware limitations - or just terrible design).

And yet, after watching my kid learning to use a computer, I do feel a deep nostalgia for the single-purpose simplicity of the command line I grew up with.

The clear message of the article is about increasing awareness of users' limitations, but I'm also wondering what can be done to increase people's computer literacy.

I think those are two sides of the same coin - if we understand the user better we can make software that's easier for them to use, which in turn makes the user more confident and better at using the software. The goal shouldn't be to take away the powerful tools and replace them with simplified tools, but to make the learning curve shallower so users can learn the software and eventually reach the point where they can use complex features if they need to.

Agreed. But I think advanced users become comfortable with idiosyncratic aspects of computing that could be recognised as deficiencies, and improved. So as well as teaching, there's the matter of raising the standards of software quality.

An opinion stated to a community that (generally speaking) thinks that Git is a pretty good piece of software.

Snarky, I'm sorry, but you're 100% right. I grew up in a world where there was tons of optimism in software development. Programs like HyperCard, FileMaker, Microsoft Access, Visual Basic/WinForms, RealBasic, etc. promised to make (the most common type of) application development accessible to everybody with minimal training.

HyperCard's most successful developers didn't come from software development backgrounds-- they were artists, journalists, historians, etc.

(See this excellent article: http://www.filfre.net/2016/10/the-manhole/ )

All that's abandoned now. The optimism is gone. Developers who gave a crap seem to have disappeared a decade ago. The open source development methodology, which never gave a crap about ease of use even when Apple and Microsoft used it as the foundation of their products, has taken over all software development.

And I sit at my cubicle, thinking about the now-dead world that was emerging back in the mid-90s, thinking: I could have been a plumber instead.

Maybe there's hope for a new generation to come to be educated with those old values. And it doesn't need to be an entire generation... just enough people in the right places.

I didn't grow up in that age. Thanks for the link!

There is a scene in Silicon Valley (the series) where Richard thinks the interface of the platform is really simple and intuitive, but it turns out he only asked Level 3 people and above, and it was not easy to use for Level 1 people.

Well, in Turkey, because of frequent bans on websites, a good number of younger people know what DNS is, what a VPN is, how they work, which one is better than the other, etc. I can say that people at Level 1 use these tools to solve "problems" and people at Level 2 can set this stuff up for them.

>26% of adults were unable to use a computer

What? This is really surprising. I expected some people to be really poor at using computers, but not being able to use them at all? Wow. TIL

I didn't realise this until I heard that Hillary didn't know how to use a desktop computer and could only use a specific version of BlackBerry. Apparently it's still possible to be totally functional without using technology.

Trump either:

The New York Times reported on Nov. 6 that Trump “does not use a computer,” which explains why he was perplexed at his campaign spending millions of dollars on digital ads. - http://qz.com/829647/donald-trump-and-hillary-clinton-dont-k...



He probably doesn't drive or cook either. Is that perhaps as much a rich person thing as anything? I suspect he does most things by proxy?

I'm sure he _knows_ how to drive. It's getting to the point where not being able to use a computer has the same impact as not knowing how to read.

I don't think that's true anymore. If anything we're getting past the point where not knowing how to use a computer is harmful. Smartphones and tablets have really lowered the barrier to entry for modern (high-tech) life.

Desktop operating systems are complex and hard and if you do the wrong thing, you can break everything. Mobile OSes don't give you those options.

"Smartphones and tablets have really lowered the barrier to entry for modern (high-tech) life."

This is true - however these devices have also completely taken away our computational freedom as well. In fact, I'd argue that computer literacy is more important than ever. The only thing that has changed is that now society suffers as a whole from computer illiteracy, instead of just the computer illiterate. They can still post cat photos on social media, but a growing segment of the population has no idea how any of this works, and is at the mercy of Microsoft, Google, Apple, Facebook, Zynga, etc.

As someone who grew up with computers, started making Doom maps when I was 8, and was programming C++ on Linux when I was 14, I see this trend as an oncoming storm. It's as if we were replacing libraries with cable television. Computers (phones and tablets specifically) are transforming from a platform that enables us to do anything we want into a sort of Disneyland marketplace where nothing exists unless there is money to be made, regardless of ethics.

I think it's more important now than ever for people to understand how to identify and select legitimate software that doesn't have ulterior motives, and put together a system which serves no one but its owner. I suppose this is exactly what TFA says is impossible, but damn is it important.

I should have said technology instead of computer, I would include smart phone under a broad definition of 'computing device'.

Most people I know in business management default to passing tasks off to other people (as I think they should). They end up like the opposite of the absent-minded professor: They are skilled in getting people to do what they want, but have no technical skills of their own. I'd imagine when you get to that level, it's even more concentrated.

HRC and DJT were born in 1947 and 1946 respectively. They were college-educated adults with years of career experience before it was reasonable to expect someone to have computer experience, and before popular GUI computers were in use by the general public.

Good news for politicians. Only 5% of the population actually understand how dangerous they are for civil liberties.

This is a great topic and an informative article. As a probable exception to the rule, I'm an "older" individual with decent computer skills. No doubt it's attributable to managing and programming computers out of necessity since the IBM PC days. Having to learn everything the hard way has its merits.

Related to the article, I created and maintain a database application for a non-profit arts organization. It involves membership, artwork inventory, and exhibitions. It's a fairly complex task; almost all the work goes into the web-based UI, while the PostgreSQL backend/server is pretty straightforward.

The challenging part is getting the artists and volunteers to actually use the DB program. The UI is as simple and unambiguous as I can make it, but there's only so much you can do to make a record with two dozen fields to fill in "simple".

One method is using dropdowns to select from where that fits. Others are avoiding hidden "tricks" the user would have to know, giving on-screen examples of the proper field format (like date or time entries), and providing concise, specific, instructive error messages.
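As a sketch of what a "concise, specific, instructive error message" can look like in practice (the date field and the YYYY-MM-DD format here are my own illustrative choices, not details from the app described above):

```python
from datetime import datetime

def validate_date_field(value):
    """Validate a date entry; return (ok, message)."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True, ""
    except ValueError:
        # Name the bad input, state the expected format, show an example.
        return False, ('"%s" is not a valid date. '
                       'Please use YYYY-MM-DD, e.g. 2016-12-06.' % value)

print(validate_date_field("2016-12-06"))     # (True, '')
print(validate_date_field("12/6/16")[1])
```

The point is that the message carries everything the user needs to fix the entry without hunting through documentation.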

Even with all that effort, convincing users to try it out has proven the biggest hurdle to success. I've come to realize a key to the tool's utility is constant encouragement. Live demos are often a useful way to help people "get over the hump" of fear and resistance.

Ironically, the "power users" often show the greatest resistance to trying out the web app. While it's made to work as directly as possible, power users fear they will look "dumb" if they don't instantly grasp the operation of every feature.

What I've learned (again) is it requires empathy for users' fears and sense of intimidation, and assuring the app is designed with "foolproof" safeguards preventing accidental disasters. Above all it takes enormous patience training users, listening to user feedback, answering questions and never, ever criticizing when people make mistakes.

I wish they had breakdowns by age. I think it would be marginally better for younger people, but probably not as good as one would hope.

I admit I still catch myself accidentally hitting the browser's back button on a multi-page Ajax form without its own back button. Or having to fill out a form again because, instead of opening a modal or a new tab, the current page changes (e.g., TOS). I've come across countless UX developers who don't consider these subtle details.

And this is only a small part of nuances that a typical user faces. I don't care how beautiful or fancy your UX is. Familiarity is king in design, and if you stray too much away from the current experience, I won't hesitate to say that my toilet has better UX.

Ajax can behave nicely with your browser's back button, but you're right that it's impossible to tell if it will before you try.

Interesting that Japan has both the highest level 3 percentage and the highest can't use computer percentage.

Not very surprising, actually. They have a good chunk of people who are heavily reliant on mobile devices, and that predates smartphones. I have heard more than a few cases of university professors commenting on students' lack of computer skills, to the extent that some of their students submit their essays using their phones.

This is fantastic news. I am glad to hear I am in the upper echelons, and that computing literacy - let alone programming literacy - has not caught on. My skillset would be useless if everyone could do it.

I found it funny that the researchers refrained from assigning a computer skills "level zero" to avoid the negative connotation. To experienced computer users, zero is just the first number.

I've heard the zero vs one indexing as offset vs position. The first element has offset zero.

Edited to add relevant xkcd: https://xkcd.com/163/
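A quick illustration of the offset view (any list will do; this one is made up):

```python
# Offsets measure distance from the start of the sequence, so the
# first element is at offset 0 and the n-th element is at offset n - 1.
items = ["first", "second", "third"]
print(items[0])               # offset 0: the first element
print(items[len(items) - 1])  # offset len - 1: the last element
```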

I don't get why they didn't simply call the lowest level "level 1" and the highest "level 4".

Level 7: Rewrites browser software in Emacs LISP before proceeding

Level 8: Writes driver software in HTML.

Level 9: Builds rudimentary computer out of off-the-shelf parts, on a table covered in breadboard and DIP-switches, programs C compiler by hand on the DIP-switches with a hand-carved wooden stylus, and bootstrap-compiles the full-featured compiler and OS kernel for commercial computer hardware from painstakingly audited source before proceeding.

Why would a smart person complicate their life with no gain?

(unless the goal is learning all there is to know about Lisp or web browsers)

I read this piece for the first time about four years ago. I've been revisiting it at intervals ever since. As it happens, I had been assigned to a Javascript project at that time after a decade of working with Java. I had been initially dismissive of the "toy language" of Javascript but I also felt that as I learned it, I grew inexplicably fond of it. The Graham essay helped me put the puzzle pieces in their right places and realize that Java was really a pretty clunky language and that Javascript was, in fact, superior. Never went full LISP though -- I had tried Scheme in the past and failed to wrap my head around it -- but it definitely made me respect the ideas behind the language.

For fun?

What is with the needlessly complex description of tasks?

> Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.

It reminds me of times when I was writing articles for journals and had to find filler to reach the required arbitrary page count.

My thoughts exactly. As a UX authority, you'd think they'd at least write clear descriptions.

The report covers roughly 1 billion people if you take the working population of the 33 countries involved.

At a sample size of 215,942, the margin of error is fractions of a percent.
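For a rough sanity check of the "fractions of a percent" claim, here is the usual 95% margin of error for an estimated proportion at the worst case p = 0.5 (this simple formula ignores the survey's multistage design, which would inflate the true margin somewhat):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

print("%.4f" % margin_of_error(215942))  # about 0.0021, i.e. ~0.21%
```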

But, what this doesn't account for is that in the original report (http://www.oecd-ilibrary.org/education/skills-matter_9789264...) you do see a significant variance in the distribution of skills based on:





In Japan, only 4.3% of adults are at "Level 1", whereas in Indonesia it is 37.2% - which is to be expected when Indonesia is at the bottom of the ladder in GDP per capita (http://www.keepeek.com/Digital-Asset-Management/oecd/educati...) and in literacy scores (http://www.keepeek.com/Digital-Asset-Management/oecd/educati...).

While this report says it takes the 'average cross the OECD countries' it's not clear on which factors were weighted or adjusted.

The information isn't wrong - it's just worth taking it with a grain of salt depending on how you are using the information. That said the US market for example lines up very close to the average, which is still staggering.

I'm not surprised. Typically "computer skills" are taught as "click here, here and here to edit a word document". I think going back to basics and teaching people how to do things from the command line would improve literacy dramatically, especially with the younger generation who have been shielded from the concepts of "files" and "programs".

Why would you need the command line for that? Any GUI file browser is a much clearer tool for understanding files and programs.

That's a matter of opinion that I am sure many Linux/Unix users do not share. Even on Windows, the tree structure is most easily understood in plain text without all the noise. It's literally a diagram of the file system.
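The "diagram" reading is easy to demonstrate: a few lines of code (a sketch, not any particular tool) print a directory as exactly this kind of indented plain-text tree:

```python
import os

def tree_lines(root, prefix=""):
    """Yield the directory structure under root as indented plain text."""
    for name in sorted(os.listdir(root)):
        yield prefix + name
        path = os.path.join(root, name)
        if os.path.isdir(path):
            # Descend one level, indenting the children.
            yield from tree_lines(path, prefix + "    ")

# Example (commented out; pick any directory you like):
# for line in tree_lines("/etc"):
#     print(line)
```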

> That's a matter of opinion that I am sure many Linux/unix users do not share.

Indeed. It's one of the fundamental fallacies of design: The fact something is clear and easy for you does not mean that it's objectively clear and easy. You are not your users. http://uxmyths.com/post/715988395/myth-you-are-like-your-use...

I grew up using the command line, I kept stubbornly using it for years after GUIs were ubiquitous, and I still use it regularly at work and at home. None of that means that it is the best tool available for this specific learning task.

> It's literally a diagram of the file system.

So is Windows Explorer or any other GUI file browser, in every way. If you put it in details view, it even displays the same data in the same way as a command line directory, except you can navigate it intuitively with a mouse.

Explorer will show a lot of unrelated stuff on the left. It will hide important information (file extensions, by default). It makes it difficult to see hidden files. It will pollute the view with different icons for file types.

When someone is learning what a file is, explorer shows both too much and too little information.

You don't, but when you want to combine the primitives to get stuff done, it's much easier from the command line.
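As an example of the kind of composition meant here, a sketch that counts the most frequent words in a text file by chaining single-purpose tools (the file `notes.txt` is invented for illustration):

```shell
# Each command does one small thing; the pipe (|) feeds one
# command's output into the next.
printf 'To be or not to be\n' > notes.txt   # invented sample input

tr -cs '[:alpha:]' '\n' < notes.txt |  # split text into one word per line
  tr '[:upper:]' '[:lower:]'        |  # normalize case
  sort                              |  # group identical words together
  uniq -c                           |  # count each group
  sort -rn                          |  # most frequent first
  head -5                              # keep the top five
```

No single tool here knows anything about "word frequency"; the result comes from composing primitives, which is exactly what a GUI file browser can't do.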

Most people's primary computing device is a phone. They don't have a command line the user can access.

A phone is a media consumption device, not a computing device.

You are right, and it's a problem, but it doesn't have to be that way, we can improve human input/output.

Touch screens are extremely clumsy input devices...

Keyboards sure are hard to beat. Maybe a foldable Bluetooth keyboard that you can keep in your pocket and then place on your lap or a table.

Where do you place your mobile while typing with both hands, though? Maybe in a casing so it can sit on your forearm, or a Velcro case so you can stick the mobile to textile surfaces.

I'm hoping for something like HoloLens with some kind of keyboard, virtual or otherwise. Then we can take large-screen monitors with us.

"Computing?" Now that's a term I haven't heard in a long time.

Who does computing outside of a data center? I'm typing this on a laptop right now, but I wouldn't say I do any "computing" on it. Even at work, the actual amount of "computing" I'm responsible for (i.e., a program I specifically wrote to calculate some number) is very low compared to simple text editing, web surfing, and music playing.

Throwing around this archaic and ill-defined term, and then dismissing the phone as "a media consumption device", strikes me as elitist and out of touch. Is photography consumption? Clearly not, since it's production. Is media editing consumption? No. Is it computing? Well, given that nonlinear digital video editors were big, expensive machines just several years ago, I'd say so, unless of course we're moving the bar. Is writing computing? 30 years ago it was. Do people actually write on their phones? Yes. Oh, communication doesn't count? How about term papers? Do they write those? Yes, they do.[0]

I think you need to reexamine your biases.

[0] http://www.wsj.com/articles/look-mom-im-writing-a-term-paper...

Fair enough, it's a bad term. What I mean is instructing the computer to do something for them, rather than manually performing every action themselves. The sorts of things people do in Excel, but in a more automated way.
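A concrete sketch of that kind of instruction-giving: summing a column of numbers, the sort of thing usually done cell-by-cell in a spreadsheet, expressed as one instruction instead (the file `sales.csv` and its contents are invented for illustration):

```shell
# Create an invented CSV, then tell the computer what to do with it
# once, instead of clicking through every row by hand.
printf 'item,amount\nwidget,3\ngadget,7\n' > sales.csv

# Sum the second column, skipping the header row (NR > 1).
awk -F, 'NR > 1 { total += $2 } END { print total }' sales.csv   # prints 10
```

The point isn't awk specifically; it's that the task is described once and the machine performs it, however many rows there are.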

As for what phones are capable of, I'm aware. But they are the wrong tool for the job. For the most part they are walled gardens where the only things you can do are what some app developer thought you might like to do. The industry is stuck in a regressive state where we are trying to limit users, not empower them.

> But they are the wrong tool for the job.

They're the wrong tool for the job of creating and managing files in a filesystem. They're not the wrong tool for writing a Google Doc, or updating a Facebook status, or taking a photo and uploading it to Instagram. Arguing that we need to teach users paradigms like a command line is out of touch with what computers are to most people today.

We should be making tools that work with the way people use computers, not making users work at learning the tools we build.

You mean teaching them to be completely reliant on cloud services to make silicon valley "entrepreneurs" rich.

That isn't empowering users, it's enslaving them.

Yet they are many times more powerful than the computers we had in the eighties. The input looks the same, though; maybe it's time to invent some new I/O, for example a wristband so you can type in the air, and a HUD like Google Glass.

Oh, that yet-another-product-Google-killed-off?

You must mean an Apple watch you can speak to, and a Microsoft Hololens...

Why should the typical computer user need to understand concepts like files and programs? That sounds akin to saying that before someone can drive a car they should need to learn how to strip down an engine. There probably are those who advocate that, but they're in the minority.

A computer is not a car. A car is an appliance, and the perfect, bullshit-free car would be one that simply drives you from point A to point B safely, without any additional input besides specifying point B.

A computer is not an appliance; it's a general-purpose problem-solving tool. It's closer to reading and writing than to a dishwasher or a fax machine. You can argue that applications are like appliances, and that makes some sense, but refusing to learn about the environment those applications live in makes you that much less able to do things that span multiple applications.

Because those are the primitives that make computers work. Interfaces change like fashion, but something like "saving a file" was useful 30 years ago and is still useful now.

Stripping down the engine would be more akin to coding the programs themselves. This is more akin to saying that before someone drives they should know about the accelerator, brake and steering wheel, the basic tools that they use to interact with the car.

> This is more akin to saying that before someone drives they should know about the accelerator, brake and steering wheel, the basic tools

No, this is something in between, but an analogous system does not exist in cars. If cars had trim, like airplanes, that might be analogous.

To follow the analogy further, it's both. We've never had a machine as general purpose as computers are. I'd classify it as more like writing and mathematics than any piece of technology. And with those we teach from the bottom up.

The same reason you want literacy in any other field: so you aren't led by the nose or taken advantage of. Take medical care: in theory I shouldn't need to know a thing about it, yet not all doctors show the required dedication to trace a problem to its root cause. So I need some literacy in that field, so I can solve the problem myself, or insist on and steer toward a solution.

I suspect, but can't say for sure, that every commoditized technology throughout history has gone through this process where first you can't use it without understanding it in depth, then people try to build "it just works" experiences with gradually decreasing pushback from the experts, and eventually very few people actually understand the thing.

Knowing the basics of what a file and program are is more like knowing what a door and steering wheel are.

'A file is where data is stored' and 'a program is a collection of instructions for the computer to do something' are the basic levels of understanding one would expect.

> Why should the typical computer user need to understand concepts like files and programs? That sounds akin to saying that before someone can drive a car they should need to learn how to strip down an engine.

I'd say it sounds a lot more like learning the basic theory of hydraulic brakes (because you shouldn't brake too much, but rather switch to a lower gear, when driving down from the mountains).

So they can work with them far more efficiently? A computer is a much more general purpose tool than a car.

Not mentioned yet: separation of concerns.


No, this is like knowing how to read road signals.

They know the command line all right. It's just that the Google search bar is the new command line.
