We have this innate comfort and familiarity with using computers because, as hackers, they're a huge part of our lives. People have other ways of life than us, and other expertise. We shouldn't dismiss their computer illiteracy as stupidity; it's just as bad as them writing us off as "computer nerds".
There's also this tendency for non-computer-literate people to be overly self-deprecating. I noticed this with mathematics when I tutored people at uni, and with computers it's the same shit. I constantly hear "yeah, I'm awful with computers, it's all black magic to me". But the person saying that is often not even trying, which is frustrating. It becomes a helpless excuse to be lazy and let someone else do work that is often not that complicated.
My father is 85 and is quite good with computers; he uses email constantly, writes documents and books using word processors and understands the concept of files, reads on a Kindle, etc. But he had never touched a computer before he was maybe 70 (let alone owned one), and before that he wasn't technical at all. He was trained as a Latin teacher (!) and spent his professional life in politics, never having to type anything himself.
My father-in-law is very good at many manual tasks including woodworking, electricity, electronics, and was an officer in the French army for many years, where he was responsible for maintaining highly technical weapons. He too is 85 and he can't seem to understand computers, although I have known him for 15 years and have been trying to help him for that long. He seems to want to learn but isn't really interested in what one would call "the big picture"; he just wants a list of actions that will get him where he wants to go. Of course that approach fails miserably whenever something happens that's not on the list.
I have two friends who went to top engineering schools and who, I discovered recently, type whole URLs not into the browser's address bar but into Google's search form. My eldest son is 11 and doesn't confuse the two. My younger son is 7 and doesn't seem much interested.
My mother is a graphic designer, old enough to have done page layout with a knife and glue. She learned Quark, then FreeHand, then InDesign, and still uses the stuff to this day. She was not raised with a computer, but her father was an engineer and she has that engineering mind, where she can follow something back to its root and figure it out.
My father, on the other hand, is a Dale Carnegie teacher. He's a people person. He's had a computer since we got an Atari ST in 1985. He's been on Macs since 1987. He STILL cannot use the thing properly. Sending an email is hard for him. Using his iPad is tough. Facebook enrages him because "I don't know any of these people, how do I get rid of them, I didn't ask to see their pictures!" My dad was raised by a photographer and a nurse. He never learned how to troubleshoot. He cannot find a root cause.
But he can hold a room full of people captivated and make them laugh, something my mother would NEVER want to do. So, it's those people skills versus those tech skills, as ever it was.
All this has been exhaustively hashed and rehashed by the UI/UX, cognitive psychology, design, SIGCHI, etc. communities for decades.
We just don't listen to them.
Not stupid + put in the time = "good" with tech.
I know intelligent people who spend time on it and still fail for lack of interest.
I seem to have an innate talent for computers. Given a "clue" I can usually extrapolate most of what I need to know. I may be "smart", but my memory recall is horrible -- making connections is very difficult for me given too much time displacement. The extrapolation is mostly automatic, without actually "thinking".
My memory is poor and seems to only get worse and I'm not even in my 30s yet.
Standard Install (Recommended)
Custom Install (Advanced Users)
Yet the custom/"advanced" option was just a couple of checkboxes that let you exclude extra bloatware during installation.
I wonder how many users, upon seeing both options, tell themselves they are not "advanced" computer users and just go with option 1 without even looking at option 2. Had they bothered to look, many of them would have known to uncheck the bloatware they don't want. If that was too much, there was a Back button. But the wording "Advanced Users" scares many people away from even looking into that option, and so they end up with some Yahoo! search toolbar forever after...
Self-deprecating computer users are one issue. The extent to which some software companies take advantage of it is another.
They're occasionally accurate - I can think of a few programs where the "advanced" option demanded that I go and read a bunch of documentation - but mostly it's either laziness or malice. I don't think they've been a reasonable choice since storage was so limited that 'advanced' meant screwing around with DirectX directories and partial installs.
In 90%+ of cases, basic/advanced could be better addressed with a handful of simple-English checkboxes for what to install. One is locked-active, for the core bundle. The others have one-line descriptions of what they do and possibly what other products make them redundant, maybe with a "more info" button.
And since evil bloatware isn't going anywhere, we could at least set a norm that real menus give you options, and anyone trying to prevent you from engaging your brain is pushing crapware.
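To make that concrete, here's a rough mock of such a menu in Swing (all names and components invented for illustration; a sketch, not a real installer):

    import javax.swing.*;

    // Hypothetical mock of the "handful of plain-English checkboxes" idea:
    // the core bundle is locked-active, everything else is opt-in with a
    // one-line description of what it does.
    public class InstallOptions {
        public static void main(String[] args) {
            JPanel panel = new JPanel();
            panel.setLayout(new BoxLayout(panel, BoxLayout.Y_AXIS));

            JCheckBox core = new JCheckBox("FooApp core (required)", true);
            core.setEnabled(false);  // locked-active: can't be unchecked

            panel.add(core);
            panel.add(new JCheckBox("Desktop shortcut - adds an icon to your desktop"));
            panel.add(new JCheckBox("Search toolbar - redundant if your browser already has search"));

            JOptionPane.showConfirmDialog(null, panel,
                "Choose what to install", JOptionPane.OK_CANCEL_OPTION);
        }
    }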
So it's not only casual users who prefer the default.
I was mostly thinking about the program installs where 'advanced' is a list of different things you might want to install. The worst case there is pretty much just "you didn't install the thing you wanted", versus the "good luck buddy" worst case of tools and drivers.
I taught hundreds of people how to use CAD. Tough going. Technophile me thought "Of course it's better, easier, more powerful." Most of my students blamed themselves. But I knew they were smart, knew what they wanted to accomplish, and just couldn't figure out how to do their jobs with CAD.
You could say "square peg, round hole", or "like pushing rope". I preferred "CAD is an angry 800lb gorilla sitting between people and their work".
Being kinda thick, I finally figured out that the tools suck, not the people. With CAD specifically, worse than the terrible UI were the terrible mental models, completely divorced from how the users see their own work.
In conclusion, blaming the user is not very humanist. I rage at computers all the time, and I supposedly know what I'm doing. I have deep empathy for anyone not chin-deep in this crap.
Sometimes people really do have the necessary knowledge and skills, they just refuse to apply them to computers. Certainly, many computer people are too quick to blame the user when it's really a design problem, but blaming the user isn't always wrong.
Unfortunately, I think the shallow mental model has problems when presented with large, foreign systems of behavior. Much like the average person dropped into a foreign culture with a different language, someone with a shallow mental model of the world may find a computer, with its decades of layered systems and skeuomorphic metaphors, too daunting to attempt on their own. At least that's my pop-psychology theory of the day.
Rather, I think the divide is "I can" vs. "I can't", and the latter limited thinking leads to the sort of limited world view you describe.
It's something that fascinates me - so many lead lives that are essentially prescribed, view reality through a pre-fabricated toy lens - and when confronted with an individual who has chosen something other than the default options in life, they react vigorously and vociferously. I see the same in computer literacy and literacy in general - the "I can't" stemming from a refusal to accept new information which may conflict with preconceived notions.
If anything, I think it's fear of thought for the potential unhappiness that knowledge brings - adding something to the picture can make the overall scene so large and intimidating that many shy away rather than attempting the daunting task of filling the gaps.
I mean, it is daunting - once one has climbed a mountain of knowledge you can see the whole, the foothills, how it joins in coherence - but at the bottom, all you can see is an almighty sheer face. I have learned to be comfortable with the knowledge that I know nothing, despite knowing far more than my fellow man. Knowledge requires humility, as you swiftly learn that you couldn't be further from the centre of the universe or less significant.
I think there's a point early in life when this bifurcation happens - perhaps it's down to whether parents attempt to answer a child's questions about the world or simply reply with "stop asking stupid questions". Perhaps it's down to egoism - you either have to be humble or unspeakably arrogant to pursue knowledge, and the middle is intimidated.
That's my pop-psychology theory of the day :)
This is usually trained into users by non-existent or awful error handling and messages in the majority of software. How the industry addresses errors is haphazard at best, and I see lots of evidence of what I call "checkbox/boilerplate project management" in software products. The classic symptom is an error that throws a message along the lines of "Foo Error: there was a foo type of error"; it is an error message, so I guess it got marked as a successful part of somebody's sprint by its mere existence, and was never evaluated on its utility.
The industry should be hiring legions of editors with red markers to up our communications game when it comes to error messages.
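To illustrate the difference a red marker makes, a toy Java sketch (the messages and file name are invented; this is the principle, not any particular product's code):

    import java.io.IOException;

    // Toy example: the same failure reported two ways.
    public class ErrorMessages {
        // Checkbox style: technically "has an error message".
        static void saveChecklistStyle() throws IOException {
            throw new IOException("Foo Error: there was a foo type of error");
        }

        // Edited style: names the operation, the cause, and a next step.
        static void saveEditedStyle() throws IOException {
            throw new IOException(
                "Couldn't save \"report.odt\": the disk is full. "
                + "Free up some space or choose another location, then try again.");
        }
    }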
And aren't CAD tools power tools? They are designed to be easy to use for the experienced, not easy to pick up. There is a trade-off between power and ease of use.
In my experience, more often than not the 'not even trying' has nothing to do with wanting someone else to do it, but rather with being scared to mess up, precisely because they feel they are not knowledgeable. That's pretty instinctive, I think (e.g. the first time I used a chainsaw I was also rather hesitant, as I basically knew sh*t about it), but it's sometimes also driven by the abundance of horror stories about 'losing all mail/photos/....' (which, given the person's skills, is usually actually very hard for them to do).
I have that with software too, sometimes - again, due to lack of understanding. For most of my Java experience, Maven POM files were "that fucking pile of shitty XML I'm not touching with a ten foot pole, because it doesn't make sense". I didn't know why they look the way they do, what any change would do, or how to make anything work. I eventually bit the bullet, spent a day reading up on the docs, and now I'm not afraid anymore. But it took me years to even stop avoiding the topic altogether.
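For anyone in the same boat: the core of a POM turns out to be small. A minimal, hypothetical sketch (coordinates and dependency invented) is just the project's identity plus a dependency list:

    <!-- Minimal hypothetical pom.xml: who/what/which version this project
         is, plus one declared dependency. Everything else is defaults. -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>      <!-- publisher's namespace -->
      <artifactId>demo-app</artifactId>   <!-- the project's name -->
      <version>1.0-SNAPSHOT</version>     <!-- which build -->
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.12</version>
          <scope>test</scope>             <!-- only needed for tests -->
        </dependency>
      </dependencies>
    </project>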
> 'losing all mail/photos/....' (which, given the person's skills, is usually actually very hard for them to do).
The thing is, they don't know that. They have no intuition for the range of operations and capabilities of the software in front of them. You and I both know that the program they're using probably can't even find the user's mail or photos by itself, much less delete anything. But regular people don't know that, and I think the common factor here is not having an intuition about the limits of the space you're moving in.
XML obfuscation layer, semi-declarative runtime that can't be debugged (no breakpoints), weird rules, unpredictable failure modes, arcane config & installation, etc.
Maven is utterly broken and irredeemable. That you rose to the occasion and mastered it is a testament to your adaptability and pluck.
I'm actually working on turning it into an instinctual habit - instead of trying to power through the unknown, take a break and read a goddamn book about it. It makes you less productive for a few hours or days, but then quickly pays significant dividends. Sharpening the axe, and all that.
I brought up the story to highlight how easy, how natural it was for me to avoid dealing with Maven altogether for years. If, as a technical person, this is my default reaction to a confusing piece of technology, then how can I expect more from normal people? Conversely, their fears could probably be alleviated with as little as a few hours' worth of education, or at least a good explanation. The trick is making normal people willing to invest those few hours when their own curiosity isn't enough.
The benefit of youth is that one does not have prior memories of failure and punishment, which allows one to mess up. One of the first things I did after getting a computer was to accidentally blank the HDD. But after a friend helped me reconstruct the environment, I learned how to do the same and more. And these days I am the ad-hoc support line for family and friends.
I wonder if the whole GUI is not really helping here. Sure, it allows things like browsers to exist etc. But at the same time all the locations of files and such are mental rather than physical.
Back in the day, if I wanted to run a spreadsheet program I had to locate the floppy it was stored on. Now it is stored in one of the many sub-folders that blanket my 1TB+ HDD.
A whole lot of problems with home computers seem to stem from them doing a number of things in the background without the user being aware.
I can't help wondering what would come to be if one were to go back to the C64 or similar, where if we wanted the computer to do anything we had to start it ourselves.
BTW, there is an old OSNews article by someone teaching basic computer usage to older people. And he claims that he had a fair bit more success with the CLI than the GUI. This is because not only was it clear when the computer was doing something (one could not input more commands), but it also gave a history of previous actions, and the ability to set aside tasks for later (with a warning about that on exit).
I can't help wondering if the whole desktop metaphor and WIMP, while making things more "friendly" at first glance (vs. that blinking prompt), have in the end just added a whole lot of conceptual thicket to cut through when trying to translate goals into actions.
All in all, I wonder what we would get if we had a CLI with some GUI niceties, like being able to add file names to a command by clicking them in the output of a recent ls/dir command.
Maybe old GUIs were better. You had N windows on screen, meaning N tasks being done. Now you also have to track minimized windows. And tray icons. And programs that run in the background without showing any window or icon at all. And system services, which are different kind of background programs. And I'm probably forgetting something.
Right now, in Windows, I believe the Task Manager is a fundamental tool to understand the runtime state of your system, and it should be advertised as a "normal-user tool", not "expert tool". And I'm talking here about detailed process view, and not the dumbed-down view from Windows 8/10. Users can't really hurt themselves with Task Manager anyway (you don't get to kill a system process that easily), and it's fundamental to getting a feel for what your computer does.
Tangentially, the "dumbed-down view" of Task Manager is an example of what I believe is the biggest sin of modern UIs - lying to people. There's hardly a better way to confuse the shit out of users' minds than presenting a limited and inconsistent view of the internal state.
> the ability to set aside tasks for later (with a warning about that on exit).
I wonder how it is this feature was made - both in GUI and CLI - in such a way that almost nobody (except sysadmins) uses it? I mean, I personally discovered that Windows can do this only very late in my life, and I don't trust UNIX cron. Most people I know - including programmers - don't know about either. Yet delaying tasks seems like a useful feature. How come it's not exposed in OSes (and it seems not to exist on mobile at all, from the user's POV)?
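For reference, the closest thing the UNIX CLI exposes is cron, and its syntax is a decent hint at why it stayed a sysadmin tool. A sketch of a crontab entry (the script path is invented):

    # minute hour day-of-month month day-of-week  command
    30 2 * * 1 /home/me/backup.sh   # i.e. 02:30 every Monday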
 - http://www.howtogeek.com/wp-content/uploads/2012/03/650x593x...
 - http://www.howtogeek.com/wp-content/uploads/2012/03/411x406x...
All in all, this is perhaps a closer approximation of a physical workflow than the desktop metaphor.
Ah, found the article finally.
I guess I should have tried harder before hitting reply on my initial comment.
IIRC, the taskbar and tray are meant to communicate that at all times. But trays were easily overrun as people installed more and more background software on a single PC. Modern taskbars too have become so crowded there are groupings and virtual taskbars for the virtual desktops.
One cause is software unnecessarily running at all times. Another is people do more with their PCs than in times past.
This reminds me of all the anecdotes I've heard about people who immediately click through all alert boxes, to the point of being nigh-incapable of passing on an error message unless someone is standing over their shoulder forcing them to read it.
- What do you mean it's not working, does it move on the screen or not?
- Well, it moves but just a little.
- ??? Tell me more?!?
- When I do big movements it moves a little, but when I do small movements it doesn't move at all. I can't get it to cross the screen, even with momentum.
It took me a while to understand she was moving the mouse in the air; in those times mice had a ball that would indeed register movement if you shook the object hard enough.
This was fun and all, but in the end my mother learned to use computers and is now very good at it.
There is a LOT of noise to that signal. Contrast that with more traditional (teletype-inspired) interfaces, which only reported actual warnings (if asked for) and failures, unless you were extremely specific.
That's a non-event for an experienced user, who (if ad block didn't disable the page) probably resorts to the close-tab hotkey by sheer habit. But it's an appreciable threat to someone who's trying to rely on a predictable screen UI for their browsing.
The whole thing was clickable and it fooled me the first time (I really should have picked up on the pointer being a hand rather than an arrow), never mind my less knowledgeable relatives.
One useful trick is right-clicking first, which shouldn't trigger the link, and the resulting menu will be very different on an outbound link than on a blank webpage or desktop.
Of course, mobile is where I get the most trouble from this - I don't have any hotkeys, and real page-loading is so bouncy and unpredictable that it's particularly easy to get caught by a fake top bar. It's made even worse by things that scroll away the top bar as you move down page, and then bring back a fake one.
It's both an important work skill and an unavoidable physiological response to tune out obnoxious and unimportant stimuli, to the point where the people abusing those signals ought to be held accountable.
Pilots are repeatedly taught to never "silence" the annoying warning horns that come on for different reasons (slow speed, wrong landing gear configuration). The idea being that mindlessly dismissing alarms will build a bad habit pattern. Especially when you're actually in trouble.
It's the difference between a Growth Mindset and a Fixed Mindset: http://www.aaronsw.com/weblog/dweck
"For twenty years, my research has shown that the view you adopt for yourself profoundly affects the way you lead your life. It can determine whether you become the person you want to be and whether you accomplish the things you value" (https://www.brainpickings.org/2014/01/29/carol-dweck-mindset...).
Link to Fullsize Image: http://www.gridgit.com/postpic/2013/10/fixed-mindset-vs-grow...
Now does everyone just tend toward fixed mindsets as they get old? Or is that just a popular misconception or some sort of selection bias?
There are several more pieces in that series, on that site, if you care to look.
The reality and strength of the Growth Mindset theory is fairly unclear, despite a lot of books pushing it as gospel. It's also unclear whether it's accurate even if it is effective - there's an open possibility that people who are basically delusional about their own potential practice harder than the realists, and a handful of them who had high starting potential get great results.
So... age-related degeneration is real, neuroplasticity drops over time, but I wouldn't want to connect that to growth mindset because I'm not convinced that's even a real effect.
You're right about self-deprecation. Society elevates computer usage as a value even if that's mostly absurd in reality (because I knew Excel, people would watch me do basic tasks and say they couldn't do it). It affects many people, and it's very, very hard to reverse. It's indeed very similar to math anxiety. It reminds me of childhood friends who dropped out of school in junior high and couldn't do the "simplest ratio", yet were doing ratios on the fly with custom units to sell weed - I couldn't follow them. It's all ceremony, sadly.
PS: one last thing - I wonder how computer-illiterate people would feel if given a system like NixOS or GuixSD with an almost fully rollbackable system, meaning they could mess with it without negative emotions and with almost no superstition.
With a full-screen CLI, all you see is what you are trying to do at this moment.
With a WIMP interface you have the taskbar and whatnot that may at any time try to grab your attention away.
Never mind all those background processes a modern OS runs, for various reasons.
Sure, exploratory programming, which I used to love, has value, but I think a lot of development in past days came from the technological wonder of "can we do it this way? Cool."
I suspect some of that might be due to Learned Helplessness. That is something that has to be worked on.
Digital signals are in theory a boon, but digital signals wrapped in DRM are a hot mess best avoided.
Sounds like a great bargain for them.
Can't use computers ~ under 4th grade reading level, including illiterate
Below Level 1 ~ 4th-6th grade reading level
Level 1 ~ 6th-10th grade reading level
Level 2 ~ 10th grade to college reading level
Level 3 ~ college+ reading level
"I failed my standardized tests math section because the user interface of a #2 graphite pencil is poor." No, its because you're not good at math, the problem isn't the pencil. Half the population has to be below the median, after all, and we design civilization to make sure evolution doesn't cull anyone, so ...
I am kinda old and I can assure you that plenty of people were completely incompetent at using a physical card catalog (kind of like a printed out google search form, where an old fashioned library used to kinda be a printed out internet, back before libraries became internet cafes without coffee and also turned into daycares). Inability to search for things intelligently is not a recent invention of computation, any more than the inability to write essays is a result of word processing software or century old typewriter or the ballpoint pen.
It's a truism throughout history that as soon as the first tool was invented, some dude without enough mental horsepower to use it immediately started attacking the tool as being poorly designed. Fire, wheels, carpentry tools, farm machinery, computers - same old story. Some folks can't think, and giving them a tool and then asking them to perform a thoughtful task is doomed to fail and to result in the tool catching most of the blame. Why, if only wheels were better designed to be round in all three dimensions instead of just two, then that pesky problem of axle placement wouldn't be so stressful for end users, etc. etc.
The fact of the matter is the median human is barely at a level of boolean algebra basics and arithmetic and following instructions on a packaged cake mix box. Giving someone innumerate a calculator doesn't mean they can now launch the space shuttle, any more than giving someone an anvil means they can shoe horses or handing me a basketball means I automatically have a career in the NBA.
This is hilarious: talking about a hypothetical somebody sucking at math while using a popular incorrect opinion about the median in almost the same sentence xD

There is a quite simple distribution with a population of just five that has neither half of the population below the median nor half above it. Imagine the degenerate case of a population where every member has the same score: surely there is a median, but just as surely no member is smaller than the median. A slightly less trivial case is a population of five with one value below, one value above, and three values in the middle (e.g. {1, 2, 2, 2, 3}: the median is 2, yet only one value in five is below it).
Correction: median doesn't have to be a value of a specimen.
If we used your suggested 2nd definition of "real students", the label would be circular and meaningless (e.g. prerequisites reworded in a bizarre way, such as "Remedial Reading courses support students who do not demonstrate the non-idealized reading skill proficiency that real students have.").
This is a form of norm referencing, and ultimately it becomes a (sometimes rapidly) moving target that makes comparisons difficult.
The standard used in the field is criterion referencing (similar to "prescriptive" used by 'jasode). This allows for easier apples-to-apples comparisons.
You are correct that the layman understanding is closer to your definition (at least in my experience). But the layman understanding you describe is not the standard in the field.
From within our intellectual-elite bubble it's easy to go "surely it ain't so", but there it is. Hell, my ex-business partner, a highly educated and intelligent man, doesn't own a single book and hasn't read one since "Animal Farm" at school.
The sad reality is that the vast majority of people have absolutely no interest in reading. It's not a question of access, rather perceived effort.
What reading level would you expect for someone who got a degree in their early twenties and then didn't read anything more complicated than People magazine for decades after?
Level 3 means that someone can consistently read and comprehend the full meaning of college-level texts. In the US, one does not need to read at this level to graduate from college (or even many grad schools).
There are many people in the world who are smart and/or very knowledgeable who don't consistently read and comprehend at level 3 across the range of topics covered in a typical college curriculum.
Source: Me. I have trained people in both reading literacy and tech literacy. My experience is both theoretical and practical. Feel free to ask follow-up questions if you have any.
Our current President-Elect (US) reportedly falls into this category (or pretty darn close to it). That's a scary thought.
It was painful to watch them stumble about, trying to debug installation errors that seem obvious to us. Just like watching a lay person try to use a computer, watching these other people made me want to blurt out the answer.
Any of the lessons that you take away from designing simpler UI for lay people applies just as much to professionals. Write a better API, a better library, and better documentation.
This. A whole lot of this.
Case in point: while Java Swing was popular, noobs (including me) were looking at Graphics2D and Image when we first wanted to display a picture.
The correct solution was JLabel.
Since then I have learned a lot, but man pages and docs without examples are still annoying. (BTW: I used rsync manually for the first time in a while yesterday and the man page was good - the src and dst syntax was described. : )
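For the record, the JLabel approach is tiny once you know it exists - a minimal sketch (the file name is invented):

    import javax.swing.*;

    // Minimal sketch: display an image with a JLabel, no Graphics2D needed.
    public class ShowImage {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Image");
                frame.add(new JLabel(new ImageIcon("picture.png")));  // placeholder path
                frame.pack();
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            });
        }
    }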
Well, except WPF, where you can re-template anything you like, but even there, the default Label is just a text container.
In Tcl/Tk you would also use a label to display your image with. Tk was one of the most popular toolkits at the time Swing was designed.
I gave up on learning PHP and Laravel recently when I went to install Laravel and they recommend Homestead. So I went to install Homestead, but they require Vagrant. Of course Vagrant requires VirtualBox, and VB can't find my kernel source code on RHEL 6.5. Okay, so let's skip Vagrant... Okay, now I need to install Composer instead of Homestead... what's this? SSL operation failed... failed to enable crypto... operation failed in command line code...
Yeah I give up. Yeah, I could take the time to work through this nonsense, but I shouldn't have to. Getting multiple layers deep of having to install dependency A to satisfy dependency B which satisfies dependency C is not my idea of fun.
I see that all the time. Programmers aren't necessarily expert computer users.
More than half the people on my team don't use, say, WebStorm's regular expression search & replace to quickly refactor, or Find Usages, or anything in the Refactor menu, or even the npm scripts I made to quickly update the project -- any of the less obvious features that would make their own lives easier.
And they also like to work on one monitor, maybe two.
Just watching them try to debug something with their cargo cult methodology is frustrating.
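To give a flavor of the kind of regex search & replace I mean, here's a made-up example; the same capture-group find/replace works in the IDE's dialog:

    // Made-up example: migrate println debugging to a logger in one pass.
    public class RegexRefactor {
        public static void main(String[] args) {
            String line = "System.out.println(\"user=\" + user);";
            String result = line.replaceAll(
                "System\\.out\\.println\\((.*)\\);",  // find: System.out.println(<args>);
                "log.info($1);");                     // replace with: log.info(<args>);
            System.out.println(result);  // prints: log.info("user=" + user);
        }
    }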
And the problems encountered are probably not all too exotic, so you'll be much better at it after doing it a few times. However, this is not a task you do often (at most once or twice per project). It's no surprise to me that years of practice work wonders on reducing the setup time.
If you add researching suitable tools, it probably becomes even more dramatic (finding, installing, testing, uninstalling; repeat until you think you have the best, or at least a suitable, solution).
One is basically trying to distill many hundreds of hours of accumulated experience into a few buttons and prompts.
Be it in the classical sense or in an API/docs sense.
I can do it, but it doesn't work for my own code, haha
I've lost track of how many times I've seen a product or company posted here and HNers will say "I can already do that myself." For instance, I still chuckle at the "you can already build such a system yourself quite trivially" comment in the Dropbox HN post nearly a decade ago.
I keep seeing _sec people make statements that only make sense if one is the administrator/dictator of a company network or server farm, and that are basically impossible to implement on a personal computer without effectively treating the user as an attacker.
We could possibly be making billions selling illiterates a USB-pluggable physical button that opens a browser window to Gmail.
(bonus upvote for the first person to recognize the movie reference)
Are there computer skills more general than the ability to use iMessage or Windows Live Mail? Because if using iMessage means you now know how to chat, then what happens when NewChat(tm) comes out with a new brand experience?
What happens when Microsoft goes Metro or Windows 10?
As a society, should we be paying for user training for specific familiarity with proprietary interfaces? Shouldn't Microsoft or Apple be paying for this stuff? This is why I'm generally very suspicious of what people mean by "computer skills" in schools. They generally mean user training for Apple, Microsoft, or Google.
Programming fits both of those categories: consider the number of programming languages in vogue at the moment. These languages, however, are composed of the same components that comprise the machine.
Programming languages are software, no differently than iMessage or Windows Live Mail. It's just another way to control the machine.
NewChat(tm)'s best bet is to have the new interface mimic and borrow the metaphors of the older interface.
In a sense, what makes an interface "good" is that it uses what is already known.
As for the original question regarding the distribution of users' "computer skills": if you can figure out how designers think you should use their systems, then it's much less about having "computer skills" than about having the capacity to figure out the psychology of the application designer. If you can type and use a mouse, and can either Google or read a book describing the paradigms of an application or ecosystem and figure out what they mean, you've got all the computer skills you need.
As programmers, I think we tend to forget what exactly it is we do... we don't just program, we figure shit out at the most technical levels. Other people are quite adequate at this too - we have lawyers, doctors, surgeons, mechanics and any number of other professions that solve way more complex problems than we do - we're not islands of all the world's intelligence. If people appear to lack "computer skills" it's because the paradigms we use to define our user interfaces and solve the problems we are attempting to solve are shit - and I say this as a pretty well seasoned computer programmer who spends most of my life trying to figure out how to get other people's shit user interfaces to get my own shit done.
So... that's really what I've got to say about that. People will figure out what they have time and inclination to figure out. If they don't have the time or inclination to figure it out, then it's your problem: Your software isn't solving big enough pain points for the learning curve involved. This means that either this user is not your target audience, or you don't understand your target audience well enough.
Either way, it's not on the end user, it's on you as the software designer.
wouldn't that be knowing how to program?
Regardless of their future career, people should probably be taught how to use a word processor (use styles, hierarchically outline your document, basic sensible typography) and spreadsheet software at school, along with basic internet skills (particularly with regards to finding reliable information and understanding the pros and cons of social media).
None of these skills require any programming knowledge.
> wouldn't that be knowing how to program?
Nearly all programming languages or instruction sets for assembly languages are also coincidental interfaces (this time for programming), often (though not always) at least originally designed by some "megacorp":
- Assembly languages: typically the respective processor vendor
- C: Bell Labs (AT&T)
- C#: Microsoft
- Go: Google
- Java: Sun (now Oracle)
- Objective-C: NeXT (now Apple)
- R: A reimplementation of S (originally by Bell Labs; later commercially offered by TIBCO Software (another megacorp) under the name "S-PLUS").
- Swift: Apple
So what I would consider as more general computer skills is rather the mathematical or the electrical engineering side of computing.
Nearly all the mainstream, massively successful ones, yes. But the vast bulk of languages are not, and there is little evidence that any of them offers some massive advantage to "normal users". There are better ones than those for some purposes, yes, but after many thousands of languages there still isn't one that is clearly better for non-programmers - and this after at least several dozen, probably several hundred, attempts explicitly aimed at that purpose.
Asking for people to understand the math is an even larger ask than asking them to understand the programming, which I base on the fact you can still find a very significant percentage of professional programmers, possibly even the majority, who have disdain for the mathematical elements of computing.
It is also not an unfriendly reading when my very first words (which I have not edited) acknowledge the alternative readings.
Yes, there are programming languages aimed at non-programmers. Many of these are also "commercial" languages, or at least were commercial in origin (like C). Smalltalk (Xerox), Hypertalk (Apple), the efforts towards 5GLs in the 1980s, Visual Basic (Microsoft), etc. We agree that these are relevant, I'm just not sure what point you're trying to make with that relevance, and I'd love it if you could elaborate.
I'll make an attempt to explain why jerf's response looks out of place.
Basically, wolfgke and jerf looked at the 2 groupings of programming languages (mainstream megacorps vs unknown) as evidence for 2 different goals.
wolfgke: generalization path -- dominance of megacorps languages means you must keep going one level lower in hierarchy of computer science concepts to learn universal concepts instead of proprietary syntax.
jerf: pedagogy ease of use -- megacorps languages are not provably any harder to learn than specialized toy languages
How did those two end up talking about 2 different things?!?
If you look at the comment chain from posters threatofrain-->bryanrasmussen-->wolfgke ... they started a dialogue that keeps diving lower and lower in underlying principles. It's a variation of the XKCD comic about "purity".
jerf's response doesn't continue that purity dissection. Instead, his emphasis on pedagogy seems to point back to the original article by Jakob Nielsen which prompted this thread. That article says that elite users (like HN readers) can use complicated computer software and we forget that most others can't. The communication breakdown was assuming that wolfgke listed the megacorps language as a (ease-of-use) response to J Nielsen instead of a specific (purity) reply to bryanrasmussen.
But then again, I might have misunderstood everybody and I have no idea what people were trying to say.
Correct, with the additional (IMHO important) fact that threatofrain criticized "coincidental interfaces made by a few mega corporations" and looked for more general skills, and bryanrasmussen suggested that "knowing how to program" is such a skill. But I argued that the most popular programming languages are also just "coincidental interfaces made by a few mega corporations" (this time for programming instead of general use), so we have won nothing concerning the original problem - we are just some layers deeper. So I suggested that if you are looking for more general skills, you probably have to look even deeper, into the mathematical or electrical engineering side of computing.
A lot of the standard tasks we do with office/business software are unintuitive, but easily learned once someone shows you or you Google it. Even as a designer or developer, it's not always clear what each button does.
>participants were asked to perform 14 computer-based tasks. Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer. This allowed the researchers to make sure that all participants were confronted with the same level of difficulty across the years and enabled controlled translations of the user interfaces into each country’s local language.
Same with the language used in software: sometimes translating UI buttons actually makes usability worse, because now you have to learn the shared language all over again. The descriptions on UI elements are often useless unless you already know what the buttons do.
Anyway, all this misses the point. If you have to google how to use a GUI the software has failed at a fundamental level.
I have to admit I found it somewhat difficult; it's not surprising that most people performed poorly on it.
Unexpected impasse. Multiple steps. Requires navigation across multiple pages and applications.
Just taking the test is a Level 3 task.
One of the hardest computer tests I've ever taken.
Only 2 out of 6 questions had anything to do with UI/UX - the rest were reading comprehension and/or basic math.
Of the 2 that dealt with UI/UX, both were extremely hard; in one of them I had to send an email, get a token, and navigate the site to send another email.
If this is what the results are based on, I wouldn't base any UI/UX decisions for web products on the results of this test.
For example... at the start of the test, there's a "Please press 'ENTER' to continue"...
A) My keyboard has a return key, but no enter key.
B) It's a button. Has the test started yet, because I'm not sure if you're talking about the button, or the non-existent key on my keyboard.
C) You're making me think, and the test may or may not have started yet.
Maybe my reading comprehension skills are lower than I'd like to admit. Or maybe your UI makes smart people stupid.
Am I missing something?
/am probably more computer illiterate than I realized
It's very concerning to me that they didn't include screenshots of this simulation site in their report. The pages and pages of data are meaningless without knowing what they were actually measuring.
Incidentally, while I got every question right, I could easily imagine myself getting one or two of them wrong. Not out of stupidity, just bad luck. The question asking about the impact of educational attainment had at least 2 sentences that plausibly answered it, yet the correct answer was to highlight just 1 of those sentences.
I did the demo test tasks. I think their test UI is pretty bad. But I feel like anybody who's used email could probably figure most of it out.
The unmentioned problem here is that people are actively reluctant (this is tested, too) to learn the skills that would let them be more independent. I doubt this is entirely related to the total amount of hours spent at a computer.
In some ways, computer literacy seems to be a similar scenario to cars: the homo simplex doesn't want to understand how it works, he/she just wants it to work, enjoying the luxury of not needing to understand it while posting duck faces on Instagram.
Let's face it, computer illiterates understand less, pay more and end up having lower purchasing power. Considering that this is entirely under their control, I find this quite fair (I have different opinions about Seniors, but that's another discussion).
To be honest, I can find only one valid reason for this to be an issue: civil liberties and human rights.
I'm intimately and increasingly concerned by how many civil liberties have been taken from citizens in "developed" countries in recent years, and I am under the impression that this is a direct effect of computer illiteracy spread across all levels of the population.
Citizens are being asked to vote on laws relying on technological concepts they don't understand, and which end up stripping them of their rights.
That's what computer illiteracy is about and that's what (I think) the article should talk about... :(
For example, word processors: I really hate them with all my heart, but when getting a degree I had to write a lot. So I sat down and learned the core functions of LibreOffice Writer, and it made my life better.
Even my profs were amazed at how good my papers looked. Not because they were really good, but because none of my fellow computer science students had bothered to learn the software and make theirs look good.
Writer had a nice DSL for math formulas, so I had everything I needed.
That's because the answers change with each new release :)
There's no shared thread among any of these things, except that they all contain a silicon wafer somewhere. They also all contain plastic, but I don't see anyone lamenting the lack of knowledge about polymer chemistry among the populace.
The calendar problem isn't necessarily a "computer skills" problem so much as a simple lack of familiarity (solved by time) or a UX problem (making it literally not the user's problem). You want to know a secret? I don't know how to use GMail. Seriously. I don't. I had to get someone to come over and show me how to simply reply to an email, because every time I clicked what looked like a reply arrow, the damn thing went back to the list. I have no idea how to sort the inbox, or even if you can. Perhaps the most popular email client in the world, and I don't know how to use it. Am I illiterate? Sure looks like it. Yet I can send an email using telnet.
Personally, given that people find word problems complicated, can't calculate restaurant tips in their head, and are prone to fall into personal bias supporting arguments, I think the failure to schedule a meeting is more of a general problem, than a computer one.
You're elevating "computer literacy" as some sort of giant lift in the economic chain. Really? Sure software engineering pays well, but it's also completely unrelated to sending out a calendar invite. If that was the metric, then why aren't secretaries paid $100k? Similarly, you can swap out software engineering, with any other high earning field, say hedge fund managing. Arguably, that's a better field, and you don't have to know how a floating point numbers are represented, nor do you have to send your own calendar invites.
Finally, yes, mass surveillance is a real issue, and yes, more education about what is being done, what is possible, and at what scale these things are happening might help. It might not as well. These are much more political, legal, and philosophical questions than computing questions. After all, I don't need to understand how carbon and iron bond to understand the ethics of stabbing someone in the throat with a knife.
Before anyone chimes in: I don't really care how to use GMail. It sucks.
YouTube's ContentID UI deserves a special mention, it's not just awful, it's actively sabotaging people questioning ContentID strikes (90% of which are fraud.)
Or try the new Google Hangouts desktop "app". (Not really an app any longer -- now only the Chrome add-in is available.) It's hilariously awful. It's almost as if they're trying to sabotage their own product. Don't take my word for it, read literally any of the reviews on their own app site:
Sorry. I could rant about how terrible Google's UI/UX is for days.
The lecturer flashes up a picture and says "who is this?". There is a sea of blank faces in reply. The picture was of Roy Chubby Brown, who by some measures was the most popular comedian in Britain at the time. Out of nearly 100 people, no one had any idea.
You are not average. Not even close.
There's one day of the year which is the "most popular" birthday for a population, but that doesn't mean a huge percentage were born in it.
Side note: if anyone is looking for an experienced technical sales person in Boston, let me know! They have experience with startups, IPOs, large corporations, and channel sales, and they have the sales gift in a non-sleazy way.
There are kids now who don't know what a VCR was, but you can assure them that their grandpa was the only person in the household who could figure out how to program the timer.
installing services and drivers
writing services and drivers
installing operating systems
writing code for operating systems
writing kernel code
designing chip technologies
chemistry and physics
So, believing in the analogy between print literacy and computer literacy, I want to ask that question here too. What helps or doesn't help, and what can change? How early would different interventions have to start to be helpful?
It's like teaching literacy by giving them only picture books. They'd be able to handle books, turn the pages, etc., just not actually use them properly.
With games like lightbot, apps like turtles and Scratch, and sites like code.org I can't really see the excuse for this.
There were so many things about those early systems which were not universal skills at all, but simply hardware limitations (or holdovers from previous hardware limitations - or just terrible design).
And yet, after watching my kid learning to use a computer, I do feel a deep nostalgia for the single-purpose simplicity of the command line I grew up with.
I think those are two sides of the same coin - if we understand the user better we can make software that's easier for them to use, which in turn makes the user more confident and better at using the software. The goal shouldn't be to take away the powerful tools and replace them with simplified tools, but to make the learning curve shallower so users can learn to use the software and eventually to the point where they can use complex features if they need to.
Snarky, I'm sorry, but you're 100% right. I grew up in a world where there was tons of optimism in software development. Programs like HyperCard, FileMaker, Microsoft Access, Visual Basic/WinForms, RealBasic, etc. promised to make (the most common type of) application development accessible to everybody with minimal training.
HyperCard's most successful developers didn't come from software development backgrounds-- they were artists, journalists, historians, etc.
(See this excellent article: http://www.filfre.net/2016/10/the-manhole/ )
All that's abandoned now. The optimism is gone. Developers who gave a crap seem to have disappeared a decade ago. The open source development methodology, which never gave a crap about ease-of-use even when Apple and Microsoft used it as the foundation of their products, has taken over all software development.
And I sit at my cubicle, thinking about the now-dead world that was emerging back in the mid-90s, thinking: I could have been a plumber instead.
I didn't grow up in that age. Thanks for the link!
What? This is really surprising. I expected some people to be really poor at using computers, but not being able to use them at all? Wow. TIL
The New York Times reported on Nov. 6 that Trump “does not use a computer,” which explains why he was perplexed at his campaign spending millions of dollars on digital ads. - http://qz.com/829647/donald-trump-and-hillary-clinton-dont-k...
Desktop operating systems are complex and hard and if you do the wrong thing, you can break everything. Mobile OSes don't give you those options.
This is true - however these devices have also completely taken away our computational freedom as well. In fact, I'd argue that computer literacy is more important than ever. The only thing that has changed is that now society suffers as a whole from computer illiteracy, instead of just the computer illiterate. They can still post cat photos on social media, but a growing segment of the population has no idea how any of this works, and is at the mercy of Microsoft, Google, Apple, Facebook, Zynga, etc.
As someone who grew up with computers and started making Doom maps when I was 8 and programming C++ on Linux when I was 14, I see this trend as an oncoming storm. It's as if we were replacing libraries with cable television. Computers (phones and tablets specifically) are transforming from a platform that enables us to do anything we want into a sort of Disneyland marketplace where nothing exists unless there is money to be made, regardless of ethics.
I think it's more important now than ever for people to understand how to identify and select legitimate software that doesn't have ulterior motives, and to put together a system which serves no one but its owner. I suppose this is exactly what TFA says is impossible, but damn is it important.
Related to the article: I created and maintain a database application for a non-profit arts organization. It involves membership, artwork inventory, and exhibitions. It's a fairly complex task; almost all the work goes into the web-based UI, while the PostgreSQL backend/server is pretty straightforward.
The challenging part is getting the artists and volunteers to actually use the DB program. The UI is as simple and unambiguous as I can make it, but there's only so much you can do to make a record with two dozen fields to fill in "simple".
One method is using dropdown options where selecting from a list fits. I also avoid hidden "tricks" the user would have to know, give on-screen examples of proper field formats (like date or time entries), and provide concise, specific, instructive error messages.
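A simplified sketch of the error-message idea for date fields (not the app's actual code; field and format are invented):

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.time.format.DateTimeParseException;

    // Sketch: validate a date field, returning a concise, specific,
    // instructive message instead of a generic "invalid input".
    public class DateField {
        private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

        // Returns null when valid, otherwise a message the user can act on.
        static String validate(String input) {
            try {
                LocalDate.parse(input, FMT);
                return null;
            } catch (DateTimeParseException e) {
                return "\"" + input + "\" isn't a date the system understands. "
                     + "Please use YYYY-MM-DD, e.g. 2016-11-14.";
            }
        }
    }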
Even with all that effort, convincing users to try it out has proven the biggest hurdle to success. I've come to realize a key to the tool's utility is constant encouragement. Live demos are often a useful way to help people "get over the hump" of fear and resistance.
Ironically, the "power users" often show the greatest resistance to trying out the web app. While it's made to work as directly as possible, power users fear they will look "dumb" if they don't instantly grasp the operation of every feature.
What I've learned (again) is that it requires empathy for users' fears and sense of intimidation, and assurance that the app is designed with "foolproof" safeguards preventing accidental disasters. Above all it takes enormous patience: training users, listening to user feedback, answering questions, and never, ever criticizing when people make mistakes.
And this is only a small part of nuances that a typical user faces. I don't care how beautiful or fancy your UX is. Familiarity is king in design, and if you stray too much away from the current experience, I won't hesitate to say that my toilet has better UX.
Edited to add relevant xkcd: https://xkcd.com/163/
(unless the goal is learning all the ins and outs of Lisp or web browsers)
> Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.
It reminds me of times when I was writing articles for journals and had to find filler to reach the required arbitrary page count.
At a sample size of 215,942, the margin of error is a fraction of a percent.
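Back-of-the-envelope, using the standard 95%-confidence approximation for a proportion:

    MOE ≈ 1 / sqrt(n) = 1 / sqrt(215942) ≈ 0.0022, i.e. about 0.2%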
But what this doesn't account for is that in the original report (http://www.oecd-ilibrary.org/education/skills-matter_9789264...) you do see significant variance in the distribution of skills across countries:
In Japan, only 4.3% of adults are at "Level 1", whereas in Indonesia it is 37.2% - which is to be expected when Indonesia is at the bottom of the ladder in GDP per capita (http://www.keepeek.com/Digital-Asset-Management/oecd/educati...) and in literacy scores (http://www.keepeek.com/Digital-Asset-Management/oecd/educati...).
While the report says it takes the 'average across the OECD countries', it's not clear which factors were weighted or adjusted.
The information isn't wrong - it's just worth taking with a grain of salt depending on how you are using it. That said, the US market, for example, lines up very close to the average, which is still staggering.
Indeed. It's one of the fundamental fallacies of design: The fact something is clear and easy for you does not mean that it's objectively clear and easy. You are not your users. http://uxmyths.com/post/715988395/myth-you-are-like-your-use...
I grew up using the command line, I kept stubbornly using it for years after GUIs were ubiquitous, and I still use it regularly at work and at home. None of that means it is the best tool available for this specific learning task.
> It's literally a diagram of the file system.
So is Windows Explorer or any other GUI file browser, in every way. If you put it in details view, it even displays the same data in the same way as a command line directory, except you can navigate it intuitively with a mouse.
When someone is learning what a file is, Explorer shows both too much and too little information.
Where do you place your mobile while typing with both hands, though? Maybe in a case so it can sit on your lower arm. Or a Velcro case so you can stick the mobile to textile surfaces.
Who does computing outside of a data center? I'm typing this on a laptop right now, but I wouldn't say I do any "computing" on it. Even at work, I would say that the actual amount of "computing" I'm responsible for (i.e. a program I specifically wrote to calculate some number) is very low compared to simple text editing, web surfing, and music playing.
Throwing around this archaic and ill defined term, and then dismissing the phone as "a media consumption device" strikes me as elitist and out of touch. Is photography consumption? Clearly not, since it's production. Is media editing consumption? No. Is it computing? Well given that nonlinear digital video editors were big expensive machines several years ago, I'd say so, unless of course we're moving the bar. Is writing computing? 30 years ago it was. Do people actually write on their phones? Yes. Oh communication doesn't count? How about term papers? Do they do that? Yes, they do.
I think you need to reexamine your biases.
As for what phones are capable of, I'm aware. But they are the wrong tool for the job. For the most part they are walled gardens where the only things you can do are what some app developer thought you might like to do. The industry is stuck in a regressive state where we are trying to limit users, not empower them.
They're the wrong tool for the job of creating and managing files in a filesystem. They're not the wrong tool for writing a Google Doc, or updating a Facebook status, or taking a photo and uploading it to Instagram. Arguing that we need to teach users paradigms like a command line is out of touch with what computers are to most people today.
We should be making tools that work with the way people use computers, not making users work at learning the tools we build.
That isn't empowering users, it's enslaving them.
You must mean an Apple watch you can speak to, and a Microsoft Hololens...
Computer is not an appliance, it's a general-purpose problem-solving tool. It's closer to reading and writing than to dishwasher or fax machine. You can argue that applications are like appliances, and that makes some sense, but refusing to learn about the environment those applications live in makes you that much less able to do things that span multiple applications.
Stripping down the engine would be more akin to coding the programs themselves. This is more akin to saying that before someone drives they should know about the accelerator, brake and steering wheel, the basic tools that they use to interact with the car.
No, this is something in between, but an analogous system does not exist in cars. If cars had trim, like airplanes, that might be analogous.
'A file is where data is stored' and 'a program is a collection of instructions for the computer to do something' are the basic levels of understanding one would expect.
I'd say it sounds a lot more like learning the basic theory of hydraulic brakes (because you shouldn't brake too much but rather switch to a lower gear when you drive down from mountains.)