The Cognitive Style of Unix
239 points by gandalfgeek on Feb 17, 2011 | 68 comments



The paper (unfortunately behind the ACM paywall) makes for very interesting reading.

There are a couple of points in the paper that aren't mentioned in the article.

The first one is the difference between low and high NFC (need for cognition) individuals. The paper defines NFC as follows: a person with a high NFC loves to seek, reflect on and reason about information, whereas someone at the other end of the continuum only thinks as hard as (s)he has to and is inclined to rely on others. Their results show that low NFC folks actually took longer to complete the internalized version of the task, while the high NFC folks took longer to do the externalized version. This reaffirms Haldar's point, but with a caveat: his conclusions are applicable only to "power users".

The other interesting thing was that both low and high NFC individuals got started on the task much faster (the paper calls it time to first move) with the externalized version. Presumably, all the individuals were told they _had_ to complete the task, while in the "real world" many might have just given up. If you're designing an application, this is a useful lesson: getting started should be easy (i.e., an externalized interface). I guess this is also traditional wisdom, but it's nice to see it confirmed by peer-reviewed research.


Alternate hosts of the paper can be found using Google Scholar; here's one: http://dspace.learningnetworks.org/bitstream/1820/629/1/ICLS... [pdf]

From José Ortega y Gasset's Revolt of the Masses [1930]:

    “Doubtless the most radical division of humanity that  
     can be made is that between two classes of creatures:  
     those who demand much of themselves and assume a burden 
     of tasks and difficulties, and those who require 
     nothing special of themselves, but rather for whom to 
     live is to be in every instant only what they already 
     are.”
What Ortega y Gasset argues, and what is still true today, is that the true threat to our livelihood comes not from those working to change our society but from those who deny its imperfections, who see acts of self-improvement as admissions of weakness, and who define the correct view to be equivalent to the view of the majority.


> the true threat to our livelihood comes not from those working to change our society but from those [...] who define the correct view to be equivalent to the view of the majority.

Take care with that. The political implications could be huge. This can be read as declaring democracy a threat to our existence. Must be my bad English, though...

Thanks for the non-paywall link btw.


> Take care with that. The political implications could be huge. This can be read as declaring democracy a threat to our existence.

An unfair reading to be sure! The statement should have been followed with a huge asterisk and an explicit qualifier. The thrust of what I meant hinges on the difference between (to quote N+1) a "formal democracy, or the equal right to an opinion, with a democracy of quality in which all views possess equal value — until some are proved superior by commanding a mob following."

I see I've really put my foot in it now. Here's the N+1 article in reference: http://nplusonemag.com/revolt-of-the-elites


Actually that view challenges the Wikipedia idea more than democracy.


I just want to clarify that I'm not making a normative claim about the behavior of low or high NFC individuals. I'm just saying it might be useful to know what kind of audience you're targeting when building your application.


A big problem for people self-learning complex/powerful interfaces is that they often have no way to gauge how long it will take to reach a useful level of proficiency. For busy people or for urgent tasks that means allocating time to learn the thing is a risk. The deadline may have passed by the time you figure it out, or you may be interrupted by other tasks and end up cognitively back at square one.


The case of people who never take risks is well understood.


This is nerd crack: enjoyable and reinforcing, but not necessarily deep. Its recommendations seem, prima facie, to be limited to a small subset of the population. Is "giving up in frustration" measured? What about "fuck it, I have better things to do"? I'll agree that experts are better off with a command line, but I don't think the conclusions can be extended to the population at large.


On the other hand, there are so many people who rely on "easy" interfaces every day whose productivity could be increased dramatically by learning the tools just a bit better. Think of how many people hunt-and-peck on their keyboards. Some of these people have been hunting and pecking for twenty, thirty or more years of typing every day (I work in a school---it's embarrassing to watch so many teachers do this day in and day out). You don't always have to become a "guru" to get something significant out of having put in a little effort learning your tools.


Your comment makes me think of how short-sighted laziness is endemic, or maybe ingrained at a cultural level. Or, even worse, it's a natural part of our decision-making processes, but I really think culture can overcome it.

Solving the problem of people never learning "one thing" to save hundreds of wasted hours over the years is akin to solving the obesity problem in America. Everybody that is obese knows already that obesity is a health problem, but it takes that little bit of effort to start making it pay off, and nobody has the time/desire to put in that little bit of effort. Meanwhile massive societal forces conspire to both make people overeat and stay inactive, just as they keep them from taking the time to learn new things about their software and work processes.

The reason I think culture and education can overcome these forces is because of another comparable societal problem: the campaign to end smoking actually has gotten a large percentage of Americans, especially youth, to get over that hump or even better never start at all. It took massive investment in public education and time for it to propagate through a generation. So, these are cultural problems: we have a failure to motivate people to explore new computer skills.


True; some people haven't even learned to use search/replace properly, let alone macros in their word processing tool. Some tasks they spent hours or days on could be done in a few minutes, given just a little bit of knowledge.

Then again. Does Joe Sixpack benefit from being more productive? After all, usually they aren't paid more when they do more work, so there isn't that much incentive...


> Then again. Does Joe Sixpack benefit from being more productive? After all, usually they aren't paid more when they do more work, so there isn't that much incentive...

My initial reaction to this was: of course! Doesn't everyone benefit from spending less time doing repetitive, thoughtless tasks at work? (even if they don't use the extra time to produce more output for their employer?) But you're right: the evidence is pretty overwhelming that either this isn't true, or it's true but most people don't care or realize it.


Most people don't have their performance or efficiency monitored, or any internal/external motivation to improve that efficiency.

One of the things I enjoy most as a programmer is having the tools to easily monitor my output (LOC per day or something), but also the power to improve those tools or create better ones.


I agree. In particular I found the tool on this page very useful: http://cspangled.blogspot.com/2010/05/staying-efficient.html


I don't think the solution to this is to make hunt-and-peck typing impossible, whether by removing the labels on the keys, making the keyboard invisible, or what have you. People won't learn touch typing as a result; most will instead just not use the tool at all, rather than use it suboptimally. "Gee, my life would be a lot better if I tried this e-mail thing instead of calling people all the time. Oh, it's too hard for me to type, because I can't hunt and peck?" They give up.


Which is why it should be taught in school. It took me two weeks to learn to type in my late twenties. It's not like you'd ever forget the skill--or have the opportunity to forget with all the homework that needs to be done.


Is there any research about the keyboard issue? Sometimes I suspect that it is simply impossible for adults to learn touch-typing naturally the same way I did as a child (i.e., simply by telling myself "don't look at the keyboard"). Or maybe it's that they never engage in the activities that really force people to become fast typists (in my case, probably IRC) and perhaps never will.


I only learned to touch-type in my late twenties. It took me about two weeks of practising with gtypist for 20-30 minutes a day. I only type about 50-60wpm when I push myself (usually quite a bit less), but even if you can only type 30wpm it's still an enormous improvement over hunt-and-pecking. It makes me cringe to think they're "teaching" MS Office to school kids but can't be bothered to teach them basic typing skills.


> not necessarily deep

It directly challenges a very widespread thesis, explicitly laid out in the UI manuals of Apple and Sun, about how to present information in GUIs. The work shows how following this advice can suggest non-optimal problem-solving strategies.

The recommendations in the papers aren't, contrary to what the blog post says, about CLI vs. GUI, but about what kind of information should be presented to the user.


I'm in a position to observe fragments of the "larger parts" of the population during their interactions with the machine. They copy and paste for hours, editing files line by line, replacing one character with another. Maybe it's just me and my background, but I'm immediately struck by the waste of precious human minds.
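
That kind of line-by-line character replacement is a one-liner in the shell. A minimal sketch, assuming GNU sed and a made-up file name:

    # replace every ';' with ',' throughout the file, in place
    sed -i 's/;/,/g' report.txt

    # or do the same across a whole directory of files
    sed -i 's/;/,/g' *.txt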

And: if these people have the endurance to stand countless hours of such repetitive tasks, they are more than able to take on the challenge of a steep learning curve.

That said: my approach has always been to try to teach people some fundamentals instead of letting them walk calmly into their UI abysses. And usually, when people learn that you can do with a keystroke what took four clicks before, they want to know more.


They might not be able to take the learning curve; doing something repetitive gives a steady stream of small feelings of accomplishment.


I hate Excel for this reason. So many research colleagues do such horribly repetitive things when processing their data it makes me weep. The problem is they're not lazy enough to learn R, Stata, Python, etc...


I think the takeaway from this isn't that all user interfaces should be one way or another, but simply that things that are "tools", things that are meant to be used day in and day out for productive tasks, could benefit from a steeper learning curve at the cost of some intuitiveness.


Right, if you are to learn to use a tool in novel ways you need to understand its behavior at a basic level: it is better to experiment with it from the outside until you understand its "Tao".

On the other hand, if you need to achieve a task whose outcome you won't rely on to reach other ends, like, say, buying airline tickets (ha), it's probably better if you can offload the thinking to some other entity.


Totally agree. I didn't read the paper, but I doubt they have modeled the fact that people have a finite amount of time that they can invest in internalising stuff. IOW, (I presume) they have not looked at the opportunity cost of internalising UI details, and often that time is better invested in other pursuits.


In fact they did. They measured total task time for internalized versus externalized conditions. Contra your prediction, there wasn't a significant difference. Keep in mind this is one limited experiment so it's obviously not the final say, but it's always good to actually look at the data.


Thanks, you're quite right. Having now skimmed the paper "Does an interface with less assistance provoke more thoughtful behavior?", I would say that the measures of how many moves were made or remade, and how long was taken before the first move, are basically irrelevant, as the only quantities of genuine interest are how long the solution takes, and how well the subjects did on the questions afterwards. As you say, there was no significant difference in average solution time -- even though the low-NFC group did on average 19s better on the externalised version. Nor was there a statistically significant difference in either of the question types (although one type was close to favouring the internalised approach at p = 0.06). Maybe the other papers are more persuasive.


Just to clarify: are we talking about "learning the interface" plus "accomplishing task" here, or just "accomplishing task"? I'm pretty sure that e.g. unzipping your first archive with WinZip can be done more quickly than untarring your first archive with tar...


In both conditions, people had to learn to use an interface for scheduling speakers on a calendar GUI. Roughly speaking, in the externalized condition, more problem constraints were identifiable through visual feedback, while these were absent and had to be inferred in the internalized condition.

So, (in this paper), we're talking about planning versus relying on more external visual cues. Your WinZip versus tar example is apt, but it also introduces the issue of GUI versus CLI, which isn't addressed by the paper.


Nimwegen's PhD thesis, "The paradox of the guided user: assistance can be counter-effective", is from later than the cited articles (2008) and is available online: http://igitur-archive.library.uu.nl/dissertations/2008-0401-...

The experiment in the Ph.D. research did not tackle CLI vs. GUI, but rather compared two GUIs: an internalized one (i.e., figure out for yourself what you can do) and an externalized one (which gives accessible information on what you can do). There is a little bit of discussion of the CLI vs. GUI interface styles in chapters 1 & 2.


In 'Influence: Science and Practice', Cialdini delves into the research surrounding hazing and more abstractly, how the value we place on things is often correlated to the degree of difficulty or suffering endured to get them.

Basically, I like linux and vim because I had to suffer so much learning them ;)

But seriously, I think that notion applies heavily to coding. The languages and tools are often difficult and time-consuming to learn, and once you've invested the time to learn them well, you're psychologically predisposed to like them more. If time is valuable and one spends a lot of it learning a tool, only to find out later that the tool might be sub-par, it causes cognitive dissonance. At that point dissonance is most quickly reduced by looking for information confirming you've made the right choice, or simply assuming you have, and getting back to work.

Not directly parallel or orthogonal to the OP, but just a thought that crossed my mind while I was reading this.

(I'm getting delirious because I popped some sleeping pills, so I'm not to be held responsible for how coherent this was, orthogonality aside.)


Based on this, what might be interesting is a user interface that gives you very restricted options to begin with and then gradually removes those restrictions the more you use it.

Having just written that, I realised that games have been doing this for years. Imagine having a DVCS inform you that you've levelled up and can now do cloning.


I think Microsoft tried that with Office (2000 and later?): they had adaptive menus. Apparently most users disliked the feature and it was eventually removed (http://blogs.msdn.com/b/jensenh/archive/2006/03/31/565877.as...)


They did, and it worked great on your own machine after a few weeks. The problem was it made everyone else's machines basically unusable.


Good idea, but there would have to be cheats, be it for urgent tasks or because of a reinstall.


Wow. This is bikeshedding to the extreme.


Or a programming language. We can measure 'level' in pointers. Then we'd have a practical application of the term 'three star programmer.'


The reason we become experts at something (interacting with a computer here) is that we spend time thinking about it. By making user interactions harder, we're actually training people to think in a way that will allow them to extrapolate simple solutions to larger problems in the future.

Knights-in-training learned with heavier swords than they'd use in battle, so that when battle came they'd be able to wield their swords effortlessly.


Most people don't want to be knights; they want to be the people who are ordering the knights into battle. (Most computer users don't want to become experts at interacting with a computer; they want to be adept at using the computer to do taxes, interact with colleagues, search for information, etc.)


Most knights-in-training also had other ambitions. But they wanted to live through the battles.

How many people have had their week ruined by a lack of version control? Not geeks, but real people who didn't even know what that was until Dropbox arrived.

And version control is old hat. What other computer tricks are there? Backup? Deployment? Putting services in the cloud? Building services? This isn't stuff that requires a PhD in algorithms and distributed systems, just stuff that requires basic competence.


Exactly, that's why UNIX is for knights! Those who use it (at least, via the terminal) eventually become knights, and it's through that process that the curve shown in the article develops.


I don't think the internalization principle holds up in today's world, where the number of devices/tools/software we use is exploding. In a slower-moving world with few tools there was an incentive to master the tools and then exploit that skill. I personally find it useful to partition tools into two categories: the ones that I want to think about, and the rest, which I want to use without much thought.


The difference is mostly between those who work with the OS and shell directly (mostly programmers and system administrators) and those who simply need to interact briefly with an application program. Those who regularly use tools like word processors and spreadsheets are in an in-between situation where either paradigm could be better, depending on how and how much they use their particular tools.


Interesting linking style used in this article, especially considering the topic.

Isn't it a bit of a step backwards to start using footnotes on the web? I'd rather see the links to the past papers worked into a sentence, with a link for each topic, for instance. That's what is so great about the web: I can hop instantly off to check on something that catches my eye.

I suppose it could be argued that using footnotes will keep the users from getting distracted, but I consider cleverly working in links to be one of the joys of both reading and writing on the web.


You're supposed to internalize the process of remembering footnotes and cross-referencing them with the links, instead of relying on external cues like hyperlinks. ;-)


My preference: using author-date referencing with a reflist rather than footnotes makes it easier to see what the aside is about, and allows you to include such relevant information as the title of the paper and where it was published together at the end of the page in the reflist. Bundling all this information inline is distracting if it happens more than once or twice. There's no reason why the hyperlinks shouldn't be to the papers themselves if you follow this system.

Putting digressions, rather than just sources, in footnotes is what annoys me. It's just lazy: "I wanted to say this as well, so I'll slap it into a footnote instead of figuring out how to say it in a readable way. So footnote 1: incidentally it is amusing to note, blah, blah...", leaving the reader with the choice of following the footnote and spoiling the flow of the main post, or not following it and missing whatever the writer had thought was worth writing down. They create distractions, they don't avoid them.


Sometimes with technical material, I read the same paragraph multiple times. If the footnote is really an "aside", I can just read it once and not have it clutter the paragraph when I'm rereading.


If there's some policy at work that allows you to know what kind of information might be in a footnote, fine, you can indeed eliminate clutter in this way. Only putting sources in footnotes works, for instance.

But authors who tend to put asides in footnotes sometimes put such crucial material as definitions in footnotes. Don't you ever find yourself switching from main text to footnote when rereading technical material?


Interesting. Put the theory in the head, where it's easier to work with. You can reason with it, make predictions, maybe even develop an intuition, integrating it with existing knowledge, find metaphors. You can't do that if it's in the interface; you have to manually try each one. Even though you could extract the rules, you don't need to.

It also makes your product stickier (harder to change products; a switching cost), if users have internalized the rules.

It also may make the product harder to adopt, initially. A nice combination would be to make it trivially easy to do some common tasks (adoptable), but require internalization to do tricky things (a rewarding path to mastery; proficiency with your product becomes a marketable skill; people look up to you; you become one with the tool; the power enables you to get things done).

There are other aspects of ease-of-use that aren't related to {internal,external}ization, such as consistency of interface, e.g. ls -r means reverse, not recurse. Even though having to learn arbitrary differences will make it stickier.
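
To make the inconsistency concrete (standard behaviour of the stock tools, nothing exotic):

    ls -r          # -r reverses the sort order
    ls -R          # -R recurses into subdirectories
    grep -r foo .  # here -r means recurse...
    sort -r file   # ...and here it means reverse again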


The author forgets that by making the UI more complex, the application filters out less experienced users. End result: the remaining users are more experienced and productive on average. But it's not because users are getting better; it's because the weaker users are filtered out!


The whole idea of making interaction harder via command-line irregularities in usage is incredibly stupid. Windows PowerShell has somewhat of a good idea: by using object pipes, formatting is eliminated. A better idea would be to extend this system-wide and "pipe" even UIs to different formats, such as JavaScript + markup or terminal output; this would require something completely different from Unix, Linux or Windows.


PowerShell is a good scripting environment, but it is a lousy interactive shell. And it is exactly _because_ it pipes objects instead of text.

When plain text is being passed around, one can build a pipe incrementally and immediately see what the input to the next command is and decide what to do with it. With powershell one has to constantly check the properties of the objects.

It is faster to do a 'some -command | grep foo' than 'some -command | Where { $_.SomeProperty -match "foo"}'.
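
To make that concrete, here's the incremental style in a Unix shell versus the property-checking step PowerShell pushes on you (just a sketch; sshd is an arbitrary example):

    # Unix: the intermediate text is right there, so you refine step by step
    ps aux
    ps aux | grep sshd
    ps aux | grep sshd | awk '{print $2}'    # pull out just the PIDs

    # PowerShell: first discover what properties the objects carry, then filter
    Get-Process | Get-Member
    Get-Process | Where-Object { $_.ProcessName -match "sshd" }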


Now do this, but format it for another command, and another, and another; non-object pipes are good for one-offs and anything that doesn't require structure (which almost everything beyond interactive use does). Syntactic arguments are moot: if the language were a property of the operating system itself (VM-based), the syntax differences would mean nothing.


The fact that you don't know how to make your tools easy to use doesn't justify making them complex nightmares.

UNIX (and open source software in general, with a few notable exceptions) is difficult because it's built by thousands of hackers with no user experience goals in sight. The goal is solving a problem for that particular person, as fast as possible. It's rarely getting more people to use the product.


> The goal is solving a problem for that particular person, as fast as possible. It's rarely getting more people to use the product.

I would say that the incredible versatility of the majority of stock UNIX tools and the philosophy that encourages combining them is evidence against this. Trying to predict every use case is a waste of time so make it simple and pipe. The alternative is software that is restrictive for some subset of users, no matter how elaborate it is.
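
The canonical illustration of that versatility is a word-frequency count built by piping together stock tools, none of which was written with that task in mind. A rough sketch (input.txt is a placeholder):

    # split into one word per line, lowercase, count, and show the most frequent
    tr -cs 'A-Za-z' '\n' < input.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head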


> The fact that you don't know how to make your tools easy to use doesn't justify making them complex nightmares.

Does anyone know how to make their tools easy to use? Apple, for one, certainly doesn't.


I believe this comes down to interfaces. It's fair to assume that programmers will have greater ease in a UNIX command-line environment after overcoming the learning curve. On the other hand, a standard user would be better off interfacing with a GUI, as it is superior to the command line for tasks that differ from a developer's. Data creation vs. data consumption.


The problem with the article is that he talks about the two extremes of interface design: the GUI vs. the Linux command line.

There are better interfaces for complex tasks. Just try a good Python shell with auto-complete and context-sensitive help (like Wing IDE): the learning curve is much shorter, and you don't lose power along the way.


I haven't used AutoCAD since R13, back when I was still a teenager working in an engineering office, but it was one of the best GUI/CLI hybrids I've ever used. You could learn with the GUI, and it would parrot the CLI commands in the command window. Eventually, you'd become familiar with the CLI commands through pointy-clicky GUI usage and just rip through work at ridiculous speeds.

Mapping the commands (lisp-based DSL if I remember correctly...) to left-hand-only single key macros while your right hand moused the coordinates on the canvas ... holy crap was that fast. You could whip up fully qualified engineering drawings in a matter of minutes.


I feel like Edward Tufte would have a seizure at the way this article juxtaposes two wildly unrelated charts.


The second chart's legend also threw me off quite a bit.


So: make interaction harder and costlier, and users will spend more time thinking about how to do what they want and less time doing it.

Can someone explain how this is a good thing?


The point is, they spend less time overall by thinking about it first, than they would spend by poking at all the options the program provides, hoping one of them will be close enough.

Of course, you can't spend all your time thinking about the distracting minutia of every-day life (hand-encoding assembly instructions to program the microwave oven, or change the channel on the TV), but if you're actually trying to get something done, an interface that gives you a broad array of tools with complicated interactions can be better than a simple tool. Compare Vim vs. Notepad, or Photoshop vs. Paint.


Or Autocad, Maya, Logic and Protools... Those are the types of tools where professionals will dismiss simpler UIs out of hand.


Isn't "poking at all the options the program provides, hoping one of them will be close enough" the way one actually learns to use a program? At least I personally learn everything this way.


Internalizing interaction is better when you want users to make a thoughtful, creative contribution to your community, or to use tools that demand it: basically coding, CADing, crafting (e.g. Threadless, Newgrounds, Etsy).

Externalizing interaction is better when you want to provide a mental framework to drive behavior: most viral hooks or premium upsells (e.g. Farmville, Facebook, the App Store).


You could take the same research and apply it to IDEs. Perhaps this explains why I prefer Emacs to Eclipse or IntelliJ.



