
The Cognitive Style of Unix - gandalfgeek
http://blog.vivekhaldar.com/post/3339907908/the-cognitive-style-of-unix 
======
microarchitect
The paper (unfortunately behind the ACM paywall) makes for very interesting
reading.

There are a couple of points in the paper that aren't mentioned in the
article.

The first one is the difference between low and high NFC (need for
cognition) individuals. The paper defines NFC as follows: A person with a high
NFC loves to seek, reflect on and reason about information, whereas someone on
the other end of the continuum only thinks as hard as (s)he has to and is
inclined to rely on others. Their results show that low NFC folks actually
took longer to complete the internalized version of the task while the high
NFC folks took longer to do the externalized version. This reaffirms Haldar's
point, but with a caveat - his conclusions are applicable only to "power
users".

The other interesting thing was that both low and high NFC individuals got
started on the task much faster (the paper calls it time to first move) with
the externalized version. Presumably, all the individuals were told they _had_
to complete the task, while in the "real world" many might have just given up.
If you're designing an application, this is a useful lesson: getting started
should be easy (i.e., an externalized interface). I guess this is also
traditional wisdom, but it's nice to see it confirmed by peer-reviewed
research.

~~~
paulgerhardt
Alternate hosts of the paper can be found using Google Scholar; here's one:
[http://dspace.learningnetworks.org/bitstream/1820/629/1/ICLS...](http://dspace.learningnetworks.org/bitstream/1820/629/1/ICLS06_cameraready%20final%20submission%20vannimwegen%20et%20al%20icls2006.pdf)
[pdf]

From José Ortega y Gasset's _Revolt of the Masses_ [1930]:

    “Doubtless the most radical division of humanity that
     can be made is that between two classes of creatures:
     those who demand much of themselves and assume a burden
     of tasks and difficulties, and those who require
     nothing special of themselves, but rather for whom to
     live is to be in every instant only what they already
     are.”

What Ortega y Gasset argues, and what is still true today, is that the true
threat to our livelihood comes not from those working to change our society
but from those who deny its imperfections, who see acts of self-improvement
as admissions of weakness, and who define the correct view to be equivalent
to the view of the majority.

~~~
nickolai
> the true threat to our livelihood comes not from those working to change our
> society but from those [...] who define the correct view to be equivalent to
> the view of the majority.

Take care with that. The political implications could be huge. This can be
read as declaring Democracy a threat to our existence. Must be my bad English
though...

Thanks for the non-paywall link btw.

~~~
paulgerhardt
> Take care with that. The political implications could be huge. This can be
> read as declaring Democracy a threat to our existence.

An unfair reading to be sure! The statement should have been followed with a
huge asterisk and an explicit qualifier. The thrust of what I meant hinges on
the difference between (to quote N+1) a "formal democracy, or the equal right
to an opinion, with a democracy of quality in which all views possess equal
value — until some are proved superior by commanding a mob following."

I see I've really put my foot in it now. Here's the N+1 article I'm referencing:
<http://nplusonemag.com/revolt-of-the-elites>

------
noibl
A big problem for people self-learning complex/powerful interfaces is that
they often have no way to gauge how long it will take to reach a useful level
of proficiency. For busy people, or for urgent tasks, that means allocating
time to learn the thing is a risk. The deadline may have passed by the time
you figure it out, or you may be interrupted by other tasks and end up
cognitively back at square one.

~~~
enduser
The case of people who never take risks is well understood.

------
redthrowaway
This is nerd crack: enjoyable and reinforcing, but not necessarily deep. Its
recommendations seem, _prima facie_, to be limited to a small subset of the
population. Is "giving up in frustration" measured? What about "fuck it, I
have better things to do"? I'll agree that experts are better off with a
command line, but I don't think the conclusions can be extended to the
population at large.

~~~
angus77
On the other hand, there are so many people who rely on "easy" interfaces
every day whose productivity could be increased dramatically by learning the
tools just a bit better. Think of how many people hunt-and-peck on their
keyboards. Some of these people have been hunting and pecking for twenty,
thirty or more years of typing every day (I work in a school---it's
embarrassing to watch so many teachers do this day in and day out). You don't
always have to become a "guru" to get something significant out of putting in
a little effort to learn your tools.

~~~
wladimir
True; some people haven't even learned to use search/replace properly, let
alone macros in their word processing tool. Some tasks they spend hours or
days on could be done in a few minutes, given just a little bit of knowledge.
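
As a minimal sketch of the kind of edit meant (assuming GNU sed's -i flag;
the file names and strings here are made up):

    # Replace a term across many files at once, the kind of edit
    # people sometimes do by hand for hours:
    sed -i 's/Acme Corp/Acme Inc/g' *.txt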

Then again, does Joe Sixpack benefit from being more productive? After all,
he usually isn't paid more for doing more work, so there isn't that much
incentive...

~~~
askedrelic
Most people don't have their performance or efficiency monitored, or any
internal/external motivation to improve that efficiency.

One of the things I enjoy most as a programmer is having the tools to easily
monitor my output (LOC per day or something), but also the power to improve
those tools or create better ones.

~~~
yaongi
I agree. In particular I found the tool on this page very useful:
<http://cspangled.blogspot.com/2010/05/staying-efficient.html>

------
chalst
Nimwegen's PhD thesis, _The paradox of the guided user: assistance can be
counter-effective_, is from later than the cited articles (2008) and is
available online: [http://igitur-
archive.library.uu.nl/dissertations/2008-0401-...](http://igitur-
archive.library.uu.nl/dissertations/2008-0401-200600/nimwegen.pdf)

The experiment in the Ph.D. research did not tackle CLI vs. GUI, but rather
compared two GUIs: one internalized (i.e., figure out for yourself what you
can do) and one externalized (giving accessible information on what you can
do). There is a little bit of discussion of the CLI vs. GUI interface styles
in chapters 1 and 2.

------
doorhammer
In 'Influence: Science and Practice', Cialdini delves into the research
surrounding hazing and, more abstractly, how the value we place on things is
often correlated with the degree of difficulty or suffering endured to get
them.

Basically, I like linux and vim because I had to suffer so much learning them
;)

But seriously, I think that notion applies heavily to coding. The languages
and tools are often difficult and time-consuming to learn, and once you've
invested the time to learn them well, you're psychologically predisposed to
like them more. If time is valuable and one spends a lot of time learning a
tool, only to find out later that it might be sub-par, it causes cognitive
dissonance. At that point the dissonance is most quickly reduced by looking
for information confirming you've made the right choice, or simply assuming
you have, and getting back to work.

Not directly parallel or orthogonal to the OP, but just a thought that
crossed my mind while I was reading this.

(I'm getting delirious because I popped some sleeping pills, so I'm not to be
held responsible for how coherent it was, orthogonality aside.)

------
d4nt
Based on this, what might be interesting is a user interface that gives you
very restricted options to begin with and then gradually removes those
restrictions the more you use it.

Having just written that, I realised that games have been doing this for
years. Imagine having a DVCS inform you that you've levelled up and can now do
cloning.
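
A toy sketch of the idea (entirely hypothetical; the ~/.git-level file and
the "levelling" scheme are made up): a git wrapper that only exposes the
subcommands you've unlocked, stored one per line:

    #!/bin/sh
    # Refuse any git subcommand the user hasn't levelled up to yet.
    LEVEL_FILE="$HOME/.git-level"
    cmd="$1"
    if grep -qx "$cmd" "$LEVEL_FILE" 2>/dev/null; then
        exec git "$@"
    else
        echo "You haven't levelled up to 'git $cmd' yet." >&2
        exit 1
    fi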

~~~
cake
I think Microsoft tried that with Office (2000 and later?): they had adaptive
menus. Apparently most users disliked this feature and it was eventually
removed
([http://blogs.msdn.com/b/jensenh/archive/2006/03/31/565877.as...](http://blogs.msdn.com/b/jensenh/archive/2006/03/31/565877.aspx))

~~~
bigfudge
They did, and it worked great on your own machine after a few weeks. The
problem was it made everyone else's machines basically unusable.

------
fendrak
The reason we become experts at something (interacting with a computer here)
is that we spend time _thinking_ about it. By making user interactions harder,
we're actually training people to _think_ in a way that will allow them to
extrapolate simple solutions to larger problems in the future.

Knights-in-training learned with heavier swords than they'd use in battle, so
that when battle came they'd be able to wield their swords effortlessly.

~~~
pgbovine
Most people don't want to be knights; they want to be the people ordering the
knights into battle. (Most computer users don't want to become experts at
interacting with a computer; they want to be adept at using the computer to
do taxes, interact with colleagues, search for information, etc.)

~~~
wisty
Most knights-in-training also had other ambitions. But they wanted to live
through the battles.

How many people have had their week ruined by a lack of version control? Not
geeks, but real people who didn't even know what that was until Dropbox
arrived.

And version control is old hat. What other computer tricks are there? Backup?
Deployment? Putting services in the cloud? Building services? This isn't stuff
that requires a PhD in algorithms and distributed systems, just stuff that
requires basic competence.

------
dman
I don't think the internalization principle holds up in today's life, when
the number of devices/tools/software we use is exploding. In a slower-moving
world with few tools there was an incentive to master the tools and then
exploit your skill. I personally find it useful to partition tools into two
categories: the ones that I want to think about, and the rest, which I want
to use without much thought.

~~~
billswift
The difference is mostly between those who work with the OS and shell directly
(mostly programmers and system administrators) and those who simply need to
interact briefly with an application program. Those who regularly use tools
like word processors and spreadsheets are in an in-between situation, where
either paradigm could be better depending on how, and how much, they use
their particular tools.

------
nicpottier
Interesting linking style used in this article, especially considering the
topic.

Isn't it a bit of a step backwards to start using footnotes on the web? I'd
rather see the links to the past papers worked into the sentences, with a
link for each topic. That's what is so great about the web: I can hop
instantly off to check on something that catches my eye.

I suppose it could be argued that using footnotes will keep the users from
getting distracted, but I consider cleverly working in links to be one of the
joys of both reading and writing on the web.

~~~
chalst
My preference: using author-date referencing with a reflist rather than
footnotes makes it easier to see what the aside is about, and allows you to
include such relevant information as the title of the paper and where it was
published together at the end of the page in the reflist. Bundling all this
information inline is distracting if it happens more than once or twice.
There's no reason why the hyperlinks shouldn't be to the papers themselves if
you follow this system.

Putting digressions, rather than just sources, in footnotes is what annoys me.
It's just lazy: "I wanted to say this as well, so I'll slap it into a footnote
instead of figuring out how to say it in a readable way. So footnote 1,
Incidentally it is amusing to note, blah, blah...", leaving the reader with
the choice of following the footnote and spoiling the flow of the main post,
or not following it and missing whatever the writer had thought was worth
writing down. They create distractions, they don't avoid them.

~~~
jedbrown
Sometimes with technical material, I read the same paragraph multiple times.
If the footnote is really an "aside", I can just read it once and not have it
clutter the paragraph when I'm rereading.

~~~
chalst
If there's some policy at work that allows you to know what kind of
information might be in a footnote, fine, you can indeed eliminate clutter in
this way. Only putting sources in footnotes works, for instance.

But authors who tend to put asides in footnotes sometimes put such crucial
material as definitions in footnotes. Don't you ever find yourself switching
from main text to footnote when rereading technical material?

------
6ren
Interesting. Put the theory in the head, where it's easier to work with. You
can reason with it, make predictions, maybe even develop an intuition,
integrate it with existing knowledge, find metaphors. You can't do that if
it's in the interface; you have to manually try each option. Even though you
could extract the rules, you don't need to.

It also makes your product stickier (harder to change products; a switching
cost), if users have internalized the rules.

It also may make the product harder to adopt initially. A nice combination
would be to make it trivially easy to do some common tasks (adoptable), but
require internalization to do tricky things (a rewarding path to mastery;
proficiency with your product becomes a marketable skill; people look up to
you; you become one with the tool; the power enables you to get things done).

There are other aspects of ease-of-use that aren't related to
{internal,external}ization, such as consistency of interface, e.g. _ls -r_
means reverse, not recurse. Even though having to learn arbitrary differences
will make it stickier.
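
A few concrete examples of that inconsistency, since the same letter means
different things across the classic utilities:

    ls -R dir/       # -R: recurse into subdirectories
    ls -r dir/       # -r: reverse the sort order
    grep -r foo .    # -r: recurse
    sort -r file     # -r: reverse
    rm -r dir/       # -r: recurse (no reverse here at all)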

------
dennisgorelik
The author forgets that by making the UI more complex, an application filters
out less experienced users. The end result: the remaining users are more
experienced and productive on average. But that's not because users are
getting better; it's because the weaker users have been filtered out!

------
nwmcsween
The whole idea of making interaction harder via command-line irregularities
w.r.t. usage is incredibly stupid. Windows PowerShell has somewhat of a good
idea: by using object pipes, formatting is eliminated. A better idea would be
to extend this system-wide and 'pipe' even UIs to different formats, such as
javascript + markup or terminal output. This would require something
completely different from Unix, Linux or Windows.

~~~
CrLf
PowerShell is a good scripting environment, but it is a lousy interactive
shell. And it is exactly _because_ it pipes objects instead of text.

When plain text is being passed around, one can build a pipe incrementally and
immediately see what the input to the next command is and decide what to do
with it. With PowerShell, one has to constantly check the properties of the
objects.

It is faster to do a 'some -command | grep foo' than 'some -command | Where {
$_.SomeProperty -match "foo"}'.
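
A small sketch of the incremental workflow described here (the "httpd"
pattern is just an example):

    ps aux                                   # eyeball the raw text
    ps aux | grep httpd                      # keep lines matching "httpd"
    ps aux | grep httpd | awk '{print $2}'   # keep just the PID column

    # In PowerShell you first have to discover the object's properties:
    #   Get-Process | Get-Member
    #   Get-Process | Where-Object { $_.ProcessName -match "httpd" } |
    #     Select-Object Id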

~~~
nwmcsween
Do this, now format it for another command, and another, and another. Non-
object pipes are good for one-offs and anything that doesn't require
structure (which almost everything beyond interactive use does). Syntactic
arguments are moot: if the language were a property of the operating system
itself (VM-based), the syntax differences would mean nothing.
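
This isn't the system-wide design described above, but as a rough sketch of
structure surviving a pipeline (assuming jq is installed), JSON can play the
role of PowerShell's objects while staying plain text:

    echo '[{"name":"a","size":3},{"name":"b","size":7}]' \
      | jq '.[] | select(.size > 5) | .name'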

------
giladvdn
The fact that you don't know how to make your tools easy to use doesn't
justify making them complex nightmares.

UNIX (and open source software in general, with a few notable exceptions) is
difficult because it's built by thousands of hackers with no user experience
goals in sight. The goal is solving a problem for that particular person, as
fast as possible. It's rarely getting more people to use the product.

~~~
noibl
> The goal is solving a problem for that particular person, as fast as
> possible. It's rarely getting more people to use the product.

I would say that the incredible versatility of the majority of stock UNIX
tools, and the philosophy that encourages combining them, is evidence against
this. Trying to predict every use case is a waste of time, so make it simple
and pipe. The alternative is software that is restrictive for some subset of
users, no matter how elaborate it is.

------
wopsky
I believe this comes down to interfaces. It's fair to assume that programmers
will have greater ease in a UNIX command-line environment after overcoming
the learning curve. On the other hand, a standard user would be better off
interfacing with a GUI, as it is superior to the command line for tasks that
are different from a developer's. Data creation vs. data consumption.

------
ippisl
The problem with the article is that it talks about the two extremes of
interface design: the GUI vs. the Linux command line.

There are better interfaces for complex tasks. Just try a good Python shell
with auto-complete and context-sensitive help (like Wing IDE). The learning
curve is much shorter, and you don't lose power along the way.

~~~
tastybites
I haven't used AutoCAD since R13, back when I was still a teenager working in
an engineering office, but it was one of the best GUI/CLI hybrids I've ever
used. You could learn with the GUI, and it would parrot the CLI commands in
the command window. Eventually, you'd become familiar with the CLI commands
through pointy-clicky GUI usage and just rip through work at ridiculous
speeds.

Mapping the commands (a Lisp-based DSL, if I remember correctly) to left-
hand-only single-key macros while your right hand moused the coordinates on
the canvas... holy crap, was that fast. You could whip up fully qualified
engineering drawings in a matter of minutes.

------
mcantor
I feel like Edward Tufte would have a seizure at the way this article
juxtaposes two wildly unrelated charts.

~~~
sskates
The second chart's legend also threw me off quite a bit.

------
DavidSJ
So: make interaction harder and costlier, and users will spend more time
thinking about how to do what they want and less time doing it.

Can someone explain how this is a good thing?

~~~
thristian
The point is, they spend less time _overall_ by thinking about it first than
they would by poking at all the options the program provides, hoping one of
them will be close enough.

Of course, you can't spend all your time thinking about the distracting
minutiae of everyday life (hand-encoding assembly instructions to program the
microwave oven, or change the channel on the TV), but if you're actually
trying to get something done, an interface that gives you a broad array of
tools with complicated interactions can be better than a simple tool. Compare
Vim vs. Notepad, or Photoshop vs. Paint.

~~~
Gibbon
Or AutoCAD, Maya, Logic and Pro Tools... Those are the types of tools where
professionals will dismiss simpler UIs out of hand.

------
zaphar
You could take the same research and apply it to IDEs. Perhaps this explains
why I prefer Emacs to Eclipse or IntelliJ.

