
Cognitive Overhead, Or Why Your Product Isn’t As Simple As You Think - pg
http://techcrunch.com/2013/04/20/cognitive-overhead/
======
jwilliams
I was all primed to disagree with this article, but it's a pretty well-placed
piece.

I think an interesting pivot occurs when you start hitting the limits of
working memory. Posting a photo is usually pretty immediate and doesn't max
out your working memory.

Many years ago, I worked on a project to convert a Loan Application system
from a VT100 "green screen" terminal to a Web Interface.

The Web Interface was actually pretty good, but it required too much
navigation and too much working memory. We tried to reduce the number of
clicks, but then the pages themselves became too slow and confusing.

Finally I got to see someone using the old VT100 app. It was blisteringly
fast, and they supplemented it with codes written on a piece of paper. Crude,
and it required training, but it was far superior. The main thing was
navigation. The codes on the paper were enough of a picture to offload that
bit of cognitive processing... With the web interface, as soon as you clicked
and had to pause or search - you'd lose the thread.

Unfortunately too late for that project, but a good lesson for me.

~~~
monkeyspaw
Knowing what you now know, what tricks would you use to make the web interface
more like the VT100?

My guess would be keyboard navigation, and entering numbers as codes instead
of clicking radio buttons.

It's important to remember that discoverability in an interface doesn't matter
as much when you can train the workers. Not everything has to be intuitive,
especially when you control training (and you can trade eventual speed for
immediate understanding).

This is the basis for all the VI/Emacs users who claim to be more productive
than in a visual editor/IDE.

~~~
jwilliams
I did come up with a prototype, but it went a bit of a different direction.

My theory was that the biggest issue was that the UI enforced a workflow. Most
likely the banker is given a big pile of documents, then they work through it.
One confirms your address, the next your income, another some other piece of
evidence.

I particularly noticed they would be sitting there shuffling back and forth
between the paper & then shuffling back and forth on the interface. It caused
a lot of rework, and required you to remember where you were at with both.

The Web Interface was waaay too rigid. Want to add a party? You need to enter
their name and work details. So you'd have to track down that information,
even if it wasn't 100% required to get a provisional. You could enter whatever
value you wanted for the income at this stage, so why force the banker to
enter a work address and work phone? They can do that at final docs, when
you're required to provide a payslip (for example!).

Paper & terminal worked because you could move quickly between disconnected
parts. Enter the person's name and DOB. Then jump to income. Back to their
home address. Over to some kind of KYC information, etc, etc.

So I came up with something that was a bit more of a mind-map. Each node
indicated whether it (and its subnodes) were complete or not. You could jump
around and enter data in whichever order you liked & not have to think too
much about what was left to do. Then the nav was about quickly moving up &
down the tree (keyboard being a big factor).

It focused on making the data capture piece much more organic & goal-based.
You only needed to enter discrete information to get a provisional, so use
flags to tell the user what's needed. Then they can populate the space in any
order they like.
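
The node-completion idea could be sketched roughly like this (a minimal
illustration, not the actual prototype; the field and section names are
hypothetical):

```python
# Sketch of the mind-map capture idea: each node tracks its own fields and
# reports completeness including its subnodes. Only fields flagged as needed
# for a provisional block that milestone; everything else can wait.

class Node:
    def __init__(self, name, fields, provisional=()):
        self.name = name
        self.fields = {f: None for f in fields}  # field -> entered value
        self.provisional = set(provisional)      # fields needed for a provisional
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def complete(self):
        # A node is complete when all its fields and all subnodes are filled in.
        return (all(v is not None for v in self.fields.values())
                and all(c.complete() for c in self.children))

    def missing_for_provisional(self):
        # Collect only the fields that block a provisional, in any order.
        missing = [f for f in self.provisional if self.fields[f] is None]
        for c in self.children:
            missing += c.missing_for_provisional()
        return missing

app = Node("application", [])
party = app.add(Node("party", ["name", "dob"], provisional=("name", "dob")))
income = app.add(Node("income", ["amount", "employer"], provisional=("amount",)))

party.fields["name"] = "J. Smith"
party.fields["dob"] = "1970-01-01"
income.fields["amount"] = 52000

print(app.missing_for_provisional())  # [] -> provisional is unblocked
print(app.complete())                 # False -> employer still blank
```

The point is that the tree, not the operator's head, remembers what is left
to do, so data can be entered in whatever order the paperwork arrives.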

Was waaaaaaaaay too weird for a conservative bank to consider :-) So I never
really got to try it out. Would have been interesting to see if it made any
difference.

~~~
AndrewDucker
Having done similar things, I agree.

1) Make sure that the UI fits the order they have data on the bits of paper.

2) Let them enter data in any order they like, and give them a big "Validate"
button they can press to check whether what they've entered is all coherent.
(Don't check as they go along, or the validation markers will confuse and
distract them.)
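
That validate-on-demand pattern could look something like this (a hedged
sketch; the rules and field names are made up for illustration):

```python
# "Enter anything, validate on demand": rules run only when the big Validate
# button is pressed, returning every problem at once instead of interrupting
# data entry field by field.

def validate(application):
    errors = []
    if not application.get("name"):
        errors.append("name: required")
    dob = application.get("dob", "")
    if dob and len(dob.split("-")) != 3:
        errors.append("dob: expected YYYY-MM-DD")
    income = application.get("income")
    if income is not None and income < 0:
        errors.append("income: must be non-negative")
    return errors  # empty list means the application is coherent

draft = {"dob": "1970/01/01", "income": -5}
print(validate(draft))
# ['name: required', 'dob: expected YYYY-MM-DD', 'income: must be non-negative']
```

Because validation is a pure function over the whole draft, nothing stops the
operator mid-flow, and the error list doubles as a to-do list.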

------
ricardobeat
> There isn’t yet much written about cognitive overhead in our field

Couldn't disagree more. Cognitive load is a major element of information
architecture and even has an explicit representation in flow charts. Any book
on information architecture or UX is half about it.

Many if not most decisions in interface design and IA are based on estimating
(and reducing) cognitive load - how many options/buttons to present, how deep
navigation should go, how to group things, splitting up an action flow in
order to reduce the memory/choices needed...

The example given for QR codes is also wrong - "So it's a barcode? No? ..." -
it _is_ a barcode and you need a barcode scanner/app; everyone knows how
barcodes work. QR codes are not the best thing around, but IMO they didn't
catch on because of a lack of interest from manufacturers (no native
support), not because they're overly complex.

~~~
gingerlime
I didn't totally agree with the argument about QR codes either, and I think
that at least in Europe they caught on pretty well.

One thing that could definitely be improved: the built-in camera app on your
phone could automatically detect them and pop up more info. When I first
tried using a QR code, that's what I did, using the iPhone camera app, and
was disappointed to discover it didn't work this way.

Eliminating the need for an extra app could dramatically boost their appeal,
but the core concept is there.

~~~
hdctambien
My windows phone has QR reading capabilities built in (Bing Vision).

Bing Vision can also apparently scan text you point the phone at, translate
it to a different language, and then overlay the translation on the screen.

~~~
ricardobeat
Inspired by Google Goggles, which for some reason was never merged into
Android's camera app.

------
dkrich
I think there are some strong points in the article but overall it misses a
major point. Whatever interface you are designing, the underlying service you
are building the interface for _has_ to be something people _really, really_
want.

The most intuitive interface won't get people using something if they don't
care about the information coming from the other side. Conversely, if there
is information that the end user needs, he will spend the time learning the
interface. For example, if Shazam didn't exist and you heard a song you
really liked at the bar, you might ask around, google some fragment of the
lyrics you recognize, etc. to find out the same information. So if Shazam had
an interface that was far less intuitive, I have to believe it would still be
very popular.

That sounds simple, but I think the article puts the weight on the interface
without much regard to the fact that all interfaces are merely a means to an
end.

~~~
vinceguidry
> Whatever interface you are designing, the underlying service you are
> building the interface for has to be something people really, really want.

How do you target that? I'm not entirely sure you can. Is there a set of steps
you can follow, a design process, that will get you to something people
_really, really_ want?

~~~
acgourley
That's generally what a lean startup process / mindset is supposed to help
you find. The idea is that it's not something you can get a read on without
putting a prototype MVP into the world to test.

------
pg
I'd be interested to know when/where the term "cognitive load" first appeared,
if anyone knows.

~~~
ricardobeat
The term comes from psychology, so probably a few decades back:
<http://en.wikipedia.org/wiki/Cognitive_load>

In relation to the web, a quick Google Scholar search brings up lots of papers
from the 1990s:
[http://scholar.google.com/scholar?q=%22cognitive+overhead%22...](http://scholar.google.com/scholar?q=%22cognitive+overhead%22&hl=en&as_sdt=0%2C5&as_ylo=1980&as_yhi=2000)

~~~
pg
I poked around myself and the earliest use of the term I can find is in an
article from 1968:

    
    
      Pupil size and problem solving
      JL Bradshaw 
      The Quarterly Journal of Experimental Psychology, 1968
    

Of course when I tried to read the text of the article, I got a message saying

    
    
      Sorry, you do not have access to this article.
    
      How to gain access:
    
      Recommend to your librarian that your institution 
      purchase access to this publication.

~~~
staunch
Here's one from 1970

[http://books.google.com/books?id=srxOAAAAYAAJ&q=%22cogni...](http://books.google.com/books?id=srxOAAAAYAAJ&q=%22cognitive+load%22&dq=%22cognitive+load%22)

If you search that book for "cognitive load is defined" you get the snippet:

> _"Cognitive load" is defined loosely as the amount of mental strain put on a
> person during the performance of some task, often at least partially due to
> the constraints placed on the performance of that task..._

------
holri
I take better photos with my old manual Hasselblad from the 1970s, precisely
because the interface has massive cognitive overhead. No automation at all,
all manual, clunky, complicated. It forces me to think slowly, and therefore
gives my head and heart the time and deep concentration to work out the
picture I want to make. This leads to much better photos.

~~~
rogerbinns
I'll play devil's advocate to that. Nowadays you can use high resolution
cameras with automatic everything and take lots of pictures. High resolution
means cropping still yields a useful picture and taking lots means you are far
more likely to have a useful picture. You are certainly far less likely to
miss something. (Manual focusing is also not available to short-sighted
people like me, since glasses/contacts don't result in perfect correction.)

If we extrapolated your approach to development then we shouldn't allow
highlighting editors, debuggers (Linus has argued this), and similar modern
tools. Heck you should have to wait hours/days for program output like they
did in the punch card days.

I suspect that skilled people can make good use of the tools available, be
they completely manual or with lots of automation. It is quite possible the
automation doesn't help them that much. But the vast majority of people are
closer to average.

~~~
ricardobeat
It certainly depends on the results you're after. If you're photographing
landscapes you're not worried about "missing something", and taking lots of
pictures without manual adjustments won't do any good.

I thought the diopter adjustment could compensate for glasses?

~~~
rogerbinns
> I thought the diopter adjustment could compensate for glasses?

The problem I had when I tried it was that I could get everything looking
perfect through the viewfinder, but the resulting picture was blurred. My
current prescriptions round to the nearest 0.25 (glasses) / 0.5 (contacts)
dioptres.

Even if I could make a perfect adjustment, my vision alters during the day.
For example when tired things can get a little blurry.

~~~
holri
I have 3.2 dioptres and have no problem at all with the Hasselblad's manual
focus.

~~~
rogerbinns
I'm at -5 and -6 and the last time I tried things was with a Nikon many years
ago.

------
arocks
I am not sure if every product benefits from reducing Cognitive Overhead. Some
of the areas where it would work are novel technologies (like Shazam or Wii)
or mass market products.

How many of us would use a music player that just had start and stop buttons?
Wouldn't that seem like an inconvenience? If the product is already familiar,
then it is better to improve on the already familiar interface. When it is a
completely new technology, you get an opportunity to make a fresh start.

Similarly, many niche products have power users who would be unhappy if the
interface is too "dumbed down". The classic case of IDEs versus UNIX text
editors comes to mind. An IDE is quite obvious to use, but many would not
find it as efficient.

Cognitive overhead is a good guideline but we need to understand our target
market first.

~~~
quanticle
>How many of us would use a music player that just had start and stop buttons?

Two words: iPod Shuffle. Or, if you want a _slightly_ more complicated
example, Pandora. Pandora literally started out with three buttons: "I like
it", "I don't like it" and "Stop". Recently, they added a fourth, "Skip".

~~~
saurik
<http://www.apple.com/support/ipodshuffle/reference/>

~~~
socillion
The 3rd generation shuffle has no controls apart from a switch between loop,
shuffle, and off.

<https://en.wikipedia.org/wiki/IPod_Shuffle#Third_generation>

~~~
saurik
No: if you double-click, triple-click, and (potentially in combination) hold
that button you can do all of the normal things you can do with a media
player. Go read the user manual. People who believe otherwise (and I've seen
many) simply assume user interfaces don't exist if they aren't obvious.

------
npsimons
Much as I like having options, I have come to realize this applies to code as
well. On my current project, I keep telling my partner that we need to keep
the cognitive load down, because eventually we will have to bring on other
people. Being that it's written in C++ (please don't flame me; it's what we
have to work with), I'd love to use things like template metaprogramming and
multiple inheritance. Since we are already working in a domain that doesn't
always follow common sense (RF/E&M), and we are also using some design
patterns that take a little getting used to in C++, we've had to eschew a few
features of the C++ language. It's probably for the best anyways.

------
jamesrcole
Cognitive overhead doesn't just apply to products we'd think of as simple.

Vim is an example. One of the key benefits of Vim's editing style is that it
reduces (one kind of) cognitive overhead. It really does.

You want to delete the current sentence? Just type 'das'. Want to change the
text within the quotes? Just type 'ci"'. It allows you to more directly
express your editing operations, rather than having to translate them into a
number of steps. That reduces the cognitive overhead of that action.
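
That composability can be illustrated with a toy dispatcher (this is not Vim
itself, just a sketch of the operator + text-object grammar; the tables below
list only a few of the real commands):

```python
# Vim commands compose an operator with a text object: "das" = delete +
# around-sentence, 'ci"' = change + inside-quotes. You learn O operators and
# T text objects, not O*T separate commands - which is the cognitive saving.

OPERATORS = {"d": "delete", "c": "change", "y": "yank"}
TEXT_OBJECTS = {
    "as": "around sentence",
    "is": "inside sentence",
    'a"': "around quotes",
    'i"': "inside quotes",
}

def describe(command):
    # First key is the operator, the rest names the text object it acts on.
    op, obj = command[0], command[1:]
    return f"{OPERATORS[op]} {TEXT_OBJECTS[obj]}"

print(describe("das"))   # delete around sentence
print(describe('ci"'))   # change inside quotes
```

Three operators and four objects already yield twelve editing operations,
each expressed directly rather than as a sequence of cursor moves.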

It's also a case where you need a certain amount of training before you can
perform the actions that reduce the cognitive overhead. Nothing says that the
reduction in cognitive overhead has to be there from the get-go.

I'm not saying that Vim's operation reduces cognitive overhead in all ways,
but that it does in one specific respect.

------
swatkat
Off topic: what's up with the url ;)

~~~
jordn
You're right, what extra information is this URL sending? I haven't seen it
in any other TechCrunch-linked article.

    
    
      http://techcrunch.com/2013/04/20/cognitive-overhead/?icid=wym3&grcc2=870a9d585d2156a619b69a6861141a34~1366515093871~fca4fa8af1286d8a77f26033fdeed202~507a1debf7b9102c09107d1e3b60a173~1366514940000~0~13~0~0~0~0~0~0~7~13~26~H4sIAAAAAAAAAIVQy2rDMBD8lfoDFEtaaSUHSik99RxyLhtpVQv8CLackIu-vU5_ICzMwM7AzmxfyvXYtoVDH5ZtCv0hzGOrpYJWmp3bkZYgaIoL87ryJErP4j4vQ9xx2_HCYtxCv3MpvIicxJ1FT1FYKcZ5YbHmIYd5EjcaBn6sbUWpjJGoDVqnrVbou9q_ihHm3ymXfGMx33jpmWL7kUOO7_fHCFX5WmXVtqZAJpGnpLTH6Mm5pFECpMgctdSvDV13gaBsBAWJIlqGJJ9u4yUCIdW65sLf8ShQopPeemtBo3POW_wJNF4p_05P3SIoB_sZ0ymAzqHSe8jnnE_NeW_D8e1UqPDaGOmb0zZNj_1J3HjpGnAHQO-lbYTS-iAB4X_ffH1WfTGA6QKWWBmfPCZWBFpjUtaBS_UP0H6PptUBAAA

------
jdrobins2000
Great article, and totally agree.

I am working on a product where that is a critical design principle. An
interesting related question I am wrestling with: can a product be successful
by reducing overall cognitive load for a user, but carrying a larger cognitive
load than the average app? That is, it reduces complexity for users, but since
it deals with higher complexity concepts, it still nets out to being more
complex. If so, how much additional cognitive load is tolerable/enjoyable? My
guess is a little, but not much (in the mass market,
of course).

------
john_w_t_b
My problem with startups is the cognitive overhead in linking their company
name to their function. I cannot remember what Trello does, for example.
Naming of services should be more obvious.

~~~
emiliobumachar
Domain name squatting makes that much more difficult.

~~~
david_lieb
If your product is a mobile app primarily, domain names are far less
important. (We don't own bump dot com, and while it's hard to prove the
absence of a thing, we don't think it hurts us at all).

------
thefsb
This is hilarious: "Web 2.0 took over, yielding big buttons, less text, more
images, and happier users."

See also: <http://www.thebaffler.com/past/the_meme_hustler>

------
dcw303
Products that have low cognitive overhead:

\- Original iPod

\- Fender Stratocaster

\- Roland TB-303

\- Casio F91W

\- Unix (Ok, it's not a simple product, but "everything is a file" goes a long
way.)

------
michaelochurch
The "test on the young, old, and drunk" concept got me thinking: this might be
why the winter travelers among us are so much better at design.

I always thought my design skill and difficulties were separate issues,
perhaps connected at some neurological level, but I think the wide variations
of intellectual ability (+/- 15 IQ swings that normal people don't experience)
are _why_ we develop such creativity and design sense. I have a week or two
every year when I'm basically incapable of doing most of the stuff that's
normally easy, but during that time I learn how to design for _that_ person.
How do you get the person whose cognitive resources are strained (fatigue, not
stupidity) to see the value right away, rather than fear and uncertainty and
chaos?

We have to be careful, though. It's not that the rest of the world is
_stupid_. The cognitive-load problem isn't about stupidity. It's about the
fact that there are a million things competing for people's attention, and
unless we can prove right away that our wares are cool (by demonstrating value
in a simple, low-cognitive-load way) we fade into the noise.

~~~
irremediable
What is a "winter traveler"? On searching for the term, I only saw websites
about travelling during the winter. Is it a sufferer of seasonal affective
disorder?

~~~
michaelochurch
Person with mild mental illness that results in clearer perception, e.g.
depressive realism, and often better character. (When your biology is
difficult, you don't get to fuck around the way most people do.)

Traveling through the forest during summer is much more comfortable, but you
can't get as sharp a feel of the landscape because the trees are leafed out.

