
List of Cognitive Biases - plibither8
https://en.wikipedia.org/wiki/List_of_cognitive_biases
======
buybackoff
I was a big fan of cognitive psychology and biases and tried to read as many
books as possible on the subject. I even had a mind-map of finished and TODO
books
([https://ic.pics.livejournal.com/buybackoff/8746464/10862/108...](https://ic.pics.livejournal.com/buybackoff/8746464/10862/10862_original.gif)
upper-right corner).

I think the best practical material on the subject is Charlie Munger talks.
Particularly his talk "On the psychology of human misjudgement"
([https://buffettmungerwisdom.files.wordpress.com/2013/01/mung...](https://buffettmungerwisdom.files.wordpress.com/2013/01/mungerspeech_june_95.pdf))
and "Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger", which
is an edited collection of his many talks.

My main conclusion from reading all the books/talks is that you can only be
aware that the biases exist; you cannot tell which one(s) are at play in your
brain right now, and you cannot "fix" a bias with any cognitive effort. So
"sleeping on" or delaying an important decision is the best practical way I
have found to mitigate these ever-present, pervasive biases.

~~~
mettamage
There are efforts to train biases out of your system. A long time ago I took
such a training from a renowned scholar online (he had made a simple web
page). I forget what it was called, but I remember one thing clearly that I
use in everyday life.

When you estimate something, never ever estimate a single value. Always
estimate within a range. He showed in his training that, at least for me, this
led to saner averages/point estimates. It helped me. Unfortunately, that's
simply anecdata.
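The habit is easy to make mechanical. A minimal sketch (the "midpoint as point estimate" rule here is my illustrative assumption, not something from the training described):

```python
# Record estimates as ranges instead of single numbers, then derive a point
# estimate from the range. The midpoint rule below is just one simple choice.

def estimate(low: float, high: float) -> dict:
    """Capture an estimate as a low/high range (e.g. a rough 90% confidence
    interval) and derive a point estimate from it."""
    if low > high:
        raise ValueError("low bound must not exceed high bound")
    return {
        "low": low,
        "high": high,
        "point": (low + high) / 2,  # midpoint as the derived point estimate
        "spread": high - low,       # a wide spread signals low confidence
    }

# "The migration will take 3 to 9 days" instead of a bare "6 days".
e = estimate(3, 9)
print(e["point"], e["spread"])  # 6.0 6
```

The point is that forcing yourself to name a low and a high bound makes you confront your uncertainty before committing to a number.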

In any case, if you google "debiasing training" or something similar, you will
see that many efforts are underway.

~~~
stiff
IIRC the book "How to measure anything" contains some advice like this:
[https://www.amazon.com/dp/1118539273/](https://www.amazon.com/dp/1118539273/)

The author also offers webinars, so maybe it was from him:
[https://www.howtomeasureanything.com](https://www.howtomeasureanything.com)

~~~
mettamage
I think it was from a pretty prominent researcher that had a .edu site, but I
couldn't find it and I really tried.

------
baxtr
I am wondering which of these are "solid" enough to be considered? I am asking
because of the "replication crisis", which also affected Kahneman et al.

EDIT: I am _genuinely_ interested in knowing, since it would be helpful to
know which of these are reliable - in order to change my behavior accordingly.

[https://www.theatlantic.com/science/archive/2018/11/psycholo...](https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/)

[https://replicationindex.com/category/kahneman/](https://replicationindex.com/category/kahneman/)

~~~
mar77i
Beware of potential recursion.
[https://en.wikipedia.org/wiki/Bias_blind_spot](https://en.wikipedia.org/wiki/Bias_blind_spot)

------
etaioinshrdlu
How many of these will appear naturally in powerful AI systems?

Perhaps many! Maybe by trying to emulate a human brain we will end up
recreating its flaws.

I am very excited about the progress of deep learning applied to symbolic,
logical reasoning, like theorem proving. Theorem verification is easy and
tractable; proving is not.

We can have heuristic algorithms come up with provably correct algorithms!
That is vaguely analogous to a human writing a program then proving it
correct. Now that will be useful.

~~~
matthewtoast
I wonder if we might achieve better* results in some systems if we added in
these biases on purpose. (I.e. treated them like features, not bugs or even
emergent patterns.)

------
SubiculumCode
Some of which may be replicated.

I'd be tempted to down-vote myself for snarky trolling except that I work in
the field of psychological research, and perhaps it is my bias, but many of
the cognitive biases that came from social-psychology research do not stand up
to scrutiny, too frequently resulting from bad statistical practice...at least
two decades ago.

~~~
mycl
Can you expand on that? My impression is that Kahneman and Tversky "proved"
that human cognition is not Bayesian and now much of cognitive psychology is
turning around and saying, no, they didn't, and it is. As a layperson, I don't
know whom to believe.

~~~
reallydontask
Richard Nisbett claims that, with training, a lot of the biases can be
overcome (I'm paraphrasing).

There is an interesting Coursera course of his:
[https://www.coursera.org/learn/mindware](https://www.coursera.org/learn/mindware)

------
samdung
This is purely my opinion.

All theories within Psychology and Economics are based on people being
'rational'. Anything contrary to the theory is branded 'irrational' and given
a name. The name usually sounds like a 'disease/ailment'.

~~~
jonathanstrange
Well, people shared your intuition for a while, but then new results came up.
This literature started with the supposedly rational, compelling axioms of
decision making by Ramsey, von Neumann, Savage, etc., which are ultimately
based on measurement theory. People noticed early that humans seem to remain
rational even when they violate intuitively acceptable rationality postulates.

Take Luce's coffee cup example as an illustration. You prefer black coffee to
sweet coffee. Suppose you compare coffee with no sugar to coffee with one
grain of sugar added. You're indifferent: a~b. Then add another grain, and so
on. You will get comparisons a~b, b~c, c~d, d~e, ..., j~k, and then suddenly
a>k, a violation of the supposed transitivity of equipreference (aka
indifference, "equally good"). Yet that behavior still seems rational.
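The chain is easy to model: indifference built on a perceptual threshold cannot be transitive. A minimal sketch, where the 3-grain just-noticeable-difference is an assumed illustrative value, not a figure from Luce:

```python
# Toy model of Luce's coffee-cup example: pairwise indifference below a
# just-noticeable-difference (JND) threshold, yet a strict preference
# between the endpoints of the chain.

JND = 3  # grains of sugar below which two cups taste identical (assumed)

def indifferent(a: int, b: int) -> bool:
    """Cups a and b (grains of sugar) taste the same when their difference
    falls below the just-noticeable-difference threshold."""
    return abs(a - b) < JND

def strictly_preferred(a: int, b: int) -> bool:
    """The less sweet cup a is preferred to b only when the difference
    in sweetness is actually noticeable."""
    return a < b and not indifferent(a, b)

cups = list(range(11))  # cup i contains i grains of sugar (a=0, ..., k=10)

# Every adjacent pair is indifferent: a~b, b~c, ..., j~k ...
chain_indifferent = all(
    indifferent(cups[i], cups[i + 1]) for i in range(len(cups) - 1)
)

# ... yet the endpoints are clearly distinguishable: a>k.
endpoints = strictly_preferred(cups[0], cups[-1])

print(chain_indifferent, endpoints)  # True True
```

Each one-grain step is below the threshold, so indifference holds link by link, while the accumulated ten-grain difference is plainly noticeable, breaking transitivity exactly as the example describes.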

So people relaxed the rationality requirements, and now there is the problem
of what 'rationality' actually means.

Fast forward a few years, and empirical studies found the following strange
behavior: if you mention a high number before asking people for some fictional
charity contribution, they tend to be willing to pay more than if you mention
a low number first, and it does not matter how you mention the numbers.
(Actual experiments were done by having people spin a rigged lottery wheel
before doing some completely different task, for example.) You can even tell
participants about the effect beforehand, and it will still be observed.

I see no way how this "anchoring effect" could be described as being rational.

But many people nowadays share your opinion, and there is a whole field called
"ecological rationality" in which scholars try to re-interpret supposedly
irrational biases as good and rational heuristics increasing e.g. evolutionary
fitness. I don't think they're right in general, though. Some of the biases
are just flaws. If I flash a number before your eyes and this affects your
subsequent decision making, then that's not a useful heuristic, it's a flaw in
your brain's processing. My 2 cents; others disagree with me.

~~~
simonh
>Some of the biases are just flaws.

It's probably a trait that is (or was) advantageous in one context but
disadvantageous in this new or less vital context.

------
DantesKite
I once tried memorizing this list of cognitive biases but eventually came to
the conclusion that they were ill-defined and in some cases not even biases at
all, but heuristics to keep me alive and well-functioning.

~~~
failrate
I tried to memorize them, but recency bias meant I only remember the last one
in the list.

~~~
futureastronaut
I tried to think of others, but availability bias means I'm stuck on recency
bias.

------
tasuki
It's an alphabetically sorted list; sure, one can read the whole thing top to
bottom, but it just doesn't _flow_ very well.

If you're interested in rationality and cognitive biases, I'd highly recommend
reading Eliezer Yudkowsky's "Rationality: A-Z" sequences:
[https://www.lesswrong.com/rationality](https://www.lesswrong.com/rationality)

------
alecco
Thinking Fast and Slow explains many cognitive biases.

[https://en.wikipedia.org/wiki/Thinking%2C_Fast_and_Slow](https://en.wikipedia.org/wiki/Thinking%2C_Fast_and_Slow)

------
dang
Previous threads:
[https://hn.algolia.com/?query=List%20of%20Cognitive%20Biases...](https://hn.algolia.com/?query=List%20of%20Cognitive%20Biases%20comments%3E10&sort=byDate&dateRange=all&type=story&storyText=false&prefix=false&page=0)

------
JonathanCreek
I had a class at Babson about “Decisions”. Best class ever. My favorite case
was about the decision-making process at NASA that led to the Challenger
disaster. Alongside the case (you can find multiple versions online, and it is
an awesome read) there was this HBR article about flaws in the decision-making
process: “The Hidden Traps in Decision Making” by John S. Hammond, Ralph L.
Keeney, and Howard Raiffa.
[https://www.researchgate.net/publication/12948100_The_Hidden...](https://www.researchgate.net/publication/12948100_The_Hidden_Traps_in_Decision_Making)

~~~
zomg
i went to babson as well -- too bad i missed this class, sounds interesting!

i grabbed a copy of that HBR article and will read it later. thanks!

------
ujjain
I think human rational thinking is completely f*cked. We are just not capable
of thinking very logically/rationally.

I think that's all the more reason to meditate, be mindful, and adopt
philosophies that are not always rational, but good instead.

Also, the truth is often very complex or very dark, so thinking will only
bring incorrect, simplified (black/white) conclusions or
negativity/resentment.

~~~
mar77i
You just reminded me...
[https://www.xkcd.com/1163/](https://www.xkcd.com/1163/)

------
srik
Being reminded of cognitive biases on a regular basis does wonders for staying
grounded! I currently use a browser plugin for that, but this poster seems
like a better alternative:
[https://designhacks.co/products/cognitive-bias-codex-poster](https://designhacks.co/products/cognitive-bias-codex-poster)

~~~
joshschreuder
What's the browser extension?

~~~
srik
[https://chrome.google.com/webstore/detail/my-cognitive-bias/...](https://chrome.google.com/webstore/detail/my-cognitive-bias/cmapeoagadpppgajnicpagcgpdklfhch)

------
sizzle
Anyone want to work with me to create a cognitive debiasing AI algorithm/chat
bot?

~~~
kekeke
That would be a really interesting project. I'm only up for it if it's open
source, though.

------
asavadatti
According to Daniel Kahneman, the research on whether biases can be overcome
is "not encouraging".
[https://getpocket.com/explore/item/the-cognitive-biases-tric...](https://getpocket.com/explore/item/the-cognitive-biases-tricking-your-brain)

------
glutamate
The list is missing "The Bias Bias in Behavioral Economics"
([https://www.nowpublishers.com/article/Details/RBE-0092](https://www.nowpublishers.com/article/Details/RBE-0092))

~~~
ergest
Gerd Gigerenzer's work and the book Simple Rules are far more efficient ways
of making better decisions than reading a list of 100+ "biases" and trying to
overcome them. A good intro is Risk Savvy.
[https://www.youtube.com/watch?v=KnRWVmWQG24](https://www.youtube.com/watch?v=KnRWVmWQG24)

------
rumpope
A more digestible format for this:
[https://busterbenson.com/piles/cognitive-biases/](https://busterbenson.com/piles/cognitive-biases/)

------
11thEarlOfMar
Which biases are startup founders more likely to fall into?

------
Tepix
Do we suffer from normalcy bias when dealing with the effects of global
warming? It sure looks that way.

------
owenshen24
While I think it's good for pedagogical purposes to have a catalogue of many
examples of where our thinking goes wrong, I worry that these lists can give
off the wrong idea that our thinking is broken in so many "different" ways.

In some sense, many of these biases seem like specific instances of a more
general phenomenon. For example, illusion of control and pareidolia both seem
like they'd arise if you buy into the brain as doing predictive processing
([https://slatestarcodex.com/2017/09/05/book-review-surfing-un...](https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/)).
So it's not exactly that we have over 100 ways that our thinking goes wrong,
but that the same types of mistakes occur in different ways.

In which case, for preventative reasons, knowing the core mechanism at play
seems much more important. Similarly, I feel that lists of mental models might
also be missing the point; no one can really go through a list of 100+ items
to figure out which one is at play. You're going to need a smaller, more
general toolkit.

~~~
perfmode
what toolkit do you use?

~~~
owenshen24
I don't have something that feels comprehensive yet, but another general
concept, aside from predictive processing, that I've been thinking about is
substituting an easy thing for a hard thing when the easy thing looks like the
hard thing.

Examples:

- Following through the steps of a proof vs. covering up the proof and doing it yourself
- Asking yourself if something sounds familiar instead of trying to summarize it
- Criticizing an idea instead of adding a new one or suggesting an improvement

------
toomim
Wow, there are now 194 listed cognitive biases. The number keeps growing.

It is an awful sign for a scientific community when it is working on a theory
that needs 194 different exceptions and adjustments to make the model fit the
data. It means the underlying model probably isn't right.

This reminds me of when astronomers thought the universe revolved around the
Earth rather than the Sun. The Earth-centered theory made sense until we got
better data: sometimes planets appeared to go backwards, sometimes they
appeared to swirl around a line, sometimes there were swirls within the
swirls, and sometimes swirls within those:
[https://invisible.college/attention/dissertation/retrogrades...](https://invisible.college/attention/dissertation/retrogrades.png)

Astronomers had to account for this data with a complex set of retrograde
motions and epicycles layered upon epicycles. These complexities only
increased as telescopes and charting techniques improved, uncovering more
deviations from the idealized orbits. Take, for instance, the numerous
parameterized gears required for an early Galilean planetary model:

Only when Copernicus and Kepler put the _sun_ at the center could the models
be simplified. Suddenly, each planet's orbit fit a perfect ellipse -- no
epicycles, no retrograde motions.

We can do the same thing for Economic theory, by moving the center of the
utility function from the _future_ to the _present_. Right now, Economics
models humans as optimizing future outcomes. The modeled humans are focused on
the future: they allocate infinite attention to computing the optimal action
for the future. But real humans have scarce attention for computing the
future. When they run out of attention, these 194 heuristics and biases
display themselves in full effect.

We solve this dilemma when we evaluate the utility function in the present,
rather than the future. Instead of assuming humans have infinite attention,
the utility function itself predicts _how_ humans allocate their scarce
attention. The new utility function evaluates the utility of attention itself.

And it turns out that we can empirically measure the value of this utility
function by running controlled experiments online with thousands of
participants, paying them different amounts of money to attend to different
tasks. This lets us measure how much utility people ascribe to paying
attention to television shows, sexy pictures, video games, advertisements,
iPhone screens, or reddit posts. We can measure it in pennies per second.

This new model is a measurable _Attention Economics_ :
[https://invisible.college/attention/dissertation.html](https://invisible.college/attention/dissertation.html)

~~~
n4r9
As I understand it, cognitive biases aren't exceptions or adjustments to a
model; they are observed phenomena. Any model of psychology or economics that
purports to unify these phenomena must be able to explain/predict each bias
individually. How does "attention economics" predict the planning fallacy?

~~~
toomim
Cognitive biases _are_ exceptions or adjustments to the _rational model_ :
[https://en.wikipedia.org/wiki/Rational_choice_theory](https://en.wikipedia.org/wiki/Rational_choice_theory)
On the whole, the field of Behavioral Economics is a correction to the
Rational Economic Model. Behavioral Economics says "people are rational,
_except_ for the ways in which they are biased." They call it "bounded"
rationality.

For instance, the planning fallacy is a correction to the idea that people
will rationally predict how much time something will take. So we first
estimate how long something might take, and then the planning fallacy teaches
us to increase it to account for our bias.

> Any model of psychology or economics which purports to unify these phenomena
> must be able to explain/predict each bias individually

That's close to correct, but I'd like to distinguish explaining the _bias_ vs.
the _data_. The new theory should explain the _data_ , not the biases in the
old theory. Consider that Kepler's elliptical orbit theory didn't explain each
individual planetary epicycle -- it didn't need to. Kepler's theory didn't
need epicycles at all to explain the data.

Likewise, Attention Economics doesn't need a "Planning Fallacy", because it
doesn't assume humans are good planners. It rather looks at how people
actually allocate their attention while planning. Consider that if people
allocate more attention to their plans, they are likely to make better
estimates. So how are they allocating their attention when planning? In the
"planning fallacy" [1], Kahneman and Tversky envisage "that planners focus on
the most optimistic scenario for the task, rather than using their full
experience of how much time similar tasks require." I haven't run the
experiments myself, but one could certainly test for this in an Attention
Economic experiment, by seeing how much more attracted people are to focus on
the most optimistic scenario for their task, rather than the pessimistic
scenarios. And then we can learn _why_ they focus on the optimistic scenario,
by manipulating other variables until we see which ones lead people to
consider optimistic vs. pessimistic scenarios when planning.

[1]
[https://en.wikipedia.org/wiki/Planning_fallacy](https://en.wikipedia.org/wiki/Planning_fallacy)

~~~
n4r9
Thanks for your detailed response. I'm afraid I'm still not sure how this
simplifies much. The fact that people focus specifically on an optimistic
scenario rather than having a random variance like a normal distribution seems
very significant to me. If you can't predict this from first principles then
you have to add in some extra explanatory factors, and then you still get your
cursed epicycles.

------
foobar_
Defense mechanisms are far more interesting.

------
hsnewman
Are these just made up or found by scientific method?

~~~
gmiller123456
Those two things are not necessarily mutually exclusive. But a lot of them
have been studied pretty extensively; I know I've seen a lot of research on
anchoring, in particular.

