
Concepts for Your Cognitive Toolkit - cryoshon
http://mcntyr.com/52-concepts-cognitive-toolkit/
======
ACow_Adonis
Actually a kind of cool little article. However, a few corrections that I
think could improve it:

14\. Time value of money: It's actually the opposite of what's stated. We value
money today MORE than money tomorrow, not less. The discount rate is how much
more money/return you'd need in the future to compensate.
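The correction above is the standard present-value relationship. A minimal sketch, with hypothetical numbers (a 5% annual discount rate chosen purely for illustration):

```python
# Present value: a future cash amount is discounted back to what it is
# worth today. With a hypothetical 5% annual discount rate, $100 received
# a year from now is worth less than $100 in hand today.

def present_value(future_amount, rate, years):
    """Discount a future amount back to its value today."""
    return future_amount / (1 + rate) ** years

pv = present_value(100, 0.05, 1)
print(round(pv, 2))  # 100 / 1.05, about 95.24
```

The discount rate is exactly the "how much more you'd need to compensate" figure: at 5%, you'd need $105 in a year to match $100 today.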

21\. Bikeshedding phenomenon is not really substituting an easy problem for a
hard one: it's more a comment on how people will focus on the problems that
they are cognitively able to understand. Or more flippantly, the time spent on
a problem is directly proportional to its triviality.

32\. Hawthorne effect: This one is close to my heart, because my mother
misstated this one when I was little. It's that people act differently when
they are AWARE they are being observed, not just that they act differently
when observed.

34\. Flynn effect: I feel it's important to recognise the subtlety that the
Flynn effect is about the observed increase in INTELLIGENCE TEST SCORES over
several decades. Indeed, the whole point of learning about the Flynn effect is
to learn about the ambiguity and controversy between tests, test scores, and
general intelligence, and the general investigation into why this apparent
increase is happening. I feel that to simplify this to "IQ has been increasing"
is to miss the entire point/controversy/investigation of the Flynn effect.

46\. Cognitive dissonance: Is, I think, a misstatement/erroneous. Cognitive
dissonance is indeed the discomfort experienced by humans holding
conflicting beliefs, but it does not imply that one of the conflicting beliefs
has to be discarded. Rather, it is the interesting ways that human beings
deal with conflicting beliefs that don't involve discarding them which I think
is the real value/most interesting point of the concept of cognitive dissonance.

47\. Coefficient of determination: Just to save space in this comment, read the
Wikipedia article if you're into this kind of thing:
[https://en.wikipedia.org/wiki/Coefficient_of_determination](https://en.wikipedia.org/wiki/Coefficient_of_determination).
Personally, I'd barely call this a concept... it's more of a model-specific
stats metric really, but I'm not really in the mood to argue it...

~~~
petermcintyre
All good points, thanks. 14\. Yes, that was a mistake, sorry. 32\. Agreed,
edited to avoid confusing people. 34\. Agreed and added. 46\. Yeah, one isn't
always discarded. For example, humans are pretty good at compartmentalising.
47\. Maybe not a concept (I'm not one to quibble over definitions -
[http://lesswrong.com/lw/np/disputing_definitions/](http://lesswrong.com/lw/np/disputing_definitions/))
but I feel as though it can be useful. Beyond the formal use, it can be
helpful to think about how much variance is explained in a model by any
particular attempt. For instance, say someone claims that women are bad at
giving directions. Even if on average this is true, gender might only account
for a small amount of differences in direction-giving abilities. In this
example, you'd want to see some data, but at least it can be instructive as to
how one might conceptually picture the claim someone is making.
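The "how much variance does the grouping explain" reading of R² can be sketched with invented numbers (the scores and groups below are entirely made up to illustrate the direction-giving example, not real data):

```python
# R^2 as "fraction of variance explained": even when two group means
# differ, group membership may explain very little of the total variance.
import statistics

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean = statistics.fmean(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Hypothetical "direction-giving scores" for two groups: a small gap in
# means (56 vs 60) but a large spread within each group.
group_a = [55, 70, 40, 65, 50]
group_b = [60, 45, 75, 50, 70]
scores = group_a + group_b

# "Model": predict each person's score from their group mean alone.
preds = [statistics.fmean(group_a)] * 5 + [statistics.fmean(group_b)] * 5

r2 = r_squared(scores, preds)
print(r2)  # small: the grouping explains only a few percent of the variance
```

On these made-up numbers the group averages do differ, yet R² comes out close to zero, which is exactly the point: "true on average" can coexist with "explains almost nothing about any individual".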

------
steveeq1
Ahh, these are "mental models". Charlie Munger gave a great speech on the
advantages of thinking using mental models and how going through them in a
checklist fashion makes for some powerful thinking. Speech here:
[https://old.ycombinator.com/munger.html](https://old.ycombinator.com/munger.html)

Also, here's an ebook outlining more mental models in case anyone's interested:
[http://www.thinkmentalmodels.com/](http://www.thinkmentalmodels.com/)

------
aggerdy
Found the article to be a good summary of a lot of concepts I had encountered
on their own, but hadn't seen together in list form before. If you enjoyed
this, you might find Daniel Dennett's "Intuition Pumps and Tools for Thinking"
interesting [1]. I would highly recommend "The Philosopher's Toolkit: A
Compendium of Philosophical Concepts and Methods" by Julian Baggini and Peter
Fosl as well [2].

\- [1] Link to Talk at Google By Dennett:
[https://youtu.be/4Q_mY54hjM0](https://youtu.be/4Q_mY54hjM0)

\- [2] Link to pdf:
[http://www.mohamedrabeea.com/books/book1_10474.pdf](http://www.mohamedrabeea.com/books/book1_10474.pdf)

------
th0ma5
Google cache:
[https://webcache.googleusercontent.com/search?q=cache:mPkna3...](https://webcache.googleusercontent.com/search?q=cache:mPkna3e1knAJ:mcntyr.com/52-concepts-
cognitive-toolkit/+&cd=1&hl=en&ct=clnk&gl=us)

------
tbrownaw
Biases and heuristics are the same thing.

The banana example for marginal thinking contradicts the expected value text.
Expected value really only works that way for things that you expect to keep
happening.
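The point that expected value "only works that way for things that you expect to keep happening" is a statement about long-run averages, which a quick simulation can illustrate (the bet and its payoffs are hypothetical):

```python
# Expected value is a long-run average: over many repetitions the sample
# mean converges to it, but any single outcome can be far from it.
import random

random.seed(0)

# Hypothetical bet: win $10 with probability 0.1, else $0. EV = $1.
outcomes = [10 if random.random() < 0.1 else 0 for _ in range(100_000)]

sample_mean = sum(outcomes) / len(outcomes)
print(sample_mean)  # close to 1.0 across 100,000 trials
print(outcomes[0])  # a single trial is just 0 or 10, never "about 1"
```

For a one-shot decision the $1 expected value is never an outcome you can actually receive, which is why marginal, case-by-case reasoning can diverge from expected-value reasoning.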

The efficient market hypothesis doesn't quite hold in the real world, but it's
still a useful heuristic.

The typical mind fallacy is a special case of availability bias.

Aumann's agreement theorem assumes that both agents have the same set of goals
and values. This doesn't entirely hold for humans.

Against Chesterton's fence, there's also the principle that it's easier to ask
forgiveness than permission. Or if you don't know what the fence is for, take
it down and watch what happens.

I seem to recall hearing that the Hawthorne Effect could just as well be that
people are more productive at the beginning of the week. Productivity
experiments should not be on a weekly schedule.

~~~
sawwit
\- Biases and heuristics are both priors, but they are particular kinds of
priors. The former mostly refers to priors which systematically (if slightly)
shift our thinking away from the truth and which we can only counteract by
self-reflection. The latter is a prior which we consciously choose, which is
very rule-like and probably overly general.

\- Aumann's agreement theorem assumes that all actors are perfect Bayesian
actors and have plenty of time to talk to each other. Since goals and values
are not preprogrammed (unlike needs), it follows that they can update on these
things as well, if one party convinces the other one of better goals and
values for maintaining/achieving their needs. Unless I'm overlooking
something, it must assume that the actors have the same needs.

~~~
vbs_redlof
I'm confused by the "goals" and "values" terminology; I've never once heard
of these terms in this area of research. Aumann's agreement theorem is simple:
two agents with common priors cannot agree to disagree if their posterior
beliefs are common knowledge.

Beliefs are common knowledge if I know that you know that I know... (and so
on) that something is true. This occurs anytime there is trade between two
agents, such as in stock markets. If I can see that you can see that this is
the price we're trading at, then the traded price is common knowledge.

One of the curious implications of this theorem is that no rational agent in
any market would ever agree to trade with another rational agent.

So Aumann's agreement theorem is really a warning against applying game theory
to every scenario. Specifically, the common prior assumption limits the
usefulness of game theory in real settings. He makes no mention of "goals" or
"values", only the assumptions that the prior beliefs of the two agents are
the same and their posteriors are common knowledge.

In the context of the article, disagreement means either: 1) one of you is not
Bayesian rational or cannot update probabilities properly (bounded
rationality) or 2) both of you must have different priors (subjective
beliefs).

Both points are likely to be the case in reality, although we dislike the
second point, since once we begin to entertain the idea that agents have
subjective priors, you can rationalize anything and "anything goes".

~~~
sawwit
Needs are what your neural architecture has evolved to optimize (e.g.
sustenance, pain avoidance and affiliation). Goals are the preferred states
that the brain learns in order to optimize its needs (via reward). Values are
cultural or (fortunately, but also necessarily due to evolution) Schelling
point memes which manifest as virtual rewards that mostly enable human
cohabitation (e.g. you are good if you don't litter the street, i.e. that will
increase your chance of future social reward).

------
narrator
I think "Algernon's law" and the "Efficient Market Hypothesis" are suspect
because they amount to "Just So Stories"[1] about the vastly complicated
topics of neurobiology and investing. Limiting the field of inquiry of
researchers and specialists in these topics by proclaiming these broad general
laws, and thus ignoring possibly useful new technology, theory, or
experimental evidence, is not rational.

1\. [http://rationalwiki.org/wiki/Ad_hoc](http://rationalwiki.org/wiki/Ad_hoc)

~~~
0xcde4c3db
There's a classic joke related to the EMH:

An undergraduate and an economics professor are walking across campus. The
undergrad sees a bill on the ground. Getting closer, he sees that it's a $100
bill.

"Professor, there's a $100 bill on the ground there!", he says.

"No there isn't", the professor responds. "If there were really a $100 bill,
someone would have picked it up".

~~~
edanm
And the brilliant continuation of this story I once heard:

Everybody always tells this joke to make fun of economics professors who
believe in the EMH. But - have you ever _actually_ found a $100 bill lying on
the ground? If not, that's the EMH in action :)

------
pdonis
Aumann's Agreement Theorem is slightly misstated. A better statement would be:
if two rational agents disagree _when they both have exactly the same
information_ , one of them must be wrong. The qualifier is crucial; very often
disagreement is due to differences in information, not failure of rationality.
So you should take disagreement seriously _if_ you have reason to believe the
other person has significant information that you don't.
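The qualifier can be made concrete: Bayesian updating is deterministic, so identical priors plus identical evidence force identical posteriors. A minimal sketch with hypothetical numbers (shared prior 0.5, shared evidence with made-up likelihoods):

```python
# Bayes' rule is deterministic: two agents with the same prior and the
# same evidence must reach the same posterior. A persisting disagreement
# therefore signals differing information or a failure of rationality.

def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1 - prior))

# Hypothetical shared numbers: prior P(H) = 0.5, evidence E with
# P(E | H) = 0.8 and P(E | ~H) = 0.2, observed by both agents.
alice = posterior(0.5, 0.8, 0.2)
bob = posterior(0.5, 0.8, 0.2)
print(alice, bob)  # identical posteriors
```

If Alice had privately seen a second piece of evidence, her posterior would move and Bob's would not, and the resulting disagreement would reflect an information gap rather than irrationality.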

~~~
vbs_redlof
A better statement would be: if two rational agents share the same prior
beliefs and their posterior beliefs are common knowledge, irrespective of what
private information they have been exposed to, they cannot rationally agree to
disagree.

When are beliefs common knowledge? When both agents can directly observe one
another's beliefs. I.e. Bob must know Alice knows that Bob knows that... ad
infinitum that XYZ is true. Mutually witnessing an event is sufficient for
common knowledge.

I feel this is not a useful day-to-day heuristic since the theorem was
intended to highlight deficiencies within the Bayesian rational paradigm
(specifically the common prior assumption since game theorists weren't ready
to abandon rationality in the 70's).

~~~
pdonis
_> irrespective of what private information they have been exposed to_

I think this statement is too strong. I can see it being correct if the domain
being reasoned about is monotonic (i.e., new information can never change the
belief state of a statement once it is established), but most domains of real-
world interest are not.

------
pdonis
The bikeshed example is also somewhat misstated. In the original (fictional)
story from the book _Parkinson's Law_, the issue is not that people looking
at the design of a nuclear plant spend too much time looking at the bike shed
design and not enough looking at things like nuclear safety. The issue is that
the committee trying to decide whether various projects should be funded at
all spends only about two and a half minutes in approving an expenditure of
$10 million on a nuclear reactor, but spends about forty-five minutes arguing
about the design of a bike shed, with the possible result of saving some $300.
(They then spend an hour and a quarter arguing about whether to provide coffee
for monthly meetings of the administrative staff, which amounts to a total
annual expenditure of $57, and refuse to make a decision at all, directing the
secretary to obtain further information so they can decide at the next
meeting.)

The point being that "bikeshedding" is not (just) about what parts of a
project to pay attention to, but _which projects_ to pay attention to. Spend
more time and effort paying attention to projects where there is more value at
stake.

------
cjauvin
Am I the only one not finding the connection between the _Anthropic Principle_
and the _Sleeping Beauty problem_ completely obvious?

------
SilasX
Great list, but I think the Schelling Point one (35) is blurring it with
Schelling Fences. The former is (correctly) described as the point that people
converge on in the absence of communication; the latter is the need to "hold
the line" against proverbial camels that want to keep going further into our
tent.

------
bordercases
They're retracing a lot of the ground explored by Less Wrong. Look up
"Rationality from A to Z" for a (long!) series of essays on all the topics
that were mentioned.

~~~
ramidarigaz
I believe the title is actually "Rationality: From AI to Zombies", unless
you're referring to something else.

~~~
will_pseudonym
Thanks for the recommendation! FYI for those interested, the ebook is 4.9* on
Amazon. Here's the Amazon link:

[http://smile.amazon.com/Rationality-From-Zombies-Eliezer-
Yud...](http://smile.amazon.com/Rationality-From-Zombies-Eliezer-Yudkowsky-
ebook/dp/B00ULP6EW2?sa-no-redirect=1)

I bought it and plan on reading it.

------
antman
I am just putting these here for reference:

[https://en.wikipedia.org/wiki/List_of_fallacies](https://en.wikipedia.org/wiki/List_of_fallacies)

[https://en.wikipedia.org/wiki/List_of_cognitive_biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases)

------
johnloeber
Most of these are pretty interesting. Algernon's Law, however, is a ridiculous
misunderstanding of evolutionary biology.

------
tremguy
There were some good ones in there that prompted me to follow up with some
reading on my own, but for the most part these were quite elementary and
unchallenging. I don't quite get all the praise this one got, considering all
the more in-depth content that gets posted on here.

------
nitin_flanker
>money today is worth less than money in the future

The marginal utility of money decreases as the amount of money increases. So
if you have less money in the future, its value may increase. The quoted
statement is faulty, I would say.

------
bobcostas55
How do we reconcile "Algernon’s Law" with the Flynn effect?

~~~
hibikir
The world changes, so we are not really adapting to the same conditions that
our ancestors did. Let's go for a fake example.

Imagine that there is a mutation that hands a human 20 points of IQ, but also
makes said human extremely nearsighted. For most of human evolution, that'd
be a terrible call: being able to see well was far, far more important than
those 20 points of IQ. There are diminishing returns on both eyesight and
smarts.

But today, we are smart enough to make bad eyesight a minor annoyance, as
opposed to something crippling, as it was 2000 years ago. So if smarts and
myopia were related genetically, then today we'd be selecting for more
nearsighted people, because today being nearsighted is not a big deal, but
the extra smarts are valuable. A change in the world leads to different
optimal tradeoffs. The one difference is that now it's us selecting
ourselves, and using technology to account for our genetic weaknesses: we
are a bit ahead of Darwin's finches.

This is what is so amazing about the world today: we have social, behavioral
selection mechanisms that work far faster than any external pressures we are
facing. Think, for instance, of AIDS: for many years, a deadly STD with no
cure and not even a treatment. Social adaptation to STDs (monogamy + condoms)
and our awareness of the problem made it so that we didn't lose most of the
population to it. Without rationality, we'd deal with a disease like that the
way mosquitoes deal with pesticides: a whole lot of them die, but eventually
a tiny minority has the right genes that make them resistant, and you get a
new population of mosquitoes with different genetics. So by adapting
technologically, our evolutionary pressures change completely.

------
nunyabuizness
Time value of money is the idea that money is worth _more_ (not less) today
than it is a year from now, which is why you would have to pay interest to
give it to me in the future.

~~~
pedrosorio
True, but don't forget: [http://www.cnbc.com/2015/10/21/is-the-us-headed-for-
negative...](http://www.cnbc.com/2015/10/21/is-the-us-headed-for-negative-
interest-rates-commentary.html)

------
cryoshon
This is a fantastic article that I highly suggest everyone read! It contains a
quick rundown of a lot of cognitive tropes which can add a new perspective.

------
sawwit
I think I would have added "attractor states" and "constraints".

------
colinmegill
Down for me

------
Oxydepth
You shouldn't comment asking for people to read. It makes it look like you're
gaining something from it.

Though, I will say it's a very in-depth article. It's a good share.

~~~
vdaniuk
>It makes it look like you're gaining something from it.

I object to the implicit assumption that "gaining" from sharing valuable
content (corrected for conflicts of interests) is immoral or bad.

I would venture forth and suggest that the community of developers should
strive to increase the financial and other "gains" of independent developers,
to counter the centralization of megacorporations.

