Concepts for Your Cognitive Toolkit (mcntyr.com)
240 points by cryoshon on Dec 31, 2015 | 44 comments



Actually a kind of cool little article. However, there are a few small errors/points that I think could improve it:

14. Time value of money: It's actually the opposite of what's stated. We value money today MORE than money tomorrow, not less. The discount rate is how much more money/return you'd need in the future to compensate. (There's a quick numeric sketch at the end of this comment.)

21. Bikeshedding phenomenon is not really substituting an easy problem for a hard one: it's more a comment on how people will focus on the problems that they are cognitively able to understand. Or, more flippantly, the time spent on a problem is directly proportional to its triviality.

32. Hawthorne effect: This one is close to my heart, because my mother mis-stated this one when I was little. It's that people act differently when they are AWARE they are being observed, not just that they act differently when observed.

34. Flynn effect: I feel it's important to recognise the subtlety that the Flynn effect is about the observed increase in INTELLIGENCE TEST SCORES over several decades. Indeed, the whole point of learning about the Flynn effect is to learn about the ambiguity and controversy between tests, test scores, and general intelligence, and the general investigation into why this apparent increase is happening. I feel that to simplify this to "IQ has been increasing" is to miss the entire point/controversy/investigation of the Flynn effect.

46. Cognitive dissonance: This one is, I think, mis-stated/erroneous. Cognitive dissonance is indeed the discomfort experienced by humans holding conflicting beliefs, but it does not imply that one of the conflicting beliefs has to be discarded. Rather, it is the interesting ways that human beings deal with conflicting beliefs without discarding them which I think is the real value/most interesting point of the concept of cognitive dissonance.

47. Coefficient of determination: Just to save space in this comment, read the Wikipedia article if you're into this kind of thing: https://en.wikipedia.org/wiki/Coefficient_of_determination. Personally, I'd barely call this a concept... it's more of a model-specific stats metric really, but I'm not really in the mood to argue it...
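
Re 14, a quick numeric sketch of discounting, assuming a made-up 5% discount rate purely for illustration:

    # Hypothetical 5% discount rate: $100 a year from now is worth
    # less than $100 today.
    rate = 0.05
    future_value = 100.0

    present_value = future_value / (1 + rate)  # ~95.24 today
    compensation = 100.0 * (1 + rate)          # you'd need ~105 in a year
                                               # to match $100 today
    print(round(present_value, 2), round(compensation, 2))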


All good points, thanks. 14. Yes, that was a mistake, sorry. 32. Agreed, edited to avoid confusing people. 34. Agreed and added. 46. Yeah, one isn't always discarded. For example, humans are pretty good at compartmentalising. 47. Maybe not a concept (I'm not one to quibble over definitions - http://lesswrong.com/lw/np/disputing_definitions/), but I feel as though it can be useful. Beyond the formal use, it can be helpful to think about how much variance is explained in a model by any particular attempt. For instance, say someone claims that women are bad at giving directions. Even if on average this is true, gender might only account for a small amount of the differences in direction-giving ability. In this example, you'd want to see some data, but at least it can be instructive as to how one might conceptually picture the claim someone is making.
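
To make the "variance explained" idea concrete, here's a rough sketch with made-up numbers (it assumes numpy, just for brevity):

    import numpy as np

    y = np.array([3.0, 4.5, 5.0, 7.0, 8.5])       # observed values
    y_pred = np.array([3.2, 4.0, 5.5, 6.8, 8.0])  # a model's predictions

    ss_res = np.sum((y - y_pred) ** 2)    # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)  # total sum of squares
    r2 = 1 - ss_res / ss_tot              # fraction of variance explained

    print(r2)  # near 1: the model explains most of the variance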


Very well stated thoughts! I found this quick summary more readable than the whole article, even though I enjoyed it as well.


"Substituting an easy problem for a hard one" sounds more like this:

https://en.wikipedia.org/wiki/Substitution_bias_%28psycholog...


Ahh, these are "mental models". Charlie Munger gave a great speech on the advantages of thinking using mental models and how going through them in a checklist fashion makes for some powerful thinking. Speech here: https://old.ycombinator.com/munger.html

Also, here's an ebook outlining more mental models in case anyone's interested: http://www.thinkmentalmodels.com/


Found the article to be a good summary of a lot of concepts I had encountered on their own, but hadn't seen together in list form before. If you enjoyed this, you might find Daniel Dennett's "Intuition Pumps and Other Tools for Thinking" interesting [1]. I would highly recommend "The Philosopher's Toolkit: A Compendium of Philosophical Concepts and Methods" by Julian Baggini and Peter Fosl as well [2].

- [1] Link to Talk at Google By Dennett: https://youtu.be/4Q_mY54hjM0

- [2] Link to pdf: http://www.mohamedrabeea.com/books/book1_10474.pdf



Biases and heuristics are the same thing.

The banana example for marginal thinking contradicts the expected value text. Expected value really only works that way for things that you expect to keep happening.

The efficient market hypothesis doesn't quite hold in the real world, but it's still a useful heuristic.

The typical mind fallacy is a special case of availability bias.

Aumann's agreement theorem assumes that both agents have the same set of goals and values. This doesn't entirely hold for humans.

Against Chesterton's fence, there's also the principle that it's easier to ask forgiveness than permission. Or, if you don't know what the fence is for, take it down and watch what happens.

I seem to recall hearing that the Hawthorne Effect could just as well be that people are more productive at the beginning of the week. Productivity experiments should not be on a weekly schedule.


- Biases and heuristics are both priors, but they are particular kinds of priors. The former mostly refers to priors which systematically shift our thinking slightly away from the truth and which we can only counteract by self-reflection. The latter is a prior which we consciously choose, which is very rule-like and probably overly general.

- Aumann's agreement theorem assumes that all actors are perfect Bayesian actors and have plenty of time to talk to each other. Since goals and values are not preprogrammed (unlike needs), it follows that they can update on these things as well, if one party convinces the other one of better goals and values for maintaining/achieving their needs. Unless I'm overlooking something, it must assume that the actors have the same needs.


I'm confused by the "goals" and "values" terminology; I've never once heard of these terms in this area of research. Aumann's agreement theorem is simple: two agents with common priors cannot agree to disagree if their posterior beliefs are common knowledge.

Beliefs are common knowledge if I know that you know that I know... (and so on) that something is true. This occurs anytime there is trade between two agents, such as in stock markets. If I can see that you can see that this is the price we're trading at, then the traded price is common knowledge.

One of the curious implications of this theorem is that no rational agent in any market would ever agree to trade with another rational agent.

So Aumann's agreement theorem is really a warning against applying game theory to every scenario. Specifically, the common prior assumption limits the usefulness of game theory in real settings. He makes no mention of "goals" or "values", only the assumptions that the prior beliefs of the two agents are the same and their posteriors are common knowledge.

In the context of the article, disagreement means either: 1) one of you is not Bayesian rational or cannot update probabilities properly (bounded rationality) or 2) both of you must have different priors (subjective beliefs).

Both points are likely to be the case in reality, although we dislike the second point, since once we begin to entertain the idea that agents have subjective priors, you can rationalize anything and "anything goes".
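
A toy sketch of the common-prior point (this is just Bayesian updating from a shared Beta prior, not the theorem itself; the numbers are made up):

    # Two agents share a Beta(1, 1) (uniform) prior over a coin's bias.
    # Different private observations give different posteriors; pooling
    # all the information makes them agree again.
    def posterior_mean(prior_a, prior_b, heads, tails):
        # Beta-Bernoulli update: posterior is Beta(a + heads, b + tails)
        a, b = prior_a + heads, prior_b + tails
        return a / (a + b)

    alice = posterior_mean(1, 1, heads=8, tails=2)  # privately saw 8H, 2T -> 0.75
    bob = posterior_mean(1, 1, heads=3, tails=7)    # privately saw 3H, 7T -> ~0.33
    both = posterior_mean(1, 1, heads=11, tails=9)  # everything shared   -> ~0.55

    print(alice, bob, both)

Any disagreement that remains once everything is shared has to come from different priors or from one of them updating incorrectly.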


Needs are what your neural architecture has evolved to optimize (e.g. sustenance, pain avoidance and affiliation). Goals are the preferred states the brain learns in order to optimize its needs (via reward). Values are cultural or (fortunately, but also necessarily, due to evolution) Schelling-point memes which manifest as virtual rewards and mostly enable humans to cohabit (e.g. you are good if you don't litter the street, i.e. that will increase your chance of future social reward).


I think "Algernon's law" and the "Efficient Market Hypothesis" are suspect because they amount to "Just So Stories"[1] about the vastly complicated topics of neurobiology and investing. Limiting the field of inquiry of researchers and specialists in these topic by proclaiming these broad general laws, and thus ignoring possibly useful new technology, theory, or experimental evidence is not rational.

1. http://rationalwiki.org/wiki/Ad_hoc


The big problem with the EMH is how easy it is to miss what it actually means: That human behavior, when combined, is aggregating all publicly known information. So a market can have a price that is a terrible estimate in hindsight, but that's because of hidden information. Pure market speculation is betting on your own personal information being better than the information available at large. If you really have better information, you are improving the quality of the market price. In no way does it mean that going against a market price, or common sense, is asinine: It's just asinine to do so without first getting some special insight.

Another common misconception is that it requires perfectly rational market participants, when all it needs is a semblance of rationality in the aggregate: That individual mistakes happen in multiple directions. When that's the case, if people can know that the population is biased in a specific direction on average, the market adapts by providing better returns to the people betting in the opposite direction, which lets them bet harder later.

We have plenty of evidence that the EMH works in a general sense: It makes predictions, after all, and they are testable. The rise of index funds is, in practice, saying that the EMH is true, and that we have so many people trying to devise the right prices of stocks that we are better off not paying them a dime and going with their average prediction, saving ourselves the fees. And what do we see? Index funds that perform well enough that they are the biggest investors in pretty much every component of the S&P 500. And their share will keep growing until so much of the money is in an index that the average fund manager can beat the market by enough to make the fees worthwhile.

So I don't see how this hurts researchers and specialists: It just makes them focus on the search for novel information. It's the same when we try to create a new JavaScript framework, for instance: If web development has stunk for a couple of decades, what is the key insight that everyone else has missed? Without a key, novel insight, it'd all be busywork.


There's a classic joke related to the EMH:

An undergraduate and an economics professor are walking across campus. The undergrad sees a bill on the ground. Getting closer, he sees that it's a $100 bill.

"Professor, there's a $100 bill on the ground there!", he says.

"No there isn't", the professor responds. "If there were really a $100 bill, someone would have picked it up".


And the brilliant continuation of this story I once heard:

Everybody always tells this joke to make fun of economics professors who believe in the EMH. But - have you ever actually found a $100 bill lying on the ground? If not, that's the EMH in action :)


More specifically, Algernon's law seems to lean on the fallacy of humans being some kind of endpoint of evolution, or perhaps a misunderstanding of the time scales involved.

The efficient market hypothesis and a number of other items here are useful ideas to know, but you simply have to understand the limited applicability of modeling humans as rational actors. It's the equivalent of spherical cows or frictionless planes in Physics.


Also, they become intellectual "defaults" such that any argument which does not assume their truth must be incorrect. They become self-referential, considered true because people assume them to be true.


Phrased more optimistically, they're axioms: assumptions underlying all the research, such that if you did change them, your own hypothesis wouldn't be able to lean on any of the existing research for support—because your hypothesis would exist in a different axiomatic universe from everything else.

That's not to say you shouldn't; the axioms assumed in the model might not be the same ones that actually apply to our own universe. But frequently, the only thing that studies like economics care about is whether something is true or false in the model, rather than whether it's true or false in reality. In economics (and in many other disciplines) the model is an idealized universe we're interested in studying for its own sake, whether or not it is a perfect (or even close) description of reality.

In other words: while we may never learn whether there can be such a thing as an efficient market on Earth, we will be able to know decisively whether there can be an efficient market in the nation of RationalActoria, a land-plate resting on a giant spherical cow. And knowing that can actually be helpful! Especially, the knowledge that something can't be true, even under those conditions, can frequently tell us a lot about whether it can happen in the real world.

And that's the real concept to learn there, I think: that even a stupid, low-dimensional model that doesn't have much precision in reflecting the real world can very effectively answer real questions, especially questions about what can't happen—because if it can't happen in "ideal conditions", it certainly can't happen in real ones.


That's well put, but clearly not the implied takeaway, given that they are interspersed with other items describing subtle real-world human biases.


Aumann's Agreement Theorem is slightly misstated. A better statement would be: if two rational agents disagree when they both have exactly the same information, one of them must be wrong. The qualifier is crucial; very often disagreement is due to differences in information, not failure of rationality. So you should take disagreement seriously if you have reason to believe the other person has significant information that you don't.


A better statement would be: if two rational agents share the same prior beliefs and their posterior beliefs are common knowledge, irrespective of what private information they have been exposed to, they cannot rationally agree to disagree.

When are beliefs common knowledge? When both agents can directly observe one another's beliefs. I.e. Bob must know Alice knows that Bob knows that... ad infinitum that XYZ is true. Mutually witnessing an event is sufficient for common knowledge.

I feel this is not a useful day-to-day heuristic since the theorem was intended to highlight deficiencies within the Bayesian rational paradigm (specifically the common prior assumption since game theorists weren't ready to abandon rationality in the 70's).


> irrespective of what private information they have been exposed to

I think this statement is too strong. I can see it being correct if the domain being reasoned about is monotonic (i.e., new information can never change the belief state of a statement once it is established), but most domains of real-world interest are not.


The bikeshed example is also somewhat misstated. In the original (fictional) story from the book Parkinson's Law, the issue is not that people looking at the design of a nuclear plant spend too much time looking at the bike shed design and not enough looking at things like nuclear safety. The issue is that the committee trying to decide whether various projects should be funded at all spends only about two and a half minutes in approving an expenditure of $10 million on a nuclear reactor, but spends about forty-five minutes arguing about the design of a bike shed, with the possible result of saving some $300. (They then spend an hour and a quarter arguing about whether to provide coffee for monthly meetings of the administrative staff, which amounts to a total annual expenditure of $57, and refuse to make a decision at all, directing the secretary to obtain further information so they can decide at the next meeting.)

The point being that "bikeshedding" is not (just) about what parts of a project to pay attention to, but which projects to pay attention to. Spend more time and effort paying attention to projects where there is more value at stake.


Am I the only one not finding the connection between the Anthropic Principle and the Sleeping Beauty problem completely obvious?


Great list, but I think the Schelling Point one (35) is blurring it with Schelling Fences. The former is (correctly) described as the point that people converge to in the absence of communication; the latter is the need to "hold the line" against proverbial camels that want to keep going further into our tent.


They're retracing a lot of the ground explored by Less Wrong. Look up "Rationality from A to Z" for a (long!) series of essays on all the topics that were mentioned.


I believe the title is actually "Rationality: From AI to Zombies", unless you're referring to something else.


Thanks for the recommendation! FYI for those interested, the ebook is 4.9* on Amazon. Here's the Amazon link:

http://smile.amazon.com/Rationality-From-Zombies-Eliezer-Yud...

I bought it and plan on reading it.



Most of these are pretty interesting. Algernon's Law, however, is a ridiculous misunderstanding of evolutionary biology.


There were some good ones in there that prompted me to follow up with some reading on my own, but for the most part these were quite elementary and unchallenging. I don't quite get all the praise this one got, considering all the more in-depth content that gets posted on here.


>money today is worth less than money in the future

The marginal utility of money decreases as the amount of money increases. So if you have less money in the future, its value to you may increase. This statement is faulty, I would say.
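
For what it's worth, a toy illustration of diminishing marginal utility (log utility is purely an assumption here, a common textbook choice):

    import math

    def utility(wealth):
        return math.log(wealth)

    # The extra utility from one more dollar shrinks as wealth grows.
    print(utility(101) - utility(100))        # ~0.00995
    print(utility(10_001) - utility(10_000))  # ~0.0001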


How do we reconcile "Algernon’s Law" with the Flynn effect?


The world changes, so we are not really adapting to the same conditions that our ancestors did. Let's go for a fake example.

Imagine that there is a mutation that hands a human 20 points of IQ, but then makes said human extremely nearsighted. For most of human evolution, that'd be a terrible call: Being able to see well was far, far more important than those 20 points of IQ. There are diminishing returns on both eyesight and smarts.

But today, we are smart enough to make bad eyesight a minor annoyance, as opposed to something crippling, as it was 2000 years ago. So if smarts and myopia were genetically related, then today we'd be selecting for more nearsighted people, because today being nearsighted is not a big deal, but the extra smarts are valuable. A change in the world leads to different optimal tradeoffs. The one difference is that now it's us selecting ourselves, and using technology to account for our genetic weaknesses: We are a bit ahead of Darwin's finches.

This is what is so amazing about the world today: We have social, behavioral selection mechanisms that work far faster than any external pressures we are facing. Think, for instance, of AIDS: For many years, a deadly STD with no cure and not even a treatment. Social adaptation to STDs (monogamy + condoms) and our awareness of the problem made it so that we didn't lose most of the population to it. Without rationality, we'd deal with a disease like that the way mosquitoes deal with pesticides: A whole lot of them die, but eventually a tiny minority has the right genes that make them resistant, and you get a new population of mosquitoes with different genetics. So by adapting technologically, our evolutionary pressures change completely.


Algernon's law essentially says that biologically simple, major changes to human intelligence are very unlikely to have been strictly better in the ancestral environment, or else they would have happened already. Let's look at Wikipedia's article on the Flynn effect and go down the list of major proposed causes:

> Schooling and test familiarity / Generally more stimulating environment

This one doesn't postulate an actual improvement in intelligence, though it might explain why the test scores are rising.

> Nutrition

Better nutrition has not been easy for most of human history, so this one is fine.

> Infectious diseases

Shifting childhood metabolic effort away from fighting off diseases is one of those tradeoffs mentioned in more detailed discussions of Algernon's law -- if you happen to be in a low-disease environment then, sure, that frees up resources for other things.


One is post-birth and the other pre-birth.


Time value of money is how money is worth more (not less) today than it is a year from now, which is why you would have to pay interest to give it to me in the future.



This is a fantastic article that I highly suggest everyone read! It contains a quick run down of a lot of cognitive tropes which can add a new perspective.


I think I would have added "attractor states" and "constraints".


Down for me


You shouldn't comment asking for people to read. It makes it look like you're gaining something from it.

Though, I will say it's a very in-depth article. It's a good share.


Maybe he just wants to engage in the conversation. I don't think it deserves to be called out like this.


>It makes it look like you're gaining something from it.

I object to the implicit assumption that "gaining" from sharing valuable content (corrected for conflicts of interest) is immoral or bad.

I would venture forth and suggest that a community of developers should strive to increase the financial and other "gains" of independent developers to counter the centralization of megacorporations.



