14. Time value of money: It's actually the opposite of what's stated. We value money today MORE than money tomorrow, not less. The discount rate is how much more money/return you'd need in the future to compensate.
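To make the discounting concrete, here's a minimal sketch; the 10% rate and dollar amounts are made-up illustrative numbers:

```python
# Present value: money arriving in the future is worth less today.
# Discounting at rate r per period converts a future amount into today's value.
def present_value(future_amount, rate, periods):
    return future_amount / (1 + rate) ** periods

# At a 10% discount rate, $110 a year from now is worth $100 today:
# you'd need that extra $10 in a year to be indifferent to $100 now.
pv = present_value(110.0, 0.10, 1)
```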
21. Bikeshedding phenomenon is not really substituting an easy problem for a hard one: it's more a comment on how people will focus on the problems that they are cognitively able to understand. Or more flippantly, the time spent on a problem is directly proportional to its triviality.
32. Hawthorne effect: This one is close to my heart, because my mother mis-stated this one when I was little. It's that people act differently when they are AWARE they are being observed, not just that they act differently when observed.
34. Flynn effect: I feel it's important to recognise the subtlety that the Flynn effect is about the observed increase in INTELLIGENCE TEST SCORES over several decades. Indeed, the whole point of learning about the Flynn effect is to learn about the ambiguity and controversy between tests, test scores, and general intelligence, and the general investigation into why this apparent increase is happening. I feel that to simplify this to "IQ has been increasing" is to miss the entire point/controversy/investigation of the Flynn effect.
46. Cognitive dissonance: This one is, I think, mis-stated. Cognitive dissonance is indeed the discomfort experienced by humans holding conflicting beliefs, but it does not imply that one of the conflicting beliefs has to be discarded. Rather, it is the interesting ways that human beings deal with conflicting beliefs that don't involve discarding them which are, I think, the real value/most interesting point of the concept of cognitive dissonance.
47. Coefficient of determination: Just to save space in this comment, read the Wikipedia article if you're into this kind of thing: https://en.wikipedia.org/wiki/Coefficient_of_determination. Personally, I'd barely call this a concept... it's more of a model-specific stats metric really, but I'm not really in the mood to argue it...
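For what it's worth, the metric itself is only a few lines; this is a hand-rolled sketch of the standard 1 - SS_res/SS_tot definition, not tied to any particular library:

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot: the fraction of variance in y_true
    # "explained" by the predictions y_pred.
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - ss_res / ss_tot

# Perfect predictions score 1.0; always predicting the mean scores 0.0.
```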
Also, here's an ebook outlining more mental models in case anyone's interested: http://www.thinkmentalmodels.com/
-  Link to Talk at Google By Dennett: https://youtu.be/4Q_mY54hjM0
-  Link to pdf: http://www.mohamedrabeea.com/books/book1_10474.pdf
The banana example for marginal thinking contradicts the expected value text. Expected value really only works that way for things that you expect to keep happening.
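The repetition point can be illustrated with a quick simulation; the bet here is made up (win $2 or lose $1 with equal probability, so EV = $0.50):

```python
import random

random.seed(0)

# A single bet: +$2 or -$1 with equal probability (expected value $0.50).
def play():
    return 2 if random.random() < 0.5 else -1

# The average payoff only approaches the expected value over many plays;
# for a one-shot decision the EV can be a poor guide to the outcome.
n = 100_000
average = sum(play() for _ in range(n)) / n
```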
The efficient market hypothesis doesn't quite hold in the real world, but it's still a useful heuristic.
The typical mind fallacy is a special case of availability bias.
Aumann's agreement theorem assumes that both agents have the same set of goals and values. This doesn't entirely hold for humans.
Against Chesterton's fence, there's also the principle that it's easier to ask forgiveness than permission. Or, if you don't know what the fence is for, take it down and watch what happens.
I seem to recall hearing that the Hawthorne Effect could just as well be that people are more productive at the beginning of the week. Productivity experiments should not be on a weekly schedule.
- Aumann's agreement theorem assumes that all actors are perfect Bayesian actors and have plenty of time to talk to each other. Since goals and values are not preprogrammed (unlike needs), it follows that they can update on these things as well, if one party convinces the other one of better goals and values for maintaining/achieving their needs. Unless I'm overlooking something, it must assume that the actors have the same needs.
Beliefs are common knowledge if I know that you know that I know (and so on) that something is true. This occurs any time there is trade between two agents, such as in stock markets. If I can see that you can see that this is the price we're trading at, then the traded price is common knowledge.
One of the curious implications of this theorem is that no rational agent in any market would ever agree to trade with another rational agent.
So Aumann's agreement theorem is really a warning against applying game theory to every scenario. Specifically, the common prior assumption limits the usefulness of game theory in real settings. He makes no mention of "goals" or "values", only the assumptions that the priors beliefs of the two agents are the same and their posteriors are common knowledge.
In the context of the article, disagreement means either: 1) one of you is not Bayesian rational or cannot update probabilities properly (bounded rationality) or 2) both of you must have different priors (subjective beliefs).
Both points are likely to be the case in reality, although we dislike the second point, since once we begin to entertain the idea that agents have subjective priors, we can rationalize anything and "anything goes".
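The effect of subjective priors is easy to see with a toy Bayes update (the numbers are arbitrary): two agents who see the same evidence, and agree on its likelihoods, still end up with different posteriors if they started with different priors.

```python
def posterior(prior, p_evidence_if_h, p_evidence_if_not_h):
    # Bayes' rule for a binary hypothesis H after one observation.
    joint_h = prior * p_evidence_if_h
    joint_not_h = (1 - prior) * p_evidence_if_not_h
    return joint_h / (joint_h + joint_not_h)

# Same evidence (likelihoods 0.8 vs 0.3), different priors:
optimist = posterior(0.9, 0.8, 0.3)  # started confident in H
skeptic = posterior(0.2, 0.8, 0.3)   # started doubtful of H
```

Both agents update in the same direction, but they still disagree afterwards, which is exactly the loophole the common prior assumption is meant to close.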
Another common misconception is that it requires perfectly rational market participants, when all it needs is a semblance of rationality in the aggregate: That individual mistakes happen in multiple directions. When that's the case, if people can know that the population is biased in a specific direction on average, the market adapts by providing better returns to the people betting in the opposite direction, which lets them bet harder later.
We have plenty of evidence that the EMH works in a general sense: it makes predictions, after all, and they are testable. The rise of index funds is, in practice, saying that the EMH is true, and that we have so many people trying to devise the right prices of stocks that we are better off not paying them a dime and going with their average prediction, saving ourselves the fees. And what do we see? Index funds that perform well enough that they are the biggest investors in pretty much every component of the S&P 500. And their share will keep growing until so much of the money is in an index that the average fund manager can beat the market by enough to make the fees worthwhile.
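The fee argument compounds more than people expect. A rough sketch, with made-up but plausible numbers (7% gross return, a 0.05% index fee vs a 1% active fee):

```python
# Compound growth net of fees; all numbers are illustrative assumptions.
def final_value(principal, gross_return, annual_fee, years):
    return principal * (1 + gross_return - annual_fee) ** years

index_fund = final_value(10_000, 0.07, 0.0005, 30)
active_fund = final_value(10_000, 0.07, 0.01, 30)
# Over 30 years the small fee gap compounds into a large difference in
# final wealth, which is the practical force behind the shift to indexing.
```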
An undergraduate and an economics professor are walking across campus. The undergrad sees a bill on the ground. Getting closer, he sees that it's a $100 bill.
"Professor, there's a $100 bill on the ground there!", he says.
"No there isn't", the professor responds. "If there were really a $100 bill, someone would have picked it up".
Everybody always tells this joke to make fun of economics professors who believe in the EMH. But - have you ever actually found a $100 bill lying on the ground? If not, that's the EMH in action :)
The efficient market hypothesis and a number of other items here are useful ideas to know, but you simply have to understand the limited applicability of modeling humans as rational actors. It's the equivalent of spherical cows or frictionless planes in Physics.
That's not to say you shouldn't use such models; the axioms assumed in the model might not be the same ones that actually apply to our own universe. But frequently, the only thing that studies like economics care about is whether something is true or false in the model, rather than whether it's true or false in reality. In economics (and in many other disciplines) the model is an idealized universe we're interested in studying for its own sake, whether or not it is a perfect (or even close) description of reality.
In other words: while we may never learn whether there can be such a thing as an efficient market on Earth, we will be able to know decisively whether there can be an efficient market in the nation of RationalActoria, a land-plate resting on a giant spherical cow. And knowing that can actually be helpful! Especially, the knowledge that something can't be true, even under those conditions, can frequently tell us a lot about whether it can happen in the real world.
And that's the real concept to learn there, I think: that even a stupid, low-dimensional model that doesn't have much precision in reflecting the real world can very effectively answer real questions, especially questions about what can't happen—because if it can't happen in "ideal conditions", it certainly can't happen in real ones.
When are beliefs common knowledge? When both agents can directly observe one another's beliefs. I.e., Bob must know that Alice knows that Bob knows that... ad infinitum, that XYZ is true.
Mutually witnessing an event is sufficient for common knowledge.
I feel this is not a useful day-to-day heuristic since the theorem was intended to highlight deficiencies within the Bayesian rational paradigm (specifically the common prior assumption since game theorists weren't ready to abandon rationality in the 70's).
I think this statement is too strong. I can see it being correct if the domain being reasoned about is monotonic (i.e., new information can never change the belief state of a statement once it is established), but most domains of real-world interest are not.
The point being that "bikeshedding" is not (just) about what parts of a project to pay attention to, but which projects to pay attention to. Spend more time and effort paying attention to projects where there is more value at stake.
I bought it and plan on reading it.
Marginal utility decreases as money increases. So if you have less money in the future, the value of each additional unit may increase. The reasoning here seems somewhat faulty, I would say.
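One common way to model this is log utility; it's an assumption (there are other utility functions), but it captures the idea that the same extra $100 is worth less to a richer person:

```python
import math

# Log utility: a standard, if simplistic, model of diminishing
# marginal utility of wealth.
def utility(wealth):
    return math.log(wealth)

# The same extra $100 adds less utility at higher wealth levels.
gain_at_1k = utility(1_100) - utility(1_000)
gain_at_100k = utility(100_100) - utility(100_000)
```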
Imagine that there is a mutation that hands a human 20 points of IQ, but then makes said human extremely nearsighted. For most of human evolution, that'd be a terrible call: being able to see well was far, far more important than those 20 points of IQ. There are diminishing returns to both eyesight and smarts.
But today, we are smart enough to make bad eyesight a minor annoyance, as opposed to the crippling problem it was 2000 years ago. So if smarts and myopia were related genetically, then today we'd be selecting for more nearsighted people, because today being nearsighted is not a big deal, but the extra smarts are valuable. A change in the world leads to a different optimal tradeoff. The one difference is that now it's us selecting ourselves, and using technology to account for our genetic weaknesses: we are a bit ahead of Darwin's finches.
This is what is so amazing about the world today: we have social, behavioral selection mechanisms that work far faster than any external pressures we are facing. Think, for instance, of AIDS: for many years, a deadly STD with no cure and not even a treatment. Social adaptation to STDs (monogamy + condoms) and our awareness of the problem made it so that we didn't lose most of the population to it. Without rationality, we'd deal with a disease like that the way mosquitoes deal with pesticides: a whole lot of them die, but eventually a tiny minority has the right genes that make them resistant, and you get a new population of mosquitoes with different genetics. So by adapting technologically, our evolutionary pressures change completely.
> Schooling and test familiarity / Generally more stimulating environment
This one doesn't postulate an actual improvement in intelligence, though it might explain why the test scores are rising.
Better nutrition has not been easy to come by for most of human history, so this one is fine.
> Infectious diseases
Shifting childhood metabolic effort away from fighting off diseases is one of those tradeoffs mentioned in more detailed discussions of Algernon's law -- if you happen to be in a low-disease environment then, sure, that frees up resources for other things.
Though, I will say it's a very in-depth article. It's a good share.
I object to the implicit assumption that "gaining" from sharing valuable content (corrected for conflicts of interest) is immoral or bad.
I would venture to suggest that the community of developers should strive to increase the financial and other "gains" of independent developers, to counter the centralization of megacorporations.