The Model Thinker is also an excellent book! It leans more toward mathematical models, but if you understand the intuitions behind the models, they are a great addition to your thinking tools.
Interesting that you don’t recommend the Farnam Street book. It’s been sitting on my shelf for a while, and I’m a big fan of a lot of what they put out. What don’t you like about it?
Have you ever had a subject you're comfortably familiar with explained to you through a strained analogy, in a patronizing manner that makes you question the speaker's own understanding of the material? This book is full of those.
Here's a representative excerpt from early in the book:
“Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work. You are able to perceive this vertical shift as the ball changed its location by about three feet.
Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift.”
The other half of the book is Charlie Munger quotes. I would say skip the trouble and go straight to reading Poor Charlie's Almanack.
Really nicely presented site, but I can't be the only one who's tired of what feels like people constantly name-dropping these models to make a point. Just because it has a name doesn't magically make your argument more impactful; you still have to convince me.
The fallacy fallacy is a logical fallacy which occurs when someone assumes that if an argument contains a logical fallacy, then its conclusion must necessarily be wrong.
The names are a shortcut so you don't have to spell out the steps of the argument every time. That's just language: words are shortcuts for things and ideas.
Sometimes the application of the fallacy may not be obvious, requiring the convincing you ask for, and sometimes it's so obvious that no further explanation is needed. Maybe just ask for clarification when it's not clear to you?
Thanks for sharing! I like the simple design. One piece of feedback I got when I wrote a blog post about mental models [1] was that mental models are easy to understand, but people struggle to figure out how to actually use them. In addition to the examples, it would be great to add concrete applications to each card.
This is similar to My Cognitive Bias: they have a browser plugin that opens a random cognitive bias when you open a new tab (Chrome, Firefox), basically like flashcards for embedding the knowledge. I probably found that one here too.
I’ve started to accumulate mental models on my personal site. I hate how buzzy the language has gotten, but I do see the value in this accumulation.
One thing that’s missing: how these models connect, and ways of picking the models you need for a given thought experiment. This is a service or app I would gladly pay for, particularly if it provided the ability to add my own models, relate them in a smart way, etc.
Choosing the mix of applicable models is the art of wisdom. Connecting all N models pairwise would mean N(N-1)/2 relations, i.e. O(N^2) complexity (quick sketch below).
But I agree, writing down your thoughts on how models interact helps you understand their focal points and limitations.
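To put rough numbers on that O(N^2) claim, here's a minimal Python sketch; it just counts unordered pairs, and the catalog sizes are purely illustrative:

    from math import comb

    # Number of unordered pairs among N models: C(N, 2) = N*(N-1)/2,
    # which grows quadratically in N.
    for n in (10, 50, 100, 500):
        print(f"{n} models -> {comb(n, 2)} pairwise connections")
    # 10 -> 45, 50 -> 1225, 100 -> 4950, 500 -> 124750

So even a modest catalog of 100 models implies nearly 5,000 pairwise write-ups, which is why curating a handful of relevant links per model scales better than documenting every pair.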
To extend this line of thinking: one might hope that the synthesis of multiple models would get more elegant, but this would likely come at the expense of interpretability in particular situations. This line of thought is discussed extensively in the philosophy of science and complexity theory.
But nobody needs the full pairwise everything, just a few related concepts, like "If this seems close but doesn't quite fit, have a look at P, Q, R, and S?"
Is there a named fallacy that the saying "use the right tool for the job", in the context of programming languages, data-storage technologies, etc., can be categorized under?
I always have a feeling that this might be a fallacy, because tools are usually defined as "carrying out a specific function", but there is no specific function for programming languages or data-storage technologies other than "make machine do" and "store data" respectively.
The rest just seems to be side effects. Kind of like how you can either insert a nail into a wooden board or produce a gaping hole in a human's skull, whereas the function of a hammer is just to apply force to a relatively small surface (more or less). There are many right tools for inserting nails into wooden boards or producing gaping holes in human skulls, which is why I'm starting to think the phrase "use the right tool for the job" is preemptive hindsight.
You could always choose the right tool - so why didn't you do that?
I just started this yesterday, and a quarter of the way in, the main thought I have is that I wish I'd read this in college. Picking up this stuff all at once would have been way more useful than doing it slowly (sometimes painfully) over a decade.
People love lists of models, fallacies, and cognitive biases. I don't think any list of them can be authoritative though because they all slice the concepts up differently. But when well-presented I think lists like these are still valuable because they're entertaining to review, and can remind people of how to be better thinkers.
My theory is that pretty much any bias or fallacy can be derived from simpler rules of causation: truth, and the logic of necessity and sufficiency.
Appeal to Authority: Is it really true that if A says x is true, that x is really true? Is x really true?
Confirmation bias: Is it really true that if I want x to be true, it is true? Is x really true?
Survivorship bias: Is it really true that if x is true, this set of x's causes was sufficient to yield x? Is this set of x's causes always sufficient to yield x?
The site does not seem to say who made it, but it sure reminds me of Shane Parrish (Farnam Street blog), who wrote a book about mental models, among other things.
All the time. I find they help you reason about the long-term outcomes of things, or the directions things are headed in. They provide a nice framework for thinking. Rather than looking at every situation on a case-by-case basis, you start to see aggregates of situations, and once you see the abstraction, everything becomes "another one of those". The more abstractions you find, the more you see the patterns.
[1] https://medium.com/@yegg/mental-models-i-find-repeatedly-use...
[2] http://www.defmacro.org/2016/12/22/models.html
[3] https://fs.blog/mental-models/
[4] https://nesslabs.com/mental-models
[5] https://amzn.to/2KiKQEg (Don't recommend)
[6] https://amzn.to/2GOHoz4 (written by DuckDuckGo founder Gabe Weinberg)
[7] https://amzn.to/31lfK4Q (A classic)
[8] https://amzn.to/31tX17l (Haven't read but recommended by others)