- https://amzn.to/2KiKQEg (don't recommend)
- https://amzn.to/2GOHoz4 (written by DuckDuckGo founder Gabe Weinberg)
- https://amzn.to/31lfK4Q (a classic)
- https://amzn.to/31tX17l (haven't read, but recommended by others)
Here's a representative excerpt from early in the book:
“Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work. You are able to perceive this vertical shift as the ball changed its location by about three feet.
Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift.”
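The excerpt is just Galilean relativity: both observers agree on the vertical drop, and the horizontal shift the fish sees is simply the ship's speed times the fall time. Here's a minimal sketch of that arithmetic; the speed and drop height are my own illustrative assumptions, not numbers from the book.

```python
import math

g = 9.8            # m/s^2, gravitational acceleration
drop_height = 0.9  # m, roughly the "three feet" in the excerpt
ship_speed = 5.0   # m/s, assumed constant eastward velocity (my number)

# Time for the ball to fall: h = (1/2) g t^2  =>  t = sqrt(2h / g)
fall_time = math.sqrt(2 * drop_height / g)

# Scientist's (ship) frame: only vertical motion is visible.
vertical_shift = drop_height

# Fish's (water) frame: the same vertical drop, plus the ship's motion.
horizontal_shift = ship_speed * fall_time

print(f"fall time: {fall_time:.2f} s")
print(f"vertical shift (both frames): {vertical_shift:.2f} m")
print(f"horizontal shift seen only by the fish: {horizontal_shift:.2f} m")
```

The scientist isn't wrong, just frame-bound: with no external reference, the `ship_speed * fall_time` term is invisible to him.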
The other half of the book is Charlie Munger quotes. I would say skip the trouble and go straight to reading “Almanack”.
The fallacy fallacy is a logical fallacy that occurs when someone assumes that because an argument contains a logical fallacy, its conclusion must be wrong.
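You can see the gap mechanically with a truth table. This is a small sketch (my own example, using "affirming the consequent" as the fallacious form): the argument form is invalid, yet there are rows where its conclusion is still true, which is exactly what the fallacy fallacy overlooks.

```python
from itertools import product

# Affirming the consequent: from "P implies Q" and "Q", conclude "P".
invalid_rows = []          # premises true but conclusion false: proves invalidity
true_conclusion_rows = []  # premises true AND conclusion true: fallacy, true anyway

for p, q in product([True, False], repeat=2):
    implies = (not p) or q      # material conditional P -> Q
    premises = implies and q    # both premises hold in this row
    if premises and not p:
        invalid_rows.append((p, q))
    if premises and p:
        true_conclusion_rows.append((p, q))

print(invalid_rows)          # non-empty: the form is invalid
print(true_conclusion_rows)  # non-empty: the conclusion can still be true
```

A fallacious argument tells you nothing either way about the conclusion; you still have to check it on its own merits.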
Sometimes the application of the fallacy may not be obvious, requiring the convincing you ask for; sometimes it's so obvious that no further explanation is needed. Maybe just ask for clarification when it's not clear to you?
But I agree with your point - substance over labels!
This is similar to My Cognitive Bias, a browser plugin (Chrome, Firefox) that opens a random cognitive bias each time you open a new tab. It's basically flashcards for embedding the knowledge. I probably found that one here too.
One thing that’s missing: how these models connect, and ways of picking the models you need for a given thought experiment. This is a service or app I would gladly pay for, particularly if it let me add my own models, relate them in a smart way, and so on.
But I agree, writing down your thoughts on how models interact helps you understand their focal points and limitations.
To extend this line of thinking: one might hope that the synthesis of multiple models would become more elegant, but that elegance would likely come at the expense of applicability to particular situations. This tension is discussed extensively in the philosophy of science and in complexity theory.
I always have a feeling that this might be a fallacy, because tools are usually defined by "carrying out a specific function", but there is no specific function for programming languages or data-storage technologies other than "make the machine do" and "store data", respectively.
The rest just seems to be side effects. Kind of like how you can either drive a nail into a wooden board or put a gaping hole in a human's skull, whereas the function of a hammer is just to apply force to a relatively small surface (more or less). There are many right tools for driving nails into wooden boards or producing gaping holes in human skulls, which is why I'm starting to think that the phrase "Use the right tool for the job" is preemptive hindsight.
You could always choose the right tool - so why didn't you do that?
My theory is that pretty much any bias or fallacy can be derived from simpler rules of causation: truth, and the logic of necessity and sufficiency.
Appeal to Authority: Is it really true that if A says x is true, then x is really true? Is x really true?
Confirmation bias: Is it really true that if I want x to be true, it is true? Is x really true?
Survivorship bias: Is it really true that if x is true, then this set of x's causes was sufficient to yield x? Is this set of x's causes always sufficient to yield x?
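The sufficiency question above can be made concrete by checking a cause-set against all observed cases, not just the visible successes. This is a minimal sketch of that check; the cases and cause labels are invented for illustration.

```python
# Each case pairs a set of causes with whether the outcome occurred.
# Survivorship bias looks only at the first row; the failure rows are
# the ones that usually go unobserved.
cases = [
    ({"worked hard", "took risks"}, True),   # a visible success
    ({"worked hard", "took risks"}, False),  # an invisible failure
    ({"worked hard"}, False),                # another invisible failure
]

def always_sufficient(cause_set, cases):
    """True only if every observed case containing these causes
    produced the outcome (i.e. the causes were always sufficient)."""
    relevant = [outcome for causes, outcome in cases if cause_set <= causes]
    return bool(relevant) and all(relevant)

# The survivor's cause-set stops looking sufficient once the failures
# with the same causes are included: that is the bias's blind spot.
print(always_sufficient({"worked hard", "took risks"}, cases))  # False
```

Run the same check on a dataset containing only survivors and it returns True, which is precisely how the bias produces its false conclusion.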