The key problem is equating simplicity with correctness. This is usually disastrous. Once you feel that something is "correct" you stop looking for ways to falsify it. That's the exact opposite of what Occam's razor is for.
Instead, if you have two competing hypotheses (two hypotheses that the evidence supports equally), you use the one with fewer assumptions. Partly because the one with fewer assumptions will be easier to work with and lead to models that are easier to understand. But mostly because fewer assumptions make it easier to falsify.
Abusing this principle outside of the scientific method leads to all sorts of incredibly bad logic.
Famously, Karl Popper (1959) rejected the idea that theories are ever confirmed by evidence and that we are ever entitled to regard a theory as true, or probably true. Hence, Popper did not think simplicity could be legitimately regarded as an indicator of truth. Rather, he argued that simpler theories are to be valued because they are more falsifiable. Indeed, Popper thought that the simplicity of theories could be measured in terms of their falsifiability, since intuitively simpler theories have greater empirical content, placing more restriction on the ways the world can be, thus leading to a reduced ability to accommodate any future data that we might discover. According to Popper, scientific progress consists not in the attainment of true theories, but in the elimination of false ones. Thus, the reason we should prefer more falsifiable theories is because such theories will be more quickly eliminated if they are in fact false. Hence, the practice of first considering the simplest theory consistent with the data provides a faster route to scientific progress. Importantly, for Popper, this meant that we should prefer simpler theories because they have a lower probability of being true, since, for any set of data, it is more likely that some complex theory (in Popper’s sense) will be able to accommodate it than a simpler theory.
Popper’s equation of simplicity with falsifiability suffers from some well-known objections and counter-examples, and these pose significant problems for his justificatory proposal. Another significant problem is that taking degree of falsifiability as a criterion for theory choice seems to lead to absurd consequences, since it encourages us to prefer absurdly specific scientific theories to those that have more general content. For instance, the hypothesis "all emeralds are green until 11pm today, when they will turn blue" should be judged preferable to "all emeralds are green" because it is easier to falsify. It thus seems deeply implausible to say that selecting and testing such hypotheses first provides the fastest route to scientific progress.
So the more assumptions you add to a hypothesis, the more you get taxed on the likelihood of it being correct. Therefore it's more likely that the hypothesis with fewer assumptions is correct.
Suppose you have a hypothesis, H, which is based on assumptions A1, A2, ..., Ak. This can be phrased logically as an implication:
(A1 & A2 & ... & Ak) -> H
which is logically equivalent to the disjunction
!A1 | !A2 | ... | !Ak | H
The probability of that disjunction is
Pr(!A1 | !A2 | ... | !Ak | H)
= 1 - Pr(A1 & A2 & ... & Ak & !H)
since the disjunction is exactly the complement of that conjunction.
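A quick numerical sanity check of that identity (a toy sketch of my own, not from the thread), using an arbitrary joint distribution over three assumptions and H:

```python
import random
from itertools import product

# Toy check that P(!A1 | !A2 | !A3 | H) = 1 - P(A1 & A2 & A3 & !H).
random.seed(0)
k = 3
worlds = list(product([False, True], repeat=k + 1))  # (A1, A2, A3, H)
weights = [random.random() for _ in worlds]
total = sum(weights)
joint = {w: wt / total for w, wt in zip(worlds, weights)}  # arbitrary joint distribution

lhs = sum(p for w, p in joint.items() if any(not a for a in w[:k]) or w[k])
rhs = 1 - sum(p for w, p in joint.items() if all(w[:k]) and not w[k])
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```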
This is backwards. It should be
H => (A1 & A2 & ... & Ak)
Suppose you have the hypothesis that Bruce Wayne is Superman. Then you see the two of them in the same room together. It's still possible that Bruce Wayne is Superman, but only if he has an identical twin. Your credence that Bruce Wayne is Superman should decrease accordingly.
In other words, the claim "Assuming Q, I prove P" does not mean (to me) that Q must hold in order for P to hold, but rather that one way to show that P is true is to show that Q is true.
Assumptions are the left-hand side of an implication, by definition. (And the right-hand side is called "conclusion".)
The relevant statement here is not "for this hypothesis to be true, these assumptions must hold".
It is: "for this hypothesis to be derived this way, these assumptions must hold".
There is always the possibility that a hypothesis can be proved in a different way from different assumptions.
Unless, of course, your theory not only proves "(A1 & A2 & ... & Ak) -> H" but "(A1 & A2 & ... & Ak) <-> H". That is, if your theory shows that your hypothesis does not only follow from the assumptions, but is equivalent to its assumptions. That's quite a rare case, though.
I'm using the word "assumption" in a natural way. (Also in the way that it's used in Occam's razor.) If you have a definition that says I'm using it wrong, then your definition is silly.
If I think Bruce Wayne is Superman, I might base that on the fact that they're both physically very fit; that one would need to be very rich in order to have the kind of technology that is indistinguishable from alien powers; that Bruce Wayne's parents were murdered, and this could conceivably draw him to a life of fighting crime, which is a thing Superman does.
That sort of thing leads me to form the hypothesis: "Bruce Wayne is Superman".
But that sort of thing isn't what Occam's razor is about. It's about things that we haven't observed to be true, but which would need to be true for the hypothesis to hold. You should prefer a hypothesis that requires fewer such things.
If I see Bruce Wayne and Superman in the same room, then in order for Bruce Wayne to be Superman, he must have an identical twin. I haven't observed him to have one, but that's what the hypothesis requires. Accordingly, my confidence in the hypothesis decreases.
* I have heard this defined as hypothetical baggage or implicit baggage, i.e. if CO2 is not increasing temperature, then why not?
What is the probability that the hypothesis is correct?
What is the probability that the implication "from the assumptions follows the hypothesis" is correct?
Why? Because that's exactly what the theory proves logically. The theory can't tell you whether A1, ..., Ak are all true in the real world, but it does tell you that _if_ these are true, H is also true.
So this is really a typical strawman argument (although maybe unintentionally): it is different from the original question, and it boils down to a trivial but misleading answer.
Going back to the original question, you'd have to compare the two hypotheses H1 and H2, where the set of assumptions of H1 is a strict subset of the assumptions of H2:
A1 & A2 & ... & Ak -> H1
A1 & A2 & ... & Ak & ... & An -> H2
P(A1 & A2 & ... & Ak) > P(A1 & A2 & ... & Ak & ... & An)
Pd1 = P(not(A1 & A2 & ... & Ak) & H1)
Pd2 = P(not(A1 & A2 & ... & Ak & ... & An) & H2)
Pd1 = Pd2
P(H1) = P(not(A1 & A2 & ... & Ak) & H1) + P((A1 & A2 & ... & Ak) & H1)
= Pd1 + P((A1 & A2 & ... & Ak) & H1)
= Pd1 + P(A1 & A2 & ... & Ak)
= Pd2 + P(A1 & A2 & ... & Ak)
> Pd2 + P(A1 & A2 & ... & Ak & ... & An)
= Pd2 + P((A1 & A2 & ... & Ak & ... & An) & H2)
= P(not(A1 & A2 & ... & Ak & ... & An) & H2) + P((A1 & A2 & ... & Ak & ... & An) & H2)
P(H1) > P(H2)
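The inequality can also be illustrated numerically. Below is a small sketch (a toy construction of my own, not part of the parent's proof) in which each hypothesis is true exactly when its assumptions hold or a rare independent "derived some other way" event D occurs; H1 needs three assumptions, H2 needs those three plus two more:

```python
from itertools import product

# Five independent assumptions A1..A5, each true with probability 0.9, plus a
# rare independent event D ("the hypothesis holds for some other reason").
# H1 holds iff (A1 & A2 & A3) or D; H2 holds iff (A1 & ... & A5) or D.
p_assume = [0.9] * 5
p_other = 0.01

def prob(event):
    """Exact probability of `event` by enumerating all 2^6 truth assignments."""
    total = 0.0
    for world in product([True, False], repeat=6):  # (A1..A5, D)
        p = p_other if world[5] else 1 - p_other
        for a, pa in zip(world[:5], p_assume):
            p *= pa if a else 1 - pa
        if event(world):
            total += p
    return total

h1 = lambda w: all(w[:3]) or w[5]   # fewer assumptions
h2 = lambda w: all(w[:5]) or w[5]   # strictly more assumptions

print(f"P(H1) = {prob(h1):.4f}")    # ~0.7317
print(f"P(H2) = {prob(h2):.4f}")    # ~0.5946
assert prob(h1) > prob(h2)
```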
In other words, it was unclear to me what the answer to the question "Are the assumptions part of the hypothesis?" was. If, as I did, we assume that "yes, they are" then I don't think it follows that the probabilities will both be `1`, because we do not have logical proofs for the claims, the implication could only be true in the model (they are not necessarily entailments).
The waters are muddied further still when the hypothesis itself is phrased as an implication.
It also strikes me that for your line of reasoning to hold, it is not sufficient that Pd1 = Pd2 are small, but instead `Pd1 = 0 = Pd2`, in order to justify this line:
> = Pd1 + P((A1 & A2 & ... & Ak) & H1)
> = Pd1 + P(A1 & A2 & ... & Ak)
(A1 & A2 & ... & Ak) <-> H1
& (A1 & A2 & ... & Ak & ... & An) <-> H2
Ignore that, it is not tantamount, it is a weaker condition.
First of all, if you know that
(A1 & A2 & ... & Ak) -> H1
and you know that
A1 & A2 & ... & Ak
then you also know
(A1 & A2 & ... & Ak) & H1
Never a bad idea
1. The patient's humors are out of whack. The treatment is bloodletting.
2. The patient has a complex infection involving many physiological systems like immune system, foreign bacteria, gut flora, etc. The treatment is rest and administration of a lab engineered antibiotic for weeks.
1. The patient is possessed by a Djinn/Demon/spirit and needs an exorcism from a priest/shaman/imam.
2. The patient suffers from mental illness which is difficult to describe let alone treat. Treatment will be years, if not decades, of a mix of therapy, lifestyle changes, and medication.
Sadly, these attitudes still exist, even in the industrialized West. I often visit /r/paranormal because I have a thing for ghost stories, and sometimes there's a posting about "possession" which is very clearly about a mentally ill person. When I point this out and ask why this person isn't getting proper care, I'm downvoted to -5 near instantly. Yes, that's right, the guy saying "This isn't a demon, this poor woman needs proper medical help" gets argued with like it's the 13th century.
See also the BIC (Bayesian Information Criterion) for selecting models.
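As a rough illustration of how the BIC penalizes extra parameters (a minimal sketch with made-up data, not tied to anything above): fit polynomials of increasing degree and prefer the lowest BIC.

```python
import numpy as np

# BIC (up to an additive constant, assuming Gaussian errors with MLE variance):
#   BIC = n * ln(RSS / n) + k * ln(n)
# where k is the number of fitted parameters. Lower is better.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=x.size)  # truly linear data

def bic(degree):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

for d in range(1, 6):
    print(d, round(bic(d), 1))
# Degree 1 should usually win even though higher degrees fit the sample
# slightly better -- the complexity penalty does the Occam work.
```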
Occam's razor does not ask that you accept the simplest explanation. It asks that one take into account as many, and only as many, factors as necessary to explain a phenomenon. It does not promote fallacy or lessen rigour. It is a "loose leash but a tight chain".
As originally defined, it stated: Entities should not be multiplied without necessity (Entia non sunt multiplicanda praeter necessitatem).
Bertrand Russell held the principle in high regard. This quote from Newton encapsulates its application: "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances." It has been simplified for scientists in this form: "when you have two competing theories that make exactly the same predictions, the simpler one is the better." http://math.ucr.edu/home/baez/physics/General/occam.html
There is a line of scholarship that believes William of Occam (c. 1287-1347) never made the statement attributed to him.
What is termed Occam's razor by vog, asQuirrel, asmad and others is a statistical/logician's derivative, not really of concern to most people.
Which they don't, therefore you can't say Occam's razor isn't useful. :) funny
Also, I like Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity." Generalizing here, but people _are_ stupid.
In the search for a cure for headache, some guy took aspirin and did a magic rite, and he was cured; another guy just took aspirin and he was cured. Both accounts can be reproduced, and so far both have worked. Therefore, when doing the analysis you can ignore the bit about magic; while it may be relevant in some mystical sense (the spirits are happier if you do it, whatever), it is unnecessary to explain the cure of the headache.
You can't do a 'premature' optimization and not even attempt to piece together a theory that sounds complex, when maybe that is currently the only theory that explains the phenomenon.
The collection of data can have assumptions built in that you are unaware of. If you aren't thinking about the Earth moving when measuring the orbits of the other planets, the data shows that the orbits are pretty crazy.
Likewise when we measure at the quantum level things seem pretty crazy, things in two places at once, etc. Saying "Once something is small enough it can behave completely differently from what we observe at larger scales" is a pretty big assumption.
String theory trades some assumptions for others to rationalize some of the 'crazy behaviour' at small scales.
> two hypotheses for which the evidence supports both
If they are supported by the same evidence, then the truth that is being supported must be the same. The simplest hypothesis is therefore the smaller nutshell that captures that truth. The other one is bloated, and bloated information (what this is all about) punishes us with complexity and irrelevance.
Fundamentally, any theory is an abstraction of evidence. Ockham's razor is about the quality of said abstraction.
Rocks fall downwards because masses attract each other.
Rocks fall downwards because invisible ghosts make masses attract each other.
Both of these theories have the same amount of evidence supporting them, but one is strictly simpler than the other. If one theory isn't a simple reduction of the other then Occam's razor doesn't apply, but in those cases it is usually possible to devise experiments which disprove one of them.
Occam's Razor is a useful -- but not fool-proof -- tool for the latter.
It says nothing definitive about the former. At best, it makes a very broad statistical generalization.
If it's actually statistical (measured), and not anecdotal.
Because the idea is that the theory will be tweaked anyway. Nothing's gonna be perfect on the first try.
But if the theories produce different predictions, then it doesn't matter which one is simplest; the one which most closely matches reality is the truest.
There are an infinity of possible models to choose from, with most of those models containing no less information than the phenomenon they seek to model. Predictive power is what is important for models; a model that has enough dials to be adjusted to work with any new piece of data might be 'correct' but it is not useful. My favourite example of this is the fact that when the heliocentric model of the solar system was being developed, the geocentric model was providing much more accurate values for the positions of celestial bodies for a long time because it had had years of being tweaked to do so. Initially, it was the simplicity of the heliocentric model rather than its accuracy that was appealing.
Sure, it's far different doing daily training to get those concepts ingrained in your mind so you don't have to actively think about them, but it's nice to see them listed like this.
Here are a couple more:
- Overconfidence bias: we usually think we're better than the average on something we know how to do (driving) and worse than the average in something we don't (juggling), even if almost nobody knows juggling and everyone knows how to drive
- No alpha (aka can't beat the market): you can only consistently beat the market if you're far better at financial analysis than a lot of people who do it every day all day. So don't bother trying.
- Value chain vs. profits: you'll find that most of the excess profits in the value chain of a product will be concentrated in the link that has the least competition
- Non-linearity of utility functions: the utility of item n of something is smaller than that of item n-1. Also, the utility lost by losing $1 is smaller than 1/1000 of the utility lost by losing $1,000. This explains insurance and lotteries: using a linear utility function, both have a negative payout, but they make sense when the utility function isn't linear (a numeric sketch follows below)
- Bullwhip effect in supply chain: a small variation in one link of the supply chain can cause massive impacts further up or down as those responsible for each link overreact to the variation (also explains a lot of traffic jams)
- Little's law: in supply chain (and a lot of other fields): number of units in a system = arrival rate * time in the system
I'll add more as I think about them.
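On the non-linearity of utility functions bullet above, here is a tiny numeric sketch with made-up numbers, using log utility as one stand-in for a non-linear utility function: insurance has a negative expected payout yet a higher expected utility for a risk-averse agent.

```python
import math

wealth = 10_000
loss, p_loss = 8_000, 0.01
premium = 100  # more than the expected loss of 80, so insurance "loses money" on average

def expected_utility(outcomes):
    """outcomes: list of (probability, resulting wealth) pairs; log utility."""
    return sum(p * math.log(w) for p, w in outcomes)

uninsured = expected_utility([(p_loss, wealth - loss), (1 - p_loss, wealth)])
insured = expected_utility([(1.0, wealth - premium)])
print(insured > uninsured)  # True: with a concave utility, buying insurance is rational
```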
I'd argue that you can have alpha if you are better informed than everybody else. Financial analysis is the craft that comes after that. So yes, if all you have is financial analysis don't bother trying to beat the market. But if you have some unique insight, some information that the market doesn't have or doesn't see, then with some added financial analysis on top you do have an advantage that you can use to generate alpha.
Then "unique insight" is financial analysis, plus macroeconomic analysis, etc.
In other words, if everyone has access to the same info, you can only consistently do better than the market by consistently having better analysis than the market. Everyone is seeing the same info, so you don't do better by seeing some piece of info others are not seeing, but by using different weights in your analysis than the market is using.
And even in those cases, the market might stay irrational longer than you can stay solvent.
Say you've worked in a specific industry for a long time and you know all the players. You know where the technology is, what the challenges are and where the tech is going. You know how key companies are managed, you have an idea about their goals and strategies. You know who's best positioned for what's coming. This is just general knowledge that you've acquired through your job over the years.
Now say you've made enough money and retire. Because you know a thing or two about your industry you decide to buy or sell some stock. Can this be called insider trading? Perhaps. Is it illegal? Most likely not. Can you derive alpha from it? Hell yea.
Again, unless you are confident that you're a better expert than everyone else, you shouldn't think you will beat the market with just public info.
Now if you're saying that you keep getting updated insider information from the entire industry WHILE you're trading, well, then you're back to the "insider info" part.
I've summarized some of these strategies on my GitHub account; I call it an "emotional framework".
On supply chains, look into operations research; a lot of it came from military and industrial research, but it can be applied everywhere, say, dimensioning servers for concurrent users.
Also, keep in mind that supply and value chains are very different. Let's take online mobile ads:
- Supply chain: advertiser -> agency -> ad exchange -> website -> viewer
- Value chain: website -> ad platform -> hosting service -> internet connection -> device -> OS -> browser -> viewer -> advertiser
If someone controls the entirety of any of the links, they will hold excessive market power, and will be able to extract excess profits. Say, for example, iPhones were the only smartphones in existence; Apple would be able to control what goes through them (unless the government took action in some way), extracting disproportionate profits.
Now if there are 1000 smartphone manufacturers, competition between them would lower prices, pretty much killing excess profits from that link, which could be captured by other links in the chain.
I'd recommend one but it's been ages since I was an undergrad and I no longer remember specifics.
Remove Metcalfe's law. It is a massive overestimate. See http://www.dtc.umn.edu/~odlyzko/doc/metcalfe.pdf for the better n log(n) rule for valuing a network.
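For a sense of how large the overestimate gets, here is a quick comparison of the two valuation rules (my own illustrative numbers):

```python
import math

# Metcalfe's n^2 versus the n*log(n) rule from the Odlyzko paper linked above.
for n in (1_000, 1_000_000, 1_000_000_000):
    metcalfe = n * n
    nlogn = n * math.log(n)
    print(f"n={n:>13,}  n^2={metcalfe:.2e}  n*log(n)={nlogn:.2e}  ratio={metcalfe / nlogn:,.0f}")
# The gap grows like n / log(n), so for large networks the two rules differ
# by many orders of magnitude.
```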
And I find Le Châtelier's principle generally applicable, and not just to Chemistry. It says that if you observe a system at equilibrium, and try to induce a change, forces will arise that push it back towards the original equilibrium. It is one thing to recognize this at work in a chemical reaction. It is quite another to be blindsided by it in an organization.
See http://bentilly.blogspot.com/2010/05/le-chateliers-principle... for my explanation of why this holds in general outside of chemistry.
Humans have a desire for speed and an aversion to risk. Supposedly, if you introduce seatbelt laws, speed increases and risk stays the same. How do you predict that in advance? Why doesn't risk decrease and speed stay the same? Or why not speed increase a bit and risk decrease a bit?
I think it's a much better idea to study things like critical thinking, practical reasoning and operational leadership. Back in the day hacker values stated that you could ask for directions, but not for the answer. Because the process itself was as important as the answer. Not just for amusement, but because there might not be a right answer and the next time you're confronted with a similar problem you now have some experience of making those decisions.
A great deal of "stupidity" in technology these days seem to stem from schools that promote check box answers to complex problems and the popularity of these "laws" that make people so sure of themselves that it prevents them from proper reasoning.
For other people, you're right, it's about as useful as reading through a list of course descriptions rather than taking the actual courses.
That's exactly what I'm questioning though, if that's the right way to learn things. Especially with the premise of the article, there's a risk that people are just "collecting facts" to be used as anecdotes to avoid reasoning.
"For other people, you're right, it's about as useful as reading through a list of course descriptions rather than taking the actual courses."
Somewhat ironically, I read course curricula all the time to figure out which subjects are covered and what the good beginner books are.
I'm not sure what you mean by using them as tools to avoid reasoning; they are explicitly meant to help with reasoning. I don't know what sort of reasoning could be done without incorporating any sort of logical frameworks at all. That's all this stuff is: tools to aid reasoning by identifying common patterns and antipatterns in thoughts and perceptions about the world around us. Anyone who treats these ideas as absolute laws rather than occasionally (frequently) useful abstractions is doing it wrong.
Identifying patterns is second to learning something. Logical fallacies are examples of bad arguments. You should first learn how to evaluate an argument before trying to identify logical fallacies. Not only will you learn more, but there's a greater chance you will be able to put any given "model" in context. That people find things like logical fallacies useful is an indication that they don't understand the fundamentals.
* Dimensionality Reducing Transforms
* Hysteresis, Feedback
* Transform, Op, Transform
* Orthogonalization for things that are actually dependent
* Ratios, remove units, make things dimensionless
I am getting better at it, but I have to be conscious of it, lest I estimate for Superman by accident!
* Do I have the proper tools? Look up special fasteners
* I will misplace the screws; a magnet or plastic cups would help
* It might be dirty inside; I need something to clean it with
* I might drop a screw inside; tweezers
* Could be dark; headlamp
* Cable might not stay in place; tape
I was first introduced to the concept of hysteresis as an EE undergrad.
As I went on to grad school, which was heavily economics-based, I took a course in system dynamics at MIT. In the intro class, the prof said: "system dynamics will change your mental model of the world" (and it did). As we went through the course, I realized that while many of the concepts in the course were econ-based, in reality they were similar to my EE / mathematics concepts (capacitor = time delays, etc.). For me, system dynamics showed me that concepts in one discipline could be applied to a completely different discipline with great effectiveness. In doing further economics work, I was immersed in many other mental models. My friends and I would use these economics-based mental models - many of which are in the OP article - to communicate in an efficient manner at school and when we were out on the town, almost like a shortcut way to speak and efficiently organize thoughts / explain a given situation.
By the time I re-entered the workforce, post-grad school - working in the venture world - I was regularly and subconsciously thinking/communicating in terms of these econ-models. But, a big aha happened when one day I heard one of the partners at my firm use the term "hysteresis", not to describe a hardware company we were looking at, but to describe a very specific management-related situation with one of the entrepreneurs we were speaking with. And I understood exactly what he meant by that term, as it applied to this management situation. Aha! It turned out that my EE world provided me with a whole toolbox of mental models - just like econ - that I can not only use to express myself, but also to be understood! (fair enough: this was the valley, where most people I dealt with were engineers). It was one of those moments when I realized what I had learnt many moons ago had direct applicability to what I was currently doing but in a completely different context.
Seeing "hysteresis" in the parent's post brought back memories to that realization and its backstory, thus my comment.
This is a close enough comp for the System Dynamics course mentioned above: http://ocw.mit.edu/courses/sloan-school-of-management/15-871...
 An example is "stocks and flows", a way to view things as static or dynamic. This was used effectively in a cybersecurity market map / competitive analysis many years later.
Veblen goods clearly exist, but the evidence for the existence of Giffen goods is much more suspect. (Did the poor really eat more bread because the price of bread rose, or because there was an across-the-board increase in the price of all kinds of food?)
The Precautionary Principle is not just dangerous or harmful, but guaranteed suicide; as things stand right now, we are all under a death sentence. It needs to be replaced by the Proactionary Principle, which recognizes that we need to keep making progress and putting on the brakes is something that needs to be justified by evidence.
Any list that has sections for both business and programming needs some entry for the very common fallacy that you can get more done by working more hours; in reality, you get less done in a sixty-hour week than a forty-hour one. (Maybe more in the first such week, but the balance goes negative after that.)
The distinction between fixed and growth mindset is well and good as far as it goes, but when we encourage the latter, we need to beware of the fallacious version that assumes we can conjure a market into existence by our own efforts. You can't become a movie star or an astronaut no matter how hard you try, not because you lack innate talent, but because the market for those jobs is much smaller than the number of people who want to do them.
Though I think I'd agree that it's technically a different model, but related.
There are many events that we usually think are related to us, but actually aren't, like your boss or customer being angry is in most cases not about you but something else.
I have looked through a lot of pg's essays but didn't find it. He probably didn't remove it; it's just that I can't find it (/example).
If someone else finds it, please link.
What you describe is our inner voice doing the same thing. (This is my personal explanation!)
Google for "inner voice doubt" and find out more!
I personally use cost-benefit analyses for every non-trivial decision in my life.
What's clearly more advanced than Bayes' theorem, and as useful? E. T. Jaynes' flavor of probability theory? I'd posit the more advanced version of active listening as "being able to perform a bunch of kinds of therapy--Freudian, Rogerian, family and systems, etc." Of course I don't mean you go get a license for these things. I'm positing them as difficult, generally-applicable life skills. I'm not claiming these are good examples; I think HN can come up with better ones.
EE control theory class IS an entire senior year class on applying a model to something (a thermostat?) which isn't terribly hard, and then modeling and measuring its performance and finally optimizing the model which is pretty hard.
Shannon's law explains how good ideas, noise/distraction/bad ideas, depth of concentration or maybe total volume of information, and rate of mistakes all interrelate, and how changing one (or several) will affect the others in general.
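For reference, the formula behind the analogy is the Shannon-Hartley capacity, C = B * log2(1 + S/N). A quick sketch with made-up numbers shows the key asymmetry: doubling bandwidth doubles capacity, while doubling the signal-to-noise ratio helps only logarithmically.

```python
import math

def capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: channel capacity in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

base = capacity(1e6, 100)                 # 1 MHz bandwidth, SNR of 100 (20 dB)
print(capacity(2e6, 100) / base)          # doubling bandwidth: 2.0x capacity
print(capacity(1e6, 200) / base)          # doubling SNR: only ~1.15x capacity
```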
There are some interesting tradeoffs in communication filter design (analog hardware or modeled in DSP) along the lines of you can freely trade smoothness in response (group delay, ripple, latency, monotonicity kinda), accuracy in response, and complexity/cost. These tradeoffs apply to everything in the world that processes things not just filter synthesis.
There is some kind of chaos theory "thing" where as feedback mechanisms become more complicated, oscillation becomes inevitable and unpredictable. Doesn't matter if we're talking about high gain amplifier design or world economic models.
This is aside from the general engineering mental models of a good engineer can freely exchange cost, reliability/safety, and performance. In fact it being enormously easier to exchange in those rather than expand, you can pretty much see thru transparent marketing that only mentions one or some factors. This applies to all of reality not mere structural engineering.
I think the optics people could say a lot about their seemingly endless stable of aberrations. There are so many effects and interactions its surprising anything optical works at all, much less works well. Optics is almost a meta law that everything interacts with everything and constants aren't.
One instance where I've seen the former applied to society is the idea of research benchmarks getting stale from "overfitting". Even when researchers do cross-validation, we might still expect our exploration of the space of ML models to be skewed towards models that perform unusually well on well-known benchmarks. This was described in http://www.deeplearningbook.org/ with reference to ImageNet (of course).
As for the latter, pretty much every time I've seen a discussion of statistics on social or old media, 90% of the participants seem unaware that base rates matter.
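The classic illustration of why base rates matter (textbook numbers, not taken from the comment above): a rare condition and a fairly accurate test still yield mostly false positives.

```python
# Disease prevalence 1%; test is 99% sensitive and 95% specific.
prevalence = 0.01
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.05  # 1 - specificity

# Bayes' theorem: P(disease | positive) = P(pos | disease) * P(disease) / P(pos)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # ~0.167, not the ~0.99 most people guess
```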
Disclaimer: I'm not sure if it's derivative blogspam or legitimately insightful / original
Missing a couple interrelated mental models I find very important:
- emergence: a process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties
- decentralized system: a system in which lower level components operate on local information to accomplish global goals
- spontaneous order: the spontaneous emergence of order out of seeming chaos. The evolution of life on Earth, language, crystal structure, the Internet and a free market economy have all been proposed as examples of systems which evolved through spontaneous order.
I struggle with problems and eventually find a solution,
I encounter a name for a similar solution,
as I encounter new, similar problems, I begin to recognize "how the model works",
I forget about the model when I don't use it,
occasionally I come across a list like this one, where it's fun because it validates the usefulness of the models I've already found and introduces me to new names for ones I've already encountered.
I don't feel that a list like this is super useful to me outside of that framework-- I wouldn't take it as a "study guide".
But I feel that framework has given me a lot of personal validation and pointers on how to better deal with problems I encounter.
One of them is using the efficient market hypothesis (true enough for this application) to avoid being taken in by a real estate broker.
Hanlon's Razor: A partner company just did something that makes life much more difficult for you and much easier for themselves. By all accounts it looks like they are only pretending to be "partners", but actually secretly trying to screw you over for their own gain. It's easy to get emotional and paranoid in this situation. If it's really true, you need to find a way to cut off the partnership quickly. That is serious business. But, it's more often better to default to the possibility that maybe they aren't actually fucking with you. Maybe they're just idiots. Maybe they got lazy. Maybe they didn't think through the consequences. Maybe you don't need to go into paranoid adversary mode, blindside your partners with suspicious reactions "out of nowhere" and fuck up a good partnership that in reality just needed better communication. Or, maybe they actually are out to get you. Just don't completely forget the more likely possibility that this is simply a mistake. It's very common that people do forget...
Zero Sum: It's easy to default to a "If they are getting richer, someone else is getting poorer" mindset. A significant number of businessy people have a "In order for me to win, YOU MUST LOSE" mindset. From both directions, this cuts off the greatly preferable win-win outcome. Recognizing this flaw in default thinking can lead you to an even better outcome for yourself than the "you defeating someone" outcome. You can instead find a way for both of you to come out ahead of the "individual victor" outcome.
Streisand Effect: You just fucked up majorly in a way that isn't obviously your fault. There are two different ways that you can try to improve your situation that will likely backfire very badly. 1) You can pretend nothing happened and hope it goes away. That is very easy. But, when the truth becomes clear, you won't just be a fuck-up, you'll be a lying bastard betrayer fuck-up, unworthy of trust or respect. 2) Even worse: You could try to shift the blame to someone else. Doing this will mostly serve to bring focus on the problem that you yourself caused. So now, even more people become strikingly aware that you are a lying bastard betrayer fuck-up who back-stabs innocent people for your own benefit. In the end, if you had simply admitted the problem and discussed how you were trying to solve it, most people would have been OK with your fuck up. But, by trying to hide it, you only made it much worse.
Framing: Sometimes mechanically analyzing a complicated situation is difficult for a human. It's easier to fall back on prior, similar references. Unfortunately, that tendency can be hijacked and abused in situations where you don't actually have much in the way of prior references. By presenting brief, false, set-up situations, an adversary can plant invented prior references into your decision process. If you are not aware enough to dismiss those plants, you will likely make a very poor value judgement. The adversary might not be a person, but instead simply a situation.
And so on...
I remember telling some class mates to take the class and they assumed it was for an easy A and not for how useful the class would be (and I went to a GaTech a long time ago and well the social sciences were just not respected like engineering disciplines at the time).
This HN comment summarizes it pretty nicely "everything in an OS is either a cache or a queue" https://news.ycombinator.com/item?id=11655472
Also Overton window
So far, mixed results. I would like to say that I think of "Bayes Theorem" at the perfect time because I wrote it on a list, but that never happens. I guess I've benefitted from thinking about these concepts more, but that's almost impossible to measure. A list of 100 useful mental models has limited value if you can't hold all of them in memory at once and retrieve them at the right time. I'm still trying to come up with a solution for this. Unfortunately I think this might be a fundamental limitation of human learning.
In planning a strategy, I've found it helpful to consider Win Conditions. It forces me to think backwards from the goal, construct a dependency tree, and consider resource allocation. I first heard about it from videogames, but I've also seen it in math, engineering, logistics, recipes, etc. I also pattern-match it to the insight that solved the Problem of Points, which motivated probability theory. If it were on the curated list, I'd expect to find it under "models" next to cost-benefit analysis.
1. Process-knowledge. Arts and practical stuff, say, agriculture, construction, boatbuilding, sailing, etc.
2. Fuels & combustion, generally. Wood, plant and animal oils, charcoal, coal, petroleum; steam, Otto, diesel, turbine engines.
3. Materials. Functions dependent on specific properties, and abundance of materials they're based on.
4. Power and transmission.
5. Sensing, perception, symbolic representation & manipulation.
6. Systematic knowledge. Science, geography, history.
7. Governance, management, business, & institutions.
8. Scaling and network technologies. Cities, transport, communications, computers.
9. Sinks & unintended consequences. Pollution, effluvia, systems disruption, and their management.
"Thought technology" probably falls into scientific knowledge (models) or symbolic processing.
"I don't need this big framework, I can do with much less!"
The most successful people, peak performers are those who have the best mental representations.
'... an economic theory of consumption behavior which asserts that the best way to measure consumer preferences is to observe their purchasing behavior. Revealed preference theory works on the assumption that consumers have considered a set of alternatives before making a purchasing decision. Thus, given that a consumer chooses one option out of the set, this option must be the preferred option'
In other words "observe their actions, not their words"
> Frequency-dependent selection: fitness of a phenotype depends on its frequency relative to other phenotypes
> Evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. Related to Nash Equilibrium and the Prisoner's Dilemma.
> Debasement (gold coins): lowering the intrinsic value by diluting it with an inferior metal.
So many things seem intractable and formidable in complexity yet once these things are broken down into pieces things become clear. The Asana CEO once talked about this. Breaking things out provides clarity and once you have clarity productivity is massively increased.
Even when this model doesn't explain 100% of occurrences, it is great as a starting point of view for understanding the main pattern of a complex system.
 - http://c2.com/cgi/wiki?EverythingIsa
Are you supposed to know all hundreds of them by heart and then, in the middle of a conversation, go: "Ah, but X principle says Y, therefore we will go with Z option"? Is that it?
Am I missing something?
I mean, I'd love to use this but I don't have enough brain cells for all of those :)
Overall, a superb list.
We're familiar with inflation in the financial sense. But then there is also grade inflation. There is inflation of superlatives in our language, e.g. "great" and "awesome". Once we see a few examples we realise that inflation is a more general concept, and a useful one to use in explaining a lot of situations.
Same for peak oil, I think. Not sure about botnet.
Regardless of what you settle on I'd look for the equivalent of http://www.packal.org/workflow/alfred-pinboard for whatever service and platform you use. Being able to instantly search through all fields of all items in your archive is pretty great and has changed the way I work.
This is a set of both guidelines and heuristics, a set of patterns, if you will, which can be applied to situations or analyses. Some give you a fast route to a simple answer (Occam's Razor), some give pause before accepting what appear to be well-founded results (Simpson's Paradox -- I've encountered that before but had largely forgotten it). Some are simply shortcuts in estimation (order-of-magnitude, and log-based math -- multiplication and division become addition and subtraction).
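The log-math shortcut in that parenthetical deserves a one-line example (my own numbers): to estimate 31,536,000 * 480, add the base-10 logs (roughly 7.5 + 2.7 = 10.2), giving a bit over 10^10.

```python
import math

a, b = 31_536_000, 480
# Multiplication becomes addition of logarithms; round to one decimal for a
# back-of-the-envelope estimate.
estimate = 10 ** (round(math.log10(a), 1) + round(math.log10(b), 1))
print(f"estimate {estimate:.2e} vs exact {a * b:.2e}")  # ~1.6e10 vs ~1.5e10
```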
Critical thinking has varying definitions, but I'd generally describe it as more structured and procedural than what's offered by @wegge. See: https://en.m.wikipedia.org/wiki/Critical_thinking
All the information, easier to read quickly.
Instead of relying on how long some website says something will take to read, it's usually a better idea to scroll through once just doing scanning at the high level to get an idea of length and then read it if you want to.
For example, rocket / grenade / arrow spam in TF2, or Lucio / Hanzo / Symmetra projectile spam in Overwatch. In this context, spamming is just firing in the general direction of the enemy, hoping that some of the rounds will hit.
Maybe this generalizes to repeated application of some cheap technique that has a low probability of success, where the low chances of success are compensated by the low amount of effort per 'shot' required -- but I can't think of any more examples.