
Mental Model Fallacy (2018) - sturza
https://commoncog.com/blog/the-mental-model-fallacy/
======
gjm11
The usage of the term "mental model" here seems _really weird_ to me.

Once upon a time, a "mental model" was, in particular, a _model_ : some sort
of mental representation of a thing, usually simpler and clearer than the
thing itself.

I'll give some concrete-ish examples.

- For anyone you know reasonably well, you can probably make a lot of
predictions about how they would react to various things. Whatever you use to
do that -- which may well be fuzzy and complicated and largely inaccessible to
you -- is your mental model of that person.

- Sometimes you might explicitly model someone as a "homo economicus" money-
maximizer, who will always do whatever gets them the most money. This is
obviously an extreme simplification, but it has the merit of being something
you can explicitly reason about.

- Your mental model of the US government might be that the president makes
decisions and everyone else does exactly what he says. This would be a
_hopelessly wrong_ model, of course.

- If you're good with differential equations, your mental model of a pandemic
might be something like the Kermack-McKendrick SIR model, and that might give
you useful intuitions for the consequences of (e.g.) implementing social-
distancing measures.

- If you're not so good with differential equations, you might adopt a
simpler model that just says that the number of infections is an exponential
function of time. Even that's enough to make better decisions than many people
do. (A toy sketch of both models is below.)
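To make the contrast between those two models concrete, here's a minimal
sketch; all parameter values are invented for illustration, not taken from
anywhere:

```python
# Minimal sketch of the two pandemic models above. The beta/gamma values are
# invented, not fitted to anything real.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1  # assumed transmission and recovery rates

def sir(t, y):
    # Kermack-McKendrick SIR: dS/dt = -bSI, dI/dt = bSI - gI, dR/dt = gI
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 120), [0.99, 0.01, 0.0], t_eval=np.linspace(0, 120, 7))

for t, i_sir in zip(sol.t, sol.y[1]):
    # The simpler model grows exponentially at rate (beta - gamma): it tracks
    # SIR early on, then overshoots wildly once susceptibles run out.
    i_exp = 0.01 * np.exp((beta - gamma) * t)
    print(f"day {t:5.1f}  SIR I = {i_sir:.3f}  exponential I = {i_exp:12.1f}")
```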

But here "mental model" seems to be being used to be generalized way beyond
that, to include any _way of thinking_ or _broadly applicable idea_. A few
examples from the Farnam Street list linked near the start of the article:
"First principles thinking", "Thought experiment", "Second-order thinking",
"Inversion", "Occam's Razor". Some of these are useful tools when building
mental models. Some of them are useful things to do, or keep in mind, whatever
sort of thinking you're doing. None of them is a _model_.

In fact, almost _nothing_ in the Farnam Street list is a model. But some of
the things in the list are _useful metaphors_ , which you could consider to be
tiny models, or model-parts. (For instance: "velocity". Velocity is not a
mental model, but you might often want to build mental models that include
something you could reasonably call "velocity".)

Is the usage of "mental model" to mean "any thinking tool at all" actually
widespread? I've seen it before, but I think the other instances I've seen
have been closely linked with this one -- all, I think, quoting Charlie
Munger. (I don't know whether Munger's own use of the term is as broad as e.g.
Farnam Street's.)

I hope it isn't; the narrower notion in which a mental model is actually a
_model_ seems to me a valuable one, and it seems easier to find other terms
that convey the broader idea (e.g., "thinking tool", "general principle",
"idea") than good replacements for the narrower one.

~~~
shadowsun7
You're absolutely right. As a follow-up, I wrote
[https://commoncog.com/blog/the-mental-model-faq/](https://commoncog.com/blog/the-mental-model-faq/) where I finally
figured out the three categories that Farnam Street was mixing up:

The origins of mental models as a psychological construct may be traced back
to Jean Piaget’s Theory of Cognitive Development. However, much of mental
model writing today is not about Piaget’s original theory. It is instead used
as a catch-all phrase to lump three different categories of ideas together:

1. __Frameworks__. A large portion of mental model writing is about
frameworks for decision making and for life. Frameworks do not sound as sexy
as ‘mental model’, so it benefits the writer to use the latter phrase, and not
the former. An easy exercise for the reader: when reading a piece about mental
models, substitute the word ‘framework’ for ‘mental model’. If this works,
continue the substitution for the rest of the piece. You will notice that the
word ‘framework’ comes with restrictive connotations that the term ‘mental
model’ does not. For instance, writers will often claim that ‘mental models
are the best way to make intelligent decisions’ — a claim they cannot make
when talking about frameworks (nobody says ‘frameworks are the best way to
make intelligent decisions!’). This is understandable: writers optimise for
sounding insightful.

2. __Thinking tools__. A second, large portion of mental model writing is
about thinking tools and techniques. Many of these techniques are drawn from
the judgment and decision making literature, what I loosely call ‘rationality
research’: a body of work that stretches across behavioural economics,
philosophy, psychology, and finance. This category of mental model writing
includes things like ‘reasoning from first principles’ and ‘cognitive bias
avoidance’. The second part of my Putting Mental Models to Practice series
concerns itself with this category of tools, and maps out the academic
landscape that is of interest to the practitioner.

3. __Mental representations__. This is Piaget’s original theory, and it
references the internal representations that we have of some problem domain.
It is sometimes referred to as ‘tacit knowledge’, or ‘technê’ — as opposed to
‘explicit knowledge’ or ‘epistêmê’. Such mental representations are difficult
to communicate through words, and must be learnt through practice and
experience. They make up the basis of expertise (a claim that K. Anders
Ericsson argues in his book about deliberate practice, _Peak_).

~~~
michael-ax
tl; i read this as saying that you've not looked into the word 'regime'. seems
you're talking about mental regimes while avoiding the word.

------
gloryless
This is a great example of bad writing. I think you have an inkling of a
premise, but spread it weakly across three paragraphs while muddling through
your feelings and misunderstandings about models. You never actually define or
defend a position. A couple of times you've come close to contradicting yourself,
but the premise is so weak I can't say for sure.

I do think you successfully communicate that you struggle with some of the
concepts you're trying to refute.

Just looking at your highlighted statements:

> The most valuable mental models do not survive codification. They cannot be
> expressed through words alone.

Close to stating a premise, but you've gone and blown away the subject of
models. I think you're trying to say Farnam Street is selling snake oil, but
you're now arguing experience can't be taught, which is tangential and
generally uninteresting.

> When Warren Buffett studies a company, he doesn’t see a checklist of mental
> models he has to apply.

1) You don't know that, and 2) "Warren Buffett studies a company by running
through a checklist of mental models" is a claim you've just come up with to
refute (strawmen seem to make up the majority of this post).

> How do you know if a computer program is badly designed? You don’t go
> through a mental checklist; instead, you feel disgust, coloured by your
> experience.

No. You've indicated you don't understand design, and again that a model =
mental checklist, which is your own assertion.

~~~
shadowsun7
Author here. You're right, this was an initial attempt at criticism. Shane
Parrish of Farnam Street reached out to me a few weeks after I published this,
and then I spent the next year as a member of Farnam Street's learning
community, executing a research program around this criticism so as to make it
more constructive. A crisp version of that series may be found here:
[https://commoncog.com/blog/the-mental-model-faq/](https://commoncog.com/blog/the-mental-model-faq/)

The core of this series stems from the observation that _all expertise is
tacit_. Polanyi and Papert have the best-articulated expression of these ideas,
and they match up to my experience in actually pursuing expertise.

~~~
ssivark
I find the irony in this situation quite funny. You’ve come to the interesting
realization (your tacit knowledge / mental model) that most useful knowledge
is tacit. You’ve written an article on this, but that is very much like
communicating a mental model — and readers are having a hard time making sense
of it, and are therefore denying the message ¯\_(ツ)_/¯

You might find interesting the concept of “legibility” as discussed on the
Ribbonfarm blog.

~~~
shadowsun7
Who says my goal is to convince those who deny the message?

As per Papert, you're either ready to learn something or you aren't. The more
people who do not get this, the larger the competitive advantage for those of
us who do.

~~~
ssivark
I never said you’re trying to convince readers; it is simply interesting to
observe the impedance mismatch in play, as yet another case study :-)

------
intrepidhero
This article seems to be saying that theory is insufficient for understanding.
Practice is required. I completely agree.

That doesn't negate the value of mental models. Mental models are absolutely
required for understanding any complex system. They are literally how we
think. They should be refined through direct experience. Listening to experts
doesn't hurt either.

What has me worked up is that the examples in the article are just garbage.

A person skilled in tennis or MMA will have no problem communicating to you
what it takes to acquire their skill. It so happens the most efficient way to
communicate complex physical motions is to model them, then allow the student
to attempt them, then critique the student's form. Finally, the student will
require many hours of practice of the correct forms to build muscle memory.
There is nothing in here that says anything about mental models.

"How do you know if a computer program is badly designed? You don’t go through
a mental checklist; instead, you feel disgust, coloured by your experience."

You absolutely better have a mental checklist and solid technical reasons for
why it's badly designed. Imagine going to your boss, asking to do a complete
re-write because you feel disgusted by the code base. How is that going to
work out for you? Maybe it's algorithms that are not performant. It has
inconsistent abstractions, too much abstraction, or too little. It may not
accurately model the problem domain. A feeling of disgust is not going to get
you anywhere. Technical knowledge and experience will.

Ok. Clearly this article hit one of my buttons. I think I'm done now.

~~~
phkahler
You can be good at something and not be good at articulating why stuff is good
or bad, just that it is. Learning to explain to others does tend to refine
your own ability though.

~~~
Enginerrrd
I disagree with this almost as strongly as I possibly can. I've only ever
encountered people with middling or mediocre skill in a subject expressing this
attitude. People who are good are good because they know _precisely_ where the
line between good and bad is, AND they've practiced "good" a LOT. Knowing it
that precisely is enough to articulate it. If you can't, you probably don't
know it as well as you think.

This leads to a bit of a paradox. The things I'm best at are precisely those
things in which I feel I still have the most to learn. Why? Because I _know_
the difference between what I'm doing and what good would be.

~~~
bsder
> People that are good, are good because they know precisely where the line
> between good and bad is, AND they've practiced "good" a LOT. Knowing is
> enough to articulate it. If you can't, you probably don't know it as well as
> you think.

Formula 1 contradicts you.

One of the things that made Michael Schumacher worth the exorbitant salary he
got paid was that he could articulate what happened while he was driving on
the track in such a way that engineers could make relevant changes.

Ferrari had lots of really good drivers, but it wasn't until Schumacher that
they had somebody who could communicate with the engineering team such that
they could actually improve the _car_.

~~~
Enginerrrd
I would argue you're talking about a situation where a different skillset from
the one traditionally selected for became what was actually valuable. I'll concede
that you could have a really really good driver that can't give useful
feedback on their particular performance. Cognitive load is extremely high and
they may not remember what something felt like as they were going. However,
that same driver in the passenger seat could tell you what you're doing wrong
with tremendous nuance: entering a turn too early, speed too slow, suspension
not in the right position because you didn't set it up right when braking,
etc.

------
Barrin92
This article is right in my opinion. This 'mental model' thinking, and the
surrounding ecosystem of 'rationalists' where it's super popular, seems to me
like Oprah for slightly smarter tech bros.

The problem with it really is, as the author suggests, the lack of authentic
experience on the one hand; but I think more importantly, the idea of
"mental models" just tries to hand people a bag of disjointed tools.

When you look at what it means to really understand something, and you look
at, say, a world-class pianist, then you'll almost certainly find they
have an _integrated_ perspective on what they do. They don't have model A and
model B and model C and a bag of fortune-cookie wisdoms; they have tacit
knowledge and beliefs that are coherent and whole. Really understanding
something ironically often leads to the inability to articulate how it is one
understands it, because it's just become integrated into how someone operates
in general.

Umberto Eco once made the great point that unread books are much more
important than read books because known knowledge pales in the face of
everything that is unknown, no matter how dedicated one is to reading. And
it's the same thing with these mental models. You're not smarter because you
know 200 models or 300 models or 400 models, just like reading 50 more books
per year isn't going to make anyone any smarter in a sort of simple additive
way.

~~~
inimino
Any field of knowledge that has been dissected into a taxonomy must be dead.
On the other hand, the taxonomy can give you words for what you already know,
so it is not entirely useless.

~~~
chairmanwow1
I don't understand how you make the leap from a field having a taxonomy to
that field being dead. Taxonomy provides a foundation for others to learn about
something.

~~~
inimino
Of course. But thinking belongs to the practitioner; the point is that you
have to do it and not just study the doing of it.

You acquire a mental model by doing the things that lead to having that mental
model, not by reading about the model. Memorizing a taxonomy of cognitive
biases doesn't necessarily make you a better thinker, any more than memorizing
design patterns necessarily makes you a better programmer.

------
mox111
This article seems to have divided people somewhat. I think a great
reconciliation between the practical and the theoretical when it comes to
learning is found in Mindstorms by Seymour Papert:

"An important part of becoming a good learner is learning how to push out the
frontier of what we can express with words. From this point of view the
question about the bicycle is not whether or not one can "tell" someone "in
full" how to ride but rather what can be done to improve our ability to
communicate with others (and with ourselves in internal dialogues) just enough
to make a difference to learning to ride."

------
maffyoo
What an odd article. I have to admit I spent a good while questioning whether
I'd got my own understanding of the value of mental models totally wrong.
My first issue with the article is that I've never really seen any claims that
by understanding mental models you will be furnished with the skill to either
win at MMA or beat the stock market (or anything else for that matter). This
leaves me wondering why these very narrow examples of the value of mental
models are used as a foundation for the argument.

Knowledge and wisdom are built on abstractions (this is well established). When
I think about how I have distilled the things I know, I'm certain that it's by
finding the correct abstraction - the correct picture or mental model. For
example, a mental model about winner-takes-all markets allows you to understand
that certain markets have this trait, and therefore the mental model allows you
to identify this class of market efficiently, thanks to a nice terse
abstraction. I really don't see the link the author makes between understanding
the mental model of a winner-takes-all market and suddenly being an expert in
how to beat one. You may know the basics, but I think most people would also
know there is experience, nuance and instinct involved - very indefinite
things, unlike the classification that the mental model portrays. Knowledge is
built upon abstractions, and mental models are just that: the classification of
knowledge that captures a (generally static) model of the world. In
understanding a mental model you get some insight distilled into a neat
abstraction. Believing you can win at anything just by understanding a mental
model... that's crazy, surely!?

~~~
mistermann
It's also odd that there isn't more disagreement with the article. It
essentially takes a nonexistent claim, and then debunks it via strawman
analogies, with no accompanying proofs.

My favorite part:

> A Little Bit of Epistemology Goes a Long Way

Not if you've somehow come to fundamentally misunderstand the principles.

Although, I can't resist the urge to now read more articles by the author;
perhaps this is actually an extremely clever example of gonzo advertising.

------
whorleater
IMO mental models are relatively useless for what they promise on the tin, but
they're useful for illuminating aspects of the world that would've otherwise
been left undiscovered - an unknown-unknowns thing. Like this article says,
most mental models don't survive codification, but they do make for fun
reading, similar to a healthy-snack version of books.

~~~
Matticus_Rex
It depends on what you do. I read about mental models for a long time before
any of them became useful for me. Working at a startup right out of law school,
there were a few that were relevant for my job, but nothing earth-shattering.
Years later in a different job where I make a lot of business decisions,
consult internally on marketing strategy, and deal with a lot of metrics, I
use a lot of the specific Farnam Street-style models constantly.

------
a_c
> The mental model fallacy is that it’s worth it to read descriptions of
> mental models, written and aggregated by non-practitioners, in the pursuit
> of self-improvement and success.

This is also why I think a project manager, if a project has one, needs to be
technically literate to have a positive impact on a project's success.

------
shadowsun7
Author here. A crisper version of the criticism may be found here:
[https://commoncog.com/blog/the-mental-model-faq/](https://commoncog.com/blog/the-mental-model-faq/)

It is, in turn, a summary of a 30k-word series on 'putting mental models to
practice' [https://commoncog.com/blog/a-framework-for-putting-mental-models-to-practice/](https://commoncog.com/blog/a-framework-for-putting-mental-models-to-practice/)
— originally published in Farnam Street's Learning Community. To
his credit, Shane Parrish of Farnam Street invited me to share my criticisms
in his members-only forum, so I reciprocated by putting in the work to back up
the ideas in this piece.

------
hooande
This is like saying "buying a tool set doesn't make you a handyman, you need
years of experience"

Duh. You DO need tools at some point though. You can, in theory, build all of
your tools from scratch. Assuming you're already familiar with the different
types and paradigms of tools. But it might be better to look at a list of
options before deciding which tools you want to deliberately practice using.

------
parched
In my humble opinion, mental models are like software design patterns. You
kinda have to understand them to understand them, otherwise you end up falling
into the "when all you have is a hammer everything looks like a nail"
situation, which I guess is a mental model too!

------
runawaybottle
I reacted to this article reflexively in response to the abundance of pop
self-help content that’s been with us forever, now in the form of non-stop
podcasts of people essentially giving the same advice. It’s fine, but the
self-help realm is just, blah, too much of it can really be counter-productive.
In so many words, the author is just echoing “there’s a lot of bullshit out
there”.

Your basic Philosophy 101 class teaches interesting mental models like
Cartesian doubt. That really shaped the way I thought about things for years
to come.

With that said, it’s important to identify models properly. If you listen to
MMA fighters or other athletes, you can start to see their incremental
approach to increasing training intensity to achieve measurable results. Not
parsing that out will leave you with a shadow of a mental model that, in this
case, would be mostly composed of non-core strategies (e.g., always stay
positive, have no fear, never accept no for an answer, etc.).

------
ineedasername
The fallacy with this article about a "fallacy": That mental models for a
physical activity (MMA) are comparable to mental models about _ideas_.

Of course that doesn't mean you can learn everything from a mental model. By
nature, they abstract away plenty of important details needed for expertise.
But they are not deficient in terms of ability to convey some level of
understanding.

If you think about it, this _has_ to be the case. In many ways, knowledge is
simply a series of mental models built on each other. Yes, much of it comes
from empirical observation, but that ends up encoded into mental models.

------
ssivark
This article is _spot on_. This is especially true with Thiel-ean
“secrets”. The most important secrets stay secret for a long time because most
people are blind to them — they cannot even be communicated, because they
would sound like gibberish/wrong to most listeners (cue: blub paradox). This
is why Planck said that science progresses from funeral to funeral.

The other thing about lists is that all the items are roughly equally
weighted. In real life, the value is almost never distributed that way — a
couple of items absorbed well will often contribute immense value. But we
water that down with lists (to make the source sound authoritative), and
worse, bury the best stuff down the list for SEO & clicks & “engagement”
metrics.

It’s interesting to imagine that as a corollary, this brushes aside almost all
punditry, and a lot of context-free college education.

Here’s the thing... a good mental model is worth its weight in gold to a
practitioner. There’s something magical about the deep intermingling of theory
and practice, with each piggy-backing on the other. To have any shot at that,
it is very important to be _situated_ in a context, getting useful feedback
from reality. If you think about it, the fields with the best theory also have
the best experiments & feedback. Failing that, excessive theorizing is akin to
the insanity of a dream world.

------
starpilot
I'm convinced that everything has infinite fractal complexity and it's a
fool's errand to reduce most non-trivial situations to metaphors or explicit
systems. As I get older, I rely on instinct/gut more. A master chess
player doesn't see 10 possible moves before him, evaluating each as the start
of a sequence; he sees just one move, the best one, based on his experience.

~~~
ardy42
> I'm convinced that everything has infinite fractal complexity and it's a
> fool's errand to reduce most non-trivial situations to metaphors or explicit
> systems.

I think I agree with you. I also think there's a dangerous temptation to fail
to recognize that foolishness, and cultivate a false impression of
understanding by almost denying the reality of things some cherished model
doesn't handle. That temptation increases with the elegance of the model and
the number of limited cases where it can be applied with reasonable success.

~~~
pcnix
Interestingly enough, people really good at understanding the complexity in
systems are also people that seem to fall prey to, as you said, cultivating a
false impression of understanding. Taleb is one that I think deserves
criticism in that regard.

------
enz
I stopped reading this kind of stuff a few months ago. I felt the content was
very valuable and interesting, but only if you have prior experiential
exposure. If not, the content is just like an empty shell. Words are wise, but
you just can’t understand why.

I’ll read those books again in a few months/years, after practicing. I’m
pretty sure it’ll be like reading entirely new books.

------
contravariant
While it is an interesting article, and I can agree with some of the arguments
in it, I can't help but think that in outlining the author's reasoning for why
reading about mental models is futile it inevitably contradicts itself (which
I guess it is at least up front about; not many blog posts tell you to stop
reading the blog entirely, halfway through the first section). Then again I
suppose that writing a blog post on mental models that on the face of it is
trivially false does end up proving the author's point.

On a more serious note, while we're comparing mental 'mental model' models: my
experience has been that it's more useful to think of mental models as a
vantage point. A good mental model grants more perspective while bad mental
models obscure things. Of course this doesn't absolve you of understanding the
basic concepts, perspective won't help you if you don't know what you're
looking at.

------
selfselfself
...and it's not always simple to practice/apply mental models after reading
them in a list. I used to be a member of Farnam Street until last year. Judging
by their internal forum, most of the members weren't sure how they could apply
the mental models. The top answer was to use the list of models as a checklist.

------
csours
Mental Models are definitely useful, so not a fallacy from that point of view.

It's kind of like using analogy as a rhetorical device: it works up to a point;
but when you get to the details it's not worth extending the analogy.

Mental models can be good for raising questions, but then you have to actually
find answers to those questions.

------
lukev
The author of course is correct that communicating a model does not
necessarily communicate expertise, and that many important elements of
expertise are non-communicable (at least not easily) tacit knowledge.

But what is not addressed is that some mental models very clearly can be
usefully shared. There is not a clear and distinct line between a "mental
model" and communicating fundamental elements of how something works.

For example, my 8-year-old was trying to figure out why his walkie-talkies
kept making howling, screeching sounds. I explained the concept of feedback to
him and he was immediately able to put that knowledge into practice (keep them
further apart, or don't have both microphones on at the same time).
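As a toy sketch of that concept (gain numbers invented): each trip around the
speaker-to-microphone loop multiplies the signal by the loop gain, so a gain
above 1 grows into a howl and a gain below 1 dies out.

```python
# Toy model of audio feedback: the signal is re-amplified on every pass
# through the speaker -> microphone loop. Gains are invented for illustration.
def feedback(loop_gain, passes=8, signal=1.0):
    levels = []
    for _ in range(passes):
        signal *= loop_gain  # one trip around the loop
        levels.append(round(signal, 3))
    return levels

print("handsets close (gain 1.3):", feedback(1.3))  # grows into a howl
print("handsets apart (gain 0.7):", feedback(0.7))  # decays to silence
```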

I would argue that for tech and business work, communicable models/techniques
vastly outnumber noncommunicable models/techniques.

~~~
ssivark
On the contrary, that mental model was only useful to your 8-year-old because
he had contextual experiences and was situated in a reality where he could
easily explore its consequences. Most conversations about mental models are
far less tangible.

~~~
mistermann
> On the contrary, that mental model was only useful to your 8-year-old
> because he had contextual experiences and was situated in a reality where he
> could easily explore its consequences.

This doesn't really make sense to me. Why would explaining feedback only have
been useful if he had contextual experience _and_ could explore the
consequences?

------
mannykannot
There's a word hovering in the background throughout this article, but it
never makes it onto the page: judgement.

In software development, for example, there is a lot of explicit knowledge to
learn, and then there are the mental models about how one should apply that
knowledge - e.g. SOLID - but these models do not tell you exactly what to do
(in particular, there are all sorts of ways of using the rules to justify bad
decisions.) It takes good judgement, honed by experience, to apply the rules
effectively.

~~~
shadowsun7
I was actually going for 'taste' when I wrote the piece, but this is a good
word, as it's more generally applicable. Thank you.

------
jt2190
> The mental model fallacy is that it’s worth it to read descriptions of
> mental models, written and aggregated by non-practitioners, in the pursuit
> of self-improvement and success.

Is this actually something that people believe? I've always assumed that
people reading about doing $THING were procrastinating, rather than doing
$THING.

~~~
phnofive
No, this is the central strawman.

------
tzs
Is "mental model" an overloaded term? I've only heard it in connection with
designing interfaces, which seems quite a bit different than what the article
seems to be talking about.

The way I've heard it used in interfaces just means what the user knows (or
thinks they know) about how your thing works.

A couple examples from Donald Norman's "The Psychology of Everyday Things"
(which was renamed "The Design of Everyday Things" in later editions).

Consider a thermostat, which has a simple dial with an arrow painted on it,
with a scale of temperatures printed around it. You can turn the dial to point
the arrow at a temperature.

The way this thermostat _actually_ works is that there is a bimetallic strip
inside that bends as the temperature changes. When it gets cold enough it
bends far enough to close a contact that turns the heater on. When the room
warms up enough, the bending of the strip abates, opening the contact, and
turning off the heater. Turning the dial modifies the distance between the
bimetallic strip and the contact. Turning the dial to a lower number moves
them farther apart, so the strip has to bend more to close the contact, and so
the room has to get colder before the heat turns on.
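In code, that mechanism is just on/off control with a little hysteresis; a toy
simulation, with all numbers invented for illustration:

```python
# Toy simulation of the dial thermostat: pure on/off control, no timers.
# Setpoint, temperatures, and heating/cooling rates are all invented.
def simulate(setpoint, temp=15.0, steps=15):
    heater_on = False
    for step in range(steps):
        if temp < setpoint - 0.5:    # strip bends far enough: contact closes
            heater_on = True
        elif temp > setpoint + 0.5:  # strip relaxes: contact opens
            heater_on = False
        temp += 0.8 if heater_on else -0.3  # heater output vs. ambient loss
        print(f"t={step:2d}  temp={temp:5.2f}  heater={'ON' if heater_on else 'off'}")

simulate(setpoint=20.0)
```

Note that the dial only moves the switching threshold; nothing in the
mechanism involves timing.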

If you ask people how they think it works, some will know the above. Others
will have other explanations.

Some people might think that it is just based on time. The system runs the
heater for a variable time controlled by the dial, and then turns it off for a
fixed time, and then repeats this cycle. Turning the dial to a lower number
reduces that variable time.

There were, if I recall correctly (it's been years since I read it), a few
more explanations given, some quite wrong in the sense that no one would
actually build a thermostat that way.

The interesting and important thing from a user interface point of view,
though, was that all of them lead to the user doing the right thing when it
comes to actually operating the thermostat. If the room is too hot, they move
the dial to a lower number. If the room is too cold, they move the dial to a
higher number.

The point was that users are going to have some kind of model for how your
thing actually works. It is not important that the model they have is actually
right--as long as their model leads them to the right control inputs that is
fine. Be aware of what kind of models people are going to have, and design
your interface to not encourage bad models.

An example of an interface design failure was a refrigerator/freezer Norman
had. It had two sliders, labeled "Freezer" and "Fresh Food", both in the
fridge compartment. The "Freezer" slider had settings A-E, and the "Fresh
Food" slider had 0-9. The instructions said: "Normal settings C and 5",
"Colder Fresh Food C and 6-7", "Coldest Fresh Food B and 8-9", "Colder Freezer
D and 7-8", "Warmer Fresh Food C and 4-1", and "Off 0".

The labeling of the controls suggests or reinforces a model that leads to
someone who wants to adjust the freezer temperature fiddling with just the
freezer control, and someone who wants to adjust the fridge temperature
fiddling with the other control. Probably something with thermostats in both
compartments, each controlling a cooler unit for that compartment, with the
sliders each controlling one of the thermostats.

The way that fridge actually worked is that there was a thermostat somewhere,
controlling the cooling unit. The output of the cooling unit went through a
valve that could direct part of it to the freezer and part of it to the
fridge. One of the sliders controlled the thermostat, and one controlled the
valve. Nothing really suggested which compartment had the thermostat (assuming
that it is even in one of the compartments), or which control was for the
thermostat and which was for the valve.

An average user of that fridge who, say, feels the fridge temperature is just
right but would like the freezer to be a little warmer is in for a frustrating
time of fiddling with the controls. Their model of how it works probably leads
to different predictions of control response than the actual behavior. The way
that fridge actually works is far enough away from the models the users are
likely to have that the controls should have at the very least been labeled in
a way that lets those users know that this fridge is different.

Maybe label the slider that controls the cooling unit something like "Overall
Cooling" and label the valve control something like "More to fridge/less to
freezer <--> More to freezer/less to fridge". Still a pain to operate, but at
least it is obvious it is going to be a pain. The user who wants a warmer
freezer but is happy with the fridge temperature can tell from that labeling
that they are going to have to decrease the overall cooling, and change the
allocation toward "More to fridge/less to freezer" to keep the fridge
temperature the same.
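A toy version of that control scheme (all numbers invented) makes the coupling
obvious: moving either slider changes both compartments, which is why "warmer
freezer, same fridge" requires adjusting both.

```python
# Toy model of the fridge: one slider sets the total cooling output, the
# other sets how the valve splits it between compartments. Numbers invented.
def temps(total_cooling, freezer_share, ambient=20.0):
    freezer = ambient - total_cooling * freezer_share
    fridge = ambient - total_cooling * (1 - freezer_share)
    return round(freezer, 1), round(fridge, 1)

print(temps(total_cooling=35.0, freezer_share=0.7))   # (-4.5, 9.5) baseline
print(temps(total_cooling=35.0, freezer_share=0.6))   # (-1.0, 6.0) freezer warmer, fridge colder!
print(temps(total_cooling=32.5, freezer_share=0.68))  # (-2.1, 9.6) takes BOTH sliders
```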

------
olooney
I'm a little tired of "X is a fallacy." Everything is a fallacy, or nearly so.
Almost every method by which we naturally and intuitively reason has been
shown to be error-prone, deeply biased, or at best an approximate heuristic.
This is why Wikipedia is able to list a hundred different kinds of
fallacies[1] without even beginning to scratch the surface. I'm not a huge fan
of teaching rationality by exhaustively listing fallacies because it's
endless. But when maintaining a blacklist becomes too onerous, the solution is
to switch to a whitelist. This turns out to be much easier, because the
constructive list of techniques that work is very short. Of all the ways of
reasoning that are intuitively appealing and naturally make sense to us, only
two have stood the test of time:

1. Modus Ponens (If A implies B, and also A, then B.)

2. The Hypothetico-deductive model (the guess-and-check scientific method)

And frankly I'm a little suspicious of that second one!

These are better known as "math" and "science," or "deduction" and "induction."
And yes, modus ponens is really the only inference rule you need for logic -
Hilbert proved that[2].
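As a tiny concrete illustration: in a proof assistant such as Lean, modus
ponens is literally just function application.

```lean
-- Modus ponens: from a proof h : A → B and a proof a : A, `h a` proves B.
theorem modus_ponens (A B : Prop) (h : A → B) (a : A) : B := h a
```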

The other 98% of the algorithms built into our brains are unreliable and
cannot be trusted. Consider, for example, your optical cortex, which attempts
to patch up raw input in a dozen different ways, resulting in dozens of
optical illusions, saccadic masking, not being aware of your own blindspot,
and so on. We literally can't trust our eyes... or rather, we can't trust the
instinctive processing our own brains do on raw visual input. So it is with
the other parts of our brain. Or what Kahneman calls "System 1."[3] It's a
patchwork of barely functional heuristics.

Scientists learn to shut out that 98% and use only the two reliable systems.
Mathematicians take it even further and shut out 99%, leaving only modus
ponens and methods of deduction.

People hate that this is true. They want to reason intuitively, naturally.
They hope they can patch their hopelessly bugged brains into something useful
if they can just memorize and avoid a list of pitfalls. I'm telling you
there's a better way. Forget about fallacies. Stop looking for shortcuts like
"mental models." Construct rigorous arguments inside of formal deductive
systems. Use those to build formal mathematical models that describe reality.
Test those models ruthlessly against experimental data, even to destruction.

You know this works. It put a man on the moon, for God's sake. It predicted
what a black hole would look like, then took a picture of it. It's cured so
many diseases and solved so many problems that our main problem is that we
don't have enough problems. Yet people still want to look for shortcuts. I can
sympathize
with that. We're all busy. But the real choice you face is this: be rigorous,
or be wrong a lot.

[1]:
[https://en.wikipedia.org/wiki/List_of_fallacies](https://en.wikipedia.org/wiki/List_of_fallacies)

[2]:
[https://en.wikipedia.org/wiki/Hilbert_system](https://en.wikipedia.org/wiki/Hilbert_system)

[3]:
[https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow](https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow)

