
Ludic Fallacy - rfreytag
https://en.wikipedia.org/wiki/Ludic_fallacy
======
Emphere
Can't help but feel that people here are missing the point and perhaps too
hung up on the specific example in the wikipedia article. The broader point is
about the misuse and misapplication of technical/"rational" methods in real
life, i.e. not thinking about whether the model can actually be usefully
applied in a certain real world situation. For example, blindly following rote
statistical methods/not understanding their assumptions is what got us into
the whole psychology replication crisis mess.

~~~
bezmenov
Yes, it’s just a variation of _don’t confuse the map for the territory_, or,
more generally, _don’t confuse the model for reality_.

------
Animats
That's Taleb. Taleb ran a fund that bought options way out of the money. This
loses money every year, unless there's a big crash. His fund happened to be
active in 2008, and made a ton of money that year. He doesn't release the
numbers for other years. Not clear if this is a net win over a full business
cycle.

Taleb is sort of a counter to Black-Scholes option pricing. Black-Scholes
assumes a Gaussian distribution. Given that assumption, a few numbers let you
quantify risk. That's convenient, but somewhat unrealistic, and was taken way
too seriously by the bond market from the 1980s to 2008. It provided a
philosophical underpinning for the junk bond market. (They can't possibly all
go bad at the same time, can they?)
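To make the "a few numbers let you quantify risk" point concrete, here is the textbook Black-Scholes call formula in a few lines of Python. This is a sketch of the standard formula, not anyone's actual trading model, and all the inputs below are made up:

```python
# Textbook Black-Scholes call price under the Gaussian assumption.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    # European call; vol is annualized volatility, t is years to expiry.
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# Under the model, a way-out-of-the-money call is worth essentially
# nothing: that is exactly the tail the Gaussian assumption flattens,
# and exactly what a fund buying far-OTM options is betting against.
print(bs_call(100, 160, 0.02, 0.2, 0.25))  # near zero
print(bs_call(100, 100, 0.02, 0.2, 0.25))  # at the money: real value
```

The convenience Animats mentions is visible here: two distribution parameters (a volatility and a rate) produce a single clean price, and everything questionable is hidden in the Gaussian assumption.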

------
dtjohnnyb
The book Range talks about a similar idea, the idea of "kind" and "wicked"
learning environments.

In kind environments (most games) "patterns recur, situations are constrained,
and importantly, every time you do something you get feedback that is totally
obvious, all the information is available, the feedback is quick, and it is
100% accurate"

In contrast, in wicked environments (most of the rest of life!!!) none of the
properties of kind environments hold, and in some extreme cases the feedback
teaches you the exact wrong lesson.

In the book, he similarly warns against taking the lessons from successful
people in chess or golf and applying them to our own wicked lives.

~~~
misiti3780
What did people think about that book ?

I enjoyed it but felt like it was a bunch of other peoples ideas (Taleb,
Kahneman, Levitin, etc) strung together.

~~~
rogerclark
that's every book

------
sudhirj
The nicest example of this I've seen is that Tim Ferriss won a kickboxing
championship without being a kickboxer,
[https://www.martialdevelopment.com/how-to-win-kickboxing-
wro...](https://www.martialdevelopment.com/how-to-win-kickboxing-wrong/)

The fallacy is that winning a world championship at kickboxing _must mean_
that you're an excellent kickboxer, but nope.

~~~
buran77
I'm not sure what kind of championship that was but for an experienced fighter
having a 30 pound "handicap" versus an inexperienced one should amount to no
real-life disadvantage. Don't underestimate what a 165 lbs person's tibia can
do to a 195 lbs person's head or even thigh. There's no "hack" to avoid
getting your lights knocked out by one punch/kick from a good fighter 30 lbs
lighter than you. And tricking the weigh-in is something that has been done
since the second ever weigh-in.

The fact that he was able to hold off experienced fighters, even if they were
below his weight class, means he was plenty qualified to be there despite his
very modest self assessment.

[Edited from here] He seems to be an actual fighter under Dave Camarillo but
the framing of this story, that he's just an average "stuff my face with
peanut butter sandwiches, tricked the weigh-in" kind of guy, is just in order
to sell his "life hacks" books.

All I can find is that 1999 was the second year for the championship which
could mean not very strong competition. But there's no mention of him in any
Wushu/Sanda/Sanshou championship. He also used to claim that he is a "Cage
fighter in Japan, vanquisher of four world champions (MMA)" (since removed
from his page). Being a 4-time world champion would have contradicted his
"just a regular guy using life hacks" premise. Without an actual record of him
winning that "Chinese Kickboxing National Championships 1999" I guess he's
just peddling the same "tricks" from his book. One of them might be "just say
you did it".

~~~
yesenadam
> an actual record of him winning that "Chinese Kickboxing National
> Championships 1999"

There appears to be some footage of it in this video.

[https://www.youtube.com/watch?v=ODoVqXgblyw](https://www.youtube.com/watch?v=ODoVqXgblyw)

~~~
buran77
I may have edited too late. He does seem to be an actual fighter, at least
for a while under Dave Camarillo, but still very far from being a world
champion. Given that he makes a living selling such tricks I think that is a
compilation of random training sessions and championship fights intertwined
with shots of his face and nothing more.

From what I gather from a short internet search he changed his story
repeatedly depending on what sold more. He's either an experienced fighter
with multiple world championship wins, or an average guy using his own tricks
to win. But his name is not associated with any official leaderboard or
championship win as far as I can tell.

As someone who has consistently excelled at amateur martial arts I can
guarantee there's no way a world champion gets beaten by some guy pushing him
off the lei tai repeatedly even after the same technique was used in every
round and with all other opponents. I could knock out almost any "non-trained"
person with a single well placed punch or kick, and if they weren't well
placed they would still do some temporarily debilitating damage. Yet whenever
I had friendly fights with well trained pros, including ones with good results
in international championships but no gold medals, I have no words to describe
how inadequate and unprepared I felt.

All I'm saying is that the story is not what it seems. Whether he is an actual
pro worthy of a gold medal, or some incredible fluke made him fight severely
unprepared opponents I don't know. But he certainly didn't win anything
because he tricked the weigh-in (that's standard procedure), or because he
pushed opponents off the lei tai round after round, opponent after opponent
and none of those world class fighters had any recourse.

This is what a fight would look like [0]. You win that by going to a lower
weight class and "pushing" only if you are perfectly able to take the hits
anyway and are already _good_. Tricks will get you from silver to gold, not
from peanut butter.

[0]
[https://www.youtube.com/watch?v=tQsTOVCqduY](https://www.youtube.com/watch?v=tQsTOVCqduY)

------
kwhitefoot
Seems like a long-winded and sanctimonious way of saying that models are
simpler than the reality that they mimic.

Bears in woods and Catholic popes come to mind.

~~~
ajross
It sounds obvious, but in practice it's not. It's an election year, so take
polling as a great example:

Every poll of a race comes with two numbers, one a fraction of support for
whatever issue or candidate is being measured, and the other a "margin of
error" (which in the industry is a 95% confidence interval, I believe). This
is just a statistical measure based on sample size. It's backed by two
centuries of math, and no one seriously disputes the way the number is
calculated or what it means.

And there are a LOT of these polls taken. So take all those polls, check their
variance, and you'll find that it's _much higher_ than the value you'd expect
given the margins of error they reported. This is a routine effect. So square
that: why are the MoE numbers "wrong", given that multiple measurements of the
same number are giving values with more noise than expected?

The answer is exactly this paradox. The model, that all these polls are
measuring the same thing, is wrong. They aren't measuring the same thing. The
different polls use different sampling methods, they weight their data using
different algorithms, with opinion polls they phrase the questions slightly
differently (or just ask the same questions in a different order), all of
which affect the results.

And this happens everywhere in science. It's a frighteningly easy mistake to
make, especially as data sets get large.
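The "not measuring the same thing" effect is easy to simulate. A toy sketch (every number below is invented): give each pollster a small "house effect" from its own sampling and weighting choices, and the spread across polls exceeds what per-poll sampling error alone, i.e. the reported margin of error, would predict:

```python
# Toy simulation: many polls of the same race, each pollster with its
# own small "house effect" (different sampling/weighting). The observed
# spread across polls exceeds the sampling-error-only prediction.
import random
import statistics

random.seed(1)
true_support = 0.52   # actual fraction supporting the candidate
n = 1000              # respondents per poll
num_polls = 500

results = []
for _ in range(num_polls):
    house_effect = random.gauss(0, 0.02)  # pollster-specific bias, ~2 points
    p = min(max(true_support + house_effect, 0.0), 1.0)
    support = sum(random.random() < p for _ in range(n))
    results.append(support / n)

observed_sd = statistics.stdev(results)
# Standard deviation from sampling error alone (the reported MoE is
# roughly 1.96 times this, for a 95% confidence interval):
sampling_sd = (true_support * (1 - true_support) / n) ** 0.5

print(f"sampling-only SD: {sampling_sd:.4f}")
print(f"observed SD:      {observed_sd:.4f}")  # noticeably larger
```

The model behind the reported MoE assumes every poll is an identically distributed draw from the same process; the house-effect term is the part of reality that model leaves out.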

~~~
jayd16
Hmm, this is surely a common mistake to make, but what does it have to do with
games? If you sampled poker plays with similar differences you'd get similar
errors.

~~~
garmaine
You’re getting too hung up on the word “game.” In this context it just means
the rules of the model.

------
throwaway2245
I haven't understood this as a fallacy.

In the "suspicious coin" example, there is an opening assumption that the coin
is fair, but clear evidence to the contrary emerges.

It isn't a fallacy to continue to treat this as a game: it has just become
sensible to drop the assumption and treat it as a game where you are unsure
about whether the coin is fair (as the second player in the example, in fact,
does).

~~~
thih9
> In the "suspicious coin" example, there is an opening assumption

I understood it differently: there are no assumptions per se, just a request
to make an assumption. The request is made by a third party and all this is
part of the thought experiment.

> It isn't a fallacy to continue to treat this as a game

True. The article states that “The ludic fallacy here is to assume that in
real life the rules from the purely hypothetical model (where Dr. John is
correct) apply.”.

> treat it as a game where you are unsure about whether the coin is fair

This significantly modifies the scope of the original game.

I understood this as an example of how simplified models might easily stop
being useful because of real life conditions.

------
quickthrower2
And yet companies make a fortune betting on markets or sports using those
exact "Ludic Fallacy" models: linear regressions, probability, etc. They
know it's real life, and that maybe someone else is fixing the coin toss, but
can still get an advantage. In any real casino, 99 blacks in a row and they'd
shut down the table.

~~~
Reimersholme
...and then a financial crisis hits and suddenly they expect the government to
bail them out because they didn't account for that risk in their models.

~~~
Ambol
Or they knew all along that the government would bail them out, so they were
playing a very different game than everybody else.

------
stared
One core example of modeling something as a game is dating (as in "The Game"
by Neil Strauss).

Two literal takes on that are:

\- "Super Seducer: How to Talk to Girls", a (somewhat controversial) game on
Steam by Richard La Ruina, a Pick-Up Artist -
[https://store.steampowered.com/app/695920/Super_Seducer__How...](https://store.steampowered.com/app/695920/Super_Seducer__How_to_Talk_to_Girls/)

\- "If Dating was like Who Wants to be a Millionaire" \-
[https://www.youtube.com/watch?v=7E33j3gQg98](https://www.youtube.com/watch?v=7E33j3gQg98)

While some analogies to a game do hold (depending on someone's action, it's
"fail" or "progress"), I think the worst fallacy is the idea that one always
can, and should, "win". Depending on the attitude, this is anything from
frustrating (obsessing over why one has failed despite the effort, or ending
up in a relationship with the wrong person) to outright dangerous.

------
sradman
All models are simplifications of real-world phenomena; the question is how
useful they are. Taleb’s Ludic Fallacy is a useful prod to remind us to
consider our core assumptions.

------
ak39
Interesting. I see the examples don’t cite “technical analysis” of price
action of stocks. Is that also an example of Ludic fallacy?

~~~
cjfd
The ludic fallacy could apply here as well. If you calculate that on average
you will make a profit, but in actuality the profit fluctuates so wildly that
you will almost certainly go bankrupt in the process, the ludic fallacy could
apply.

Another criticism by Nassim Taleb of this field is the application of the
normal distribution to distributions that are not normal at all. This can
lead to the problem I described in the previous paragraph.
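The bankruptcy scenario is easy to demonstrate with a toy simulation (every number here is invented): a bet with a positive expected value per play that still ruins a large fraction of players, because the rare losses are big relative to the bankroll:

```python
# Toy simulation: positive expected value per bet, yet frequent ruin,
# because the rare loss is large relative to the starting bankroll.
import random

random.seed(7)

def play(bankroll=50, num_bets=2000):
    # Win $1 with probability 0.99; lose $80 with probability 0.01.
    for _ in range(num_bets):
        if random.random() < 0.99:
            bankroll += 1
        else:
            bankroll -= 80
        if bankroll <= 0:
            return 0  # ruined: can't keep playing to collect the "average"
    return bankroll

ev_per_bet = 0.99 * 1 + 0.01 * (-80)  # +0.19: profitable "on average"
runs = [play() for _ in range(2000)]
ruined = sum(r == 0 for r in runs) / len(runs)
print(f"EV per bet: {ev_per_bet:+.2f}")
print(f"fraction of players ruined: {ruined:.0%}")
```

The expectation is a property of the model over infinitely many plays; a real player has a finite bankroll and only gets one path through it.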

~~~
Ambol
Also the fact that many market players got bailed out by governments when
things went south. They may have been aware that this would happen all along,
which would mean the explicit market game and the real market game are 2 very
different things with very different rules.

------
FeepingCreature
Isn't the first example extremely bullshit?

> A third party asks them to "assume that a coin is fair, i.e., has an equal
> probability of coming up heads or tails when flipped. I flip it ninety-nine
> times and get heads each time. What are the odds of my getting tails on my
> next throw?"

Istm that Fat Tony here gets the right answer to the wrong question. Whether
we _should_ assume this is not part of the question. That this is a useful
thing to ask IRL doesn't change the fact that if you posed this to me IRL and
smugly quoted the ludic fallacy at me for correctly answering your question
I'd be strongly tempted to deck you.

[https://xkcd.com/169/](https://xkcd.com/169/) comes to mind.

~~~
TheOtherHobbes
But the assumption is the point. When you have enough data to suggest your
model is wrong, you should throw away the model and the assumptions it's based
on, not the data.

The fact that your model can be justified as technically correct is
irrelevant. When the data doesn't fit the model any more, the supposed
absolute correctness of the textbook model _blocks your ability to understand
what 's happening._

This isn't even about games, but about whether it's possible to be open-minded
enough to consider that simple textbook "solutions" are not guaranteed to be a
good fit for real problems - especially if you don't understand their
limitations or the assumptions they're based on.

And it doesn't just apply to statistics.

~~~
moring
> When you have enough data to suggest your model is wrong, you should throw
> away the model and the assumptions it's based on, not the data.

In the example, you agreed to assume the coin is fair. That is, you agreed to
stick to the model even if it turns out to be wrong. Throwing the model away,
however strongly the evidence suggests it, just violates the rules you agreed
to.

~~~
greggman3
I'd call it the opposite. There is statistically no such thing as 99 heads in
a row. So when someone says "assume the coin toss is fair, 99 heads in a row
happen, what are the odds the next will be heads", your answer should not be
"50%". Your answer should be "You just asked me a bullshit question, because
there is basically no such thing as 99 heads in a row in the real world. In a
hypothetical world maybe, but in the real world, nope."

Or, to be nicer about it, the proper response is "So, are we talking about the
real world? Because in the real world the odds of 99 heads in a row are 1 in
2^99, which is effectively zero, so I just want to verify you understand that
your hypothetical situation and conclusion have no basis in reality. If you
actually saw 99 heads in a row, then the odds are overwhelming that the coin
is broken and is not fair."
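The "coin is broken" intuition can be made precise with a quick Bayesian sketch. The one-in-a-million prior below is an arbitrary illustration, not a real estimate; the point is that after 99 heads, any non-zero prior on a rigged coin swamps the fair-coin hypothesis:

```python
# Bayesian sketch: even a tiny prior that the coin is rigged dominates
# after observing 99 heads in a row. Exact arithmetic via Fraction.
from fractions import Fraction

prior_rigged = Fraction(1, 10**6)   # P(coin always lands heads): arbitrary
prior_fair = 1 - prior_rigged

# Likelihood of 99 consecutive heads under each hypothesis:
like_fair = Fraction(1, 2) ** 99    # about 1.6e-30
like_rigged = Fraction(1, 1)

posterior_rigged = (prior_rigged * like_rigged) / (
    prior_rigged * like_rigged + prior_fair * like_fair)

print(float(posterior_rigged))  # ~1.0: the coin is almost certainly rigged
```

That is the arithmetic behind Fat Tony's answer: the evidence against the "assume it's fair" premise is about 24 orders of magnitude stronger than any reasonable doubt about the coin.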

------
xondono
I get the idea (although I’m not sure I’d call it a ‘fallacy’), but the
examples are horrible.

------
aaron695
I think The Emperor Wears No Clothes.

This has no meaning.

But it's vague and meaningless enough you can pull it out when you want to
prove something involving statistics or models not working, which they mostly
don't.

~~~
raxxorrax
A case against hubris, but also only one step away from declaring every
written word fallacious.

