
Is Ego-Depletion a Replicable Effect? - gwern
https://replicationindex.wordpress.com/2016/04/18/is-ego-depletion-a-replicable-effect-a-forensic-meta-analysis-of-165-ego-depletion-articles/
======
tlb
Ego depletion is framed as an effect in a particular direction: applying self-
control reduces self-control in the near future. So studies are called
successful if they observe that direction, and unsuccessful otherwise.

But the other direction is a reasonable hypothesis too: applying self-control
increases self-control because the brain regions are activated and ready, or
something.

So you could frame the problem bidirectionally: applying self-control affects
self-control ability according to some time-dependent formula with parameters
that we'd like to estimate.

Framing the problem bidirectionally might reduce publication bias, since every
estimate of the parameters would seem equally valuable.
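A toy sketch of what estimating such a bidirectional formula could look like. The model and numbers here are entirely hypothetical (my own, not from any study): assume the effect at time t after exertion is a * exp(-t / tau), where a negative a means depletion and a positive a means a warm-up effect, and let the data decide the sign.

```python
import math
import random

random.seed(0)

# Hypothetical bidirectional model: exerting self-control at t=0 shifts
# later self-control capacity by effect(t) = a * exp(-t / tau).
# Negative 'a' is depletion; positive 'a' is the warm-up direction.
# Both directions are just regions of the same parameter space.
def effect(t, a, tau):
    return a * math.exp(-t / tau)

# Simulated observations from a "true" depleting effect (a=-0.5, tau=10),
# measured at several delays (minutes) with a little noise.
times = [1, 5, 10, 20, 40]
obs = [effect(t, -0.5, 10.0) + random.gauss(0, 0.02) for t in times]

# Crude grid-search least squares over both signs of 'a'.
best = None
for a in [x / 100 for x in range(-100, 101)]:
    for tau in [x / 2 for x in range(2, 81)]:
        sse = sum((effect(t, a, tau) - y) ** 2 for t, y in zip(times, obs))
        if best is None or sse < best[0]:
            best = (sse, a, tau)

_, a_hat, tau_hat = best
print(a_hat, tau_hat)
```

A study analyzed this way reports an estimate of (a, tau) either way, rather than a binary "depletion found / not found", so a result near a = 0 or with a > 0 is just as publishable a data point as a negative one.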

~~~
Cybiote
The common mistake that many (but not necessarily you) make is to assume that
conscious effort is proportional to energy use in the brain. The idea that
willpower is some kind of store in the brain that can be depleted is not as
parsimonious as other hypotheses.

Cognizing is a costly activity; whether daydreaming or thinking deliberately, a
lot of stuff is going on. So why does deliberate thinking in particular feel
difficult?

The least terrible analogy I can think of is a distribution network. Postal
systems, railways, the internet: whether they are at 20% or 100% utilization,
the marginal costs don't budge much. The other key concept is that the brain
both learns and has to figure out how to learn efficiently. Reward schedules
and surprises play prominently into this.

The hypothesis: some things feel difficult in a way not directly related to
energy use, because attention is what's key, and attention is better thought of
as a resource to be allocated and managed.

The questions are: why is it so hard to maintain focus on abstractions for
extended periods of time? Someone is going to mention the ADHD hunter-ancestor
theory as having selected for extended focus, but even then, I will say that
their attention still did not stay as fixed, or as devoid of obvious rewards,
as some abstract thinking requires. Is focus trainable? Why are some people
better at not responding to immediate rewards? Is that trainable?

~~~
visarga
I think conscious effort in itself is not difficult. What is difficult is
lacking acceptable options or strategies to solve the problem at hand. That is
what causes anxiety and fatigue.

To put this in terms of machine learning, the brain is doing reinforcement
learning, maximizing a value function. But when the value function is very low
and the available options (strategies) are too limited, the system can't output
a satisfactory action for the next step, so it interprets this as suffering.

It's all about availability of choices, not the actual effort. What's
depleting is not energy in the brain but the perception of how well we can
handle the situation. The state of flow is also related to conscious effort,
but in flow there is always a good outlook, there are many options available,
many strategies, creative possibilities of combination, and consequently, it's
a joyful activity.

~~~
gragas
That is a near-perfect description of a feeling I often have when debugging
without a clear path.

------
onetwotree
This really strikes me more as a problem of incentives than "lol psych
researchers need to learn statistics".

The incentives in this situation are funding and publication, both of which
require positive results. This seems tied to the increasing monetization of
science, driving it away from a principled search for the truth.

We also see this with patents. My girlfriend has her first lab job this
summer, and they had a day-long seminar with the university's patent
foundation on methods to ensure that nobody stole their ideas. This leads to a
lot of secrecy (for example, this particular patent office doesn't allow
researchers to store anything in the cloud, which speaks to a lack of technical
understanding, but also to an intense paranoia). It also dramatically reduces
the capacity for sharing basic lab know-how - there's nothing remotely like
Stack Overflow for scientists.

This issue is a bit near to my heart, as the company I work for was originally
founded to help researchers share data in a secure way. It turned out that
there wasn't any demand for that, because, as our CEO put it, "Everybody
thinks they're going to win the damn Nobel Prize". So we pivoted and now sell
mostly to financial and tech companies that are serious about protecting user
data.

------
mysterypie
The main thing I got out of the article was that I should ignore results from
the fields of psychology or sociology for the foreseeable future. The standard
of proof seems very low.

I never fell for Tony Robbins-type pop psychology, but I loved books like
"Influence: The Psychology of Persuasion" by Robert Cialdini and "The Tipping
Point" by Malcolm Gladwell.

I thought I was getting useful, non-obvious, actionable information. I'm
disappointed that the lessons from those books might be BS as well.

~~~
fdsaaf
> I should ignore results from the fields of psychology or sociology for the
> foreseeable future

Yes, you should. The problem is that some of these bad results have become key
political tenets. Try voicing skepticism about the reality of stereotype
threat and see how long it is before HR finds some excuse to fire you.

If you haven't noticed, there are powerful social pressures in the Bay Area to
go along with certain kinds of reality denial. The more reality suggests that
dominant Bay Area social beliefs are false, the louder supporters reiterate
these beliefs and the greater the ferocity with which they punish dissent.

~~~
popra
> HR finds some excuse to fire you

Which shouldn't be surprising, because unless carefully expressed, skepticism
towards the stereotype threat effect can be interpreted as a justification for
stereotyping, especially when the skepticism is expressed in the context of
political tenets.

~~~
fdsaaf
Stereotypes _are_ accurate, though. People are good pattern-recognizers, and
the characteristics that we use to form stereotypes do convey useful
information.

Pretending that stereotypes are factually bogus is wrong. It creates
resentment. It's much better to accept that stereotypes exist for a reason,
but emphasize that it's a grave injustice to let stereotypes hurt our
evaluation of people who defy these stereotypes.

> skepticism towards the stereotype threat effect can be interpreted as a
> justification for stereotyping

Current HR policies go beyond requiring fair treatment of individuals as
individuals. Every sane person supports doing that. HR demands that truths
about the natural world go unstated. HR demands that we make untrue statements
about the world. That's anathema to anyone technically minded.

~~~
popra
I like to think of myself as technically minded. However, your stereotype of
technically minded people seems to suggest that I must either accept the
"natural truth" of stereotype accuracy or forfeit my technically minded view of
myself. I will do neither.

The claim that one can engage in stereotyping people while exhibiting fair
treatment of the same people as individuals is dubious, to say the least, given
the definition of stereotyping.

Once engaged in stereotyping, no amount of post factum fair treatment of the
individual will make up for the unfairness.

Later edit: People are imperfect pattern recognisers; in fact our pattern
recognition generates a significant number of false positives. From an
evolutionary perspective this was acceptable at a time when tigers could be
lurking in the bush. The penalty for running from an imaginary tiger in the
bush is insignificant compared to the penalty of not running from an undetected
tiger.

------
joeyrideout
As a layman when it comes to statistics and academia, this is the first I've
heard of "publication bias". It seems like a pretty big problem at first
glance, especially if it recurses and calls into question the meta-analyses
being done to investigate the presence of the bias itself. Sounds like a
massive headache.

If this problem is as big as it sounds, what are the repercussions?
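To get a feel for the size of the problem, here's a minimal simulation (my own toy numbers, nothing from the article): the true effect is exactly zero, every "study" is run honestly, and only significant positive results get "published". The published literature still shows a large effect.

```python
import math
import random
import statistics

random.seed(1)

# Simulate a literature where the TRUE effect is zero. Each "study" draws
# n samples per group; significance is a z-test on the group difference
# (sigma is known to be 1 here, so |z| > 1.96 is an exact 5% test).
def run_study(n=20, true_effect=0.0):
    a = [random.gauss(true_effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(2 / n)
    return diff, abs(diff / se) > 1.96

all_effects, published = [], []
for _ in range(5000):
    d, sig = run_study()
    all_effects.append(d)
    if sig and d > 0:  # journals favor positive, significant results
        published.append(d)

mean_all = statistics.mean(all_effects)   # near zero: the honest answer
mean_pub = statistics.mean(published)     # large: the published "effect"
print(round(mean_all, 3), round(mean_pub, 3))
```

The filter alone manufactures the effect; no individual researcher has to do anything wrong.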

~~~
Houshalter
Oh it's far worse than that. IIRC about a third of studies fail to replicate,
and in psychology it might be higher. In fact we even have a control group for
science: a fake field of research that spends its time studying phenomena
which can't possibly be real, the paranormal researchers. They produce tons of
positive results anyway. See _The Control Group is Out of Control_ :
[http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...](http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/)

~~~
pknight
If your view is a priori that psi effects are impossible, all you will see in
the studies is 'it must be invalid, flawed, or faked'. Yes, this makes people
doubt scientific practices in general. You will always feel you are winning
that argument, no matter what criteria those scientists meet.

But psi researchers these days jump through far more hoops, run far more
replications, spend far more time looking for weaknesses and alternative
explanations, garner far more scrutiny and second guessing. To call that a
'fake field of research' when it is more rigorous is disingenuous.

Negative results do get published regularly, and more replications take place.
It is quite hilarious really, because it makes people panic about what is
wrong with the methods we apply across the board in science. Skeptics just
raise the bar arbitrarily and shift the goalposts as a means of sweeping
things under the rug while calling it scientific rigour.

We have also gotten to the point where the fact that we commonly have failed
replications (across all social science) is taken to mean the whole premise of
a line of research must be completely flawed. But we should expect failed
replications - for one, it's impossible to repeat a set of circumstances
exactly. Many replications aren't true replications.

~~~
TeMPOraL
> _But psi researchers these days jump through far more hoops, run far more
> replications, spend far more time looking for weaknesses and alternative
> explanations, garner far more scrutiny and second guessing. To call that a
> 'fake field of research' when it is more rigorous is disingenuous._

But that's the point of Scott's article. We have more than enough reasons to
believe the whole field is bullshit (did you see a working practical
application of that research?), but because it does all the moves and rituals
of Real Science, it forms a perfect baseline for how many positive results we
can expect with _pure_ cargo-cult science.

~~~
pknight
Explaining away thousands of studies that happen to study controversial
effects with unknown mechanisms in one broad stroke is not demonstrating
intellectual rigour, nor does it make you come across as upholding 'real
science'.

The explanation that psi researchers are just good at finding the appearance
of signal in noise, small as it may be, is fine, but then personally, I'd
like to see the same vigour applied to explaining what the signal actually is.
And people who have a firm belief that any 'psi' explanation can't possibly be
true (a pretty arbitrary notion at that, as psi would just follow conventional
physics and adhere to normal biological mechanisms) just aren't going to accept
any other explanation. So they look for issues with the statistics used.

The notion that the whole field is BS, as you put it, requires all the
researchers involved to be utterly mad, stupid, or deceitful. That is a claim
easily made by skeptics, a thing they have no trouble agreeing to. That view
is not born of scientific honesty; it's just another form of cargo-cult
thinking.

------
EGreg
After reading this whole article I do feel a bit tired... does that count as a
meta meta study? Just kidding! Let's eat some cookies!

Richard Feynman was right... this whole field of statistical studies should be
called something other than science.

[http://calteches.library.caltech.edu/51/2/CargoCult.htm](http://calteches.library.caltech.edu/51/2/CargoCult.htm)

------
cleeus
love this quote:

> Every researcher knows about publication bias, but the practice is so
> widespread that it is not considered a serious problem.

~~~
algorias
It is considered a serious problem by many, being a key part of the whole
replication crisis. The idea that studies need to be preregistered is slowly
gaining ground, I feel.

~~~
Noseshine
It seems it's not considered serious enough for decisive action; progress is
slow on that front. "Knowing" and "doing" are different. Most people who smoke
or eat badly are aware that it's a (serious) problem, but that doesn't
necessarily lead to action. So the fact that people "are aware" of something
doesn't tell the whole story.

------
facepalm
While the picture seems bad for Ego-Depletion, I find the counter arguments
put forward in the article not wholly satisfying, either.

So they tried to replicate it in a large effort and failed, but they didn't
use the same test as in the original paper. How is that a replication study? I
think to properly fail a replication study, you should use the same test as in
the original study. In the replication they used many participants, but
developed a cheaper test to administer (wholly computer-based).

As it is, they have another indicator that Ego Depletion might not be real,
but just as well there could be some other effect that explains why the
replication failed.

Likewise for meta-analysis: it looks good on paper. How could an average of n
studies be wrong? Surely the mass of papers studied will cancel out all the
flaws of the individual studies? But it seems to me a meta-analysis would
overestimate the impact of bad studies, as they are easier to produce, so
there are probably more of them.

A meta-study is certainly an interesting data point, but I feel it cannot
always be the end of the story. It can point out the need for deeper questions
or further research, but it is not proof on its own.
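That worry can be made concrete with a toy fixed-effect meta-analysis (all numbers invented for illustration): a few large honest studies of a small true effect, plus many small studies that only appear in print when they hit significance. The weighted average inherits the small studies' bias.

```python
import math
import random

random.seed(2)

# Fixed-effect meta-analysis: inverse-variance-weighted mean of estimates.
def meta(estimates_and_ses):
    ws = [1 / se ** 2 for _, se in estimates_and_ses]
    return sum(w * d for (d, _), w in zip(estimates_and_ses, ws)) / sum(ws)

true_d = 0.1  # the true (small) effect
studies = []

# A few large, unbiased studies (n=200 per group; sigma=1, se=sqrt(2/n)).
for _ in range(5):
    se = math.sqrt(2 / 200)
    studies.append((random.gauss(true_d, se), se))

# Many small studies (n=15 per group) that only get "published" when
# they cross the significance threshold in the positive direction.
small = 0
while small < 30:
    se = math.sqrt(2 / 15)
    d = random.gauss(true_d, se)
    if d / se > 1.96:
        studies.append((d, se))
        small += 1

meta_est = meta(studies)  # pulled well above the true 0.1
print(round(meta_est, 3))
```

Averaging doesn't cancel the flaws here; it averages them in, and because biased small studies are cheap to produce, they can outnumber the honest ones.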

------
apatters
So... do I eat the chocolate cookies, or not? This is so difficult!

~~~
Noseshine
By the way, the marshmallow study has been redone and reinterpreted. It was
found that the children's behavior is heavily influenced by their life
experience: when they live in an environment where they can't trust adults,
they eat the marshmallow sooner, quite rationally. So whether a child waits
for the promised future return is not an inherent trait of the child but of
its environment.

~~~
agarden
Oh, wow. That is such an obvious explanation, now that you point it out. Where
can I read more?

~~~
Noseshine
This is one link, but I think there probably is more follow-up research on
that study:

[http://www.sciencedirect.com/science/article/pii/S0010027712...](http://www.sciencedirect.com/science/article/pii/S0010027712001849)

I've seen an interview with a researcher who said something along the lines of
"we take reliability into account" - but what he then said suggests a very
narrow interpretation: the experimenter must be well known to the children and
have played with them before the experiment, and "the marshmallows are right
there in front of them, so it's not a reliability issue". Personally, I don't
think that is even remotely enough to account for the effect; it's a somewhat
naive take on the issue, especially the second part, which relies solely on
_reason_. Which doesn't even work in adult scientists to counter social
effects.

------
ktRolster
I'm glad to see sociology is becoming more rigorous.

