Bayesian analysis of ego-depletion studies favours null hypothesis (wiley.com)
116 points by mkempe 6 months ago | 84 comments



According to marines, willpower depletion is BS. This is anecdotal, sure, but I believe in momentum. If you have momentum, at some point you don't want to fail at any willpower task, because you are on a winning streak.


I agree with momentum. I've been using the analogy for years.

There are only so many points in life where it changes direction. Most of the time you're doing what you did the past months/years, whether that's staying home getting drunk or working non-stop. Sure, you can change direction, but it's not easy to fight inertia. You've got to apply some force to change direction!


Interesting. There is actually a concept from behavior analysis called "behavioral momentum." [1]

1: https://en.wikipedia.org/wiki/Behavioral_momentum


Reminds me of how the hardest part of many tasks is simply getting started.


Here's the anecdotal video: https://www.youtube.com/watch?v=OyakvZgU_gk Never mind, it was former Navy SEALs who said it.


Indeed.

Also, if you transform what you want, you reduce the need for willpower.


More likely you will hit physical exhaustion at some point before being "will depleted"


So perhaps the real metric is "motivational momentum"?


I think the real metric is "it's complicated" and that we shouldn't be hasty to draw over-generalized, inaccurate analogies.


I can agree with that.


You are confusing the [in]ability to take more and more decisions with momentum/inertia.

Often momentum is kept by NOT taking more difficult decisions.

Also, if you don't trust psychologists you should trust the marines even less.


I'll have to look more closely at the paper, but am very surprised by this result! I've always figured the effect sizes were quite small, and there are good researchers who don't buy its main explanations, but the idea of ego depletion shows up in a lot of areas under different names (e.g. "resource depletion in working memory").

Pretty contentious area of research, so I'm sure this will be some nice fuel for the fire.


Not disputing the paper or the scientific method applied here, but how is "bayesian factor" different than p-values?


In a nutshell: if the p-value is below some conventional threshold (in psychology it's usually 0.05), you can conclude that the null hypothesis (no effect) is likely wrong and that something else must be the case. However, if p is above the threshold you can't conclude anything, because either there is no effect or the data were not sufficient to show the effect. So the p-value method (formally known as null-hypothesis significance testing, NHST) can only ever be used to reject the null hypothesis, never to support it.

This is where Bayes factors are different. In that approach, you formulate not just one hypothesis but two, then test how well each hypothesis is supported by the data. Three conclusions are possible: one hypothesis receives more support, the other hypothesis receives more support, or there is not enough evidence to conclude anything with sufficient certainty.

Presented this way, the Bayes factor approach appears strictly superior to NHST. But there are also complications that make Bayes factors tricky (related to how you select and formulate the hypotheses), and some statisticians go as far as to say that Bayes factors are obvious nonsense (e.g. Andrew Gelman). The truth is probably that they are neither a panacea nor nonsense.
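To make the contrast concrete, here is a minimal, made-up sketch in Python (not from the paper): a point null "the coin is fair" against an alternative that puts a flat Beta(1, 1) prior on the bias. The data are invented purely for illustration.

    # Hypothetical illustration: H0 says theta = 0.5 exactly; H1 puts a flat
    # Beta(1, 1) prior on theta. Data are made up (53 heads in 100 flips).
    from math import comb
    from scipy.special import beta as beta_fn
    from scipy.stats import binomtest

    n, k = 100, 53

    # Marginal likelihood of the data under each hypothesis.
    m0 = comb(n, k) * 0.5 ** n                    # H0: theta fixed at 0.5
    m1 = comb(n, k) * beta_fn(k + 1, n - k + 1)   # H1: averaged over the Beta(1, 1) prior

    bf01 = m0 / m1                                # Bayes factor in favour of H0
    pval = binomtest(k, n, 0.5).pvalue

    print(f"p-value = {pval:.2f}")   # ~0.62: NHST can only say "failed to reject"
    print(f"BF01    = {bf01:.1f}")   # ~6.7: the data actively favour the fair coin

The particular numbers don't matter; the point is the asymmetry: the p-value here is uninformative, while the Bayes factor quantifies evidence for the null.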


That makes a lot of sense, but can't you model two null hypotheses using the same data in some clever way?


Well, for one thing, it doesn't carry that subconscious bias of interpreting anything below 0.05 as guaranteed significant, as people tend to do with p-values; it's a softer metric that reflects your belief in one hypothesis or the other, given the data. I think that in itself makes it easier to evaluate and reason about.

The fact that you can incorporate (and reflect on) a priori assumptions about the likelihood of one or the other model is another advantage imho.



I don't see "Bayesian factor" in there.


It feels like the fact that this is a Bayesian-flavoured study is some kind of sign that people (i.e. the people upvoting, not necessarily the researchers) are actually taking that silly Bayesian-vs-frequentist political war seriously. (Which I don't think is advisable.)


I agree, there is no reason to mention "bayesian" in the title other than using it as a hype term like "blockchain" or "deep learning".

Also, the real difference is whether they test a "null hypothesis" vs testing an "actual hypothesis". Like almost all psych, this falls in the first category.


You say this but then you go read a paper with a cruddy t-test and then injure your eyes with the spasmodic rolling that it induces, like I did.

It's kinda remarkable how many people don't actually use modern bayesian analysis techniques, which is why it's such a policy war. It's the difference between "older statistical methods uninformed by the glut of modern processing power" and "things which require a computer to do well, and preserve uncertainty."


I don't think it's an issue of old versus new methods. Daniel Lakens has a great string of blog posts / publications on how a lot of the same things can go wrong with Bayesian analyses [1], and how p-values can be viewed as a "poor man's Bayesian updating function" [2].

[1]: http://daniellakens.blogspot.com/2017/03/no-p-values-are-not...

[2]: https://daniellakens.blogspot.com/2015/11/the-relation-betwe...


When I do not have a processor easily capable of Bayesian MCMC sampling, I'll consider using that method.

But I do and I guess this means I am rich? Sure.


>It's kinda remarkable how many people don't actually use modern bayesian analysis techniques, which is why it's such a policy war. It's the difference between "older statistical methods uninformed by the glut of modern processing power" and "things which require a computer to do well, and preserve uncertainty."

There's an old joke: There are two types of statisticians. Those who use Bayesian techniques, and those who use both.


Here is what they say they did:

>"A Bayesian analysis was performed to test the relative likelihood of observing our results given the null hypothesis (i.e., the absence of any depletion effect) versus the likelihood of observing our results given the alternative hypothesis (the presence of a nonzero effect)."

What I mean is that I have no interest in the result of this "test" at all. First of all, they are not directly testing for the existence of a "depletion effect". From the paper:

>"Participants in both conditions were provided with a lengthy passage of text. Those in the Control condition simply crossed out with a pencil all occurrences of the letter ‘e’ in a passage of text, whereas those in the Depletion condition were given the following instructions: ‘cross off the letter “e” every time it appears with the following exceptions: 1. Do not cross out the “e” if it is adjacent to another vowel (e.g., friend); 2. Do not cross out the “e” if it is one letter away from another vowel (e.g., vowel); 3. Do not cross out the “e” if the word has 6 letters (e.g., “there”); 4.Do not cross out the “e” if it is the third to last letter (e.g., customers); 5.Do not cross out the “e” if there are double letters in the word (e.g., “hello”)’. Participants were asked to continue the writing task for six minutes, after which the experimenter instructed participants to discontinue.

[...]

Participants were given a sheet on which 30 anagrams were listed. Participants were asked to complete as many anagrams as they could within five minutes. After the anagram task, participants were asked to refer to a Likert scale to rate the difficulty of the initial crossing-out letters task (1 = not at all difficult; 7 = very difficult). The number of correctly completed anagrams was recorded for each participant."

So "depletion effect" means "got fewer anagrams correct after inspecting text closely for 6 minutes". How about their eyes just got tired from squinting closer at the page? Or they were primed to think/write slower since they had to cross out fewer e's? I'm sure you can come up with more alternatives.

Second of all, who is actually predicting exactly zero effect of crossing out the letter e more or less often on filling out anagrams? It seems totally implausible to me there is no effect, especially if you go for longer than 6 minutes. Because that is what they are comparing: exactly zero effect vs any other size effect. This is not "small effect" vs "large effect". Literally any way they can mess up the experiments will tend toward non-zero effect. It is only a matter of sample size.
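As a rough sketch of that sample-size point (the effect sizes below are hypothetical, nothing here is taken from the paper), a standard power calculation shows how any fixed non-zero effect becomes "detectable" once n is large enough:

    # Hypothetical power calculation for a two-sample t-test at alpha = 0.05.
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    for d in (0.5, 0.2, 0.05):   # medium, small, and practically meaningless effects
        n_per_group = power_calc.solve_power(effect_size=d, alpha=0.05, power=0.80)
        print(f"d = {d:.2f}: ~{n_per_group:.0f} participants per group for 80% power")
    # Even d = 0.05 is reliably rejected against an exactly-zero null once each
    # group has several thousand participants, so rejection alone says little.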

Bayes factors vs p-values is irrelevant to the actual problems with these studies.


I always get the feeling from reading academic material on psychology that people are trying to apply the scientific method on top of unscientific principles. It feels rather like alchemists trying to explain fire via phlogiston: an attempt to quantify high-level phenomena when you don't understand any of the lower levels.

All the data comes from psychology students at a college in the US who were either able to use their participation toward a course requirement or were given extra credit. The method by which they generated and measured depletion seems adequate, but all the data is self-reported, based on how the test subjects felt.

The statistics give evidence linking the amount of generated depletion and the reported depletion, but no insight into what mechanism actually causes depletion. What use is this? Why build a map when we cannot effectively observe the territory?


It might not seem as clear cut as other natural sciences (which also get fuzzy once you go into the details) but there is credibility in psychological reasoning. The mechanisms that cause a certain behavior are represented as models in psychological research. Such models must, by design, simplify the real mechanisms. But, as opposed to the real mechanism, they can be discussed and disproven, which is what apparently happened here.


>It might not seem as clear cut as other natural sciences (which also get fuzzy once you go into the details) but there is credibility in psychological reasoning.

Any "fuzziness" in the other natural sciences (thinking biology, chemistry, physics, astronomy, etc.) is usually limited by current technology or measurement methods. Newton's classical mechanics were observably true for a long time before their flaws started showing. In most situations, classical mechanics is all you need to accurately model a system. Is there any such well accepted theory that semi-accurately predicts psychological phenomena in a similar fashion?

I agree that sometimes psychological findings seem credible because they align with our personal models of how we think people work, but this gives you nothing in terms of the scientific method. Just because a model is intuitive doesn't mean that it is any more true.

>Such models must, by design, simplify the real mechanisms.

What model? They essentially have a single claim backed by evidence. I don't see what predictive power their conclusion brings. We don't understand what "ego-depletion" is; how can we pretend to measure it? How can this "measurement" of something that we don't understand give us any more information about human psychology?

What real mechanism have they suggested as an explanation for this phenomenon? I didn't see one.


When papers drop below the 50/50 threshold and are more likely to be false than true, the entire discipline becomes worthless.


Modern experimental science is just garbage on the whole. Psychology has the disadvantage of a very poor theoretical foundation, making a lot of its pronouncements seem hokey and fabricated. But on the whole I don't think the prevalence of poor measurement and (especially) bad statistics is any worse there than it is in, say, biology. Unreproducible experiments are the rule in modern science, not the exception.


While theory indicates that many (or even most) published empirical findings are false [1], this doesn't make experimental science garbage in its entirety. Although the signal from individual papers is noisy, some results do get confirmed repeatedly by independent experiments over time. Those results are likely to be correct. Theory and multiple independent experimental methods should converge on the same result as work continues; if this doesn't happen, it's a red flag. Working scientists have to constantly estimate and re-estimate their confidence in a result, but progress still (so very slowly) gets made.

Tracking research developments this way takes a lot of time and expertise. Sometimes unpublished knowledge or insider access are needed. Unfortunately, news reports about science rarely provide the context of prior work and competing hypotheses that is necessary to estimate the veracity of a result. Simple, sensational statements apparently get more clicks. This is not a good situation and leads to the impression that entire fields are garbage. Any effort to fix this communication problem is greatly appreciated.

[1] http://journals.plos.org/plosmedicine/article?id=10.1371/jou...


Depends on the discipline. I haven't really heard of frequent non-reproducible results in physics, chemistry, and geology. It seems to be more of a medicine-biology-psychology problem.

Personally I think psychology shouldn't even be a separate field from neuroscience. Psychology is, after all, merely an emergent phenomenon from neuroscience, but culture can muddy whether psychological results are truly indicative of how humans think or rather of how a culture has conditioned us to behave in a certain way. I can't remember his name (and I hope someone does and links his wikipedia article), but there was a very well-known and controversial figure within the psychology/psychiatry community who believed that psychology was only slightly better than pseudo-science, and that oftentimes it merely studies behavior and pathologizes others based on our culture rather than science. I just think we don't know enough about neuroscience to really make psychology a very legitimate field of study - it's like trying to perform chemistry without the knowledge of electrons.


Science isn't driven by principles like that. Nothing is built from the ground up, and data that isn't formally understood or modeled is relied upon often. Though we think of it as a dead end today, the experiments and thinking of alchemists (like Isaac Newton) are part of the history of science and were essential to the development of e.g. chemistry and physics.


Data collection on a lot of research papers today: quiz on Mechanical Turk


While most modern psychology seems to be, well, kind of bullshit, the problem isn't with the idea of observing behavior and trying to deduce underlying principles from that.

How do you think that we arrived on those lower-level phenomena that you're lauding? It wasn't by getting an electron microscope and a particle collider from God and starting from there.

And we disproved phlogiston without use of lower-level principles either -- just better measurement.


> most modern psychology seems to be, well, kind of bullshit

Based on what? What better source do we have that contradicts it?

Identifying, defining, and measuring psychological phenomena is much more difficult than doing the same with physical phenomena, for obvious reasons. That leads to lower accuracy. Should humanity not even try, leave these questions unanswered, and suffer the consequences?

While there are many failures and there are the flaws associated with every human endeavor (and that applies to other fields too - computer security comes to mind), modern psychology successfully treats many conditions and helps a very large number of people. Consider the world before it, when we didn't understand depression ('stop being lazy!' 'just buck up!'), schizophrenia (demonic possession), bipolar disorder, anxiety, PTSD (General Patton assaulting a hospitalized soldier), etc. Is all that knowledge and treatment just "bullshit"?

It's sort of common on HN to toss off derisive remarks about psychology - or any field besides what HN readers personally know well, such as computers, math, and natural sciences - but let's hold ourselves to a higher standard: What do we really know?


>Based on what? What better source do we have that contradicts it?

Based on the "psychology replication crisis" obviously.

https://en.wikipedia.org/wiki/Replication_crisis#In_psycholo...

Psychology is a big field, and often when you hear that claim it's complaining about social psychology, not clinical psychology.


Phlogiston was actually disproved by applying the lower level principle of weight to the supposed phlogiston reaction.

The idea was that phlogiston was released when things burned. This was disproved when it was observed that some things gained weight when burned. Either the concept of weight was wrong, or the concept of phlogiston was wrong.

The scientific method requires you to reduce your reasoning into the lowest-level possible discrete arguments. In this case, scientists were forced to abandon phlogiston as it worked on a very high level, but did not integrate with lower level observations.

My gripe boils down to a desire of psychologists to provide sweeping high-level conclusions without even being able to describe possible lower-level phenomena.


Sometimes you don't need knowledge of the lower-level phenomena. Darwin's arguments for evolution by natural selection were persuasive even though neither he nor contemporaries could conceive of what the chemical basis of gene transmission would look like.


But they did know that genes were transferred. Modern psychology feels to me like a field where researchers all want to be the one who gets to the big revelation first, and so they abandon the typical scientific pursuit of building a standard model.

Newton came up with classical mechanics by simply observing mechanical systems. Darwin and friends came up with the idea of gene transmission by observing the offspring of plants and animals. The difference between them and modern psychologists is that psychologists make very little attempt to explain the underlying phenomena.

Without any explanations, modern psychology is little more than a series of observations. Science is a process, and I feel that you can't claim real knowledge of things if you can't explain them.


Data collection in the soft sciences definitely turns research into hard mode, but that doesn't mean all findings are bunk.

https://vimeo.com/118188988


What the hell are you reading?

By your reasoning no sciences should exist until we completely understand the mechanisms at a lower level.

Plenty of phenomena are observed and even experimentally manipulated all the time without understanding mechanisms. This happens in lots of biology all the time -- infectious disease is one prominent example, but there are thousands of others you could pick. This also happens with various phenomena in physics and chemistry, and is routine in astronomy (minus the experiments, perhaps).

The undergrad student example is an armchair criticism, but think about it carefully: if the process you're studying shouldn't vary across populations in a fundamental way, why does it matter whether you use undergrads or 40-year-old engineers?

Ego depletion is not such an unreasonable hypothesis. People perceive many things as effortful, and become exhausted. Try running a marathon for example, and tell me there's no such thing as fatigue.

A similar thing is also reported and observed for temptation. This is extremely common, so much so that people here are variously posting that the presence of the effect is obvious, and its absence is obvious. It's a bedrock idea invoked by all sorts of people all the time in explaining unhealthy behavior. It's also not unreasonable to assume, for example, that glucose in the brain is a limited resource, and that certain pathways are involved in certain types of tasks more, and might need rest. But you don't need to study the brain to examine whether depletion exists as a phenotype.

The effect in question is about the depletion idea, period, not about the mechanism. And you don't need a mechanism to study whether or not it exists.

Your final sentence betrays the problem in your reasoning: various lines of evidence, including this paper but not limited to it, and involving all sorts of behavioral tasks, suggest that the depletion effect does not exist. So what insight is there to be had in a mechanism that is not present? It's an illusion. This seems nontrivial to me.


> What the hell are you reading?

Uncivil attacks will get you banned here. Please don't do that.

Also, would you please stop bulk-creating HN accounts? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html. HN is a community. If users don't have some consistent identity for others to relate to, we may as well have no usernames and no community at all. That would be quite a different kind of forum. Anonymity is fine, and throwaways for a specific purpose are ok—just not routinely.

More here: https://hn.algolia.com/?query=by:dang%20community%20identity...


ELI5?


Basically, there was a theory that exercising self-control would "deplete" one's "self-control reserves". The idea was that exercising your self-control, to, say, avoid junk food, would temporarily decrease your ability to resist that temptation again.

This paper reports that not only were they unable to verify any of the prior research into this so-called "ego depletion" effect, but they found more persuasive evidence suggesting that the theory is false.


This is reassuring. Every time someone told me this I figured it had to be bullshit. I think people used it as an excuse to make bad decisions.


...and you're exemplifying confirmation bias right there, which is the whole reason this happened in the first place.


Having an opinion that is later vindicated is not the same thing as confirmation bias.


Gross. It sounds like a total failure to correctly model reality.

Like measuring how blue the sky is, based on how much "blue" is left in the universe. The idea they attempt to quantify is improperly described in terms of qualities. e.g. Do colors even work that way?

(...and keep your planck unit, M theory, quantum field eigenstate, history-of-the-big-bang, age of the universe remarks away from this hypothetical example, please. You know what I mean when I describe the sky as either blue or not blue in the colloquial sense.)


Well of course it sounds ridiculous when you use an absurd analogy. A more reasonable analogy was something like muscle fatigue, or even mental fatigue, and it seems reasonable to believe that willpower can be depleted in a similar way. In fact, there were many published studies consistent with this hypothesis (as the abstract states).


I disagree.


Ego depletion refers to the idea that self-control or willpower draws upon a limited pool of mental resources that can be used up [1]. Previous studies have found that it is real. This study did a Bayesian statistical analysis and found that the evidence favours there being no such effect.

[1]: https://en.wikipedia.org/wiki/Ego_depletion


To give some extra context: this is another study confirming the "replication crisis"[1] in psychology and social psychology (and other sciences as well, but these are the most prominent fields) where longstanding results that were thought to be rock-solid are falling like dominoes. Basically, the science backing, like, half of all TED talks and most episodes of Radiolab is crumbling before our eyes.

[1] https://en.wikipedia.org/wiki/Replication_crisis


It seems like the least competent people to be designing and analysing psychology studies are psychology researchers. Statistics should be the number one skill required. The psychology part of these high-profile studies is usually trivial enough that anyone with no psychology background could have thought of it, or at least easily understand it. I don't really see the need for a specialist psychology researcher to do this kind of work. Maybe we need a new field - "studies of interesting everyday phenomena" - that's mostly about doing statistics right.


The irony is that a lot of statistics was actually developed in psychology and allied disciplines without people even realizing it. Meta-analysis, for example, really developed into its modern form in psychology, as a way of examining psychotherapy effects (even though the basic idea was around beforehand). Deep learning models, too, have their roots in psychological models. They're called neural networks, but the models are more psychological than neurobiological per se, and have connections to other models that are firmly in the realm of psychological models.


I think you're being downvoted because this might sound overly elitist, but to an extent I agree. As someone who considers himself relatively knowledgeable about statistics (coming from an ML background), I have a hard time taking a lot of social science research seriously, because quite frankly, I am often left frustrated at the lack of rigor in their methodologies. Sample sizes are often way too small and, even more frequently, the samples are unacceptably biased, making it hard to draw significant scientific conclusions. You also have the publication-bias problem, which means that studies may be accepted as "legitimate" due to p-value hacking, or because enough people were studying the problem that someone was bound to get a "statistically significant" result even when there was none.


Ego-depletion is mentioned, not always by name, in hundreds of productivity- and focus-related articles and books published in the last 5 years or so.

Steve Jobs is often used as an example because he wore the same outfit to work every day, purportedly so he would not have to waste any crucial willpower early in the day on such a trivial task as choosing what to wear.


>> Steve Jobs is often used as an example because he wore the same outfit to work every day, purportedly so he would not have to waste any crucial willpower early in the day on such a trivial task as choosing what to wear.

That was to free up time and free himself from an extra decision. Mental exhaustion is not the same as ego depletion. The latter claims that "self-control" is a limited resource, but picking your outfit doesn't really require that anyway.


You have no idea how I really want to dress at work.


I almost qualified my statement with an exception for people like you ;-)


Ego depletion is the idea that the more you stop yourself from doing things, the harder it gets to stop yourself. It sounds like they looked at a bunch of studies on this effect, did some math on them, and concluded that the effect is not likely real.


Ego depletion isn't real.


To me, this is an interesting statement because it highlights the difference between true and useful.

For several years, since I read the book Willpower (an intro piece to ego depletion), I've meditated on the idea. I've decided that even if it's not scientifically true, it can be useful to me.

Some may see it as self-help tripe ("Eat your frogs in the morning!"), but it's a useful mental model for me to push myself to get stuff done sooner, rather than later, because I imagine myself not having the willpower later.

Your choice of word -- real -- was probably casual, but I think a better term would be "scientifically valid". There are many things that aren't replicable that have use to many humans.


I personally believe that a lot of religion falls onto the "useful" side of this divide. Which is why I have no reason to go out of my way to tell people that I think their religion is not literally true. If it works for them, great.

I think this also applies to beliefs like the 10,000 hour hypothesis. I strongly doubt it is true, but some people really like it as a motivational tool. I have no reason to burst their bubble.

The only time I am bothered is if people argue in favor of unverified beliefs to people who are seeking real truth, or especially if people try to legislate such beliefs.


I find it much more useful to break the world down into functions. Then we can simply ask in what contexts these functions are useful.

We can model religions as perceptual sets & then find their uses. I'm designing a religion of absurdity as a cultural experiment in teaching the models I'm developing and can testify to the usefulness of being able to find absurdity in everything. It also helps to not judge things beyond "Does this sustainably contribute to life or not?" I'm still trying to nail down the definitions, but the point is it doesn't matter what we believe, but what we do with those beliefs.


I have similarly softened towards people's religious and spiritual beliefs as I've aged.

Personally, I only get mad when someone tries to claim their belief is objectively true, or tries to prescriptively generalize it for others.


I started using animism yesterday to motivate me to do the dishes, viewing them as the physical embodiment of souls that I, their god, allowed to get moldy/rusty through neglect.

I think when we introduce choice/attention into the theory that we construct reality out of beliefs, emotions, culture (thought/behavior patterns), and intentions, it's possible to find all kinds of useful frameworks.

Maybe the idea of "scientific validity" needs to be adapted for subjective experiences constructed by us.


You win a prize for the most bizarre story I'll read today.

That said, I think one thing you indicate here is helpful - life is a personal experience, and while some people may experience motivation from the exercise of willpower, others may feel they have their willpower 'broken down' over time.

I think the specific context of the experiment really does impact the results.

It is never as simple as: WILLPOWER [<3 <3 <3 </] HP [<3 <3 <3 <3]

etc.


How do you think we could adapt that concept to subjective experiences? As I've gotten older (and learned to manage my mental state better), I've found great benefit from separating "scientifically true" and "useful to lead a good life".

I suppose that makes me a hypocrite in the same way that devoutly religious scientists are. But it makes my life more fulfilled, I experience less anxiety, etc.

It's an interesting ride, whatever it is.


I think one thing that needs to happen is a wider adoption of the pragmatic view described here:

https://plato.stanford.edu/entries/structure-scientific-theo...

Mainstream approaches to science are obsessively objective, denying the fact that everything we do is a subjective experience. So many people learn the idea that things can only be true or false, as opposed to things being dependent on the lens through which they're viewed. We see the same sort of separation in our politics.

I think we need a framework for self-observation. Granted, we have systems in place designed to punish us for being ourselves, so that'll have to change, too.

If you want to learn how to work in a field dedicated to observing & documenting something, then first or concurrently learn to responsibly observe & document yourself. The goal is to maximize self-awareness, which I think everyone can benefit from in some way.


This looks very interesting, thanks!

I agree about the usefulness of such a framework. Subjective experiences draw optimizations (self-improvement), so by observing, we naturally are changed. This does not assume improvement; for that, we have to combine change with a fitness function.


Just because the effect isn't real doesn't mean the system can't be beneficial. There are hundreds of millions of people all over the world who benefit from an Abrahamic worldview.

The word superstition has negative connotations, but perhaps wrongly so.

Also, some aspects of ego depletion are grounded in science. Just because this particular model isn't valid, doesn't mean there's no valid model for ego depletion.


The question remains, then: why is it that the same individual may perform (or not perform) an action while alert, but fail to carry out their desired behavior when they are more tired, or hungry, or otherwise distracted by the effort of maintaining another action state?

In other words, if someone is trying to stop eating candy, why do they draw from the candy bowl in some cases but not others?


It's disappointing to see links to the abstracts on the front page of HN (given the paper itself is behind a paywall).

In this case, they find some non-zero effect, but call it zero because the difference was not statistically significant.

That's likely a reflection of their small sample size rather than evidence for the null hypothesis as the title suggests.

But it's hard to know, or even have an intelligent discussion about it, since the paper itself is behind a paywall.


Sample size is, of course, one of the factors included in any analysis that gives you a p-value, or any other measure of confidence.

In this case, the abstract specifically mentions a "Bayes factor > 25". The Bayes factor is a likelihood ratio: the probability of the observed data under one hypothesis divided by its probability under the competing hypothesis. With no information whatsoever, it is 1. A factor of 25 means the data are 25 times more likely under the null hypothesis than under the theory of ego depletion, so with even prior odds the null ends up 25 times as likely to be correct.
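A tiny worked example (the prior odds below are hypothetical, not from the paper) of how a factor of 25 shifts beliefs:

    # Posterior odds = prior odds x Bayes factor. The prior odds are made up.
    bf01 = 25.0                          # data 25x more likely under the null
    for prior_odds in (1.0, 0.25, 0.1):  # H0:H1 odds before seeing the data
        post_odds = prior_odds * bf01
        p_null = post_odds / (1.0 + post_odds)
        print(f"prior odds {prior_odds:4.2f} -> posterior P(H0) = {p_null:.2f}")
    # Even a reader who started 10:1 in favour of ego depletion (prior odds 0.1)
    # ends up at roughly P(H0) = 0.71 after evidence of this strength.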



> In this case, they find some non-zero effect, but call it zero because the difference was not statistically significant.

Neither the title of the paper nor the abstract says there is zero effect. They accurately and precisely state that the null hypothesis is favored and that the effect is not significantly different from zero.

> That's likely a reflection of their small sample size rather than evidence for the null hypothesis as the title suggests.

That doesn't seem to be the case. From the introduction:

> The Hagger et al. (2010) meta-analysis reported an estimated depletion effect size of d = .62, which we used to conduct a power analysis to establish sample size (n = 33) at 0.80 power. We established n = 35 per condition for our studies, which is slightly larger than the average sample size in the published depletion literature of n = 27 (Lurquin et al., 2016), because the intent of this series of studies was to examine empirically the frequency of null findings in studies similar in size and procedures to those in the published literature. The use of a similar sample size allows for an empirical examination of how frequently the depletion effect occurs when using methods and sample size similar to those in the pre-2013 published literature.

So if their sample sizes are a problem, then it's a problem that the rest of the literature has too.
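For what it's worth, the quoted power analysis can be roughly reproduced; the sketch below assumes a one-sided two-sample t-test at alpha = 0.05 (the excerpt doesn't say which test they used), which lands near the reported n = 33:

    # Rough reproduction of the quoted power analysis. Assumption not stated in
    # the excerpt: one-sided two-sample t-test at alpha = 0.05.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.62,      # d from the Hagger et al. (2010) meta-analysis
        alpha=0.05,
        power=0.80,
        alternative="larger",  # directional prediction for the depletion effect
    )
    print(f"~{n_per_group:.0f} participants per group")   # ~33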

> But it's hard to know, or even have an intelligent discussion about it, since the paper itself is behind a paywall.

Agreed paywalls are unhelpful, but that doesn't mean we can't intelligently discuss what information is available. In addition, I and probably many other people on this forum do have access to many paywalled papers, so exact information is not exactly impossible to obtain.


In Bayesian analysis, it's possible to actually accept the null, rather than just the reject/fail-to-reject dichotomy that is common to frequentist statistical inference.


I'm all for replication and ensuring scientific accuracy - but this does not change my perspective of ego-depletion.

I have observed it in myself, time and time again. If not it directly, something similar enough. Even at a glucose level, various studies have shown a relation between our blood sugar and our ability to tolerate frustration and make decisions.

So while the accuracy of ego-depletion may be in question, I can still benefit from reasoning it to be true and challenging my emotions and expectations with that. If/when a better model or perspective comes around, I'll surely adopt it.

But of course, if this news is in some way helpful or freeing for you, by all means ride that wave. Success matters more than truth.

I've observed that psychology will sometimes split hairs - arguing over what's true and what's truer. Overall this is good - but it's helpful to consider the proliferation of blatant non-truths out there and how far we've come. Sometimes "good enough" is good enough. Fill in the blanks with the facts you can find today and be willing to swap them out for better ones tomorrow.


> I have observed it in myself, time and time again. If not it directly, something similar enough. [Emphasis added by me]

This is the crux. I agree with you that there are certainly some sort of effects related to will-power, self-control, focus, energy levels, and so on. This feels obvious enough, based on everyday experience.

But it seems we don't yet know how to isolate, measure, or analyze these dynamics in a highly rigorous way.

Of course, there is the possibility that the everyday experience is actually caused by some different mechanism we don't understand yet.


>Even at a glucose level, various studies have shown relation between our blood sugar and ability to tolerate frustration and make decisions.

I think it is exactly these kinds of studies that are being questioned. Can they be reproduced?


Stay hungry (e.g. on a low-calorie diet) for a few days and then watch yourself get more nervous and make dumber decisions.


I have fasted for a few days on many occasions. The only ill effect was that it was difficult to fall asleep and the sleep was very shallow on the second and third nights. After that, sleep got better. I have not noticed any dumbness. Subjectively, perception even improves. This is in line with experiences I have read about in books and articles.

Sometimes I use this if I need to drive a long way. I stop eating the day before, which ensures that I never fall asleep behind the wheel and my attention stays sharp the whole time.


Just because you've observed a behavior in yourself doesn't mean it's inevitable.

Also, look up the nocebo effect.



