There are only so many points in life where it changes direction. Most of the time you're doing what you did the past months or years, whether that's staying home getting drunk or working non-stop.
Sure you can change direction, but it's not easy to fight inertia.
You gotta apply some force to change direction!
Also, if you transform what you want, you reduce need for will power.
Often momentum is kept by NOT making more difficult decisions.
Also, if you don't trust psychologists you should trust the marines even less.
Pretty contentious area of research, so I'm sure this will be some nice fuel for the fire..
The fact that you can incorporate (and reflect on) a priori assumptions about the likelihood of one or the other model is another advantage imho.
Also, the real difference is whether they test a "null hypothesis" vs testing an "actual hypothesis". Like almost all psych, this falls in the first category.
It's kinda remarkable how many people don't actually use modern bayesian analysis techniques, which is why it's such a policy war. It's the difference between "older statistical methods uninformed by the glut of modern processing power" and "things which require a computer to do well, and preserve uncertainty."
But I do and I guess this means I am rich? Sure.
There's an old joke: There are two types of statisticians. Those who use Bayesian techniques, and those who use both.
>"A Bayesian analysis was performed to test the relative likelihood of observing our
results given the null hypothesis (i.e., the absence of any depletion effect) versus the
likelihood of observing our results given the alternative hypothesis (the presence of a depletion effect)."
I mean that I have no interest in the result of this "test" at all. First of all, they are not directly testing for the existence of a "depletion effect". From the paper:
>"Participants in both conditions were provided with a lengthy passage of text. Those in the Control condition simply crossed out with a pencil all occurrences of the letter ‘e’ in a passage of text, whereas those in the Depletion condition were given the following instructions: ‘cross off the letter “e” every time it appears with the following exceptions: 1. Do not cross out the “e” if it is adjacent to another vowel (e.g., friend); 2. Do not cross out the “e” if it is one letter away from another vowel (e.g., vowel); 3. Do not cross out the “e” if the word has 6 letters (e.g., “there”); 4.Do not cross out the “e” if it is the third to last letter (e.g., customers); 5.Do not cross out the “e” if there are double letters in the word (e.g., “hello”)’. Participants were asked to continue the writing task for six minutes, after which the experimenter instructed participants to discontinue.
Participants were given a sheet on which 30 anagrams were listed. Participants were asked to complete as many anagrams as they could within five minutes. After the anagram task, participants were asked to refer to a Likert scale to rate the difficulty of the initial crossing-out letters task (1 = not at all difficult; 7 = very difficult). The number of correctly completed anagrams was recorded for each participant."
So "depletion effect" means "got fewer anagrams correct after inspecting text closely for 6 minutes". How about their eyes just got tired from squinting closer at the page? Or they were primed to think/write slower since they had to cross out fewer e's? I'm sure you can come up with more alternatives.
Second of all, who is actually predicting exactly zero effect of crossing out the letter e more or less often on filling out anagrams? It seems totally implausible to me there is no effect, especially if you go for longer than 6 minutes. Because that is what they are comparing: exactly zero effect vs any other size effect. This is not "small effect" vs "large effect". Literally any way they can mess up the experiments will tend toward non-zero effect. It is only a matter of sample size.
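To make that "only a matter of sample size" point concrete, here's a rough sketch using the standard normal-approximation power formula for a two-sample comparison (my own illustrative numbers, not the paper's): against an exact-zero null, even a trivially small true effect is almost guaranteed to reach significance once n is large enough.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample z-test of the
    point null d = 0, for standardized true effect size d."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(d * sqrt(n_per_group / 2) - z_crit)

# A negligible true effect (d = 0.02) is almost never "detected" at n = 100
# per group, but almost always at n = 100,000: the point null loses on
# sample size alone, regardless of whether the effect matters.
print(power_two_sample(0.02, 100))
print(power_two_sample(0.02, 100_000))
```

So a test of "exactly zero vs anything else" mostly tells you how big the study was, not whether the effect is interesting.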
Bayes factors vs p-values is irrelevant to the actual problems with these studies.
All the data comes from psychology students at a college in the US who were either able to use their participation for a course requirement or were given extra credit. The method by which they generated and measured depletion seems adequate, but all the data is self-reported, based on how the test subjects felt.
The statistics gives evidence linking the amount of generated depletion and the reported depletion, but not any insight into what mechanism actually causes depletion. What use is this? Why build a map when we cannot effectively observe the territory?
Any "fuzziness" in the other natural sciences (thinking biology, chemistry, physics, astronomy, etc.) is usually limited by current technology or measurement methods. Newton's classical mechanics were observably true for a long time before their flaws started showing. In most situations, classical mechanics is all you need to accurately model a system. Is there any such well accepted theory that semi-accurately predicts psychological phenomena in a similar fashion?
I agree that sometimes psychological findings seem credible because they align with our personal models of how we think people work, but this gives you nothing in terms of the scientific method. Just because a model is intuitive doesn't mean that it is any more true.
>Such models must, by design, simplify the real mechanisms.
What model? They essentially have a single claim backed by evidence. I don't see what predictive power their conclusion brings. We don't understand what "ego-depletion" is, how can we pretend to measure it? How can this "measurement" of something that we don't understand give us any more information about human psychology?
What real mechanism have they suggested as an explanation for this phenomenon? I didn't see one.
Tracking research developments this way takes a lot of time and expertise. Sometimes unpublished knowledge or insider access are needed. Unfortunately, news reports about science rarely provide the context of prior work and competing hypotheses that is necessary to estimate the veracity of a result. Simple, sensational statements apparently get more clicks. This is not a good situation and leads to the impression that entire fields are garbage. Any effort to fix this communication problem is greatly appreciated.
Personally I think psychology shouldn't even be a separate field from neuroscience. Psychology is, after all, merely an emergent phenomenon from neuroscience, but culture can muddy whether psychological results are truly indicative of how humans think or rather of how a culture has conditioned us to behave in a certain way. I can't remember his name (and I hope someone does and links his wikipedia article), but there was a very well-known and controversial figure within the psychology/psychiatry community who believed that psychology was only slightly better than pseudo-science, and that oftentimes it merely studies behavior and pathologizes others based on our culture rather than science. I just think we don't know enough about neuroscience to really make psychology a very legitimate field of study - it's like trying to perform chemistry without the knowledge of electrons.
How do you think that we arrived on those lower-level phenomena that you're lauding? It wasn't by getting an electron microscope and a particle collider from God and starting from there.
And we disproved phlogiston without the use of lower-level principles either -- just better measurement.
Based on what? What better source do we have that contradicts it?
Identifying, defining, and measuring psychological phenomena is much more difficult than doing the same with physical phenomena, for obvious reasons. That leads to lower accuracy. Should humanity not even try, leave these questions unanswered, and suffer the consequences?
While there are many failures and there are the flaws associated with every human endeavor (and that applies to other fields too - computer security comes to mind), modern psychology successfully treats many conditions and helps a very large number of people. Consider the world before it, when we didn't understand depression ('stop being lazy!' 'just buck up!'), schizophrenia (demonic possession), bi-polar disorder, anxiety, PTSD (General Patton assaulting a hospitalized soldier), etc. Is all that knowledge and treatment just "bullshit"?
It's sort of common on HN to toss off derisive remarks about psychology - or any field besides what HN readers personally know well, such as computers, math, and natural sciences - but let's hold ourselves to a higher standard: What do we really know?
Based on the "psychology replication crisis" obviously.
Psychology is a big field, and often when you hear that claim it's complaining about social psychology, not clinical psychology.
The idea was that phlogiston was released when things burned. This was disproved when it was observed that some things gained weight when burned. Either the concept of weight was wrong, or the concept of phlogiston was wrong.
The scientific method requires you to reduce your reasoning into the lowest-level possible discrete arguments. In this case, scientists were forced to abandon phlogiston as it worked on a very high level, but did not integrate with lower level observations.
My gripe boils down to a desire of psychologists to provide sweeping high-level conclusions without even being able to describe possible lower-level phenomena.
Newton came up with classical mechanics by simply observing mechanical systems. Darwin and friends came up with the idea of gene transmission by observing the offspring of plants and animals. The difference between them and modern psychologists is that psychologists make very little attempt to explain the underlying phenomena.
Without any explanations, modern psychology is little more than a series of observations. Science is a process, and I feel like you can't claim real knowledge of things if you can't explain them.
By your reasoning no sciences should exist until we completely understand the mechanisms at a lower level.
Plenty of phenomena are observed and even experimentally manipulated all the time without understanding mechanisms. This happens throughout biology--infectious disease is one prominent example, but there are thousands of others you could pick. The same goes for various phenomena in physics and chemistry, and it is routine in astronomy (minus the experiments, perhaps).
The undergrad student example is an armchair criticism, but think about it carefully: if the process you're studying shouldn't vary across populations, in a fundamental way, why does it matter if you use undergrads or 40 year old engineers?
Ego depletion is not such an unreasonable hypothesis. People perceive many things as effortful, and become exhausted. Try running a marathon for example, and tell me there's no such thing as fatigue.
A similar thing is also reported and observed for temptation. This is extremely common, so much so that people here are variously posting that the presence of the effect is obvious, and its absence is obvious. It's a bedrock idea invoked by all sorts of people all the time in explaining unhealthy behavior. It's also not unreasonable to assume, for example, that glucose in the brain is a limited resource, and that certain pathways are involved in certain types of tasks more, and might need rest. But you don't need to study the brain to examine whether depletion exists as a phenotype.
The effect in question is about the depletion idea, period, not about the mechanism. And you don't need a mechanism to study whether or not it exists.
Your final sentence betrays the problem in your reasoning: various lines of evidence, including this paper but not limited to it, and involving all sorts of behavioral tasks, suggest that the depletion effect does not exist. So what insight is there to have in a mechanism that is not present? It's an illusion. This seems nontrivial to me.
Uncivil attacks will get you banned here. Please don't do that.
Also, would you please stop bulk-creating HN accounts? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html. HN is a community. If users don't have some consistent identity for others to relate to, we may as well have no usernames and no community at all. That would be quite a different kind of forum. Anonymity is fine, and throwaways for a specific purpose are ok—just not routinely.
More here: https://hn.algolia.com/?query=by:dang%20community%20identity...
This paper reports that not only were they unable to verify any of the prior research into this so called "ego depletion" effect, but they found more persuasive evidence suggesting that the theory is false.
Like measuring how blue the sky is, based on how much "blue" is left in the universe. The idea they attempt to quantify is improperly described in terms of qualities. e.g. Do colors even work that way?
(...and keep your planck unit, M theory, quantum field eigenstate, history-of-the-big-bang, age of the universe remarks away from this hypothetical example, please. You know what I mean when I describe the sky as either blue or not blue in the colloquial sense.)
Steve Jobs is often used as an example because he wore the same outfit to work every day, purportedly so he would not have to waste any crucial willpower early in the day on such a trivial task as choosing what to wear.
That was to free up time and free himself from an extra decision. Mental exhaustion is not the same as ego depletion. The latter claims that "self-control" is a limited resource, but picking your outfit doesn't really require that anyway.
For several years, since I read the book Willpower (an intro piece to ego depletion), I've meditated on the idea. I've decided that even if it's not scientifically true, it can be useful to me.
Some may see it as self-help tripe ("Eat your frogs in the morning!"), but it's a useful mental model for me to push myself to get stuff done sooner, rather than later, because I imagine myself not having the willpower later.
Your choice of word -- real -- was probably casual, but I think a better term would be "scientifically valid". There are many things that aren't replicable that have use to many humans.
I think this also applies to beliefs like the 10,000 hour hypothesis. I strongly doubt it is true, but some people really like it as a motivational tool. I have no reason to burst their bubble.
The only time I am bothered is if people argue in favor of unverified beliefs to people who are seeking real truth, or especially if people try to legislate such beliefs.
We can model religions as perceptual sets & then find their uses. I'm designing a religion of absurdity as a cultural experiment in teaching the models I'm developing and can testify to the usefulness of being able to find absurdity in everything. It also helps to not judge things beyond "Does this sustainably contribute to life or not?" I'm still trying to nail down the definitions, but the point is it doesn't matter what we believe, but what we do with those beliefs.
Personally, I only get mad when someone tries to claim their belief is objectively true, or tries to prescriptively generalize it for others.
I think when we introduce choice/attention into the theory that we construct reality out of beliefs, emotions, culture (thought/behavior patterns), and intentions, it's possible to find all kinds of useful frameworks.
Maybe the idea of "scientific validity" needs to be adapted for subjective experiences constructed by us.
That said, I think one thing you indicate here is helpful - life is a personal experience, and while some people may experience motivation from the exercise of willpower, others may feel they have their willpower 'broken down' over time.
I think the specific context of the experiment really does impact the results.
It is never as simple as:
[<3 <3 <3 </3]
[<3 <3 <3 <3]
I suppose that makes me a hypocrite in the same way that devoutly religious scientists are. But it makes my life more fulfilled, I experience less anxiety, etc.
It's an interesting ride, whatever it is.
Mainstream approaches to science are obsessively objective, denying the fact that everything we do is a subjective experience. So many people learn the idea that things can only be true or false, as opposed to being dependent on the lens through which they're viewed. We see the same sort of separation in our politics.
I think we need a framework for self-observation. Granted, we have systems in place designed to punish us for being ourselves, so that'll have to change, too.
If you want to learn how to work in a field dedicated to observing & documenting something, then first or concurrently learn to responsibly observe & document yourself. The goal is to maximize self-awareness, which I think everyone can benefit from in some way.
I agree about the usefulness of such a framework. Subjective experiences draw optimizations (self-improvement), so by observing, we naturally are changed. This does not assume improvement; for that, we have to combine change with a fitness function.
The word superstition has negative connotations, but perhaps wrongly so.
Also, some aspects of ego depletion are grounded in science. Just because this particular model isn't valid, doesn't mean there's no valid model for ego depletion.
In other words, if someone is trying to stop eating candy, why do they draw from the candy bowl in some cases but not others?
In this case, they find some non-zero effect, but call it zero because the difference was not statistically significant.
That's likely a reflection of their small sample size rather than evidence for the null hypothesis as the title suggests.
But it's hard to know, or even have an intelligent discussion about it, since the paper itself is behind a paywall.
In this case, the abstract specifically mentions a "Bayes factor > 25". The Bayes factor is a likelihood ratio, i.e. the ratio of the probabilities of the observed data under two competing hypotheses. With no information whatsoever, it is 1. A factor of 25 means the data are 25 times more likely under the null hypothesis than under the theory of ego depletion.
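For intuition, here is a toy Bayes factor you can compute exactly in a simple conjugate case — a fair-coin null against an alternative with a uniform prior on the bias. This is purely illustrative; it is not the analysis the paper actually ran.

```python
from math import comb

def bf01_coin(k, n):
    """Bayes factor BF01 for H0: p = 0.5 versus H1: p ~ Uniform(0, 1),
    given k heads in n flips. The marginal likelihood under H1 is
    comb(n, k) * Beta(k + 1, n - k + 1), which simplifies to 1 / (n + 1)."""
    like_h0 = comb(n, k) * 0.5 ** n
    like_h1 = 1 / (n + 1)
    return like_h0 / like_h1

# 50 heads in 100 flips: the data are several times more likely under the
# fair-coin null than under the vague alternative, so BF01 > 1.
print(bf01_coin(50, 100))
# 90 heads in 100 flips: BF01 is tiny, strongly favoring the alternative.
print(bf01_coin(90, 100))
```

Note that BF01 favors whichever hypothesis predicted the data better; it is evidence about the data, and only becomes a posterior probability for H0 once you multiply in prior odds.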
Neither the title of the paper nor the abstract says there is zero effect. They accurately and precisely state that the null hypothesis is favored and that the effect is not significantly different from zero.
> That's likely a reflection of their small sample size rather than evidence for the null hypothesis as the title suggests.
That doesn't seem to be the case. From the introduction:
> The Hagger et al. (2010) meta-analysis reported an estimated depletion effect size of d = .62, which we used to conduct a power analysis to establish sample size (n = 33) at 0.80 power. We established n = 35 per condition for our studies, which is slightly larger than the average sample size in the published depletion literature of n = 27 (Lurquin et al., 2016), because the intent of this series of studies was to examine empirically the frequency of null findings in studies similar in size and procedures to those in the published literature. The use of a similar sample size allows for an empirical examination of how frequently the depletion effect occurs when using methods and sample size similar to those in the pre-2013 published literature.
So if their samples sizes are a problem then it's a problem that the rest of the literature has too.
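As a sanity check, the n = 33 figure from the quoted power analysis can be roughly reproduced with the normal-approximation formula for a two-sample comparison of means. Assuming a one-tailed test at α = .05 (my assumption — the quote doesn't state it):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80, one_tailed=True):
    """Approximate per-group sample size for a two-sample comparison of
    means with standardized effect size d (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha) if one_tailed else nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.62))  # matches the paper's n = 33 at 0.80 power
```

The catch, of course, is that d = .62 came from a meta-analysis of a literature now suspected of publication bias, so the "adequate" sample size was calibrated against an inflated effect estimate.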
> But it's hard to know, or even have an intelligent discussion about it, since the paper itself is behind a paywall.
Agreed, paywalls are unhelpful, but that doesn't mean we can't intelligently discuss what information is available. In addition, I and probably many other people on this forum have access to many paywalled papers, so the exact information is not impossible to obtain.
I have observed it in myself, time and time again. If not ego depletion directly, then something similar enough. Even at the glucose level, various studies have shown a relationship between our blood sugar and our ability to tolerate frustration and make decisions.
So while the accuracy of ego depletion may be in question, I can still benefit from treating it as true and challenging my emotions and expectations with that. If/when a better model or perspective comes around, I'll surely adopt it.
But of course, if this news in some way helpful or freeing for you, by all means ride that wave. Success matters more than truth.
I've observed that psychology will sometimes split hairs - arguing over what's true and what's truer. Overall this is good - but it's helpful to consider the proliferation of blatant non-truths out there and how far we've come. Sometimes "good enough" is good enough. Fill in the blanks with the facts you can find today and be willing to swap them out for better ones tomorrow.
This is the crux. I agree with you that there are certainly some sort of effects related to will-power, self-control, focus, energy levels, and so on. This feels obvious enough, based on everyday experience.
But it seems we don't yet know how to isolate, measure, or analyze these dynamics in a highly rigorous way.
Of course, there is the possibility that the everyday experience is actually caused by some different mechanism we don't understand yet.
I think it is exactly these kinds of studies that are being questioned. Can they be reproduced?
Sometimes I use that if I need to drive a long way. I stop eating the day before; that ensures I never fall asleep behind the wheel and my attention stays sharp the whole time.
Also, look up the nocebo effect.