
Ego depletion theory may have been debunked - platz
http://www.slate.com/articles/health_and_science/cover_story/2016/03/ego_depletion_an_influential_theory_in_psychology_may_have_just_been_debunked.single.html
======
Spooky23
Here's the scary part to me:

> "For a second paper published last year, Carter and McCullough completed a
> second meta-analysis that included different studies, including 48
> experiments that had never been published. Again, they found “very little
> evidence” of a real effect."

> "“All of a sudden it felt like everything was crumbling,” says Carter, now
> 31 years old and not yet in a tenure-track position. “I basically lost my
> compass. Normally I could say, all right there have been 100 published
> studies on this, so I can feel good about it, I can feel confident. And then
> that just went away.”"

So you have at least 100 bullshit studies on a really popular and often cited
experiment. If bullshit can hide in plain sight like this, imagine what is
lurking out of sight?

~~~
crpatino
Didn't you read the Memo?

Science is a lot like sausage making. The end result may range from the
quite passable to the quite good, but if you actually saw the raw materials
and the process, you'd be more discriminating about what you put on your plate.

~~~
entee
One possible advantage the "harder" sciences have is that another study that
builds on the previous one can in some ways act as a replication.

If I publish something claiming a bacterium has gene X that produces phenotype
Y, then a subsequent study that transfers gene X into a different bacterium
must also observe Y. Otherwise something is fishy, and often the second set of
researchers will go back and re-examine the original result.

It's rare that work in science is directly, point-for-point replicated; it's
unsexy, expensive work. The fact that the harder sciences have somewhat of a
built-in control mitigates, but by no means eliminates, the risk of bad science
surviving.

I can't think of an analogous process in the softer sciences. Seeing if a
phenomenon is replicated in a different context does not implicitly re-test
the original work in the same way.

~~~
asuffield
One of the biggest things about the "harder" sciences is that there's usually
a whole engineering discipline behind them as well. Nothing reproduces a
result as effectively as a host of engineers building real things on the back
of it. If it doesn't work, we'll find out pretty soon, and if it does work
then we'll have lots of evidence to show it.

The problem with results like "ego depletion" is that their utility is fuzzy
at best. With no clear application, we don't have a body of practical results
using it.

------
entee
As someone who was once a bench scientist, I'd love to think I was immune to
these kinds of pitfalls. After all, we control things carefully in the lab, we
can control many more of the inputs to an experiment, test a hypothesis by
different avenues, have truly independent controls...

But this is a very large literature that, in effect, by sheer volume of work
was doing many of those things. This is quite humbling.

That said, even though medical science is massively flawed (see the previous
discussion on clinical trials no longer revealing positive results
[https://news.ycombinator.com/item?id=11209778](https://news.ycombinator.com/item?id=11209778)),
I do think there is a way to make science trustworthy. By:

1.) as previously said, requiring that a hypothesis be confirmed by two
independent approaches (if this is true, then this other thing should also be
true; is it?)

2.) publishing more negative results, removing the incentive for the file-
drawer effect mentioned by @jobu

3.) require much larger sample sizes for studies to be published, and take
effect size properly into account when judging whether the outcome is valid.
One interesting way to do this:

[http://www.stat.columbia.edu/~gelman/research/published/retr...](http://www.stat.columbia.edu/~gelman/research/published/retropower_final.pdf)
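Point 3 is easy to make concrete. As a rough sketch (my own illustration, not from the article; the function name and the normal-approximation formula are my assumptions), the required sample size grows with the inverse square of the effect size, which is why small-effect psychology studies with a few dozen subjects are so fragile:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means with standardized effect size d (Cohen's d)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

# A "medium" effect (d = 0.5) needs ~63 subjects per group,
# but a small effect (d = 0.2) needs ~393 per group.
print(n_per_group(0.5), n_per_group(0.2))
```

Halving the effect size quadruples the required sample, so a literature built on n = 20 studies of a small effect is running at very low power.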

------
andrewflnr
Maybe my definitions are off, but clicking or not clicking based on the
arrangement of letters in a word seems like a cognitive task, not one of
willpower; it seems pretty easy to do without much stress. So this new result
seems like a very oblique attack, not a strike at the foundations of ego
depletion theory.

~~~
kartan
For me this is quite interesting. Just because someone claims to have debunked
a study doesn't mean they actually have. There is still the possibility that
the new study is the incorrect one.

A team claiming there is a problem is a good sign that this needs more study.
But it would be jumping to conclusions to tell people that the original study
has been debunked.

~~~
dalke
There can always be more studies, and there can always be the possibility of
error.

However, consider if the two studies were reversed in time, so the first study
specified what its endpoints were, then did the study, and concluded there was
no impact.

Then the second study came along, and during data analysis an interesting
correlation appeared which showed there was an impact.

Assuming that both were done by equally competent researchers, which
conclusions would you trust more, and why?

Do your conclusions change if you swap the time order?

Personally, I would trust the study with the well-defined endpoints over the
other one, but I also feel within myself the desire to trust the first one -
even though the order shouldn't change things.

~~~
JamesBarney
If the studies were reversed I think my opinion would be the same. We don't
know that much about ego depletion: whether it exists or what's driving it.

I think it means ego depletion is much more fragile than we thought or is
caused by some other phenomenon. I don't think this study means the thing we
called ego depletion doesn't exist and is an artifact of publication bias and
chance. It seems to me like there were just too many studies replicating it
for that to be true.

------
smithkl42
Not to be an ass - but I've got to ask. Does the "reproducibility crisis" have
any policy or political implications for the "global warming" threat? I know
nothing about climatology, and since I can't judge the "conservative"
critiques of the consensus on their own merits, my tendency is to dismiss them
as politically motivated and hence unreliable. As folks keep saying, "The
science is settled." But if the science was supposedly settled in this
instance, is it possible that my strategy for sorting through the debate -
which I suspect is a pretty common strategy - is in fact premature?

~~~
habitue
One difference here which might give you an idea of where to place your bets
is that the conservative opposition to global warming is based largely on
studies financed by the energy industry. The willpower research is being
called into question by researchers who don't have a link to any industry with
a vested interest in the outcome.

Is this an airtight argument? No. There is still the possibility that global
warming research has analogous methodological issues. But the conflict of
interest definitely raises the burden of proof.

An additional difference is that climatology research is much better founded
than willpower research is. Psychology is notoriously tricky to test since you
can't break apart a brain and watch all the parts interacting. Compared to
psychology, the systems that climatologists study are downright simple, and we
have good models of how the systems behave.

~~~
smithkl42
I agree that the conflict of interest definitely raises the bar. But of
course, that's what I immediately hear from my conservative friends as well:
that papers which support the consensus are more likely to be published, that
studies likely to confirm the consensus are more likely to be funded, and
researchers who question that consensus are likely to be blackballed by their
colleagues. Not being on the inside, I don't have any special insight into
whether or how strongly that dynamic operates, of course. Which means that as
a non-scientist, I sometimes feel like I'm in a pretty poor position to
actually know what's true. Like I said, my instinct is to respect the
consensus. But
the evidence seems to have increased over the last decade or so that
"consensus" can be a pretty poor pointer to the truth. And I want to take that
evidence into account as well.

~~~
habitue
Actually, scientists have a huge incentive to overthrow an existing regime if
it's bad but nobody knows it yet. It will make your career if you are able to
prove it. The evidence required is high if what you're trying to disprove is
well supported, but if you can provide that evidence there will be a
wellspring of PhD students ready to ride that wave and build their careers.

For climate deniers, the evidence just isn't there. There's no wave to ride.

------
whiddershins
One of the interesting things that occurs to me when I read this is the
potential for researchers to forget that one great way to test a hypothesis is
to test a competing hypothesis.

So for example, in the cookie/radish story, were all participants exposed to
the cookie smell? Were they aware some people got cookies and some people got
radishes?

If so, the reduced performance of the radish group could be related to a
feeling of being pushed down in the social hierarchy, a self-esteem blow,
because they were denied a perk other participants received.

I intuitively feel some of these psychological experiments suffer from this:
there is another phenomenon really close to the one they think they are
studying, which is how they get it wrong. Because the brain and the sense of
self are such complex, interrelated systems, it is extremely hard to make
distinctions between different elements of the process.

~~~
bjornsing
> one great way to test a hypothesis is to test a competing hypothesis.

Isn't it worse than that? A statistical hypothesis test essentially just lets
you refute the null hypothesis; it provides no confirmation at all for a
specific alternative hypothesis (unless of course it's the only possible
alternative hypothesis).
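A toy simulation can make this concrete (my own sketch; the two "mechanisms" are invented for illustration): two very different data-generating stories can reject the same null hypothesis, so the rejection by itself confirms neither story in particular.

```python
import random
from statistics import NormalDist, mean, stdev

def z_test_p(sample, mu0=0.0):
    """Two-sided z-test p-value for H0: population mean == mu0."""
    n = len(sample)
    z = (mean(sample) - mu0) / (stdev(sample) / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(1)
n = 500
# Mechanism A: every subject shifts a little (the theorized effect).
a = [random.gauss(0.3, 1.0) for _ in range(n)]
# Mechanism B: only ~30% of subjects shift, but by a lot
# (a competing explanation with a very different story).
b = [random.gauss(1.0, 1.0) if random.random() < 0.3 else random.gauss(0.0, 1.0)
     for _ in range(n)]

# Both mechanisms reject the same null hypothesis "population mean == 0".
print(z_test_p(a) < 0.05, z_test_p(b) < 0.05)
```

Both tests reject "the mean is zero", but only a different experiment, designed to separate the two mechanisms, could say which story is right.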

A funny story comes to mind: I studied physics at university but did one
semester of psychology. One day in class we did an experiment that was
supposed to show that memory is organized into something called schemas, but
the experiment seemed severely flawed to me, and I couldn't help protesting
that a much more reasonable interpretation of the outcome was that "people who
have actually been told a story will be better at retelling the story, at
least when compared to people who have _not_ been told the story." On hearing
this criticism my professor fell silent for a moment and then said something
along the lines of:

"I have a colleague whose son borrows his bike when his own is broken. This
colleague sometimes says that his son thinks that his bike will mend itself if
left to its own devices. Well, that's one way to interpret the data..."

I thought it was a brilliant way to acknowledge my criticism. But my fellow
students unfortunately didn't seem to get it. I guess psychology was never for
me. :)

------
tootie
I'm in the middle of Thinking Fast and Slow which revolves heavily around ego
depletion and this is very surprising. They cite a ton of evidence and the guy
won a Nobel Prize for his work. Hard to imagine so much of it could be flat
wrong.

~~~
_yosefk
He also writes there that "you cannot argue with the evidence"... I'm glad
someone is trying.

~~~
Lawtonfogle
The thing is, you can always argue with evidence; you just need newer
evidence.

There are only a few special-case exceptions where this presents a problem,
mostly thought experiments in astronomy where the evidence will be lost in the
future.

------
mathattack
I am a big believer that leaders need to understand social psychology.
Unfortunately every one of these debunking studies makes it very hard for the
non-expert to know what to apply in the field. It's very frustrating! (I
suppose one could say the same about economics)

~~~
danharaj
If by social psychology you mean "the way people think and behave socially", i
agree. If you mean "the results and theories of the discipline called 'social
psychology'", i disagree. It must be the case that each of us has some
understanding of our social behavior and the behavior of others. Otherwise we
would never form social structures.

i don't know if i've ever gained real insight into human behavior from reading
the results of a contrived experiment.

~~~
mathattack
_i don't know if i've ever gained real insight into human behavior from
reading the results of a contrived experiment._

Therein lies my problem. :-) I want to know more, but most sources are
woefully unreliable.

~~~
danharaj
Your own life is the richest source of information you have about social
psychology!

~~~
nsajko
No, your life is a source of proverbs.

------
Dowwie
_may_ have been debunked

Baumeister has yet to be steamrolled by the reproducibility crusaders

------
davekinkead
> Just two of the [24] research groups produced a significant, positive effect

If only they had used green jelly beans instead [1]...

This is not limited to social psychology. Scientific publishing has a strong
bias towards positive results. As long as null results are filed away, this
kind of thing will continue to happen.

[1]: [https://xkcd.com/882/](https://xkcd.com/882/)

------
cLeEOGPw
> “It’s easier to publish stuff that tears something down than it is to build
> something up,” he told me wearily. “It’s not an enjoyable time. It’s not
> much fun.”

If your proposed effect is real then you should have no problem proving it.
If it doesn't exist, or if it does but is much smaller and applies far less
broadly than you claimed, then of course you will have "not much fun" proving
it. Especially if that means running tens or even hundreds of experiments,
exhaustively processing the data to find the experiments with positive results
and discarding the rest, then looking for explanations for why the
positive-result experiments are right and the null or negative ones are
somehow flawed. Not only that, but you have to avoid contradictions with older
studies.

~~~
Terr_
Also, it may be technically "easier" to tear-down a hypothesis, but the
_rewards_ (prestige and attention) often come from going the other way.

At least, up until whatever-it-is becomes established enough that tearing it
down is novel and surprising.

~~~
cLeEOGPw
It should be established that tearing down things that looked solid and
accepted before is novel and surprising. Because it is. Especially when so
many people have worked on top of them.

Once everything that was tearable has been torn down, the remaining picture
will be a much more reliable representation of what is real and what is not.

------
EGreg
I like Baumeister and Tice. They make interesting theories, especially the
talk "Is there anything good about men?" But this is good too -- showing that
science can be messy. Makes me think of Thom Hartmann's "farmer vs hunter"
analogy for ADHD, and Bem's studies on precognition.

Science is a human endeavor and as such is subject to lots of economic and
mass psychological effects, but at least there is some constraint on what can
be claimed. The truth is out there, in physics and other hard sciences. In
everything else, the models are much more handwavy approximations of
underlying complexity and variation.

------
hartmel
The saddest thing is that since "ego depletion is a normal thing for every
human" is no longer a reliable concept, people will miss an explanation for
their forthcoming losses of willpower or failures in their life :)

~~~
nefitty
Time to start coming up with some new models then ;)

------
rajivtiru
> "When he and his adviser, Michael McCullough, reanalyzed the 2010 paper’s
> data using state-of-the-art analytic methods, they found no effect."

Hmm...what was different this time?

------
hosh
Funny thing is that the debunking of ego-depletion theory has some interesting
intersection with another article posted on the front page of HN:
[https://news.ycombinator.com/item?id=11231371](https://news.ycombinator.com/item?id=11231371)

------
TrevorJ
Self-control as a static variable is not equally useful across all
situations... I find it improbable that humans would have only a single
'algorithm' through which to mediate executive function. Surely the dominant
strategy would be greatly affected by the environment?

Psychology is an interesting field, because one really has to wonder just how
static human beings actually are over time... It seems quite probable that
people living today would handle the same experimental setup quite differently
than those who lived 50 years ago.

------
raziel2p
I fail to see the significance of this, other than a shift in what sort of
self-help books are sold.

Are there any widespread pedagogic or management practices that are built on
top of ego depletion?

~~~
inanutshellus
I'd say it has had a profound effect. The thought that you can wear down
someone's willpower by making them make too many decisions is pretty scary if
your job is to make decisions. One workaround that the POTUS, for example, has
espoused is wearing the same clothes every day. (One less decision to make.)

See: [http://www.fastcompany.com/3026265/work-smart/always-wear-
th...](http://www.fastcompany.com/3026265/work-smart/always-wear-the-same-
suit-obamas-presidential-productivity-secrets)

------
nostrademons
The framing of this article very much misunderstands the scientific method.

Science is a process. It's a method where you conduct a series of disciplined,
reproducible experiments and use this data to build a _model_ of the truth.
Science never gives you the truth directly; instead, you get a series of
successively more accurate representations of the truth that you can use to
make predictions. Those predictions are not guaranteed to always yield a
positive result, but when they come up negative, you have more data with which
to refine your theories.

Casting this in black and white terms - "this effect has been debunked" - is
doing a disservice to science.

Rather, a negative result in this study should be grounds for further
investigation. _Why_ did it come up with different results? Is there something
different about how they ran the study? Is there something different about the
_subjects_ , something that may hold useful lessons in its own right? Was the
original data flawed? Does the effect hold only under certain conditions? All
of these would be very useful things to know. The point of science is not to
say "This is true and this is not", it's to be able to create increasingly
refined models of "This is true under these conditions, and if you apply this
model in this situation, you will get these results."

By analogy with physics: ancient astronomers, following Ptolemy, believed that
the sun revolved around the earth. To make this model work, they had to invent
increasingly complex epicycles in the motions of the planets. Copernicus came
along and showed that the earth revolved around the sun in a circle. Kepler
re-examined the data and said no, planets actually move in _ellipses_.
Einstein re-examined the data and came up with a tensor calculus that even
more accurately predicted the orbits of the outer planets. Now there is strong
evidence that Einstein's theory of general relativity is "wrong" too, but we
don't have anything to replace it with.

Was the Ptolemaic calculation wrong? Well, technically not - as my physics
professor pointed out, they were just doing a Fourier decomposition of the
observed data. And for a kid making a sun-dial on a beach or a hiker who wants
to know when the sun will set, the first term of that series is perfectly
adequate. But the advantages of the heliocentric and relativistic models are
that they can predict the data with increasing accuracy, _without_ adding
extraneous magic numbers.

Similarly, when a psychological study can't be replicated, it doesn't
necessarily mean the basic idea is useless. Rather, it means that we should
look further into the _situations_ where that idea is useless, and refine our
models so they gain additional predictive power.

~~~
ksk
So then, one could propose any ridiculous hypothesis (e.g. that vaccines do
not work), and when nobody can reproduce the results, applying your line of
argument would lead us down a rabbit hole of second-guessing and proposing
ever more complex reasons as to why others are unable to reproduce the
results.

I think that in the day-to-day, most commonly understood meaning of the word
'true', it is fair to say that science _is_ in the business of determining
truth from fantasy. Sure, you are permitted under science to investigate
anything and everything that pleases you (or your donors :) ). I think that
scientific consensus is very closely associated with 'truth'.

I totally get the larger point you're making, but even the nature of science
itself is a bit of a grey area. I think that some ideas should be, and have
been, laughed out of the room. The so-called man on the street does not have
time for subtlety or for scientific hypotheses carefully couched in nuanced
terminology. I think it's OK for people to think that science has all the
answers, every once in a while :)

~~~
nostrademons
The "vaccines cause autism" scare is actually a good example of science
working. It was investigated - as it should have been, because if vaccines
_did_ cause autism that would be a massive public health crisis - and then
they found that the original author falsified the data. The unfortunate part
is that that finding hasn't made it into public consciousness the way the
original scare did.

I think that the belief that science = received truth is behind the rejection
of science by a large portion of the population, because it puts science in
conflict with _other_ sources of "received truth". People discount evolution
as "just a theory" because they're looking for absolute truth; if they were
habitually trained to look for evidence instead, and knew the body of evidence
behind evolution and could compare it with the body of evidence behind gospel,
they might understand that a "theory" is a lot more rigorously proven than a
"truth".

------
tempodox
> _Eating a radish when you’re surrounded by fresh-baked cookies represents an
> epic feat of self-denial_

This is just so much BS. I could bathe in radish. All the chocolate cookies in
the world can go hang themselves if there's radish around.

Maybe my ego feels so depleted because I just haven't had any radish yet this
year. But I did have mountains of cookies.

------
make3
Calling psychology a soft science is like calling water a soft solid. Science
is based on isolating causality in a reliable, repeatable manner. Human
behavior is extremely, deeply context-dependent, and in a complex,
multi-layered, chaotic way.

~~~
muzani
Well, science is about doing experiments. It's really difficult to isolate
causality in psychology, but it is possible to at least identify patterns. And
then it takes more experiments to peel back those patterns.

This is a really good case of science at work: an experiment that specifically
targeted ego depletion and couldn't find it.

I find it a bit worrying that so many people immediately agree with the
conclusion proposed by these experiments. Ego depletion is one thing, but
there are other well-known effects, like Pavlov's dog.

~~~
make3
They say credibility is hard to gain and easy to lose. Physics holds itself to
the standard of predictions reproducible to something like five standard
deviations. Psychology, as a domain, appears to be struggling to make
predictions that are reproducible "up to 40% of the time". Now, even if, as an
approximate and totally unscientific show of good faith, we assume that figure
overstates reality by a factor of two, at 20% reproducible we're still really
far from the reliable, credible knowledge promised by other fields, and the
field would still be pretty murky at best.

------
nibs
Good. I am going to need unlimited willpower to wade through the haystack of
modern science and find the needles of truth.

------
tedks
It's a little darkly funny how people react to perceived indignities in the
sciences with the _exact same behavior pattern_ they're bemoaning.

The narrative of these people is that there are _bad people_ out there doing
_unreproducible work._ They know they're bad, but they're just motivated by
grants, low p-values, etc., and eventually they just snap and do bad science
all over the place. It's an incentive problem.

So, to prove this, they... go get grants, do a meta-analysis with some nice
statistics, and get subjected to the same incentive system. Except this time,
they don't even have to do anything original. They just have to badly copy
something else.

There's _always_ something like this in the article:

>In his lab, Baumeister told me, the letter e task would have been handled
differently. First, he’d train his subjects to pick out all the words
containing e, until that became an ingrained habit. Only then would he add the
second rule, about ignoring words with e’s and nearby vowels. That version of
the task requires much more self-control, he says.

I really hate getting science news from sources like Slate, because you have
to decompile the journalese into what the scientist actually said, but the
technical term here is that Baumeister had participants perform a fixed-target
character recognition task. This is an incredibly old cognitive task that was
used in some of the first cognitive psychology experiments, like
[[http://psych.indiana.edu/tradition/Shiffrin_and_Schneider_19...](http://psych.indiana.edu/tradition/Shiffrin_and_Schneider_1977.pdf)],
which showed that over time, with practice, a _controlled_ or _manual_
cognitive process, like "determine if this word contains the letter 'e'", will
become an _automatic_ process.

My cognitive psychology professor was one of Shiffrin's students, so he was
actually in that experiment, and was very annoyed by it, because in the fixed-
target condition, you pretty quickly start noticing your target letter all the
time. It'll just "pop" out at you when you're trying to read.

Obviously the exercise here won't cause that sort of behavior, but once you
learn a rule like that and start having to react in milliseconds to the
presence or absence of an e, even a little bit of training will create a much
stronger effect than no training at all.

This is a huge study, and it would have been drastically more expensive if
this facet had been included. But it also might have reproduced Baumeister's
work much more fully, and the answer would have been useful. As it is, this is
literally more bad science, for exactly the same reason the original science
may or may not have been bad.

------
mulle_nat
In an ideal world, there would be a filter for all news that contains "may
have been". The filter would either:

* throw the article away silently

or

* substitute all occurrences with "may or may not have been", which is identical in meaning. I am sure more readers would then notice that they are being fed information tripe.

------
6stringmerc
It's examples like this - particular to psychology - which always give me a
chance to reflect and chuckle about how I believe the 'science' of psychology
to be academically pointless. Not to be confused with psychiatry, which I find
to be a worthwhile medical, scientific pursuit (adding substances/drugs does
change the equation quite a bit). But real, authentic psychology is pretty
much an observational writing exercise that, in my opinion, can't be broadly
applied any more than general dietary advice.

Each individual's physiological composition makes psychology nothing more than
a practice bed for bored creative writers. Understanding the psyche is
critically important to write convincing characters, absolutely, and that
takes a lot of practice and thought to fashion into a functional perspective
(or literary craft tool) that enables a person to write "believable"
characters. Observation, not causation, as it were.

It's not really a stretch to share this belief if one is familiar with the
origins of psychology, which trace back to an esteemed writer, philosopher,
and turn-of-the-century physician by the name of William James, brother of the
noteworthy Henry James.

~~~
KingMob
It's smug comments like this - particular to engineers - which always give me
a chance to chuckle about how little computer programmers actually understand
scientific practice.

 _All_ the same problems happen in biological sciences, and are even worse in
some cases, since there's more money at stake. Failures to replicate are
_rampant_ in medical/pharmaceutical studies.

If you want to claim biology's not scientific either, go ahead. But science is
about seeking knowledge, not having total agreement all the time.

~~~
6stringmerc
Well, I'm not wrong that the origin of psychology is simply a blending of
philosophy and writing, which are fundamentally outside the scope of the
scientific method. I went out of my way to argue that biology-based science
(e.g. psychiatry) is truly science, but that didn't seem to register. Oh well,
audiences miss the point all the time.

~~~
idm
Research psychologists are at the forefront of science, the scientific method,
and statistical methods.

Haven't you ever wondered why all of these articles by psychologists have been
questioning the very fundamentals of the way science is practiced? Psychology,
as a field, is our day's premiere example of a science that examines itself.
The issues with reproducibility are being illuminated and confronted by - who
else? - psychologists.

You're wrong on virtually every count. The scientific method is the primary
philosophy practiced by psychologists. I would encourage you to read more
before making easily falsified statements like "psychology is ...
fundamentally outside the scope of the scientific method."

~~~
mattkrause
To add some specifics to this pile-on:

- The classical example of a randomized experiment is a
perception/psychophysics experiment: the Lady Tasting Tea. This introduced the
idea of a null hypothesis as well as Fisher's exact test.
[https://en.wikipedia.org/wiki/Lady_tasting_tea](https://en.wikipedia.org/wiki/Lady_tasting_tea)

- Visual and auditory experiments more or less invented signal detection
theory and used it to probe visual and auditory processing. Those ROC curves
everyone loves predate "machine learning" by decades.

- A lot of the fancy hierarchical and Bayesian modeling work has come from
psychologists and social scientists (e.g., Andrew Gelman).

------
Kenji
People need to stop doing meta-analyses*. Have you ever heard of a
meta-analysis of mathematics or physics papers? No? A P = NP meta-analysis,
maybe? Haha.

I can tell you why: because if it's solid science, you don't need 100 papers
to describe an effect. Besides, it is a proven fact that the results of a
meta-analysis are weaker due to how the underlying statistics work (at least
if you do it in a naive way, and I do not know if it can be fixed by altering
the procedure; I have to brush up on my statistics knowledge, it's good for
machine learning).

* EDIT: I should have added here 'to investigate the ground truth of the underlying papers.' Of course there's nothing wrong with meta-analysis to investigate researcher behaviour and methodology.

~~~
juhanima
Actually: [https://www.win.tue.nl/~gwoegi/P-versus-
NP.htm](https://www.win.tue.nl/~gwoegi/P-versus-NP.htm)

And amusingly, the distribution of results is close to 50/50.

~~~
Kenji
That's actually pretty cool. The difference is that here people are completely
aware that this survey does not in any way tell us whether P = NP (even if it
were 100/0) as long as there's no solid proof on the table.

A meta-analysis in those fields is usually only used to analyze the behaviour
and methodology of researchers, not to investigate the ground truth of the
papers.

~~~
Retra
"The behaviour and methodology of researchers" is exactly that thing that will
lead to the truths you seek. You're basically saying a bridge-builder needn't
waste any time worrying about the engineering of boats.

~~~
Kenji
While I admit it was wrong to claim that all meta-analysis is bad, the main
point of my argument has not been refuted. You cannot inspect 100 papers and
make a truth claim that is stronger than the quality of those papers, and
doing so is folly. Nobody in maths uses meta-analysis to prove a theorem.

