
We have an epidemic of deeply flawed meta-analyses, says John Ioannidis - danso
http://retractionwatch.com/2016/09/13/we-have-an-epidemic-of-deeply-flawed-meta-analyses-says-john-ioannidis/
======
chongli
Ahhh yes, the Ouroboros of Scientific Evidence [0] strikes again. Personal
opinion trumps meta-analysis. _Science! You were the chosen one!_ Seriously,
read this essay on the topic by Scott Alexander. It might change your life.

[0] http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/

~~~
tezza
I can't quite work out what your comment is saying.

Are you criticising the man in the article (John Ioannidis) for egotism
because he is somehow disparaging of further (meta-)analysis of his results?

Or are you saying that Ioannidis has a point and that studies of studies are
poor?

~~~
vanderZwan
If you read the article that chongli linked (and I second their
recommendation) then it becomes self-evident that they mean the latter.

~~~
jessriedel
I've read that link, but it's a bad idea to have comments that can't be parsed
except in the context of a 4,000-word essay.

~~~
chongli
You're probably right. I was literally about to fall asleep for the night and
I wanted to make a short comment that might encourage people to read the
essay. I hope the cheekiness didn't annoy too many people!

------
Amygaz
Another trend in life science, which is on the path to reaching epidemic
proportions, is the publication of conceptual "work". I am talking about
papers in peer-reviewed scientific journals about some conceptual idea or
framework the authors may have had. Technically, conceptual papers are
appropriate when getting real observations is irrelevant. The problem is that
this is never the case in life science... So, basically, they feel they can't
be bothered to obtain factual information, or they don't want to risk being
scooped, which means they assume their "novel" concept is somewhat obvious.

Common features of these conceptual papers include:

- coming from a high-profile lab at an Ivy League school (i.e. high schmoozing factor);
- said lab is typically the best equipped to actually tackle that concept;
- not supported by any new data, but piggybacking on previous data from others;
- some form of deductive reasoning;
- reads more like a novel;
- claims of openness and future collaboration (but remember the second feature: said lab is the one best equipped to tackle the concept).

At best these papers can be used to study informal vs. formal logic. In
reality, I find these papers being used as a form of first-to-publish claim or
as lobbying material to influence NIH funding. I think they work well as PR,
and this is not surprising, since the appearance of doing something has more
value nowadays than actually moving the needle. So my conclusion would be the
same as Prof. Ioannidis's: these serve primarily as self-promotion and
marketing tools. They lack the rigor and quality that we should expect from
scientific journals, and should not be labeled as such.

~~~
semi-extrinsic
Don't forget funding agency demands. Just got a new five-year project? Of
course you need a deliverable after 12-18 months. Never mind that you haven't
had any chance of producing good science in that timeframe (unless you take
the PhD Comics version of the grant cycle as literal advice):

[http://www.phdcomics.com/comics/archive.php?comicid=1431](http://www.phdcomics.com/comics/archive.php?comicid=1431)

~~~
mcguire
As an aside, I have seen that grant cycle given to new grad students by a
senior researcher as the way things are actually done. And I can't say I
actually disagree.

~~~
semi-extrinsic
As they say, "it's funny because it's true".

------
Kenji
I remember the time I said on HN that most meta-analyses are complete garbage
because of statistical/mathematical effects that weaken the result, and
because the conclusions are necessarily as flawed as the underlying data. I
got downvoted into oblivion.
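The "as flawed as the underlying data" point can be made concrete. Below is a minimal sketch of fixed-effect inverse-variance pooling, with hypothetical numbers, showing that pooling shrinks the standard error of the combined estimate but passes any shared bias in the inputs straight through:

```python
import math

# Hypothetical effect estimates (e.g. log odds ratios) and standard errors
# from five small studies, each assumed to carry the same systematic bias.
effects = [0.30, 0.25, 0.35, 0.28, 0.32]
ses = [0.20, 0.25, 0.22, 0.30, 0.21]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

The pooled standard error is smaller than any individual study's, so the result looks more precise, yet the pooled estimate is just a weighted average of the inputs: if every study shares the same bias, the meta-analysis reports that bias with extra confidence.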

~~~
jessaustin
Perhaps people find TFA more persuasive than your comment?

~~~
Kenji
It's just interesting to see the opinions shift over time.

------
carsongross
The further Science, Inc. gets from the scientific method and independent
verification, the more of a joke it becomes.

Vox Day (no endorsement) makes the distinction between scientistry and
scientody, which I think is an increasingly useful one.

~~~
Fomite
One should note that meta-analysis is a form of rigorous independent
verification.

There are also some flaws in his analysis:
https://www.ncbi.nlm.nih.gov/pubmed/27620683#cm27620683_26950

~~~
carsongross
Meta-analysis is _not_ a form of rigorous independent verification in the vast
majority of cases. It is usually a statistical veneer placed, lazily, on top
of a smear of vastly different experimental results, giving the appearance of
rigor.

Replication is a rigorous form of independent verification.
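The "smear of vastly different experimental results" objection can at least be measured. A minimal sketch, with hypothetical numbers, of the standard heterogeneity check (Cochran's Q and the I² statistic) applied to studies that disagree more than sampling error alone would allow:

```python
import math

# Hypothetical effect estimates and standard errors from four studies
# whose results disagree well beyond their sampling error.
effects = [0.10, 0.80, -0.20, 0.60]
ses = [0.15, 0.15, 0.15, 0.15]

weights = [1 / se ** 2 for se in ses]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
# I^2: share of total variation attributable to between-study differences.
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled = {pooled:.2f}, Q = {q:.1f}, I^2 = {i2:.0f}%")
```

A high I² (here close to 90%) is exactly the warning sign that a single pooled number is papering over real disagreement between the underlying experiments, which is the scenario being criticized above.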

~~~
dfsegoat
How should the average researcher go about replicating something like 3-5
clinical trials that a drug company may have run for a human drug?

Should we replicate all that work? Or perhaps just obtain the raw, patient-
level data from said company and analyze the data ourselves?

I am just curious which you view as more efficacious?

~~~
carsongross
If you want to claim to be a scientist, you replicate the work.

If you want to be a statistician of increasingly obvious limited social
utility, you rerun the statistics.

~~~
epistasis
Question: why should we take your declarations of who is and isn't a scientist
more seriously than the scientists themselves?

Let's take a typical cancer drug. It goes through safety, dosing, and efficacy
clinical trials. In lucky cases, it will show efficacy in a multi-year
clinical trial across many geographic locations with hundreds of patients.

If the trial is successful, the FDA will eventually approve the drug, and it
can then be prescribed. The FDA mandates that follow-up study be done
continuously to learn more about the drug: better indications for use,
contraindications for when it won't work, etc.

Where does "replication" come into any of this? Why in the world would
somebody replicate a dosing study and generate the same data? That would be
unethical, dangerous, and counterproductive. At best, one would throw out bad
data that was improperly collected. At worst, one would just abandon the drug
and move on to a different candidate drug.

When it comes to human studies that come at real cost to human life, not using
all the best available information is unscientific, unethical, should result
in civil penalties, and should probably result in criminal penalties as well.

This is the situation the parent was asking about; making short blanket
statements about what a scientist is and is not, without considering the real
issues at hand, makes it seem like you're not engaging with the issue.

"Scientists replicate data" is a simple thing to say if you're looking at
stars, running a particle collider, or working on a new synthetic compound;
taking that simple-minded attitude is not appropriate for much of the most
expensive research out there.

The question of what to replicate and when is a difficult one; it's the
tradeoff between new discovery and making sure you're on the right path. If
you can make a new discovery that simultaneously proves or disproves that
you're on the right path, that's a smarter move, but it's not "replication."

~~~
dfsegoat
THANK YOU. Nobody seems to pick up on the fact that those 3-5 trials cost on
the order of $30-50M each, which adds up to hundreds of millions of dollars.

Not to mention the countless Institutional, Human Subject, and ethics review
boards that must be satisfied before we can even begin to think about laying
hands on a human to conduct a study of any sort - let alone one with an
investigational new drug.

------
gopher2
Soooo a meta-analysis of meta-analyses?

------
golemotron
> The increase (in the number of meta-analyses) is a consequence of the higher
> prestige that systematic reviews and meta-analyses have acquired over the
> years, since they are (justifiably) considered to represent the highest
> level of evidence

No. The increase is because it is cheaper to do a meta-analysis than it is to
design and conduct experiments. Meta-analyses also carry less reputational
risk.

~~~
Fomite
There are other, non-nefarious reasons.

1. They make excellent student projects. Part of this is cheapness, sure, but
part of it is that a meta-analysis can be done relatively quickly. Some
observational studies take years to complete; in the meantime, your Masters
student needs something to do.

2. They are often "Step 1" of a number of study designs. For example, if one
is eliciting priors for a Bayesian analysis, or in my case trying to
parameterize a theoretical model, "Is there a meta-analysis on this, and if
not, can we do one?" is one of the first questions asked.

3. It allows participation in a field. For example, I have thoughts about
some aspects of clinical medicine. I am unlikely to ever run a clinical trial,
what with not having a position in a medical school. I can, however, perform a
meta-analysis of trial data as well as (or possibly better than) the people
performing the studies. Running a study and conducting a meta-analysis are not
necessarily the same skill set.

~~~
golemotron
> There are other, non-nefarious reasons.

Cost is a nefarious reason?

~~~
Fomite
There's an undertone in this thread that meta-analysis is just what you do if
you can't run a study. I wanted to note that there are _scientific_ reasons to
perform one, in addition to logistical ones.

------
Ericson2314
Until the incentives change in academia...

------
erez
In many ways, the title is an example of a deeply flawed meta-analysis.

