
A new book on the lack of rigor in biomedical science - tmd83
https://arstechnica.com/science/2017/04/how-sloppy-science-creates-worthless-cures-and-wastes-billions/
======
nonbel
From this description I wouldn't buy this book. It focuses on a bunch of
minor issues rather than the main problem: biomedical researchers only come up
with vague "A makes B go up/down" hypotheses. This is not because biology is
more complicated than physics. It is 100% a culture and training issue.

[https://bml.bioe.uic.edu/BML/Stuff/Stuff_files/biologist%20f...](https://bml.bioe.uic.edu/BML/Stuff/Stuff_files/biologist%20fix%20radio.pdf)

[https://meehl.dl.umn.edu/sites/g/files/pua1696/f/074theoryte...](https://meehl.dl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf)

~~~
return0
Agree with that. The problem is not just the inaccurate answers; the
questions are often ill-conceived as well. Life scientists often draw
correlations from molecules to behavior or phenotype, an inference that spans
many orders of spatial magnitude and a gazillion complex processes. There is
just not enough incentive for systematic bottom-up approaches, as the average
grant size is made to fit this kind of hypothesis. It's more culture than
training, or rather, training is a consequence of the culture of the field.

~~~
type0
It's more like: if you lost your keys on your way home, you go and look for
them under the lamp post because you're drunk and you can only see where the
light is.

------
untilHellbanned
As a biomedical scientist at a solid US university, I'm getting pretty fed up
with all the attacks on my field's credibility. It's similar to education,
where everyone moans about poor quality. While I can't speak for the average
teacher, I can honestly say the average researcher tries very hard to get it
right. Remember, most people don't become biomedical scientists unless they
worked hard for many, many years in school and beyond, often for quite terrible
wages. So you can hate the game (drug companies, academic journals), but
please think through any hate you might have for the players.

~~~
capnrefsmmat
I don't think the blame can all be pinned on drug companies and academic
journals, or whatever other third parties we can identify. They all have a
role, but many problems with biomedical science come down to individual
researchers making poor decisions about their experimental design, statistical
methods, and computational tools.

I'm a statistician, and when I wrote my own book about bad statistics in
science (see
[https://www.statisticsdonewrong.com/](https://www.statisticsdonewrong.com/)),
I made sure to reference studies which quantify how often errors occur in real
published research. The rate is stunningly high. The average biomedical
experiment is conducted with (a) a sample size which is far too small to
detect an effect of the expected size, (b) a vague analysis plan which leads
to exploratory analyses with high false positive rates, (c) frequent copy-and-
paste errors and math mistakes in presenting important results, and (d) an
overreliance on statistical methods to make up for poor experimental design.

This means the average published paper is likely a false positive; if it
isn't, it is likely an overestimate of the true effect; and either way it is
barely reproducible.
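Point (b) is easy to make concrete. As a minimal sketch (assuming, for simplicity, k independent outcome measures and the usual 0.05 threshold), here's how fast the family-wide false positive rate grows when an experiment measures many outcomes with no plan for which one counts:

```python
# Probability of at least one "significant" result (p < 0.05) when
# testing k independent outcomes that all have NO true effect.
# This inflation is what a vague analysis plan buys you.

def family_false_positive_rate(k: int, alpha: float = 0.05) -> float:
    """Chance of >= 1 false positive across k independent null tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    rate = family_false_positive_rate(k)
    print(f"{k:2d} outcomes -> {rate:.0%} chance of a false positive")
```

With 20 outcomes you're at roughly a 64% chance of a spurious "finding" even when nothing is real; independence is a simplification, but correlated outcomes don't rescue the picture much.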

Is this the fault of individual scientists? Partly, yes -- these problems have
been pointed out for years in leading journals, but nobody takes action to do
better research. It's also the fault of the grant funding systems which
incentivize salami-slicing of results instead of doing one big, rigorous,
well-designed study, and of journals which prefer dramatic but unreliable
results over mundane but well-executed results. (Of course, the journal
editors and reviewers are usually active scientists themselves.) I think the
average researcher would like to "get it right", but has to focus on getting a
career instead.

Just a few papers on the problem of poor sample sizes in biomedicine:
[http://journals.plos.org/plosbiology/article?id=10.1371/jour...](http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2000797)
[http://rsos.royalsocietypublishing.org/content/4/2/160254](http://rsos.royalsocietypublishing.org/content/4/2/160254)
[http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html](http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html)

~~~
untilHellbanned
I appreciate your comments, but as a biologist, hopefully you can understand
that it's not like we are trying to keep our sample sizes small. We understand
statistical power. It's simply not feasible, logistically and financially, to
have the ideal numbers a lot of the time. Real life intervenes, so we do our
best. Math/computer people, which probably describes many here on HN, seem to
have a hard time with the real-world messiness that is biomedical research.
It's the same mentality of "oh, I'll just apply my algorithms to this..."
which works for them but not so much for us.

~~~
capnrefsmmat
Given how infrequently we see power calculations in published papers, and the
results of typical surveys of scientists, I don't think the typical scientist
_does_ understand statistical power or other statistical concepts. There are
plenty of surveys (e.g.
[http://www.tandfonline.com/doi/abs/10.1080/17470218.2014.885...](http://www.tandfonline.com/doi/abs/10.1080/17470218.2014.885986))
showing that scientists just pick sample sizes based on what's usually done in
their field, rather than working out what's necessary. There are plenty of
other surveys showing practicing scientists don't understand the concepts of
significance and power. Some of the authors of those studies don't understand
them either -- I've seen at least one study with questions with no correct
answer...
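For reference, the required sample size isn't mysterious. Here's a minimal sketch of the textbook normal-approximation formula for a two-sample comparison of means (standardized effect size, i.e. Cohen's d), using only the Python standard library; exact t-based calculations give slightly larger numbers:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    test of a standardized mean difference (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.8))  # large effect:  25 per group
print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect:  393 per group
```

Compare those numbers to the n = 8 or n = 10 per group that's typical in animal studies, and you can see why so many published effects for small true effects are noise.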

I know large sample sizes are often impractical, and you have to make do. But
given that, results must be presented with all sorts of disclaimers, since
results from underpowered studies are frequently wrong or exaggerated
([https://www.ncbi.nlm.nih.gov/pubmed/18633328](https://www.ncbi.nlm.nih.gov/pubmed/18633328)).
That's not what happens -- scientists who should be aware that their studies
have very little value as evidence instead present them as groundbreaking
definitive results.
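The exaggeration effect (the "winner's curse" from the Ioannidis paper above) is easy to demonstrate by simulation. A rough sketch, assuming a small true effect, normal data with known unit variance, and a simple z-test significance filter:

```python
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 0.2   # small true standardized effect
N = 20              # per-group sample size (badly underpowered for d = 0.2)

# Keep only the effect estimates that cross the significance threshold,
# mimicking publication of "significant" results.
significant_estimates = []
for _ in range(20_000):
    a = [random.gauss(0.0, 1.0) for _ in range(N)]
    b = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    d_hat = mean(b) - mean(a)          # estimated effect (known unit variance)
    se = (2 / N) ** 0.5                # standard error of the difference
    if abs(d_hat / se) > 1.96:         # "significant" at p < .05 (z-test)
        significant_estimates.append(d_hat)

print(f"true effect: {TRUE_EFFECT}")
print(f"mean estimate among significant results: "
      f"{mean(significant_estimates):.2f}")
```

With these settings, the estimates that survive the filter average several times the true effect, and a handful even come out with the wrong sign. That is what an underpowered-but-significant result is worth as evidence.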

I would much rather see slow, difficult, tentative studies instead of the
high-speed spray of nearly meaningless results we see every day. But that will
take a dramatic change in career and funding incentives.

~~~
toufka
>I would much rather see slow, difficult, tentative studies instead of the
high-speed spray of nearly meaningless results we see every day.

A major hurdle is that the experiments _do_ keep getting slower and slower. As
a recently graduated biomedical grad student, I spent literally 6.5 years of
my life trying to get an experiment to work enough times to glean useful data
out of it. I knew exactly how much power my data had (I had literally years to
think and rethink about it). The number of stars that had to align (equipment,
protocols, controls, materials, animals, sleep schedules, etc.) to get data
out of an incredibly complicated system meant that there is no fast data.
Useful biological data is hard-won. And it's getting harder and harder (read:
more expensive and labor-intensive).

You could say, well, then just don't expect every scientist to do their own
(independent) research (see the author lists on CERN publications...). That
indeed would change incentives a lot in the biomedical fields - and I actually
do think for the better.

The last point is that I think there's a big difference between how a
practicing scientist sees a 'published paper' in their field and how everyone
else sees it. A published paper really is just a record of what you did, what
happened, and maybe a basic interpretation. It does not claim to be 'true' in
a strong sense. And there are entirely reasonable situations where different
papers come to opposite (justifiable) conclusions. A paper is itself weighted
evidence to be used in the field's next moves, precisely because sample size
and (more often than not) experimental complexity make clarity hard to come by
for a single paper.

------
derefr
The basic research isn't what costs billions, of course.

If we've got "worthless cures" and a "waste [of] billions", it's because we've
got:

1. pharmaceutical corporations incentivized to take any random chemical they
might be able to make money off of by _telling_ people it has a certain effect
(and that isn't _so_ useless that the FDA will literally tell them to stop
selling it for being useless) and get them on the market; and

2. an FDA body incentivized to put billion-dollar requirements on
corporations to research a chemical's _safety_ by putting it through a
gauntlet of trials, and to research said chemical's _efficacy_ (i.e. whether
it has some "statistically visible" effect on people), but without that
efficacy test translating into anything like a chemical's _marketable
usefulness_ (i.e. it being the answer to any problem anyone actually has, such
that a doctor would be _independently_ motivated to prescribe it without the
pharmacorp's encouragement.)

In other words, our system assumes that pharmacorps will only _bother_ putting
drugs on the market when they're useful. But that makes no sense: if peddling
snake oil—snake oil the FDA has rubber-stamped as having "[statistically]
noticeable effect" on some disease—is just as profitable as peddling cures,
why would pharmacorps _care_ about curing anything?

It's common knowledge that most nutritional supplements—especially the kind
marketed as containing some named plant rather than some named active
ingredient—are useless, and often don't even contain what they claim to
contain. We know exactly why the corporations that produce them do so anyway:
people are dumb enough to buy them, and the rules for things marketed as
"nutritional supplements" don't prevent them from making a pill with
literally _no_ active ingredient.

Well, the logic for pharmacorps isn't any different. If the rules for things
marketed as "drugs" allow them to get away with selling pills containing a
chemical that just caused people to "show 1% improvement" along some symptom
axis in studies, and doctors are dumb enough to [be nudged into] prescribing
that pill—why wouldn't they sell them?

The research scientists who found out that the chemical might be beneficial in
some way didn't cause any of this, any more than the people who first
discovered coal in the ground caused coal power plant pollution. Corporate
greed, and a government body with weak "efficacy" standards, are at fault
here. The basic research—sloppy or not—is just (unintentional!) grist for
their mill.

---

That being said: if you _are_ a research scientist and you want to do what
_you_ can to be a check on this system, making your basic research as rigorous
as possible, so that it can support conclusions of _no_ effect that the FDA
can cite as reasons to _reject_ a pharmacorp's pill, would be useful.

But, very likely, the pharmacorp has more money than your lab does, and so
will be able to pay for a higher-powered study than you can afford, that
_will_ be sloppy, such that it could prove the opposite conclusion. Your
voice, however clear and strong, might just be drowned out anyway.

Really, if we want to fix this problem, the fix needs to come from a different
direction. Possibly FDA drug-approval reform, toward something more like a
peer-review model (i.e. the pharmacorp gives the FDA money; the FDA uses that
money to pay independent labs to rigorously replicate the efficacy study; and
then the FDA trusts their independent labs over the pharmacorp's results.)

~~~
Spooky23
Would you provide some non-supplement examples of FDA approved drugs in this
category where they are "safe" but do nothing?

~~~
derefr
Most newer anti-depressants, for example:
[https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4172306/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4172306/)

---

A tangent: interestingly, with anti-depressants in particular, doctors
prescribe many of these "basically a placebo" drugs fully aware that they're
basically-placebos, and indeed do so independently of any pharmacorp wining-
and-dining. This is because the first-line treatment for depression is
effectively "let's see if we can give you a placebo to get you to trick
yourself out of being depressed." Which often works!

There are _effective_ treatments for depression, mind you, but they have
_side-effects_, while placebos are—unsurprisingly—very well-tolerated. So the
"actually a drug" drugs get pushed back to the second line or beyond.

Basically, doctors are treating depression the way an ISP's phone support
treats user complaints: they have a first-tier response that's effectively
just there to help people who are temporarily confused and will resolve the
problem themselves if you can talk to them long enough. You only get the
"actual support" support once the phone-tree algorithm has verified that
"placebo support" wasn't enough for you.

