
30k medical studies may be invalid due to contaminated cell cultures - anigbrowl
http://www.sciencealert.com/more-than-30-000-scientific-studies-could-be-wrong-due-to-contaminated-undying-cells
======
sadscientist
This just highlights an emerging trend that I'm already seeing in university
labs: distrust of science, especially of experiments you didn't do yourself.
It's worrying enough to see the public mistrust science, but I'm seeing it
happen in biology labs across my university.

This article is understating the effect on PIs and their labs. During one of
my conference meetings with other PIs, we estimated that 3 million of our
combined grant money was spent trying to replicate incorrect or flawed studies.

It also has an extremely demoralizing effect down the lab pipeline, on
everyone in the labs from PhD students and postdocs to lab techs.

I've been hearing more and more disillusionment about the state of science
from lab members. I've already had to have a meeting with several postdocs who
said something along the lines of "I don't trust 90% of published articles."

Something is really wrong in science, and this is just the beginning.

The scientific community needs to have a serious public discussion about the
worries many of us already have in private. But since no PI, including me,
wants to jeopardize their career, I fear things are going to go off a cliff.

EDIT: If any rich Silicon Valley investor is reading this, please invest in
companies trying to automate and standardize research lab work.

~~~
Consultant32452
>I've already had to have a meeting with several postdocs who said something
along the lines of "I don't trust 90% of published articles."

GOOD. They should feel that way. Science is in a state of crisis, and that 90%
number is an accurate reflection of reality. In the study linked below, 90% of
landmark cancer research papers could not be reproduced. Someone will surely
chime in that their own favorite branch of science is better, but "better than
90% failure" is a really low bar.

[http://www.reuters.com/article/us-science-cancer/in-cancer-s...](http://www.reuters.com/article/us-science-cancer/in-cancer-science-many-discoveries-dont-hold-up-idUSBRE82R12P20120328)

~~~
SubiculumCode
Yes and no. In many ways the problems we are addressing are much more
complicated than those being addressed 100 years ago. Often advanced
statistical methods are needed to make inferences. We are often looking for a
2% signal over noise, trying to understand complex dynamical systems. Even
without anything nefarious, and even with careful execution of rigorous
methods, unmeasured third variables can differ across study samples, and you
can't measure everything. Even if you could, you wouldn't have the statistical
power. Random assignment is nice, but not every field can do it ethically. And
even with random assignment, there is no guarantee that the effects of
unmeasured third variables will average to zero. In short: science is harder
than many give it credit for.
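To put a rough number on the power problem (illustrative figures of my own,
using the standard normal-approximation sample-size formula, not anything from
the comment): detecting a "2% signal" in a proportion near 50% takes on the
order of ten thousand subjects per arm.

```python
def n_per_group(p, delta):
    """Approximate subjects per arm to detect a difference `delta` in a
    proportion near `p` (two-sided z-test, alpha=0.05, power=0.80)."""
    z_alpha = 1.96  # two-sided alpha = 0.05
    z_beta = 0.84   # power = 0.80
    return round(2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / delta ** 2)

print(n_per_group(0.5, 0.02))  # a 2% signal: ~9,800 per arm
print(n_per_group(0.5, 0.10))  # a 10% signal: ~400 per arm
```

The quadratic dependence on `delta` is the whole story: halving the effect you
are hunting for quadruples the sample you need.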

That said, the competition and incentives in science are perverse. Peer review
quality has declined, if only because everyone is in a rat race to survive,
and being a peer reviewer does not do much for getting you a job or tenure.

------
sndean
This really isn't that surprising and has been known for years. Lots of kids
in grad school were doing experiments with cells at >200 passages that were
obviously contaminated with mycoplasma. I think they knew it, but their HIV
research had always been done that way. I'm guessing if they went back to
lower passage / non-contaminated cell lines their results would change.

Similar thinking from the source [1]:

> Why does ATCC continue to distribute HeLa Contaminated Cell Lines?

> ATCC continues to distribute these cell lines, even though they have been
> shown to be contaminated with HeLa, because researchers need them for
> purposes beyond use as models for specific disease/original source tissue.

Beyond that, certain cell lines started out as contaminated [2]:

> In the earliest stocks available, the level of contamination was 0.6%.

[1]
[https://www.atcc.org/Global/FAQs/3/6/HeLa%20contaminated%20C...](https://www.atcc.org/Global/FAQs/3/6/HeLa%20contaminated%20Cell%20Lines-1207.aspx)

[2]
[https://www.atcc.org/Products/All/CRL-1593.2.aspx#characteri...](https://www.atcc.org/Products/All/CRL-1593.2.aspx#characteristics)

------
civilian
I imagine that step 0 of the Materials & Methods section in new papers going
forward will be: "We verified that our cell line was made of intestinal cancer
cells with PCR and genomic expression tests..."

------
beebmam
This contamination has happened before, with HeLa cells and the original "War
on Cancer". It was an absolute disaster back then, and I'm aghast that the
medical industry hasn't learned from this mistake. I actually am quite shocked
and horrified that this has happened again. This means that many, many human
lifetimes of work are invalid.

There's a great documentary by Adam Curtis on this exact topic. "The Way of
All Flesh"
[https://www.youtube.com/watch?v=C0lMrp_ySg8](https://www.youtube.com/watch?v=C0lMrp_ySg8)

Again, I can't express just how sad this makes me.

------
Bromskloss
> The scientists can carry out a genetic test before starting their research
> to detect misidentified cells. But that takes time and money. “The
> scientists I spoke to said that was the biggest problem,” says Halffman.

Would performing such tests be a business opportunity? A researcher wants to
use a certain cell product, but first sends a sample to our testing facility
to see if it's a quality product or a botched one.

------
Gatsky
This is overstating the issue in the present day. An immortalised cell line is
after all a very convenient but very contrived model of a real system. For
example, the greatest revolution in cancer treatment in the last 10 years has
been immunotherapy, which was developed using more realistic models than 'cell
line in a dish'.

~~~
nonbel
This is what you always hear from biomed/psych/etc. Apparently it simply is
not important to get the details right. Misinterpret the results of your
analysis (p-values), use the wrong cell lines, measure the wrong thing, etc.
It rarely seems to affect the conclusions.

If anyone who points out errors is dismissed as pedantic, it makes you wonder
about the point of doing all that. Just come up with an idea A, say "heads =
conclusion A", "tails = conclusion not A", and flip a coin.

~~~
Fomite
I think one of the things that is actually happening is that for many
examples, while the _quantitative_ answer changes, the _qualitative_
conclusion isn't altered by the errors.

In graduate school, as part of a class on survival analysis, we subjected the
same data set to increasingly sophisticated analysis techniques to account for
all kinds of things.

Each time, the effect estimate changed.

At the end, while discussing them, the professor asked a very simple question:
"Do any of these suggest that HAART is a bad idea?"

Similarly, during the Ebola epidemic, when everyone was fussing about the
various forecasts missing the mark, etc., what they were actually predicting
was "This is a serious crisis that needs international intervention".

~~~
nonbel
I'm sure that HAART is a good idea sometimes and a bad idea other times. Also,
this will change as time goes on (HIV mutates, demographics change, other
treatments become available, etc.).

I'm also sure that there are many things going on at any given moment that
should count as "serious crisis that needs international intervention". The
real question is whether it is a more serious crisis than other things
currently happening.

To deal with both these issues you are going to need deeper understanding than
"good idea vs bad idea" or "is crisis vs not crisis".

But the real problem with being unable to quantify your understanding is that
you are left unable to make precise predictions, thus you can never perform
any stringent tests. If all you can predict is something vague (eg, HAART will
increase 5 year survival), there will be many ways to misinterpret the data in
support of your explanation even if it is totally wrong.

~~~
Fomite
The point is not that it will change over time - it was that _in this setting_
with _this information_ different methods can give you different answers, but
that may not actually matter. If you're HIV+ in this country right now, you
want HAART, whether I did some fancy marginal structural modeling or not.

> To deal with both these issues you are going to need deeper understanding
> than "good idea vs bad idea" or "is crisis vs not crisis".

This is not, actually, how the response to Ebola worked. It was very much
"crisis vs. not crisis". And a huge amount of clinical decision making is
"good idea vs. bad idea".

The suggestion was not that you're unable to quantify your understanding. The
suggestion was that there are possible errors which, while changing your
effect estimate, do not change whether or not you do a thing.

To return to the HAART example, imagine you're an HIV+ patient and I've told
you that a major study failed to control for time-varying confounding, and
that upon re-analysis, instead of doubling your 5 year survival, it only
increases it by 87%.

Or, for a form of "is this repeatable?" that I particularly despise, that it
still doubles your chances of survival, but the p-value has gone from 0.047 to
0.062.

Do you want to stop taking the drug?
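That p-value hop is exactly what sampling error predicts. A toy simulation
(all numbers invented for illustration, not from any actual HAART trial): take
one fixed true effect, replicate a small study many times, and watch how often
the p-value lands on the "wrong" side of 0.05.

```python
import math
import random

random.seed(0)

def two_prop_pvalue(x1, x2, n):
    """Two-sided z-test for a difference of two proportions
    (pooled normal approximation)."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

# One fixed true effect: survival 0.50 with treatment vs 0.25 without.
n = 30  # small arms, as in many early trials
pvals = []
for _ in range(1000):
    treated = sum(random.random() < 0.50 for _ in range(n))
    control = sum(random.random() < 0.25 for _ in range(n))
    pvals.append(two_prop_pvalue(treated, control, n))

above = sum(p > 0.05 for p in pvals)
print(f"{above} of 1000 replications of the same true effect gave p > 0.05")
```

With arms this small, a large fraction of replications of the identical true
effect "fail" the 0.05 threshold, which is why a shift from 0.047 to 0.062 on
re-analysis tells you almost nothing by itself.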

~~~
nonbel
> To return to the HAART example, imagine you're an HIV+ patient and I've told
> you that a major study failed to control for time-varying confounding, and
> that upon re-analysis, instead of doubling your 5 year survival, it only
> increases it by 87%.

> Or, for a form of "is this repeatable?" that I particularly despise, that it
> still doubles your chances of survival, but the p-value has gone from 0.047
> to 0.062.

> Do you want to stop taking the drug?

This obviously depends on the relative costs. Cost of side effects, buying the
drugs, time going to treatment, etc. That all requires accurate
quantification. But cost-benefit isn't even what I meant.

To begin with, if they can't nail down a quantifiable effect that is stable
from study to study, who knows what is going on? Why would you have confidence
in their estimates of effectiveness if they are inconsistent with one another?

~~~
Fomite
"This obviously depends on the relative costs. Cost of side effects, buying
the drugs, time going to treatment, etc."

I'm going to suggest if you're facing death from an AIDS-related illness, a
relative risk of 2.00 vs. 1.87 will feel very, very similar to you.

"To begin with, if they can't nail down a quantifiable effect that is stable
from study to study, who knows what is going on? Why would you have confidence
in their estimates of effectiveness if they are inconsistent with one
another?"

Define "stable" - because even if the effect of something is fixed in the same
way a physical constant is, it's invariably sampled with error.

Again, if I give you five studies that suggest that HAART improves survival
by:

100%, 102%, 87%, 94% and 96%

are you really going to suggest that we don't know that HAART improves
survival?
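That sampling-error point can be sketched with a quick simulation (all numbers
hypothetical, chosen only to mirror the five-study example): give five studies
the exact same true relative risk of 2.0 and they will still report a spread
of estimates.

```python
import random

random.seed(1)

TRUE_RR = 2.0     # true relative risk: treatment doubles 5-year survival
P_CONTROL = 0.25  # assumed survival without treatment
N = 500           # patients per arm in each hypothetical study

def one_study():
    """Simulate one two-arm study and return its estimated relative risk."""
    control = sum(random.random() < P_CONTROL for _ in range(N))
    treated = sum(random.random() < P_CONTROL * TRUE_RR for _ in range(N))
    return treated / control

estimates = [round(one_study(), 2) for _ in range(5)]
print(estimates)  # five different numbers from one fixed underlying effect
```

Every study here samples the same fixed effect, yet no two report the same
number, which is the sense in which "87% vs 100%" need not mean the effect is
unstable.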

~~~
nonbel
> _" I'm going to suggest if you're facing death from an AIDS-related illness,
> a relative risk of 2.00 vs. 1.87 will feel very, very similar to you."_

I at first thought we both understood there would be some uncertainty about
these values no matter what, but were leaving that out for simplicity's sake.
Roughly, I was assuming it is at least +/- 10-20%. The reason these values
"feel very, very similar" is that I would not expect medical data to be able
to distinguish between them.

> _" Define "stable" - because even if the effect of something is fixed in the
> same way a physical constant is, it's invariably sampled with error."_

Ok... so you _are_ implicitly considering uncertainty.

> _" Again, if I give you five studies that suggest that HAART improves
> survival by:

100%, 102%, 87%, 94% and 96%

are you really going to suggest that we don't know that HAART improves
survival?"_

Not enough info.

- Where did the uncertainty go?

- What methods were used to generate these values? Even basic strategies like
blinding the people collecting/processing/analyzing the data are often still
missing.

- What population/frame do those numbers refer to, and how much error can we
expect from extrapolating to other situations in the future?

You need reliable quantification; this hand-wavy "things are better/worse,
significant/insignificant" approach is an awful idea.

------
robocat
I wonder how many cervical cancer deaths are parasitic cancers from HeLa cell
lines? Similar to
[https://en.m.wikipedia.org/wiki/Devil_facial_tumour_disease](https://en.m.wikipedia.org/wiki/Devil_facial_tumour_disease)

~~~
civilian
Huh. I'd take a bet that it's less than a dozen. I think it would be difficult
to transfer it-- you'd have to have an OB-GYN who was also doing research and
who didn't follow proper cleaning protocols between researching and doctoring.

Unlike the devils, we don't bite each other's cervixes as a way to say "hello".

~~~
sjg007
I would make a stronger bet and say it's exactly zero.

~~~
civilian
Well, does a non-malignant HeLa tumor that's been artificially transplanted
count? Because if so, we may have both lost our hypothetical bets. :]
[https://www.quora.com/Is-it-possible-that-the-HeLa-immortal-...](https://www.quora.com/Is-it-possible-that-the-HeLa-immortal-cell-line-could-cause-an-infection-or-cancer)

(I can't find numbers of how many tumor masses lasted, but hundreds of
prisoners were injected.)

------
fly-swatter
If only we could convince more physicists, chemists, and software engineers to
do biological and medical research.

~~~
Fomite
Having met a number of physicists and software engineers working in biological
and medical research, they have their own, unique brands of fail in addition
to the usual slate.

------
stefantalpalaru
This problem has been cheerfully ignored for a long time -
[http://discovermagazine.com/2014/nov/20-trial-and-error](http://discovermagazine.com/2014/nov/20-trial-and-error) :

> In June 2007, all that changed. Ain attended the annual Endocrine Society
> meeting in Toronto, where Bryan Haugen, head of the endocrinology division
> at the University of Colorado School of Medicine, told Ain that several of
> his most popular cell lines were not actually thyroid cancer. One of
> Haugen’s researchers discovered that many thyroid cell lines their
> laboratory stocked and studied were either misidentified or contaminated by
> other cancer cells.

[...]

> But rampant contamination is not the shocker in this story. Ain retired all
> the lines; he never sent any of them out again. He also sent letters to 69
> investigators in 14 countries who had received his lines. He heard back from
> just two.

