
Cancer reproducibility effort faces backlash - tlb
http://news.sciencemag.org/biology/2015/06/feature-cancer-reproducibility-effort-faces-backlash
======
TisButMe
This is the same behaviour I've seen time and time again in biology labs.

People there re-do the same experiment over and over until it gives them the
result they want, and then they publish that. It's the only field where I've
heard people say "Oh, yeah, my experiment failed, I have to do it again". What
does it even mean that an experiment failed? It did exactly what it was
supposed to: it gave you data. It didn't fit your expectations? Good, now you
have a tool to refine your expectations. But instead, we see PhD students and
post-docs working 70-hour weeks on experiments with seemingly random results
until the randomness goes their way.

A lot of them have no clue about the statistical treatment of data, or about
building a proper model to test assumptions against reality. Since they deal
with insanely complicated systems, with hidden variables all over the place, a
proper statistical analysis would be the minimum required to extract any
information from the data. But no matter: once you have a good-looking figure,
you're done. In cellular/molecular biology, nobody cares what a p-value is, so
as long as Excel tells you it's <0.05, you're golden.
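That "failed experiment" loop is just multiple testing in disguise, and it's
easy to quantify. Here's a toy simulation (my own sketch, not from any real
study) of what happens under the null, i.e. when there is no real effect, if
you allow yourself ten attempts and keep the one that "works":

```python
import math
import random

def null_experiment_pvalue(n=30):
    """One experiment with NO real effect: two groups of n samples drawn
    from the same N(0, 1) distribution, compared with a two-sided z-test
    (the variance is known to be 1 here, so the z-test is exact)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(42)
trials = 2000
attempts = 10  # "my experiment failed, I have to do it again"
false_positives = sum(
    any(null_experiment_pvalue() < 0.05 for _ in range(attempts))
    for _ in range(trials)
)
# Each single attempt is "significant" 5% of the time, but with 10 tries
# you get at least one p < 0.05 about 1 - 0.95**10 ~ 40% of the time --
# from pure noise.
print(false_positives / trials)
```

The point isn't the exact numbers; it's that "repeat until the randomness
goes your way" turns a 5% false-positive rate into roughly a 40% one without
anyone consciously cheating.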

The scientific process has been forgotten in biology. Right now it's basically
what alchemy was to chemistry.

I'm very happy to see efforts like this one. Sure, they might show that a lot
of "key" papers are very wrong, but that's not the crux of it. If there is a
reason for biologists to make sure that their results are real, they might put
a little more effort into checking their work. And when they figure out how
much of it is bullshit, they might even slow down a little on the publications
and go back to the basics for a while.

I'm sorry about this rant, but I've been driven away from a career in virology
by those same issues, despite my love for the discipline, so I'm a bit bitter.

~~~
jboggan
Spot on with the alchemy remark; I've made similar comparisons before. Coming
into bioinformatics/computational biology with a strong discrete math
background, I found a lot of professors excited to work with me until I
started telling them that their ideas and models and experiments didn't imply
what they wanted them to. Just like the startup world is awash with "it's like
Uber for X", the biology world is full of "let's apply X mathematical
technique to $MY_NICHE", and somehow this is supposed to always generate novel
positive results worthy of publication. Then you tell them that you applied
such-and-such mathematical/statistical model to their pet system and that the
results contradict their last 10 years of published papers . . . and they ask
you to do it again.

I remember one professor who studied metabolic reaction networks modeled with
differential equations. The networks themselves were oversimplifications and
relied on ~5N parameters (N being the number of compounds in the network). The
problem was that while all the published examples converged on a nice steady
state (yay, homeostasis is a consequence of the model!), it was trivial to
randomize the parameters within their bounds of experimental measurement and
create chaotic systems. Did this mean the model wasn't so great? No, it just
meant those couldn't be the real-life configurations of those parameters . . .
sigh. And now I'm a data engineer, no one asks me to get data from an API that
doesn't actually provide it, and I'm much happier.
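The parameter-sensitivity problem described above can be shown on a toy
stand-in. Below is a hypothetical two-compound linear network (my own invented
example, not the professor's actual model; a linear system can go unstable,
though not truly chaotic like the nonlinear original): randomizing the rate
constants within plausible "measurement bounds" makes convergence to a steady
state roughly a coin flip.

```python
import random

# Hypothetical toy network: two compounds with first-order rates k1..k4,
#   dx/dt = A x,  A = [[-k1,  k2],
#                      [ k3, -k4]]
# The system settles to a steady state iff both eigenvalues of A have
# negative real part, which for a 2x2 matrix means tr(A) < 0 and det(A) > 0.

def reaches_steady_state(k1, k2, k3, k4):
    trace = -k1 - k4          # always negative for positive rates
    det = k1 * k4 - k2 * k3   # sign depends entirely on the parameters
    return trace < 0 and det > 0

random.seed(1)
samples = 50_000
lo, hi = 0.1, 2.0  # stand-in for "bounds of experimental measurement"
unstable = sum(
    not reaches_steady_state(*(random.uniform(lo, hi) for _ in range(4)))
    for _ in range(samples)
)
# Roughly half the random parameter sets diverge instead of converging:
# the "homeostasis" shown in the published examples is a property of the
# chosen parameters, not of the model.
print(unstable / samples)
```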

~~~
TisButMe
I hoped that it wasn't as bad in computational biology, or ecology, or any
other biology field where systems and models are actually defined. It saddens
me to read that your experience was as bad as mine...

~~~
donovanr
I'm a CompBio PhD student, and my experience is that folks in that field are
much more careful with statistics than in, say, molecular biology labs, but it
varies from lab to lab. My PI is exceedingly meticulous about stats -- for
instance, we don't report p-values, but rather entire distributions -- but
that's because our work is all in silico, so it's easy to run tons of
replicate simulations. Wet lab work that's finicky should definitely be held
to high statistical standards, but I don't think it's fair to presume everyone
in the field guilty until proven innocent.

~~~
TisButMe
But wet lab scientists should be even more careful! They have far less control
over the system they're trying to study than you do, so stats are the only
safety net we have to even attempt to do anything with the data we produce.

I also agree on the innocent until proven guilty part, but by now I've seen
and talked to hundreds of people with the best intentions, who do not realise
how important careful examination of the data is, so I'm growing a bit
disillusioned.

~~~
peterfirefly
I think it is entirely fair to presume biologists incompetent and sloppy until
they have proven otherwise.

(My impression from admittedly limited contact with biology students and from
browsing through the occasional paper is that most of them barely approach
mediocrity from below.)

------
astazangasta
Looking at this through the lens of drug discovery is the wrong way to do
this. The problem is with our drug discovery strategy, generally, not with the
reproducibility of our research.

STK33, for example, is _definitely_ implicated in cancer through a wide
variety of mechanisms. It is often mutated in tumors, and multiple studies
have picked it up as having a role in driving migration, metastasis, etc.

This doesn't mean we can make good drugs against it.

Making drugs is hard - they need to be available in the tissue in the right
concentrations, which is often difficult to achieve with a weird-shaped,
sticky molecule. They need specificity for the tumor, they need specificity
for the gene target(s) of interest, and they need to be effective at
modulating the target.

More importantly, though, the drug is modulating a target (gene) that is
involved in a biological system that involves complex systems of feedback
control, produces adaptive responses, and otherwise behaves in unexpected ways
in response to modulation.

In my experience this is usually underappreciated by most drug discovery
strategies, which merely seek to "inhibit the target" as if its involvement in
the tumor process means we can simply treat it as an "on-off" switch for
cancer. This assumption is asinine, and of course will (and does) lead to
frequent failure. STK33 is not an on-off switch, and attempting to treat it
that way will likely result in a drug that does nothing.

~~~
toufka
This is absolutely correct. Pharma companies are running into a wall and are
flailing to figure out what to do. It's quite clear from first principles that
bathing the entire body in trillions of little molecules, hoping that they
_only_ and _completely_ shut down a single kind of protein, at the right time,
in the right place, in the right cell, and do nothing more, is insane. There
is some logic behind the ability of a small molecule to help against invading
diseases (the antibiotics of the 20th century), but the same strategy will
philosophically not work for entire classes of cancers or other innate
biological problems. Though we all somehow just assumed that it would.

The pharmacy of the future will actually cure non-invading diseases: it will
repair the DNA that's been mutated and will express or inhibit the proteins
that need to be expressed or inhibited. And small molecules will be the
payload of these fancy protein-based nano-machines. But this hunt to bring
down the cost of small-molecule targets, at the cost of the reputation of the
science itself, might be foolhardy when the cost is near-infinite in the first
place because they're looking at it from the wrong perspective.

tl;dr: Pharmaceutical companies' hammer that worked so well against the nails
of bacterial infection is in no way suited to the plumbing of cancer. And now
they're 'investigating the plumbers' to figure out why their fancy new
50-billion-dollar "water-hammers" don't work so well at unclogging pipes.

~~~
refurb
I agree it's challenging to create small-molecule treatments for oncology.
That said, there have been some massive successes recently. Look at
Imbruvica, which is a massive jump forward in treating MCL and CLL.

Even if you drop small molecules and focus on antibodies, it's not like it's
all that much easier.

~~~
toufka
Certainly there are success stories when so much effort is put forward. But
look at all the things even Imbruvica does in addition to helping treat cancer
[1].

If you want to fix MCL you figure out how to engineer a genetic payload that
targets B-cells IFF they express particular genes, and reengineer those cells'
genomes to either no longer reproduce abnormally, or shut them down. You do
NOT covalently turn off an entire class of kinases in the ENTIRE body...

And that's the success story. Antibodies are just the tip of the protein
iceberg. They're the 'same thing as a small molecule', but in protein form -
baby steps into a whole new world. Sure, they can bind to stuff tightly, but
if that (alone) is what you're aiming for, then they're not being used to try
to fix the problem or engineer a way to the solution. There's lots more that
we could do if we started actually engineering the proteins and their
interactions, and delivering them in directed ways. We have access to those
primitive engineering tools, but instead of focusing on those nascent tools,
we're polishing up the old hammer.

[1]
[https://en.wikipedia.org/wiki/Ibrutinib#Adverse_effects](https://en.wikipedia.org/wiki/Ibrutinib#Adverse_effects)

------
gwern
> This past January, the cancer reproducibility project published its protocol
> for replicating the experiments, and the waiting began for Young to see
> whether his work will hold up in their hands. He says that if the project
> does match his results, it will be unsurprising—the paper's findings have
> already been reproduced. If it doesn't, a lack of expertise in the
> replicating lab may be responsible. Either way, the project seems a waste of
> time, Young says. “I am a huge fan of reproducibility. But this mechanism is
> not the way to test it.”

One swallow does not make a spring. With a belief like 'one replication is
enough', I'm not sure Young actually appreciates how large sampling error is
under usual significance levels or how high heterogeneity between labs is.
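To make the sampling-error point concrete: even a perfectly faithful
replication of a real effect fails a sizable fraction of the time. A toy power
calculation (hypothetical numbers; a simple two-sample z-test with known unit
variance, not any study's actual design):

```python
from math import sqrt
from statistics import NormalDist

def power(effect, n_per_group, alpha=0.05):
    """Probability that a two-sample z-test (unit variance, n per group)
    detects a true standardized effect of the given size."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(2 / n_per_group)  # standard error of the mean difference
    # probability the observed z clears the upper critical value
    # (the lower tail is negligible for positive effects)
    return 1 - NormalDist(mu=effect / se).cdf(z_crit)

# A real, medium-sized effect (d = 0.5) with 64 animals per arm:
print(round(power(0.5, 64), 2))  # 0.81
```

So even before lab-to-lab heterogeneity enters, one replication of a true
effect fails roughly one time in five; a single success or failure settles
very little either way.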

~~~
joe_the_user
As far as I can tell from the article, Young's stance is that the contract lab
doing the reproduction doesn't seem competent enough to correctly reproduce
the experiment.

I don't know enough to know if that's true or not. But it seems at least
possible that, in the rush to reproduce so much, the project is cutting costs
by using less-skilled contract labs.

~~~
jsprogrammer
Presumably the replicating lab will record and document their approach and
results so that it can be compared with the original experiment.

~~~
chris_wot
Am I missing deliberate irony here?

~~~
jsprogrammer
The results will either be replicated or they won't.

There will either be sufficient documentation to show whether the experiment
was repeated accurately or there won't be.

------
ryanobjc
As a complete outsider reading about the attitudes of the original scientists,
it seems to me that they resent the oversight and hate doing extra work. In
defending their practices they fall back on "expert work", essentially arguing
that what they are doing is too complex for anyone else to do and that they
should be left alone to continue doing it.

And from their point of view, it all seems very reasonable. But to the rest of
humanity, who are being asked to materially support them and are waiting for
their conclusions to make the world a better place, it seems ... frankly...
lazy and selfish. 30 emails, wow! 2 weeks of a graduate student's time --
these are the people who are the least paid, right? Below minimum wage, even?
The demands on their time seem so low, yet the complaints are so high, that
one can't help but wonder whether the real concern is that their results are
too 'magical' and irreproducible and they just fear other people learning
about it.

I've seen this behavior in professional settings, and ultimately it comes down
to a lack of confidence in oneself, in the tools and technology, and in the
quality of work being done. Careers are at stake, but is the alternative to
just give people a free pass?

~~~
nmrm2
Sorry, but how is this not just anti-intellectualism?

 _> 30 emails, wow!_

It can take up to half a day to reply to an email containing even one fairly
technical question. Obviously the time invested depends on the content of
those 30 emails, but it's not at all difficult to imagine this eating a week's
worth of a PI's time.

 _> 2 weeks of a graduate student's time -- these are the people who are the
least paid right? Below minimum wage even?_

The typical attitude is that this is a justification for _not_ making them
spend their time managing these sorts of tasks. You don't get to say "we get
to pay you crap because your job is exciting and rewarding" and then load that
same person down with a bunch of grunt work (on top of all the normal grunt
work). That's how you bleed students and kill the lab's productivity.

Also, grad students are not well paid, but they are typically no cheaper than
postdocs; PIs pay their tuition.

 _> The demands on their time seem so low, yet the complaints are so high_

So let's imagine a world where people calling for massive improvements to
reproducibility get their way. Let's say it takes a month of the lab's time.
For each paper. Multiple papers a year. That's a pretty massive time
investment. If you're a top lab, that could become a full-time position. And
believe me, you're not going to be able to fill that position with a grad
student. That person will have to be well paid, because their job is going to
suck.

So it's reasonable that scientists are peeved when they invest all this time
and don't perceive their collaborators as acting in good faith, or feel like
their collaborators are trying to cut corners to pinch pennies.

~~~
drcode
If an engineer builds the world's greatest new engine but says "unfortunately
it'll only run in my lab; no one else is competent enough to run it or build a
copy", then what good is it to society?

If the researchers in a lab are such geniuses that they are doing experiments
almost nobody else can duplicate and it is therefore impossible to determine
the veracity of their claims, how is that helping society and why should
society fund them?

Isn't the onus on the researchers to focus on experiments that are also
reproducible by non-supergeniuses?

~~~
nmrm2
_> If an engineer builds the worlds greatest new engine_

There is more to science than advanced product development. Conflating the two
is wrong-headed.

 _> but says "unfortunately it'll only run in my lab, no one else is competent
enough to run it or build a copy" then what good is it to society?_

It's fantastically good to society. A company interested in monetizing the
research could provide that researcher with a multi-year sabbatical to come to
their company and turn his _Research_ into a _Product_.

Incidentally, that happens. It's also not unheard of for PhD students to carry
an adviser's idea forward toward application in the context of a permanent
position at a relevant company.

 _> almost nobody else can duplicate_

There's a difference between not being able to duplicate, and duplication
being expensive.

 _> and it is therefore impossible to determine the veracity of their claims_

Again, reproducibility should focus on the veracity of the claims, not the
economics of reproducing them. Nothing is wrong with calling for better
reproducibility. The problem is in expecting to get it for free, and assuming
that it's always appropriate at every stage of research.

Investment in reproducibility should be in proportion to the degree of trust
the scientific community puts in the claim, and it is absolutely reasonable
for that investment to grow over time. But, don't kid yourself, it's an
investment. And society would have to be stupid to invest enormous amounts of
money into ensuring every single scientific paper ever published is held to an
extremely strong standard for reproducibility.

 _> Isn't the onus on the researchers to focus experiments that are also
reproducible by non-supergeniuses?_

If by "super genius" you mean "someone else who does research in the same or a
closely related field", then Hell. No. The onus on the scientists is to focus
on experiments that push science forward in service of mankind.

Sometimes this means helping mega-corps figure out how to reliably reproduce
your research without expert help and thereby increase profit by decreasing
required investment. Sometimes this means focusing on discovery.

~~~
mcguire
" _A paper that Young, a biologist at the Massachusetts Institute of
Technology in Cambridge, had published inCell in 2012 on how a protein called
c-Myc spurs tumor growth was among 50 high-impact papers chosen for scrutiny
by the Reproducibility Project: Cancer Biology._ "

That hardly sounds like "every single scientific paper ever published".

~~~
nmrm2
You are correct; I was being hyperbolic. But still, I'm not sure
Nature/Science/Cell is a high enough standard for "anyone should be able to
replicate with little effort" -- lots of "non-late-game" results that aren't
necessarily ready for industry applications get published in those venues
(which, I guess I've been arguing, is a good thing).

------
jessriedel
It seems like a sensible check is for this collaboration to include original
studies, like the one mentioned in the article lede, that have already been
replicated elsewhere. (Ideally they would keep the relevant members of the
collaboration blind to this fact.) Then when you say "we failed to replicate
X% of the studies", you also say "of the subgroup that had already been
replicated, we failed to replicate Y%". If Y isn't much smaller than X, you
know the replication collaboration is probably botching things.
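Concretely, the positive-control comparison could be as simple as a
two-proportion test on the failure rates. The numbers below are hypothetical,
and the pooled normal approximation is a sketch, not a finished analysis:

```python
from math import sqrt, erfc

def two_proportion_pvalue(fail_a, n_a, fail_b, n_b):
    """Two-sided p-value (normal approximation, pooled variance) for the
    hypothesis that two observed failure proportions are really the same."""
    p_a, p_b = fail_a / n_a, fail_b / n_b
    pooled = (fail_a + fail_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))

# Hypothetical outcome: of 50 studies, 15 are positive controls that were
# already replicated elsewhere. The project fails 17 of the 35 others
# (X ~ 49%) but also 7 of the 15 controls (Y ~ 47%).
p = two_proportion_pvalue(17, 35, 7, 15)
print(p > 0.05)  # True: Y is not smaller than X, so suspect the replicators
```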

------
nmrm2
I can't overstate how strongly I disagree with the conclusion that papers
should provide excruciating detail about protocol just because
"pharmaceutical companies can't reproduce key cancer papers [without the help
of the original scientists]". Science has rarely been done like this.

It would be like Google complaining that they can't copy pseudocode verbatim
out of a paper and have a highly performant algorithm. Or Microsoft
complaining that a static analysis defined in a paper wasn't accompanied by a
production-ready implementation.

Producing protocols that literally anyone could replicate without expending
effort is not the business of Science.

Replication should focus on the veracity of the underlying truth claim, not
the economics of reproducing the results.

~~~
Panoramix
Your analogies are completely off the mark. It is not the same at all. The
cornerstone of science is that other people can reproduce your results.
Period. There is no use publishing otherwise. Withholding key information,
which is so ubiquitous now, is a great disservice to the scientific community.

~~~
nmrm2
The question is not whether reproducibility is good; it's how much labs should
invest upfront in producing descriptions of protocols. My argument is that
they should probably invest more than they do now, but not so much that
pharmaceutical companies are able to reproduce a given experiment without
talking to the lab.

Science is a collaborative process. There's nothing wrong with collaboration
being part of the reproducibility process, as long as the person doing the
reproduction maintains their objectivity.

------
cwyers
> Jeff Settleman, who left academia for industry 5 years ago and is now at
> Calico Life Sciences in South San Francisco, California, agrees. “You can't
> give me and Julia Child the same recipe and expect an equally good meal,” he
> says. Settleman has two papers being replicated.

Uh, Julia Child WROTE FREAKING COOKBOOKS. The entire point of Julia Child was
that she tried to develop recipes in such a way that another cook could
produce an equally good meal. Now, yes, if I went into a boiler room at
Goldman Sachs and picked 10 guys at random, I doubt that most would be able to
duplicate the recipe. If I picked 10 professional sous chefs at random and
none of them were able to make a dish as good as Julia Child's from her
recipe, I would start to have my doubts about the recipe.

By the same token, I don't expect rank amateurs to be able to duplicate state
of the art cancer research. But if labs run by pharma companies and academic
institutions are having the failure rate at reproducing research that the
article claims, I think it's more than reasonable to start questioning the
paper that documented that research, if not the research itself.

~~~
untilHellbanned
The analogy doesn't hold up. The complexity of cancer research is vastly
greater than that of cooking. But anyway, chefs claim all the time that there
are special, non-reproducible elements to their environment and creations,
e.g., the oven, the ingredients, etc.

> none of them were able to make a dish as good

Defining "good" is the problem for both cancer research and cooking. You can
have two experts, one saying "this is good" and the other "this is bad", and
not really be able to prove who is right. Fine, maybe for cancer research you
can ultimately prove who is right, but it's not realistic in most cases given
the resource constraints of even top-flight labs.

~~~
cwyers
Well, it's not my analogy. But I think it's a good one, it just undermines the
point of the person who made it.

If I go into Momofuku, I'm looking for a good meal. A good meal, in this case,
is one that tastes good. If I'm looking at a Julia Child cookbook, I'm looking
for a good recipe. One of the criteria for a good recipe may be that it tastes
good, or that it produces a healthy meal -- there are several different
criteria you can use here. But one criterion for a good recipe is
reproducibility -- for a recipe to be good, it must contain enough
information, and be accurate enough, for me to create the dish the recipe is
for. A recipe for a tasty meal that does not list the right ingredients or
give enough detail in the steps is a bad recipe.

By the same token, an experiment that cannot be repeated is a bad experiment.
It may not be false, but its explanatory value is limited -- if a reaction can
only take place in water that's treated a certain way or has/lacks certain
minerals, then a paper that doesn't tell me that is leaving out important
information. Whether you define the point of cancer research in purely
scientific terms (to learn more about cancer) or in more pragmatic terms (to
allow us to create better cancer treatments), omitting information about the
circumstances of a test that significantly affects its result gives us less
information and is less likely to lead to better cancer treatments.

------
azernik
> It's unrealistic to think contract labs or university core facilities can
> get the same results as a highly specialized team of academic researchers,
> they say. Often a graduate student has spent years perfecting a technique
> using novel protocols, Young says.

Then they need to spend the time documenting those protocols.

My dad worked in biological research, and his attitude has always been: if you
don't write it down, you might as well not have done the work at all.
ESPECIALLY in research.

------
chris_wot
So hold on a moment. These researchers are documenting their experiments so
badly that they can't find the actual procedures they used to get their
results? And now they are tracking down old postdocs and lab technicians just
to pick their brains about what they actually did?!?

How the heck did this stuff get through peer review? Surely I'm missing
something critical?

~~~
infamouscow
Academic peer review has nothing to do with quality control and everything to
do with maintaining the status quo. Don't take my word for it, read Retraction
Watch: [http://retractionwatch.com/](http://retractionwatch.com/)

------
lettergram
Interesting... Over the past year I've been toying with the idea of making an
organization/company/website which would automatically direct tax dollars to
research. The idea being, you could maximize the amount you write off in taxes
and donate it to the research you desire (i.e. more NASA, fewer children
killed around the globe).

The idea stemmed from the fact that people want research to be public AND
reproducible. The funds would go directly to research groups, and as an
incentive, reproducibility would carry bounties based on what people were
willing to donate. Because virtually every research group with public research
is supported by a non-profit, no one loses additional money, but more funds go
towards public-interest research GROUPS, not organizations with bureaucracy.

Somewhat off topic, but this seems another reason for me to start the project.

------
x0054
I tend to agree that biology papers often lack proper documentation of
procedures and methodologies, and this is a wonderful effort to reproduce some
of the key experiments. That being said, I think it's also very important to
look at the quality and qualifications of the labs doing the reproductions.

I don't have any direct link to cancer research, so I can't speak with
authority on the subject, but I have been involved in the past with a company
working in the Preimplantation Genetic Diagnosis field.

The basics of their procedure: create one or more human embryos via IVF,
incubate the embryos for up to 6 days, then either freeze them or transplant
them into the prospective mother. On day 3 or 5 of incubation the embryo is
biopsied, and the genetic material is tested to make sure there are no
aneuploidy defects. We were also able to test for some other types of genetic
abnormalities. This is for people who are having problems becoming pregnant.

In any case, sometime in the mid-2000s there were 3 papers published in Europe
claiming that performing a biopsy on Day 3 is extremely detrimental to the
embryo, and their conclusion was that Day 3 PGD should not be performed. The
experiments were conducted by people who were unskilled in micromanipulation.

They did follow proper protocols, and I am sure they did their best to
replicate proper procedures. But micro manipulation is as much skill as it is
knowledge. For instance, I can write a detailed procedure on how to shoot a
compound bow, and you can follow that procedure exactly. But, without
practice, you are not going to hit the bullseye on the first try.

Because we were in the business of providing services to doctors, not
publishing papers, we constantly tracked our embryo mortality rates, birth
rates, and accuracy of testing. The better our results were, the more business
we would get. And we couldn't fake the results, because the clinics ordering
the tests were the ones recording all of those statistics for us.

Anyway, long story short: none of our data agreed with the papers claiming
that Day 3 biopsy was detrimental to the embryo. Quite the contrary, in fact:
many of our statistics suggested that Day 3 biopsy with Day 4 or Day 5
transfer would result in better implantation rates. But the papers were
published, and referenced, and then it became "common knowledge" that Day 3
biopsy is bad, and the medical industry moved on to Day 5 biopsy and embryo
cryopreservation, as did the company I worked with.

To the company I worked for it's all the same; money is money. Day 3 or Day 5
biopsy, they make money either way. But the patients are now more limited.
From the stats we have seen, it doesn't look like Day 4 or 5 biopsy is worse
for the embryo, but being frozen isn't a walk in the park, and with Day 5
biopsy you have to freeze the embryo in order to allow time for the test
results to come back.

Anyway, that's my 2 cents. Reproducibility is important, but I think it's just
as important to change the incentives of those who publish papers. If your
goal is to be published, then of course your research will suffer. It's the
publish-or-perish mentality in academia that is the problem, I think.

~~~
danieltillett
Why didn't your company publish their data? If Day 3 biopsy is better and you
have the data to show it, get it out. This is not some meaningless result -
this is a matter of life and death.

~~~
x0054
Because things in medical science function a lot differently from the rest of
science. Take a look at this video to see what I mean:
[https://www.youtube.com/watch?v=VArT6Kj_x_8](https://www.youtube.com/watch?v=VArT6Kj_x_8)

------
platform
I agree with the overall sentiment: the scientific community working on cancer
cures is failing us, the patients and their families.

And they are failing us because of some fundamental gaps in how the research,
and the subsequent review/dissemination/presentation of findings, is done. I
suspect there are multiple failures in the process. The standards of
scientific proof and repeatability used by mathematicians, physicists, and
chemists are not followed.

The net result is the following disappointing statistic:

"... In 1971, President Nixon and Congress declared war on cancer. Since then,
the federal government has spent well over $105 billion on the effort (Kolata
2009b). ... Gina Kolata pointed out in The New York Times that the cancer
death rate, adjusted for the size and age of the population, has decreased by
only 5 percent since 1950 (Kolata 2009a)." [1]

And this was just the US federal government's investment, not counting private
donations and private-company research. Today the federal investment is about
$5 billion annually [3]. I do not mean to sound totally discouraged, as the
screenings have clearly helped many people detect cancers before they
metastasized, and I would say the results show that that part of the research
is working well.

However, for the cancers that can rarely be detected before they spread (e.g.
pancreatic cancer and others), the investment our country and other societies
have put in simply has not paid off.

What worries me is that our research quality gates are not able to improve the
quality of the underlying research.

And with my 'management hat' on, I am reaching for this quote attributed to
Einstein:

"Insanity: doing the same thing over and over again and expecting different
results."

The OP article is not the first to point at the lack of reproducible results,
and it's not just cancer research: "... But it may also be due to current
state of science. Scientists themselves are becoming increasingly concerned
about the unreliability – that is, the lack of reproducibility — of many
experimental or observational results. ..." [2]

There needs to be a bit of a revolution in cancer research and in the way
money is allocated to it. Clearly the current model does not work, and it
likely encourages pseudoscience to prosper.

[1]
[http://www.csicop.org/si/show/war_on_cancer_a_progress_repor...](http://www.csicop.org/si/show/war_on_cancer_a_progress_report_for_skeptics/)

[2] [http://www.forbes.com/sites/henrymiller/2014/01/08/the-
troub...](http://www.forbes.com/sites/henrymiller/2014/01/08/the-trouble-with-
scientific-research-today-a-lot-thats-published-is-junk/)

[3][http://blogs.reuters.com/stories-id-like-to-
see/2014/09/09/t...](http://blogs.reuters.com/stories-id-like-to-
see/2014/09/09/the-money-spent-in-fighting-cancer-and-alibabas-risk-factor/)

------
anonbanker
_the group clarified that they did not want to replicate the 30 or so
experiments in the Cell paper, but just four described in a single key figure.
And those experiments would be performed not by another academic lab working
in the same area, but by an unnamed contract research organization._

sounds like someone wants to quietly weaponize this.

