

Yes, Clinical Trials Work - jtleek
http://simplystatistics.org/2013/07/15/yes-clinical-trials-work/

======
zekenie
A great book on RCTs is Ben Goldacre's Bad Science
([http://www.badscience.net/](http://www.badscience.net/))

He points out that some of the first recorded clinical trials appear in the
bible. I couldn't find his quote, but I found an NEJM piece that references
the same thing, although I think it's in a nutrition debate.

"The first published report of a clinical trial has biblical origins. In the
Book of Daniel, reference is made to the unwillingness of the Israelite
Daniel to accept the diet offered by the Babylonian king Nebuchadnezzar. The
king's official had put a steward in charge of Daniel and his three friends
(Shadrach, Meshach, and Abednego):"

"Daniel said to the steward . . . “Test your servants for ten days; let us be
given vegetables to eat and water to drink. Then let our appearance and the
appearance of the youths who eat the king's rich food be observed by you, and
according to what you see deal with your servants.” So he hearkened to them in
this matter, and tested them for ten days. At the end of ten days it was seen
that they were better in appearance and fatter in flesh than all the youths
who ate the king's rich food. So the steward took away their rich food and the
wine they were to drink and gave them vegetables."

([http://www.nejm.org/doi/full/10.1056/NEJM200301023480120](http://www.nejm.org/doi/full/10.1056/NEJM200301023480120))

~~~
Renaud
Maybe I'm mistaken, but the historical bias of fat = healthy seems to be
visible there. It's doubtful that by eating only vegetables they could be
'fatter' than those who drank and ate rich food. It's obvious the writers were
trying to make a point, but I'm not persuaded of the accuracy of the findings.

~~~
zekenie
Yeah, I don't really care one way or the other about the context. I care more
about the fact that there was a clinical trial thousands of years ago, before
modern concepts of statistics... I like that it can all be boiled down to:
split people into two groups at random, do something to one group, do nothing
to the other, and measure the results.
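That boiled-down recipe fits in a few lines of Python. A minimal sketch with
made-up numbers (nothing here is real trial data - the "treatment" just adds a
fixed amount to a toy baseline score):

```python
import random
import statistics

def run_trial(participants, treat, measure):
    """Randomly split participants in two, treat one group,
    do nothing to the other, and compare mean outcomes."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    treated_mean = statistics.mean(measure(treat(p)) for p in treatment)
    control_mean = statistics.mean(measure(p) for p in control)
    return treated_mean, control_mean

# Toy data: each "participant" is a baseline score; the hypothetical
# treatment adds 5 points to it.
random.seed(42)
people = [random.gauss(50, 10) for _ in range(200)]
treated_mean, control_mean = run_trial(people, lambda p: p + 5, lambda p: p)
print(treated_mean - control_mean)  # close to the true effect of 5
```

The randomization is what does the work: it makes the two groups comparable on
everything you didn't think to measure.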

------
jpdoctor
I think the blog post missed the most important point of the NYT piece: _Too
often, he says, trials are against “a straw-man comparator” like a placebo
rather than a competing drug. So the studies don’t really help us understand
which treatments for a disease work best._

The _standard_ is usually a placebo. This is done when you are not even sure
the drug will have an effect.

The _standard_ should be a competing drug; otherwise you have no measure of
whether the drug under trial is better than current drugs.

The financial incentives are not present for any of the parties (including the
regulators) to use the higher bar.

~~~
potatolicious
I wonder if this will have unintended economic consequences. Clinical trials
are astonishingly expensive - if we required all players to clinically test
against competitors (or just the current leading competitor), it seems this
would create a perverse incentive to be first to market. Not only would you
rake in the early profits, but you'd also significantly raise the bar for
anyone trying to unseat you.

I'm not convinced this will actually be the case, but it's a possible outcome
worth thinking about.

One thing I'm wondering: if all clinical studies are quantified via efficacy
vs. a fixed placebo, shouldn't that make the results comparable? If Drug A is
200% more effective than a placebo, and Drug B is 300% more effective, does
that not suggest that Drug B > Drug A?

~~~
refurb
Yes, it is possible to compare two placebo controlled trials and get a rough
estimate of the relative efficacy of the two drugs. Many of the prescribing
decisions that physicians make are based on these indirect comparisons.

However, it's only a rough estimate. Even if the patient populations are
nearly identical, you can often see different outcomes.

To give you an example: Crohn's disease is an autoimmune disease of the large
(and sometimes small) intestine. If you look at the clinical trials for the
biologics used to treat the condition, you'll see remarkably varied outcomes,
even in the placebo arm.

In other words, if drug A shows 50% vs. 20% efficacy vs. placebo and drug B
showed 70% vs. 40% efficacy vs. placebo, which is the more efficacious drug?
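Taking those hypothetical numbers at face value, the answer even depends on
which effect measure you pick - a quick Python check:

```python
def effect_measures(p_drug, p_placebo):
    """Three common ways to compare a treatment arm to its placebo arm."""
    risk_difference = p_drug - p_placebo
    relative_risk = p_drug / p_placebo
    odds_ratio = (p_drug / (1 - p_drug)) / (p_placebo / (1 - p_placebo))
    return risk_difference, relative_risk, odds_ratio

drug_a = effect_measures(0.50, 0.20)  # 50% vs. 20%
drug_b = effect_measures(0.70, 0.40)  # 70% vs. 40%

print([round(x, 2) for x in drug_a])  # [0.3, 2.5, 4.0]
print([round(x, 2) for x in drug_b])  # [0.3, 1.75, 3.5]
```

On the absolute scale the two drugs tie (30 points each); on the relative
scales drug A looks better. Cross-trial comparisons inherit that ambiguity on
top of the population differences.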

~~~
roc
> _" drug A show 50% vs. 20% efficacy vs. placebo and drug B showed 70% vs.
> 40% efficacy vs. placebo"_

Which raises a further (or more basic) confounding factor: the placebo effect
_isn't_ fixed.

It _does_ vary between trials and even appears to be steadily _increasing_ in
potency over time. [1]

So first-to-market drugs have an added advantage when naively considering
"improvement vs placebo" -- as their tests were run years ago, when the
placebo effect itself was a weaker opponent.

[1]
[http://www.wired.com/medtech/drugs/magazine/17-09/ff_placebo...](http://www.wired.com/medtech/drugs/magazine/17-09/ff_placebo_effect?currentPage=all)

------
ronaldx
This defensive blog post perhaps misses a crucial point that is made by the
NYT through a haze of lay-person journalism.

By placing the focus on an 'average' patient, there is a danger of missing
drugs that work effectively only in certain sub-populations while having
costs in other sub-populations (as drugs generally do in practice).

"Some who take Avastin significantly beat the average" is probably indeed
worth noting because the naive RCT may be disguising significant effects of
clinical importance.

~~~
cowsandmilk
The article is completely misleading in this regard. Every modern clinical
trial looks for these sub-populations. The article even states:

    Sixty percent of the new drugs in the works at
    Genentech/Roche are being developed with a companion
    diagnostic test to identify the patients who are most
    likely to benefit.

That said, it is extremely common for clinical trials that follow up on these
sub-population observations to find them to be statistical anomalies. When you
have 100 ways to subdivide your population, you expect some anomalies at a
typical 5% significance level.
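That expectation is easy to simulate: run 100 subgroup comparisons on pure
noise (no true effect anywhere) and count how many clear the 5% bar. A rough
sketch, assuming a simple two-arm z-test:

```python
import math
import random
import statistics

random.seed(0)

def subgroup_p_value(n=50):
    """Two-sided z-test comparing two arms drawn from the SAME
    distribution, i.e. there is no real treatment effect."""
    treated = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    z = (statistics.mean(treated) - statistics.mean(control)) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 subgroup analyses of noise: about 5 will look "significant".
false_positives = sum(subgroup_p_value() < 0.05 for _ in range(100))
print(false_positives)  # ~5 expected, purely by chance
```

Which is why a subgroup signal is treated as hypothesis-generating until a
follow-up trial pre-specifies it.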

The article points to I-SPY 2 as the future of clinical trials, making it
sound like Bayesian analysis is novel to this one trial. As is alluded to in
what you call the "defensive blog post", it is now standard practice in
clinical trial design at most pharma companies. These
companies don't like to throw 9 figures down the drain and they recognize that
adaptive trial designs can get them statistical significance at lower cost
while determining optimal treatment.

~~~
ronaldx
I call it "defensive" because the answer is _not_ simply "yes" and I think
it's a poor position to view clinical trials uncritically.

Clinical trials may indeed work, but it's absolutely not clear that they are
optimally designed.

Although few people in this thread are experts, there is a plethora of
comments illustrating the impossibility of perfect clinical trial design -
there are always improvements that could be made and subjective decisions
that have to be taken.

The point that you make about pharma companies being profit-making companies
could equally be phrased as a very strong criticism: that any pharma clinical
trial design is likely to be heavily biased by their desire to sell their most
profitable ("optimal") products.

------
szx
What I found most worrisome about the NYT piece was the idea that _"the
payoff for a successful Phase 3 trial can be so enormous that drug makers
will often roll the dice"_.

Now, IANAStatistician, but from what I understand there's an inherent (small)
chance of any given trial showing a drug is effective even when it isn't.
Wouldn't that mean that, with such an incentive to test unpromising drugs,
some would get through by chance and reach the market?
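Back-of-the-envelope, the worry is easy to quantify (these counts are
invented for illustration, assuming the conventional 2.5% one-sided
false-positive rate per pivotal trial):

```python
# Invented numbers for illustration: alpha is the per-trial chance a
# truly ineffective drug produces a "positive" result; n_drugs is how
# many ineffective drugs sponsors push into Phase 3 anyway.
alpha = 0.025
n_drugs = 1000

expected_one_trial = n_drugs * alpha        # one positive trial required
expected_two_trials = n_drugs * alpha ** 2  # two independent positives required

print(round(expected_one_trial, 2))   # 25.0
print(round(expected_two_trials, 3))  # 0.625
```

Requiring two independent positive trials, as regulators often do, squares
the false-positive rate - one of the main guards against exactly this
incentive.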

