

Clinical trials: Unfavorable results often go unpublished - crocus
http://www.scienceblog.com/cms/clinical-trials-unfavorable-results-often-go-unpublished-18271.html

======
bk
This issue also concerns studies with non-results in all scientific fields.
The pressure to produce "results" causes two types of problems:

1. Massive fudging of data to achieve (statistical) significance.

2. Inefficiencies due to researchers repeating failed experiments because
they can't learn from the unpublished non-results of others.

It's fundamentally a problem of human psychology (reputation/face saving), and
of organizational design, which sets up the rewards context (universities,
tenure process, journals, etc.). The system is pretty outdated and broken for
the modern pace of information production, imho.

~~~
ricree
To be fair, though, people are aware of this issue and working against it. For
example, there's the Journal of Negative Results in BioMedicine
(<http://www.jnrbm.com/>), which provides an avenue to publish these sorts of
negative results. That said, I agree that this is definitely a major issue for
research in most fields.

------
TomOfTTB
If I had to pick one thing in our society that worries me the most these days,
it would be this mentality of "what I believe is more important than the truth,
so that makes it OK to bend the truth to fit my beliefs." I really began to
notice it during this election season and even made a post on my blog a while
back: <http://www.tomstechblog.com/post/Why-I-Dont-Trust-Polls-(and-What-We-Should-Do-About-It).aspx>

(please excuse the inadvertent plug, I don't think there's any way to post
images here)

To me, this news about medical studies represents the same mentality, but at a
much more dangerous level: people willing to twist medical facts in order to
support the conclusion they went in trying to prove.

I think our culture needs to really look at the value we put on "truth" and
start judging those who try to hide it much more harshly.

~~~
tokipin
<http://en.wikipedia.org/wiki/Truthiness>

------
jballanc
First of all, the title is misleading. Negative results are not the same thing
as unfavorable results. Second, as a person involved in biomedical research, I
am very familiar with the bias toward publishing positive results, and leaving
the negative results buried in a lab notebook somewhere. There are two root
causes for this:

1. Funding agencies reward positive results. Of course, the biggest funding
agency in the U.S. is the U.S. gov't. The gov't must answer to the people, and
the people only want to hear about positive results. Show some interest or at
least concern for negative findings (and learn, or teach kids in school, why
negative findings are important), and you'll find more scientists publishing
negative findings.

2. Funding, especially in the U.S., is a competition. Why would you tell your
competitors all the things that didn't work? Why give them that strategic
advantage? Would you expect Google to tell Yahoo which search algorithms don't
work? Reward scientists based on consistent good work, and not based on their
ability to beat out competitors, and you'll find more scientists publishing
negative findings.

~~~
gojomo
'Unfavorable' may be a loaded title term, but properly understood it captures
the essential point of both the article and your own expert observations:
results that are 'unfavorable' _for a researcher's career_ go unpublished.

------
ivankirigin
This is the worst part about academic publishing too. Researchers should keep
blogs to record results, and the RSS feed is the publication.

~~~
tjpick
> The reasons most commonly given for not publishing were that investigators
> thought their findings were not interesting enough or did not have time.

Not publishing to a blog is the same as not publishing elsewhere. It's not the
medium that's the problem.

~~~
ivankirigin
Blogs build reputation through good work. Lowering the threshold to publish
will make people more likely to publish.

It's a lot of work to publish, and publications won't accept negative results
as interesting.

By self publishing on an accessible medium, negative results are more likely
to be shown. Not every blog post would even be about results, but about the
process.

~~~
redrobot5050
I eagerly await the fate of a scientist whose Google search results turn up
nothing but an entire blog of failed results...

------
lacker
I can't trust this article. If they had done this study and found unfavorable
results are just as likely to be published, their study would be much more
boring, and it would not have been published. ;-)

------
etal
Good steps, at least for pharma:

- Require better registration of clinical trials and automatic aggregation of
results, as part of medical regulation

- Make the clinical data submitted to the FDA (or equivalent agencies)
public, or at least accessible to researchers. Currently, the data sent to
journals is _not_ the same set previously submitted to the FDA; it's been
touched up to make it more suitable for publication. Another paper found that
overall these papers show slightly more positive results than the
corresponding FDA data does. (Dunno how they managed to get that data set.)
Authors have their own specific justifications for this, but the overall trend
is bad.

------
tokenadult
Peter Norvig has a good article about the implications of this and other
research problems.

<http://norvig.com/experiment-design.html>

------
trapper
Personally, I find the problem is the lack of raw data. We need a much more
transparent process in science. Imagine a website that allowed the following workflow:

1. Upload hypothesis

2. Describe experiment

3. Add datasets as they come in

4. Analyse data

5. Publish

This way, if there was a new amazing result, the first thing you would do is
go through the raw data to check. Re-test the statistics. You could
automatically look for signs of fraudulent data.
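One concrete (hypothetical) version of such an automated check — not something the comment itself specifies — is comparing the first-digit frequencies of an uploaded dataset against Benford's law, which many naturally occurring datasets follow and fabricated numbers often don't. A minimal sketch, with made-up data:

```python
import math
from collections import Counter

def first_digit(x):
    """Return the leading nonzero digit of a nonzero number."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. '5.2000000000e+01'
    return int(s[0])

def benford_chi2(values):
    """Chi-squared statistic of observed first digits vs. Benford's law.

    A large value means the leading digits deviate strongly from the
    log-uniform distribution Benford's law predicts -- one possible
    red flag worth a closer look (never proof of fraud by itself)."""
    counts = Counter(first_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# Suspicious-looking data (every leading digit is 5) vs. a geometric
# series, which tracks Benford's law closely.
suspicious = [5.1, 5.7, 52.0, 55.3, 5.9, 58.2, 5.5, 51.1] * 25
natural = [1.05 ** k for k in range(200)]
assert benford_chi2(suspicious) > benford_chi2(natural)
```

This is only one heuristic among many; a real pipeline would also re-run the reported statistics directly against the raw datasets.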

------
tocomment
I wonder if the double-blind model can be applied right up through the
publishing stage? Or perhaps journals could require a 50/50 split of
positive and negative results.

~~~
cninja
I think the problem is that we consider a result as either "positive" or
"negative". All correctly performed experiments add a positive amount of
knowledge. Humans tend to classify unexpected results as negative, but that is
not necessarily so.

~~~
etal
That's not what negative means here. For these studies the conclusion is a
statistical test -- accept or reject the null hypothesis. If a drug doesn't
give results significantly different from a placebo, that's "negative". More
to the point, when a researcher collects a pile of data, does some analysis,
and can't see a damn thing either way, then _either_ the experiment was flawed
to begin with, _or_ that absence of meaning is a real, useful result. Maybe
the sample size just wasn't large enough to show a small but significant
trend. It takes a fair amount of confidence in one's skills to rule out the
first possibility and announce loudly that there's nothing to be seen further
down the path you took.
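The sample-size point can be made concrete with a quick back-of-the-envelope calculation (illustrative numbers only, not from the article): the same small drug-vs-placebo difference that reads as a "negative" result with 20 patients per arm clears the usual 5% significance cutoff with 400 per arm.

```python
import math

def z_statistic(mean_a, mean_b, sd, n):
    """Two-sample z statistic, assuming equal group sizes and equal SDs."""
    return (mean_a - mean_b) / math.sqrt(2 * sd ** 2 / n)

# Hypothetical 2-point drug-vs-placebo difference on a scale with SD 10.
z_small = z_statistic(52.0, 50.0, 10.0, 20)    # 20 patients per arm
z_large = z_statistic(52.0, 50.0, 10.0, 400)   # 400 patients per arm

# Against the usual two-sided 5% cutoff of 1.96:
assert abs(z_small) < 1.96   # "negative": can't reject the null hypothesis
assert abs(z_large) > 1.96   # same effect size, larger sample: significant
```

So an underpowered "nothing to see here" and a genuine null effect look identical on paper, which is exactly why ruling out the first possibility takes confidence.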

