
Combating bad science - mathattack
http://www.economist.com/news/science-and-technology/21598944-sloppy-researchers-beware-new-institute-has-you-its-sights-metaphysicians
======
pavpanchekha
This is an incredibly important problem that Ioannidis is working on. Not only
could it increase our confidence in scientific results, it could also help
institutionalize a number of practices (reproducibility, publishing negative
results, post-publication review) that we know to be good ideas.

But it is worth taking a step back to recognize what a brilliant process
science is. After all, even with such waste, such bias, and such low rates of
reproduction, science still gives us amazing insights into the world, enriches
our understanding of the universe and our place in it, and of course births
incredible technology. Though fraud exists, though scientists are flawed
humans, though funding agencies can be conservative or wrong-headed, science
still chugs along. It's so robust, so powerful, and yet so simple!

It's truly amazing to live in our world.

~~~
return0
The problem is that, for many scientists, science is the business of
persuading reviewers and editors, publishing positive results on a regular
basis, and spending the rest of the time writing grant proposals.

This has to do with the publishing process itself, and the poor peer review
(closed, sloppy, even for "respectable" journals) of the current publishing
system. Science nowadays generates large amounts of knowledge, not all of
which is useful, but we lack even rudimentary tools to catalog and analyze
that information.

Very few people have suggested ways to catalog science that would usefully
inform future research [1,2].

[1] [http://www.kurzweilai.net/new-tools-to-manage-information-overload-threatening-neuroscience](http://www.kurzweilai.net/new-tools-to-manage-information-overload-threatening-neuroscience)

[2] [http://www.silvalab.com.cnchost.com/silvapapers/S2Neuron2013.pdf](http://www.silvalab.com.cnchost.com/silvapapers/S2Neuron2013.pdf)

~~~
fnl
I could not agree more with your description of what life after a PhD looks
like. The only other workday activities you are missing are probably (a)
travel, meetings, and conferences [lots] and (b) reading papers
[hundreds/year].

However, I fail to see how "research maps" would improve how we catalog and
analyze information. What is shown in your ref [1] is a regulatory network.
Such models have been used for years to drive research into diseases ranging
from cancer to neurodegeneration. And, at least in the biomedical sciences,
there is a very vibrant community dedicated to that very job, the
"bio-curators" [[http://www.biocurator.org/](http://www.biocurator.org/)],
who directly or indirectly fill (relational) bio-repositories. "Research
maps" like STRING, Reactome, PIR, BioCyc, or KEGG, to name a few, have been
around for quite a while, too. So I think the claim in [1] that Silva has
"invented" research maps is quite a PR exaggeration, as this could simply be
described as "a Reactome limited to neuroscientists".

~~~
return0
Agreed, it's not a better solution; it's just that there is little talk of
this in general. Paywalled science plays a big role here: it makes it
basically impossible to gather even paper titles for further analysis.

------
fnl
I am happy to have seen Ioannidis' talk on this very matter a while ago - very
inspiring indeed.

However, I think articles like the one in The Economist often lead
"outsiders" to the opinion that much of current biomedical science is flawed.
It says that "only" 10 out of 13 studies have been shown to be reproducible,
and while even three bad results are not good, it is not clear what was wrong
with the remaining three. It might be very trivial issues, or it might be
fraud; who knows. Furthermore, it means that at least 3/4 of those papers
were fine. Last, the reproducibility results for the 50 cancer papers are
still outstanding.

So while I do agree there is an issue at hand, it is a bit like Apple vs.
Windows: consumers of the former brand are used to high quality, so a single
bug in an Apple soft- or hardware product quickly fills news outlets around
the world, while nobody gives a damn about even the most critical issues in
Windows, because its customers are used to them. While it is important to
correct the (ab)use of biomedical statistics, I think the way this is being
presented to outsiders tends a bit towards sensationalism.

I do agree with Ioannidis that our papers should get more in-depth reviews,
(not only) of the statistics in them. But the problem is that we need to
publish 2-3 papers per year, each of which in turn needs 3-5 reviewers, so
everybody has to do about 10-15 reviews per year. And by "doing a review" I
do not mean just an "intensive" reading of the paper in half a day, but
actually looking at the data and methods and spending some days on it. I am
not sure we have the time for that, and I believe the reason is this madness
of having to publish several papers per year just to survive. I.e., the real
problem is our "publish or perish" system, nothing else. All this means it
can be better to have dozens of junk papers than one good one, at least if
you do not manage to get that one into the very top journals (Nature,
Science, Cell, etc.).
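As a quick sanity check, the reviewer-load arithmetic above can be sketched out; the input figures (2-3 papers per scientist per year, 3-5 reviewers per paper) are taken from the comment itself, not from any measured data:

```python
# Back-of-the-envelope reviewer load. Assumed inputs (from the comment
# above, not measured data): 2-3 papers per scientist per year, and
# 3-5 reviewers needed per paper.
papers_low, papers_high = 2, 3    # papers published per scientist per year
reviews_low, reviews_high = 3, 5  # reviewers needed per paper

# Every review a paper demands must be supplied by some scientist, so the
# per-capita load is simply papers/year times reviewers/paper.
load_low = papers_low * reviews_low     # 2 * 3 = 6
load_high = papers_high * reviews_high  # 3 * 5 = 15
load_mid = 2.5 * 4                      # midpoints: 2.5 * 4 = 10

print(f"reviews per scientist per year: {load_low}-{load_high} (~{load_mid:g} typical)")
# -> reviews per scientist per year: 6-15 (~10 typical)
```

At a few days per serious review, even the low end of that range consumes several working weeks per year, which is the comment's point.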

------
renox
> what a brilliant process science is.

Compared to what? Religion? Well, duh.

> After all, even with such waste, such bias, and such low rates of
> reproduction, science still gives us amazing insights into the world,
> enriches our understanding of the universe and our place in it

The low rates of reproduction are in the social sciences, drug research, etc,
where the "insight into the world" comes from physic, astrology, etc.

~~~
fnl
> the "insight into the world" comes from physic, astrology, etc.

Then I hope we live in two different worlds - preferring astrology over
religion will not get us very far, I guess... (just cracking some balls, but
it's too funny a typo to resist)

------
mattfenwick
Interesting initiative. For those interested in finding out more about
reproducibility in science, check out David Donoho and Roger D. Peng as well
(and of course, Ioannidis has lots of other interesting publications in this
area).

One of the outcomes I am hoping for from these efforts is that more
scientists realize that reproducibility is not just a sideshow that is nice
to achieve for some abstract reason. Rather, reproducibility is the core of
science, and __reproducibility is the value that a scientist provides__.

Based entirely on my personal experience, it's depressing how little other
scientists even understand the problem (much less care about it). It's just
not on anybody's radar. I'm often told that it's a waste of time and not
worth bothering about because people have more important things to do. (This
is at the same time that we [in our specific field] routinely throw out
analysis results that took weeks or months to prepare, because they couldn't
be reproduced.)

------
af3
NSF/NIH grants? :D

