
Most Published Research Findings Are False (2005) - known
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/
======
kerkeslager
I think the solution to this is that science publication needs to be done like
science itself. Just as scientists commit to doing experiments without knowing
what the results of the experiments will be, publishers need to commit to
publishing experiments without knowing what the results of the experiments
will be.

Publishers need to be involved earlier in the process. Instead of submitting a
completed research finding, scientists should apply to publish research before
doing it, and if the topic of research is interesting to the publisher, both
scientist and publisher should commit to publish in that journal, no matter
what the results are.

Additionally, publishers should commit to publishing only something like 1/3
research into new topics, with the other 2/3 being attempts to reproduce
reproduce previous interesting results. I don't mean a slavish repetition of
existing experiments, but taking a critical eye toward testing the same
hypotheses with different (hopefully better) methodology.

------
yesenadam
_Inside the Fake Science Factory_ (2018) is an amazing DEFCON talk, by 3
people who stumbled on and investigated an unbelievably huge worldwide network
(400,000 scientists involved) of fake-scientific conferences, journals,
websites raking in money and enabling people to make claims that their
products are scientifically proven. The presenters got hackers to track down
the people behind one of the largest organizations and to analyze the huge
numbers of conferences/papers. They also presented and published nonsensical
talks (on graphs, and on bees and cancer) to test what peer review there is...
A must-watch. Very funny, if depressing.

[https://www.youtube.com/watch?v=ras_VYgA77Q](https://www.youtube.com/watch?v=ras_VYgA77Q)

------
samuell
A Nature survey from 2016 of 1,500 scientists, revealing widespread consensus
that there is a reproducibility crisis in science:

[https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970)

~~~
jshowa3
I generally agree with this. The publication game tends to focus on exciting
new discoveries way more than reproduced ones. I think this is because
universities push you to focus on "original ideas" when publishing theses
instead of reproducing other ideas because the former is "harder" than the
latter.

------
Certhas
Most code is buggy despite passing all tests.

Publication is a sign of persistence, plausibility and a minimal quality
check. Peer review improves articles, it does not guarantee correctness.

That's also why one should be extremely careful about things that are not peer
reviewed. Not because the bar is so high, but because it's so low.

~~~
kolbe
Just because the bar is low doesn't mean its height is in any way related to
its discernment of quality. Say, for example, the bar was simply based on the
institution that the author is employed by. That may be a low bar in that
anyone from Harvard can get a paper published, but it also doesn't actually
tell us anything about the paper's validity. Conversely, I wouldn't be
skeptical of a good Rush University paper simply because its authors weren't
prestigious enough for Nature to pay attention to them.

~~~
Certhas
I was talking about getting your paper peer reviewed, not placing it in a
particular journal. If you want to get into PLOS One I don't think your
university matters very much.

------
buboard
Science doesn't seem to have fixed itself since. At this point it is prudent
to shift the public's perception from "research is true" to "research is
valid". To the extent that public policy is dictated by research, it should be
based not on journal-published research, but perhaps on meta-analyses or
third-party research-on-research.

------
chmike
As stated, this research finding may be false too! ;)

------
simplystats2
A couple of other articles of interest:

An empirical estimate:
[https://academic.oup.com/biostatistics/article/15/1/1/244509](https://academic.oup.com/biostatistics/article/15/1/1/244509)

And a review with some context:
[https://www.biorxiv.org/content/10.1101/050575v1](https://www.biorxiv.org/content/10.1101/050575v1)

------
rthomas6
Beware the man of one study:
[https://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/](https://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/)

------
johannesbeil
Do you think a Github for science would help?

~~~
kerkeslager
If you don't have a clear mechanism by which a startup could solve this
problem, I suspect the result would be the same structures that already exist,
but with better marketing and a Web 2.0 interface. If anything, this would
make the problem worse, because it creates a company that now has a vested
interest in keeping the status quo because it's part of the status quo, and
that company will be manipulating politics and public opinion with its
marketing.

A second problem with this sort of company is that it often doesn't really
understand the industry it's "disrupting", so they end up not solving a lot of
the problems which older solutions solved.

At a more fundamental level, I think that a company is always going to be
motivated to make money rather than solve a problem. Even if intentions start
off well, the people in the company will believe that the company is good and
needed, so they'll make choices that give up a little of the original purpose
of the company to make the company survive, because if the company doesn't
survive then it can't do any good, right? And after a bunch of one-degree
deviations from its original direction, the company is going in a completely
different direction than its original purpose and isn't even geared toward
solving the problem any more.

Sure, "We're going to save the world" is great marketing, but at this point I
simply don't ever believe that a startup is a solution to a major problem like
this.

~~~
kiba
Why does everything need to be a startup?

~~~
kerkeslager
It doesn't, but that's what I took "GitHub for science" to mean.

------
lainon
(2005)

------
acqq
Needs: [2005]

------
chrisseaton
Including this one?

~~~
vertline3
Can you elaborate as to why? Snark is not really the goal here.

~~~
YorkshireSeason
It's a perfectly valid, and indeed important, question.

Self-application is an important form of scientific methodology. Think e.g. of
programming languages development: one of the first things you do is bootstrap
a compiler.

~~~
sporkologist
You're conflating science publishing philosophy with standard software
development?

~~~
YorkshireSeason
Programming language development is not standard software development.

PL research is a discipline with no really feasible empirical methods for
robust and replicable evaluation of programming language ideas. Using self-
application is a test of internal coherence -- if your new language is not
good at writing a compiler, then maybe it isn't such a good idea.

Self-application is a well-known scientific methodology. Would you trust a
physical theory that predicted the impossibility of physicists?

------
aceon48
I learned this in middle school during the Science Fair. My experiment didn't
work. Up against the deadline and not wanting a 0, I simply made up the data.
I got a nice big A. Probably exactly what happens in the real world too.

~~~
droussel
If you had gotten a 0, then your science teachers failed you, I'd say. A
failed experiment is just as valid a result as a successful one if the goal is
to gather data.

