
Let’s make peer review scientific - return0
http://www.nature.com/news/let-s-make-peer-review-scientific-1.20194?WT.mc_id=TWT_NatureNews
======
joelg
Shameless plug: I'm working at the MIT Media Lab on the PubPub project
([http://www.pubpub.org](http://www.pubpub.org)), a free platform for totally
open publishing designed to solve a lot of these problems:

One is peer review, which, as some have already mentioned, needs to be done in
an open, ongoing, and interactive forum. Making peer review transparent to
both parties (and the public) makes everyone more honest.

Another is the incentive of publication itself as the ultimate goal. Instead,
we need to think of documents as evolving, growing bodies of knowledge and
compilations of ongoing research. Every step of the scientific process is
important, yet most of it is flattened and compressed and lost, like most
negative results, which are ditched in search of sexy click-bait headliner
results.

Another is the role of publishers as gatekeepers and arbiters of truth. We
need a medium in which anyone can curate a journal, and in which submission,
review, and acceptance procedures are consistent and transparent.

Another is the nature of the medium itself. It's 2016, and these dead, flat,
static PDFs are functionally identical to the paper they replaced! Insert your
favorite Bret Victor/Ted Nelson rant here: we need modern, digitally-native
documents that are as rich as the information they contain.

Another is reproducibility. We should be able to see the code that transformed
the raw dataset, tweak it, and publish our own fork, while automatically
keeping the thread of attribution (a toy sketch follows at the end of this
list).

The list goes on and on...
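
To make the reproducibility point concrete, here's a toy sketch of one way
"fork with attribution" could be represented: a content-addressed chain where
each fork's id commits to its parent. To be clear, this is not PubPub's actual
data model; the field names and hashing scheme are invented for illustration.

    import hashlib
    import json

    def record(author, code, parent_id=None):
        """Create a record whose id commits to its author, code, and parent."""
        body = {"author": author, "code": code, "parent": parent_id}
        rid = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {"id": rid, **body}

    def attribution_thread(rec, index):
        """Walk parent links back to the original work."""
        authors = [rec["author"]]
        while rec["parent"] is not None:
            rec = index[rec["parent"]]
            authors.append(rec["author"])
        return authors

    original = record("alice", "df.groupby('gene').mean()")
    fork = record("bob", "df.groupby('gene').median()", parent_id=original["id"])
    index = {r["id"]: r for r in (original, fork)}
    print(attribution_thread(fork, index))  # ['bob', 'alice']

Because each record's id commits to its parent, the attribution thread
survives any number of forks without manual bookkeeping.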

~~~
suchsciencewow
How are you planning on addressing the fact that none of this matters in the
slightest if academic career growth/incentives/reputation specifically
revolves around publishing boring pdfs in established journals? What do I gain
by sending my work to your platform where the first question anyone in my
"target audience" will ask is "why didn't this get published in a real
journal, whats wrong with it?", and I will get essentially zero resume kudos
points for it? Keeping in mind that it took me at least many months to do the
work, I don't exactly have an abundance of papers to throw around, and my
officemate almost certainly will submit work of similar quality to a
traditional journal and reap the benefits of that.

Just saying, my decision to publish in traditional journals isn't so much a
decision as it is a requirement of the very career I am attempting to pursue.

~~~
isTravis
Great question, and one that certainly doesn't have a straightforward or
trivial answer. It's definitely more of a social challenge than a technical
one - making publishing free/open won't do anything to fix incentives on its
own.

My hunch is that change to this system will come from the outside. It's too
risky of a career decision for a tenure-track professor to start publishing on
PubPub (or any open/new system). But there are lots of people who aren't
playing that game: people doing science outside of academia, in corporate R&D,
or for the sake of education, etc.

The most important step is to show that open publishing works. If we can work
with these early adopters and show that conversations are richer, or
results more reproducible, we can start to go to universities and grant
agencies and advocate for them to require open publishing. The first day that
a university hires a professor or an agency awards a grant based on the
history of openly published work will be a turning point. I hope it will be
similar to the first time a software dev was hired for their GitHub profile,
rather than their CS degree.

Today, software companies hire on experience. A university degree can show
that, but so can major contributions to an open-source project. I hope science
can become the same. Whether you're a PhD out of a great program or a high-
school dropout who has committed her life to rigorous experimentation, your
demonstrated experience should be what you're hired on, not the list of
journals that have found it in their interest (many of them are for-profit) to
include your work.

~~~
zenogais
Perhaps offering some sort of crowdsourced funding mechanism and a reputation
system would go a long way toward correcting some of these incentives?

For example, giving authors / organizations a Bitcoin address where they can
receive funds from individuals / organizations who want to support their
research.

Also, awarding reputation to authors based on the level of peer review their
research has successfully undergone (number of peers, level of rigor, etc.),
and conversely awarding reputation and funding to those who perform peer
reviews. Allowing users to contribute to a peer review fund for individual
articles or in general.
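
As a toy sketch of the kind of scoring rule I have in mind (every weight and
tier below is made up purely to show the shape of the idea; none of this is
existing platform functionality):

    # All weights below are invented; they only illustrate the shape of the rule.
    RIGOR_WEIGHT = {"light": 1.0, "standard": 2.0, "thorough": 4.0}

    def author_reputation(reviews):
        """reviews: list of (num_peers, rigor_level) pairs for one article."""
        return sum(n * RIGOR_WEIGHT[rigor] for n, rigor in reviews)

    def reviewer_reward(rigor, fund_balance, num_reviewers):
        """Split an article's review fund evenly, then scale by review rigor."""
        share = fund_balance / max(num_reviewers, 1)
        return share * RIGOR_WEIGHT[rigor] / RIGOR_WEIGHT["thorough"]

    print(author_reputation([(3, "standard"), (5, "thorough")]))            # 26.0
    print(reviewer_reward("thorough", fund_balance=90.0, num_reviewers=3))  # 30.0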

All that to say this is very exciting and opens up a lot of new possibilities.

~~~
yayday
> For example, giving authors / organizations a Bitcoin address where they can
> receive funds from individuals / organizations who want to support their
> research.

That's a fantastic idea. Maybe we could call this "depository" of money to
conduct research something like, hmmm, what's a good word… a _grant_?

> Also, awarding reputation to authors based on the level of peer review their
> research has successfully undergone (number of peers, level of rigor, etc.),
> and conversely awarding reputation and funding to those who perform peer
> reviews.

Sounds fantastic as well! Maybe these authors could create like, a _website_
or _curriculum vitae_ where they could list their accomplishments to
establish their reputation. You know, they could have a section in their
medium of choice that could be titled something like _selected peer-reviewed
articles_ where they'll list their publications along with their coauthors
and the journal each appeared in. Maybe these journals could devise some kind
of _ranking_ to measure reputation. Maybe they could call it something like…
_amount of impact_ or maybe just _impact factor_ for short. I think this
could work really well.

> Allowing users to contribute to a peer review fund for individual articles
> or in general.

Maybe a general fund should be created to support science! Maybe a _national
science fund_ or something, governed by a so-called _national science
foundation_ that can vote scientists, engineers, and the like onto its board
to _steer_ the allocation of funding.

I really think you're onto something very good here!

~~~
apathy
> Maybe we could call this "depository" of money to conduct research something
> like, hmmm, what's a good word… a grant?

Nah, that word is already in use for stagnant allocations of academic welfare
to work on bullshit instead of transformative techniques (e.g. CAR T-cells,
which NIH refused to fund for years). Need a new word to signify "money that
is actually intended to produce results" instead of "a pension for irrelevant
tenured senior PIs to pay non-English speakers below minimum wage to work on
topics that became irrelevant a decade ago".

> Maybe they could call it something like… amount of impact or maybe just
> impact factor for short. I think this could work really well.

Ah yes, impact factor is such an amazing tool. It allows "executive"
"leadership" types to predict (very poorly, but who cares?) how many citations
a paper might receive if it survives the months or years between submission
and publication in a major journal. Trouble is, JIF is massively massaged, and
the COI that Thomson Reuters has in equitably enforcing it is ridiculous.

WARNING: Non-peer-reviewed work ahead! If you're not careful, you might have
to apply critical thinking to it!

[http://biorxiv.org/content/early/2016/07/05/062109](http://biorxiv.org/content/early/2016/07/05/062109)

> Maybe a general fund should be created to support science!

That's a great theory. Perhaps it can be as well executed as the CIHR fund
(where study section has given way to "ignore everyone who doesn't suck my
dick directly") or NSF (whose yearly funding is dwarfed by the R&D funding at
a single company). This approach is working out very well!

You know, if I didn't know better, I might think you were the sort of
researcher that fails to look at the details and just submits your most
fashionable bullshit to whatever journal at which your pals happen to be
editors. I might get the impression that you're the cancer which is killing
grant-funded science, which prizes large labs over large numbers of R01
projects, which believes that O&A is an entitlement to take out mortgages on
new buildings instead of to pay for the costs of disbursing and administering
a grant. But, since the evidence isn't before me, I won't.

It would be nice if you thought a little more carefully about what you wrote.
The devil is in the details.

~~~
kd0amg
> or NSF (whose yearly funding is dwarfed by the R&D funding at a single
> company)

If the worst thing you can say about the NSF is that they need more money,
that makes it sound like GP has come up with a nice way to allocate the
available funding towards particular research projects.

> It would be nice if you thought a little more carefully about what you
> wrote. The devil is in the details.

Details like how to get "crowdfunding" to put up enough money that
"independent scientist" can be a full time job and not just a hobby for the
odd few who somehow already have most of the needed lab facilities/equipment?

~~~
apathy
I take it you're not familiar with "crowdfunding" sources like the AACR, LLS,
ASCO, or other professional societies?

As someone who is funded by several of the above, and who noted that their
review processes were substantially less bullshit-intensive yet no less
rigorous than NIH review (which has many benefits, efficiency not among them),
I'm going to go out on a limb and suggest that it's possible.

As for the NSF, they do a good job with what they have, but what they have
is not commensurate with what we as a society could stand to spend on science.
Even NCI is a far cry from that:
[https://pbs.twimg.com/media/CmLJzKQWkAAl372.jpg:small](https://pbs.twimg.com/media/CmLJzKQWkAAl372.jpg:small)

Distributions are similar for various other avenues of funding, and it is
quite clear that the overhead & administrative costs requested by many
recipient institutions are far out of proportion to actual needs, so the impact
of the funding allocations is further reduced.

Thus it appears that a direct conduit from potential patrons to researchers
is, in fact, desirable. Otherwise, services like experiment.com would not
exist. They're not at the level of an NIH study section (duh?) but they have
consistently produced a small stream of usable results that belie their
supposed irrelevance. Once upon a time, the Royal Society existed for just
such matchmaking: find a rich patron and a promising young scientist and line
them up. You've likely noticed that many if not most major universities and
"centers of excellence" rely upon exactly this model, _supplemented with NIH
or NSF grants_, to exist. Further modularizing the model, so that an
administrative hand isn't mandated to yank out bloated "indirects" at every
turn, might not be the worst thing; alternatively, being more transparent
with said O&A requests might at least bring some of the bullshit under
control.

The public clearly wants accountability. The masses may be asses, but if we
want their money, we really ought to be transparent about what we're doing
with it.

~~~
beevai142
The difference between professional societies and crowdfunding is that
professionals, not the crowd who donate directly, decide which projects to
fund. In this sense, I do not see a great qualitative difference from
government funding agencies --- if you do, please elaborate.

EDIT: And to clarify, in the societies I know, general members do not directly
take part in grant decision processes. Rather, the decisions are made by a
small panel, possibly together with external reviewers. This is fairly
different from crowdsourcing.

~~~
apathy
It's different from crowdsourcing, but the source and sink for the funds also
tend to be more closely related. Ultimately I don't really believe that major
initiatives (e.g. P01-level grants) can be adequately reviewed by anything other
than genuine peers.

But by the same token, an exploratory study requesting $30k for field work or
sample processing could very well be evaluated by less skilled peers.
Actually, I think I'm going to try and shop this to a friend at NIH. I'll
fail, most likely, but at least I won't just be whining.

For example, pharma and big donors use the LLS review system as a "study
section lite" to hand out grants larger than a typical R01. The paperwork and
BS isn't really necessary at that level and just gets in the way. If something
like this existed for "lark" projects, inside or outside of NIH/NSF, perhaps
more diverse and potentially diversifying proposals would be worth submitting.

------
danieltillett
This issue of scientists being OK with totally non-scientific processes is all
too common. When I was an academic, my department used to make an enormous
fuss about the year-to-year jitter in student teaching evaluations. My
colleagues (all scientists) would sit around discussing why they were heroes
because their evaluation went up 10% from last year, or what they had to
change because it went down 10%. I used to just sit there thinking that if
this sort of analysis were in a paper they were reviewing, they would have
ripped the authors to shreds.

Peer review fails on all levels. It does not function as a source of quality
control (everything gets published eventually) and, even worse, it rarely
improves the quality of the paper being reviewed. I have published dozens of
papers over the years, and on only one occasion has the review process improved
the paper - in most cases the reviewers' demands made the papers worse by
forcing me to remove important information or include irrelevant details
(citing the reviewers' publications, mostly).

~~~
quantumhobbit
Peer review is broken not because the process of review is broken but because
the peers are broken. Reviewers, like all academics, have screwed-up
incentives to get their names on as many papers as possible and those papers
cited widely, regardless of the quality of the science involved. They will
always find a way to game the system to their benefit.

I recently heard about a reviewer of a friend's paper blocking publication
until the reviewer could publish his own research on the same topic first. The
whole system is just too screwed up to be fixed.

~~~
danieltillett
Yes, this sort of bad thing happens, but more in the real high-stakes game of
grant review than in publication. You can easily kill the grant of one of your
competitors by just giving it a mediocre review (without any basis, of course).
I saw this happen to my colleagues all the time: they would get a spiteful
review that killed a grant, and there was nothing they could do.

~~~
dangerlibrary
At the NIH, at least, I know that enormous effort is put into ensuring that
there are no conflicts of interest between the reviewers and those being
reviewed.

I'm not particularly surprised to hear that some places do not put in that
effort - it is a slow, painstaking process - but there are places where this
is less of a problem.

~~~
danieltillett
I am in Australia where the pool of potential grant reviewers is pretty small
for most areas. When you have a grant success rate of 15% it is all too easy
to nobble your competitors via the review process.

~~~
jsprogrammer
Why is science viewed as a competition there?

~~~
dflock
For exactly the same reasons it is everywhere in the world: because there
isn't enough funding to go around - and the grant application process is
_explicitly designed_ to be competitive.

~~~
jsprogrammer
It doesn't seem to be _everywhere_ though. I know in the US, research is often
collaborative and a single grant may fund many researchers. Those researchers
do not seem to be competing internally for funding (though, I have seen
conflict as to whose name should appear first on a publication!). I presume
the same is likely true in Australia. Why is some research able to be
collaborative while other research must (apparently) be competitive?

~~~
apathy
Note also that collaborations compete with other collaborations for program
grants, and R01s (_the_ measure of a "real" PI) are inherently not
collaborative. They are by their very nature competitive. And they are the
metric by which principal investigators are judged.

~~~
jsprogrammer
Many real PIs support collaborating researchers and students though.

~~~
apathy
Right, but students are written into the grant. Ultimately, nobody gives a
shit which post doc or student you put on a project; the assumption (often
flawed) is that they're all the same as far as the modular budget is
concerned.

If they were special, the reasoning goes, they'd have their own F32 or T32 to
work on the project.

------
apathy
Buried in the middle of this (wonderful) article is the heart of the problem
-- academia has foolishly placed its metrics into the hands of editors and
publishers, who have corrupted the living hell out of it. Ctrl-F "Cochrane"
and witness the exchange between an editor, who benefited from the status quo,
and a scholar, who did not and does not.

Academia != scholarship, and it has not been for some time. There is no longer
a good reason for traditional for-profit journals to exist. (Before someone
says it: SpringerNature likes to pretend that editorial independence is
possible, but they'll have no choice but to fire their "news" guys if their
board asks for it.)

Please recall that the entire point of the World Wide Web was to share physics
papers. The arXiv exists because of physicists (who quickly noticed that HTML
wasn't a good substitute for LaTeX when writing math-heavy papers). The
problem is not technological. It's social. And until the incentives are fixed
(the Cochrane Collaboration in Britain has gone a long, long way to address
this, and now the Wellcome Trust is going even further), nothing of any real
import will change. In the USA, NIH could make a lot of positive changes (and
in exchanges with mid- to senior executive level directors, I honestly believe
they're trying to do so). But it will take time. Academia moves with glacial
speed, when it moves at all.

------
lordnacho
Perhaps it's a question of incentives. What exactly do you get out of reading
a quite complex piece of work and offering your opinion?

The closest I can come, as a non-academic, is perhaps reading other people's
code.

It's bloody hard. You need to concentrate, and it's not like reading a
newspaper article at all. Even small errors are not easy to spot, and it's
even hard to know whether the code is structured in the way the comments say.

It makes some sense to do the exercise if I have to use the codebase.
If I'm just commenting and giving a thumbs up/down, it's quite easy to reduce
effort, come up with some generic comments, and see how the other reviewers
do. Which is a recipe for letting errors through.

~~~
jsprogrammer
Reviews shouldn't be opinion, but facts. Those facts can then be disseminated
to larger audiences. The incentive is to ensure that only factual information
is claimed in publication so that everyone else can be as informed as
possible, thereby (hopefully) allowing us all to make better decisions (and,
perhaps, this could even accelerate the above-described process).

~~~
apathy
Every time you use the word "should" you beg the question.

Why, exactly, are incentives aligned to make it so? Or is the problem that the
incentives are _not_ in fact set up to encourage the desired outcome?

Anything else is wishful thinking. People seldom do what they know to be
right; they do what is convenient, then repent. It follows that one must make
the right thing the convenient or rewarding thing if one wishes to see humans
doing it.

By and large, people are somewhat predictable. This can be used for good as
well as the more familiar evil uses.

~~~
jsprogrammer
Reviews should be facts by definition of a review (as I use it), so the
question is not begged.

The incentives are 'aligned' that way because it is the only way to benefit.
Publishing into an echo chamber (or void) is not science, by definition.

~~~
apathy
You've been out of academia for a while, I take it?

Publons (mostly uncited) are the coin of the realm. I don't usually bother
submitting a methods paper unless I know a few citations are in the tank, but
some people publish just to get their metrics up. A lot of people, really. The
more I think about it the luckier I feel not to _have_ to continually publish
jackoff papers. If I wasn't so lucky in terms of collaborators... Well, I
guess I'd GTFO of academia, really.

Lots of folks have to "show productivity" with bullshit papers that don't
matter and no one reads. Needless to say, reviewing such papers can be a
rather dull affair.

------
chmike
I'm inclined to think that an open review process using a web system with a
space for open discussion/commenting could be a good system. This wasn't
possible before the web.

The current system is binary - publish or don't - but it is now possible to
publish everything and use a rating system. This also removes the risk of
plagiarism and priority disputes.

~~~
ManlyBread
I disagree. Something similar exists on Wikipedia and it's still open to
biases and agendas; it's just a matter of who is more persistent and which
side the admins take.

~~~
randall
But you don't need the 'gatekeeper'. Imagine if you had a 'trustability'
factor that people could cryptographically sign as themselves. You could build
a 'trust web', where papers link to other papers by an author. That author's
trust ratings could go up based on the citer's evidence.

There's clearly tech waiting to be built here, which might or might not be a
startup.
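
As a rough sketch of what I mean, assuming the third-party Python
`cryptography` package for the signing part (the claim format and the one-hop
scoring rule are invented purely for illustration):

    # Requires the third-party `cryptography` package (pip install cryptography).
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    alice_key = Ed25519PrivateKey.generate()  # each author holds a keypair
    claim = b"alice cites paper:10.1234/example as sound evidence"
    signature = alice_key.sign(claim)

    # Anyone can check the citation really came from Alice's key.
    alice_key.public_key().verify(signature, claim)  # raises if forged

    def trust_score(author, citations, base_trust):
        """Naive one-hop propagation: trust flows from citers to the cited."""
        return base_trust.get(author, 0.0) + sum(
            0.5 * base_trust.get(citer, 0.0)  # damped so trust decays per hop
            for citer, cited in citations if cited == author
        )

    citations = [("alice", "bob"), ("carol", "bob")]
    print(trust_score("bob", citations, {"alice": 1.0, "carol": 0.6}))  # 0.8

A real trust web would need to handle cycles, multi-hop propagation, and
Sybil accounts, but the primitives already exist.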

~~~
yayday
There's already a "trustability" factor: it's called a name. When I search for
a new paper and find something interesting, I look at the first author, the
name of the PI, and the name of the university or laboratory associated with
this paper.

If I'm familiar with the work of the first author then I already have an idea
of how much I can trust his/her work. If I don't know this person then I look
at the coauthors and especially the PI. If the PI is a heavy hitter in my
field then there's my trustability factor right there. If I don't know the PI
but I know the laboratory or university it's affiliated with then there's my
third line of a trustability metric.

~~~
apathy
You don't consider the journal? You're a better man (or woman) than I am, if
so. Otherwise agreed, reluctantly. This implicit trust is frequently
manipulated by experienced PIs to relax standards, and in biomedical work that
is a source of numerous problems.

In an ideal world, work from an unknown author would receive the same scrutiny
as that from an established lab, and vice versa. Obviously this isn't the
case, but the further we drift from it, the more likely we become to see
sloppy results wasting everyone's time.

It can't just be about the narrative. That simply isn't science. That's
storytelling, and confusing the two has caused a great deal of harm to science
in the public eye.

~~~
yayday
I'd be lying if I said the journal didn't matter strictly speaking, but in
general as long as it's a "real" journal (as opposed to one of those obviously
fake journals I keep getting spammed by) then I don't really care.

Don't get me wrong though: just because the paper has a heavy hitter as the PI
or even as first author doesn't mean I treat the content with less scrutiny.
It's more that I know my time is extremely limited so I'm more willing to
spend more time dissecting a paper from an established first author or PI than
an unknown person (unknown also meaning that it's not a paper someone
recommended to me or a paper someone cited in another work).

It's more like, I'll _make_ time to read papers with certain authors on it
because I know that they do good work. I'll also _make_ time to read something
if it's published in Nature et al for example. But a random paper from a
random author in a random group? It's not that I don't trust this person's
work; it's more that I'm unfamiliar with that person's work, and there are
other things I could spend my time doing as well. I'll download the paper,
throw it
on my iPad, etc, and I'll get around to it eventually.

------
mariusz79
I've got a simpler solution - a scientific paper should not be considered
valid until another team replicates the findings. That would quickly get rid
of all of the fake results, plus it would weed out all of the research that
nobody really cares about. Also require that all data is shared, and that
until your results are verified or replicated by a team from another
university/country, you don't get any more funds.

~~~
Jabbles
That's not going to work for astronomy, or for the results of multi-year
studies.

~~~
mariusz79
It will, if you require that the data is shared and can be analyzed by another
group. In some cases it will slow down progress initially, but I think in
the long term we would be getting much better results.

~~~
yayday
You're already moving your goalposts. Your first post said _replicates the
findings_. If I simply post my results and someone else looks at the data and
agrees with me, then are they really replicating my finding, or did they, oh
what's the word… peer review my work and agree?

~~~
Fomite
I really dislike the idea that "Your code runs on your data" is a form of
replication.

~~~
apathy
It's a minimum standard and one that many published works cannot meet.
Universal enforcement of this minimum would itself be a big step forward.

~~~
Fomite
It is a minimum standard, and a useful one. It mostly annoys me when certain
groups of advocates seem to treat it as an end in and of itself, and as
evidence that something is "repeatable" and will end the problems in obtaining
scientific evidence.

I come from a field where something isn't reproduced until it's also found
in an entirely different study on a different population.

~~~
apathy
Oh, same here. Two of my current manuscripts have now been replicated in 3-5
independent clinical cohorts. The thing is, if the field won't even recognize
that "ability to produce the same outputs given your inputs" is critical, good
fucking luck with things like "actually replicates in a separate population".

Neuroscience is particularly horrendous about this.

------
jkot
I personally prefer reproducibility over peer review.

~~~
peatmoss
I'd be loath to choose. That said, reproducibility is the part that's
obviously broken in many cases. Peer review is broken in more obfuscated ways.

------
zyxzevn
Peer review is entangled with a reductionist structure. Each scientific area
is specialized, and articles are reviewed by people within those specialized
fields. Often they find something that is not entirely in their field of
knowledge - that is because they are trying to "science" something new.

So they can produce theories that look like a solution from their own field of
knowledge but are not valid in another field, and the peers will not see the
problem. Another problem is that engineers who try to apply this science run
into practical problems, but because they are seen as "of lesser knowledge,"
their practical criticism is often rejected.

The best example I know is the "theory" that magnetic fields can bump into
each other, producing energy. In this model the fields are made of flux lines,
which can bump into other flux lines. The flux lines will then break and
reorganize, producing energy.

Yet, as I write this, you may already think that this is bullshit, because
flux lines are imaginary lines used to describe a magnetic field.

Now, with this in mind, look at magnetic reconnection. This is a theory made
by scientists specialized in astronomy, not in the area of electromagnetism.

I believe this problem exists in every (specialized) area of science.

------
Toenex
We could allow everything to be published and then use a continual monitoring
approach. After all, the limiting resource these days is not paper but
reviewers' time, so let's distribute that. What is wrong with a model like the
one HN uses for open review and comment on work? Even a voting system could
help identify the most 'important' works (a rough sketch below).
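
For instance, here's a sketch using the widely cited approximation of HN's own
ranking formula (the exact constants are folklore, not anything official), so
fresh, upvoted work floats to the top while older work decays:

    def rank(votes, age_hours, gravity=1.8):
        """(votes - 1) / (age + 2)^gravity: newer, upvoted items float up."""
        return (votes - 1) / (age_hours + 2) ** gravity

    papers = [
        ("new preprint, few votes", rank(votes=8, age_hours=4)),
        ("older paper, many votes", rank(votes=120, age_hours=240)),
    ]
    for title, score in sorted(papers, key=lambda p: p[1], reverse=True):
        print(f"{score:.4f}  {title}")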

Edit: Haha great minds chmike

------
altoz
Hypothesis: peer-reviewed articles aren't rigorous

Prediction: a flawed article will pass peer-review

Testing: [https://svpow.com/2013/10/03/john-bohannons-peer-review-
stin...](https://svpow.com/2013/10/03/john-bohannons-peer-review-sting-
against-science/)

Analysis: Peer review is not rigorous

~~~
wapapaloobop
Isn't there a standard just _below_ rigour, though, that is still worth
checking for, something like 'no obvious errors within the current paradigm'?

------
arca_vorago
I'll preface by saying I'm only a sysadmin with a love of science, but I
learned the most about peer review and scientific publishing while working in
biotech at a genetics company. At some point I realized I actually had to have
some understanding of the science to properly admin it, and began reading lots
of papers, and I slowly started to realize just how bad many papers are.

There are a few key issues:

1. Scientists who "collaborate" with other scientists but do a small fraction
of actual work get their names on papers as number fodder. Anytime I hear
"I've been published in over 1000 journal articles" now I generally become
more skeptical.

2. Lack of reproducibility. Not only in the methods and the documentation of
the methods, but also the fact that most things just simply aren't even tested
by a third party.

3. Publishing. Publishers have for far too long locked up information the
public deserves to know, which is bad enough, but then they do a bad job of it
and allow bad science in. This is the largest part of the problem imho,
because they created the situation that semi-forces scientists into
questionable paper-writing tactics.

------
Pinatubo
There's a peer review scandal currently underway involving a top economics
journal.

[https://gborjas.org/2016/06/30/a-rant-on-peer-
review/](https://gborjas.org/2016/06/30/a-rant-on-peer-review/)

------
aaxe
www.Examine.com does a great job analyzing nutrition/supplement studies.

