
Scientists aim to pull peer review out of the 17th century - fanf2
https://www.npr.org/sections/health-shots/2018/02/24/586184355/scientists-aim-to-pull-peer-review-out-of-the-17th-century
======
RcouF1uZ4gsC
Some observations.

First, I think that the scientific literature peer review process becoming
more like Amazon's reviews would be a disaster for science.

Second, although a scientific discovery may take months to publish in a
journal, that does not mean that the scientific community is in the dark about
it. Many times a finding is presented at various conferences well in advance
of it being published.

However, scientific peer review is broken. I think one of the best things that
can be done is to separate data gathering from analysis. With machine
learning, there will be a great demand for high quality raw data. It would be
great if scientists and institutions could achieve recognition for collecting
methodologically sound, high-quality data. Oftentimes, a high-quality dataset
is not made public, but is milked to produce a series of papers by the
institution or researcher, all based on the same private data source.

~~~
jcranmer
> I think one of the best things that can be done is to separate data
> gathering from analysis.

That is a _stupendously_ bad idea. The central theme of modern scientific
theory is hypothesis testing: you formulate a hypothesis, and then run
experiments to see if the hypothesis holds up. If you run hypotheses on pre-
canned data, what you very quickly end up with is p-hacking: running several
slightly different hypotheses until one clears the significance bar. P-hacking
is already a problem that occurs in practice, particularly in the social
sciences (it might not be far off the mark to say that most of the results,
particularly the most hyped ones, are the result of such p-hacking), and your
proposal would make it more readily available and acceptable.
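The mechanism is easy to demonstrate. A toy simulation (purely illustrative; the 20-hypothesis count and the coin-flip "data" are made up) shows how fishing through one fixed dataset inflates false positives:

```python
import random
import math

def two_sided_p(successes, n):
    """Normal-approximation two-sided p-value for 'is this coin biased?'
    under the null hypothesis p = 0.5."""
    mean, sd = n * 0.5, math.sqrt(n * 0.25)
    z = abs(successes - mean) / sd
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

random.seed(1)
trials = 1000
false_positive_studies = 0
for _ in range(trials):
    # One "study": 20 slightly different hypotheses tested on pure noise.
    found_significant = False
    for _hypothesis in range(20):
        flips = sum(random.random() < 0.5 for _ in range(100))  # fair coin
        if two_sided_p(flips, 100) < 0.05:
            found_significant = True
            break
    false_positive_studies += found_significant

# With 20 shots at alpha = 0.05 on null data, roughly
# 1 - 0.95**20 ~ 64% of studies "find" something.
rate = false_positive_studies / trials
print(rate)
```

Each individual test is honest; it is the undisclosed retries against the same data that turn a 5% error rate into a near coin flip.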

~~~
daveguy
I agree. Separating analysis from data gathering is generally a bad idea
(especially if anyone with Matlab is doing the "analysis"). Having signed
hypotheses and methods _before_ data collection would be good. In that case
different labs could do the acquisition (as long as methods for acquisition
and analysis are established beforehand). Even better to make the acquisition
part double blind. There is a strong infrastructure to do this in medicine,
but a lot of scientific studies require technical expertise that is scarce. In
medicine there are hundreds (thousands?) of labs or more that can run a double
blind study. In a narrow scientific field there may be only a few experts in
the world.

It is a difficult problem, but I do think there are better ways to organize
scientific study.

------
leoc
The formal pre-publication peer-review system doesn't date back to C17: it was
a response to the surge in submissions to the prestige journals in the post-
war period. For example, _Nature_ 's long-time editor John Maddox first introduced
it to that journal, but wasn't very fond of it. So it's another of the weird
and sometimes damaging changes produced by the postwar boom in university
education and science funding, for all that that boom greatly increased the
quantity of scientific research and the number of people who got to
participate in it.

~~~
dnautics
we might do better to bring peer review BACK to the 17th century, when it was
"actually being reviewed by your peers" instead of "a gateway to publication"

~~~
twtw
I don't know much about the 17th century, but late-19th-century "peer review"
was definitely way more fun than today. There was a ton of published
criticism by peers, often with colorful language and creative ad hominem
attacks.

As an example, the exchanges between Heaviside and Preece are highly
entertaining.

> Mr. Preece is much to be congratulated upon having assisted at the
> experiments upon which (so he tells us) Sir W. Thomson based his theory; he
> should therefore have an unusually complete knowledge of it. But the theory
> of the eminent scientist does not resemble very closely the eminent
> scienticulist. (Heaviside, _The Electrician_, 1887, qtd. in Nahin's biography)

Heaviside commonly referred to Preece with names such as "the eminent
scienticulist," the "unscientific speculator," and occasionally "the Nameless
One."

------
kickout
Agree, most public research (in the U.S.) is in a poor place as far as "asking
the right questions" goes. The amount of underpowered and poorly designed
studies that get published has driven me to almost ignore research in all but
a few journals. The justifications (and conclusions) for these studies are
delusional at best.

There is a lot of high-quality science, but the signal-to-noise ratio is
quickly getting out of hand. Peer review is a busted methodology, and I salute
places such as arXiv and bioRxiv that seek to disrupt the old paradigms.

------
allthenews
Sounds like a movement toward crowdsourcing, long overdue for our increasingly
broken scientific process.

If the HN experiment has worked for so long, surely, with proper controls on
participation and interaction, there's no need to worry about any kind of
"Wild Wild West," as the author put it.

What I wonder is whether anonymity would be better or worse for some kind of
semi-public peer review forum.

~~~
kickout
Anonymity would be better in my opinion (to start off with). Way too much name
recognition and "good old boys club" dynamics happening in way too many areas
of science I follow (genetics, agriculture, prediction).

~~~
neuromantik8086
Pseudonymity might also be a good option (although it also has its pitfalls):

[https://twitter.com/jaipelai/status/900513036552859648](https://twitter.com/jaipelai/status/900513036552859648)

[https://news.ycombinator.com/item?id=15918851](https://news.ycombinator.com/item?id=15918851)

------
exelius
Needs more blockchain IMO. I’m actually serious about that too — individuals
can choose to trust / distrust other individuals they know to be experts in
their personal network, and if there’s enough proximity to their trust graph,
they can know that there is some rigor behind that. Approve too many papers
that are garbage and your status will fall and people will unlink you.
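The "proximity to their trust graph" part doesn't actually need a blockchain; a plain graph search captures the idea. A minimal sketch (hypothetical reviewer names, toy edge set):

```python
from collections import deque

# Toy trust graph: who directly vouches for whom (hypothetical reviewers).
trust = {
    "alice": {"bob", "carol"},
    "bob": {"dana"},
    "carol": {"dana", "eve"},
    "dana": set(),
    "eve": set(),
}

def trust_distance(graph, src, dst):
    """Length of the shortest vouching chain from src to dst, or None."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def trusts_within(graph, src, dst, max_hops=2):
    """Accept a reviewer's judgment only if they sit close enough
    in the trust graph."""
    d = trust_distance(graph, src, dst)
    return d is not None and d <= max_hops

print(trusts_within(trust, "alice", "dana"))  # True: alice -> bob -> dana
print(trusts_within(trust, "eve", "alice"))   # False: no vouching path
```

The hard problems (unlinking bad actors, Sybil accounts) are in the incentive design, not the graph traversal.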

If there was any system crying out for a distributed proof-of-work / proof-of-
trust system, it’s the peer review process. The incentives to review and
proper analytics visibility are all there; the only catch is that you would have
to involve institutions in the signing chain in order to ensure a blockchain
“account” belongs to a single individual and not, say, a troll farm working
for a pharma company.

~~~
lotophage
That's a terrible idea. Science shouldn't rely on "trust", and reviewers and
reviewees should be anonymous to one another. A fact stands on its own; it's
not dependent on its source.

~~~
8bitsrule
"Shouldn't?" True, in an ideal world. But in this one, a 'fact' stands on its
own if it can be replicated. Which is only possible for someone who has the
expertise, time, and inclination.

And that's the easier part; arguing the conclusions drawn from observations is
where it can get really gritty ... particularly when the conclusions discredit
the work of others.

Much of classical science was constructed on a basis of reputation amongst
familiar colleagues... a form of trust. But a close look at history reveals
that factions and bias have always existed.

Times have changed, and everyone can't know everyone else. Another corrosive
modern 'fact' is that science is largely driven by who will pay for it. E.g.,
government and corporate interests may be less concerned with facts than the
'right' facts.

------
dguo
The whole process of getting published seems so strange to me. I only know an
overview of how it works from friends who are in PhD programs, but I
suppose I've gotten used to the open source world and the Internet in general,
where anyone can "publish" globally and basically for free. Sure most of that
content is subpar, but that's why we have methods and tools for curation.
YouTube probably won't put my video on their homepage, but they're also not
stopping me from putting it online in the first place.

I've wondered what it would take to build something like a GitHub for science.
Eisen's APPRAISE system seems like a huge step towards that, and I hope it
works out.

~~~
tialaramex
Nothing stops you publishing globally and for free, but in some disciplines
nobody will read what you wrote if you do that unless they've already heard of
you.

In some disciplines you absolutely can just self-publish and get noticed.
Arxiv, starting in the 1990s, allowed you to upload a TeX document about your
work in say, astronomy and other astronomers who had the World Wide Web (this
was still the 1990s after all) could read it, pass comment and perhaps
recommend it to others. These days Arxiv has sections for physics,
mathematics, astronomy, computer science and related disciplines.

However: Some disciplines and sub-disciplines have a huge problem where
enthusiastic amateurs want desperately to tell everybody in that discipline
about some nonsense. For them systems like Arxiv are hopeless, because every
day you're going to get fifty documents from some guy living with his mom who
is convinced he's harnessing zero point energy with a paperclip and a pair of
fridge magnets, or a 53 year old self-taught cryptographer who is convinced
her scheme for "reusable one time pads" works even though she doesn't
understand why the OTP is provably secure in the first place.

~~~
dguo
Yep, I'm aware of Arxiv. What I should have said is that while it's certainly
possible for people to self-publish, it's not the norm (as far as I can tell)
for most serious scientific efforts. Whereas in the software world, it's not
at all surprising when even huge companies like Microsoft and Google release
software using GitHub.

I don't see nonsense as a significant problem. There are plenty of throwaway
projects on GitHub that receive no attention, but we have methods of curation
(i.e. creator reputation, reviews, etc.) for surfacing the things that do
deserve consideration. These methods may be frequently imperfect and
inefficient, but if they work for code, retail products, etc., I don't see why
they can't work for scientific papers.

------
fngs13
There are a few reasons why peer review is failing, but they may not be
obvious to outsiders.

First, to be selected as a peer reviewer for any reputable publication, one
needs to have a credible publication record in the field (e.g. 5-10
publications in the peer-reviewed literature in the same or a closely related
field in the last 2-5 years).

Second, the handling editor needs to be aware of your work and consider it
relevant before you are selected. Inherent in this step is some knowledge of
where you trained, your collaborators/coauthors, your stature in the field,
along with any other potential bias (e.g., age, sex, nationality, etc.).

Third, if selected as one of three potential reviewers for a manuscript, you
must be willing to volunteer two to four hours of your time to handle the
task, usually within a few weeks of the invitation. Inherent in this step is
also the potential interest level of the manuscript, your opinion of the
authors and their past work, the quality of the writing, the quality of the
publication, your relationship to the editor and editorial board, and any
potential conflicts of interest that may arise because of your past and
present work.

Fourth, peer-reviewers are volunteers. The work is not credited and certainly
not "credit-worthy" in an academic sense (it doesn't help in tenure or
promotion decisions). The reason anyone accepts a peer-review invitation is to
gain some insight into what potential competitors/collaborators are working
on before the work is made public in the literature. Alternatively, it is done
as a favor to an editor. Peer-reviewing is critical to ensuring that the
quality of science remains high, but every moment spent on the task is one not
spent on one's own publications, grants, proposals, or other work for which
one is either paid or must secure funds to remain competitive.

Fifth, the reward for doing a good job is receiving more invitations to do
more peer reviews. Simply put, most editors are insufficiently aware of others
working in the field beyond their own small network of colleagues, so they go
back to those they know. This works until an editor has no friends or
colleagues left and they run away from him/her at meetings.

Sixth, the sheer volume of submissions into the scholarly literature has
increased at a rate that is about twice the historical rate, largely driven by
open access publishers and the demand for more publications/year/academic
scientist (note that scientists in the private sector do not waste their time
chasing after meaningless publications). In the last year or so, the number of
published articles, based on industry estimates, was around 1.5 - 1.75 M. When
you factor in the average rejection rate (around 40%), the need for three peer
reviewers/article, and the number of invitations needed to secure those usable
reviews (typically 8-10 per manuscript), you get an idea of the scope of the
problem facing publishers.
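Those figures can be combined into a back-of-envelope estimate (one reading of them; the ratios above are stated loosely, so treat the constants as assumptions):

```python
# Back-of-envelope reviewer load, using the figures quoted above.
# All constants are rough readings of an ambiguous estimate.
published_per_year = 1.6e6        # "around 1.5 - 1.75 M"
rejection_rate = 0.40             # so published = submissions * 0.60
reviews_per_submission = 3        # three peer reviewers per manuscript
invitations_per_review = 3        # ~8-10 invitations yield ~3 usable reviews

submissions = published_per_year / (1 - rejection_rate)
reviews_needed = submissions * reviews_per_submission
invitations = reviews_needed * invitations_per_review

print(f"submissions/year: {submissions:,.0f}")   # ~2.7 M
print(f"reviews needed:   {reviews_needed:,.0f}") # ~8 M
print(f"invitations sent: {invitations:,.0f}")    # ~24 M
```

Tens of millions of unpaid, uncredited review invitations per year is the scale any replacement system has to handle.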

Are there solutions? Yes. Automation, when applied in the right places, works.
Tools are already available for publishers to prescreen manuscripts for
completeness and accuracy at the time of submission and to alleviate much of
the grunt work that is pushed to peer-reviewers (because they work for free,
remember?). There are also tools coming to market that help identify credible
peer-reviewers that editors may not be aware of because they are simply
outside their network of contacts. Those same tools can be used to ensure
that reviewers are selected to review manuscripts that are likely to be
relevant to them, of interest, and reasonably well written (or at least
comprehensible). On the other hand, AI tools to screen manuscripts that
describe new things are highly unlikely to work, because there will be
nothing on which the algorithms could be based. This takes real intelligence
and insight that comes with years of deep reading and work in a field. Even if
an AI-based peer review system could be developed for one field by modeling a
human expert, it is unlikely that the technology could be readily generalized.
Blockchain? How would that work? What happens when there are conflicting
opinions?

The other solution would be for journals to impose much stricter requirements
on publishing (real quality, not just "good enough," which is the emerging
trend) and to impose limits on each academic scientist (e.g. no more than 100
articles/lifetime). Other possible solutions include increasing the cost of
publication. To some extent this is already done with journals in the
sciences, where prestige publications can demand a premium article processing
fee. Also, many funding agencies and academic institutions discount articles
published in lower-tier journals, so perhaps some level of natural selection
might apply. The other point to consider is that if an increasing portion of
the research budget is expended on publishing in open access journals, we may
eventually reach the point at which there is nothing left in the budget to
actually pay for salaries, stipends, and research, and the peer review
problem will disappear.

~~~
GeorgeCarlin123
"It's a Big Club, and You Ain't in It" \--George Carlin

