
How to fix peer review - flipchart
http://www.economist.com/blogs/babbage/2013/12/scientific-publishing
======
mbq
The current state of things is that one has to collect many papers about a
certain hypothesis and analyse them on one's own to get any idea of what the
truth may be -- neither the IF nor good names guarantee anything; on the
other hand you can find real marvels in some obscure conference proceedings.

Given this, I doubt we should hand even more complex decision-making and
responsibility to the editors; I would rather go for forcing better
reproducibility of papers and more open reviewing -- publishing reviews,
opening up to more, narrower reviews (e.g. the lab protocol goes to a
biologist, the result evaluation goes to a statistician, the possible impact
goes to some recognised figure in the field) and making publication of
comments and rebuttals easier.

------
6d0debc071
I wonder about the efficacy of paying non-trivial bounties for errors spotted
in papers. I mean if you're not paying people for spotting errors in something
then what difference does it make if they're good at it or not?

Aside from giving people an incentive to actually spend some time and
expertise on them, it'd also provide a rough guide to how important you
expected a paper to be to the field. If you weren't willing to pay a lot for
the discovery of errors, then you'd expect that for one reason or another
errors in the piece wouldn't make much difference; either because its claims
are unimportant or because you didn't think errors would be hard to find.

Problems?

I could see people trying to contact someone who was reviewing their paper to
counter-bribe them, or someone who was reviewing trying to contact the author
to elicit such a bribe. Double blinding might take care of some of that but in
some areas you can guess fairly easily who someone is.

Perhaps a combination of double-blinding and multiple reviewers - preferably
with some from a number of related disciplines - with already-reported errors
flagged, so that the bribes the author would have to pay don't scale well.

~~~
dalke
What counts as an error? A typo? A grammar error? A mistake in a citation? A
transposition of two column headers? Omitting a ')' in an equation?

How much is each error worth? Is it a sliding scale based on the presumed
severity of the error?

Who pays?

What if the person paying disagrees with the person making the claim? Who
settles the disagreement? How do they get paid?

~~~
6d0debc071
> What counts as an error?

I feel like 'let the market decide' is probably a reasonable answer here.

Science is always an interest-directed endeavour, and we have these problems
in conventional review anyway, they're just harder to provide a consistent
definition of. I would be inclined to say that typos and grammar errors
probably aren't worth paying for - but that's just me. If journals turn out to
really care about those things, why not let them pay for them?

Heck, if they know ahead of time that that's what they're interested in paying
for, let them forward the paper to a grammar expert and get even more utility
for the money.

How much each error is worth is related to who pays. Well, make bounty funds
and let each fund make its own choices. If a bunch of Christians really want to
disprove evolution, let them put a massive bounty out on errors in biology
papers. I imagine that such Christians wouldn't care much about typos or
grammar mistakes, they might care more for citation errors, and a lot more for
errors in maths or evidence.

You pay for the truth that's important to you.

And if the person paying disagrees with the person making the claim, then the
person making the claim presumably doesn't get paid. But if you keep doing
that, then you damage your ability to buy truths that are important to you.
Which is ultimately self-destructive. Bug bounties spring to mind as an
example there. If the Christian fund never pays out, people will just stop
working with the Christian fund.

And that's a decent signalling mechanism it seems to me. If people claim to
value the truth in some area, but they never pay out and no-one wants to work
with them... then it's probably a decent bet that they don't really. After
all, they won't pay for it.

One potential problem there, I suppose, is ignorant people putting out a
bounty on something and ending up paying out a lot of money for errors that
aren't actually errors. Because they don't know enough to judge for
themselves. But in so far as you have a motivation to avoid paying for lies,
you'd then have a motivation to hire people who knew more about the subject
than you to vet them - which isn't a bad thing. I mean if you didn't know
enough to assess the thing in the first place, all this does is provide a
reason for you to be cautious about pretending to know things you don't.

~~~
dalke
My feeling is that your proposal makes no economic sense. What's the market?
How big is that market compared to the external market? People pay to have
their articles published. The price ranges between $100 and $1,500. As a
reward, people get prestige and career advancement.

Other journals are free to publish in, but the readers pay an access fee.

In any case, the publication fee is usually much smaller than the time and
effort needed to write a paper.

So a pittance fee - even $100 for all error corrections per paper - isn't
enough to change that overall market. You need to have serious money to make a
difference.

If someone has evidence which, say, shows rabbits and velociraptors co-
existed, then it's more cost effective to publish that paper in its own right.
I can't figure out why another publication/error correction system would be
more effective.

Consider also that many research fields are very small. The next person to
understand the paper well enough to correct it might not come around for 10
years. Market-based solutions don't work well on long timescales like that.

------
plg
"Each began with a scientist who had reached an initial opinion as to which of
two opposing hypotheses is more likely to be true."

Herein lies (at least) one problem. Scientific publishing is about presenting
experiments + data and then interpreting the results in the context of
hypotheses. Interpretation ought to be about evidence in favour of or against
various hypotheses ... and not about forming an opinion about a particular
hypothesis (and then designing experiments to validate that opinion).

One can't, though, ignore the current culture, especially in academic science. No
Assistant Professor is going to get a paper into Nature or Science by
carefully presenting a balanced assessment of evidence for and against a
hypothesis. It's a crying shame, because the current culture is implicitly
training our young scientists to behave in an unscientific way.

------
lmm
This model provides a way to ensure that hypotheses that most scientists
disagree with continue to be published. Is that really what we want, though?
When a hypothesis turns out to be false, it's good that a consensus forms on
this fact.

~~~
flipchart
The problem is that the hypothesis is not always wrong. There are many
examples in history where new hypotheses came along that went against the
generally accepted dogma of the relevant field and that scientists tried to
squash.

Think about it this way: if a hypothesis comes along which runs counter to the
work you've been doing for god knows how many years, are you really going to
agree with it (publicly)? This new idea threatens your career, so you do
everything possible to get rid of it. The new system tries to keep around
these new ideas so that they can be evaluated multiple times, as the people
doing the evaluation change.

~~~
theintern
I'm not sure that's true. I'm a researcher and at least among the younger
generation of researchers, the anonymous peer review system often results in
"oh shit" moments where you get a paper to review that you have to
begrudgingly admit is good and recommend for publication, even though it makes
some of your work invalid.

That's just one person's experience, though; I have come across proud
professors (and reviewers of my own papers) who seem to let pride come before
professionalism.

Peer review should also be blind. Too often a famous name can get papers
published that would not be published otherwise.

~~~
flipchart
It's awesome that you stick to the moral high road despite the consequences
for your own work. I think that the problem may become more pronounced as the
length of a researcher's career grows.

 _Peer review should also be blind. Too often a famous name can get papers
published that would not be published otherwise._

Fully agree, although that only solves the problem of pushing through papers
based on researcher reputation (not the problem of rejecting papers on the
basis of conflict with your own agenda).

