
Our broken peer review system, in one saga - nkurz
https://familyinequality.wordpress.com/2015/10/05/our-broken-peer-review-system-in-one-saga/
======
roel_v
"In a world with limited space for publishing research – which is not our
world – this would be a good reason to reject the article."

LOL - so essentially: "printing/copying of PDF is cheap, so Y U no publish
this 'study' of a regression that a first-year stats student could do with an
interpretation of that regression that is devoid of any scientific method?"

Yes, if your paper is weak to begin with, you need to jump through more hoops
(i.e., sell your research better).

Look, getting your papers published sucks sometimes, but if anything, the OP
_confirms that peer review works_. Or maybe what it shows is how it fails:
people take weak research and just keep submitting it to journal after journal
until one is tricked into accepting.

(the further I read on the page, the more I'm convinced we're subtly being
trolled here: "The new knowledge was all created two years before it was
published." "new knowledge"? A regression on a publicly available data set
with the main dependent variable being an ambiguous 3-option question on a highly
contentious and nuanced topic; and some 'discussion' which the authors _admit_
in the first paragraph is unverifiable and basically an opinion piece?)

(edit: typo)

~~~
stdbrouw
A scientific conclusion does not become intrinsically less interesting just
because all it took was a regression a first-year stats student could do.
Simple and obvious is often better.

The suggestion in the linked article that, "hey, we ran a regression so now we
can just come up with whatever interpretation we like" irked me as well, but
you really do have to read the comments of the peer reviewers: one is peeved
by the fact that they don't talk about his/her pet topic, another that they
don't include this or that kind of historical context, another tries to be
smart about statistical methodology but in the process shows their
incompetence (a beta coefficient _is_ a measure of effect size, you dummy)...
when all the authors want to do is to point at the fact that opposition to
pornography has declined over time. You can argue that this is not
interesting, that this is obvious, but the whole point of science is to not
accept things just because they seem obvious.
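To make the effect-size point concrete, here's a minimal sketch (with made-up
numbers, not the paper's actual GSS data): in a one-predictor regression, the
standardized beta coefficient equals Pearson's r, which is exactly why a beta
coefficient counts as a measure of effect size.

```python
import numpy as np

# Hypothetical data (assumption for illustration only): survey year vs.
# share of respondents opposing pornography.
year = np.array([1975, 1985, 1995, 2005, 2015], dtype=float)
opposition = np.array([0.46, 0.44, 0.40, 0.37, 0.33])

# Simple OLS fit: slope is the raw (unstandardized) regression coefficient.
slope, intercept = np.polyfit(year, opposition, 1)

# Standardized (beta) coefficient: the slope rescaled by the ratio of the
# predictor's and outcome's standard deviations. With one predictor, this
# equals Pearson's r -- i.e., the beta coefficient is an effect size.
beta = slope * year.std() / opposition.std()
r = np.corrcoef(year, opposition)[0, 1]
```

Here `beta` and `r` coincide (up to floating-point error), and the negative
slope is the whole finding: opposition declines over time.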

A new journal launched recently, by the way, which is aimed at exactly this
sort of finding: [https://sciencematters.io](https://sciencematters.io) The
idea is that scientists will be able to publish empirical results without the
requirement to craft an entire story around them. Science needs theory, but it
also needs facts.

------
jacobolus
My impression from the reviews is that most of the reviewers thought the study
was too easy to run (just grab a widely known data source and press the
“regress” button in some statistical analysis package), not novel or
interesting enough, and with a fairly speculative analysis relative to the
data. [Personally I have no opinion about the importance of the paper, which I
didn’t read.]

It’s hard for reviewers to politely articulate “we just didn’t think your
paper was cool enough for our journal” or “we thought your analysis was weak”,
but most of them did seem to be trying. My impression is that the author
didn’t really take the criticism to heart, and had trouble reading between the
lines.

These journals surely get _many_ similar paper submissions, and this author’s
opinion notwithstanding, they can’t publish them all.

I agree that the system is too slow, too arbitrary, and too high-stakes
though. It’s tragic that the quantity and venue of published papers are the main
way academics are judged.

It would indeed be nice if scholars could (as an option) publish their papers
without as much oversight, and then have more of the discussion back and forth
about the paper’s relevance take place in public. Then if the paper turns out
to be significant, it could be republished in a leading journal.

~~~
danieltillett
The problem is the use of which journal a paper is published in as a measure
of the quality of the paper. Until moronic hiring and tenure committees use a
better measure of research quality than the journal a paper is published in
then these sub-optimal practices will continue.

------
URSpider94
To play devil's advocate for a moment, I'm not sure that the paper is complete
as it's described. The authors do a fairly straightforward statistical
analysis on an openly-available dataset, and that's that. I see many reviewers
asking the question, "yes, so what?" which is exactly the question that we
should be asking as scientists.

To go back to basics: start with a theory or model, make a prediction based on
that model, then test to see if that prediction is correct. In this work, the
authors explicitly stay away from anything other than a hand-waving
explanation of their results, and seem surprised that the reviewers keep
asking them to "frame" their results in a broader context.

It would be as if Newton published an observation of fruit falling off a tree,
and failed to mention gravity. An interesting observation, perhaps, but not an
advancement of science.

~~~
rubidium
Yea, it seemed like a pretty typical fight to publish an interesting but thin
finding. Not that the publishing system isn't broken, but this story doesn't
seem to highlight why very well.

------
ISL
The arXiv is an immediate solution to the author's concerns. Preprints have a
long history in physics; the journals are the official stamp of importance and
plausibility, but most timely results appear first on the arXiv.

If you put a paper there, and someone reads it/finds it useful, or if someone
does a literature search on the subject, the arXiv entry will get cited.

<looks at arXiv, notes lack of a sociology section>

I don't know why arXiv hasn't broadened its scope outside of the natural
sciences, but I'm certain that a) Ginsparg has a good reason, and b) they're
happy to help others implement similar systems.

~~~
dougmccune
There are actually a lot of solutions to the problem, and the author seems
aware of them, as discussed at the very end of the article. There are pre-
print servers like arXiv, there are mega journals that have a slightly
different type of peer review (what I'd call methodological review as opposed
to "importance" review) like PLOS One, PeerJ, SAGE Open, etc. And yet even
though the author is aware of these options, he acknowledges that it all comes
down to how his publication will be perceived by his peers, and the non-
traditional options simply don't yet have enough reputational weight. So
fundamentally even though there are lots of other options, they aren't
realistic because of the politics of academia.

To be fair, most of these non-traditional systems cater almost exclusively to
hard sciences and don't focus on sociology.

------
danieltillett
Makes me glad I am not a sociologist!

The same basic problem occurs in all fields in that many reviewers and editors
are basically idiots. When you get a rejection, you look through the reviewers'
comments for the useful suggestions (mostly these are about spelling or
grammar, but occasionally you will get a real gem) and send it out again.
Whatever you do, do not change the paper to suit the reviewers or editor. Just
shoot it out again as quickly as possible.

~~~
cmrivers
This is a risky strategy - your paper can end up back in the hands of the same
reviewers from round one.

~~~
danieltillett
Yes, this can occur, but surprisingly rarely. What always amazes me is how rarely
the reviewers ever agree on anything. Getting published is just a matter of
sending out your work until you get two or three reviewers who say accept.

I have had the situation where all three reviewers said publish and the editor
rejected the paper - of course, this paper has gone on to get over 600
citations, so it was not the best decision that editor has ever made.

------
rrrrtttt
I can't see how arXiv helps with the outlined problems. The goal of publishing
is getting a stamp of approval from your peers, which you can then bring to
hiring or promotions committees to buttress your case. arXiv merely helps you
put your paper online, which I would argue is a trivial problem (and has been
so since the personal university homepage was invented).

------
Others
I read the paper before I finished the article, and I really didn't like the
abstract. After reading the original abstract quoted in the article, I'm sad
that it was made so much worse.

