
Mistakes Reviewers Make - sjrd
https://sites.umiacs.umd.edu/elm/2016/02/01/mistakes-reviewers-make/
======
mbrundle
Academic journal article reviewing is a very peculiar world, and I wasn't
impressed by what I saw of it. My observations (as a former postdoc in
biomedical research) were:

Reviewers get no pay or remuneration, scant guidelines, and no training (which
is where articles like this can make a difference). There are no tangible
career benefits for doing it (other than 'everyone else does it'), because you
won't get any sort of official record for papers that you've reviewed. (And
because it's single-blind, you'll never be credited on the paper.) There's
very little feedback or quality control on reviews exerted by editors. You
can't ever discuss the paper with your fellow reviewer(s). And it's an
enormous time sink - reviewing a paper properly takes at least two hours,
depending on the length and complexity. This is a real issue when you're in a
field where doing lab research, writing your own grants and papers, reading
the latest literature to keep up-to-date with the field, and possibly doing
some teaching or admin, already takes up most of your time.

It's a seriously broken system. I inherently like the idea of doing reviews
because it feels like you're giving back something to the community, but it
ended up feeling like this good will was being taken advantage of by the
journals, particularly the for-profit ones. I'm amazed that the whole system
continues to work as well as it does.

~~~
bloaf
All that _and_ reviewers who have made a name for themselves in a small-ish
field can view new entrants into that field as competitors rather than
collaborators.

------
susan_hall
This part reminds me of some of the job interviews I've gone to, as a software
developer:

"Detail-oriented: New researchers are often immersed in the minutiae of
research, such as building software, collecting data, and running experiments.
This means that they tend to focus on details (which may or may not be
significant) rather than the bigger picture."

I am in my 40s, yet when I go to a job interview I am often interviewed by
people in their 20s. I have 20 years experience with dozens of technologies.
And yet, just recently, I found myself facing a long list of questions about
the details of specific technologies, for instance, NodeJS. While I may not
know the details about NodeJS, I had no trouble learning Struts and then
Spring and then Ruby on Rails. Is there any reason to think I can't pick up
the details of NodeJS? I have done one major project with Node, is it really
crucial that I know all the latest packages before I get a job at your
company?

In these interviews I am often surprised by the focus on very specific aspects
of particular technologies. Who really cares? We all need to learn some new
technologies for any job, even if it is just the specifics of the software
that the company has built.

I am often surprised at the extent to which my 20 years of experience is
discounted. However, I run into this less often when I am interviewed by
someone who is in their 30s or 40s or 50s -- they seem more willing to
recognize that I've had a long career and I've learned a lot of tech.

~~~
sadadar
I think this is a small sample size. I spend a lot of my time as an engineering
leader teaching people how to interview engineers. A lot of them are young
but intrinsically recognize that trivia questions aren't important. Because I
now read about things like behavioral interviewing instead of the release
notes of the latest webpack, I get to make an impact in my org, and lots of
smart companies do the same.

------
scott_s
I commonly have academic computer science papers rejected for a variety of
these reasons. That is, the reviewers do not have any factual or methodology
concerns; they don't think we're wrong, or that we made any mistakes designing
our experiments. They just don't like it.

I've started to call these "Your baby is ugly" reviews.

------
kartikkumar
I don't think this has a lot to do with being a new reviewer. My experience
has in fact been that due to time pressure, senior researchers make decisions
on papers for completely the wrong reasons. They pervert the cause of science
because of the rat race. The one that really sticks out in the list, and that
I'm a stickler for, is "details".

The devil IS in the details. If a paper can't communicate the important
details, then how can you ever claim that your work is reproducible? If your
work is not reproducible, then it has no place in a scientific journal. In my
field, a lot of senior researchers haven't executed a line of code in years,
so the natural feeling is that the "detail" isn't important. If your code is
not open-source and can't be audited, you better have the details in place.

Another thing that I think deserves A LOT more attention is the co-author
list. There should be more in place to stop what's all too common: people
forcing themselves, especially senior researchers, on to papers that they have
no business being on. The setup in a lot of academic environments is such that
this can't be tackled from the inside. I think this should be a fundamental
part of a "reviewer manifesto": figure out who wrote the paper and if you
can't, ask to find out.

------
verylongaccount
Hear, hear. I would add one thing: make sure that you don't advance claims
without evidence (this goes for all scientific enterprises, not just writing
reviews). Just because your review is anonymous, do not think it excuses not
upholding the basic standard of science: all claims must be stated as clearly
as possible and supported by evidence.

~~~
nernst
Felt this was glossed over. It's all well and good to not be too harsh for the
reasons they mentioned, but ultimately the point is to 'peer review' the
science.

In fact I would say an important "mistake reviewers make" is ... not actually
doing much work. I've seen some appalling 2-3 line comments like "seems fine",
even from senior academics. That's not even to talk about problems with
misunderstood p-values, not reading the algorithm closely, not walking through
the proof manually, and so on.

