I think they're clearly correct, and that even on HN, in a community that (roughly) prides itself on being scientifically literate, there are broad misunderstandings of what peer review means (during the bogus "Sokal Squared" hoax, for instance, many commenters implied that peer review prior to publication was meant to encompass replication). Also, while I'm not a "scientist", I've gotten to do some peer review work for ACM and Usenix, and even in the little bit of review I did, I've seen some shit. There is much less formality and oversight to review than you might expect.
But replicating a proof or a complex experiment is way beyond what reviewers can accomplish.
Another problem with peer review, especially in experimental fields, is malicious reviewers. Big PIs typically gain control of a particular subfield by publishing something first, then getting lots of papers to review and blocking said papers and/or stealing good ideas from them.
That's a bold claim, especially the "typically" part. How can you back it up?
I have witnessed this in some groups I have been a member of. Very unethical behaviors. As I said above, blocking papers in peer review, or even rejecting them, while simultaneously sharing key bits from those papers with postdocs is a routine practice.
Knowing journal editors and reaching publication agreements before papers are even written is also very common, and hardly surprising to anyone who has been in the field for some time.
The one consolation is that at least we know who's reviewing, who's an asshole, and that this space is as hot as we thought it was.
Furthermore, the standard for mathematical proof has also changed over time, most significantly in the early 20th century. This led to a number of existing results needing to be re-proved (or thrown out! Some were incorrect!).
Exactly what qualifies as a proof is a FASCINATING debate. Mathematics is created by consensus, just like all other knowledge.
>My first paper (and, in fact, the first paper) on public key cryptography was submitted in 1974 and initially rejected by an unknown "cryptography expert" because it was "...not in the main stream of present cryptography thinking...."
Is there any proof peer review has increased scientific productivity?
It is relatively new.
Peer review is a relatively new procedure in the same way that California is a relatively new state :-p
If you name a famous publication prior to 1900, it almost certainly was published without peer review.
Most revolutionary scientific discoveries were made without it.
I married into the field, didn’t work in it, but that’s what I’ve observed from the side.
A system with an "editorial team willing to delegate when necessary" simply becomes the current system: the editorial team is willing and able to delegate, and delegation is necessary pretty much all the time because of resource constraints, since being an editor is generally also an unpaid volunteer position done in addition to your normal work in academia.
Same as you, I’ve had reviewers reject research based on their own stake in the outcome, and I’ve also had good reviews with constructive feedback. As a reviewer, I’ve also helped "tear apart" an article that had good results but was poorly structured, to help the authors clean it up.
The underlying problem with Nature is that they pick the flashy stuff AND present it in a very, very condensed manner -- removing all the important technical details -- to make it look exciting for a wider audience. For the few hundred people actually in the subfield, the best course of action is often to ignore the PR piece in Nature and go hunt for the technical paper with all the details that is published in parallel in a "lesser" journal. That is the one that tells what the actual progress was and what you want to consider for your own work.
I don't object to the condensed part so much. When I was a physicist, people had a similarly dim view of Nature. But Physical Review Letters also published four-page papers. And PRL was definitely a bigger prize than the long form PR(xyz) papers.
Now if you happened to want to drill in and do work based on someone else's, then you would no doubt prefer the long-form. But if you are keeping up with the field, then the 4-pager is much preferred.
You should be able to lay down what the actual science is in a small space. And if it's a theory paper, that space is enough to have a few equations that the reader could then puzzle out justifications for herself.
We said that in biochemistry as well. Also Science. There are some huge stinkers there. When arsenic life came out, we were like: of course it's horsedung, it's in Science!!
In the process, they wrote some papers that, especially if you weren't assuming the person made up the data underlying them, were...middling compelling.
I read one of the papers - the dog park one - and my conclusion was much the same as one of the peer reviewers. That it was an interesting concept, presented slightly too stridently (a common thing with papers written by say...students), and interesting enough as a frame to jump off from.
If someone not in the field can write a "middling compelling" paper in a distant field while trying to be absurd, then that still means something. If some jerk (basically) can just scribble a "publishable paper" off in some semi-major discipline while literally doing the opposite of trying, then clearly that semi-major discipline has very little value to it. We're not creating these large infrastructures with journals and university libraries and peer review and the Imprimatur of Scientific Credibility and credentialed university graduate programs and millions of dollars in grants for things any idiot can scribble down. We've got the Internet at large for that already.
If it's by design or somehow correct that peer review can't catch these things, then my opinion of the situation is even worse than if they were merely a demonstration of things slipping through a net, and peer review on any level is apparently utterly worthless.
Your defense may be superficially interesting, but if taken seriously, only deepens the problem.
In doing so, they ended up mimicking the field too well.
I mean, we can try to puff up Sokal as some kind of massive genius who got to publication-quality skill levels in multiple fields in less than a year, but... in that case, why shouldn't I accept his take on the results, since he's apparently that much of a genius?
There just isn't a way around it. If it's that easy to get to "publishable", it isn't worth being an academic discipline.
Compare someone in one of those fields trying to publish a particle physics paper by just imitating what they think are the important characteristics.
I’m not at all surprised that, after devoted effort, they managed to have middling success getting papers into some okay journals.
Better than nothing, but very far from ironclad. Only replication really verifies scientific findings. Everything else is just window dressing.
(I say this as a regular reviewer; for whatever reason, this particular week I’m reviewing for both Science and Nature.)
Think of it as type checking by the compiler or a code review by somebody on your team, NOT a papal announcement of unchangeable dogma.
"I can run your code/experiment/etc. in your population and get your results" isn't the gold standard either. "We approached it from a different way, and with a different population, and we get a consistent answer" is both harder and more compelling.
1. dramatically reduces the pace of progress
2. exacerbates publication bias
3. creates a false sense of accomplishment
4. creates a false meritocracy
5. creates many perverse incentives
(3) and (4) unfortunately wrap into grant financing as well. The merit of your research isn't measured by its impact on the human condition, but by its 'impact factor' (a measure derived from citations to your publications).
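(For reference, the standard journal impact factor is just a citation ratio; writing C and N as shorthand:

    JIF(J, Y) = C / N

where C is the number of citations received in year Y by items journal J published in years Y-1 and Y-2, and N is the number of citable items J published in those two years. It is a journal-level citation average; nothing in it measures effects on the human condition.)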
The gatekeeper problem here is pernicious as well. If you become a highly cited author your ability to get/maintain financing improves. You also become a gatekeeper as a peer reviewer. Which means that you are now strongly incentivized to accept papers from people who cite your work or align themselves with you, and reject everything else.
What is absolutely amazing here is that the peer review process is opaque. It is my belief that if you knew who reviewed which papers, you would quickly discover that mild to severe abuse of peer review is the norm, not the action of a handful of bad actors. This is because the entire academic reward system is wrapped into the process. Getting your name on a big paper can have lifelong ramifications on your ability to get grants, start companies, do consulting work, etc.
Peer reviewers should probably get paid for the work. If they don't get paid, then their incentive to review must come from somewhere else, and vague notions of improving the field don't cut it. Peer review should obviously be transparent. Some people might be uncomfortable signing their name to a paper rejection, but it's time to get over that, and a small payment might help reviewers overcome this discomfort. It is bizarre that peer reviewers don't get paid: peer review is valuable work if done right, and without payment, the rewards of being a peer reviewer come for the wrong reasons.
Finally, I agree with some other comments. Publication should not be contingent on peer review, it should come first. This would increase the pace of progress, reduce publication bias, reduce the false meritocracy, reduce the ability for bad actors to censor research, and more. The cost would be a larger number of publications, but perhaps this would help people realize that many of the publications coming out right now are of little value.
Transparency is badly needed in many facets of academic research. My company made a site that helps bring transparency into literature review (sysrev.com).
As you presumably know, many journals have experimented with open peer review, but editors still need to police the reviews to look for bias. It solves some problems but creates others.
Just because other journals failed in the past doesn't mean we shouldn't try again in the future! Maybe there were some mistakes we could learn from? Also the internet is becoming a more familiar tool used more often by more people every day, maybe it just wasn't the right time previously.
> In the late 2000s, widespread debate and controversy ensued after Budden and colleagues (2008a) found that a switch to double-blind review in the journal Behavioural Ecology led to a small but notable 7.9% increase in the proportion of articles with female first authors
Others agree: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5629234/
> Findings from studies of journals that have actually adopted the practice are non-conclusive. For example, Budden and colleagues show that the introduction of double-blind review by Behavioral Ecology was followed by an increase in papers with female first authors (Budden et al., 2008). However, a second paper reports that female first authorship also increased in comparable journals that do not use double blind review (Webb et al., 2008). A study by Madden and colleagues shows that the adoption of double-blind reviewing in the SIGMOD conference led to no measureable change in the proportion of accepted papers coming from “prolific” and less well-established authors (Madden and DeWitt, 2006), but a later study contests this conclusion (Tung, 2006).
We would all have to think long and hard about how to align incentives to get the desired results. As a first pass, I think hiring some number of people to review every paper, and then allowing anyone who wants to come by and leave their comments as well, might be a place to start. I think some sort of reputation system similar to Stack Overflow might be a motivator as well.
I'd much prefer double-blind peer review.
The ideal of peer review is pretty much the gold standard of scientific work. By that, I mean the idea that your work should undergo rigorous scrutiny by your peers before it gets accepted for publication. Like any other human system, though, the process is still subject to politics and biases.
My hope is that a truly open review system might one day democratize science and ameliorate the issues with the current review system. For example, it would be cool to put all the revisions, reviews and authors’ comments in the open as an online appendix. That will show how the paper changes as it moves through the process, the issues that the reviewers brought up, and how the authors responded to them. It will also help show if there is systemic bias against certain researchers or topics.
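To make that concrete, here is a minimal sketch of what one paper's open appendix record might look like (all field names here are hypothetical; the point is just that every artifact is retained and published):

    # Hypothetical open-appendix record: every revision, review, and author
    # response stays public alongside the final paper.
    review_trail = {
        "paper_id": "2019.01234",
        "revisions": [
            {"version": 1, "date": "2019-03-01", "file": "v1.pdf"},
            {"version": 2, "date": "2019-06-15", "file": "v2.pdf",
             "changelog": "weakened claim 3 after review A"},
        ],
        "reviews": [
            {"reviewer": "A", "on_version": 1, "verdict": "major revision",
             "comments": "conclusion not supported without controlling for X"},
        ],
        "author_responses": [
            {"to_review": "A", "text": "added the control; see section 4"},
        ],
    }

Diffing two revisions would then show exactly what the review process changed, and aggregating verdicts across many papers could surface systemic bias against certain researchers or topics.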
We have arxiv now; the whole process could open up even more, slightly closer to research in the open, like open-source development in the open. This is to make peer review easier.
Then what is currently "publication" becomes what it should be, in the sense of adding value: curation, filtering the most interesting, least dubious results.
So, what you're suggesting seems a lot like shipping code before code review, and seems to have many of the same problems. What do you do in the long period before critical issues have been addressed? What do you do if the submitter just says "fuck you, my code works, I have more important things to do"?
Anyone can submit to most conferences and journals. Anyone at all! No PhD or reputation needed and it’s blind. All that matters is the quality of the research.
To submit to arxiv you need to be approved by someone as a legitimate researcher, or beg for reviews from people you’ve never met without any anonymity.
It’s somewhat a step backwards in some ways.
I don’t think that letting anyone submit is a good solution.
I think their process is totally reasonable and I’d probably do the same... but I do think it’s a fact that it’s now less accessible and more based on reputation and credentials and we should acknowledge that.
If you really have a decent breakthrough, I'm pretty sure your best bet is to directly contact some relevant people in the field.
I suppose that changing the technological base to fully electronic, immediate-mode publishing should help.
Building a review workflow, or several, around the fully electronic and open archive of all published results should be doable.
Storing online everything by default, including all the negative results, proofs of the null hypothesis, full datasets and code, etc, should not be overly expensive, but would help reproducibility and further research a lot.
I did not have to do anything like this.
Whenever the problem has come up for a colleague, it has been straightforward to endorse.
In my reply, I was just pointing out that peer review still plays an important role in the development of knowledge, and I was wondering if alternative systems that could replace it would actually be feasible. That led me somewhat off topic.
By the way, the “weaponization” of peer review is nothing new. You can see it used in the past to discredit research linking smoking or lead to health issues. You can also see some of the possible risks or benefits (depending on your point of view) of rejecting peer-reviewed research in the anti-vax movement.
I am, however, ambivalent or even perhaps against allowing public comments directly on a paper. We've all experienced, no doubt, a naive but loud internet commenter spreading FUD in comment sections and on social media about topics they simply know nothing about, and doing so prolifically. It is nice to think that good arguments will float to the top, but often it is the loudest and least nuanced argument, right or wrong, which gets taken up.
I've had lengthy discussions about this with my colleagues, and I don't think the problem is with peer review per se, it's with how peer review, and by extension, scientific literature, is interpreted and utilized. It's not just with the lay community, it's among scientists themselves, as well as scientific consumers as in engineering, healthcare, etc.
People now treat science, and scientific publications, as if it's a farm or factory. The produced commodity is the peer-reviewed paper. Once research, via a paper, is peer-reviewed, it's blessed as the truth, and if it's not, it's worthless. The goal of many institutions is now not to produce sound research per se, but to receive money (largely due to indirect cost income) to produce the commodity of peer-reviewed paper. Once a paper is accepted as in press, it can be cited as if it were truth, which provides a building block for something else. The truth per se doesn't matter, only the citability of it as peer-reviewed.
I realize this sounds a bit conspiratorial but I do think this is basically how it works at this point in time, at least in certain fields. Peer-review is overvalued, as are single papers, and contributions of particular researchers in many cases. That doesn't mean they don't have any value, just that their real value is probably much less than we assume.
I agree that peer review can be improved a lot, and will likely always have an important role in some form, but I don't think fundamental problems in the field will go away unless there's a change in perspective on the real scientific process as a whole, to something more nuanced and gradual.
Science is all about experimentation, yet we currently measure papers by number of citations rather than by experimental support.
Replication. If science has a gold standard, it is this.
Replication is much less valuable, scientifically and professionally, than non-scientists think it is. Simply repeating a published investigation runs a much higher risk of repeating any experimental errors or faulty assumptions that might have harmed the first one.
The whole point of science is to find knowledge that persists beyond one particular perspective; knowledge that is independently verifiable. Rote replication is not the best way to find this type of knowledge.
As for peer review, its purpose is simply to sharpen the communication of a completed study. Even if every study was replicated, the papers would still benefit from peer review.
I'm thinking of something like relativity or quantum mechanics. Suppose a study in those fields fails replication. The whole thing still holds together, to the point where controversies at the part per billion level make it to the front page of the newspaper. Perhaps even most studies, taken in isolation, would be found to have problems when subjected to the strictest criteria for replication. Choosing replication as a silver bullet would be an unnecessary distraction.
Now, what about fields where there is no unifying theory on the horizon? If replication is all we've got, then sure. I can certainly see the point, especially if the results affect personal decisions (diet, medications, etc.) or public policy.
I suspect that "gold standards" can hurt science as much as help. Telling people that science is bunk because of the "replication crisis" contradicts the fact that messy science has produced results of astounding accuracy and predictive power. Learning from success should be at least as important as installing safeguards against failure.
Gold standards are identified as such because they’re the best. Not everything needs to meet the gold standard to be debatable. Peer review is adequate for further research, but perhaps not policy initiatives. An unreviewed paper is enough to start simple inquiries. Et cetera
And there are also ways in which replication is orthogonal to peer review. Replication can't by itself tell you whether a piece of research makes a significant contribution to the field, or whether it is itself derivative, or poorly presented.
Actual replication is much more effort and consequently slower than just a "docker run".
To argue that publication should have fewer restrictions, one needs to show that the modest number of papers that are improperly rejected by every journal (not just the first submission) equals or outnumbers the overwhelming number of submissions that should not be published because they reflect a misunderstanding of their field or a misinterpretation of the data. For many (most) scientists, the problem is not that important results are unpublished, the problem is that it is almost impossible to keep up with the current gated literature, particularly when that literature is full of mistakes. While it is certainly true that some reviewers are biased and some ill-informed, most of the time papers are rejected because the authors did not communicate well (for papers that should be published) or because the authors did not understand that their data did not support their conclusion (papers that should be rejected).
The scientific literature needs a better signal to noise ratio, not more noise.
Yup, if you are actually doing good work, it is likely to be so different from what is going on that people have a hard time evaluating it.
My mom has helped many PhD students complete their degrees over the years, and in the process has helped with their papers. Several have been rejected, some multiple times. Frequently it came down to presentation issues (the student had not explained certain things clearly enough, or similar), or there was disagreement about how strong a conclusion one could draw from the results.
For example, recently a paper was rejected because the experimenters had forgotten or overlooked an important detail which meant they couldn't control for a certain variable. One of the reviewers picked this up, and rejected the paper because the findings could possibly be explained by this uncontrolled variable. So they had to resubmit with a weaker conclusion (this is when my mom got involved).
In this case the review process was harsh but fair. The experimenters had goofed, and the reviewer caught it.
Of course, when doing groundbreaking experiments, I guess the process might not be optimal. But the majority of scientists are not doing that.
Basically there are lots of reasons to get rejected that have nothing to do with the quality or validity of the research.
I suggested there were ways to approach writing it for that audience, but they'd need to extensively rework the paper. The reason I know that? I've done the same thing, for the same journal.
A simpler and more elegant solution is hard for the old guard to take in. You are sweeping away years of knowledge that they have built up about their earth-centred universe to propose an all new sun-centred solar system.
If you are in the position of having written some work that is ahead of its time, then how do you know that? Rejection isn't good enough; the reasons why are important. If you have not presented your work neatly, or have used language that others don't like, then that won't help. If you haven't talked about some of the smaller points that the learned people think are important about the topic, then you could be done for. We need something more than 'rejection': a 'failed for presentation reasons' verdict could be helpful, so we can know whether the jury is still out on some 'rejected' ideas.
Nothing ever gets rejected for being twenty years ahead of its time. It would be helpful if that were an option.
One feature of peer review that's overlooked, is that there are different journals with different standards, and you can choose a journal that fits what you're trying to accomplish.
This peer review then seems to not serve much purpose.
An example is the recent paper proving the first finite bound on prime gaps, where the result was quickly strengthened several times by other authors long before the paper made it into any journal.
Another example here: https://mobile.twitter.com/adrianprw/status/1156534906618597...
Most venues are double blind, so reviewers wouldn't know about the latter things. Having fewer women in the reviewer pool is unfortunate, but it's hard to see how it would affect the mean review quality in a direct way, assuming that men and women are equally good at reviewing (maybe in an indirect way it could harm reviews by increasing the load per reviewer).
BUT - paid reviews are not feasible for many journals, especially open access/low-cost journals where margins may be thin. (Conversely, Elsevier should be able to pay their reviewers in gold bullion, rather than taking advantage of the scientific community's altruism, but that's another conversation).
Even something like an app that you submit papers to, which then anonymizes them, standardizes formatting and presents them to randomly selected experts for review in a certain time period would go a long way compared to the current process. In my lab at a prestigious Canadian university, the authors' names were visible, as well as the reviewers' names, because any back and forth post-review would happen through email. I think journals and universities need to buy into a system that standardizes this process so at least these basic requirements of impartiality can be met.
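For what it's worth, the matching step itself is simple. Here is a minimal sketch in Python, assuming hypothetical Submission and Reviewer records and a crude coauthorship-based conflict check (this is not any real platform's API):

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        paper_id: str
        text: str                                 # reviewers see only this
        authors: frozenset                        # stripped before review
        topics: set = field(default_factory=set)

    @dataclass
    class Reviewer:
        name: str
        expertise: set
        recent_coauthors: set                     # crude conflict-of-interest data

    def assign_reviewers(sub, pool, n=3, seed=None):
        """Randomly pick n conflict-free topical experts for one submission."""
        rng = random.Random(seed)
        eligible = [r for r in pool
                    if r.expertise & sub.topics                  # topical match
                    and not (r.recent_coauthors & sub.authors)]  # no coauthors
        if len(eligible) < n:
            raise ValueError("reviewer pool too thin for this subfield")
        return rng.sample(eligible, n)

    # Reviewers receive only (paper_id, text); all post-review back and forth
    # is routed through the platform with a deadline, never personal email.

This wouldn't solve the scarce-reviewer problem (see the reply below), but it would make the blinding, the conflict check, and the timeline mechanical rather than a matter of trust.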
> an app that you submit papers to, which then anonymizes them, standardizes formatting and presents them to randomly selected experts for review in a certain time period
Except that it's not an app (that would be kind of pointless; why would I want that on a phone instead of a web app?), that's pretty much the standard in my field. Anonymization is done by the authors (if the conference/journal wants it) and ensured by the conference submission system. Formatting is standardized anyway.
The problem is the "randomly selected experts" part, for many (most?) subfields suitable reviewers are a scarce resource, to the point that in quite a few double-blind processes I've seen one or both parties could recognize each other's work or review.
I've always found OpenReview a nice way to at least make the process public, especially when anonymity is an illusion anyway.
Secondly, I think the problem of having few experts in the field is a legitimate one, but I don't think the status quo is a good solution either. I think it erodes trust in the scientific community, which is very hard to build back up.
On the app/phone app thing: Ah, fair enough, sorry about that.
Over time, we have shown that many mistakes are made in labs, and we know that many experiments are not replicable.
I think it's time that we ask researchers using public funds to film their entire experiment from beginning to end. Every pipette, every prep, etc.
Peer review should also have a video audit. Then a replication in another lab. The cost of ensuring results are what they say they are would be offset by fewer research dollars wasted on false rabbit holes.
Video is simply a better solution.
Most studies become uninteresting after you read their terrible data.
More recently, some megajournals have joined the trend and are trying to expand it, particularly eLife: https://elifesciences.org/labs/ad58f08d/introducing-elife-s-... (also covered in https://www.nature.com/articles/d41586-019-00724-7 ).
For some context see:
Peer review is a very helpful component of a system that slowly filters and refines academic work. It’s like the first air intake stage of a jet turbine.
Is this really commonplace?
They've started detecting incognito mode and won't show content. All you have to do is use a completely different browser (like Safari if you normally use Chrome), since the detection is different and changes for every site.
FYI, Chrome is making changes to make it harder to detect incognito mode. https://nakedsecurity.sophos.com/2019/07/22/chrome-76-blocks...