Contrast this with the peer review culture of popular open source projects: major pull requests have extensive and transparent dialogue, and disagreements are aired in the open. Meanwhile there is no barrier to releasing anything new.
I am also a developer. There are far more people who can give feedback on most code, and it doesn't take much apparatus or money to get good at coding. CS/OSS is also relatively young compared to other disciplines, and we've almost always done things with honest and sometimes brutal feedback. Even academic research is usually announced/shared at conferences. Look at what happened when Nature tried to make an AI journal.... I think there is still value in someone getting paid to manage research and referees and ensure a high-quality product. Open peer review is just going to be a lot harder for some of these disciplines with limited experts.
- Many research fields have anywhere between 3 and 10 groups working in them.
- A review always reveals the background of the reviewer. You just cannot "mask" the shape of your knowledge around a highly specialized subject: your approach to the problem, the issues you are most interested in (and hence know more about), the references you give, etc.
With a closed system, you only get to see:
- Reviews of your own paper, without knowing who wrote them.
- Reviews of the papers you review (there are usually 3-4 reviewers per paper, and you get to see the other reviewers' reviews and who they are).
With these pieces of information alone, academia is already full of grudges and strong-arming. Here's an anecdote:
I was once at a conference and met a colleague who made a point of approaching me to say that he was great friends with my advisor (who didn't attend), and he just wouldn't stop praising him. After a while, he switched to ranting about the review process, and about how a specific reviewer was a moron who didn't understand anything, and so on. Of course, that reviewer was my advisor; I knew it, but he obviously didn't.
Given the big egos involved in academia, I am pretty sure that if there were a track record of all reviews, researchers would figure out who reviewed their papers. The backlash would then be ugly, and the entire ecosystem would end up more corrupt than it already is (imho).
I've been on all sides of the fence, and I just don't see a better solution than secret reviews. Nice words don't matter (much, to most people) when the e-mail starts with "We are sorry to inform you...".
If all reviewer comments were collected and then published at the same time to prevent them influencing each other, how could that make the situation with respect to “frank and honest” reviews any worse than it is already?
Are there entirely secret journals where nothing is accessible to non-contributors, for the same reasons? I suspect the answer is no, which is why these feuds often play out in attack-counterattack sequences of published papers.
Yes, and this already happens (hence the grudges I was commenting about).
> How does maintaining secrecy for the per-paper improve the situation with respect to “frank and honest” reviews?
By increasing the uncertainty. If I am 95% confident it is you who screwed me last year and 50% certain this paper I got now is yours, I will look at it with a harsher attitude than if I'm only 20% and 20% certain.
If you were to publish all reviews then researchers would have much more information to convince themselves that it indeed is "that guy".
It is sad, and I personally despise it, but that's what it is from my personal experience.
PS: I quit academia after getting my PhD, and this was among the reasons. The other major reasons were that I didn't want to keep relocating somewhere else in the world every 2 years until my 40s-50s, I don't like the overselling and the general dishonesty about results, and I actually enjoy working in industry too.
Ouch, that’s a good (and depressing) point.
I think it is unlikely you can separate the two precisely. If that were possible, mathematicians could replace doctors at making diagnoses. I don't think all empirical facts can fit nicely into a formal proof system.
But still, any effort to reduce the burden on the human referees would be welcome. Such a proof system for peer review would be most useful in math and theoretical CS (though not so much in the biomedical sciences).
We could at least have the data and the statistical analysis (code) accompany the paper.
Dialogue is an unmitigated good, but dialogue also tends to be dominated by the normative voice. Where science differs from OSPs is that OSPs need to work with what exists now, whereas science needs to engage in tension and informed dialogue in a more foundational way.
And keeping everything together in one (de)central database would make it possible for people, so inclined, to annotate other people's work with new references to support or debunk the work long after it was published, or to clarify ambiguous language, etc. Those annotations, too, could be subject to filtering as needed.
People could build reputations and whole careers around tying up loose ends instead of the "publish or perish" grind.
How do you propose that is done?
For my job I build neural networks for text processing. I spend a lot of time reading papers in the field.
And yet if I look at something in an adjacent field (even something as close as open information extraction) I have trouble telling which papers are important.
How on earth am I supposed to tell if something in a further removed field which attracts more crackpots (say probability theory or something) is a fringe theory or a breakthrough from a new author?
I'd note the example of the Gaussian correlation inequality where even people in the field weren't aware it had been proven for 3 years after publication.
What I'm imagining is a sort of layered approach with raw "article" (or some other unit) data at the bottom and indexing, tagging, clustering, filtering, reviewing, commenting, linking, etc. layered on top with possibly many implementations to choose from.
The analysis/filtering would also apply to people augmenting the data, so if some users are really good at tagging certain types of junk as junk, you could easily filter out that junk. If there emerges a cluster of users who keep tagging certain interesting material as "woo" then you could filter them out or even use them to discover interesting material.
I doubt there's a silver bullet (at least today) that could reliably distinguish between unconventional-bad and unconventional-good work, but keeping the baby and the bathwater together opens the door to such an algorithm in the future.
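The layered tagging-and-filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not a real system: the functions, the "junk" label, and the idea of scoring taggers against some trusted reference set are all assumptions made up for the example.

```python
from collections import defaultdict

def tagger_reliability(tags, trusted_junk):
    """Score each user by how often their 'junk' tags agree with a trusted
    reference set of known-junk articles. tags is a list of
    (user, article, label) tuples; trusted_junk is a set of article ids."""
    scores = defaultdict(lambda: [0, 0])  # user -> [correct, total]
    for user, article, label in tags:
        scores[user][1] += 1
        if (article in trusted_junk) == (label == "junk"):
            scores[user][0] += 1
    return {user: correct / total for user, (correct, total) in scores.items()}

def filter_junk(articles, tags, reliability, threshold=0.5):
    """Keep articles whose reliability-weighted 'junk' vote stays below
    the threshold. Articles with no weighted votes are kept."""
    junk_score = defaultdict(float)
    weight_sum = defaultdict(float)
    for user, article, label in tags:
        w = reliability.get(user, 0.0)
        weight_sum[article] += w
        if label == "junk":
            junk_score[article] += w
    return [a for a in articles
            if weight_sum[a] == 0 or junk_score[a] / weight_sum[a] < threshold]
```

Note that the same machinery supports the inversion mentioned above: a cluster of users who reliably tag interesting material as "woo" is just a set of taggers whose weights you flip from a filter into a discovery signal.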
I can see two non-cynical reasons for it. One is educational purposes. The other is quality control: if something is already debunkable or widely known, there is no point in Nature publishing "thinking is done in the brain".
The cynical abuses and perverse incentives, however, are myriad, and academic politicking is already infamous. I don't know what solution will work, but understanding the system's goals and functions should help decide what to replace and what to keep.
Because I am not at all prepared to assess the quality or accuracy of a quantum physics paper, and I doubt a molecular biologist is prepared to assess the quality or accuracy of a sociology paper. In what I have seen, there are a lot of cargo cultists who think they understand a field, use big words, and just don't get it. I feel there should be some filtering function to remove that.
As an example:
This guy: https://www.mountainproject.com/forum/topic/113602967/near-m...
This guy: https://www.reddit.com/r/badphilosophy/comments/7x3t1g/stand...
This guy: http://ecclesiastes911.net/
This guy: https://arxiv.org/search/math?searchtype=author&query=Simkin...
are all the same person.
What would help? My opinions...
- Abandonment of journal metrics; they really serve no purpose besides trophy hunting at this point
- Stronger transparency initiatives
- Public review (cf. Lim)
- Better journal metadata (it affects citations)
- Greater shaming of misbehavior of ALL types
- Elimination of stupid policies like "issue lengths": the article either should or should not be published; saying there isn't space in a given issue is insane given that >>90% of article access is online
- A hell of a lot of older faculty retiring out of the way of science
I'd prefer to keep article length limits in place. It's easy to write a lot of text (as quite a lot of high school students understand), but it's challenging, and very important, to be able to convey information concisely. The process of trimming the text down to squeeze it under the page limit is very useful in getting its ideas into a more refined and easily understood form.
I agree with your point in theory. The unfortunate thing is that what gets lost in most efforts at concision is detail rather than fluff. Fluff is bad, but detail is important, especially detail on how things are situated in the prior literature. As a reviewer, I have often found a lot of articles using 'strategic concision' to gloss over not doing things properly, or flat out not knowing what you are doing. Saying 'we used method x' means I have to trust you did it properly, or infer from other things how well you did it. Supplementary material could be a potential route here, but I just sent an article back for the second time. The first time they did the 'we did x' and I asked for more detail. The second time they said 'we did x by doing y and z'... but y and z were very, very wrong.
There are places where being able to explain concepts concisely is important or a viable tactic. Journal manuscripts are (in theory) a permanent archive of new knowledge for human society, and have never struck me as a place where concision is a relevant parameter.
The lack of clarity, and the multiple verticals, seem more like an example of Goodhart's law than anything else.
Summarized: the goal of academia is, and for most of its history (in whatever form it took) has been, to produce and disseminate new knowledge for human society. Creating new knowledge is what we now call research. Publishing papers isn't a raison d'être... it's a metric for it.
I'll note that Ernest Boyer is far more articulate on this than I am...
There are a lot of things that only exist as suspicions and intuitions inside of a researcher's head, and that sort of information is much more likely to come out in a back and forth reviewing a paper than in actual published literature.
It's based on open standards and an open platform.
W3C Web Annotations:
“A platform for scholarly publishing and peer review that empowers researchers with the
Autonomy to pursue their passions,
Authority to develop and disseminate their work, and
Access to engage with the international community of scholars.”
Harry Crane, an Associate Professor of Statistics at Rutgers, is one of the founders and a good follow on Twitter.
1 - https://researchers.one/
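Since the platform is described as built on W3C Web Annotations, here is a minimal sketch of what one annotation looks like under that standard's JSON-LD data model, as it might attach a post-publication comment to a paper. The paper URL, comment text, and quoted passage are made up for illustration; the `@context`, `TextualBody`, and `TextQuoteSelector` structures come from the Web Annotation Data Model.

```python
import json

# Hypothetical annotation attaching a comment to a passage of a paper.
# Field names follow the W3C Web Annotation Data Model; the target URL
# and text are invented examples.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "A later replication appears to contradict this claim.",
        "purpose": "commenting",
    },
    "target": {
        "source": "https://example.org/papers/1234",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "thinking is done in the brain",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because each annotation targets a URL plus a selector rather than a platform-specific comment thread, annotations like this can live in any store and still point at the same passage of the same paper, which is what makes the layered, filterable annotation ecosystem discussed upthread plausible.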
It's how science worked for most of its history. Really, Elsevier is responsible for the profusion of people publishing fluff to weak journals, subsidized by the government.
PeerJ and PLOS One are good starts, but until academia is no longer a slave to "impact factor," it will have limited effect.
Replicating research? That's usually how we prove findings are not wrong or biased.
The top 50 research universities could get together and decide to no longer give consideration to pay-walled articles. It wouldn't affect them AT ALL... the research they do is already treated as pretty much canon no matter where it appears. Everyone else would follow, and the journals would quickly respond with change.
Yeah I know it's unlikely but they have enough social capital to actually have an impact.
The simple fact is that, like any replacement system, there has to be a reason for it to exist. And for a social network, there have to be network effects in place. Fragmentation is a problem. "Oops, we ran out of VC money..." is a problem. Hell, "We got bought by Springer" is a problem.
In the very long term, they don't have to exist alongside each other. In the medium term, though, they do. Otherwise, it's like going from "I'm going to quit my job tomorrow with no savings!" to "I live sustainably on my off-the-grid farm" with no plan for how to eat in the middle.
I think we could start fixing things by incentivizing good peer review. There’s a lot of ways to do this and I’d be interested in a discussion of different schemes. It will by no means be easy but I think that’s what needs to happen.
It has been amazing to come to understand just how effective the corporate takeover of universities has been in the thinking of senior faculty. I have had faculty complain about policies that they are literally the only ones empowered to change, because it has never occurred to them that they actually have power.
Conferences, generally, are a better path to science because of dialogue. But a conference with 2500 presentations, held because everyone needs another CV line item, doesn't accomplish that, because no one is in the room.