
What would be a good alternative to peer-review though? Genuinely interested.



It strikes me that the challenge of peer review is that it is non-transparent and gate-keeping. It is generally impossible to read the peer-review feedback for a published paper, and it's similarly difficult to publish controversial pieces that break established norms.

Contrast this with the review culture of popular open source projects: major pull requests have extensive, transparent dialogue, and disagreements are aired in the open. Meanwhile there is no barrier to releasing anything new.


There's a difference with scientific publishing for most disciplines. I work for a non-profit scientific publisher. Much of the research coming out today is extremely specialized, so there aren't many people in the world with the expertise to referee a given paper. The review is therefore anonymous, in order to get a more frank and honest assessment (we hope).

I am also a developer. There are far more people who can give feedback on most code, and it doesn't take much apparatus or money to get good at coding. CS/OSS is also relatively young compared to other disciplines; we've almost always done things with honest and sometimes brutal feedback, and even academic research is usually announced and shared at conferences. Look at what happened when Nature tried to make an AI journal... [1] I think there is still value in someone getting paid to manage research and referees and ensure a high-quality product. Open peer review is just going to be a lot harder for disciplines with such a limited pool of experts.

[1] https://www.sciencemag.org/news/2018/05/why-are-ai-researche...


Anonymous doesn’t have to mean secret, although it isn’t always trivial to achieve.


Examine the facts:

- Many research fields have anywhere between 3 and 10 groups working in them.

- A review always betrays the background of the reviewer. You just cannot "mask" the shape of your knowledge around a highly specialized subject: your approach to the problem, the issues you are most interested in (and hence know more about), the references you give, etc.

With a closed system, you only get to see:

- Reviews of your own paper, without knowing who wrote them.

- Reviews of the papers you yourself review (there are usually 3-4 reviewers per paper, and you get to see the other reviewers' reviews and who they are).

Even with only these pieces of information, academia is already full of grudges and strong-arming. Here's an anecdote:

I was once at a conference and met a colleague who made the effort to approach me, mention that he was great friends with my advisor (who didn't attend), and just wouldn't stop praising him. After a while, he switched to ranting about the review process, and how a specific reviewer was a moron who didn't understand anything, and so on. Of course, that reviewer was my advisor; I knew it, but he obviously didn't.

Given the high egos involved in academia, I am pretty sure that if there were a public track record of all reviews, researchers would figure out who reviewed their papers. The backlash would then be ugly, and the entire ecosystem would end up more corrupt than it already is (imho).

I've been on all sides of the fence, and I just don't see a better solution than keeping reviews secret. Nice words don't matter (much, to most people) when the e-mail starts with "We are sorry to inform you...".


In the current system, when you get to see the reviews of your paper, there are still the same challenges for anonymity and ugly consequences that you’re pointing out. Perhaps even to a greater degree, because the information is “leaked” through gossip rather than in a formal process.

If all reviewer comments were collected and then published at the same time to prevent them influencing each other, how could that make the situation with respect to “frank and honest” reviews any worse than it is already?

Are there entirely secret journals where nothing is accessible to non-contributors, for the same reasons? I suspect the answer is no, which is why these feuds often play out in attack-counterattack sequences of published papers.


> So when you see the reviews of your own paper, there are the exact same challenges for anonymity and ugly consequences as you’re pointing out.

Yes, and this already happens (hence the grudges I was commenting about).

> How does maintaining secrecy for the per-paper improve the situation with respect to “frank and honest” reviews?

By increasing the uncertainty. If I am 95% confident it is you who screwed me last year and 50% certain this paper I got now is yours, I will look at it with a harsher attitude than if I'm only 20% and 20% certain.
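(To make that concrete, treating the two guesses as independent: in the first case there is roughly a 0.95 × 0.5 ≈ 48% chance that the paper in front of you was written by the person who burned you, versus 0.2 × 0.2 = 4% in the second.)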

If you were to publish all reviews then researchers would have much more information to convince themselves that it indeed is "that guy".

It is sad, and I personally despise it, but that's what it is from my personal experience.

PS: I quit academia after getting my PhD, and this was among the reasons. The other major reasons were that I didn't want to keep relocating somewhere else in the world every 2 years until my 40s-50s, I don't like the overselling and dishonesty about results in general, and I actually enjoy working in industry too.


> By increasing the uncertainty. If I am 95% confident it is you who screwed me last year and 50% certain this paper I got now is yours, I will look at it with a harsher attitude than if I'm only 20% and 20% certain.

Ouch, that’s a good (and depressing) point.


It sounds like machine checked proofs or other mechanical formalisms need to be employed to ensure scientific validity.


Machine checked proofs only work for deductive, non-experimental disciplines. You can't use machine checked proofs to tell if a drug is effective or not. Human referees have to be involved no matter how advanced the AI is.


You can precisely "quarantine" the "human referee part" within the framework of a formal proof. That's still a huge step forward from the status quo, where reasoning and opinion are smeared together and nothing is formal.


> You can precisely "quarantine" the "human referee part" within the framework of a formal proof.

I think it is unlikely you can separate the two precisely. If you could, mathematicians could replace doctors who make diagnoses. I don't think all empirical facts fit nicely into a formal proof system.

Still, any effort to reduce the burden on human referees would be welcome. Such a proof system for peer review would be most useful to math and theoretical CS (though not so much to the biomedical sciences).


I take it as trivially true that they can be separated shallowly. Then the question is how big the informal "leaps of faith" are. If they are big, the next thing is to look for "strands" of reasoning (lemmas that may not be connected) inside them, as a sort of formal proof reverse-marginalia.


How does a machine proof check a psychology study? Or a drug trial?


open question! :)

We could at least have the data and the statistical analysis (code) accompany the paper.
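As a minimal sketch of what that could look like in practice (the file name, column names, and choice of test below are all hypothetical), a companion script lets anyone re-run the reported comparison from the deposited data:

```python
# Hypothetical companion script for a paper: re-run the reported comparison
# from the raw data deposited alongside the PDF.
import pandas as pd
from scipy import stats

# "trial_data.csv" is a placeholder name for the data file shipped with the paper,
# assumed to have "group" and "outcome" columns.
data = pd.read_csv("trial_data.csv")

treated = data.loc[data["group"] == "treatment", "outcome"]
control = data.loc[data["group"] == "control", "outcome"]

# Welch's t-test (not assuming equal variances), just as an example analysis
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```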


Yes: a fat table, a big proof, and some marginalia, which is wonderful pure humanities.


Sounds like you are trying to solve a social problem with technology.


Gatekeeping in science is a theoretical good...the difficulty in practice is highlighted by the general 'three paths' for citations: articles that never get cited, articles that get cited a lot at first and then are forgotten, and articles that aren't cited until much later and then are cited en masse. That last category is really the 'good science', because it does what we want foundational research to do: disrupt current and limiting ways of thinking.

Dialogue is an unmitigated good, but dialogue also tends to be dominated by the normative voice. Where science is different from OSPs is that OSPs need to work with what exists now, whereas science needs to engage in tension and informed dialogue in a more foundational way.


I'd favor something like clustering and client-side filtering over gatekeeping. Mainstream academic research can coexist just fine with fringe theories and industry research in the same database as long as one can efficiently distinguish between them.

And keeping everything together in one (de)central database would make it possible for people, so inclined, to annotate other people's work with new references to support or debunk the work long after it was published, or to clarify ambiguous language, etc. Those annotations, too, could be subject to filtering as needed.

People could build reputations and whole careers around tying up loose ends instead of the "publish or perish" grind.


> Mainstream academic research can coexist just fine with fringe theories and industry research in the same database as long as one can efficiently distinguish between them

How do you propose that is done?

For my job I build neural networks for text processing. I spend a lot of time reading papers in the field.

And yet if I look at an adjacent field (even something as close as open information extraction), I have trouble telling which papers are important.

How on earth am I supposed to tell if something in a further removed field which attracts more crackpots (say probability theory or something) is a fringe theory or a breakthrough from a new author?

I'd note the example of the Gaussian correlation inequality[1], where even people in the field weren't aware it had been proven until three years after publication[2].

[1] https://www.quantamagazine.org/statistician-proves-gaussian-...

[2] https://en.wikipedia.org/wiki/Gaussian_correlation_inequalit...


Arxiv accidentally created a crackpot filter from what is apparently a simple semantic classifier combined with a count of the stop words used in the article, which acts as a very coarse "style" metric.[1] Reasonable-looking crackpot work is usually classified as "general physics" instead of being rejected entirely. Unfortunately, it also lumps some legitimate but unconventional research in with the crackpots and makes Arxiv itself somewhat of a gatekeeper.
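A rough sketch of what that kind of coarse "style" filter could look like (this is my guess at the idea, not arXiv's actual code; the labels, corpus, and model choice are made up):

```python
# Toy "style" filter: bag-of-words classifier plus a stop-word-rate feature,
# in the spirit of the coarse metric described in [1]. Not arXiv's real system.
import numpy as np
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer

def stop_word_rate(texts):
    """Fraction of tokens that are common stop words, as a single-column feature."""
    rates = []
    for t in texts:
        tokens = t.lower().split()
        rates.append(sum(tok in ENGLISH_STOP_WORDS for tok in tokens) / max(len(tokens), 1))
    return np.array(rates).reshape(-1, 1)

features = make_union(
    TfidfVectorizer(max_features=5000),                    # crude semantic signal
    FunctionTransformer(stop_word_rate, validate=False),   # crude style signal
)
model = make_pipeline(features, LogisticRegression(max_iter=1000))

# A labelled corpus of abstracts would be needed to train it (hypothetical here):
# model.fit(abstracts, labels)   # labels: e.g. "mainstream" vs "general physics"
# print(model.predict(["We prove that the aether explains dark energy..."]))
```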

What I'm imagining is a sort of layered approach with raw "article" (or some other unit) data at the bottom and indexing, tagging, clustering, filtering, reviewing, commenting, linking, etc. layered on top with possibly many implementations to choose from.

The analysis/filtering would also apply to people augmenting the data, so if some users are really good at tagging certain types of junk as junk, you could easily filter out that junk. If there emerges a cluster of users who keep tagging certain interesting material as "woo" then you could filter them out or even use them to discover interesting material.
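As a sketch of the kind of tag-based filtering I mean (the data, names, and trust heuristic are all illustrative placeholders):

```python
# Toy tag-weighted filter: trust each user's "junk" tags in proportion to how
# often their past tags agreed with my own, then hide articles whose weighted
# junk score crosses a threshold. Everything here is illustrative.
from collections import defaultdict

# article_id -> list of (user, tag) pairs
tags = {
    "paper-1": [("alice", "junk"), ("bob", "interesting")],
    "paper-2": [("alice", "junk"), ("carol", "junk")],
    "paper-3": [("bob", "interesting")],
}

# my own judgements, used to estimate how much to trust each tagger
my_tags = {"paper-1": "junk"}

def user_weights(tags, my_tags):
    """Trust each user in proportion to how often their tags match mine."""
    agree, total = defaultdict(int), defaultdict(int)
    for article, judgement in my_tags.items():
        for user, tag in tags.get(article, []):
            total[user] += 1
            agree[user] += (tag == judgement)
    return {user: agree[user] / total[user] for user in total}

def junk_score(article, tags, weights):
    """Sum of trust weights of users who tagged this article as junk."""
    return sum(weights.get(user, 0.5)            # unknown taggers get weight 0.5
               for user, tag in tags.get(article, []) if tag == "junk")

weights = user_weights(tags, my_tags)
visible = [a for a in tags if junk_score(a, tags, weights) < 1.0]
print(visible)  # only "paper-3" survives this toy filter
```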

I doubt there's a silver bullet (at least today) that could reliably distinguish between unconventional-bad and unconventional-good work, but keeping the baby and the bathwater together opens the door to such an algorithm in the future.

[1] https://arxiv.org/abs/1603.03824


Here's a question: what sort of theoretical good does the gatekeeping serve? What is its functional purpose? I think that answer would prove insightful for determining the future.

I can see two non-cynical reasons for it. One is educational purposes. Two would be quality control: if something is already debunkable or widely known, there is no point in Nature publishing "thinking is done in the brain".

The cynical abuses and perverse incentives, however, are myriad, and academic politicking is already infamous. I don't know what solution will work, but understanding the system's goals and functions should help decide what to replace and what to keep.


> Here's a question: what sort of theoretical good does the gatekeeping serve? What is its functional purpose? I think that answer would prove insightful for determining the future.

Because I am not at all prepared to assess the quality or accuracy of a quantum physics paper, and I doubt a molecular biologist is prepared to assess the quality or accuracy of a sociology paper. From what I have seen, there are a lot of cargo cultists who think they understand a field, use big words, and just don't get it. I feel there should be some filtering function to remove that.

As an example:

This guy: https://www.mountainproject.com/forum/topic/113602967/near-m...

This guy: https://www.reddit.com/r/badphilosophy/comments/7x3t1g/stand...

This guy: http://ecclesiastes911.net/

and

This guy: https://arxiv.org/search/math?searchtype=author&query=Simkin...

are all the same person.


That guy has some pretty interesting articles.


So the short answer is I don't know, and I don't think there is a single answer. Different types of scholarship need different standards and processes, in my eyes. It isn't so much "what's the alternative" to what is really a single step as much as it is a rethink of how scholarship is performed, evaluated, rewarded, and considered. Otherwise the incentives across the playing field don't change enough to make meaningful change possible.

What would help? My opinions...

-FOSS publishing

-Abandonment of journal metrics, they really serve no purpose besides trophy hunting at this point

-Stronger transparency initiatives

-Public review (c.f. Lim)

-better journal metadata (it affects citations)

-Greater shaming of misbehavior of ALL types

-Elimination of stupid policies like 'issue lengths'. An article either should or should not be published; saying there isn't space in a given issue is insane given that >>90% of article access is online.

-a hell of a lot of older faculty retiring out of the way of science.


> -Elimination of stupid policies like 'issue lengths'. An article either should or should not be published; saying there isn't space in a given issue is insane given that >>90% of article access is online.

I'd prefer to keep article lengths in place. It's easy to write a lot of text (as quite a lot of high school students understand), but it's challenging--and very important--to be able to convey information in a very concise format. The process of trimming down the text to squeeze it under the page limit is very useful in getting the ideas contained in it to a more refined and easily-understood format.


I meant limits of n articles per issue rather than allowing articles of length > m. My bad for the lack of clarity :)

I agree with your point, in theory. I think the unfortunate thing lost in most efforts at concision is detail rather than fluff. Fluff is bad, but detail is important...especially detail on how things are situated in the prior literature. As a reviewer, I have often found a lot of articles using 'strategic concision' to gloss over not doing things properly, or flat out not knowing what they are doing. Saying 'we used method x' means I have to trust you did it properly...or infer from other things how well you did it. Supplementary material could be a potential route here, but I just sent an article back for the second time. The first time they did the 'we did x' and I asked for more detail. The second time they said 'we did x by doing y and z'...but y and z were very, very wrong.

There are places where being able to explain concepts concisely is important or a viable tactic. Journal manuscripts are (in theory) a permanent archive of new knowledge for human society and have never struck me as the place where concision is a relevant parameter.


The Sphenodon in the room is that the raison d'être of the entire academic structure is unclear, so that should be clarified first. Advancing science is one thing. Publishing as many papers as possible, breaking research into MPUs, is another. Lecturing to youth completely uninterested in being taught is yet another vertical.


I think you may be confusing vague with unclear...

The inclarity and the multiple verticals seem more like an example of Goodhart's law than anything else.

Summarized: the goal of academia is, and for most of its history (in whatever form it took) has been, to produce and disseminate new knowledge for human society. Creating new knowledge is what we now call research. Publishing papers isn't a raison d'être...it's a metric for that.

I'll note that Ernest Boyer is far more articulate on this than I am...


Open publishing and commenting would be a good start: having a dialog, as is done at conferences. Older academic journal articles (pre-1900) read much more like discussions than like the hundred-dollar-word vomit of modern academic publishing. The broken incentives are at the core of this rotten fruit, though. Just making journals open isn't enough.


We have (almost) open publishing and open commenting. Did that improve anything?


I think so. I've followed the back and forth on a few papers on openreview.net and I found the comments nearly as interesting and informative as the paper they were commenting on.

There are a lot of things that only exist as suspicions and intuitions inside of a researcher's head, and that sort of information is much more likely to come out in a back and forth reviewing a paper than in actual published literature.


I have been distinctly unimpressed with open peer review, and given it's antithetical to something I actually think does concretely have benefit (double-blind peer review), I've gotten rather curmudgeonly about it as a fix.


There's open commenting? I've never seen the back and forth of the review process be published. It should be published.


https://hypothes.is supports threaded comments on anything with a URI, including PDFs and specific sentences or figures thereof. All you have to do is register an account and install the browser extension or include the JS in the HTML.

It's based on open standards and an open platform.

W3C Web Annotations: http://w3.org/annotation

About Hypothesis: https://web.hypothes.is/about/
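Assuming the public search API at api.hypothes.is works the way I remember from the docs (check https://h.readthedocs.io/ for the exact response shape before relying on this), pulling existing annotations on a paper could look roughly like:

```python
# Rough sketch: fetch public Hypothesis annotations for a paper's URL via the
# search API. Endpoint and field names are assumptions from memory of the docs.
import requests

paper_url = "https://arxiv.org/abs/1603.03824"  # any URI works, per the comment above
resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"uri": paper_url, "limit": 20},
    timeout=10,
)
resp.raise_for_status()

for row in resp.json().get("rows", []):
    print(row.get("user"), "->", (row.get("text") or "")[:80])
```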


Some machine learning conferences use openreview.net for this.


Researchers.One(1) was founded to address the issues with peer review and the academic journal system.

“A platform for scholarly publishing and peer review that empowers researchers with the

Autonomy to pursue their passions,

Authority to develop and disseminate their work, and

Access to engage with the international community of scholars.”

Harry Crane, an Associate Professor of Statistics at Rutgers, is one of the founders and a good follow on Twitter.

1 - https://researchers.one/


This answer will not please the 'publish or perish' economy that has built up around this: editorial discretion (which is a sort of peer review) and about 10x fewer papers published per person involved.

It's how science worked for most of its history. Really, Elsevier is responsible for the profusion of people publishing fluff in weak journals, subsidized by the government.


Peer-review is a good tool, but it is not the only tool that should be considered. And the problem is not peer-review specifically, but the implementation of the system and rent-seeking middlemen like Elsevier who do nothing but extract payment from taxpayers and researchers.

PeerJ and PLOS One are good starts, but until academia is no longer a slave to "impact factor," they will have limited effect.


> What would be a good alternative to peer-review though? Genuinely interested.

Replicating research? That's how we prove findings are not wrong or biased, usually.


Yes. The fact that we have so much published research that can't be replicated indicates to me that peer-review is doing less work than we think it is.


Actually I came up with one...

The top 50 research universities could get together and decide to no longer give consideration to pay-walled articles. It wouldn't affect them AT ALL...the research they do is already treated as pretty much canon no matter where it appears. Everyone else would follow, and the journals would quickly respond with change.

Yeah I know it's unlikely but they have enough social capital to actually have an impact.


That's not an alternative to peer review. It's a solution to remove the journals from the research, and it has a lot of ifs.


fair point.


Why not just build a kind of social network where people post research papers, "like" interesting and high-quality papers, reference papers relevant to their research, and post comments, and where for every participant you can see which papers they have published and reviewed? Sounds fairly simple. Why not?


People have tried this, or spins on it, and they don't particularly gain traction. Part of the problem is these have to exist alongside the traditional structure, and "Hey academics, do you want to be busier for intangible rewards?" is a hard sell.


What reasons, if any, are there for these to exist alongside the traditional structure, other than the fact that only specific reputable traditional journals are recognized as valid publication venues by university boards and other relevant institutions?


So you mean, "other than the fact that reputable journals are the entire basis for the dissemination of new research results, the core of hiring and promotion, and a major aspect of funding"?

The simple fact is that, like any replacement system, there has to be a reason for it to exist. And for a social network, there have to be network effects in place. Fragmentation is a problem. "Oops, we ran out of VC money..." is a problem. Hell, "We got bought by Springer" is a problem.

In the very long term, they don't have to exist alongside each other. In the medium term, though, they do. Otherwise, it's like going from "I'm going to quit my job tomorrow with no savings!" to "I live sustainably on my off-the-grid farm" with no plan for how to eat in the middle.


I think the op is not saying that peer review as a concept is broken, more that the current peer review system is broken.

I think we could start fixing things by incentivizing good peer review. There are a lot of ways to do this, and I'd be interested in a discussion of different schemes. It will by no means be easy, but I think that's what needs to happen.


For reference, good peer review is incentivized in some places. A journal I read gives awards every year for the best reviewers, and good reviewers end up as Associate Editors.


Thanks for pointing that out. I have started to see those recently. Not sure if it's a recent phenomenon or if I've only noticed it recently.


A good alternative to the current system of pre-publication peer review would be post-publication peer review. The latter, combined with one-pass editorial review, is how scientific publication worked almost universally pre-WW2. There are many scientific disciplines where everything important happens in preprints and publication in a journal is for archival purposes more than communication. Economics, physics, mathematics, political science, genetics and genomics: every discipline with substantial coverage on SSRN or the arXiv works on post-publication peer review.


For many STEM fields, conference talks, presentations and written contributions are notably faster and better suited to non-tenured applied researchers than the journals route.


unless the people who review their tenure cases don't count them because they are woefully out of date and out of touch.

It has been amazing to come to understand just how effective the corporate takeover of universities has been in shaping the thinking of senior faculty. I have had faculty complain about policies that they are literally the only ones empowered to change, because it has never occurred to them that they actually have power.

Conferences, generally, are a better path to science because of dialogue. Having a conference with 2,500 presentations because everyone needs another CV line item doesn't accomplish that, because no one is in the room.


It depends on how much waving you actually need for your career or personal satisfaction, though. All this openness, transparency, and calls for dialogue are the exact opposite of what I see in my industrial, applied field of research. Most results here are kept private in order to attempt first exploitation, or to be ready in case a suitable market arises. I understand that pure sciences and medicine, among STEM fields, work inherently differently, though, and rightly so.




