Why isn't preprint review being adopted? (theroadgoeson.com)
69 points by dbingham 48 days ago | 160 comments



One of my profs once remarked, "All of science is done on a volunteer basis." He was talking about peer review, which--as crucial as it is--is not something you get paid for.

Writing a review--a good review--1) is hard work, 2) can only be done by somebody who has spent years in postgraduate study, and 3) takes up a lot of time, which has many other demands on it.

The solution? It's obvious. In a free market, how do you signal if you want more of something to be produced? Bueller? Bueller?

Yeah, that's right, you gotta pay for it. This cost should just be estimated and factored into the original grant proposals--if it's not worth $5k or $10k to fund a round of peer review, and perhaps also funds to run confirming experiments--well, then it's probably not research worth doing in the first place.

So yeah, write up the grants to include the actual full cost of doing and publishing the research. It would be a great way for starving grad students to earn some coin, and the experience gained in running confirming experiments would be invaluable to help them get that R.A. position or postdoc.


I don't disagree with the proposed idea of paying for review, but I would prefer also to have guardrails to ensure a good review. I would be willing to pay for a good review because it makes the paper/investigation better. But let's face it: under the current paradigm, there are also a lot of really bad reviews. It's one thing when it's apparent that a reviewer doesn't understand something because of a lack of clarity in the writing. But it's also extremely frustrating when it's obvious the reviewer hasn't even bothered to carefully read the manuscript.

Under a payment paradigm, we need mechanisms to limit the incentive to maximize throughput as a means of getting the most pay and instead maximize the review quality. I assume there'd be good ways to do that, but I don't know what those would be.


So, we just need a meta-review to review the reviews. At a cost, of course. And in order to keep that honest, we need a meta-meta-review...


*chuckle* Recall that a paper is given to 3 or 4 reviewers. No need for hierarchies of reviewers; it's more like a jury: if all the reviewers more or less come to the same conclusion, we can have high confidence that the decision is the correct one.

Under the proposed plan, if one of the reviewers gave a review which was radically different, or was otherwise an obviously slapdash job, payment could be withheld and another reviewer commissioned.
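A rough sketch of what "radically different" could mean in practice, assuming reviewers return numeric recommendation scores (the scores, threshold, and function name below are purely illustrative, not part of any real system):

    # Hypothetical illustration: flag a review whose score sits far from the
    # median of the other reviews on the same paper.
    from statistics import median

    def flag_outlier_reviews(scores, max_gap=2.0):
        """Return indices of reviews that diverge sharply from the rest of the panel."""
        flagged = []
        for i, score in enumerate(scores):
            others = scores[:i] + scores[i + 1:]
            if abs(score - median(others)) > max_gap:
                flagged.append(i)
        return flagged

    # Three reviewers roughly agree, one is way off.
    print(flag_outlier_reviews([7.0, 6.5, 7.5, 2.0]))  # -> [3]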


Many CS conferences have something literally called a "meta-review", and then there are further senior people who read and oversee the meta-reviews. It stops there, though.


Unfortunately the state of meta-review is similar to that of reviews: it rarely delves deeper, and mostly acts as a summarizer for the independent reviews.


Who picks the meta reviewers?


Or possibly just a way to review the reviewers. This opens itself up to competitor bias, though, so it would need to be thought out in a way to minimize that.


We need Twitter Community Notes for science.


And also has limited value as we practice it today.

A useful review would involve:

(a) "This paper won't be accepted by Cochrane for meta analysis", "N=20 get out of here", ...

(b) Researchers provided their data files and Jupyter notebooks, the reviewer got them to run

(c) Reviewers attempt their own analysis for at least some of the data (think of the model of accounting where auditors look at a sample of the books)

(d) Reviewers come visit the lab and take a look at the apparatus

(e) Something like a PhD defense

(f) Summarize/formalize discussion of a paper that happens at a conference or online venue

(g) 5 months, 5 years, or 40 years later (once in my case) somebody goes through the math line-by-line and finds a mistake on equation 178 that propagates and affects the rest of the calculation. This knowledge ought to be captured and "stapled" to the paper.

I wouldn't say peer review is useless; I think it did improve papers I wrote a little, but reviewers do not put enough effort in to reliably catch validity problems. If you believe in meta-analysis, which you should, read this book:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1114087/

The first one (a) is really important because the overwhelming majority of papers are excluded in Cochrane reviews for most medical topics. If 80-90% of papers in some fields are not suitable for inference, that raises tough questions such as "Why does work like this get published?" and "Why does this work get funded?" If we got half as many papers but 80-90% of them were usable, that would be a revolution.


> I wouldn't say peer review is useless, I think it did improve papers I wrote a little

Let's get back to basics here. What is the whole point of doing things scientifically in the first place? I.e. what is so special about the combination of 1) communicable descriptions of experiments, which can be re-run by anybody, and 2) the performing of said experiments and reporting the results, and 3) having others review and reproduce the results?

Why are those 3 practices soooo special? It's because they are designed to either a) persuade everybody to agreement with you that your results are valid, or b) persuade you that you are wrong.

I.e., the scientific method is uniquely designed for the purpose of compelling agreement among peers. It's the 2nd best method we have of reaching consensus (the best being the methods of proof used in mathematics).

This is important because even if you make a new gee-whiz discovery, it's useless unless other people agree that it is real. And unlike politics, or a beauty contest, or an aristocracy, it doesn't matter who you are: if your results are observable and reproducible, other scientists will agree with you, and your discovery has some chance of being generally useful. And that's why reviewers are anonymous--it's supposed to cut through the biases and prejudices which, alas, sometimes prevent us from coming to consensus.

The point of peer review is to get a read on whether or not the author of the paper has justified their claims in enough detail that the results are likely to compel agreement. Yeah, we see if 3 or 4 people can agree on it, before asking everybody to agree on it. Whoda thunk?

Point being, peer review isn't to help you improve your paper.

> reviewers do not put enough effort into to reliably catch validity problems.

Yes. This. And why don't they? Because there's only so much work you can expect somebody to do for free. Let's pay them.


> If you believe in meta-analysis, which you should,

I view meta-analysis as being like those mortgage-backed securities which crashed the world back in 2008. I mean, yeah, theoretically, a bond whose yield is a weighted average of mortgages should be less risky than any one of the mortgages in it.

But....the devil is in the details. When I start seeing meta-analyses which claim that, e.g., masks don't protect you from a respiratory illness which spreads by coughing and sneezing, or that vaccines are at best worthless and at worst cause autism, wellll.....

....I have to conclude: garbage in, garbage out.


Unfortunately, what you'll actually incentivize is spending as little effort as possible to get the money paid out.


I hear this sentiment a lot.

There was a time when academia was intensely driven by culture. People did stuff because they cared about the community surviving.

It is, in fact, possible to choose the "cooperate" triangle of the prisoner's dilemma, but it takes a lot of work, and is vulnerable to being wrecked by people who don't care / believe in the goals.


If you keep increasing publication requirements in a Red Queen’s race, you both increase the amount of peer-review required and also reduce the time that qualified scientists have to do it. No matter how much people want to build a functioning system, they only have so many hours in the day.


A valid concern, but what is driving this red-queen race is that you are competing with people who cheat. If you’ve got one really solid paper published, but your competitor fraudulently published 10, it’s real tempting to fraud a few papers yourself.

But they multiply fraudulent papers because they can get away with it, and they can get away with it because nobody is really reviewing or replicating those results.

I propose a fix: if a paper couldn't get published unless somebody had replicated the results, there would be a lot fewer papers published, and the expectation of how many papers a prof should publish a year would fall to reasonable levels.

It’s the fraud which is killing us. Get rid of the fraud and a lot of the other problems with academia would start resolving themselves.


The idea that fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field. But it’s a misconception that’s largely driven by non-scientific media and bloggers. In practice literal fraud does exist, but it’s relatively rare in most scientific fields. The explosion in publication rates is largely caused by a combination of (1) more people entering the field, and (2) “paper slicing” where people submit smaller contributions as part of each paper, in order to maximize publication metrics.

As for “no publication without replicating the results”: think for a second about how that would work. How is anyone going to replicate a result if it’s not published? This is literally the reason that scientific publication exists in the first place: to provide a channel by which new results can be conveyed to other scientists so that they can learn from, and in some cases, replicate those results. So clearly you don’t want to prevent publication of non-replicated results, you just want to prevent people from getting some kind of academic brownie-point credit for those publications. Perhaps that’s workable, I don’t know. But you need to be a lot more knowledgeable and curious about the scientific publication system in order to have any hope of improving it. Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.



> “paper slicing”

I'm glad we agree that paper slicing is also plaguing academia. When I was a grad student, it bugged me to no end that those guys at Stanford were out-publishing me--because they got enough results for 1.5 papers, and then made (3 choose 2) papers out of them.

And yeah, if you think I wasn't tempted to follow suit, think again. I eventually left academia because I didn't want to cheat, or compete with cheaters, when we were being graded on the curve of "publish or perish".

> fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field.

So, you are an assistant prof who was passed over for tenure, or you didn't get that Harvard appointment--it went to Francesca Gino, whose vita looks sooooo much better than yours, because she's doing TED talks to promote her book called "Why It Pays To Break The Rules In Work And Life". She's making $1 million a year consulting, while you are trying to get funding for your research, which isn't as flashy but at least is real science...

... if you are graded on the curve, how will you look against someone who cheated? It's the prisoner's dilemma.

> How is anyone going to replicate a result if it’s not published?

In my proposal, it works the same way that reviewers can review a paper that hasn't been published yet. You write up a paper, detailing the claims you are making and the experiments and methods you used to justify your claims, and you submit it for publication.

Then, 3 or 4 anonymous reviewers decide whether it's promising enough to go to the next step: hire another research group to replicate the results. If the results replicate, then and only then are they published.

Yes, it's more expensive, so to finance it I propose that when the grant is being written, the principal investigators should also estimate what it would cost to hire reviewers and to fund experiments at another lab to confirm the results.

When the grant is approved, that money is put in escrow until the research is done and submitted for publication, and disbursed to the reviewers and replicators to compensate them for pausing their own research and evaluating someone else's. If it replicates, it's published. If it doesn't replicate, you don't get to write up books on how to cheat your way to the top while you are cheating your way to the top.
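To make the sequence concrete, here is a toy sketch of that escrow flow (the class, amounts, and parties below are all hypothetical, just to illustrate the order of the steps):

    # Toy sketch of the proposed escrow flow; nothing here is a real system.
    from dataclasses import dataclass, field

    @dataclass
    class GrantEscrow:
        review_budget: float         # set aside when the grant is written
        replication_budget: float    # likewise
        disbursed: dict = field(default_factory=dict)

        def pay_reviewers(self, reviewers):
            share = self.review_budget / len(reviewers)
            for r in reviewers:
                self.disbursed[r] = self.disbursed.get(r, 0) + share

        def pay_replicators(self, lab):
            self.disbursed[lab] = self.disbursed.get(lab, 0) + self.replication_budget

    # Grant approved: review/replication money goes into escrow.
    escrow = GrantEscrow(review_budget=10_000, replication_budget=50_000)
    # Paper submitted: reviewers are paid out of escrow...
    escrow.pay_reviewers(["reviewer_a", "reviewer_b", "reviewer_c"])
    # ...and if they recommend it, a second lab is paid to attempt replication.
    escrow.pay_replicators("replication_lab")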

> This is literally the reason that scientific publication exists ...

100% agree, it's great to share new results and all---but if a "result" doesn't replicate, it's not a result. What, exactly, are Gino's peers supposed to learn from her fraudulent papers? If I'm trying to decide what to research, how do I know which lines are actually promising or not? Do I just go with researchers at a name-brand university (like Gino at Harvard?)

These days papers can be written by machine as fast as a machine gun fires bullets. There's got to be some way of separating the signal from the noise.

> brownie-point credit for publications...

Publication is very important; one of my professors explained it this way: if you don't publish your results--i.e. you don't convince your peers that what you did is worthy of publishing--it's like you didn't do anything at all. The whole idea of research is to contribute to the edifice of science, and publishing is the vehicle by which that contribution is made. It's how you deliver the contribution to everybody else. And peer review is how it is determined whether you actually made a contribution.

So the solution can't be to just stop caring about how many papers are published.

> Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.

I hope my explanation above addresses this concern....


"Publication" is the process of writing up works and distributing them to other researchers. Having fast and efficient channels for publishing our results is essential to the progress of science.

I strongly object to any proposal that would interrupt this essential process. If you (somehow) prevented me from distributing my preliminary results to other scientists, or from reading their preliminary results, then I would actively work around your proposal. Not because I hate you, but because science fundamentally requires access to these new ideas. Moreover, any researchers who waited for replication would fall behind and get scooped by the better-connected Ted-talk giving folks, who would obviously keep exchanging results.

However, it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results. Instead what you're trying to reform is academic credit. We all know that scientists [and the bureaucrats who fund them] love to find ways to measure research productivity, and they've chosen to use publications (in specific venues) as a proxy for that contribution.

And, following Goodhart's law, any measure that becomes a target ceases to be a good measure.

So the purpose of your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues) but rather, you just want to reform the way that credit is allocated to scientists for purposes of funding, promotion, and general adulation (that's a joke.)

My suggestion is: if you want to reform the way credit is allocated, why not focus explicitly on that goal? There's no need to put in place complicated new systems that will make it harder for anyone to publish (and that real scientists will instantly work around.) Instead, just go to the NSF and NEH and various university administrations and convince them to stop hiring/funding/firing researchers using the metrics they currently use?

I think overall you'll find that this is still a tough sell, and a massive uphill battle. But it has the advantages of (1) addressing the actual problem you're concerned with, (2) not preventing researchers from doing their job, (3) has fewer overall points of failure, and (4) isn't subject to being worked around by funders/administrators who simply adopt new measures after you "reform" the publication system (e.g., switching to citation counts for arXiv works, or surveying other researchers for their opinion.)


> If you (somehow) prevented me from distributing my preliminary results

Dude, nobody can prevent you from putting your paper on arxiv. But arxiv is a vanity press--I'm sorry to put it in such negative terms, but that's the God's honest truth of it.

We have free speech; you can get a vanity press to print whatever you want, as fast as you want it to. But that is not doing science. Alas, arxiv is needed, because the rest of the system is so broken. But imagine if you could submit your paper to any journal, and be guaranteed that it would be peer reviewed within 2 weeks. We could do that if we didn't have to rely on volunteer labor, but paid qualified people to do the job thoroughly and on time.

> Having fast and efficient channels for publishing our results

Have you ever submitted a paper to a journal? If so, I'm sure you were as frustrated as I was when my paper just sat on reviewers' desks for a year before they got around to giving it a cursory glance.

If we actually paid reviewers, we could specify that the reviews must be done on a certain time schedule. My proposal would greatly accelerate the rate at which non-fraudulent, scientific results get published and communicated to other researchers.

> I strongly object to any proposal that would interrupt this essential process.

It's not an essential process of science to pick the fastest and cheapest vanity press.

What's essential for science is getting repeatable results. That's science.

> Having fast and efficient channels for publishing

Ever faster publishing of ever more vanity projects is not science, nor does it help science. Quite the opposite.

> it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results.

No, that's exactly what I mean. But putting something up on arxiv isn't "communicating scientific results." Until it has been peer reviewed and shown to be replicable, it just isn't a scientific result.

> [change metric, etc]

I don't want to change the metric. We don't have a bad metric--we have frauds claiming they have met the metric when they haven't. The problem isn't with the metric, it's that the metric isn't actually being enforced like it should be.

> any measure that becomes a target ceases to be a good measure.

Peer reviews and replications are not a measure of how good your science is. The measures of good vs. poor science are things like how useful your results are, how general they are, whether they unify previously disconnected areas of knowledge...things like that.

Whether you are generating repeatable results or not isn't the difference between poor science and good science. It's the difference between doing science and not doing science.

You can't get rid of peer review and the demand for repeatable results and still be doing science. Science is the process of getting peer-reviewed, repeatable results.

Peer review and repeatability aren't how you judge the results of the race--they are the bare minimum requirements to enter the race to begin with.

> your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues)

Distributing what results? If your "results" can't pass peer review, or they can't be replicated, they are not scientific results. If you skip those steps, you are skipping doing science. You are not a scientist, you are just cosplaying a scientist.

In order to deliver scientific results, quickly or slowly, you actually have to have produced scientific results.

> if you want to reform the way credit is allocated, why not focus explicitly on that goal?

Well, I'm not trying to reform the way credit is allocated. I'm trying to stop people from getting credit for fraudulent results. Before credit is allocated, something credible must have been produced!! And until your paper passes peer review, it is not credible, and until it has been replicated, it's not science.

> that real scientists will instantly work around.

Even calling them "real scientists" betrays a deep conceptual error. There are not "real scientists" and "other kinds of scientists." There are scientists, and there are non-scientists.

The distinction isn't between "real scientists" who take every shortcut and cheat as much as they can get away with, and "poor schlubby scientists" who don't have the guts to cheat.

Scientists (not "real scientists", just scientists) insist on peer review and reproducible results. You can't "work around" the most basic criteria for something to be scientific and still be a scientist doing science.

> just .... convince them to stop hiring/funding/firing researchers using the metrics they currently use

So...I'm supposed to go to the National Science Foundation, and every body which funds scientific research, every institution which purports to hire scientists---and somehow convince them to stop actually doing science?

*sigh* It's not your fault, man. The problem has been going on for so long now that there are generations of cosplayer-professors, who have been graduating cosplayer-Ph.D.s. Imagine people going to Star Trek conventions, dressing up, geeking out---but after a few generations they forget that they are cosplaying, and think they are actually on a starship....

Seems ludicrous, but that's kind of what arxiv has inadvertently done. It doesn't help that we have people who cosplay being "science" journalists or "science" popularizers, who trawl arxiv for juicy headlines and happily write up an article about "research" which hasn't been peer reviewed or replicated. It just encourages more of a race to the bottom, by encouraging "researchers" to post exaggerated claims.


It's a long post, so just a few short thoughts.

1. You're very angry at some people in the field. I get that, everyone in science shares these feelings to some degree. But I think that kind of bitterness is bad for your objectivity and [more importantly] bad for your soul. You need to find a way to let it go. This isn't shade, it's genuine advice. Holding onto this resentment is terrible for your mental health, and it ruins the joy of actually doing science for its own sake.

2. Substantively, arXiv isn't "vanity press." Your use of this term again makes it seem like you are fixated on the role of publication for academic credit rather than publication as a means to communicate results. A number of fast-moving fields use preprints as their primary communication channel (ML is a big example.) Even slow-moving fields rely on preprints to exchange new ideas. There's a higher risk of incorrect results and "spam" in this area, but scientists routinely work with these results anyway because that's how we learn about new ideas quickly.

(Specifically in my field [of cryptography] new un-reviewed results are useful because I can usually determine accuracy myself, either by reading the proofs or running the code. If you try to convince me that I should ignore a novel result with a correct proof because it's "not science," well, all you're going to convince me of is that you don't understand science. I realize that for experimental work this can be more challenging, but even un-replicated results can still be useful to me -- because they may inspire me to think about other directions. Adding a slower layer of "professional replication and peer review" would be a net negative for me in nearly all cases, because replication takes a lot of time. At most it would be helpful for promotions and funding decisions which again is not why I read papers!)

3. I don't expect you to reform the incentive process at NSF, NEH, Universities, etc. These are incredibly difficult tasks. At the same time, reforming that stuff is much less ambitious than what you're proposing, which is to fix all academic publishing with the follow-on effect that your reforms will then repair all those broken incentive problems. To use an analogy: you're proposing to terraform Mars in order to fix the housing crisis, and I'm suggesting that maybe we just build more houses here on Earth. If your response is that I'm being unreasonable and that building more houses here on Earth is much too hard, then you're definitely not going to succeed at building new houses on Mars.

4. Your main proposal is to (somehow) come up with a pot of money to make peer review paid. I don't hate that idea, since I would love to be paid and have better peer-review. I am skeptical that this would dramatically increase speed, quality and availability of peer reviewing, especially when you include something as nebulous as "replication" into the goals of this new process. I am skeptical that the money exists. And I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems. Most likely it will prove impossible to do at all and even if you did it, it will just cause less money to be allocated to actual research.

But if you can make it happen, I won't object.


> I would love to be paid and have better peer-review.

Well, let's build on that common ground :-)

> fast-moving fields use preprints as their primary communication channel

Note, I'm not proposing any changes in the preprint system. Maybe you can explain why you think getting faster and better peer reviews would stop researchers from rapidly sharing ideas?

> I am skeptical that the money exists.

Francesca Gino made over $1 million a year at Harvard. It's not a question of can we afford to do this, it's a question of can we afford NOT to do this??

If they had funded a $20k replication study 15 years ago to see if Dan Ariely and Francesca Gino's paper was an actual scientific result, how much money would Harvard and all the funding agencies have saved?

It would have even been better for Ariely and Gino--yeah, it's no fun when your hypothesis is disproven, but that's a lot better than suffering from a career-ending fraud scandal.

I think the proposal would be more than self-funding, inasmuch as it would prevent money being wasted on frauds.

> I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems.

I'm sure that we will always have "evil scientist"-types. But right now, the system actually incentivizes fraud, and punishes honest researchers.

Can we at least get the incentives right?


It depends! There should probably also be a process by which reviewers themselves get graded. Then paper writers can choose whether to splurge for fewer really amazing reviewers, or a larger quantity of mediocre reviewers. Also, readers will be able to see the quality of the reviewers that looked at a preprint.


How do you have all three of anonymous authors, anonymous reviewers, and reviewer ratings?


It might be possible to have a third-party manage the reviewer ratings. Although I suspect some fields are so small/niche that if someone wanted to associate some random ID with a real person, they could match writing styles etc.


How isn't that third party just reinventing journals?


The product is different. Consumer Reports isn't reinventing cars; it provides a service by independently rating cars.


Splurging for "amazing reviewers" could also be gamed to "splurge on those reviewers who are likely to rubber-stamp my submission to get paid" (not unlike some of the questionable open-access journals' current business models).


Is that different from any other job?


As opposed to now, where it appears lots of science is peer reviewed with all the problems found later?


Replacing a broken system, with a broken and also expensive system, does not sound like an improvement.


It might cost $100k - $1m (or more) to repeat the work and run replications. The $5k - $10k mentioned earlier would be enough to allow time reading and thinking and checking some stuff on pen-and-paper.


> The $5k - $10k mentioned earlier would be enough to allow time reading and thinking and checking some stuff on pen-and-paper.

The average postdoc in the US earns $32.81/hour, according to the results of some googling. Even taking overheads into account, $5k should cover more than a week's full time work.
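As a rough sanity check of that claim (the $32.81/hour figure is from the comment above; the 2x overhead multiplier is just my assumption):

    # Back-of-the-envelope: how many hours of postdoc time does $5k buy?
    rate = 32.81        # postdoc pay, $/hour (figure quoted above)
    overhead = 2.0      # assumed institutional overhead multiplier
    budget = 5_000      # lower end of the $5k-$10k range mentioned earlier

    hours = budget / (rate * overhead)
    print(f"{hours:.0f} hours, about {hours / 40:.1f} weeks of full-time work")
    # -> roughly 76 hours, just under two weeks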


In what area of science would it take only a week or two to replicate?

It might take several days to a week of literature review just to fully understand the problem. Then you might need equipment, chemicals, cultures, etc. Then depending on the area of science, doing the actual experiment could take several weeks (waiting for reactions, computer simulations, etc). Then possibly tricky analysis and statistics on top of that.

Nowadays, science is deep.


Science is very deep; I’m sure that to replicate some studies it would cost as much as performing the original study did.

But whether it's $10k or $100k, we really should provide the funds to do it. Expensive? Yeah, but not as expensive as funding grants for generations of psychology professors and getting nothing—or worse than nothing—in return.

Psychology could fix its replication crisis tomorrow if as part of writing every grant, they also calculated what it would take for another group to replicate their experiments, and put that money in escrow to hire reviewers and replicators who had to sign off on any papers published.


Or, research just gets published online free of charge for everyone to access, and important work will prove itself over time by being discovered, discussed and by becoming influential.

If anyone wants to change something about an article (the writing, the structure of the paper, or anything else a reviewer might want to edit) they can just do it and publish a new version. If people like the new version better, good, if they don't they can read the old version.

Peer review as a filter for publishing is terrible in a time when making a few megabytes of text and images accessible is literally free. If anyone wants to run a content aggregator (aka a Journal) they just do it. If they want to change something about the article before it's approved for the aggregator they can contact the authors or ask someone to review it or whatever.

Just make it accessible.


> Just make it accessible.

We already have that system, it's called the internet. Nothing stops you or I from putting our ideas online for all to read, comment on, update, etc.

The role of the publishers, flawed as it is, has little to do with the physical cost of producing or providing an article; it is filling (one can argue badly) a role in curation and archival that is clearly needed. Any proposal to change the system really has to address how those roles are met--hopefully cheaper than currently, but definitely not more expensive, because mostly people don't get paid (in $ or career or anything) now--or it has to provide a funding source for them.

I don't really see how your outlined scenario addresses that, at least not in a way that's functionally different than today. Can you expand?


> We already have that system, it's called the internet. Nothing stops you or I from putting our ideas online for all to read, comment on, update, etc.

Are you asking how arxiv is different from blogspot?


No, although the mechanism for hosting the content isn't that important.

Preprint servers are very useful but haven’t replaced journals for good reasons.


What are those reasons? The only thing I see that journals do which preprint servers couldn't easily take over is prestige.


You have the causality wrong. Prestige comes to journals by doing a good (or at least, better than peers) job of being a journal, which is providing a necessary function to the academic research process. If you want to improve on that system, you have to improve on those functions, or reduce the reliance on them by providing something better.

Put it another way: if you can design a system with a better ROC curve for classifying research, with a better TP rate for good papers, and have it cost less in real terms than current academic publishing, then you are on to something. If all you've got is "papers should be free" or "it's too hard to access publishing from the outside", what you have are complaints, not solutions.
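To make the ROC framing concrete, here is a tiny illustration with made-up numbers (none of this is real data; "status quo" and "proposal" are just placeholder labels):

    # A filter (journal, overlay, whatever) accepts or rejects papers; score it
    # by true-positive rate (good papers accepted) and false-positive rate
    # (bad papers accepted anyway).
    def rates(tp, fn, fp, tn):
        tpr = tp / (tp + fn)
        fpr = fp / (fp + tn)
        return tpr, fpr

    print("status quo  TPR=%.2f FPR=%.2f" % rates(tp=70, fn=30, fp=20, tn=80))
    print("proposal    TPR=%.2f FPR=%.2f" % rates(tp=85, fn=15, fp=15, tn=85))

A replacement system is only "on to something" if its curve dominates the status quo at comparable cost.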


Like I said, it's not clear to me what exactly established journals have been doing "better" historically than a preprint server could. You say they are better at "being a journal" -- OK. They are established, well connected (to science communities, industry, journalism, funding agencies, etc.) and have been maintaining a reputation for a long time, usually much longer than preprint servers exist. That's basically prestige, which isn't nothing (I didn't claim prestige is nothing). However, this doesn't demonstrate journals do anything relevant to the advancement of science fundamentally better right now.

What I "have" is that

1. It's not obvious that a journal is fundamentally better at organizing unpaid voluntary reviewers compared to a preprint server.

2. Scientific publishing has insanely high profit margins. How come? My theory is that they are selling prestige first and foremost, i.e., a luxury good (to scientists, universities and funding agencies simultaneously), and purchasing decisions there are made by people who are spending public money, not their own. Both of these points (luxury good, public spending) seem like strong contributors to high margins. The public is paying for the research and for access to articles, while journals nowadays at first glance seem to only provide a little bit of web hosting, a little bit of reviewer coordination and a little bit of manuscript editing.

3. It's not obvious that the submission and peer review systems we have now (in journals) are worth the time and effort. The role of peer review is misrepresented in journalism and the expectations are not met. If one could separate publication ("preprint") on one side and, on the other side, review and "being featured by important outlets or institutions", authors could save a lot of time (that could be used for more research). Others would have access to interesting results earlier and be able to build on top of them. Next, in a separate process, some institution could select important works, scrutinize and review them, perhaps paying experts to do so, and perhaps replicate where appropriate.

The issue with this is that academics need the prestige provided by journals for career advancement, universities need the prestige to justify their spending to funding agencies and politicians, and funding agencies likewise need the prestige to justify their spending to politicians. The "replication crisis" and the like indicate that this prestige is overvalued. The hope is, economically speaking, that the market for "academic prestige" can either be disrupted, or the price the public has to pay can be lowered "through competition". It's interesting what that might look like. Preprint servers, open data and more direct science communication seem like steps in the right direction.


I'm clearly not articulating my point well. Obviously the idea is to "disrupt" the academic publication and review process, but this discussion seems to be focusing on probably the easiest part - making and hosting the documents.

> Next, in a separate process some institution could select important works, scrutinize and review them

This is basically what happens now. Pre-prints are for things that aren't necessarily ready yet (hence the "pre") but cooked enough to review and discuss and build on. The formal publication process takes some percentage of them (depending on server, could be quite small) and works through a publication process.

Currently that is mostly done by for-profit journals organizing the work.

So what you are suggesting is that we do away with that (fine!) and replace it with --- something handwavy (not fine). There has to be some real proposed mechanism for organizing the work that needs to be done, one that a) doesn't waste even more time of the limited pool of people who can and will do a reasonable job of reviewing (or, even worse, editing), b) does at least as good a job of filtering out the large amount of noise to find signal, and c) is at least as robust against manipulation.

For what it's worth, many of your arguments about the lack of efficacy of the system or other flaws don't seem to me to capture how much worse it could be. Best not lose track of that in trying to make it better....


> So what you are suggesting is that we do away with that (fine!) and replace it with --- something handwavy (not fine).

I wasn't really trying to suggest any concrete system to replace the current one. Neither would I be able to do so nor would it really matter since such a system couldn't be implemented in a top-down fashion. I was pondering how things are and why, which is hard enough, as well as what trends I see positively (which are simultaneously actionable recommendations for both funding agencies and scientists).

> many of your arguments about the lack of efficacy of the system or other flaws don't seem to me to capture how much worse it could be

Sure, I think science as a whole has never been more productive. Many trends also look positive: besides what I named above, there is also increased industry collaboration for applied research, increased funding overall, etc. The main challenge will be the price of creating fraudulent submissions going down and hacking the system becoming more prevalent. I think the only way to address this is to significantly reduce the "perceived authority" of any work that comes from using a LaTeX template, as well as authority that comes with the label "peer reviewed".


> I think the only way to address this is to significantly reduce the "perceived authority" of any work that comes from using a LaTeX template, as well as authority that comes with the label "peer reviewed".

Opening up access unavoidably makes the signal to noise problem worse, not just for the reasons you note (fraud, exploits) but also average quality drops. Whatever changes are made, will need a more effective filter, not less effective.


Your concerns are valid, and I think that my proposal of paying reviewers and replicators addresses them all.

> It's not obvious that a journal is fundamentally better at organizing unpaid voluntary reviewers compared to a preprint server.

So, let's not rely on unpaid, voluntary labor. Pay them.

> they are selling prestige first and foremost,

Yes. So give them a better business model--if they can make money reviewing papers, they won't have to create artificial value by creating artificial scarcity.

> The role of peer review is misrepresented in journalism and the expectations are not met.

If you pay somebody, you can specify the expectations you think should be met. If they don't meet those expectations, they don't get paid.

> this prestige is overvalued.

The prestige is not overvalued--it is just too easily obtainable by fraud. Something has got to be done.

> Preprint servers, open data and more direct science communication seem like steps in the right direction.

They are vanity presses.

And they don't even do what you think they are doing. Today, the problem isn't too little information, it's too much misinformation. LLMs can churn out papers by the millions. Is a search engine going to help you cut through that and find what you are looking for? What if 2 million papers match your search criteria? You gonna read through them all, trying to find the 5 papers which were actually written by a real scientist?

Are you even going to see them? Is the search engine going to do a better job than peer review of presenting you the papers you actually want to read?


> my proposal of paying reviewers and replicators addresses [all concerns]

Your other comment didn't say who might be paying reviewers. Journals clearly won't (why should they, they have great profits in the current system and will fight tooth and nail to delay any changes whatever). Universities and even funding agencies cannot (conflict of interest).

> Is the search engine going to do a better job than peer review of presenting you the papers you actually want to read?

I do actually expect to see that happen.


> Your other comment didn't say who might be paying reviewers.

In the parent comment to this thread, I talk about this. My proposal is that when a researcher writes up the grant to get their research funded, they should estimate how much it would cost to pay reviewers and replicators, and include those figures in the cost.

If the researcher gets the grant approved, then the funding agency will put the money for peer review/replication into escrow. When the research is finished and the investigators have written up a paper to describe their methods and results, the money in escrow is disbursed to the reviewers and replicators.

If reviewers agree it's good, and if it replicates, then the paper is published. If not, well, we just dodged a bullet.

> I do actually expect to see that happen.

Are search engines getting better or worse for you? It was a lot easier getting the right paper from a search engine 10 years ago. Now, you just get half a page of irrelevant ads, and another half page of links boosted by payola.

Just imagine what it will be like when there are literally MILLIONS of bad papers for each good paper. Then Billions. There is no finite limit to the amount of bullshit that LLMs can--and therefore will--output.


That's just not true. Most publicly funded research is hidden behind paywalls.


No it's exactly true. You can write up anything you want and put it on a site. The post I was replying to was suggesting an open access system (both read and write) for exchanging ideas. This exists.

What it doesn't do is effectively replace the non-open system for access to academic journals. I have a lot of sympathy for open (read) access to research, particularly publicly funded. It just isn't sensible to wave a wand and say "all papers are free to read now" without some plan for the other parts of the system and the ecosystem (academic research) that relies on it.


Why do you think that peer review needs the journals to function?


I didn't say that. It needs something. Handwaving about an emergent community isn't useful - moving from todays system to something else needs something concrete.


Whether or not your papers pass peer review--and which journals they are published in--are important criteria for hiring, tenure, whether your grants are funded, etc.

If you get rid of peer review, it’s not science. It’s just a vanity press.


>If you get rid of peer review, it’s not science. It’s just a vanity press.

If you believe replicability is central to science, the current paradigm doesn't necessarily converge on science either. And when people are graded on how many publications they garner, it borders on turning publication into a symbol of status rather than one of science.


> If you believe replicability is central to science.

I do believe that, but it doesn't matter what anybody believes: replicable experiments and results, which peers can review and agree on, are the soul of science.

Without that it’s not science, it’s just creative writing.


Then don’t you think the first part of that process is explaining the methods and results of your experiment? That’s precisely what the current situation does. What’s lacking is the replication incentives.


I propose to supply those incentives--pay for good reviewers and replicators.

If you want something done, you gotta pay for it. We can’t just rely on volunteers.


I'm on board with that idea, as long as we can also provide guardrails against the perverse incentives of paying for them. E.g., we need to avoid frivolous reviews/replication as well as something evolving into a "pay for a good review" service.


Well, if you are paying somebody to do something, you gain a lot of leverage:

1. You can negotiate a due date. No more waiting for years before the journal's reviewers actually review your paper.

2. You can negotiate a set of deliverables. You can specify that they can't just say "this sux"; they have to show the lines where the big hairy proof is wrong, or if it's an algorithm, they have to actually code it up and run it themselves before they say it doesn't work.

3. You can more reliably attract good reviewers. If you aren't begging for people to volunteer, but you are paying good money, you can be a lot pickier about who you hire.

I mean, I've been a consultant: what are the guardrails that ensure I won't rip off my clients? I don't want to ruin my reputation, I want repeat business, and I want to be able to charge a high hourly rate because I deliver a premium product.

Same guardrails would apply to peer reviewers and to reproducers.


Sure, but I’m poking at the bad leverage you can also wield.

1) you can create undue schedule pressure that results in a cursory review that may not catch the more nuanced problems in your investigation.

2) you can be more belligerent about not sharing data. If they want to get paid, they won't argue.

3) you can pay for reviewers who you know will give a positive review. Without guards against this, it's almost a certainty that the glut of PhDs will result in some treating it like a side hustle where it's more about the economics than the science.

Some consultants are well known to play the game where they tell clients what they want to hear rather than what they need to hear. I don’t think consultancy is a good model for this.


#1 isn't an issue unique to paying peer reviewers. We've learned how to negotiate such hazards.

#2 Seems like a team who wants their paper published would be super-helpful to the reviewers and replicators....why wouldn't they be maximally motivated to help them by sharing data, techniques, etc., and writing their paper so that it's easy for reviewers and replicators to do their jobs?

#3 The authors of a paper don't get to choose who their reviewers are!!

> consultants...play the game

And yet we have millions of clients hiring millions of consultants, and somehow they are able to make it work....yeah, all these issues can arise in other contexts, we know how to deal with them.


You are right that #1 isn't unique. But I think you're wrong that we've got the issue easily solved because it's rooted in human psychology. Just look at the last few years of Boeing headlines and tell me you still think schedule pressure in a competitive environment is a solved problem.

Your response to #2 assumes the researcher wants to create the most transparent and highest-quality paper. Because of perverse incentives, I don't think this is the case. Many times researchers just want a publication because that gets them the career status they're after.

Good point on #3, but it still leaves the question about the tradeoff between quantity and quality. I can surely churn out many more reviews of questionable quality than I can a single, well-researched and thoughtful review. The quantity vs. quality tradeoff is really what is at the heart of that point.

>And yet we have millions of clients hiring millions of consultants

The existence of that market doesn't mean the market does what you're claiming. Many times, consultancy is a mechanism to pay for plausible deniability rather than a novel solution.


re #1: Yeah, bad apples will be bad apples, but that doesn't stop us from hiring people to build us airplanes and run aerospace companies. Right now we are assuming that humans are so angelic they will give us quality reviews for free.

re #2: Under my proposal, researchers in an independent lab would have to read a paper to see how to design and conduct an experiment to replicate the results. And if it didn't reproduce, the original authors don't get their paper published.

Given the stakes, don't you think researchers would exert every effort to make their paper as transparent and as easy-to-read, as possible? How carefully would they describe their experiment if they knew somebody was going to take their description and use it to check their work?

Re #3: Yeah, but again that's not a problem specific to my proposal. The same risk hangs over every employer-employee relationship.


I think the idea of requiring replication as part of the review process could potentially be a good approach, given we're aware of the downsides. For example, I've worked in labs with sensitive data, or with proprietary processes that they would not want to share. This would mean the advocated process would result in a lot less sharing of methods. Maybe there's a chance there could be vetted independent labs that meet stringent security requirements, but that adds another layer of bureaucracy which could, again, result in less sharing of information. There's a balancing act to be considered, and I agree that we are probably too far on one side of that equation currently.

Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point. The point is we need to create a system that mitigates those downsides, not ignore them. I don't think a store manager would be okay saying, "Well, people steal from all kinds of stores, so we don't need to try to minimize theft." They recognize stealing is a natural outcome given human tendencies and create a system to minimize it within reasonable boundaries.


> Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point.

If there is a specific objection you'd like to revisit, I'd be happy to discuss it. But I wouldn't self-describe what I'm doing as "sidestepping"--I'd say it's avoiding bikeshedding and keeping the conversation focused.

I mean, it's a pretty facile objection to say some variation on "but if we pay them how do we know we'll get our money's worth?" when we pay for goods and services all the time with very high confidence that we'll get what we pay for.

Surely, there's plenty of considerations to discuss, and I've tried to squarely address all objections which are specific to this proposal. But how to hire and use consultants, or how to ensure you get what you contracted for, are largely solved problems, and off-topic.

> This would mean the advocated process would result in a lot less sharing of methods.

I don't think my proposal would even apply to internal R&D groups who wanted to keep things proprietary. I mean, I can certainly understand wanting to reserve some methods or data as being proprietary. But choosing to do that is, ipso facto, not sharing them. How would paying reviewers and replicators for their time cause any less sharing to happen?

I mean, if your paper doesn't describe the experiments you performed in enough detail to allow other groups to replicate them, it's not a scientific paper to begin with. It's either a press release, or a whitepaper, or some other form of creative writing, and publishing it is either public relations or advertisement--not science.

Which is not to say that it's immoral or useless, or to denigrate it in any way. Not everything we do has to be science. My proposal is just for scientists communicating scientific results with other scientists. Maybe I'm missing something, but I don't see how it would inhibit the kinds of practices you are describing in any way.

It would make it harder for people to claim their "results" are scientific when they are not. It would be a big obstacle to publishing fraudulent papers in scientific journals. It would make it harder for somebody to claim the mantle of "science" to give credibility to their claims. But I really don't see how paying reviewers and replicators would stop anybody from sharing as much or as little as they wanted to.


Apologies, but when the central claim is about mitigating downsides of adding money into a system and you acknowledge the potential for downsides exists but fail to provide any mitigation, it is sidestepping the main focus of the discussion.

I also think there is a misunderstanding when you’re talking about internal R&D. The situation I’m talking about isn’t where someone wants to protect a proprietary method, but rather proprietary data. I could have sensitive information that I don’t want to share, but also recognize a method I’ve developed is useful to others. The harder you make it to share that method (by requiring me to sanitize all the data to make it non-sensitive) the less likely I’m going to share it. When things like security or law come into play, the easiest path is always “no.”

>If there is a specific objection you'd like to revisit

Take the fact that whenever you inject pay into a system, it tends to pervert that system away from the original goal and into a goal of maximizing pay. You acknowledge that but just say it isn't unique. I agree it's not unique, but what I'm after is how do you propose to mitigate it (assuming your goal isn't to simply maximize pay, but rather provide some balance of quality, pay, and quantity). What guardrails do you put in place? Maximum on the number of reviews per quarter? That might limit those reviewers who can crank out many quality reviews. Do you instead provide a framework for reviewing the reviews for quality? That adds another layer of bureaucracy to an already bureaucratic system. Do you implement reviewer scorecards? A decaying rate of pay for each review?

And on and on. Again, the intent wasn't to imply these are unique problems but to probe for good fixes. Those aspects you say are digressions (consultancy etc) are topics you brought to the discussion, seemingly to address the mitigation question without actually providing a specific response. Doing "whatever they do elsewhere" isn't really an answer.


And the current system is a holy grail that can never change? You can still base your hiring on reviews made by peers.


The outcome would most likely be exactly this: "it's probably not research worth doing in the first place" (and why would you want to signal that you want more busy work?)


PREPUBLICATION REVIEW IS BAD! STOP TRYING TO REINTRODUCE IT.

Sorry for the all caps. Publishing papers without "peer review" isn't some radical new concept—it's how all scientific fields operated prior to ca. 1970. That's about when the pace of article writing outstripped the available pages in journals, and this system of pre-publication review was adopted and formalized. For the first 300 years of science you published a paper by sending it off as a letter to the editor (sometimes via a sponsor if you were new to the journal), and they either accepted or rejected it as-is.

The idea of having your intellectual competitors review your work and potentially sabotage your publication prospects as a standard process is a relatively recent addition. And one that has not been shown to actually be effective.

The rise of Arxiv is a recognition by researchers that we don’t need or want that system, and we should do away with it entirely in this era of digital print where page counts don’t matter. So please stop trying to force it back on us!


Hi! Author here.

Preprint review as it is being discussed here is post-publication. The preprint is shared first, and review layered on top of it later. Click through to the paper[1] I'm responding to and give it a read.

But, also, prepublication review doesn't need to be "reintroduced". It's still the standard for the vast majority of scholarship. By some estimates there are around 5 million scholarly papers published per year. There are only about 10 - 20 million preprints published total over the past 30 years since Arxiv's introduction.

There are a bunch of layered institutions and structures that are maintaining it as the standard. I don't have data for it to hand, but my understanding is that the vast majority of preprints go on to be published in a journal with pre-publication review. And as far as most of the institutions are concerned, papers aren't considered valid until they have published in a journal with prepublication review.

There is a significant movement pushing for the adoption of preprint review as an alternative to journal publishing with the hope that it can begin a change to this situation.

The idea is that preprint review offers a similar level of quality control as journal review (which, most reformers would agree, is not much) and could theoretically replace it in those institutional structures. That would at least invert the current process, with papers being shared immediately and review coming later, after the results have been shared openly.

[1] https://journals.plos.org/plosbiology/article?id=10.1371/jou...


Thanks for your charitable response to a reader who quite obviously misunderstood what you were writing about. You might want to update your article to make that clear, as "preprint review" isn't a term I had ever encountered before, and a simple reading of your article doesn't obviously indicate that you were talking about post-publication review. (Note that outside of academia, upload to arXiv is publication, so yes your "preprint review" would be a post-publication review.)


No worries. I was really writing for an audience of academics and, in particular, people involved in the various open science and publication reform movements. Sharing here was a bit of an afterthought, so it wasn't written for an audience unfamiliar with the intricacies of academic publishing. It's a complicated enough space as it is that this is already a 4-5 article series with the potential to grow, even when I'm writing with the assumption of quite a bit of knowledge.

But yeah, "preprint review" is considered post-publication review both in and outside of academia. There are nuances to what is considered "publication" in academia. A preprint is not a "Version of Record", meaning it doesn't count towards tenure and promotion. The movement pushing for preprint review is attempting to layer review on top of already public preprints in the hopes that reviewed preprints can begin to count as VORs. It's unclear whether that will work.

Some models, like eLife's, seem more promising than others. But eLife got a ton of backlash when they switched to their new reviewed preprint model, so it remains to be seen whether it will work in the long run.


Quality control was handled fine enough by editors for literally all output on the planet pre-1970s.

There’s nothing physically stopping that paradigm from returning now that we have the internet, other than the fact that there’s only a few thousand (?) such editors.

If somehow there were fifty thousand such editors, then the whole peer review system would be completely unnecessary.

Of course not enough people want to pay for that many editors, but that doesn’t stop a partial adoption by those willing to do so via some arrangements.


There are nearly 50,000 commercial journals and a long tail of non-commercial journals each with teams of editors. There are probably hundreds of thousands of people currently serving as editors.

The issue isn't editorial bandwidth, it's that peer review is currently built into the promotion and tenure structure for academics, who produce the vast majority of scholarship and thus dictate the shape of the scholarly publishing system.

Academics have to publish the papers in peer reviewed journals for it to count towards tenure and promotion. And in fact, they are limited to a small set of journals that are deemed high quality enough for their fields. These journals are chosen by tenure and promotion committees composed of their senior peers and school administration. There are over 1000 R1 and R2 universities worldwide, each with hundreds of departments each with their own tenure and promotion committees. So changing the system is a massive collective action problem.


I’m including only actually competent, full time editors, with sufficiently high reputation that their decisions will be taken seriously. There’s definitely not 50000 of those.

A huge number of journals by numerical count, along with their ‘editors’, are literally laughed at in many fields.

As you’ve mentioned, trying to expand the actually reputable number by 10x, 20x, etc… is a huge problem.

Hence it has to be paid for, quite highly paid for, otherwise the coordination problem is probably impossibly difficult.


You are right. It is often a problem that established competitors get to review articles that contradict their work and, unsurprisingly, try to sabotage them. Incentives are not well aligned.

A good middle ground is something like the non-profit journal eLife, where articles are accepted by an editor and published before review, then reviewed publicly by selected reviewers.

Very transparent, and it also leaves room for article updates. See the whole process here: https://elifesciences.org/about/peer-review.


That’s a better system. But why involve the journal in review at all?

Journals should go back to just publishing papers and any unsolicited letter-to-the-editor reviews, reproductions, or commentary they receive in response. Why add a burden of unpaid work reviewing every single paper that comes through?


I believe eLife may eventually get deeper paid reviews. That is a reason to involve the journal in this process. The way reviews at eLife work can be seen as solicited letter-to-the-editor reviews, as these do not influence the outcome. Your article is already published.


Maybe I'm not explaining my point very well. I think the fact that some papers don't get reviewed, even post-publication, is a feature not a bug. Otherwise it starts to present an unnecessary and damaging hurdle to publication, even if the reviews are post-publication because it is an implicit promise of future work.

A proper scientific journal is an efficient clearinghouse for information, in the same way that Hacker News is an efficient clearinghouse for tech news and commentary. Journal editors play the role of dang in this setup, applying a minimal but necessary amount of moderation and editorial decisions. But imagine if you weren't allowed to post a comment to HN unless you first lined up 2-3 high-karma individuals to provide a thoughtful reply. No doubt the result would be high quality discussion, but there'd only be a handful of comments on even front-page posts, and I suspect that the overall value of the site would be vastly less than the present HN.

There should be as few editorial hurdles as possible to clear in order to publish in a journal, e.g. one of the co-authors having published in the journal before or being sponsored by a respected person in the field. And the review that occurs before publication should consist of (1) spellchecks and such, and (2) formatting. This is how science worked for hundreds of years, and there's no reason we can't continue operating this way now that journal page lengths are a non-issue.

(I'm not a reactionary though. I'd like to improve upon the old format in many ways. In particular I'd want journals to provide specific support for publishing reproductions of existing work or 3rd party submitted supporting documents, and much better methods for retracting or correcting a paper.)


I've found the entire peer review concept in academia to be extremely odd. I'm not entirely sure what problem it solves. It seems like you have two types of people reading these articles:

* People who are already specialists/familiar enough with concepts. They'll either call BS upfront or run experimentation themselves to validate results.

* People who aren't specialists and will need to corroborate evidence against other sources.

My entire life as a software engineer has been built on blogs, forums, and discussion from "random" people doing exactly the above.


> run experimentation themselves to validate results

Let's just breeze through that like it's nothing, haha.

Also the people reading these peer reviewed articles range from new grad students to researchers with decades of experience.

There are many ways to see whether an article is high quality or not, including peer review, the journal it's published in, and the research lab that wrote the paper. Reading a paper itself is a multi-hour ordeal, and you want to have a decent idea that it's not a crap paper before diving in. Believing what the paper says is something of a gamble because you really cannot just replicate an experiment that easily. And you need to read many many many papers before you can start doing your own research. So you want lots of assurances that a paper is good quality.


Agree. With more compute-intensive experiments in ML, I am not sure if any reviewer re-runs experiments. The reviewing is already voluntary - who's going to pay out of their pocket for the compute? It's not feasible.


Re-running experiments is out of scope for any reviewer. You're supposed to trust their data, but review that their conclusion is actually supported by the data, as shown.


Yeah, I agree. But I said what I did to make the larger point that empirical studies are hard to vouch for without some level of trust on both sides. For example, if someone selects favorable datasets, metrics, or "random" seeds, or engages in p-hacking, etc.


Most of the world isn’t ‘about’ software engineering. Software has some attributes that not very many other fields do. And software people have a habit of thinking that their experience writing software grants them some sort of transferable expertise, which it certainly does not.


https://en.wikipedia.org/wiki/Replication_crisis

The reputation of scientific researchers has been greatly harmed by the current system. Please, help find a way to fix it, or at the very least don't hinder people trying to fix it. Thanks to the way we do things now a coin flip is _better_ than peer review. Public trust in science is at an all time low. I really hope you don't think "this is fine".


> Thanks to the way we do things now a coin flip is _better_ than peer review. Public trust in science is at an all time low. I really hope you don't think "this is fine".

I don't think you read my post? I'm advocating we get rid of the "peer review" [sic] system entirely.

The sibling post is right though that this problem is with bad journalism (and bad institutions), not bad science. People think that "peer review" is actually some sort of scientific hurdle that strengthens the paper. It is not, it was never meant to fill that role, and it has been totally morphed by journalists into something it has no business being.


IMO 'public trust in science is at an all time low' is because of bad journalism more than bad papers.

Papers are written by academy-type individuals for academy-type individuals, not for consumption by non experts. An academic is usually pretty fast to determine if a paper is to be trusted.

So interpreting and extrapolating to the extreme the results of a minor paper in an obscure journal is more bad journalism than bad science.

Then we wonder why people don't trust science..


Also, the author IMHO failed to clearly explain how "preprint review" wasn't a contradiction in terms (though they do seem to gesture towards the main issue being commercialization of journals, in the first post).

In the same vein, the positive mention of Github and "open source platform" (another contradiction in terms) were at first red flags in the third article, but at least they then mentioned the threat of corporate takeover.


> The idea of having your intellectual competitors review your work and potentially sabotage your publication prospects as a standard process is a relatively recent addition. And one that has not been shown to actually be effective.

If this is true (and I'm not doubting you, just acknowledging that I'm taking your word for it) then why abandon the entire system? Why not just roll it back to the state before we added the intellectual competitor review?


The prior state was a situation of no pre-publication review other than the editorial staff of the journal. We should go back to that, yes. By disbanding entirely the “peer review” system that currently exists.


>> Why not just roll it back to the state before we added the intellectual competitor review?

Journals don't add much value outside of their peer review.

Most researchers don't care about the paper copies, or pagination, or document formatting services provided by publishers. Their old paper-based distribution channels are simply not used.


>Why not just roll it back to the state before we added the intellectual competitor review?

The only thing I can currently think of is that the pace of research* has grown so much that a small group of editors may be unable to handle the amount of submissions. This could result in a) an inability of the editors to thoroughly vet the submissions, b) difficulty in "good" submissions being found (i.e., separating the wheat from the chaff), or c) a further devolution into very, very niche journals just to make the scope manageable for the editors

* I would concede that a very, very large proportion of current research is either heavily derivative or auto-cited, so overall growth isn't to be conflated with growth in quality research


You are mistaken in thinking that editors are supposed to vet submissions. That's not their job! Their job is only to weed out crackpot pseudoscience, which is a much easier task generally and also mostly solved by reputation, or papers outside of the journal's scope. And to edit the remaining papers into a consistently formatted journal publication. And that's done easily enough these days with LaTeX and online printing.

It is NOT the job of journal editors to rate or vet sincere submissions they receive that are on topic for their journal. This only started happening about half a century ago, when demand for publication started to significantly outstrip supply of journal pages, back when journals were actually printed on dead trees and had limited number of pages to keep costs down. The idea then was "we're getting 50 submissions but can only print 12, so let's rate them and pick the best ones." So they started the 'peer review' [sic] process to externalize that vetting cost. It largely didn't exist before then. Only now we can accept all 50, because why not? The marginal cost of one more PDF is practically nil.


>Only now we can accept all 50, because why not?

Because the downside to creating an ever growing haystack is that it becomes increasingly difficult to find a needle. Making it easier to create a deluge of bad research won’t help me find the worthwhile research that would actually help me in my job.

If I had the choice between collecting “all the data” and just collecting “the really good and relevant data” I’m opting for the latter. You are also contradicting yourself by saying it’s not the editor’s job to vet submissions, yet also saying they weed out “crackpot” work. All you’re saying is they lazily/loosely vet submissions. I’m saying the overall system (not just the editors) has a role in providing a reasonably sized haystack (and I would also admit the current system is not great at this, but it’s better than a wild-west approach)


You just use better tools to manage it. Fine-tuned LLMs and Google Scholar like search engines help here.

To stretch an analogy it is like email. The job of the editor is the same as the spam detection service run by hosted email providers. They actually go in and actively hide scams and worthless ad email from you, and we thank them for it. Some email providers have recently started offering "focused inbox" modes where they prioritize emails for you too. I don't use that, but I could see why some people do. But importantly they don't block email based on those heuristics, like they might do for spam. You still get non-priority emails. But imagine a world where gmail straight up blocked/rejected email which it didn't consider priority. Would you want that?

The situation with journals is comparable. Editors have a spam/crank detection duty, but they shouldn't be rejecting manuscripts beyond that.


What you’re describing is essentially an arms race in quantity. Yes, we can use tools to help sort, but those same tools can also be used to deluge the inbox and obfuscate the bad. In fact, one of the best ways to sort is by using specific journals/journal metrics as a proxy for quality. That is much, much easier (and more productive) than trying to sort based on some Google Scholar advanced query. For example, it's much easier for a journal to retract an article that has been shown not to replicate than to create a search to do the same.

The tone of your comment is very techno-optimist, which is very on brand for HN. In that view, every problem is solved by technology, even those that are created by technology. I would argue there are some problems that are better solved with less technology, not more.


> Editors have a spam/crank detection duty, but they shouldn't be rejecting manuscripts beyond that.

If the system is working, publication in a reputable journal serves as a useful, albeit imperfect, indicator of scientific quality.

Top journals shouldn't be publishing deeply flawed work, or even decent work in clear need of a rewrite. It's not just about spam and cranks.


The whole process comes from a time when publishing was expensive, should be cheap as chips now. The system needs a rethink so "quality" can somehow bubble up to the surface given mass publication is so simple.


Because I don't want to spend two years fighting the second referee when she's fundamentally misunderstood the point of my paper.

I'm no longer in academia. Either take what I put up on arxiv or leave it. I _really_ don't care.


You actually touch on an interesting point. Is peer review still necessary? When there is limited journal space, sure. But now that we have effectively unlimited space, let it be reviewed when people go to cite it. We've already seen a lot of peer reviewed papers be retracted later, so should we just accept the reality?


Peer review is the backbone of journals, and (some) journals serve an absolutely critical function in academia: they increase the signal:noise ratio. Most of what is published is noise; without the curation provided by (certain) journals, anything of significance is very likely to be drowned out.

As a casual example in the biomedical sciences, the Journal of Biological Chemistry has an output of ~30,000 pages per year, most of which is 'noise'. That's just ONE journal. The journal Cell, on the other hand, has an order of magnitude less, most of which is 'signal'.

EDIT: This is not to say the peer review approach doesn't need work, and lots of it. The whole current approach to research needs an overhaul. I'm just saying it's a bit hasty to throw the baby out with the bath water.


No, it is not necessary. Just as it was not necessary for the first 300 years of the Royal Society’s existence. “Peer review” used to be just a stand-in phrase for the marketplace of ideas—seeing how your peers evaluate your work in a public process of getting published and sending letters to the editor. Only in living memory has it been warped into this pre-publication gatekeeping process.


Peer review is an important signal to potential citers. If everyone has to fully read and understand every paper in order to (responsibly) cite it, there's gonna be a lot less research. Given the exposure of how much bad research there is, maybe there does need to be a slowing and a focus on quality, but I think we still need peer review, although it definitely needs to be reformed somehow because it's clearly broken.

We need to take a serious look at the incentive structure in academia because it's not guaranteeing the scientific results that we expected it to. I don't think we should just abandon the system though.


Peer review can be very simply replaced by skimming the abstract. If the abstract doesn't make sense the paper doesn't make sense either.

All the disciplines which use arxiv as their main journal are doing well enough without peer review.


Necessary to share ideas? No.

Necessary to share important ideas? Maybe.

One of the aspects of journals that I don't see talked about much here is the curation of articles. Saying (good) journals aren't necessary is like saying journalists aren't necessary to get your news in the internet era. Not all information is necessarily good, and we often rely on systems/people to help winnow the amount we have to sift through. This is important in a society where the sheer amount of information developed far outpaces what we can consume in several lifetimes.


It's more like saying that maybe we don't need the NYT, the people who lied about Iraq: https://www.theguardian.com/world/2004/may/27/media.iraq, to get good quality journalism when the internet exists.

Journals are in the same boat. Any good they have done is dwarfed by the few 'honest mistakes', like Alzheimer's cabal: https://www.statnews.com/2019/06/25/alzheimers-cabal-thwarte..., which have done more to stifle science than anyone since Stalin picking which biological theories are socialist enough to be true.


I think you're conflating curation with perfect curation. I don't think anyone is claiming that journals or journalists are infallible. They certainly make mistakes, although I don't think I agree that the good is "dwarfed" by them, except when used as a direct attempt to undermine credibility by adversaries.

Put differently, do you think bad ideas spread more easily with the internet? The current research seems to think so, and if you agree, I don't see how that mitigates the spread of bad information. If anything, it exacerbates it.


I think that the harms done by ideas spread by governments dwarf those of every bad idea spread by the internet. Every issue with the Middle East over the last 20 years is the direct result of a crusade launched on a lie enthusiastically repeated by the same people claiming the moral high ground today.


Yes. The crap people publish even with peer review is bad enough. A few subfields in CS have been bitten a couple of times by relying on non-peer-reviewed ecosystems.


While not always true, my metric for clear/understandable is whether other people can understand it. This usually supports my argument: when people show me a document and I have no idea what it's saying, they argue it's perfectly clear... my definition is for people other than the author to grasp the intended meaning.


Peer review goes beyond simple issues about clarity or misunderstanding. In particular, peer review is sometimes seen as an adversarial process.

Often, the reviewer will not understand because he is not the intended audience. Other times, he will understand but he just doesn't like your method, because he is working in an opposite direction. Or maybe your method is a direct competitor of his and yours works better, which incentivizes some people to block your work.


Or your paper goes against the current paradigm or is otherwise politically unpalatable.


> Other times, he will understand but he just doesn't like your method, because he is working in an opposite direction. Or maybe your method is a direct competitor of his and yours work better, which incentivizes some people to block your work.

Oh you mean those phantom "off topic"/"out of scope" reviews.


If the intended audience is reading it, I would agree. However, many reviewers seem to be assigned and agree to review a topic that they are ill-equipped to understand without a substantial amount of background knowledge.

A good reviewer in this situation will review the referenced papers to bolster their understanding. A bad reviewer will expect everything to be spelled out within the manuscript, and, unfortunately, the length limits often don't permit that kind of write-up.


Precisely. The continued hyperspecialization of science shrinks the potential pool of qualified reviewers to zero. I rarely get knowledgeable reviews these days. Instead of disputing my methods, they complain about the font or figure styles, or raise questions that weren't answered in the paper because they are common knowledge in the field.


The review process is broken.

Reviewing preprint papers isn't any more effective than reviewing printed papers. Review and publication are a meaningless bar.

Publish -> people find insight and try to pick it apart -> You either have flaws or you get reproduced... Only then should your paper be of any worth to be quoted or cited.

The current system is glad-handing, intellectual protectionism and masturbation.

Academia has only itself to blame for this, and they are apparently unwilling to fix it.


"Publish -> people find insight and try to pick it apart -> You either have flaws or you get reproduced... Only then should your paper be of any worth to be quoted or sighted from."

This is already how it's supposed to work. The review before publication is a fairly superficial check that just confirms that what you describe follows basic scientific practices. There is no validation of the actual research. A proper reproduction is what's supposed to come after publication.

IMO the real problems are that a) there isn't much glamour or funding for reproducing others' studies and b) "science journalists", university PR departments, and now in part people on social media are picking up research before people in the field have looked at it, or are misrepresenting it. Suddenly the audience is a lot of folks who were never the intended audience of the process.


There is a pretty simple way to change all of that.

Academic standards: You are no longer allowed to cite a non-reproduced paper in yours.

Citations matter as much as the print; put the hurdle there and all of a sudden things will change real quick.


> Academic standards: You are no longer allowed to cite a non-reproduced paper in yours.

> Citations matter as much as the print; put the hurdle there and all of a sudden things will change real quick.

The reproducibility crisis is just a symptom of the publish-or-perish culture that pushes academics to churning out bad research. Academia already over-emphasizes publishing positive results at the expense of studying important questions. Your solution would further incentivize low risk, low impact research that we have too much of.

Aside from that, there are a lot of edge cases that would make this difficult. If I do five studies that are modifications of each other, and all show the same basic effect, but I publish it as one paper, does that count as being reproduced? What if a result has only ever been reproduced within a single research group? Does the Higgs boson need to be confirmed at a collider outside the LHC?


>> What if a result has only ever been reproduced within a single research group? Does the Higgs boson need to be confirmed at a collider outside the LHC?

Yes. It does. Take out all the detectors and let another team build their own and come in and prove it. The really expensive part was the big concrete doughnut in the ground. After that we could ... you know... shut the fuckin thing down so we stop pouring money into a literal hole in the ground. Idk, we could do some more science after that. What is missing from quantum theory that the LHC running is going to find?

If we stopped funding masturbatory string theory and put more of that into practical physics, maybe we would have had something else for the LHC to do...

>>> Academia already over-emphasizes publishing positive results at the expense of studying important questions

Academia is made up of academics, and they aren't inclined to fix their own problem... it's as effective as the church policing its own issues.

I love what Jack Horner has done (see: https://creation.com/dino-puberty-blues ) but he would never have been able to achieve any of that if those people were alive and his peers.


> You are no longer allowed to cite a non-reproduced paper in yours.

I fully agree that in an ideal world, that would be the case. But some reproductions (especially now with machine learning) could cost millions of dollars and years to do. I don't think that's a reasonable or feasible thing to require.


Well, then, at least there should be pressure around having to mention this: a non-replicated study (by an independent group) is, after all, inherently suspect.


>> could cost millions of dollars and years to do

https://en.wikipedia.org/wiki/List_of_colleges_and_universit...

Not only do they have the money they are charging the students more than ever.


This is a very fair criticism and something that absolutely should be discussed, although I think it's a separate issue as papers can be written and published by anybody, regardless of backing from academic institutions.


I think we need to find ways of giving status to reproducing studies. Maybe not as much as novelty, but definitely something greater than what it is currently.


IMO reproducing findings could/should be a mandatory part of PhD training.


Even if academics could review all the papers on a preprint server (which the article argues -- rightly -- that they can't), it wouldn't solve the perceived problems (or the actual problems) with scientific peer review.

The vast majority of irreproducible papers aren't detectible as irreproducible at time of publication. They look fine, and many actually are fine. They just don't reproduce. That's an expected outcome in science. The system will self-correct over time.

IMO, the main actual problem with peer review is that non-practitioners put too much faith in it. Nobody in science actually takes a paper on faith because it's been published, and you shouldn't either. Peer review is little more than a lightweight safeguard against complete nonsense being published. It barely works for that. Just because you found a paper doesn't mean you should believe it. You have to understand it.

A secondary actual problem is that it's impossible to reproduce a lot of papers, or they're methodologically broken from the start (e.g. RCTs that are not pre-registered, or observational studies without control groups). These are problems we could actually solve. For example, just requiring that any paper publish the raw study data would help to self-control the system. There are high-profile researchers out there, right now, who do little more than statistically mine the same secret data set -- these people are likely publishing crap, but we have no way to prove it, because the data is secret.


To your second point, I always go back to this quote: "You can't fix by analysis what you bungled by design" (Light, Singer and Willett, 1990).

If a paper is broken by design, there isn't much to do after the fact. It's just broken.

The problem is that doing a good RCT takes both time and effort, with the huge risk of having null results, which usually results in a desk rejection from most top journals.

So, you either are a top fund-raising researcher who can fund both multiple RCTs and the people to support them, or you just try your best with what you have and hope to squeeze a paper out of what you did.

Releasing the data won't really help much if the data generating process is flawed. Sure, other people will be able to run different kind of analyses (e.g., jackknife your standard errors instead of just using a robust correction), but I'm not sure how helpful that will be.

A third issue that I have also encountered is that journal editors have an agenda when putting together an issue, which sometimes overrides the "quality" of the research in favor of "fit" to the issue. This could lead to "lower quality" articles being published because they fit the (often unspoken) direction of the journal. Most editors see their role as steering the field towards new directions (a sort of meta service to the field) and sometimes that comes at the expense of the quality of the work.


> Releasing the data won't really help much if the data generating process is flawed. Sure, other people will be able to run different kind of analyses (e.g., jackknife your standard errors instead of just using a robust correction), but I'm not sure how helpful that will be.

It allows motivated people to catch more subtle forms of nonsense. Data colada, for example, has caught outright fraud, but only through herculean efforts. Imagine what groups like this might do if they had the raw data.


>> (Light, Singer and Willett, 1990)

A citation like the one above should normally point to a full reference in a bibliography section. Did you forget the \bibliography{} command at the end of your comment?


Hacker News does not have a LaTeX compiler.


There is no place to make a note that something doesn't reproduce, so it's extremely time consuming or you need some source of tribal knowledge. In my postdoc I was trying to make some porous films, and a bunch of papers' methods didn't seem to work; maybe I did it wrong, maybe some detail wasn't described, who knows. I couldn't get it to work and there was no way to document that failure.


I wonder if there's a way to document such replicability failures as an erratum to the original manuscript. I feel like this would help in at least two major ways:

1) It provides a reproducibility filter. If a method isn't shown to be reproducible, publicly documenting that adds to the body of knowledge, and this would help drive an incentive towards reproducing work rather than just searching for novelty. It would document work that would otherwise be lost because there's no incentive to showcase it. When the lack of reproducible results isn't public, it's now more likely that others may waste considerable effort in the same vein.

3) It may enlist the original authors to help understand why the work didn't reproduce well. Maybe the secondary effort lacked some crucial step or understanding. The people best positioned to remedy this are the original authors, and this secondary publication incentivizes them to dialogue with those who couldn't reproduce the outcome. It doesn't mean they have to engage, but it at least gives them some reason to involve themselves in the process.


2) ???


If you are willing to provide a more substantive response, I might be able to clarify.


I think they meant that you only list 1) and 3) and 2) was missing.


> The vast majority of irreproducible papers aren't detectible as irreproducible at time of publication. They look fine, and many actually are fine. They just don't reproduce. That's an expected outcome in science.

This is not entirely true. A power analysis is how you determine reproducibility, and researchers should be doing it before they begin collecting data. Reviewers can do it post-hoc with assumptions about the expected effect size (which might come from similar studies). False positives produce inflated effect sizes, so if a result is marginally significant but shows a large effect, that is a good heuristic that the result will not reproduce.
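
For what it's worth, here's a minimal sketch of the kind of post-hoc check a reviewer could run, assuming a simple two-group design and using statsmodels; the sample size and the "plausible" effect size below are made-up numbers for illustration, not taken from any particular paper:

    # Post-hoc power check for a two-sample comparison.
    # Idea: plug in a plausible effect size (e.g. from prior work), not the
    # possibly inflated one reported in the paper, and see whether the study's
    # sample size gave it a realistic chance of detecting it.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = 20    # sample size reported in the (hypothetical) paper
    plausible_d = 0.3   # effect size expected from prior literature (assumed)

    power = analysis.power(effect_size=plausible_d, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"power to detect d={plausible_d} with n={n_per_group}/group: {power:.2f}")

    # Sample size per group that would have been needed for 80% power:
    needed = analysis.solve_power(effect_size=plausible_d, power=0.8, alpha=0.05)
    print(f"n per group needed for 80% power: {needed:.0f}")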


> A power analysis is how you determine reproducibility

All a power analysis does is reduce the chance that the result is a false negative. It doesn't reduce the chance of a false positive.

> False positives produce inflated effect sizes

Not always. Lots of studies publish as "significant" as soon as they get a p-value just under .05. Inflated effect sizes are certainly a sign that something could be wrong, but it's just one indicator.

Regardless, even if you have a power analysis at the conventional threshold of 80%, and a p-value of .05, you're still going to get spurious positive results 5% of the time, and spurious negative results 20% of the time, by definition.


> All a power analysis does is reduce the chance that the result is a false negative. It doesn't reduce the chance of a false positive.

This is true when we are dealing with an uninformative prior, but published research is known to be biased toward positive results and uncorrected multiple comparisons. This situation leads to small sample studies with high random variance being paradoxically correlated with significant results. High random variance appears as a false large effect size in the published result, so if the power is low when calculated with a smaller (adjusted) effect, there is reason to believe that the p-value is inflated. See e.g. Andrew Gelman's work on small sample studies, garden of forking paths or [0].
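
A quick simulation makes the inflation point concrete (just a sketch with made-up numbers, assuming numpy/scipy, a true effect of d = 0.2, and underpowered studies of n = 20 per group):

    # Run many small two-group studies with a modest true effect, keep only the
    # ones that reach p < .05, and look at the effect sizes they would report.
    # The "significant" subset systematically overestimates the true effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, n, n_studies = 0.2, 20, 20_000

    sig_d = []
    for _ in range(n_studies):
        a = rng.normal(0.0, 1.0, n)      # control group
        b = rng.normal(true_d, 1.0, n)   # treatment group, true effect = 0.2 SD
        t, p = stats.ttest_ind(b, a)
        if p < 0.05:
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            sig_d.append((b.mean() - a.mean()) / pooled_sd)  # observed Cohen's d

    print(f"studies reaching p<.05: {len(sig_d) / n_studies:.1%}")
    print(f"true d: {true_d}, mean d among significant studies: {np.mean(sig_d):.2f}")
    # The mean reported effect in the significant subset comes out several times
    # larger than the true 0.2.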

> Not always. Lots of studies publish as "significant" as soon as they get a p-value just under .05. Inflated effect sizes are certainly a sign that something could be wrong, but it's just one indicator.

Exactly! The implication being the above.

[0] https://en.wikipedia.org/wiki/Why_Most_Published_Research_Fi...


Peer review is a spam filter, and it’s quite useful in that context. But it’s a spam filter for people who filter out 99.9+% of papers by having such a narrow scope that they probably recognize several names on a given paper.


In AI work - which naturally lends itself to replicability improvements - we could get truly solid replicability by ratcheting up the standards for code quality, testing, and automation in AI projects. And I think LLMs can start doing a lot of that kind of QA and engineering / pre-operationalization work, because the best LLMs are already better at software and value engineering than the average postdoc AI researcher.

Most AI codebases are missing key replicability factors - either the training data/trainer is missing, the code has a poor testing / benchmark automation strategy, the code documentation is meager, or there's no real CI/CD practice for advancing the project or operationalizing it against problems caused by the Anthropocene collapse.

Some researchers are even hardened against such things, seeing them as false worship of harmful business metrics, rather than a fundamental duty that could really improve the impact of their research and its applicability towards a universal crisis that faces us all.

But we can put the lie to this view with just one glance at their code. Too much additional work is necessary to turn it into anything useful, either for further research iterations or productive operationalization. The gaps in code quality exist not because that form of code is optimal for research aims, but because researchers lack software engineering expertise, and cannot afford software engineering labor.

But thankfully the amount of software engineering labor required is not even that great - LLMs can now help swing that effort.

As a result I believe that we should work to create standards for AI assisted research repos that correct the major deficits of replicability, usability, and code quality that we see in most AI repos. Then we should campaign to adopt those standards into peer review. Let one of the reviewers be an AI that really grills your code on its quality. And actually incorporate the PRs that it proposes.

I think that would change the situation, from the current standard where academic AI repos are mainly nonreplicating throw-away code, to an opposite situation where the majority of AI research repos are easy to replicate, improve, and mobilize against the social and environmental problems facing humanity, as it navigates through the anthropocene disaster.
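
As a rough illustration of where such a standard could start (the file names and criteria below are hypothetical assumptions, not any existing tool or checklist), even a dumb automated audit of whether the basic reproducibility artifacts exist at all would flag a lot of research repos:

    # Minimal sketch of a "reproducibility smoke test" for a research repo.
    # The artifact names checked here are illustrative assumptions only.
    from pathlib import Path

    CHECKS = {
        "pinned dependencies": ["requirements.txt", "environment.yml", "pyproject.toml"],
        "tests": ["tests", "test"],
        "training entry point": ["train.py", "Makefile"],
        "CI config": [".github/workflows", ".gitlab-ci.yml"],
        "data/seed documentation": ["DATA.md", "configs"],
    }

    def audit(repo: Path) -> None:
        for label, candidates in CHECKS.items():
            found = any((repo / c).exists() for c in candidates)
            print(f"[{'ok' if found else 'MISSING'}] {label}")

    if __name__ == "__main__":
        audit(Path("."))  # run from the root of the repo under review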


The solution to the dated model exists: it's git/GitHub. I'm trying to build a "git journal" (essentially a GitHub org where projects/research gets published, paired with a Substack newsletter); details here [0]

Let me know if you have a project you'd like to get on there! Here's what it looks like, a paper on directed evolution [1]

[0] https://versioned.science/ [1] https://github.com/versioned-science/DNA_polymerase_directed...


I would worry that preprint review would turn into another front of the culture wars for certain fields and science by consensus.


A fun system would be: you 'have to' peer review publicly a (part of a) paper (at least) when you cite it in your paper.

That way, often-cited papers get a lot of different (small) public reviews that can be curated from time to time, and obscure papers get at least one review justifying why it's relevant to cite them in the new work.

Some could argue that this is too much work added to the writing process... But... At the same time... Shouldn't we read the papers we cite? Why not automatically write a small review of each one? It doesn't have to be huge, only a justification of why we (can) use it in our work.


Yeah, such a system would be great.

A bit like Google Scholar. Papers are indexed and you can access the references easily. And you would be able to comment on and review certain lines. Everyone could add notes that certain equations are wrong, etc. In the best case, authors would engage in the discussion too.

But obviously this won't work because some papers are behind paywalls :/


Peer review is pretty unpopular round these parts. In mathematics/TCS I've had mostly good experiences. Actually most of the time the review process improved my papers.

Clearly something is rotten about the way peer review is implemented in the empirical sciences. But think of all those high profile retractions you read about these days. Usually that comes about by a sort of post hoc peer review, not by anything resembling market forces.


Not to be mean to the HN community but at least a substantial minority of people who complain about peer review on here have no experience of peer review, even in applied CS and AI and machine learning, which are the hot topics today. But they've read that peer review is broken and, by god, they'll let the world know! For science!

I publish in machine learning and my experience is the same as yours: reviews have mainly helped me to improve my papers. Though to be fair this is mainly the case in journals; in conferences it's true that reviewers will often look for reasons to reject and don't try to be constructive (I always do; I still find it very hard to reject).

This is the result of the field of AI research having experienced a huge explosion of interest, and therefore submissions, in the last few years, so that all the conferences are creaking under the strain. Most of the new entrants are also junior researchers without too much experience - and that is true for both authors and reviewers (who are only invited to review after they publish in a venue). So the conferences are a bit of a mess at the moment, and the quality of the papers that get submitted and published is, overall, low.

But that's not because "peer review is broken", it's because too many people start a PhD in machine learning thinking they'll get immediately hired by Google, or OpenAI I guess. That too shall pass, and then things will calm down.


Agreed wholeheartedly but didn't want to come out swinging!

Not only that but my experience of reviewing has also been positive, and has given me ideas for research and how to present research (notation, paper structure etc)

Except my first review which was a 100 page survey paper on a very specific kind of inequality that exists for practically any graph invariant, so every page was pretty much identical just with alpha then beta then omega then chi... And the deadline was my birthday!


Up until the 1940s, "publish or perish" wasn't the obsession; the norm was publishing what had been thoroughly vetted, and not before it was ready. The sheer volume of substandard, barely novel papers allowed through, and the artificial expectation foisted on researchers to produce a blizzard of publications, are the central problems.


They should just open a comment field on arXiv.

Then I can anonymously critique the paper without fear of the authors rejecting my career-making Nature paper.


I know someone working on a plugin for that currently.


"True peer review begins after publication." --Eric Weinstein


Do you have a URL/citation for that? I'd like to use this quote in a paper.


Does openreview.net not count as preprint review in the sense the author means? It has substantial uptake in computer science.


Nice catch! I was going from the data shared in that paper[1] and didn't notice that it excluded OpenReview.net (which I'm aware of). The paper got their data[2, 3] from Sciety and it looks like OpenReview isn't included in Sciety's data.

It may have been excluded because OpenReview (as I understand it) seems to be primarily used to provide open review of conference proceedings, which I suspect the article puts in a different category than generally shared preprints.

But it would be worth analyzing OpenReview's uptake separately and thinking about what it's doing differently!

[1] https://journals.plos.org/plosbiology/article?id=10.1371/jou...

[2] https://zenodo.org/records/10070536

[3] https://lookerstudio.google.com/u/0/reporting/b09cf3e8-88c7-...


I do agree it's a bit different. How close it is maybe depends on what motivates you to be interested in the preprint review model in the first place; I could imagine this varies by person.

In a certain sense, the entire field of comp sci has become reorganized around preprint review. The 100% normal workflow now is that you first upload your paper to arXiv, circulate it informally, then whenever you want a formal review, submit to whatever conference or journal you want. The conferences and journals have basically become stamp-of-approval providers rather than really "publishers". If they accept it, you edit the arXiv entry to upload a v2 camera-ready PDF and put the venue's acceptance stamp-of-approval in the comments field.

A few reasons this might not fit the vision of preprint review, all with different solutions:

1. The reviews might not be public.

2. If accepted, it sometimes costs $$ (e.g. NeurIPS has an $800 registration fee, and some OA journals charge APCs).

3. Many of the prestigious review providers mix together two different types of review: review for technical quality and errors, versus review for perceived importance and impact. Some also have quite low acceptance rates (due to either prestige reasons or literal capacity constraints).

TMLR [1] might be the closest to addressing all three points, and has some similarity to eLife, except that unlike eLife it doesn't charge authors. It's essentially an overlay journal on openreview.net preprints (covers #1), is platinum OA (covers #2), and explicitly excludes "subjective significance" as a review criterion (covers #3).

[1] https://jmlr.org/tmlr/


Here's a recent effort at peer reviewing pre-prints that started in 2017: https://prereview.org/

