
One of my profs once remarked, "All of science is done on a volunteer basis." He was talking about peer review, which--as crucial as it is--is not something you get paid for.

Writing a review--a good review--is 1) hard work, 2) something that can only be done by somebody who has spent years in postgraduate study, and 3) a major claim on time that already has many other demands on it.

The solution? It's obvious. In a free market, how do you signal that you want more of something to be produced? Bueller? Bueller?

Yeah, that's right, you gotta pay for it. This cost should just be estimated and factored into the original grant proposals--if it's not worth $5k or $10k to fund a round of peer review, and perhaps also to fund confirming experiments--well, then it's probably not research worth doing in the first place.

So yeah, write up the grants to include the actual full cost of doing and publishing the research. It would be a great way for starving grad students to earn some coin, and the experience gained in running confirming experiments would be invaluable to help them get that R.A. position or postdoc.




I don't disagree with the proposed idea of paying for review, but I would prefer also to have guardrails to ensure a good review. I would be willing to pay for a good review because it makes the paper/investigation better. But let's face it: under the current paradigm, there are also a lot of really bad reviews. It's one thing when it's apparent that a reviewer doesn't understand something because of a lack of clarity in the writing. But it's also extremely frustrating when it's obvious the reviewer hasn't even bothered to carefully read the manuscript.

Under a payment paradigm, we need mechanisms to limit the incentive to maximize throughput as a means of getting the most pay and instead maximize the review quality. I assume there'd be good ways to do that, but I don't know what those would be.


So, we just need a meta-review to review the reviews. At a cost, of course. And in order to keep that honest, we need a meta-meta-review...


*chuckle* Recall, a paper is given to 3 or 4 reviewers. No need for hierarchies of reviewers; it's more like a jury: if all the reviewers more or less come to the same conclusion, we can have high confidence that the decision is the correct one.

Under the proposed plan, if one of the reviewers gave a review which was radically different, or was otherwise an obviously slapdash job, payment could be withheld and another reviewer commissioned.


Many CS conferences have something literally called a "meta-review", and then there are further senior people who read and oversee the meta-reviews. It stops there, though.


Unfortunately the state of meta-review is similar to that of reviews: it rarely delves deeper and mostly acts as a summarizer of the independent reviews.


Who picks the meta reviewers?


Or possibly just a way to review the reviewers. This opens itself up to competitor bias, though, so it would need to be thought out in a way to minimize that.


We need Twitter community notes for science.


And peer review also has limited value as we practice it today.

A useful review would involve:

(a) "This paper won't be accepted by Cochrane for meta analysis", "N=20 get out of here", ...

(b) Researchers provided their data files and Jupyter notebooks, the reviewer got them to run

(c) Reviewers attempt their own analysis for at least some of the data (think of the model of accounting where auditors look at a sample of the books)

(d) Reviewers come visit the lab and take a look at the apparatus

(e) Something like a PhD defense

(f) Summarize/formalize discussion of a paper that happens at a conference or online venue

(g) 5 months, 5 years, or 40 years later (once in my case) somebody goes through the math line-by-line and finds a mistake on equation 178 that propagates and affects the rest of the calculation. This knowledge ought to be captured and "stapled" to the paper.

I wouldn't say peer review is useless, I think it did improve papers I wrote a little, but reviewers do not put enough effort in to reliably catch validity problems. If you believe in meta-analysis, which you should, read this book

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1114087/

The first one, (a), is really important because the overwhelming majority of papers are excluded from Cochrane reviews for most medical topics. If 80-90% of papers in some fields are not suitable for inference, that raises tough questions such as "Why does work like this get published?" and "Why does this work get funded?" If we got half as many papers but 80-90% of them were usable, that would be a revolution.


> I wouldn't say peer review is useless, I think it did improve papers I wrote a little

Let's get back to basics here. What is the whole point of doing things scientifically in the first place? I.e. what is so special about the combination of 1) communicable descriptions of experiments, which can be re-run by anybody, and 2) the performing of said experiments and reporting the results, and 3) having others review and reproduce the results?

Why are those 3 practices soooo special? It's because they are designed to either a) persuade everybody to agree with you that your results are valid, or b) persuade you that you are wrong.

I.e., the scientific method is uniquely designed for the purpose of compelling agreement among peers. It's the 2nd-best method we have of reaching consensus (the best being the methods of proof used in mathematics).

This is important because even if you make a new gee-whiz discovery, it's useless unless other people agree that it is real. And unlike politics, or a beauty contest, or an aristocracy, it doesn't matter who you are: if your results are observable and reproducible, other scientists will agree with you, and your discovery has some chance of being generally useful. And that's why they are anonymous--it's supposed to cut through the biases and prejudices which, alas, sometimes prevent us from coming to consensus.

The point of peer review is to get a read on whether or not the author of the paper has justified their claims in enough detail that the results are likely to compel agreement. Yeah, we see if 3 or 4 people can agree on it before asking everybody to agree on it. Whoda thunk?

Point being, peer review isn't to help you improve your paper.

> reviewers do not put enough effort in to reliably catch validity problems.

Yes. This. And why don't they? Because there's only so much work you can expect somebody to do for free. Let's pay them.


> If you believe in meta-analysis, which you should,

I view meta-analysis as being like those mortgage-backed securities which crashed the world back in 2008. I mean, yeah, theoretically, a bond whose yield is a weighted average of mortgages should be less risky than any one of the mortgages in it.

But....the devil is in the details. When I start seeing meta-analyses which claim that, e.g., masks don't protect you from a respiratory illness which spreads by coughing and sneezing, or that vaccines are at best worthless and at worst cause autism, wellll.....

....I have to conclude: garbage in, garbage out.


Unfortunately, what you'll actually incentivize is spending as little effort as possible to get the money paid out.


I hear this sentiment a lot.

There was a time when academia was intensely driven by culture. People did stuff because they cared about the community surviving.

It is, in fact, possible to choose the "cooperate" triangle of the prisoner's dilemma, but it takes a lot of work, and is vulnerable to being wrecked by people who don't care / believe in the goals.


If you keep increasing publication requirements in a Red Queen’s race, you both increase the amount of peer-review required and also reduce the time that qualified scientists have to do it. No matter how much people want to build a functioning system, they only have so many hours in the day.


A valid concern, but what is driving this red-queen race is that you are competing with people who cheat. If you’ve got one really solid paper published, but your competitor fraudulently published 10, it’s real tempting to fraud a few papers yourself.

But they multiply fraudulent papers because they can get away with it, and they can get away with it because nobody is really reviewing or replicating those results.

I propose a fix: if a paper wouldn't get published unless somebody had replicated the results, there would be a lot fewer papers published, and the expectation of how many papers a prof should publish a year would fall to reasonable levels.

It’s the fraud which is killing us. Get rid of the fraud and a lot of the other problems with academia would start resolving themselves.


The idea that fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field. But it’s a misconception that’s largely driven by non-scientific media and bloggers. In practice literal fraud does exist, but it’s relatively rare in most scientific fields. The explosion in publication rates is largely caused by a combination of (1) more people entering the field, and (2) “paper slicing” where people submit smaller contributions as part of each paper, in order to maximize publication metrics.

As far as “no publication without replicating the results” goes: think for a second about how that will work. How is anyone going to replicate a result if it’s not published? This is literally the reason that scientific publication exists in the first place: to provide a channel by which new results can be conveyed to other scientists so that they can learn from, and in some cases, replicate those results. So clearly you don’t want to prevent publication of non-replicated results, you just want to prevent people from getting some kind of academic brownie-point credit for those publications. Perhaps that’s workable, I don’t know. But you need to be a lot more knowledgeable and curious about the scientific publication system in order to have any hope of improving it. Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.



> “paper slicing”

I'm glad we agree that paper slicing is also plaguing academia. When I was a grad student, it bugged me to no end that those guys at Stanford were out-publishing me--because they got enough results for 1.5 papers, and then made (3 choose 2) papers out of them.

And yeah, if you think I wasn't tempted to follow suit, think again. I eventually left academia because I didn't want to cheat, or compete with cheaters, when we were being graded on the curve of "publish or perish".

> fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field.

So, you are an assistant prof who was passed over for tenure, or you didn't get that Harvard appointment--it went to Francesca Gino, whose vita looks sooooo much better than yours, because she's doing TED talks to promote her book called "Why It Pays to Break the Rules at Work and in Life". She's making $1 million a year consulting, while you are trying to get funding for your research, which isn't as flashy but at least is real science...

... if you are graded on the curve, how will you look against someone who cheated? It's the prisoner's dilemma.

> How is anyone going to replicate a result if it’s not published?

In my proposal, it works the same way reviewers can review a paper that hasn't been published yet. You write up a paper, detailing the claims you are making and the experiments and methods you used to justify your claims, and you submit it for publication.

Then, 3 or 4 anonymous reviewers decide whether it's promising enough to go to the next step: hire another research group to replicate the results. If the results replicate, then and only then are they published.

Yes, it's more expensive, so to finance it I propose that when the grant is being written, the principal investigators should also estimate what it would cost to hire reviewers and to fund experiments at another lab to confirm the results.

When the grant is approved, that money is put in escrow until the research is done and submitted for publication, and disbursed to the reviewers and replicators to compensate them for pausing their own research and evaluating someone else's. If it replicates, it's published. If it doesn't replicate, you don't get to write up books on how to cheat your way to the top while you are cheating your way to the top.
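
Sketching that flow in code, for concreteness (a minimal toy sketch; the function names, dollar figures, and pass/fail outcomes are made up for illustration, not part of any real funding agency's process):

    # Toy sketch of the grant-with-escrow idea described above. Everything
    # here is hypothetical and only illustrates the sequence of steps.

    def write_grant(research_cost, review_cost, replication_cost):
        # The PI budgets the full cost up front: doing the work, paying
        # reviewers, and paying an independent lab to attempt replication.
        return {"research_budget": research_cost,
                "escrow": review_cost + replication_cost}

    def publication_pipeline(grant, reviewers_accept, replication_succeeds):
        # Escrow is only disbursed after the paper is submitted and the
        # reviewers/replicators have done their work.
        if not reviewers_accept:
            return "Rejected in review; escrow pays the reviewers, no publication."
        if not replication_succeeds:
            return "Failed to replicate; escrow pays reviewers and replicators, no publication."
        return "Replicated; escrow disbursed, paper published."

    grant = write_grant(research_cost=200_000, review_cost=10_000, replication_cost=40_000)
    print(publication_pipeline(grant, reviewers_accept=True, replication_succeeds=True))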

> This is literally the reason that scientific publication exists ...

100% agree, it's great to share new results and all---but if a "result" doesn't replicate, it's not a result. What, exactly, are Gino's peers supposed to learn from her fraudulent papers? If I'm trying to decide what to research, how do I know which lines are actually promising or not? Do I just go with researchers at a name-brand university (like Gino at Harvard)?

These days papers can be written by machine as fast as a machine gun fires bullets. There's got to be some way of separating the signal from the noise.

> brownie-point credit for publications...

Publication is very important; one of my professors explained it this way: if you don't publish your results--i.e. you don't convince your peers that what you did is worthy of publishing--it's like you didn't do anything at all. The whole idea of research is to contribute to the edifice of science, and publishing is the vehicle by which that contribution is made. It's how you deliver the contribution to everybody else. And peer review is how it is determined whether you actually made a contribution.

So the solution can't be to just stop caring about how many papers are published.

> Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.

I hope my explanation above addresses this concern....


"Publication" is the process of writing up works and distributing them to other researchers. Having fast and efficient channels for publishing our results is essential to the progress of science.

I strongly object to any proposal that would interrupt this essential process. If you (somehow) prevented me from distributing my preliminary results to other scientists, or from reading their preliminary results, then I would actively work around your proposal. Not because I hate you, but because science fundamentally requires access to these new ideas. Moreover, any researchers who waited for replication would fall behind and get scooped by the better-connected Ted-talk giving folks, who would obviously keep exchanging results.

However, it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results. Instead what you're trying to reform is academic credit. We all know that scientists [and the bureaucrats who fund them] love to find ways to measure research productivity, and they've chosen to use publications (in specific venues) as a proxy for that contribution.

And, following Goodhart's law, any measure that becomes a target ceases to be a good measure.

So the purpose of your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues) but rather, you just want to reform the way that credit is allocated to scientists for purposes of funding, promotion, and general adulation (that's a joke.)

My suggestion is: if you want to reform the way credit is allocated, why not focus explicitly on that goal? There's no need to put in place complicated new systems that will make it harder for anyone to publish (and that real scientists will instantly work around.) Instead, why not just go to the NSF and NEH and various university administrations and convince them to stop hiring/funding/firing researchers using the metrics they currently use?

I think overall you'll find that this is still a tough sell, and a massive uphill battle. But it has the advantages of (1) addressing the actual problem you're concerned with, (2) not preventing researchers from doing their job, (3) has fewer overall points of failure, and (4) isn't subject to being worked around by funders/administrators who simply adopt new measures after you "reform" the publication system (e.g., switching to citation counts for arXiv works, or surveying other researchers for their opinion.)


> If you (somehow) prevented me from distributing my preliminary results

Dude, nobody can prevent you from putting your paper on arxiv. But arxiv is a vanity press--I'm sorry to put it in such negative terms, but that's the God's honest truth of it.

We have free speech; you can get a vanity press to print whatever you want, as fast as you want it to. But that is not doing science. Alas, arxiv is needed, because the rest of the system is so broken. But imagine if you could submit your paper to any journal, and be guaranteed that it would be peer reviewed within 2 weeks. We could do that if we didn't have to rely on volunteer labor, but paid qualified people to do the job thoroughly and on time.

> Having fast and efficient channels for publishing our results

Have you ever submitted a paper to a journal? If so, I'm sure you were as frustrated as I was: my paper was just sitting on reviewers' desks for a year before they got around to giving it a cursory glance.

If we actually paid reviewers, we could specify that the reviews must be done on a certain time schedule. My proposal would greatly accelerate the rate at which non-fraudulent, scientific results get published and communicated to other researchers.

> I strongly object to any proposal that would interrupt this essential process.

It's not essential to science to pick the fastest and cheapest vanity press.

What's essential for science is getting repeatable results. That's science.

> Having fast and efficient channels for publishing

Ever faster publishing of ever more vanity projects is not science, nor does it help science. Quite the opposite.

> it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results.

No, that's exactly what I mean. But putting something up on arxiv isn't "communicating scientific results." Until it has been peer reviewed and shown to be replicable, it just isn't a scientific result.

> [change metric, etc]

I don't want to change the metric. We don't have a bad metric--we have frauds claiming they have met the metric when they haven't. The problem isn't with the metric, it's that the metric isn't actually being enforced like it should be.

> any measure that becomes a target ceases to be a good measure.

Peer reviews and replications are not a measure of how good your science is. The measure for good vs poor science are things like how useful your results are, how general they are, whether they unify previously disconnected areas of knowledge...things like that.

Whether you are generating repeatable results or not isn't the difference between poor science and good science. It's the difference between doing science and not doing science.

You can't get rid of peer review and the demand for repeatable results and still be doing science. Science is the process of getting peer-reviewed, repeatable results.

Peer review and repeatability aren't how you judge the results of the race--they are the bare minimum requirements to enter the race to begin with.

> your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues)

Distributing what results? If your "results" can't pass peer review, or they can't be replicated, they are not scientific results. If you skip those steps, you are skipping doing science. You are not a scientist, you are just cosplaying a scientist.

In order to deliver scientific results, quickly or slowly, you actually have to have produced scientific results.

> if you want to reform the way credit is allocated, why not focus explicitly on that goal?

Well, I'm not trying to reform the way credit is allocated. I'm trying to stop people from getting credit for fraudulent results. Before credit is allocated, something credible must have been produced!! And until your paper passes peer review, it is not credible, and until it has been replicated, it's not science.

> that real scientists will instantly work around.

Even calling them "real scientists" betrays a deep conceptual error. There are not "real scientists" and "other kinds of scientists." There are scientists, and there are non-scientists.

The distinction isn't between "real scientists" who take every shortcut and cheat as much as they can get away with, and "poor schlubby scientists" who don't have the guts to cheat.

Scientists (not "real scientists", just scientists) insist on peer review and reproducible results. You can't "work around" the most basic criteria for something to be scientific and still be a scientist doing science.

> just .... convince them to stop hiring/funding/firing researchers using the metrics they currently use

So...I'm supposed to go to the National Science Foundation, and every body which funds scientific research, every institution which purports to hire scientists---and somehow convince them to stop actually doing science?

*sigh* It's not your fault, man. The problem has been going on for so long now that there are generations of cosplayer-professors who have been graduating cosplayer-Ph.D.s. Imagine people going to Star Trek conventions, dressing up, geeking out---but after a few generations they forget that they are cosplaying, and think they are actually on a starship....

Seems ludicrous, but that's kind of what arxiv has inadvertently done. It doesn't help that we have people who cosplay as "science" journalists or "science" popularizers, who trawl arxiv for juicy headlines and happily write up an article about "research" which hasn't been peer reviewed or replicated. It just encourages more of a race to the bottom, by encouraging "researchers" to post exaggerated claims.


It's a long post, so just a few short thoughts.

1. You're very angry at some people in the field. I get that; everyone in science shares these feelings to some degree. But I think that kind of bitterness is bad for your objectivity and [more importantly] bad for your soul. You need to find a way to let it go. This isn't shade, it's genuine advice. Holding onto this resentment is terrible for your mental health, and it ruins the joy of actually doing science for its own sake.

2. Substantively, arXiv isn't "vanity press." Your use of this term again makes it seem like you are fixated on the role of publication for academic credit rather than publication as a means to communicate results. A number of fast-moving fields use preprints as their primary communication channel (ML is a big example.) Even slow-moving fields rely on preprints to exchange new ideas. There's a higher risk of incorrect results and "spam" in this area, but scientists routinely work with these results anyway because that's how we learn about new ideas quickly.

(Specifically in my field [of cryptography] new un-reviewed results are useful because I can usually determine accuracy myself, either by reading the proofs or running the code. If you try to convince me that I should ignore a novel result with a correct proof because it's "not science," well, all you're going to convince me of is that you don't understand science. I realize that for experimental work this can be more challenging, but even un-replicated results can still be useful to me -- because they may inspire me to think about other directions. Adding a slower layer of "professional replication and peer review" would be a net negative for me in nearly all cases, because replication takes a lot of time. At most it would be helpful for promotions and funding decisions which again is not why I read papers!)

3. I don't expect you to reform the incentive process at NSF, NEH, Universities, etc. These are incredibly difficult tasks. At the same time, reforming that stuff is much less ambitious than what you're proposing, which is to fix all academic publishing with the follow-on effect that your reforms will then repair all those broken incentive problems. To use an analogy: you're proposing to terraform Mars in order to fix the housing crisis, and I'm suggesting that maybe we just build more houses here on Earth. If your response is that I'm being unreasonable and that building more houses here on Earth is much too hard, then you're definitely not going to succeed at building new houses on Mars.

4. Your main proposal is to (somehow) come up with a pot of money to make peer review paid. I don't hate that idea, since I would love to be paid and have better peer-review. I am skeptical that this would dramatically increase speed, quality and availability of peer reviewing, especially when you include something as nebulous as "replication" into the goals of this new process. I am skeptical that the money exists. And I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems. Most likely it will prove impossible to do at all and even if you did it, it will just cause less money to be allocated to actual research.

But if you can make it happen, I won't object.


> I would love to be paid and have better peer-review.

Well, let's build on that common ground :-)

> fast-moving fields use preprints as their primary communication channel

Note, I'm not proposing any changes in the preprint system. Maybe you can explain why you think getting faster and better peer reviews would stop researchers from rapidly sharing ideas?

> I am skeptical that the money exists.

Francesca Gino made over $1 million a year at Harvard. It's not a question of can we afford to do this, it's a question of can we afford NOT to do this??

If they had funded a $20k replication study 15 years ago to see whether Dan Ariely and Francesca Gino's paper was an actual scientific result, how much money would Harvard and all the funding agencies have saved?

It would have even been better for Ariely and Gino--yeah, it's no fun when your hypothesis is disproven, but that's a lot better than suffering a career-ending fraud scandal.

I think the proposal would be more than self-funding, inasmuch as it would prevent money being wasted on frauds.

> I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems.

I'm sure that we will always have "evil scientist"-types. But right now, the system actually incentivizes fraud, and punishes honest researchers.

Can we at least get the incentives right?


It depends! There should probably also be a process by which reviewers themselves get graded. Then paper writers can choose whether to splurge for fewer really amazing reviewers, or a larger quantity of mediocre reviewers. Also, readers will be able to see the quality of the reviewers that looked at a preprint.


How do you have all three of anonymous authors, anonymous reviewers, and reviewer ratings?


It might be possible to have a third-party manage the reviewer ratings. Although I suspect some fields are so small/niche that if someone wanted to associate some random ID with a real person, they could match writing styles etc.


How isn't that third party just reinventing journals?


The product is different. Consumer Reports isn't reinventing cars; it provides a service by independently rating cars.


Splurging for "amazing reviewers" could also be gamed to "splurge on those reviewers who are likely to rubber-stamp my submission to get paid" (not unlike some of the questionable open-access journals' current business models).


Is that different from any other job?


As opposed to now, where it appears lots of science is peer reviewed and all the problems are found later?


Replacing a broken system, with a broken and also expensive system, does not sound like an improvement.


It might cost $100k - $1m (or more) to repeat the work and run replications. The $5k - $10k mentioned earlier would be enough to allow time for reading and thinking and checking some stuff on pen and paper.


> The $5k - $10k mentioned earlier would be enough to allow time for reading and thinking and checking some stuff on pen and paper.

The average postdoc in the US earns $32.81/hour, according to the results of some googling. Even taking overheads into account, $5k should cover more than a week's full time work.
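
A quick sanity check on that arithmetic (the 2x overhead multiplier and the 40-hour week are my assumptions, not figures from this thread):

    # How much fully loaded postdoc time do the $5k-$10k figures buy?
    # $32.81/hour is the googled average quoted above; the overhead
    # multiplier and hours per week are assumed for illustration.
    BASE_HOURLY = 32.81
    OVERHEAD_MULTIPLIER = 2.0   # assumed fringe benefits + institutional overhead
    HOURS_PER_WEEK = 40

    weekly_cost = BASE_HOURLY * OVERHEAD_MULTIPLIER * HOURS_PER_WEEK  # ~$2,625

    for budget in (5_000, 10_000):
        print(f"${budget:,} covers about {budget / weekly_cost:.1f} weeks of review time")
    # -> roughly 1.9 weeks for $5k, 3.8 weeks for $10k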


In what area of science would it take only a week or two to replicate?

It might take several days to a week of literature review just to fully understand the problem. Then you might need equipment, chemicals, cultures, etc. Then depending on the area of science, doing the actual experiment could take several weeks (waiting for reactions, computer simulations, etc). Then possibly tricky analysis and statistics on top of that.

Nowadays, science is deep.


Science is very deep; I’m sure that to replicate some studies it would cost as much as performing the original study did.

But whether it’s $10k or $100k, we really should provide the funds to do it. Expensive? Yeah, but not as expensive as funding grants for generations of psychology professors and getting nothing—or worse than nothing—in return.

Psychology could fix its replication crisis tomorrow if as part of writing every grant, they also calculated what it would take for another group to replicate their experiments, and put that money in escrow to hire reviewers and replicators who had to sign off on any papers published.


Or, research just gets published online free of charge for everyone to access, and important work will prove itself over time by being discovered, discussed and by becoming influential.

If anyone wants to change something about an article (the writing, the structure of the paper, or anything else a reviewer might want to edit) they can just do it and publish a new version. If people like the new version better, good, if they don't they can read the old version.

Peer review as a filter for publishing is terrible in a time when making a few megabytes of text and images accessible is literally free. If anyone wants to run a content aggregator (aka a Journal) they just do it. If they want to change something about the article before it's approved for the aggregator they can contact the authors or ask someone to review it or whatever.

Just make it accessible.


> Just make it accessible.

We already have that system, it's called the internet. Nothing stops you or I from putting our ideas online for all to read, comment on, update, etc.

The role of the publishers, flawed as it is, has little to do with the physical cost of producing or providing an article, and is filling (one can argue badly) a role in curation and archival that is clearly needed. Any proposal to change the system really has to address how those roles are met, hopefully cheaper than currently but definitely not more expensive because mostly people don't get paid (in $ or career or anything) now - or has to provide a funding source for it.

I don't really see how your outlined scenario addresses that, at least not in a way that's functionally different than today. Can you expand?


> We already have that system, it's called the internet. Nothing stops you or I from putting our ideas online for all to read, comment on, update, etc.

Are you asking how arxiv is different from blogspot?


No, although the mechanism for hosting the content isn’t that important.

Preprint servers are very useful but haven’t replaced journals for good reasons.


What are those reasons? The only thing I see that journals do which preprint servers couldn't easily take over is prestige.


You have the causality wrong. Prestige comes to journals by doing a good (or at least, better than peers) job of being a journal, which is providing a necessary function to the academic research process. If you want to improve on that system, you have to improve on those functions, or reduce the reliance on them by providing something better.

Put it another way, if you can design a system with a better ROC curve for classifying research, with a better TP rate for good papers, and have it cost less in real terms that current academic papers, then you are on to something. If all you've got is "papers should be free" or "it's too hard to access publishing from the outside" what you have are complaints, not solutions.


Like I said, it's not clear to me what exactly established journals have been doing "better" historically than a preprint server could. You say they are better at "being a journal" -- OK. They are established, well connected (to science communities, industry, journalism, funding agencies, etc.) and have been maintaining a reputation for a long time, usually much longer than preprint servers exist. That's basically prestige, which isn't nothing (I didn't claim prestige is nothing). However, this doesn't demonstrate journals do anything relevant to the advancement of science fundamentally better right now.

What I "have" is that

1. It's not obvious that a journal is fundamentally better at organizing unpaid voluntary reviewers compared to a preprint server.

2. Scientific publishing has insanely high profit margins. How come? My theory is that they are selling prestige first and foremost, i.e., a luxury good (to scientists, universities and funding agencies simultaneously), and purchasing decisions there are made by people who are spending public money, not their own. Both of these points (luxury good, public spending) seem like strong contributors to high margins. The public is paying for the research and for access to articles, while journals nowadays at first glance seem to only provide a little bit of web hosting, a little bit of reviewer coordination and a little bit of manuscript editing.

3. It's not obvious that the submission and peer review systems we have now (in journals) are worth the time and effort. The role of peer review is misrepresented in journalism and the expectations are not met. If one could separate publication ("preprint") on one side, and, on the other side, review and "being featured by important outlets or institutions", authors could save a lot of time (that could be used for more research). Others would have access to interesting results earlier and be able to build on top of them. Next, in a separate process some institution could select important works, scrutinize and review them, perhaps paying experts to do so, and perhaps replicate where appropriate.

The issue with this is that academics need the prestige provided by journals for career advancement, universities need the prestige to justify their spending to funding agencies and politicians, and funding agencies likewise need the prestige to justify their spending to politicians. The "replication crisis" and the like indicate that this prestige is overvalued. The hope is, economically speaking, that the market for "academic prestige" can either be disrupted, or the price the public has to pay can be lowered "through competition". It's interesting what that might look like. Preprint servers, open data and more direct science communication seem like steps in the right direction.


I'm clearly not articulating my point well. Obviously the idea is to "disrupt" the academic publication and review process, but this discussion seems to be focusing on probably the easiest part - making and hosting the documents.

> Next, in a separate process some institution could select important works, scrutinize and review them

This is basically what happens now. Pre-prints are for things that aren't necessarily ready yet (hence the "pre") but cooked enough to review and discuss and build on. The formal publication process takes some percentage of them (depending on server, could be quite small) and works through a publication process.

Currently that is mostly done by for-profit journals organizing the work.

So what you are suggesting is that we do away with that (fine!) and replace it with --- something handwavy (not fine). There has to be some real proposed mechanism of organizing the work that needs to be done that a) doesn't waste even more time of the limited pool of people who can and will do a reasonable job of reviewing (or, even worse, editing), b) does at least as good a job of filtering out the large amount of noise to find signal, and c) is at least as robust against manipulation.

For what it's worth, many of your arguments about the lack of efficacy of the system or other flaws don't seem to me to capture how much worse it could be. Best not lose track of that in trying to make it better....


> So what you are suggesting is that we do away with that (fine!) and replace it with --- something handwavy (not fine).

I wasn't really trying to suggest any concrete system to replace the current one. Neither would I be able to do so nor would it really matter since such a system couldn't be implemented in a top-down fashion. I was pondering how things are and why, which is hard enough, as well as what trends I see positively (which are simultaneously actionable recommendations for both funding agencies and scientists).

> many of your arguments about the lack of efficacy of the system or other flaws don't seem to me to capture how much worse it could be

Sure, I think science as a whole has never been more productive. Many trends also look positive: besides what I named above, there is also increased industry collaboration for applied research, increased funding overall, etc. The main challenge will be the price of creating fraudulent submissions going down and hacking the system becoming more prevalent. I think the only way to address this is to significantly reduce the "perceived authority" of any work that comes from using a LaTeX template, as well as authority that comes with the label "peer reviewed".


> I think the only way to address this is to significantly reduce the "perceived authority" of any work that comes from using a LaTeX template, as well as authority that comes with the label "peer reviewed".

Opening up access unavoidably makes the signal to noise problem worse, not just for the reasons you note (fraud, exploits) but also average quality drops. Whatever changes are made, will need a more effective filter, not less effective.


Your concerns are valid, and I think that my proposal of paying reviewers and replicators addresses them all.

> It's not obvious that a journal is fundamentally better at organizing unpaid voluntary reviewers compared to a preprint server.

So, let's not rely on unpaid, voluntary labor. Pay them.

> they are selling prestige first and foremost,

Yes. So give them a better business model--if they can make money reviewing papers, they won't have to create artificial value by creating artificial scarcity.

> The role of peer review is misrepresented in journalism and the expectations are not met.

If you pay somebody, you can specify the expectations you think should be met. If they don't meet those expectations, they don't get paid.

> this prestige is overvalued.

The prestige is not overvalued--it is just too easily obtainable by fraud. Something has got to be done.

> Preprint servers, open data and more direct science communication seem like steps in the right direction.

They are vanity presses.

And they don't even do what you think they are doing. Today, the problem isn't too little information, it's too much misinformation. LLMs can churn out papers by the millions. Is a search engine going to help you cut through that and find what you are looking for? What if 2 million papers match your search criteria? You gonna read through them all, trying to find the 5 papers which were actually written by a real scientist?

Are you even going to see them? Is the search engine going to do a better job than peer review of presenting you the papers you actually want to read?


> my proposal of paying reviewers and replicators addresses [all concerns]

Your other comment didn't say who might be paying reviewers. Journals clearly won't (why should they? they have great profits in the current system and will fight tooth and nail to delay any changes whatsoever). Universities and even funding agencies cannot (conflict of interest).

> Is the search engine going to do a better job than peer review of presenting you the papers you actually want to read?

I do actually expect to see that happen.


> Your other comment didn't say who might be paying reviewers.

In the parent comment to this thread, I talk about this. My proposal is that when a researcher writes up the grant to get their research funded, they should estimate how much it would cost to pay reviewers and replicators, and include those figures in the cost.

If the researcher gets the grant approved, then the funding agency will put the money for peer review/replication into escrow. When the research is finished and the investigators have written up a paper to describe their methods and results, the money in escrow is disbursed to the reviewers and replicators.

If reviewers agree it's good, and if it replicates, then the paper is published. If not, well, we just dodged a bullet.

> I do actually expect to see that happen.

Are search engines getting better or worse for you? It was a lot easier getting the right paper from a search engine 10 years ago. Now, you just get half a page of irrelevant ads, and another half page of links boosted by payola.

Just imagine what it will be like when there are literally MILLIONS of bad papers for each good paper. Then Billions. There is no finite limit to the amount of bullshit that LLMs can--and therefore will--output.


That's just not true. Most publicly funded research is hidden behind paywalls.


No it's exactly true. You can write up anything you want and put it on a site. The post I was replying to was suggesting an open access system (both read and write) for exchanging ideas. This exists.

What it doesn't do is effectively replace the non-open system for access to academic journals. I have a lot of sympathy for open (read) access to research, particularly publicly funded. It just isn't sensible to wave a wand and say "all papers are free to read now" without some plan for the other parts of the system and the ecosystem (academic research) that relies on it.


Why do you think that peer review needs the journals to function?


I didn't say that. It needs something. Handwaving about an emergent community isn't useful - moving from todays system to something else needs something concrete.


Whether or not your papers pass peer review--and which journals they are published in--are important criteria for hiring, tenure, whether your grants are funded, etc.

If you get rid of peer review, it’s not science. It’s just a vanity press.


>If you get rid of peer review, it’s not science. It’s just a vanity press.

If you believe replicability is central to science, the current paradigm doesn't necessarily converge on science either. And when people are graded on how many publications they garner, it borders on turning publication into a symbol of status rather than one of science.


> If you believe replicability is central to science.

I do believe that, but it doesn't matter what anybody believes: replicable experiments and results, which peers can review and agree on, are the soul of science.

Without that it’s not science, it’s just creative writing.


Then don’t you think the first part of that process is explaining the methods and results of your experiment? That’s precisely what the current situation does. What’s lacking is the replication incentives.


I propose to supply those incentives--pay for good reviewers and replicators.

If you want something done, you gotta pay for it. We can’t just rely on volunteers.


I'm on board with that idea, as long as we can also provide guardrails against the perverse incentives of paying for them. E.g., we need to avoid frivolous reviews/replication as well as something evolving into a "pay for a good review" service.


Well, if you are paying somebody to do something, you gain a lot of leverage:

1. You can negotiate a due date. No more waiting for years before the journal's reviewers actually review your paper.

2. You can negotiate a set of deliverables. You can specify that they can't just say "this sux"; they have to show the lines where the big hairy proof is wrong, or, if it's an algorithm, they have to actually code it up and run it themselves before they say it doesn't work.

3. You can more reliably attract good reviewers. If you aren't begging for people to volunteer, but you are paying good money, you can be a lot pickier about who you hire.

I mean, I've been a consultant: what are the guardrails that I won't rip off my clients? I don't want to ruin my reputation, I want repeat business, and I want to be able to charge a high hourly rate because I deliver a premium product.

Same guardrails would apply to peer reviewers and to reproducers.


Sure, but I’m poking at the bad leverage you can also wield.

1) you can create undue schedule pressure that results in a cursory review that may not catch the more nuanced problems in your investigation.

2) you can be more belligerent about not sharing data. If they want to get paid, they won't argue.

3) you can pay for reviewers who you know will give a positive review. Without guards against this, it's almost a certainty that the glut of PhDs will result in some treating it like a scammy side hustle where it's more about the economics than the science.

Some consultants are well known to play the game where they tell clients what they want to hear rather than what they need to hear. I don’t think consultancy is a good model for this.


#1 isn't an issue unique to paying peer reviewers. We've learned how to negotiate such hazards.

#2 Seems like a team who wants their paper published would be super-helpful to the reviewers and replicators....why wouldn't they be maximally motivated to help them by sharing data, techniques, etc...and writing their paper so that it's easy for reviewers and replicators to do their jobs?

#3 The authors of a paper don't get to choose who their reviewers are!!

> consultants...play the game

And yet we have millions of clients hiring millions of consultants, and somehow they are able to make it work....yeah, all these issues can arise in other contexts, we know how to deal with them.


You are right that #1 isn't unique. But I think you're wrong that we've got the issue easily solved because it's rooted in human psychology. Just look at the last few years of Boeing headlines and tell me you still think schedule pressure in a competitive environment is a solved problem.

Your response to #2 assumes the researcher wants to create the most transparent and highest-quality paper. Because of perverse incentives, I don't think this is the case. Many times researchers just want a publication because that gets them the career status they're after.

Good point on #3, but it still leaves the question about the tradeoff between quantity and quality. I can surely churn out many more reviews of questionable quality than I can a single, well-researched and thoughtful review. The quantity vs. quality tradeoff is really what is at the heart of that point.

>And yet we have millions of clients hiring millions of consultants

The existence of that market doesn't mean the market does what you're claiming. Many times, consultancy is a mechanism to pay for plausible deniability rather than a novel solution.


re #1: Yeah, bad apples will be bad apples, but that doesn't stop us from hiring people to build us airplanes and run aerospace companies. Right now we are assuming that humans are so angelic they will give us quality reviews for free.

re #2: Under my proposal, researchers in an independent lab would have to read a paper to see how to design and conduct an experiment to replicate the results. And if it didn't reproduce, the original authors wouldn't get their paper published.

Given the stakes, don't you think researchers would exert every effort to make their paper as transparent and as easy-to-read, as possible? How carefully would they describe their experiment if they knew somebody was going to take their description and use it to check their work?

Re #3: Yeah, but again that's not a problem specific to my proposal. The same risk hangs over every employer-employee relationship.


I think the idea of making the review process require replication could potentially be a good approach, given we're aware of the downsides. For example, I've worked in labs with sensitive data, or with proprietary processes that they would not want to share. This would mean the advocated process would result in a lot less sharing of methods. Maybe there's a chance there could be vetted independent labs that meet stringent security requirements, but that adds another layer of bureaucracy which could, again, result in less sharing of information. There's a balancing act to be considered, and I agree that we are probably too far on the one side of that equation currently.

Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point. The point is we need to create a system that mitigates those downsides, not ignore them. I don't think a store manager would be okay saying, "Well, people steal from all kinds of stores, so we don't need to try to minimize theft." They recognize stealing is a natural outcome given human tendencies and create a system to minimize it within reasonable boundaries.


> Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point.

If there is a specific objection you'd like to revisit, I'd be happy to discuss it. But I wouldn't self-describe what I'm doing as "sidestepping"--I'd say it's avoiding bikeshedding and keeping the conversation focused.

I mean, it's a pretty facile objection to say some variation on "but if we pay them how do we know we'll get our money's worth?" when we pay for goods and services all the time with very high confidence that we'll get what we pay for.

Surely, there's plenty of considerations to discuss, and I've tried to squarely address all objections which are specific to this proposal. But how to hire and use consultants, or how to ensure you get what you contracted for, are largely solved problems, and off-topic.

> This would mean the advocated process would result in a lot less sharing of methods.

I don't think my proposal would even apply to internal R&D groups who wanted to keep things proprietary. I mean, I can certainly understand wanting to reserve some methods or data as being proprietary. But choosing to do that is, ipso facto, not sharing them. How would paying reviewers and replicators for their time cause any less sharing to happen?

I mean, if your paper doesn't describe the experiments you performed in enough detail to allow other groups to replicate it, it's not a scientific paper to begin with. It's either a press release, or a whitepaper, or some other form of creative writing, and publishing it is either public relations or advertisement--not science.

Which is not to say that it's immoral or useless, or to denigrate it in any way. Not everything we do has to be science. My proposal is just for scientists communicating scientific results with other scientists. Maybe I'm missing something, but I don't see how it would inhibit the kinds of practices you are describing in any way.

It would make it harder for people to claim their "results" are scientific when they are not. It would be a big obstacle to publishing fraudulent papers in scientific journals. It would make it harder for somebody to claim the mantle of "science" to give credibility to their claims. But I really don't see how paying reviewers and replicators would stop anybody from sharing as much or as little as they wanted to.


Apologies, but when the central claim is about mitigating downsides of adding money into a system and you acknowledge the potential for downsides exists but fail to provide any mitigation, it is sidestepping the main focus of the discussion.

I also think there is a misunderstanding when you’re talking about internal R&D. The situation I’m talking about isn’t where someone wants to protect a proprietary method, but rather proprietary data. I could have sensitive information that I don’t want to share, but also recognize a method I’ve developed is useful to others. The harder you make it to share that method (by requiring me to sanitize all the data to make it non-sensitive) the less likely I’m going to share it. When things like security or law come into play, the easiest path is always “no.”

>If there is a specific objection you'd like to revisit

Take the fact that whenever you inject pay into a system, it tends to pervert that system away from the original goal and into a goal of maximizing pay. You acknowledge that but just say it isn't unique. I agree it's not unique, but what I'm after is how do you propose to mitigate it (assuming your goal isn't to simply maximize pay, but rather provide some balance of quality, pay, and quantity). What guardrails do you put in place? Maximum on the number of reviews per quarter? That might limit those reviewers who can crank out many quality reviews. Do you instead provide a framework for reviewing the reviews for quality? That adds another layer of bureaucracy to an already bureaucratic system. Do you implement reviewer scorecards? A decaying rate of pay for each review?

And on and on. Again, the intent wasn't to imply these are unique problems but to probe for good fixes. Those aspects you say are digressions (consultancy etc) are topics you brought to the discussion, seemingly to address the mitigation question without actually providing a specific response. Doing "whatever they do elsewhere" isn't really an answer.


And the current system is a holy grail that can never change? You can still base your hiring on reviews made by peers.


The outcome would most likely be exactly this: "it's probably not research worth doing in the first place" (and why would you want to signal that you want more busy work?)



