
>If you get rid of peer review, it’s not science. It’s just a vanity press.

If you believe replicability is central to science, the current paradigm doesn't necessarily converge on science either. And when people are graded on how many publications they garner, it borders on turning publication into a symbol of status rather than one of science.




> If you believe replicability is central to science.

I do believe that, but it doesn’t matter what anybody believes: replicable experiments and results, which peers can review and agree on, are the soul of science.

Without that it’s not science, it’s just creative writing.


Then don’t you think the first part of that process is explaining the methods and results of your experiment? That’s precisely what the current situation does. What’s lacking is the replication incentives.


I propose to supply those incentives: pay for good reviewers and replicators.

If you want something done, you gotta pay for it. We can’t just rely on volunteers.


I'm on board with that idea, as long as we can also provide guardrails against the perverse incentives of paying for them. E.g., we need to avoid frivolous reviews/replication as well as something evolving into a "pay for a good review" service.


Well, if you are paying somebody to do something, you gain a lot of leverage:

1. You can negotiate a due date. No more waiting for years before the journal's reviewers actually review your paper.

2. You can negotiate a set of deliverables. You can specify that they can't just say "this sux"; they have to show the lines where the big hairy proof is wrong, or if it's an algorithm, they have to actually code it up and run it themselves before they say it doesn't work.

3. You can more reliably attract good reviewers. If you aren't begging for people to volunteer, but you are paying good money, you can be a lot pickier about who you hire.

I mean, I've been a consultant: what are the guardrails that I won't rip off my clients? I don't want to ruin my reputation, I want repeat business, and I want to be able to charge a high hourly rate because I deliver a premium product.

Same guardrails would apply to peer reviewers and to reproducers.


Sure, but I’m poking at the bad leverage you can also wield.

1) you can create undue schedule pressure that results in a cursory review that may not catch the more nuanced problems in your investigation.

2) you can be more belligerent about not sharing data. If they want to get paid, they won’t argue.

3) you can pay for reviewers who you know will give a positive review. Without guards against this, it’s almost a certainty that the glut of PhDs will result in some treating it like a side hustle where it’s more about the economics than the science.

Some consultants are well known to play the game where they tell clients what they want to hear rather than what they need to hear. I don’t think consultancy is a good model for this.


#1 isn't an issue unique to paying peer reviewers. We've learned how to negotiate such hazards.

#2 Seems like a team that wants their paper published would be super-helpful to the reviewers and replicators... why wouldn't they be maximally motivated to help by sharing data, techniques, etc., and writing their paper so that it's easy for reviewers and replicators to do their jobs?

#3 The authors of a paper don't get to choose who their reviewers are!!

> consultants...play the game

And yet we have millions of clients hiring millions of consultants, and somehow they are able to make it work. Yeah, all these issues can arise in other contexts, and we know how to deal with them.


You are right that #1 isn't unique. But I think you're wrong that we've got the issue solved, because it's rooted in human psychology. Just look at the last few years of Boeing headlines and tell me you still think schedule pressure in a competitive environment is a solved problem.

Your response to #2 assumes the researcher wants to create the most transparent and highest-quality paper. Because of perverse incentives, I don't think this is the case. Many times researchers just want a publication because that gets them the career status they're after.

Good point on #3, but it still leaves the question about the tradeoff between quantity and quality. I can surely churn out many more reviews of questionable quality than I can a single, well-researched and thoughtful review. The quantity vs. quality tradeoff is really what is at the heart of that point.

>And yet we have millions of clients hiring millions of consultants

The existence of that market doesn't mean the market does what you're claiming. Many times, consultancy is a mechanism to pay for plausible deniability rather than a novel solution.


re #1: Yeah, bad apples will be bad apples, but that doesn't stop us from hiring people to build us airplanes and run aerospace companies. Right now we are assuming that humans are so angelic they will give us quality reviews for free.

re #2: Under my proposal, researchers in an independent lab would have to read a paper to see how to design and conduct an experiment to replicate the results. And if it didn't reproduce, the original authors don't get their paper published.

Given the stakes, don't you think researchers would exert every effort to make their paper as transparent and easy to read as possible? How carefully would they describe their experiment if they knew somebody was going to take their description and use it to check their work?

Re #3: Yeah, but again that's not a problem specific to my proposal. The same risk hangs over every employer-employee relationship.


I think requiring the review process to include replication could be a good approach, provided we're aware of the downsides. For example, I've worked in labs with sensitive data, or with proprietary processes, that they would not want to share. This would mean the advocated process would result in a lot less sharing of methods. Maybe there could be vetted independent labs that meet stringent security requirements, but that adds another layer of bureaucracy, which could, again, result in less sharing of information. There's a balancing act to be considered, and I agree that we are probably too far to one side of that balance currently.

Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point. The point is we need to create a system that mitigates those downsides, not ignore them. I don't think a store manager would be okay saying, "Well, people steal from all kinds of stores, so we don't need to try to minimize theft." They recognize stealing is a natural outcome given human tendencies and create a system to minimize it within reasonable boundaries.


> Most of your rebuttals seem to hinge on "yeah, but that problem isn't unique to publishing." That is a kind of side-stepping that misses the point.

If there is a specific objection you'd like to revisit, I'd be happy to discuss it. But I wouldn't self-describe what I'm doing as "sidestepping"; I'd say it's avoiding bikeshedding and keeping the conversation focused.

I mean, it's a pretty facile objection to say some variation on "but if we pay them how do we know we'll get our money's worth?" when we pay for goods and services all the time with very high confidence that we'll get what we pay for.

Surely, there are plenty of considerations to discuss, and I've tried to squarely address all objections which are specific to this proposal. But how to hire and use consultants, or how to ensure you get what you contracted for, are largely solved problems, and off-topic.

> This would mean the advocated process would result in a lot less sharing of methods.

I don't think my proposal would even apply to internal R&D groups who wanted to keep things proprietary. I mean, I can certainly understand wanting to reserve some methods or data as being proprietary. But choosing to do that is, ipso facto, not sharing them. How would paying reviewers and replicators for their time cause any less sharing to happen?

I mean, if your paper doesn't describe the experiments you performed in enough detail to allow other groups to replicate them, it's not a scientific paper to begin with. It's either a press release, or a whitepaper, or some other form of creative writing, and publishing it is either public relations or advertisement, not science.

Which is not to say that it's immoral or useless, or to denigrate it in any way. Not everything we do has to be science. My proposal is just for scientists communicating scientific results with other scientists. Maybe I'm missing something, but I don't see how it would inhibit the kinds of practices you are describing in any way.

It would make it harder for people to claim their "results" are scientific when they are not. It would be a big obstacle to publishing fraudulent papers in scientific journals. It would make it harder for somebody to claim the mantle of "science" to give credibility to their claims. But I really don't see how paying reviewers and replicators would stop anybody from sharing as much or as little as they wanted to.


Apologies, but when the central claim is about mitigating downsides of adding money into a system and you acknowledge the potential for downsides exists but fail to provide any mitigation, it is sidestepping the main focus of the discussion.

I also think there is a misunderstanding when you’re talking about internal R&D. The situation I’m talking about isn’t where someone wants to protect a proprietary method, but rather proprietary data. I could have sensitive information that I don’t want to share, but also recognize a method I’ve developed is useful to others. The harder you make it to share that method (by requiring me to sanitize all the data to make it non-sensitive) the less likely I’m going to share it. When things like security or law come into play, the easiest path is always “no.”

>If there is a specific objection you'd like to revisit

Take the fact that whenever you inject pay into a system, it tends to pervert that system away from the original goal and into a goal of maximizing pay. You acknowledge that but just say it isn't unique. I agree it's not unique, but what I'm after is how do you propose to mitigate it (assuming your goal isn't to simply maximize pay, but rather provide some balance of quality, pay, and quantity). What guardrails do you put in place? Maximum on the number of reviews per quarter? That might limit those reviewers who can crank out many quality reviews. Do you instead provide a framework for reviewing the reviews for quality? That adds another layer of bureaucracy to an already bureaucratic system. Do you implement reviewer scorecards? A decaying rate of pay for each review?

And on and on. Again, the intent wasn't to imply these are unique problems but to probe for good fixes. Those aspects you say are digressions (consultancy etc) are topics you brought to the discussion, seemingly to address the mitigation question without actually providing a specific response. Doing "whatever they do elsewhere" isn't really an answer.



