Hacker News new | past | comments | ask | show | jobs | submit login

I hear this sentiment a lot.

There was a time when academia was intensely driven by culture. People did stuff because they cared about the community surviving.

It is, in fact, possible to choose the "cooperate/cooperate" cell of the prisoner's dilemma, but it takes a lot of work, and is vulnerable to being wrecked by people who don't care about, or believe in, the goals.
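The incentive structure being described can be made concrete with a toy payoff matrix. This is only an illustration; the numbers and move names are made up, with "defect" standing in for padding your publication count and "cooperate" for publishing only solid work:

```python
# Hypothetical payoffs: (my career benefit, competitor's career benefit).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both do careful science
    ("cooperate", "defect"):    (0, 5),  # honest researcher gets out-published
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # arms race: everyone is worse off
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection dominates no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...even though mutual cooperation beats mutual defection:
assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]
```

This is why "just choose to cooperate" is unstable: each individual's dominant strategy undermines the outcome everyone prefers.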




If you keep increasing publication requirements in a Red Queen’s race, you both increase the amount of peer-review required and also reduce the time that qualified scientists have to do it. No matter how much people want to build a functioning system, they only have so many hours in the day.


A valid concern, but what is driving this Red Queen's race is that you are competing with people who cheat. If you've got one really solid paper published, but your competitor fraudulently published 10, it's awfully tempting to fake a few papers yourself.

But they multiply fraudulent papers because they can get away with it, and they can get away with it because nobody is really reviewing or replicating those results.

I propose a fix: don't publish a paper until somebody has replicated its results. There would be far fewer papers published, and the expectation of how many papers a prof should publish a year would fall to reasonable levels.

It’s the fraud which is killing us. Get rid of the fraud and a lot of the other problems with academia would start resolving themselves.


The idea that fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field. But it’s a misconception that’s largely driven by non-scientific media and bloggers. In practice literal fraud does exist, but it’s relatively rare in most scientific fields. The explosion in publication rates is largely caused by a combination of (1) more people entering the field, and (2) “paper slicing” where people submit smaller contributions as part of each paper, in order to maximize publication metrics.

As far as “no publication without replicating the results”. Think for a second about how that will work. How is anyone going to replicate a result if it’s not published? This is literally the reason that scientific publication exists in the first place: to provide a channel by which new results can be conveyed to other scientists so that they can learn from, and in some cases, replicate those results. So clearly you don’t want to prevent publication of non-replicated results, you just want to prevent people from getting some kind of academic brownie-point credit for those publications. Perhaps that’s workable, I don’t know. But you need to be a lot more knowledgeable and curious about the scientific publication system in order to have any hope of improving it. Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.



> “paper slicing”

I'm glad we agree that paper slicing is also plaguing academia. When I was a grad student, it bugged me to no end that those guys at Stanford were out-publishing me--because they got enough results for 1.5 papers, and then made (3 choose 2) papers out of them.

And yeah, if you think I wasn't tempted to follow suit, think again. I eventually left academia because I didn't want to cheat, or compete with cheaters, when we were being graded on the curve of "publish or perish".

> fraud is rampant and driving the explosion in publication rates is deeply tempting to folks who don’t work directly in a scientific field.

So, you are an assistant prof who was passed over for tenure, or you didn't get that Harvard appointment--it went to Francesca Gino, whose vita looks sooooo much better than yours, because she's doing TED talks to promote her book called "Why It Pays To Break The Rules In Work And Life". She's making $1 million a year consulting, while you are trying to get funding for your research, which isn't as flashy but at least is real science...

... if you are graded on the curve, how will you look against someone who cheated? It's the prisoner's dilemma.

> How is anyone going to replicate a result if it’s not published?

In my proposal, it works the same way that reviewers can already review a paper that hasn't been published. You write up a paper, detailing the claims you are making and the experiments and methods you used to justify those claims, and you submit it for publication.

Then, 3 or 4 anonymous reviewers decide whether it's promising enough to go to the next step: hire another research group to replicate the results. If the results replicate, then and only then are they published.

Yes, it's more expensive, so to finance it I propose that when the grant is being written, the principal investigators should also estimate what it would cost to hire reviewers and to fund experiments at another lab to confirm the results.

When the grant is approved, that money is put in escrow until the research is done and submitted for publication, and disbursed to the reviewers and replicators to compensate them for pausing their own research and evaluating someone else's. If it replicates, it's published. If it doesn't replicate, you don't get to write up books on how to cheat your way to the top while you are cheating your way to the top.
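The escrow workflow above can be sketched as a small state machine. This is a minimal illustration of the proposal, not a real system; all stage names, amounts, and class/method names are hypothetical:

```python
from enum import Enum, auto

class Stage(Enum):
    GRANT_APPROVED = auto()
    SUBMITTED = auto()
    REPLICATING = auto()
    PUBLISHED = auto()
    REJECTED = auto()

class GrantEscrow:
    """Toy model: review/replication money is set aside when the grant is written."""

    def __init__(self, research_budget: float, review_fee: float,
                 replication_budget: float):
        self.research_budget = research_budget
        self.review_fee = review_fee
        self.replication_budget = replication_budget
        self.escrow = review_fee + replication_budget  # held until submission
        self.stage = Stage.GRANT_APPROVED

    def submit(self) -> None:
        """Research is done; the write-up goes to anonymous reviewers."""
        self.stage = Stage.SUBMITTED

    def pass_review(self) -> float:
        """Reviewers approve; disburse their fee and move to replication."""
        self.stage = Stage.REPLICATING
        self.escrow -= self.review_fee
        return self.review_fee

    def replicate(self, success: bool) -> float:
        """The replicating lab is paid either way; publication only on success."""
        self.stage = Stage.PUBLISHED if success else Stage.REJECTED
        self.escrow -= self.replication_budget
        return self.replication_budget

grant = GrantEscrow(research_budget=500_000, review_fee=10_000,
                    replication_budget=20_000)
grant.submit()
grant.pass_review()
grant.replicate(success=True)
assert grant.stage is Stage.PUBLISHED and grant.escrow == 0
```

The key design point is that the reviewers and replicators are compensated from funds committed up front, so their payment does not depend on the outcome, while publication does.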

> This is literally the reason that scientific publication exists ...

100% agree, it's great to share new results and all. But if a "result" doesn't replicate, it's not a result. What, exactly, are Gino's peers supposed to learn from her fraudulent papers? If I'm trying to decide what to research, how do I know which lines are actually promising? Do I just go with researchers at a name-brand university (like Gino at Harvard)?

These days papers can be written by machine as fast as a machine gun fires bullets. There's got to be some way of separating the signal from the noise.

> brownie-point credit for publications...

Publication is very important; one of my professors explained it this way: if you don't publish your results--i.e. you don't convince your peers that what you did is worthy of publishing--it's like you didn't do anything at all. The whole idea of research is to contribute to the edifice of science, and publishing is the vehicle by which that contribution is made. It's how you deliver the contribution to everybody else. And peer review is how it is determined whether you actually made a contribution.

So the solution can't be to just stop caring about how many papers are published.

> Taken literally, your “no publication without replication” proposal would inhibit scientific replication entirely.

I hope my explanation above addresses this concern....


"Publication" is the process of writing up works and distributing them to other researchers. Having fast and efficient channels for publishing our results is essential to the progress of science.

I strongly object to any proposal that would interrupt this essential process. If you (somehow) prevented me from distributing my preliminary results to other scientists, or from reading their preliminary results, then I would actively work around your proposal. Not because I hate you, but because science fundamentally requires access to these new ideas. Moreover, any researchers who waited for replication would fall behind and get scooped by the better-connected TED-talk-giving folks, who would obviously keep exchanging results.

However, it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results. Instead what you're trying to reform is academic credit. We all know that scientists [and the bureaucrats who fund them] love to find ways to measure research productivity, and they've chosen to use publications (in specific venues) as a proxy for that contribution.

And, following Goodhart's law, any measure that becomes a target ceases to be a good measure.

So the purpose of your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues) but rather, you just want to reform the way that credit is allocated to scientists for purposes of funding, promotion, and general adulation (that's a joke.)

My suggestion is: if you want to reform the way credit is allocated, why not focus explicitly on that goal? There's no need to put in place complicated new systems that will make it harder for anyone to publish (and that real scientists will instantly work around). Instead, just go to the NSF and NEH and various university administrations and convince them to stop hiring/funding/firing researchers using the metrics they currently use.

I think overall you'll find that this is still a tough sell, and a massive uphill battle. But it has the advantages of (1) addressing the actual problem you're concerned with, (2) not preventing researchers from doing their job, (3) having fewer overall points of failure, and (4) not being subject to being worked around by funders/administrators who simply adopt new measures after you "reform" the publication system (e.g., switching to citation counts for arXiv works, or surveying other researchers for their opinion).


> If you (somehow) prevented me from distributing my preliminary results

Dude, nobody can prevent you from putting your paper on arxiv. But arxiv is a vanity press; I'm sorry to put it in such negative terms, but that's the God's honest truth of it.

We have free speech; you can get a vanity press to print whatever you want, as fast as you want it to. But that is not doing science. Alas, arxiv is needed, because the rest of the system is so broken. But imagine if you could submit your paper to any journal, and be guaranteed that it would be peer reviewed within 2 weeks. We could do that if we didn't have to rely on volunteer labor, but paid qualified people to do the job thoroughly and on time.

> Having fast and efficient channels for publishing our results

Have you ever submitted a paper to a journal? If so, I'm sure you were as frustrated as I was when my paper just sat on reviewers' desks for a year before they got around to giving it a cursory glance.

If we actually paid reviewers, we could specify that the reviews must be done on a certain time schedule. My proposal would greatly accelerate the rate at which non-fraudulent, scientific results get published and communicated to other researchers.

> I strongly object to any proposal that would interrupt this essential process.

It's not essential to science to pick the fastest and cheapest vanity press.

What's essential for science is getting repeatable results. That's science.

> Having fast and efficient channels for publishing

Ever faster publishing of ever more vanity projects is not science, nor does it help science. Quite the opposite.

> it's obvious that when you say publication you don't mean it in the literal sense of communicating scientific results.

No, that's exactly what I mean. But putting something up on arxiv isn't "communicating scientific results." Until it has been peer reviewed and shown to be replicable, it just isn't a scientific result.

> [change metric, etc]

I don't want to change the metric. We don't have a bad metric--we have frauds claiming they have met the metric when they haven't. The problem isn't with the metric, it's that the metric isn't actually being enforced like it should be.

> any measure that becomes a target ceases to be a good measure.

Peer reviews and replications are not a measure of how good your science is. The measures of good versus poor science are things like how useful your results are, how general they are, whether they unify previously disconnected areas of knowledge--things like that.

Whether you are generating repeatable results or not isn't the difference between poor science and good science. It's the difference between doing science and not doing science.

You can't get rid of peer review and the demand for repeatable results and still be doing science. Science is the process of producing peer-reviewed, repeatable results.

Peer review and repeatability aren't how you judge the results of the race; they are the bare minimum requirements to enter the race to begin with.

> your proposal really has nothing to do with scientific publication for its first-order effects (i.e., distributing results to colleagues)

Distributing what results? If your "results" can't pass peer review, or they can't be replicated, they are not scientific results. If you skip those steps, you are skipping doing science. You are not a scientist, you are just cosplaying a scientist.

In order to deliver scientific results, quickly or slowly, you actually have to have produced scientific results.

> if you want to reform the way credit is allocated, why not focus explicitly on that goal?

Well, I'm not trying to reform the way credit is allocated. I'm trying to stop people from getting credit for fraudulent results. Before credit is allocated, something credible must have been produced! And until your paper passes peer review, it is not credible, and until it has been replicated, it's not science.

> that real scientists will instantly work around.

Even calling them "real scientists" betrays a deep conceptual error. There are not "real scientists" and "other kinds of scientists." There are scientists, and there are non-scientists.

The distinction isn't between "real scientists" who take every shortcut and cheat as much as they can get away with, and "poor schlubby scientists" who don't have the guts to cheat.

Scientists (not "real scientists", just scientists) insist on peer review and reproducible results. You can't "work around" the most basic criteria for something to be scientific and still be a scientist doing science.

> just .... convince them to stop hiring/funding/firing researchers using the metrics they currently use

So... I'm supposed to go to the National Science Foundation, and every body which funds scientific research, every institution which purports to hire scientists, and somehow convince them to stop actually doing science?

*Sigh.* It's not your fault, man. The problem has been going on for so long now that there are generations of cosplayer-professors, who have been graduating cosplayer-Ph.D.s. Imagine people going to Star Trek conventions, dressing up, geeking out--but after a few generations they forget that they are cosplaying, and think they are actually on a starship....

Seems ludicrous, but that's kind of what arxiv has inadvertently done. It doesn't help that we have people who cosplay being "science" journalists or "science" popularizers, who trawl arxiv for juicy headlines, and happily write up an article about "research" which hasn't been peer reviewed or replicated. It just encourages more of the race to the bottom, by encouraging "researchers" to post exaggerated claims.


It's a long post, so just a few short thoughts.

1. You're very angry at some people in the field. I get that; everyone in science shares these feelings to some degree. But I think that kind of bitterness is bad for your objectivity and [more importantly] bad for your soul. You need to find a way to let it go. This isn't shade, it's genuine advice. Holding onto this resentment is terrible for your mental health, and it ruins the joy of actually doing science for its own sake.

2. Substantively, arXiv isn't "vanity press." Your use of this term again makes it seem like you are fixated on the role of publication for academic credit rather than publication as a means to communicate results. A number of fast-moving fields use preprints as their primary communication channel (ML is a big example.) Even slow-moving fields rely on preprints to exchange new ideas. There's a higher risk of incorrect results and "spam" in this area, but scientists routinely work with these results anyway because that's how we learn about new ideas quickly.

(Specifically in my field [of cryptography] new un-reviewed results are useful because I can usually determine accuracy myself, either by reading the proofs or running the code. If you try to convince me that I should ignore a novel result with a correct proof because it's "not science," well, all you're going to convince me of is that you don't understand science. I realize that for experimental work this can be more challenging, but even un-replicated results can still be useful to me -- because they may inspire me to think about other directions. Adding a slower layer of "professional replication and peer review" would be a net negative for me in nearly all cases, because replication takes a lot of time. At most it would be helpful for promotions and funding decisions which again is not why I read papers!)

3. I don't expect you to reform the incentive process at NSF, NEH, Universities, etc. These are incredibly difficult tasks. At the same time, reforming that stuff is much less ambitious than what you're proposing, which is to fix all academic publishing with the follow-on effect that your reforms will then repair all those broken incentive problems. To use an analogy: you're proposing to terraform Mars in order to fix the housing crisis, and I'm suggesting that maybe we just build more houses here on Earth. If your response is that I'm being unreasonable and that building more houses here on Earth is much too hard, then you're definitely not going to succeed at building new houses on Mars.

4. Your main proposal is to (somehow) come up with a pot of money to make peer review paid. I don't hate that idea, since I would love to be paid and to have better peer review. I am skeptical that this would dramatically increase the speed, quality, and availability of peer reviewing, especially when you include something as nebulous as "replication" in the goals of this new process. I am skeptical that the money exists. And I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems. Most likely it will prove impossible to do at all, and even if you did it, it would just cause less money to be allocated to actual research.

But if you can make it happen, I won't object.


> I would love to be paid and have better peer-review.

Well, let's build on that common ground :-)

> fast-moving fields use preprints as their primary communication channel

Note, I'm not proposing any changes in the preprint system. Maybe you can explain why you think getting faster and better peer reviews would stop researchers from rapidly sharing ideas?

> I am skeptical that the money exists.

Francesca Gino made over $1 million a year at Harvard. It's not a question of whether we can afford to do this; it's a question of whether we can afford NOT to do this.

If they had funded a $20k replication study 15 years ago to see whether Dan Ariely and Francesca Gino's paper was an actual scientific result, how much money would Harvard and all the funding agencies have saved?

It would have even been better for Ariely and Gino--yeah, it's no fun when your hypothesis is disproven, but that's a lot better than suffering a career-ending fraud scandal.

I think the proposal would be more than self-funding, inasmuch as it would prevent money being wasted on frauds.

> I am skeptical that this will prevent "cheating" and "gaming" of the resulting systems.

I'm sure that we will always have "evil scientist"-types. But right now, the system actually incentivizes fraud, and punishes honest researchers.

Can we at least get the incentives right?



