>Why not just roll it back to the state before we added the intellectual competitor review?

The only thing I can currently think of is that the pace of research* has grown so much that a small group of editors may be unable to handle the volume of submissions. This could result in a) an inability of the editors to thoroughly vet the submissions, b) difficulty in finding the "good" submissions (i.e., separating the wheat from the chaff), or c) a further devolution into very, very niche journals just to keep the scope manageable for the editors.

* I would concede that a very, very large proportion of current research is either heavily derivative or self-citing, so overall growth shouldn't be conflated with growth in quality research.




You are mistaken in thinking that editors are supposed to vet submissions. That's not their job! Their job is only to weed out crackpot pseudoscience (a much easier task, and one mostly solved by reputation anyway) and papers outside the journal's scope, and then to edit what remains into a consistently formatted journal publication. That's easy enough these days with LaTeX and online printing.

It is NOT the job of journal editors to rate or vet sincere submissions that are on topic for their journal. That only started happening about half a century ago, when demand for publication began to significantly outstrip the supply of journal pages, back when journals were actually printed on dead trees and kept their page counts limited to hold down costs. The thinking then was "we're getting 50 submissions but can only print 12, so let's rate them and pick the best ones," and journals started the 'peer review' [sic] process to externalize that vetting cost. It largely didn't exist before then. Only now we can accept all 50, because why not? The marginal cost of one more PDF is practically nil.


>Only now we can accept all 50, because why not?

Because the downside to creating an ever-growing haystack is that it becomes increasingly difficult to find a needle. Making it easier to create a deluge of bad research won’t make it any easier to find the worthwhile research that would actually help me in my job.

If I had the choice between collecting “all the data” and collecting just “the really good and relevant data,” I’d opt for the latter. You also contradict yourself by saying it’s not the editors’ job to vet submissions while also saying they weed out “crackpot” work; all you’re really saying is that they vet submissions loosely. I’m saying the overall system (not just the editors) has a role in providing a reasonably sized haystack (and I’d admit the current system is not great at this, but it’s better than a wild-west approach).


You just use better tools to manage it. Fine-tuned LLMs and Google Scholar-style search engines help here.
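
To make that concrete, here's a toy sketch of the reader-side ranking I mean. Nothing in it is a real product's API; the hashed bag-of-words embedding is just a runnable stand-in for whatever fine-tuned model you'd actually use:

    import hashlib
    import numpy as np

    def embed(text, dim=256):
        # Toy hashed bag-of-words vector; a real system would swap in
        # embeddings from a fine-tuned LLM or any sentence encoder.
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    def rank_papers(query, abstracts):
        # Indices of abstracts, most relevant to the query first.
        q = embed(query)
        scores = [float(np.dot(q, embed(a))) for a in abstracts]
        return sorted(range(len(abstracts)), key=scores.__getitem__, reverse=True)

The point is that the ranking happens on the reader's side, after publication, rather than a gatekeeper rejecting things upstream.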

To stretch an analogy, it's like email. The job of the editor is the same as that of the spam detection service run by hosted email providers. They actively hide scams and worthless ad email from you, and we thank them for it. Some providers have recently started offering "focused inbox" modes that prioritize emails for you too. I don't use that, but I can see why some people do. Importantly, though, they don't block email based on those heuristics the way they do for spam; you still get the non-priority emails. But imagine a world where Gmail straight up rejected any email it didn't consider a priority. Would you want that?

The situation with journals is comparable. Editors have a spam/crank detection duty, but they shouldn't be rejecting manuscripts beyond that.
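
As a sketch of that division of labor (crank_score and priority_score here are hypothetical placeholders, not anything editors actually run):

    def triage(submissions, crank_score, priority_score, spam_threshold=0.9):
        # Reject only near-certain spam/crackpottery (the editor's duty)...
        kept = [s for s in submissions if crank_score(s) < spam_threshold]
        # ...then rank everything else for readers without dropping any of it,
        # like a focused inbox: prioritization, not rejection.
        return sorted(kept, key=priority_score, reverse=True)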


What you’re describing is essentially an arms race in quantity. Yes, we can use tools to help sort, but those same tools can be used to flood the inbox and obfuscate the bad. In fact, one of the best ways to sort is to use specific journals and journal metrics as a proxy for quality. That is much, much easier (and more productive) than trying to sort with some Google Scholar advanced query. For example, it's much easier for a journal to retract an article that failed to replicate than to craft a search that filters it out.

The tone of your comment is very techno-optimist, which is very on-brand for HN. In that view, every problem is solved by technology, even the problems created by technology. I would argue that some problems are better solved with less technology, not more.


> Editors have a spam/crank detection duty, but they shouldn't be rejecting manuscripts beyond that.

If the system is working, publication in a reputable journal serves as a useful, albeit imperfect, indicator of scientific quality.

Top journals shouldn't be publishing deeply flawed work, or even decent work in clear need of a rewrite. It's not just about spam and cranks.



