Hacker News
Why publish science in peer-reviewed journals? (genomesunzipped.org)
55 points by ig1 on July 13, 2011 | 49 comments

I have one single solitary paper published in a peer-reviewed journal, so my experience is present but limited.

What I gained out of the peer review process was an article that had grey areas filled in and statistical methods applied better. The quality of the reviewed article was much better than the quality of the submitted article. The reviewers were reading the article with an even-handed critical eye; they weren't just looking for support for their pet opinions.

There's also an aspect of criticism: criticism is expected from reviewers, but random dude A on the internet will frequently get something along the lines of "who are you to criticise my methods?", or have any request for more information declined. "Please provide more info" works if you need it to get a paper published, but not if you have already published your paper on Scribd. Where's the motivation to do the rework (beyond honour, of course, but scientists are human and honour does not always suffice)?

And then there's the point that popularity doesn't equal 'correct'. Here on HN I've seen time and time again that a throwaway popular comment gets upvotes, and a well-reasoned but unpopular comment gets downvotes. Peer-reviewed articles should be about scientific rigor, not visceral feeling. Consensus of the masses is orthogonal to the needs of scientific rigor.

The main article describes very, very little in the way of quality assurance.

Yes, the popularity of useless, emotive comments here really bugs me. There is an inverse correlation between the points I get for a comment and the amount of time and expertise I put into it.

In the scientific case, though, "popularity" would be measured by the extent of citation. (Really, it already is.) That is a considerably higher bar than "I think I'll click on that arrow." For a paper to be cited, it has to make enough of an impact that people remember it and its ideas shape their thinking.

I agree. I've had two articles reviewed by now (not in a science field, but still), and only one reviewer was relatively unhelpful (they thought the paper was too dense and insufficiently well presented to do a detailed review); all the others, even when they could not recommend acceptance, went out of their way to suggest various ways of making the paper(s) better.

People are in general much more likely to bother giving negative feedback than to give positive feedback, much less defend other people's efforts. I'm afraid such a system would be no different: would a ton of "likes" offset the negativity of a couple of arseholes wreaking havoc in the comments to make it worth it?

I see a pretty significant issue with "one-click voting" for a scientific paper. Reading papers is often a tough exercise and one that involves rereading, thinking, and rereading again. How many times have you read a dense webpage, reread it over the course of a week, then clicked the upvote on HN? Never. You glance at the webpage, think "that's neat" and click away.

This isn't to say that the journal-of-reviewers model is perfect, but I am wary of casually tossing it away in favor of "upvotes." Upvotes have massive semantic ambiguity: does it mean the article is accurate? Does it mean the article is funny? Is it maybe wrong, but important to think about? Does it merely match the reader's interest? How do you express something such as "we believe your underlying methodology is correct but aren't certain your analyses are rigorous enough to prevent X." A downvote?

A better model would be something like what the Real-World Haskell book used when it was under development: allow comments (anonymous or otherwise) on each paragraph, and refine the content based on those comments. Of course, this does have the twin problems that it doesn't keep results private, and makes credit assignment a real pain :)

Even with all the problems it has, I actually prefer peer-review right now. Some of the stuff that gets self-published on arXiv is of sketchy quality, and there's just no quick way of figuring that out by just reading the abstract if a 'known' person hasn't co-authored it.

In physics at least, a substantial proportion of peer-review goes via arXiv first anyway, so it's not really an either-or. In many cases, the real peer-review happens when people see a new paper in their area on arXiv, they send in comments or challenges, etc. to the authors, and in the best case, by the time a journal article is submitted for formal peer review, many of the relevant peers have already reviewed and helped improve it.

Some mathematicians have also gotten pretty active in using blogs and wikis for peer review and even actively doing research. I don't expect journals to go away, but I wouldn't be too surprised if there are some areas of the sciences where they become more of a formal archival service: where papers go to be filed on shelves once the real action/review/dissemination has already happened.

fwiw, that's true for cs and stat, and I assume for math too. What I refer to is something like this:


This is a particularly egregious example; but if I'm looking for something on a different subject where I don't know some of the notable researchers, or don't know what I'm looking for, it would take me a lot more time than I'd like to spend to filter out the more interesting papers.

  > allow comments (anonymous or otherwise) on each
  > paragraph, and refine the content based on those 
  > comments.
PLoSOne uses this model.

While I'm in favor of opening up journals and possibly putting out some nonprofit alternatives, and while I recognize that the peer review process is definitely not without its problems, be careful not to discount it completely. I cringe when I hear phrases like "value to a community" and "collective opinion" used to refer to scientific articles. The average reader is not qualified to judge the merits of an article, hence the peer review process. Some great examples: the age of the earth, evolution, global climate change; three areas where public opinion is divided but experts overwhelmingly agree. The value of science is based on its accuracy (which is best determined by experts) not its appeal to the masses.

I was assuming that the article was suggesting a system that is similar to reddit, but with important differences. For instance, requiring real names and public voting records. Or, you could reduce a person's vote importance if they are voting on an article outside their area of expertise. There are probably other things you could do too.

Even cooler would be to have each person (explicitly or implicitly) set other people's vote importance. I can imagine a complicated system wherein voting cliques are detected, and used to feed back into the rankings of what papers show up on your page.

Ideally, the voting could be more than just "+1/-1" as well, potentially with a couple different axes, so that I could, for instance, say that I think a paper is important to the field, but likely wrong. Just because a paper is controversial doesn't mean I don't want it to be discussed (for instance, if it introduces a new computational technique, but the input data is flawed somehow).
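A minimal sketch of the expertise-weighted, multi-axis voting described in the comments above, in Python. Everything here is an illustrative assumption, not a worked-out design: the field labels, the two axes, and the 0.1 damping factor for out-of-field voters are all made up.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter_field: str   # the voter's declared area of expertise
    importance: int    # -1, 0, or +1: "this matters to the field"
    correctness: int   # -1, 0, or +1: "this is likely right"

def weighted_score(votes, paper_field, out_of_field_weight=0.1):
    """Tally each axis separately, damping out-of-field voters."""
    importance = correctness = 0.0
    for v in votes:
        w = 1.0 if v.voter_field == paper_field else out_of_field_weight
        importance += w * v.importance
        correctness += w * v.correctness
    return importance, correctness

# An in-field expert's vote counts ten times an outsider's:
votes = [Vote("astronomy", +1, +1), Vote("cs", +1, -1)]
imp, cor = weighted_score(votes, "astronomy")
# cor stays positive: the out-of-field downvote only dents it slightly.
```

Keeping the axes separate is what lets a voter express "important to the field, but likely wrong", which a single +1/-1 cannot.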

You're matching an engineering solution to a social problem. Trying to map a paper's social significance to an abstract number is really hard, and in the process you lose a lot of ability to present papers based on taste. Showing me a paper because "these numbers are high and you tend to like these kinds of numbers" is fundamentally different than "you should see this because it is groundbreaking." On the other hand, if we can create a metric for elegance and another for "brings the reader closer to enlightenment," then I'm all for it.

The property you describe is attack resistance of trust metrics.


Apart from the lack of anonymity, I think that the ideals of such a system are very similar to the ideas of any social voting system (such as Stack Overflow, Slashdot, reddit, HN). The goal is to simply have a rating system that works, right?

The trouble is that we still haven't managed to do this well in the technology world. And some of us believe that we need to go back to some level of curation.

I'd love to see a system as described that actually works. But until we (the tech world) manage to figure out a system that works, I'm not convinced that scientific publishing will fare any differently.

The technology world already has it; we have Wikipedia (edit history), Advogato (trust paths), Stack Overflow (reputation), etc., which encourage persistent personal profiles. You are more valued as a long-standing member than as one who just started.

Unlike Reddit, Slashdot and HN, which do not encourage personal profiles. Do you even remember my name? What about other HNers besides pg? Why do you read their posts anyway? They work because they apply a form of almost pure democracy, and they inherit its problems, e.g. the tyranny of the majority.

What the scientific model can do is combine several of the above: an institution API confirming credentials (referral), reviewer history (paths: did this person do other reviews?), etc. Cheap and still effective peer review. I'm not sure which form of government my suggestion corresponds to, but there are many forms that could be applied.

More: https://secure.wikimedia.org/wikipedia/en/wiki/List_of_forms...

> Or, you could reduce a person's vote importance if they are voting on an article outside their area of expertise.

I see an immediate problem there. There are magnitudes more people _outside_ any given area of expertise than inside. Even small votes add up.

Ladies and gentlemen, I present to you the logarithmic scale.
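The quip above can be made concrete: if the combined weight of out-of-field votes grows only logarithmically in their count, no flood of outsiders can swamp a handful of linearly weighted experts. A toy sketch; the specific formula is an illustrative assumption.

```python
import math

def outside_weight(n_outside_votes: int) -> float:
    """Combined weight of n out-of-field upvotes: grows as log(1 + n)."""
    return math.log1p(n_outside_votes)

# A thousandfold jump in raw outsider votes (10 -> 10,000) roughly
# quadruples their combined weight rather than multiplying it by 1000.
```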

All these systems are incredibly vulnerable to capture. Despite their many drawbacks, the best thing peer reviewed journals have going for them is their relative independence.

You cringe when you hear phrases like "collective opinion" regarding science, because science is supposed to be objective, correct? Further, I suppose you recognize that science is not subject to the will of the majority, but to the scientific process, right?

Yet, when it comes to scientific papers, you want to see them subject to the filtering of "experts" to assure they are politically correct?

During my time working in a national lab I had several papers published. Around me there were over 100 PhDs, all of whom were actively publishing papers, and we worked together on them, at least to review them. We were investigating an area of physics regarding the electrical properties of some rather exotic materials. One thing about the peer review process is that it is very highly political.

Often, perfectly legitimate papers would be refused for publication because they disagreed with a scientific view of one of the peers. Sometimes we'd have to submit the paper to another journal.

But at least this level of "politics" was scientific politics. I don't like that some Professor Emeritus is holding to an older view, as that forces us to make our argument a bit stronger, or to do a bit more experimentation to resolve his concerns. But, sometimes, there was just no way we were getting published in that journal. This is completely subjective. Sometimes our case was really not strong enough, but often, it was simply our position was too modern.

And that was the state of affairs (at that time, in that area) without any undue government influence.

I believe the earth is very old and evolution is an observable phenomenon. But I don't see how any scientist who also holds this view is threatened by articles proposing the alternative. Some creationist makes a publication claiming that evolution is wrong-- should be amusing!

On "global climate change" there is no scientific consensus. The idea that there is a "consensus" is a political talking point that lets people who don't understand the science short circuit debate by claiming that the scientific community has reached a conclusion. Worse, given the political nature of that position, government is starting to influence the actual publication of papers to make sure they comply with the AGW political correctness filter.

I shudder to think the road this country is going down if political correctness remains in the review system.

However, I recognize the right of journals to choose their publication standards as they wish. This is also why I oppose any sort of government "blessing" or requirements for journals. Having made "peer reviewed" a phrase that is code for "legitimate", I don't want to see the politically correct crowd start dictating which panels of peers are "legitimate" and which ones are not.

If it is time for a new journal to cover a new branch, then let a new journal form. This is how many of them got started in the past-- people unable to find publication elsewhere, for reasons such as topic, length, nature of the research, formed new journals.

PS- FWIW, AGW is really easily disproven if you are familiar with the basic science. The IR absorption of CO2 is below that of water vapor, the CO2 levels in the past have been higher than the "tipping point", etc. AGW is really a political, not a scientific, movement. This is what is so insidious about the idea that "all scientists agree": it makes proponents of this theory feel like they don't have to defend their position, and that anyone who disagrees is "rejecting science", when it is the reverse. "Collective opinion" is not how science works. Yet I've never met an AGW proponent who actually understood the AGW theory, and the science behind it, well enough to debate the issue, but I've gotten a lot of hate for failing to regurgitate the party line. I half expect to get banned from this forum for taking this minority view, in fact.

> I half expect to get banned from this forum for taking this minority view, in fact.

This kind of sentence reveals much about your psychology, and not in your favor; you should avoid it.

I think your comment is a little confusing, and I am not sure whether you're trying to argue in favor of crowd-sourced reviews/ratings, as opposed to merely arguing against the effectiveness of peer review and against "government influence."

Anyway, if that is the case (that you believe the opinion of peer scientists is of lesser value than that of the general public), I would point out that you are not properly examining any alternative to peer review, in particular crowd-sourcing. You also read your experience emotionally, and some of your arguments seem weak evidence for the dysfunction of the system as a whole:

> ... Sometimes we'd have to submit the paper to another journal.

> I don't like that some Professor Emeritus is holding to an older view, as that forces us to make our argument a bit stronger...

My point is that journals should use whatever methodology to choose papers for publication that they feel is appropriate. And that there should be no forced "peer review" standard, since this methodology is currently being used to exclude valid scientific research for political purposes.

This forum bans people for having a minority opinion. I wanted to mention that, because I would not be surprised to be banned for expressing such an opinion.

When it happens, you have no way of knowing it. So, you probably think this forum is an open one where any opinion is allowed so long as you are not trolling. That is not the case.

Indeed, I am thinking so, but I am ready to doubt my beliefs.

Do you have anything substantial to back this? Surely there must be external pages presenting the problems to which you refer, on websites that are out of reach of anyone in control of such a censoring system.

"But I don't see how any scientist who also holds this view is threatened by articles proposing the alternative. Some creationist makes a publication claiming that evolution is wrong-- should be amusing!"

You're right that science should be objective. You then say that evolution is an observable phenomenon. Well then, how could a paper disputing evolution be objective?

They're welcome to submit such a paper, as long as it's good science. It's not threatening, it's just potentially misleading to people who read it and gives the author the chance to claim legitimacy through publication. That's what the peer review process exists for, to weed out bad science, so that the general public is not required to try to evaluate claims that require expertise they don't have. Again, it certainly has its problems, but it's better than outsourcing to non-experts. Peer review can also sometimes be politically motivated, but, at least in my experience, that's often the argument of someone who's angry because their paper was rejected for legitimate reasons.

The fact that you think "AGW" is simply and provably false is also a beautiful demonstration of why we need peer review. You have a ridiculously simplistic view of the issue and of the status of consensus because you're opining on a question that's outside your field. (I'm just guessing, but you're not a climatologist, are you?) There are a lot of "experts" like yourself making similar claims who have no idea what state the field is in. If you tried to get such a paper published, professional climatologists would say, "hey, we're not idiots, we study this for a living and have taken basic physics and chemistry. Nice try." And they would send your paper back, as they should.

Exactly, you advocate "peer review" to censor scientific papers that don't agree with a pre-ordained political position.

That's not science, and that's not objective.

If you understood the issues, you'd understand that I disproved the AGW hypothesis in my earlier post. It really isn't that difficult. Obfuscation and complexity are the tools of politics, not science.

The article doesn't touch on the fact that all this scholarship, usually funded in some way with taxes, is locked up behind paywalls unless you want to pay $10-$40 per article (that's probably a bigger obstacle). However, arXiv works because, at least in the harder sciences, you have proofs and empirical results that are more clearly defined. It would be harder to judge papers in economics, the humanities, etc. (there's already a lot of variance in the quality of scholarship even under the current published model; I don't see how it would flourish under an open model, aside from becoming more like the blogs we read here - no offense).

There have been many experiments where people have passed jargon-laden papers that were more fiction than research off to "respected" journals. If you can solve the quality/trust problem and make it acceptable to governments and other funding bodies, then we might move forward.

The thing about the current model is that they distribute PDFs, of all things, when I think the raw text, bibliography files, and even data should be distributed with the article and available at no cost.

There's also the problem of the cost of publishing a paper in the first place. In my field (astronomy), publishing a paper in a peer-reviewed journal will cost you at least $1000 with an extra $100 for every table in it. You can easily spend $3000 for the privilege of publishing a paper in a peer-reviewed journal and then pay $20 for the privilege of reading it.

The article doesn't touch on the fact that all this scholarship, usually funded in some way with taxes, is locked up behind pay walls unless you want to pay $10-$40 per article (that's probably a bigger obstacle). However, ArXiv works because at least with harder sciences you have proofs and empirical results that are more clearly defined.

Most of the stuff on arxiv is also published in peer reviewed journals. That's why they are called preprints.

But not necessarily. There are some landmark papers like Grigori Perelman's solution of the Poincaré Conjecture that only exist on ArXiv.

The Social Science Research Network (SSRN; http://www.ssrn.com/) is an effective preprint service for the social sciences that seems to have the same professional clout as arXiv does for its domain. Yet there is no comparable preprint service for biotechnology... why?

Seems like there should be one just by connecting the dots between ArXiv and SSRN.

I wouldn't mind a two tiered system, with a curated membership and paywalls, and the riffraff/unwashed masses in a lower tier. Exclude the latter from making comments, but give out the knowledge for free.

My experience (in computer science, ymmv) is that the vast majority of researchers would agree that, with the Internet, the profits made by journals are excessive relative to what they do (i.e. what they actually do, not what they outsource to other researchers).

The reason why they are still there isn't because we're convinced that we need them (as the article implied), but because of inertia. This is the criterion by which most researchers judge each other and themselves, and it survives because few people have the guts to refuse it: everyone knows it's evil, but it's hard to walk away. People are lured by prestige, and by the tangible consequences of prestige (like funding).

We do not need a killer app to get rid of the journals, we need a good number of motivated and prominent researchers to opt out and bootstrap something else.

Not to mention that publication bias leads to incorrect results. See http://www.theatlantic.com/magazine/archive/2010/11/lies-dam... or the PLoS paper: http://www.plosmedicine.org/article/info:doi/10.1371/journal.... I am most hopeful for Google Scholar or Mendeley (mendeley.com) to open this up, but who knows?

Yesterday I got an email from a journal asking me to submit a paper. They needed it within 2 weeks. And it would be published 2 weeks thereafter. After I spewed coffee through my nose, laughing, I clicked the "spam" button.

Publication in real journals is slow; reviewers are asked to send comments within a month, but they often take more than that. Publication in real journals is difficult; the best-ranked journals reject 90 percent of submissions. Publication in real journals is flexible; complicated decisions about how to transform unpublishable into publishable are made in each step along a long, often meandering and complicated, path.

The simple answer to the question posed ("why publish in peer-reviewed journals") is that, for many fields, not doing so ends a career.

But if the question is, as it seems by comments here and in many conversations I've had with students, "what value do peer-reviewed journals have" the answer is that, for many fields, doing so improves the science for the individual paper, and for the larger corpus.

The issues he discusses seem to be addressed by Phygg, essentially a combination of arXiv and Digg for physics: http://www.phygg.com/phygg/

I see no reason why this shouldn't work for arXiv as a whole.

The most recent "paper" on Phygg, from July 2nd, begins "Red Bull Hats: Wholesale from factory, limited time! Get MLB Hats, NFL Hats, New Era Hats for your favorite teams." It's gathered net upvotes. Most papers there have been automatically cross-posted from arXiv.

I don't think we can take Phygg to be a proven model. There's a wider issue of how new gatekeepers can be established for such a new model.

and no mention of Mendeley? http://www.mendeley.com/

the pain point is that people are focused on the publication - you need to focus on the people. i don't care what you think about a paper, but i want to know what Feynman has published and who he keeps an eye on. professional scientists know who to keep an eye on - the rest is noise. science is the oldest social network there is...

"peer review is costly (in terms of time and money)"

This is an objection why? Of course science costs time and money.

"However, journals do perform a service of sorts: they filter papers, ostensibly based on interest to a community, and also ostensibly ensure that the details of an analysis are correct and clear enough to be replicated (though in practice, I doubt these filters work as intended)."

Nobody cares about your doubts. Back it up.

>Of course science costs time and money.

One problem is that peer review is essentially free for the journals, and people have to take time for it out of the other things they have to do (like teaching, or scrambling to get Yet Another Publication before the tenure review comes up, or doing the reporting to 550 external agencies). People are expected to uphold the highest standard of rigour essentially in their free time. In that sense, the current peer review leads to waste, by decreasing the efficiency of the investments originally made into other things.

Using this system, you can still gather a bunch of web-published articles, print them out together, and call the results "Nature". Just in case you miss the current system.

It gets you funding.

This. I guess at least a fraction of scientists are open to a change in reviewing. However, in funding applications and interviews, the number of publications in journals is very important. Hence, if you want to stand any chance in science, you submit to journals.

Oh, yeah. I want to publish in PLoSOne, but my more established collaborators (the ones who have arranged the NIH grants) absolutely need to publish in Nature, etc.

The author's ideas are wacko and won't work.

I've published two technical papers in peer-reviewed journals where I was the only author, published several more papers in peer-reviewed journals where I had co-authors, and wrote a Ph.D. dissertation that was reviewed as "an original contribution to knowledge worthy of publication".

The papers I published were reviewed essentially on the usual criteria of "new, correct, and significant".

Here is one place the author gets way off track: for a technical research paper, even just reading it is difficult, and judging whether it is new, correct, and significant is more difficult still. In an experimental science, judging whether the work is reproducible adds more difficulty. It's TOUGH work.

The work is so tough that for some huge majority of papers that are published, nearly the only careful reading at least for some years will be during the review process. Maybe later the paper will get referenced, carefully read by many people, etc.

So, the idea of the article of this thread that commonly for papers that just appear on-line there can be 'votes' from many competent readings is just uninformed, misinformed, ignorant, just plain wrong, brain-dead nonsense.

Here's more: Now, when an author A sends a paper to journal X, there is a good chance that the paper is actually in or close to the specialty S of journal X. So, the reviewers of journal X have a good chance of being among the best people to review the paper. If journal X does publish the paper, then author A, the paper, and people interested in specialty S get a lot of help because journal X has said that the paper is new, correct, and significant for specialty S.

STILL, even with all this help for specialty S, the paper will likely be carefully and competently read at least for some years essentially only by the reviewers of journal X.

Since it is so difficult even with the help of journal X in specialty S to get such papers read, for a paper just posted on-line there is no hope at all.

Maybe in some experimental sciences reading a paper is relatively easy; however, judging if the experimental work is reproducible promises to be difficult.

Maybe not in experimental science but in essentially all of the rest, there is a fundamental difficulty about research that is new, correct, and significant: Almost necessarily nearly no one will understand the paper easily or understand the paper at all. Else, given basic academic competitiveness, the results in the paper would long since have been old instead of new. So, the research and review processes have to work always right at the edge of what can currently be understood at all. For meeting this fundamental difficulty, the present journal system helps, and just putting papers on the Internet is hopeless.


Moreover, there are several peer-reviewed journals which are already open (like PLoS ONE, JASSS, or any other at http://www.doaj.org/ ).

The "upvote" mechanism is actually already there although it is called "number of citations". The only thing that is missing is an open system that presents such information in the proper way (something like Scopus, but free/open).

Did you read the Richard Smith article he linked to?


Yes, as that article makes clear, the point is not that a voting system is perfect, it's that the current onerous system provides no benefit over much simpler arrangements.

Okay, I just read


Yup, again, his ideas are wacko and won't work.

Again, the main problem with just putting the papers on-line and letting people 'review' them then is that mostly the papers won't get read nearly as carefully as with the present peer-review process; or, even if some paper does get read carefully, other readers mostly won't have any very good way to know this.

Uh, publish several well regarded papers in specialty S, and then maybe get invited to review some papers in specialty S. Do well reviewing and impress some journals in specialty S, and maybe get invited to be an editor of a journal in specialty S. Do well as an editor, and when Elsevier, etc., starts a new journal in specialty S, maybe get invited to be the editor in chief. That takes ballpark 20 years. Net, the present paper review system has a severe 'promotion' mechanism that is an enormous aid to quality that the journal readers can value. That is, the readers have a good way to know about the quality of the reviewing. With the proposal of this thread, even if a paper got a careful review, the readers would not know this. That is, the proposal throws out the baby with the bathwater.

As I described, reading research papers and judging if they are new, correct, and significant is TOUGH work. The present process makes a good effort at this tough work, and his proposal omits any very serious effort at such work.

With irony, eventually his proposal would lead to a system of 'upvotes' from 'respected reviewers'! Or, "I don't understand the paper, but it was upvoted by Professor Isaac Iatrogenic at Sawbones Medical School."!

He still wants voting, but he has no well designed 'voting system', and the present peer-review process does. If he wants a better system, then okay, but his proposal is for essentially no system, and that would be much worse.

For his many arguments, mostly they omit a big, huge point: He's considering the problems with the papers that were accepted but is not considering the no doubt on average much more severe problems with the huge number of papers that were rejected. His proposal to 'publish' just by posting on the Internet would result in all the rejected papers also being posted and, then, of course, soon, still more junk papers.

Why buy Tide laundry detergent? Why drink Coke? Are we afraid that both Tide and Coke might give us something toxic? No. Of course not. Why? Because Tide and Coke have been used by many millions of people and are from companies with a huge financial fortune to lose if their quality falls. So, we 'trust' Tide and Coke. So, Tide and Coke are for us valuable 'brands', "valuable" because we can trust them because we know how much evidence there is that the products are at least okay.

So, similarly, the peer-reviewed journals provide researchers with such valuable 'brand names' they can trust better than just some paper posted on the Internet.

Net, the author of


just doesn't 'get it'. He's looking for something he wants, doesn't get as much as he wants, and proposes a system that is still worse.

Let's take his point about publication delays. That doesn't matter very much! Once the paper is submitted, it's no longer a secret, and mostly relevant researchers don't actually have to wait for the printed version two years later to make use of the paper.

Let's take his point about the 'cost' of reviewing: well, the reviewers LIKE to review because it helps them keep up in their fields and, thus, is good for their careers! The effort to review is not a 'cost of the peer-review system' but just routine work by the reviewers for their own careers. In a hot field, reviewing the papers can be an advantage in time; the reviewer sees the papers maybe two years before someone who reads only published papers.

Let's take his point about reviews of papers after they are published: sure, over time, including after publication, the reputation of a paper can change. He wants to use this effect to replace the initial peer-review process. But he ignores that he is talking about papers that have been published and, thus, have passed peer review, that is, are already 'certified' as being on the inside of the fence. If papers are just published on the Internet, then that first step of being 'certified' will be missing, and a paper might languish much longer before anything like the post-publication review process he likes now can begin.

Or, he's missing the big point about how to filter water: first, just before the pump's intake, have a screen that filters out the rocks down to the size of sand. Second, have a fiber filter that removes all the solids down to about 10 microns. Maybe next deal with the usual suspects of compounds of calcium, iron, and manganese. Next deal with bacteria. Next maybe use reverse osmosis or distillation to deal with everything else. Likely now you have some good water to drink. Well, paper reviewing also has 'stages', and one of the early ones, not the last one but one that helps the last ones, is peer review. E.g., we don't ask the reverse osmosis to filter out the gravel!

His point about novel, original, or innovative work being rejected is poorly considered: we're not talking about the popularity of fashion frocks here. Instead, novel or not, a paper still has to be new, correct, and significant. Believe me, a solid proof that P = NP will not be rejected because it is too 'novel'! It will be accepted because the work is, again, new, correct, and significant. It is 'novel' mostly because it is especially significant. Again, to get published, pass new, correct, and significant. If after that the paper is also wildly 'novel', so be it; it still got published!

He missed the big, huge point about focus on research specialties: as I described, if a paper from author A goes to journal X in specialty S, then likely the paper is already in the right, or at least an appropriate, set of editors, reviewers, and researchers. If the paper is well regarded, then the people in specialty S learn about it fairly quickly. So journal X serves as an 'accumulation point' for specialty S. If papers are just published on the Internet, then we lose the value of such 'sets' and 'accumulation points'.

So, indeed, if his proposal were to become accepted, an early step would be to have, for each specialty S, a special place to post papers in specialty S.

Then, to keep down the noise level, that special place for specialty S would have a 'moderator'. A good 'moderator' would be the editor in chief of journal X. Soon he would get too busy and would want some editors, say, from journal X. The editors would get too busy and soon would want some reviewers, say, from journal X. Then all concerned would want some quality control for their special place for specialty S. So, the moderator would call himself the editor in chief again and insist that his editors insist that their reviewers actually do good peer reviews. Maybe the whole Internet site would be sponsored by Elsevier. Then they would charge for the papers that passed review. Now, except for the tree cutting, we would be back where we are now.

If he wants to improve the quality in publishing, then I have a suggestion for him: Do some brilliant research work, write it up as a paper, and submit it!

Richard Smith does not propose a voting system, but the complete elimination of gatekeepers. It's worth emphasising that he is talking about biomed exclusively: other disciplines have different scholarly cultures. Most crucially, the commercial impact of biomed papers is higher than in other fields.

When you write

>reading research papers and judging if they are new, correct, and significant is TOUGH work.

I agree, but when you write:

>The present process makes a good effort at this tough work

I am surprised you say this. Smith points to studies that show that referees' opinions have very little correlation, which suggests the minimum level of filtering actively done by referees is very low. It does not follow from this, though, that the institution of journals considered as a whole does not perform a useful filtering function, which is to say that I agree with you about your point about successive filters.

>Publication delays

Many journals follow anonymous reviewing and forbid distribution of preprints.

The other point, which he does not make, is that referees have very little incentive, besides the opinion of the editorial board, to do their job properly.

>His point about novel, original, or innovative work being rejected is poorly considered. ... If after that the paper is also wildly 'novel', so be it; it still got published!

There's no doubt in that case that the journal has been an impediment to science.

I don't advocate abandoning peer review. I do think we should be more open-eyed about the problems with it. It is a costly system, and one that does not work the way it is intended. Science needs gatekeepers, as you argue, although they need not resemble the current journal system.

On Smith's article and a voting system: uh, I need to read Smith's article ASAP! I was assuming, without careful checking, that Smith's article was much like the one in this thread. Looks like you caught me in some too-fast reading!

Yes, it looks like these articles are for biomedical work. I'm sure it's different in many small respects and maybe some larger ones.

Uh, I said that the present system makes a good "effort", and you countered with some of the problems with the "results"! You may be right!

The "correlation" evidence is not very impressive: the review process is not quite that simple. Here's a dark, dirty little secret: often, at some point in the toughest part of the paper, some reviewer thinks, "I tried to look up the background for that, and I don't have the prerequisites even for the background. I can't take out a year to study to be ready to read this paper. The parts of the paper I can read look rock solid. The new parts I can read look nice. His references are high quality. His writing is carefully done. He doesn't make wild claims. It looks like a solid piece of work. I can't find anything wrong with it. Let him publish it: if it has a big error, then maybe that will come out and be on him. I'll give him a pass and go to dinner." or some such. So in this case a low correlation between referees doesn't mean much!

"Many journals follow anonymous reviewing and forbid distribution of preprints." Sure. In most simple respects, the paper is still all locked up until it appears in print maybe two years after submission. Still, work goes on! The author of the paper can present parts of the work in lectures and seminars. Guys down the hall know and can start working on extensions. The author can work on extensions and give talks and reference the paper as "in submission". Generally the word gets around! Net, the publication delay doesn't much slow the actual research work. That is, the actual exploitation of the work in the paper doesn't really start just with the publication and people reading it. Instead, 'the word gets around'.

"The other point, which he does not make, is that referees have very little incentive, besides the opinion of the editorial board, to do their job properly." The referees have a LOT of "incentive ... to do their job properly" if they want progress in their academic careers! Good referees get invited to be editors; good editors get invited to be editors in chief. A referee, editor, or editor in chief is a 'gatekeeper' and, thus, a somewhat powerful guy. If the journal is good, the names are up in lights. So, professional reputation is enhanced. Professional connections are made. In terms of HN, there are connections in a professional social graph. A referee, editor, etc. gets to know who the up-and-comers are in their research field. They get early notice of new research results and, then, can start working on extensions. If they do a bad job, then their editor will soon not send them any more papers to review!

"There's no doubt in that case that the journal has been an impediment to science." I don't see that. Einstein published general relativity. People are still trying to understand it, test it, and understand the implications. So, at the time of publication, just how novel it was was not at all clear. But it got published and passed the first step in the filter. Slowly people understood that the paper was very important and worked hard on it. The publication process didn't end the work on the paper but did its job. I can't believe that many of the reviewers actually understood the Riemannian geometry. Still the paper got published. If the paper was actually nonsense, then that would be on Uncle Al.

I believe that the current system is okay. The people objecting just want the system to be much more than it is; they may be asking for too much. One way to improve is to quit chopping down trees, but that change is likely on the way. But replacing the current system by putting papers on the Internet and letting nearly anyone 'vote' would be much worse.
