
high quality work could be measured by community support.

Absolutely not. Assessing the quality and correctness of groundbreaking research requires a large amount of both talent and effort; a Reddit- or HN-style voting system would swamp the signal with uninformed noise.

Absolutely agree. It seems genuinely hard even for experienced researchers to predict which papers will later be seen as important [1].

However, something like PageRank might work well: the more citations your papers attract, the more weight your reviews carry. There are obvious problems, such as people learning how to game the ranking system. On the other hand, the current system is arguably worse in this regard.

[1] http://www.bartneck.de/publications/2009/scientometricAnalys...
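The PageRank idea above can be sketched in a few lines. This is a toy illustration with made-up data, not anything from the thread: papers are nodes, an edge A -> B means "paper A cites paper B", and the resulting scores could then be rolled up per author to weight that author's reviews. Plain power iteration, no external libraries; all node names are illustrative.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it cites."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1.0 - damping) / n for node in nodes}
        for src in nodes:
            targets = links.get(src, [])
            if targets:
                # Split this node's rank evenly across everything it cites.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling node (cites nothing): spread its rank evenly.
                for t in nodes:
                    new[t] += damping * rank[src] / n
        rank = new
    return rank

citations = {"A": ["C"], "B": ["C"]}  # C itself cites nothing
scores = pagerank(citations)
# C, cited by both A and B, ends up with the highest score.
```

Gaming this is exactly the problem mentioned above: citation rings inflate scores the same way link farms once inflated web PageRank.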

That would certainly help. But more than anything else, I think what we need (at least in computer science) is a mechanism to penalize people who publish the exact same research at half a dozen conferences.

Running PageRank on authors could work, though, with an unnormalized weighting function of "how many papers by author X cite something written by author Y". That way, if author Y publishes the same paper multiple times, it has no extra effect even if both copies get cited; and if author X publishes the same paper several times, all of his outgoing links are downweighted equally.
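A sketch of the weighting described above, with assumed data structures (each paper maps to its author and the papers it cites; all ids are illustrative). The edge weight X -> Y counts distinct papers by X that cite anything by Y, so citing several duplicate copies of one result from a single paper still counts once.

```python
from collections import defaultdict

papers = {  # paper id -> (author, list of cited paper ids)
    "p1": ("X", ["p3", "p4"]),
    "p2": ("X", ["p3"]),
    "p3": ("Y", []),
    "p4": ("Y", []),  # even if p4 duplicates p3, the weight X -> Y
                      # is 2 (one per citing paper), not 3
}

def author_weights(papers):
    """(citing_author, cited_author) -> number of citing papers."""
    weights = defaultdict(int)
    for pid, (author, cites) in papers.items():
        cited_authors = {papers[c][0] for c in cites if c in papers}
        cited_authors.discard(author)  # ignore self-citations
        for other in cited_authors:
            weights[(author, other)] += 1
    return dict(weights)

weights = author_weights(papers)  # -> {('X', 'Y'): 2}
```

These weights would then feed a weighted PageRank; since PageRank normalizes each node's outgoing weights anyway, X duplicating his own papers scales all of his edges equally and changes nothing, as described above.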

Don't search engines already penalize duplicate content? Maybe it would suffice to 'just' put all papers online, convert all references to hyperlinks, and let Google sort out what is important and what is not. Actually, at least in computer science, I don't think stretching one's research across several publications is done widely, at least when looking at 'top' conferences.
