
I think this is actually something that can be experimentally examined.

Take a sample of a large number of papers, give each one a rating based on whether it provides enough information to reproduce the work, how clear its experimental and analytical methodology is, whether the primary data and scripts are available, and so on, and then compare that rating against citation counts.

Hopefully, better papers get more attention and more citations.
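
A minimal sketch of that comparison in Python, assuming you've already scored each paper and pulled its citation count (the field names and numbers below are made up for illustration):

    # Toy sketch: rank-correlate reproducibility scores with citation counts.
    # Assumes the papers have already been scored and their citation counts
    # collected; the example values here are invented.
    from scipy.stats import spearmanr

    papers = [
        {"repro_score": 4.5, "citations": 120},
        {"repro_score": 2.0, "citations": 15},
        {"repro_score": 3.5, "citations": 60},
        {"repro_score": 1.0, "citations": 40},
    ]

    scores = [p["repro_score"] for p in papers]
    cites = [p["citations"] for p in papers]

    rho, pvalue = spearmanr(scores, cites)
    print(f"Spearman rho = {rho:.2f}, p = {pvalue:.3f}")

Spearman (rank) correlation seems like a reasonable choice since citation counts are heavily skewed; in a real study you'd also want to control for field and paper age.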

(And yeah, "peer review" as it is done before a paper is published is not supposed to establish that a paper is correct; it is supposed to validate it as interesting. Poor peer review ultimately makes a journal uninteresting, which means it might as well not exist.)




That sounds like a very interesting idea. At the least, it would be interesting to see the major classes of reproducibility problems. And there may well be a lot of low-hanging fruit, as the comments on this page suggest about data corpuses in computational fields.



