
I have an alternative proposal: do a study right the first time.

That means:

A) Pre-registering the study design, including the statistical analysis. Otherwise, attaching a big label "Exploratory! Additional confirmation needed!"

B) Properly powering the study. That means gathering a sample large enough that the chance of a false negative isn't just a coin flip (see the power-calculation sketch after this list).

C) Making the data and analysis (scripts, etc.) publicly available where possible. It's truly astounding that this is not a best practice everywhere.

D) Making the analysis reproducible without black magic. That includes C) as well as a more complete methods section and scripting the analysis end to end (one can call that automation, but I see it more as reproducibility; see the second sketch after this list).
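
As a minimal sketch of what B) involves, here's a sample-size calculation with statsmodels; the effect size, alpha, and power target are assumed values for illustration, not numbers from any particular study:

    # How many subjects per group does a two-sample t-test need?
    # All inputs are assumptions chosen for illustration.
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.4,  # assumed standardized effect (Cohen's d)
        alpha=0.05,       # two-sided significance level
        power=0.8,        # 80% chance of detecting a true effect
    )
    print(f"required n per group: {n_per_group:.0f}")  # ~100

A study run at 50% power really is a coin flip on a true effect, which is the point of B).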
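
And for D), a sketch of what scripted, reproducible analysis might look like: a single entry point that regenerates the reported numbers from the raw data with a fixed seed. The file and column names here are hypothetical:

    # Hypothetical `run_analysis.py`: one command should regenerate every
    # reported number from the raw data shipped with the repo (point C).
    import numpy as np
    import pandas as pd

    SEED = 12345                # fixed seed, so any resampling is repeatable
    RAW = "data/trial_raw.csv"  # hypothetical raw-data file

    def main():
        rng = np.random.default_rng(SEED)
        outcome = pd.read_csv(RAW)["outcome"].to_numpy()
        # The pre-registered analysis goes here; a bootstrap CI as a stand-in:
        boots = rng.choice(outcome, size=(10_000, len(outcome))).mean(axis=1)
        lo, hi = np.quantile(boots, [0.025, 0.975])
        print(f"mean = {outcome.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

    if __name__ == "__main__":
        main()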

Replication of the entire study is great, but it's also inefficient in the case of a perfect replication (the goal). Two identical and independent experiments fare worse than a single experiment with twice the sample size: if a finding counts only when both experiments reach significance, the false negative rate rises; if either positive result counts, the false positive rate rises. Additionally, it's unclear how to evaluate conflicting results (unless one does a proper meta-analysis--but then why not just have a bigger single experiment?).
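
A quick normal-approximation sketch of that trade-off; the effect size, alpha, and sample sizes are assumed values:

    # Compare two independent studies of n each against one study of 2n.
    # Assumed: two-sided two-sample z-test, true effect d = 0.3, alpha = 0.05.
    from scipy.stats import norm

    d, alpha = 0.3, 0.05
    z_crit = norm.ppf(1 - alpha / 2)

    def power(n_per_arm):
        # Normal-approximation power, ignoring the negligible far tail.
        return norm.cdf(d * (n_per_arm / 2) ** 0.5 - z_crit)

    p = power(100)
    print(f"one study, n=100/arm:        power ~ {p:.2f}")                # ~0.56
    print(f"two studies, both must hit:  power ~ {p**2:.2f}")             # ~0.32
    print(f"two studies, either counts:  alpha ~ {1-(1-alpha)**2:.3f}")   # ~0.098
    print(f"one study, n=200/arm:        power ~ {power(200):.2f}")       # ~0.85

Under these assumed numbers, the "both must replicate" rule has well under half the power of simply doubling the sample.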

Your proposal is comparable to saying that checks and balances are not needed in a democracy; politicians just need to govern "right". This is about incentivising scientists to do the right thing instead of merely demanding it, as you do.


How is advocating for a new set of best practices any more "demanding" or wishful than a regime of obligatory replication? And how is this categorically different from current practices such as peer review, conflict-of-interest disclosure, or IRB approval?



