
I think with the increased visibility of scientific research to the general public, it's less that science needs to stop accepting unrepeated results and more that the publishing process needs to be updated to reflect this new level of availability, and that journal databases need better relationship views between papers and their replication studies.

As an outsider looking in on the scientific process, I'm not sure how applicable my opinions are, but these seem like useful changes to me.

Basically, in reverse order, my suggestions for science to adopt are as follows:

Papers in databases need fields relating them to reproduction studies, and reproducibility needs to become a point of pride in the scientific process. Just as there is a lot of pride (and money) tied to publication itself, researchers should start to thump their chests about the reproducibility of their work, actively seeking out contemporaries and requesting a reproduction study as part of the publishing process, then updating the record afterwards.
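
As a very rough sketch of the kind of fields I mean (all names here are hypothetical and purely illustrative, not taken from any real journal database), something like:

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical record types showing how a journal database could link
    # a paper to the studies that attempt to reproduce it.

    @dataclass
    class ReplicationAttempt:
        replicating_paper_doi: str   # DOI of the study that re-ran the experiment
        outcome: str                 # e.g. "replicated", "partially replicated", "failed"
        notes: Optional[str] = None

    @dataclass
    class PaperRecord:
        doi: str
        title: str
        replication_attempts: List[ReplicationAttempt] = field(default_factory=list)

        @property
        def replication_status(self) -> str:
            # Summarize attempts so a reader (or journalist) can see at a
            # glance whether the result has been independently verified.
            if not self.replication_attempts:
                return "unreplicated"
            if any(a.outcome == "replicated" for a in self.replication_attempts):
                return "independently replicated"
            return "replication attempted, not confirmed"

The point is just that "has this been replicated, and by whom" becomes a first-class, queryable part of the record rather than something you have to dig for.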

Published papers themselves should take a moment (perhaps no more than a paragraph) to include a "for media" section that outlines the dos and don'ts of reporting on the research. For example, a cancer paper could give examples of acceptable lay-person summaries as a catch for sloppy reporting: something like "Do not write 'cure for cancer found' or 'effective treatment'; instead write 'progress made', etc." Basically, put a sucker punch to outlandish headlines and reporting right in the paper itself, and let journalists who want to be sensationalist embarrass themselves.

These seem like two very simple changes that could raise the bar for science a bit.




Those are both good, but the key here is that the media needs to understand that scientific papers that have not been independently verified are in a "maybe" state.

Of course, they probably do know this and just choose to ignore it because "Unverified Study that MIGHT Point to M&M's Being Good For You" won't get as many clicks as "M&M's Are Good For You Says New Study!"


This is sort of why I think it should be stated explicitly within the paper, not just as an aside but as part of the actual process. It's to pit less scrupulous journalists against one another, in an "honor among thieves" sort of way, I guess. If someone wants to go ahead and write clickbait, they can, but it leaves them open to someone else looking to discredit them asking, "Well, did you even read the paper? They told you not to write that."

It's not so much a check for the public's sake as one for other journalists to enforce.


I also think it would be helpful if papers listed all the possible flaws up front, including whether the results have been replicated, etc.



