Hacker News

We shouldn't "accept" or "reject" results at all.

It's not a binary option. One poor experiment might give us some evidence that something is true. A single well-reviewed experiment gives us more confidence. Successful replication adds still more, as do the reputation of the person conducting the experiment and the way in which it was conducted.

It's not a binary thing where we decide something is accepted or rejected; we gather evidence and weigh it accordingly.
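The "gather evidence and weigh it accordingly" view can be sketched as Bayesian updating: each experiment shifts a degree of belief rather than flipping a binary accept/reject switch. A minimal sketch (the likelihood ratios here are made-up illustrative numbers, not from any real study):

```python
def update(prior, likelihood_ratio):
    """Bayesian update on the odds scale: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

belief = 0.5  # start agnostic
# Hypothetical likelihood ratios: one weak experiment, one well-reviewed
# experiment, one successful replication.
for lr in [2, 5, 5]:
    belief = update(belief, lr)

print(round(belief, 3))  # belief rises with each result, never hits "accepted"
```

Nothing in this process ever declares the hypothesis "true"; the belief just gets closer to 1 as evidence accumulates, and a failed replication (LR < 1) would push it back down.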

So many scientists I talk to don't have a basic understanding of the philosophy of science. I don't necessarily blame them--I understand why "philosophy" as an academic field is seen as soft, speculative, and pretentious compared to the rigor of science--but as Daniel Dennett said, "There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."

These days, if you ask a scientist "So how do we prove something is true using science?" they'll recite Popper's falsificationism as if it were a fundamental truth rather than one particular way of looking at the world. But the huge gap between the theory taught in undergrad--that science can't actually prove anything true, only disprove things on the way to better hypotheses--and the real-world process of running an experiment, analyzing data, and publishing a paper goes unaddressed. The idea that there's a particular bar that must be cleared before we accept something as true is exactly what got us into this mess in the first place! Scientific publishing rests on the naive implicit assumption that a p-value < 0.05 means something is true, or at least likely true; this author is just suggesting that true things are those which yield a p-value under 0.05 twice!
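To see what "p < 0.05 twice" actually buys you: under a true null hypothesis, p-values are uniform on [0, 1], so one significance test gives a false positive about 5% of the time and two independent tests in a row about 0.25% of the time. That lowers the error rate, but it doesn't convert a threshold into truth. A quick simulation of this arithmetic (simulated p-values, no real data):

```python
import random

random.seed(0)
TRIALS = 100_000
ALPHA = 0.05

# Under a true null hypothesis, a p-value is a uniform draw on [0, 1].
single = sum(random.random() < ALPHA for _ in range(TRIALS))
double = sum(random.random() < ALPHA and random.random() < ALPHA
             for _ in range(TRIALS))

print(single / TRIALS)  # roughly 0.05: one "significant" result by chance
print(double / TRIALS)  # roughly 0.0025: two in a row, rarer but not zero
```

Run enough experiments on a false hypothesis and some will clear the doubled bar anyway; the bar changes how often we're fooled, not whether the conclusion is true.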

What's needed, in my opinion at least, is a more existential, practically grounded view of science: one more agnostic about the "truth" of our models and more attentive to what we should actually do given the data. Instead of worrying about whether a particular model is "true" or "false," and thus whether we should "accept" or "reject" an experiment, focus on the predictions that can be made from the total data gathered, and on how we should actually act on the datapoints collected. As it stands, we get situations like the dismal state of the global warming debate, because any decent scientist knows better than to say they're absolutely sure it's happening, or a replication crisis driven by experiments designed to prop up a larger model rather than stand on their own.

^ One of the only reasonable responses here.

The arXiv, but instead of marking a paper "published" or "not published" we put a score on it suggesting how strongly one should believe its thesis.

Kinda like Rotten Tomatoes, but... for science?
