Highlight negative results to improve science (nature.com)
93 points by headalgorithm 11 days ago | 17 comments





There's another issue at play here that's tightly connected to this one: underpowered studies. Here's the process. I look elsewhere (e.g. at similar studies) to make an educated guess about the size of the effect I'm going to look for. Let's say I land on 2 units. Then I ask how many samples I need to detect a 2-unit effect using a test with a low false-positive rate. I want to detect my 2-unit effect with high probability if it's there.

So you end up with three numbers. The probability you find something given that there's nothing: the false-positive rate, often 5%. The probability you find nothing given that there's something (something being a 2-unit effect): the false-negative rate, often 20%. And the effect size you're using in these calculations: 2 units.

The smaller the effect size you're looking for, the more data you need. The lower the error rates you want, the more data you need. So you make sacrifices: you accept higher error rates, or you make optimistic effect-size estimates (making your true error rates even higher).
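To make that concrete, here's a minimal sketch assuming a two-sample t-test, using statsmodels. The SD of 4 is a hypothetical number, chosen so the raw 2-unit effect becomes a standardized effect size (Cohen's d) of 0.5:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Hypothetical: a 2-unit effect with an assumed SD of 4
    # is a standardized effect size (Cohen's d) of 0.5.
    for d in (0.5, 0.25, 0.125):
        n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
        print(f"d = {d}: {n:.0f} subjects per group")

    # Halving the effect size roughly quadruples the required n,
    # since n scales as 1/d^2.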

Using p < 5% as a publishing threshold for positive results is bad enough. Using power > 80% as a publishing threshold for negative results is even crazier: that's a 20% error rate for published negatives, and higher still if the effect estimate was optimistic.
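The same tool shows how an optimistic effect estimate inflates the real error rate. A sketch, continuing the hypothetical numbers above: size the study for d = 0.5 at 80% power, then ask what the power actually is if the true effect is only d = 0.3.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Study sized for an optimistic d = 0.5 at 80% power...
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)

    # ...but evaluated at a smaller true effect of d = 0.3.
    actual_power = analysis.power(effect_size=0.3, nobs1=n, alpha=0.05)
    print(f"nominal miss rate: 20%, actual: {1 - actual_power:.0%}")

    # The power drops to roughly 40%, i.e. the false-negative
    # rate is closer to 60% than the nominal 20%.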

What this all means is: yes, we should publish negatives, but a negative worth listening to is going to be expensive. In a Bayesian sense, it's the difference between spending a little money to learn that the effect of coffee on mood is probably between -1 and 1 (a nearly useless negative result) vs. spending a lot of money to learn that it's probably between -0.001 and 0.001 (a much more useful one).
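A back-of-the-envelope version of that cost difference, assuming a normal model with a known SD (sigma = 1, hypothetical): the 95% CI half-width shrinks as 1.96 * sigma / sqrt(n), so an interval 1000x tighter costs 1,000,000x the data.

    import math

    def n_for_halfwidth(hw, sigma=1.0, z=1.96):
        # 95% CI half-width ~= z * sigma / sqrt(n)  =>  n ~= (z * sigma / hw)^2
        return math.ceil((z * sigma / hw) ** 2)

    print(n_for_halfwidth(1.0))    # +/- 1     -> 4 subjects
    print(n_for_halfwidth(0.001))  # +/- 0.001 -> ~3.8 million subjects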


Thank you for summarizing this. Well-powered studies are rare, and this is the primary reason that null results are not given the same status as positive results. They should go into a journal of negative results and state their power analysis up front.

Reason #37 that I am no longer a scientist: most of my time as a scientist was spent attempting to replicate the positive results of other scientists and failing, because the work was not replicable.

I actually trust negative results far more than positive results now; they're a proxy for whether you can trust the investigator. I wish we had a Journal of Negative Results.


Such a fascinating take. When I was young I used to think that scientists and the scientific process were perfectly objective. As you get older you learn that human beings have human being problems. Those don't magically disappear once you become a scientist.

You see this even in highly objective fields such as physics, where much progress comes from the previous generation simply dying.

Our own egos, as well as the factors mentioned in the article, create problems much more often than you might expect when you're young and naive.


Is physics truly objective? I see people still arguing about quantum mechanics, and it looks like the safest conclusion is that we don't actually live in a straightforward, temporally causal world (prediction: https://en.wikipedia.org/wiki/Wheeler%27s_delayed-choice_exp..., experimental support: https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser)

Nobody's making progress because people are dying (we still see many vestiges of the Copenhagen interpretation!)


Yeah, I see your POV.

I think it depends where you're coming from. Physics is objective if by "objective" you mean we can predict how the wavefunction will evolve according to the Schrödinger equation.

(Depending on who you ask) it might be less objective if you ask: "But what's REALLY going on?" Then you get into the interpretations of QM, and different scientists will have different views.

I think it's at least plausible that we'll be able to answer that question in a few decades (or at least move the ball forward). The Everettian view, for example, is falsifiable, and there are experiments underway right now trying to test it.


There are Journals of Negative Results. This isn't really a fix, because people still won't publish negative results unless there's an incentive to do so. And there rarely is, because most negative results will be considered boring (unless they disprove an existing, popular result).

What we really need is to decouple the publishing decision from the outcome. A way to do this is to use registered reports.


There are a few journals of negative results. For example, https://jnrbm.biomedcentral.com/

"Journal of Negative Results in Biomedicine (JNRBM) ceased to be published by BioMed Central as of 1st September 2017"

I'm not sure about journals of negative results (though I ran across this effort: https://www.negative-results.org). However, I'm aware of at least one workshop on negative results: the 2015 ERROR workshop in the e-science domain. See https://press3.mcs.anl.gov/errorworkshop for details.

That's also why I gave up pursuing a Ph.D. in psychology and shifted my focus from experiment design to computational analysis & modeling. I failed to replicate a few cornerstone experiments with modern equipment and a more diverse subject group, no matter how hard I tried.

https://slate.com/technology/2013/05/weird-psychology-social...


Don't let your dreams be dreams. Everyone would like this.

So how many negative results has Nature published lately?

If studies with negative results are as valuable as studies with positive results, do the sales curves for reprints look similar for the 100 best-selling studies with positive results and the 100 best-selling studies with negative results?

It's not as valuable monetarily, but it is scientifically.

Applicable to almost everything, right? Relationships, business, Amazon product reviews...



