
First analysis of ‘pre-registered’ studies shows sharp rise in null findings - danso
https://www.nature.com/articles/d41586-018-07118-1
======
cbkeller
Geologist here. This is super-important for hypothesis science.

At the same time, I think one of the biggest inaccuracies in the public
perception of science (and one of my pet peeves) is the idea that all science
is hypothesis science. It turns out there's also still plenty of discovery
science to be done -- and while it's less common than it was 100 or 200 years
ago, it's quite important!

In geology, this often quite literally takes the form of a blank space on the
map -- there are plenty of unmapped quadrangles on the geologic map of the
world at 1:24,000 and finer scales, and the USGS will pay you just to learn
how to fill them in [1].

This is one of the few subjects on which the misinformation is so pervasive
that even the Wikipedia article [2] is substantially inaccurate (I blame the
overly simplified formulation of the scientific method that most of us are
first exposed to in elementary school). The one thing you can correctly infer
from the Wikipedia article, though, is that discovery science is having a bit
of a resurgence recently thanks to the proliferation and reuse of large
datasets.

[1] https://ncgmp.usgs.gov/about/edmap.html
[2] https://en.wikipedia.org/wiki/Discovery_science

One of the simplest distinguishing characteristics is that there's no such
thing as a negative result in discovery science: if you're mapping a blank
area, whatever you find will be something we didn't know before.

~~~
SubiculumCode
I think neuroscience is much like exploration. You can make hypotheses, but we
barely know what is going on, so the hypotheses aren't really well grounded.
Find what's there, publish the result, build the map, so to speak, so that
future scientists can make more informed hypotheses.

~~~
Balgair
Oh man, in neuro, discovery-centric science is really controversial. Not the
findings themselves, but the _funding_. If you propose to just
stain/viral-inject some tissue, electro-physiologically map some connections,
etc., you'll just get back comments that say 'fishing expedition'. Maybe you
can do this
with NIH when you propose to contrast it with diseased tissues, but even then,
it's unlikely (Well, less likely than the 'normal' low rate of proposals being
funded).

~~~
JoeAltmaier
I'm appalled. That fundamental science is dismissed as 'a fishing expedition'
speaks volumes about how academia is failing.

~~~
robotresearcher
Don’t be appalled. This is not a failure. There are limited funds. Hypothesis-
driven work is thought to be more likely to be productive.

If you were handing out the funding, assuming equally good teams, do you
choose the work that targets Parkinson’s mechanisms or the functional map of a
less-explored bit of CNS?

We _are_ doing basic exploration as well. We are just prioritizing targeted
work in a resource constrained environment.

~~~
Balgair
Maaaaybe. Truth be told, it's all about networking. Grants go to the friends
of the reviewers more so than to those deemed most deserving [0]. Granted,
it's _really_ hard to read a bunch of proposals and determine whose is 'most'
deserving. So reviewers act like the humans they are and give the grants to
their friends.

[0] Total anecdata; I have no source for this.

~~~
robotresearcher
I don't review grants for my friends. This is a strong community norm. It's
unethical to do so. This causes a shortage of qualified reviewers, but that's
less of a moral hazard than conflicts of interest.

------
xupybd
This would be a huge leap forward. I can't blame researchers for tailoring
their research to help their own careers, but this should help prevent that. A
null finding might not be great for the researcher, but it's a win for
science.

~~~
cabalamat
> A null finding might not be great for the researcher

The fix for that would be to change science funding so a null finding doesn't
hurt the researcher's career.

~~~
syrrim
Never gonna happen. Think about it: the principle of science is mapping the
empirical world with a logical one. A null result represents a failure to do
this. In a binary case (either this or that), a null result gives us
information, but the majority of null results tell us nothing more than "our
hypothesis was wrong". Having a wrong hypothesis might just mean you got
unlucky, sure - but repeated enough times and treated statistically, it means
you are a less skilled scientist than someone who gets more affirmative
results.

~~~
pbhjpbhj
You're in a maze of twisty passages. If someone can tell you 'turning left is
just a wall' then you can focus on the other routes.

~~~
TeMPOraL
But in reality, it's more like you're blind in a forest, and that someone is
operating a LIDAR with binary output (there is/isn't something less than
50 meters ahead). They tell you that at 120.001 degrees there is something.
Then they tell you that at 120.002 degrees there is something. Etc. If null
results were treated the same as positive findings, people would just spam
null results.

~~~
pbhjpbhj
You're absolutely right. Isn't it good that my rubbish attempt was there to
stimulate your useful one ^_^

There's something to be said for brute force on rare occasions though.

------
salty_biscuits
Do you think they pre-registered their study into the effect of pre-registered
studies on the rate of null findings?

~~~
hn_throwaway_99
This was a meta-analysis, not a study of its own.

~~~
hannob
You can totally p-hack a meta-analysis; you have many degrees of freedom in
interpreting the data.

Preregistration of a meta-analysis makes sense and should be standard
practice.

~~~
Bartweiss
The whole framing of meta-analyses as the pinnacle of reliability is sort of
concerning. They're a huge help with messy questions like "what the hell is up
with priming?", but between choosing which studies to include, standardizing
their controls, normalizing their results, etc. it's amazing how far they can
be skewed.

One thing I don't know: what does preregistration even look like for a
meta-analysis? You can state some conditions up front, but a lot of the degrees
of freedom only come into play once you're deep in the work looking at
specific studies.

------
m3kw9
Negative findings are just as good to publish, so people don't repeat the same
mistake twice.

~~~
gus_massa
The problem is that it is too easy to have wrong ideas. If the only metric is
how many studies you (or your group) have published, some will try to game it.

~~~
rtpg
In domains like chemistry, you have this research process of "try every combo
of material x process and see what you get".

Hopefully detailed null results will avoid too many people going down the same
path. But even more hopefully, it will let other researchers read the process
and perhaps find another way to actually get to success.

"We tried A x B in this way, that way, some other way, none of them worked" is
pretty valuable info. Also pretty good when it comes to confirmation of
theoretical work

~~~
adtac
>We tried A x B in this way, that way, some other way, none of them worked

How searchable is this data? Like, do I need to be an expert who is up-to-date
on most proceedings in the subfield to know this, or is this information easy
to pull up with a few searches?

~~~
analog31
In my experience, it's buried in dissertations.

------
bo1024
Hard to disentangle two possible mechanisms: (1) null findings that would
normally get tossed are being submitted and accepted at higher rates (i.e.
preregistration increases acceptability of null findings), (2) null findings
that would normally get p-hacked into positive findings and published are not
(preregistration working as intended). Both good.

------
monktastic1
If I'm understanding correctly, the novel studies had a higher hypothesis
success rate than the replicated studies. I imagine this is because there were
many (successful) "debunking" attempts?

~~~
PeterisP
There's the aspect of _why_ you would choose to replicate a particular study
instead of doing something else - there are always more ideas than your
ability to execute them.

At least for me, there are two cases where I'd bother to do that and could
write in a grant proposal that the work is necessary. Either I want to build
on that study, in which case I most likely wouldn't publish _just_ a pure
replication but rather a comparison of my changes against my replication of
the original study, and that paper would be counted as a novel study. Or I'm
doing a replication because I'm not certain whether the original study is
actually true, because I have some solid reason to believe that it's wrong.

~~~
pbhjpbhj
All studies need directly replicating, surely.

~~~
PeterisP
Possibly yes, someday, if they're still relevant. I mean, not every paper even
deserves to be read; there's a lot of garbage published with only an imitation
of peer review. There are lots of publications that have never been cited and
likely never will be, much less replicated.

As I said before, at any point in time and at any scale of research, the next
research steps that "need to be done" vastly outnumber the resources available
to do them, so obviously not all of these things can be done. The majority of
reasonable grant proposals get rejected, so that research doesn't get done -
it's all a matter of prioritization; unless there's a solid argument that a
research task is within, say, the top 20% of important research tasks, it
won't get done. And most studies are not so important as to justify repeating
the effort; even if it was justified to spend X resources to get that result,
it doesn't necessarily follow that it's worth spending 2X resources to get a
slightly more certain result after replication. A study only _needs_
replicating if lots of people are going to build their research on top of its
results, and that simply doesn't happen for most studies.

------
forapurpose
Here's one way to think of the value of positive or negative results: Everyone
talks about the value of negative results, but will we ever see one of those
studies on the front page of HN?

------
chiefalchemist
But wouldn't it be beneficial to have these published in public somewhere? Else,
I would imagine, there could be plenty of "re-invention" going on.

Also, at the very least, if I had a general idea, these "nulls" would help me
to further refine what I might want to poke at. The way null is used here is
wrong and misleading.

~~~
0xffff2
Null results are beneficial to the scientific community at large, but
detrimental to the researcher(s) publishing the null result. This results from
a system in which researchers are judged on a metric approximated by the
percentage of positive results they publish.

~~~
chiefalchemist
Not to sound snarky (I know that's not welcome here), but that doesn't sound
very scientific to me. If science is ultimately a method-based process for
finding truth, then hiding facts feels hypocritical.

No doubt, I agree with you. We all do. My rub is that I see repeated calls
from science that the public bow to its mastery. Unfortunately, an
institutional lack of transparency is hardly grounds for trust. Perhaps it's
time for science to hold itself to its own standards?

------
lolc
> Allen adds that their analysis is exploratory, and that there could be other
> explanations for the findings.

I like how they're clear that their findings are dogfood at this point. They
should pre-register a proper meta-study now :-)

In a more serious vein: Do they actually intend to pre-register meta-studies
too?

------
forapurpose
I wonder how much of the bias for publishing positive results is a holdover
from the days of having only paper journals:

On the Internet, we have unlimited room for publishing results; there's no
reason not to publish the negative ones.

When research was published only in paper journals, there was scarce space
even for positive results; possibly it would have been considered a waste to
use that precious space for negative results. Also, people reading those
journals had limited time and wanted the most important results; they may have
expected what we would call a 'curated' collection of studies. These days the
studies are published in databases and nobody expects to read them all.

~~~
Jolter
The Internet may have unlimited space for publishing, but it does not have
unlimited bandwidth in peer review. So, the increased publication of null
results will still mean slightly fewer published positive results.

~~~
forapurpose
Will the negative results be peer reviewed?

------
Fuxy
Not to mention even a study that didn't pan out still has a lot of data that
may be used for something else in the future.

Even a failed creation has parts that can be reused in other projects, after
all.

------
RandomInteger4
The obsession with positive findings is the most absurd thing when you think
about it, right?

Like, I've read anecdotes from people saying that the editors or whomever at
some of these journals might turn down publishing null findings. Think about
that for a second. What does that tell you about their mindset? What possible
reason would you have for not publishing null findings?

Editor: "We gotta move these journals johnny! They need those spicy findings.
If the findings aren't spicy, this stuff won't sell."

That's hyperbole, but that's essentially the only reasoning I can think of,
and it's absurd. Like, what professionals reading journals are going to be
like "Whelp, the findings in this journal haven't been spicy. I can't dab on
this nonsense. I'm going to start reading the other journal." said no
researcher ever, neither literally nor in essence.

~~~
SquishyPanda23
Null findings are often not as useful as you might think.

A point null hypothesis for a continuous variable is literally always false.
Especially in the softer sciences, I've even seen studies mocked for having
too many data points, since it's known that with enough data you will always
reject the null.

The story is better if your null hypothesis is an interval, but then you're
really just obliquely using the interval to bound something you could be
measuring more directly anyway.

What I'd like to see is a move away from null hypothesis testing altogether
and toward measuring things: for example, effect sizes, or the probability
that a hypothesis is true.
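
As a rough illustration of that point (my own toy numbers, assuming numpy and
scipy are available): a negligible but nonzero true effect becomes
"statistically significant" once the sample is large enough, while the effect
size estimate stays correctly tiny. "p < 0.05" and "the effect matters" are
different claims.

    # Toy sketch (hypothetical numbers): with a tiny but nonzero true effect,
    # a two-sample t-test against the point null turns "significant" once n
    # is large enough, while the effect-size estimate (Cohen's d) stays tiny.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect = 0.02  # standardized mean difference; practically meaningless

    for n in (100, 10_000, 1_000_000):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_ind(a, b)
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        cohens_d = (b.mean() - a.mean()) / pooled_sd
        print(f"n={n:>9,}  p={p:.3g}  d={cohens_d:.3f}")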

~~~
thiagotomei
But what does "measuring more directly anyway" mean if you're trying to
measure something that may or may not exist?

For instance, in searches for new phenomena in high-energy physics, one
usually puts an upper limit on deviations from the expectation of "known
physics" (i.e., the Standard Model). That essentially translates to statements
like "if this particle exists, its mass should be higher than X TeV, or else
we would have seen it already in our data". Of course, in reality, the
particle probably does not exist, so you cannot really measure its mass!
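
For concreteness, here is a toy version of that kind of limit (my own
hypothetical numbers, not the analysis described above): a one-bin counting
experiment with a known expected background, where the quoted limit is the
largest signal rate that would still have produced so few events with 5%
probability. Real searches use more careful machinery (CLs, profile
likelihoods), but the shape of the statement is the same.

    # Toy sketch: 95% CL upper limit on a signal rate s in a single-bin
    # counting experiment with known expected background b. The limit is the
    # s at which observing <= n_obs events has only 5% probability.
    from scipy.optimize import brentq
    from scipy.stats import poisson

    def upper_limit(n_obs, b, cl=0.95):
        excluded = lambda s: poisson.cdf(n_obs, b + s) - (1.0 - cl)
        return brentq(excluded, 0.0, 100.0 + 10.0 * n_obs)

    # e.g. 3 events observed on an expected background of 2.5
    print(upper_limit(n_obs=3, b=2.5))  # roughly 5.3 signal events allowed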

~~~
SquishyPanda23
Sorry this is so late, but you can measure the probability that the particle
exists.

Null hypothesis tests basically try to calculate the probability of a data set
given the null hypothesis. What you really want is the probability of a
hypothesis given the data set.

So in that case, you want to estimate the probability of theories of physics,
such as those that include the particle and those that don't.
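
As a hedged sketch of what "probability of a hypothesis given the data" could
look like in a toy setting (my own choice of prior and numbers, not a real
physics analysis): compare a background-only model with a model that adds a
signal, via their marginal likelihoods, assuming equal prior odds on the two
models.

    # Toy sketch: posterior probability that a signal exists, for a one-bin
    # counting experiment, using marginal likelihoods and a flat prior on the
    # signal rate. All numbers are hypothetical.
    from scipy.integrate import quad
    from scipy.stats import poisson

    n_obs, b = 3, 2.5   # observed counts, known expected background
    s_max = 20.0        # flat prior on the signal rate s over [0, s_max]

    m_bkg = poisson.pmf(n_obs, b)  # P(data | background-only model)

    # P(data | signal model): likelihood averaged over the prior on s
    m_sig, _ = quad(lambda s: poisson.pmf(n_obs, b + s) / s_max, 0.0, s_max)

    p_sig = m_sig / (m_sig + m_bkg)  # equal prior odds on the two models
    print(f"P(signal model | data) = {p_sig:.2f}")

The answer is, of course, sensitive to the prior on s, which is part of why
collaborations tend to quote limits rather than model probabilities.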

