
Economics journal only publishes results that are no big deal - paulpauper
https://www.vox.com/future-perfect/2019/5/17/18624812/publication-bias-economics-journal
======
dmurray
There are at least a couple of others:

Journal of Articles in Support of the Null Hypothesis (multidisciplinary,
dominated by the social sciences) [0]

New Negatives In Plant Science (biology, discontinued) [1]

Journal of Negative Results (ecology) [2]

At first I thought JASNH was more an art project than a serious journal. Now
I'm not so sure.

[0] [http://www.jasnh.com/](http://www.jasnh.com/)

[1] [https://www.journals.elsevier.com/new-negatives-in-plant-
sci...](https://www.journals.elsevier.com/new-negatives-in-plant-science)

[2] [http://www.jnr-eeb.org/](http://www.jnr-eeb.org/)

~~~
vanderZwan
> _At first I thought JASNH was more an art project than a serious journal.
> Now I'm not so sure_

Perhaps it started out as such? Even if it did it represents a legitimate
need, and if there is any field which could embrace an arts project and take
it seriously it's the social sciences. I for one would be happy if they did!

(Thanks for the links!)

------
netcan
Academic journals, in general, need to evolve for a lot of fields.

Setting aside that they're too closed and a bit crusty, I think journals work
well for the hard sciences and possibly also for philosophy. That's not
surprising, since they evolved around those fields. But for the "soft sciences"
(economics, psychology, etc.), I don't think the system works very well.
Possibly for some areas of health/medicine sciences/studies too.

First, they have a lot of problems in practice. The replication crisis is a
big example, and it hit psychology most severely. Economics has the same
issue, which largely comes down to ex post hypothesizing.

There are also fairly serious issues with generalizability. Even if the results
of some economic or behavioral study/experiment are sound, can they be
generalized beyond the narrowest terms of the experiment?

The publishing system (or, maybe "publishing" is not even the ideal frame)
needs to work in a way that promotes accumulation of evidence, data and
replication but the system seems to be producing sprawl.

I'm not sure what the answers are, but glad to see there's an interest in
change.

------
Vinnl
The effort is admirable, but I don't see this fixing the problem. An important
part of the evaluation of academics is the reputation of the journals they
publish in, and that reputation is primarily driven by the "impact" of their
publications, in the worst cases measured by the Impact Factor [1]. Negative
results usually just don't have what is understood as impact by evaluators.
This means that there is little incentive to actually publish negative results
here, and that those results will likely always be disproportionately
outnumbered by selective results - unless the evaluation system changes.

In other words: I don't think scarcity of publication venues is the main
reason few negative results get published.

(Disclosure: I'm involved with another project that hopes to alleviate the
problem. [2])

[1] [https://medium.com/flockademic/the-ridiculous-number-that-
ca...](https://medium.com/flockademic/the-ridiculous-number-that-can-make-or-
break-academic-careers-704e00ae070a)

[2] [https://medium.com/flockademic/why-replication-studies-
are-n...](https://medium.com/flockademic/why-replication-studies-are-not-
rewarded-and-how-to-fix-that-523c2387820e)

~~~
thanatropism
Part of the problem is actually in the process of being solved: the social
role that Holy Immaculate Science had been acquiring.

I know this sounds a bit reactionary in times of antivaxers and even flat-
earthers, but we cannot afford to trust "Science" in the way we had been doing
up to the mid 2010s. How much social policy was being enacted in the name of
p-hacked psychology and social science? How much of our baseline ideologies?

Even the first announcement of results from the LHC was botched. Now, you may
say "great, this is science, it's supposed to be conjectures and refutations".
But then we can't trust it as a guarantee of ground truth, and scientific
papers can't be pressed into service in internet debates, etc.

~~~
umvi
> How much social policy was being enacted in the name of p-hacked psychology
> and social science? How much of our baseline ideologies?

I'm interested in how often some interest group _pays_ for p-hacked results so
that it can use them to strengthen its position in some way.

In my opinion, it's very hard to have good science whenever there is external
money in the picture. The temptation/pressure to slightly tweak the variables
or scope of the study to achieve a desired outcome for the benefactor is just
too high.

The worst part is that this allows people to strengthen their biases by cherry
picking "scientific studies" that seem to agree with their position even if
intuitively something doesn't seem right. And science is the highest authority
you can appeal to, so there is no way to refute it short of funding your own,
better study that arrives at the opposite conclusion.

------
mikorym
This is very relevant in biology. Countless Honours and MSc students every
year try to achieve "results", and for probably the majority of them the first
result is a negative result, or if you will, the _empty_ result.

And there is nothing wrong with that.

In fact, it is very useful. Any result that either proves nothing new or
neither proves nor disproves anything strengthens our confidence in the
existing knowledge base. This should even happen at school science fairs: if a
student presents to me a study about how bicarbonate of soda does nothing to
help plant x with growth, I would probably rate it highly as long as the
content is consistent.

It sounds stupid to students, but "proving nothing" is _not nothing_. It is
actually another grain of rice on the scale of how we approach topics that
have not yet achieved clear results. And sometimes it's more than just a
grain: it may be a confirmation of old results that, if you had not redone the
experiment yourself, would remain an abstract or somewhat removed prospect.

~~~
raxxorrax
Agreed, and additionally people seem to hold bachelor's and master's theses to
rigorous standards. My bachelor's thesis was better than my master's, but
neither is worth anything. Not topically, not didactically, not
scientifically. I hope nobody will ever read them again. I still got good
grades.

I was already employed while writing both and really didn't give them my full
attention. I doubt you can reasonably expect these works to be anything more.
At least not if the focus isn't an academic career with further degrees.

Maybe learning to write something formal is worth it, but the results are
expected to be disillusioning from a scientific standpoint. Or maybe it is the
exceptions that are the goal here.

~~~
newsoul2019
If you ever find yourself in the presence of something novel that you need to
report on, your prior training and experience will make your report or writeup
more accurate, reliable, and trustworthy.

------
mathgenius
Two other places where we should be rewarding boredom: finance, and politics.
There's no reason why good, important work should be correlated with
excitement or interest.

------
whack
I agree with the other commenter that this is a great step forward, but it
still doesn't solve the problem of academics being incentivized to publish in
the most prestigious journal possible. Journals like SURE are, by design, not
going to be nearly as prestigious as others that publish more "sexy" results.

It would be far better if the most prestigious journals pre-committed to
publishing studies based on their methodology and hypothesis, before seeing
their results.

~~~
mic47
> "problem of academics being incentivized to publish in the most prestigious
> journal possible."

Is this a problem? If you have interesting, ground-breaking, or just really
surprising results, what is wrong with trying a more prestigious journal? The
problem is not that there are journals that take only "interesting" results,
but that a lot of research is not published at all, just because the results
are as expected. And this journal fixes that.

> It would be far better if the most prestigious journals pre-committed to
> publishing studies based on their methodology and hypothesis, before seeing
> their results.

This is quite an interesting proposal. It could work well for the experimental
sciences, but less so for fields like math / computer science or theoretical
physics.

~~~
Vinnl
> Problem is not that there are journals that take only "interesting" results,
> but that a lot of research is not published at all, just because results are
> as expected. And this journal fixes that.

It does not fix that because, as GP mentioned, researchers are incentivised to
publish in prestigious journals, which this journal won't become. Thus,
negative results still won't get published.

~~~
w0m
I've always thought academics need volume as well as quality publications for
tenure; this looks like a landing spot for cutting-room-floor CV-filler papers
that would normally get ignored. Still helpful for career progression at good
universities.

------
JoeAltmaier
Journals can act as filters on the study space, selecting for small-p-value
results. Statistically, a certain number of those results occur by chance and
will prove irreproducible. Depending on the typical power of the studies, that
can mean a good fraction of everything in a particular journal is wrong.
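The arithmetic behind this can be sketched with a quick simulation. The power
and base-rate numbers below are illustrative assumptions, not figures from the
article or the comment:

```python
import random

random.seed(0)

ALPHA = 0.05       # significance threshold journals select for
POWER = 0.5        # assumed chance a study detects a real effect
TRUE_RATE = 0.1    # assumed fraction of tested hypotheses that are true

published_true = 0
published_false = 0
for _ in range(100_000):
    hypothesis_is_true = random.random() < TRUE_RATE
    if hypothesis_is_true:
        significant = random.random() < POWER
    else:
        significant = random.random() < ALPHA  # false positive by chance
    if significant:  # only "significant" results get published
        if hypothesis_is_true:
            published_true += 1
        else:
            published_false += 1

false_fraction = published_false / (published_true + published_false)
print(f"Fraction of published results that are false: {false_fraction:.2f}")
```

Under these particular assumptions, roughly half of the published "significant"
findings are spurious, even though each individual study used a 5% threshold.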

~~~
Symmetry
Like with a funnel plot?

[https://en.wikipedia.org/wiki/Funnel_plot](https://en.wikipedia.org/wiki/Funnel_plot)

------
olalonde
Funny how the acronym spells out "sure" (guessing it was deliberate).

~~~
chris_wot
Funny how no other journal has called themselves Series of Highly Impressive
Trends.

~~~
hackerpacker
funny this coming from vox

~~~
Symmetry
Vox publishes its share of viral clickbait, but it also publishes some quite
good investigative journalism, like Sarah Kliff's stuff on hospital billing.
There are writers at Vox who I think are bad, but also writers there good
enough for me to put in my RSS reader. There's less editorial control at Vox,
which means both that the bad is worse and that the good is better. And the
bad is probably what goes viral on Facebook to pay for the stuff I consume.

~~~
anthuman
Vox is the left's version of infowars or breitbart. Why vox is allowed here
but their polar opposites are not is beyond me.

Vox and "good journalism" are not words that belong together. Vox is like most
"news" today - pure biased agenda. Hopefully, in a few years time, a recession
washes away all the trash rotting in the media space today.

~~~
icebraining
Who says breitbart is not allowed here?

[https://hn.algolia.com/?query=www.breitbart&sort=byDate&pref...](https://hn.algolia.com/?query=www.breitbart&sort=byDate&prefix&page=0&dateRange=all&type=story)

------
inlined
Hopefully journals like this will be a huge benefit for meta-analysis. As the
article describes, mere chance can produce p < 0.05 with enough studies.
Without the null-result studies being published and included, a meta-analysis
can suffer from huge sampling bias and draw horrible conclusions, because it
treats the outliers as the norm.
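A toy simulation makes the sampling-bias point concrete: even when the true
effect is exactly zero, a meta-analysis that only sees the "significant"
studies recovers a sizable apparent effect. All study counts and sizes here
are illustrative assumptions:

```python
import random
import statistics

random.seed(1)

N_STUDIES = 2000
N_PER_STUDY = 30  # small studies, so effect estimates are noisy

all_effects = []
published_effects = []  # only "significant-looking" results survive
for _ in range(N_STUDIES):
    # the true effect is zero; each study just measures noise
    sample = [random.gauss(0.0, 1.0) for _ in range(N_PER_STUDY)]
    effect = statistics.mean(sample)
    se = statistics.stdev(sample) / N_PER_STUDY ** 0.5
    if abs(effect) > 1.96 * se:  # crude |z| > 1.96 cut, i.e. p < 0.05
        published_effects.append(effect)
    all_effects.append(effect)

print(f"mean effect, all studies:      {statistics.mean(all_effects):+.3f}")
print(f"mean |effect|, published only: "
      f"{statistics.mean(abs(e) for e in published_effects):+.3f}")
```

Averaging over all studies recovers an effect near zero, while the published
subset shows a clearly nonzero average magnitude: the filter, not the data,
creates the "finding".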

------
GershwinA
Nice, I never thought there was such a problem to begin with, but it makes
sense. It goes broader than just "boring" study results: clickbait and
controversial topics are flooding the news feed, dulling critical thinking.
Reading through something boring at first may give interesting results,
learned it first hand studying Hegel's dialectics :D

~~~
dao-
> Reading through something boring at first may give interesting results,
> learned it first hand studying Hegel's dialectics :D

Nice. If you enjoyed that, you may want to check out Adorno's Dialectic of
Enlightenment and/or Negative Dialectics.

------
2T1Qka0rEiPr
If you like this, you might also like:
[https://twitter.com/justsaysinmice](https://twitter.com/justsaysinmice) \-
which tries to tackle over-reaching conclusions from experimentation in mice

------
timwaagh
They should make some kind of pop-sci summary and publish it as a magazine.
Serious intellectuals will subscribe and will henceforth be more fun at dinner
parties.

"Did you know, dear aunt, that in 1992 a study analysed the relationship
between the number of spiders in college bedrooms and GDP?"

"And what did they find? Are spiders bad for the economy?"

"Well, absolutely nothing, of course, not even that there was no relationship,
but it's nice they tried and it was a very high quality study."

------
alwaysanagenda
Now that we've got SURE for economics, let's do one for climate change.

After all, "publication bias affects every research field out there."

Vox is basically explaining fake news without any sense of irony about how
this plays out in every other industry and field.

> "Let’s say hundreds of scientists are studying a topic. The ones who find
> counterintuitive, surprising results in their data will publish those
> surprising results as papers.

> The ones who find extremely standard, unsurprising results — say, "This
> intervention does not have any effects," or, "There doesn't seem to be a
> strong relationship between any of these variables" — will usually get
> rejected from journals, if they bother turning their disappointing results
> into a paper at all.

> That's because journals like to publish novel results that change our
> understanding of the field. Null results (where the researchers didn't find
> anything) or boring results (where they confirm something we already know)
> are much less likely to be published. And efforts to replicate other
> people's papers often aren't published, either, because journals want
> something new and different."

Very similar to my complaint on this issue:
[https://news.ycombinator.com/item?id=19432720](https://news.ycombinator.com/item?id=19432720)

Oh, and look, it's sponsored by The Rockefeller Foundation, the definition of
globalism writ large:

[https://en.wikipedia.org/wiki/Rockefeller_Foundation#Beginni...](https://en.wikipedia.org/wiki/Rockefeller_Foundation#Beginnings)

~~~
icebraining
How is it similar to your complaint, other than both being about potential
biases in the scientific literature? Theirs is about publication bias against
null results; yours is against people without the educational background that
qualifies them to run a particular study. I don't see how they're that
similar.

