
New academic journal only publishes 'unsurprising' research rejected by others - apsec112
https://www.cbc.ca/radio/asithappens/as-it-happens-thursday-edition-1.5146761/new-academic-journal-only-publishes-unsurprising-research-rejected-by-others-1.5146765
======
kraetzin
Half of my PhD thesis is considered "unpublishable" because, after doing the
work, my supervisors felt it's actually "unsurprising" that it didn't work
out. We took methods that had been exploited to improve on previous results
for over a decade to their logical extreme, and found that this method no
longer leads to improvements. After doing the work it seems obvious. A paper
on the subject would almost be considered uninteresting, and a high ranking
journal would ignore it (which is why it's considered "unpublishable").
However, nobody has published this information, and it would help others to
not make the same mistake.

I wonder how many times similar "mistakes" have been made by PhD students
across disciplines.

~~~
bumby
My experience: I proposed a thesis to an advisor who deemed it unlikely to
work. He ran it by a colleague, who came to the same conclusion. I switched to
a different major, pursued the same thesis, and invited the faculty who had
initially turned it down to the defense because it was relevant to their
field.

It was frustrating to hear them voice their opinion in the defense that they
felt, “of course it would work.” After seeing the data, they took the exact
opposite side, claiming it was obvious to the point of being of limited
publishing value.

~~~
sullyj3
[https://en.wikipedia.org/wiki/Hindsight_bias](https://en.wikipedia.org/wiki/Hindsight_bias)

~~~
MaxBarraclough
I recall Dan Ariely mentioning this in one of his books. His field is
psychology/behavioural economics, a field where very often either outcome of
an experiment can seem obvious after the fact. (Questions like _Do newborn
babies have an intuitive understanding of gravity?_ )

As I recall, he restructured his lectures, asking upfront for a show of hands
as to which outcome everyone anticipated, before the big reveal. After making
this change, he had fewer people approaching him after lectures saying how
obvious the outcome was.

~~~
spidersouris
Do you know which book?

~~~
MaxBarraclough
It's one of the three of his books I own; I'm afraid I can't easily narrow it
down further. (I own all three as audiobooks, and I recall Simon Jones reading
it, but it turns out he read all three.)

• Predictably Irrational

• The (Honest) Truth About Dishonesty

• The Upside of Irrationality

~~~
elliekelly
Tangentially related: I wish there were a way to “search” within audiobooks.
Once you’ve finished the book, it’s almost impossible to figure out where a
specific chapter or passage is if you’d like to go back.

~~~
cestith
The semantic data format people have had a point all along. Just because
digital audiobooks are inspired by books on cassette is no reason the data
format can't support all sorts of metadata. We could have a format for written
and read-aloud works that highlights every word of the text on screen as it's
read in the proper player software, with user notations, bookmarks, indexes,
and completely searchable full text.

~~~
jfengel
Kindle supports this with Whispersync. I don't know how the file format works.

------
choxi
Why don’t research groups just publish to their own websites or directories
like arxiv? What’s the role of an academic journal in 2020?

Honest question, I’d love to see more blogging from hard science academics but
I’m wondering if there’s a reason why that’s challenging or if it’s just
academic culture. We should have a Substack/OnlyFans for scientists.

~~~
refurb
Speaking from my own experience in the physical sciences, labs don't self-
publish unsurprising results because there are only so many hours in the day
and it's not worth the effort.

Even just putting the results on your own website is a lot of work. Pulling
all of the data together, analyzing it, putting it in visuals, writing up the
results. It can be hard to justify committing that much time to something
where the payoff is "other people might be interested".

~~~
willj
Hmm... it seems like this would be a good thing for undergrads to do in a
class setting or an internship. It would give them experience writing an
_actual_ paper, albeit with null results.

~~~
gus_massa
It would require a lot of babysitting. Some objects have a few slightly
different definitions (for example, one book has one definition, and in
another book the definition is 1/2 of that). Sometimes the programs that run
the calculations don't use the same variables as the paper (perhaps the team
changed its opinion, or the main reference, and the graph must show x+y vs y).
Sometimes the work that should be included in the paper is underdocumented,
...

Another difficult part is selecting what to publish, for example cutting the
dead branches and adding a few more data points about the interesting part. It
is unusual to take a bunch of data and just publish it without some additional
work.

~~~
mycall
> It is not usual to get a bunch of data and just publish it without some
> additional work.

I thought that is what data lakes and event sourcing are supposed to solve.

~~~
gus_massa
I'm not sure what that means, but we are not using it.

In medicine some studies are preregistered, but one of the lessons of Covid-19
is that each week there is a new study that is clearly unregistered, without a
control group or with a definition of control group that makes me cry (like
"an unrelated bunch of guys in another city").

I think the people in particle physics have a clear process to "register" what
they are going to measure and exactly how they are going to process it. (The
measurements are too expensive and too noisy, so it is very easy to cheat
involuntarily if you don't have a clear predefined method.) Anyway, I don't
expect them to have the paper prewritten with a placeholder for {hint,
evidence, discovery}.

In most areas you just put in the blender whatever your heart says and hope
for the best. Or run a custom 5K LOC Fortran 77 program (Fortran 90 is for
hipsters).

If you get an interesting result for X+A, Y+A and Y+B, you probably try X+B
before publishing because the referee may ask, or more B because B looks
promising.

If you run a simulation for N=10, 20, 30 and get something interesting, you
try to run it for N=40 and N=50 if the interesting part is when N is big, or
for N=15 and N=25 if the program is too slow and the range is interesting
enough.

And it is even more difficult in math. You can't preregister something like
"... and in page 25 if we are desperate we will try integration by parts ...".

------
whatever1
We also need a journal to publish methods that failed. I did so much work
during my PhD that was dead end and is not documented.

~~~
DonCopal
[https://www.journalnetwork.org/journals/international-journa...](https://www.journalnetwork.org/journals/international-journal-of-negative-and-null-results)

~~~
MattGaiser
Is it normal for journals to charge a fee for publishing?

~~~
kraetzin
This is normal practice. To publish in a respectable journal you are charged
£1000+. To publish your paper as open access, you can be charged another
~£1000 for the privilege (IEEE).

~~~
jiggunjer
I think it's laughable they still charge extra for color pictures.

~~~
akjssdk
They charge extra to have your pictures printed in color; online everything is
in color anyway. I think that is a fair practice, no?

~~~
jiggunjer
No, not if you want to submit color regardless of medium. Besides, color
printing isn't that costly compared to their high margins. It's in their favor
too if people use color images; the age of simple scatterplots has passed for
most fields.

------
eloff
Good, science needs more of this. A negative result is a result, and if you
don't publish it, then you end up skewing the distribution of results. Which
can lead to people (and meta studies) drawing incorrect conclusions.

~~~
DrBazza
Ben Goldacre talks about this in Bad Science. Well worth a read.

------
austinl
I'd also love to see a journal that agreed to publish work _before_ any
results are known. Researchers would submit hypotheses and methodology for
review, and the journal would publish the results after the experiments were
conducted, regardless of their outcome.

It would still incentivize interesting hypotheses, but wouldn't lead to
results-biased publications.

~~~
laGrenouille
This is a great idea and is already being done (somewhat) in some fields of
psychology through pre-registration [0, 1, 2].

[0]
[https://en.wikipedia.org/wiki/Preregistration](https://en.wikipedia.org/wiki/Preregistration)
[1]
[https://www.psychologicalscience.org/publications/psychologi...](https://www.psychologicalscience.org/publications/psychological_science/preregistration)
[2]
[https://www.psychologicalscience.org/observer/preregistratio...](https://www.psychologicalscience.org/observer/preregistration-becoming-the-norm-in-psychological-science)

------
Vinnl
The root problem is the reward system in academia. Even today, when the
relevant players are discussing how to improve it, the question is always
phrased as needing to find better ways to reward excellence.

If you want to reward excellence, you're by definition looking for the
exceptional, the surprising. It might sound good, and surprising discoveries
are necessary, but they're not the only thing that's good for science.

We need ways to recognise more than just exciting results; researchers should
also be rewarded for robust contributions to the body of human knowledge.

(This is also why Plaudit.pub, a project I volunteer for, allows researchers
to explicitly recognise robust or exciting work as separate qualities.)

~~~
remus
> If you want to reward excellence, you're by definition looking for the
> exceptional, the surprising. It might sound good, and surprising discoveries
> are necessary, but they're not the only thing that's good for science.

I think this depends a lot on how you define 'excellence'. I agree that
currently 'excellence' == novel and interesting findings, but we could
interpret it to mean 'excellent science' in the broader sense, where showing
that another study fails to replicate, or getting a negative result, are
equally important parts of the scientific process.

------
timwaagh
This is old news from last year. Also, there seems to be only one issue with
one article, which means this project is stillborn.
[https://ir.canterbury.ac.nz/handle/10092/14932/browse?type=d...](https://ir.canterbury.ac.nz/handle/10092/14932/browse?type=dateissued)

~~~
owenshen24
Oof, that sucks to hear. Thanks for letting us know it's not been updated.

------
throwawayiionqz
The challenge of modern academia in certain fields is not to publish but to be
read by others. Everyone is so busy publishing that very few papers get decent
readership (retweets and citations happen mostly without reading the
substance).

I would rather have an upper limit on the number of papers one can publish in
a year than more avenues to publish unsubstantial findings.

------
chunkyks
I just gave a briefing today where I work, that had two conclusion slides: On
the first, "None of this matters", and on the second "I've no idea".

On the one hand, that's pretty nihilistic. On the other hand, it's cool that
I was able to explore it and come to the conclusion that everything is fine
and some aspects of the past are unknowable.

------
klysm
The problem is researchers aren't going to want/be able to spend the time to
properly document negative/unsurprising results. The financial incentives in
place don't support it despite its incredible value.

~~~
e2021
But isn't the reason they don't want to write it up that they know they won't
get any credit for it? That is exactly what this is trying to remedy.

~~~
gus_massa
There are many methods to evaluate the work.

Sometimes it is just a count of the number of published papers, or that count
divided by the number of authors, or weighted by whether you are the first or
the last author.

Number of citations, h-index, ...

For the journal, there is the impact factor and many somewhat arbitrary
rankings...

A paper in a totally obscure journal that is not cited by other papers has
the same weight as a blog post.

------
MaxBarraclough
As a non-academic: this journal will presumably end up with a low impact
factor, right? So it might help with the problem of negative results not
getting published at all, but it doesn't address the 'academic penalty' for
getting negative results, right?

Seems to me that preregistration [0] is the real answer here. If all journals
went with preregistration, there'd be no need for a journal like this, right?

[0] [https://plos.org/open-science/preregistration/](https://plos.org/open-science/preregistration/)

------
barrenko
I still shudder when I think of the fact Tim Ferriss mentioned, that trouble
with thesis advisors is among the leading causes of suicide in young men. I
had similar trouble myself.

------
nippoo
Related (but more general): the Journal of Articles in Support of the Null
Hypothesis, [https://www.jasnh.com](https://www.jasnh.com)

------
onurcel
I like the idea, but I don't really get why they accept "statistically
insignificant" results. What I expect from a scientific paper is to prove
(even empirically) something. For example, if a paper claims something like
"we show that using method X instead of Y doesn't improve the results", it
won't get published in most journals... except this one, which is awesome,
but the paper still has to prove that claim.

~~~
progval
Doesn't "statistically insignificant" mean papers which conclude something
like "we find no correlation between X and Y"?

~~~
BenoitEssiambre
No! It means the data is too noisy to draw any conclusion.

~~~
ellis-bell
not necessarily. it could. but if x and y are truly uncorrelated variables
that are observed with unlimited precision, then you still would _not_ reject
the null that they are uncorrelated
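A minimal numpy sketch of this point (my own illustration; the constructed x and y are hypothetical, not from the thread): two variables measured with no noise at all, and with essentially zero correlation by construction, still yield a "statistically insignificant" test, because failing to reject the null is the correct outcome here.

```python
import numpy as np

# Two noiselessly measured variables with (near-)zero correlation
# by construction: x increases steadily, y just alternates +1/-1.
n = 100
x = np.arange(n, dtype=float)
y = np.tile([1.0, -1.0], n // 2)

r = np.corrcoef(x, y)[0, 1]             # sample Pearson correlation
t = r * np.sqrt((n - 2) / (1 - r**2))   # t-statistic for H0: rho = 0

# |t| is far below the ~1.96 critical value at alpha = 0.05, so the
# test fails to reject the null of zero correlation -- an insignificant
# result even though there is no measurement noise here at all.
print(f"r = {r:.4f}, |t| = {abs(t):.3f}")
```

The same test applied to very noisy data can also fail to reject, which is why an insignificant result by itself does not distinguish "no effect" from "data too noisy".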

~~~
BenoitEssiambre
"Truly uncorrelated" is a problematic concept. Most of the time, any two
things within each other's light cones have at least tiny direct or indirect
effects on each other. These effects are usually not that tiny in experimental
settings, because of imperfect experimental tools and procedures.

~~~
gus_massa
Like the speed of sound and the height of the table where the experiment is.

Note that the pressure of the air changes with height. Over a few inches the
change is negligible and very difficult (impossible?) to measure, but the
effect must be out there.

I once had a problem with the temperature of the room: you usually ignore it,
but it had a bigger effect, like a 5% variation.

------
quelltext
> Nick and Andrew did a similar experiment, they found no effects whatsoever.

> And so they send this to a journal and the response they got was, well, you
> know, you probably wouldn't expect an effect because this is not fairly
> novel information to the students. So we are not going to publish this
> because it's not really surprising.

I don't quite understand that example.

If everyone else in their papers indeed suggested that telling students of
the cost should solve the issue, and everyone believed them, then how can it
be claimed that the experiment results are not surprising? The whole point was
to disprove what apparently was blindly accepted because of common sense;
being then told that in fact the opposite is common sense seems like a slap in
the face.

I didn't read the paper, so maybe I'm missing something, but I'm not sure it's
actually a good example for that journal. Had they done the experiment in
isolation, with no preexisting notion or consensus about what might solve the
graduation issue, maybe.

------
nrev
I initially read this headline as an Onion-style joke. That said, I don’t
disagree with some of the ideas set forth here. I think this same sort of
thing is the reason I regularly find myself reading journal articles and
thinking that the titles are misleading, essentially scholarly clickbait. From
my perspective, however, I would be really interested to see a journal do
something like this but with ambivalence about the use of particular
statistical models. (I definitely feel that there is still a religiosity in
how some reviewers/journals/even disciplines hold a singular loyalty to
p-values, but this may merely be my own perception or anecdotal experience.)

------
ay
“We kind of want to fill the void and publish results that are the opposite of
that —unsurprising, weaker, statistically insignificant, not conclusive and so
on.”

These are at least 5 distinct categories; I hope they make a distinction...

------
catsarebetter
I can see this journal being 10x-20x the size of normal journals

~~~
refurb
A very good point. My experience was that maybe 5% of the work I did made it
into a publication.

If I wrote up everything that didn't work, my PhD would have been 10 years
instead of 5.

~~~
catsarebetter
I sympathize; your work that was never published was probably of equal caliber
to your published work, and just as valid.

------
User23
Perhaps they can publish Dijkstra's little gem[1] on the Pythagorean theorem?
He explicitly asks what journal would publish such a triviality. However, it's
a really neat little bit of reasoning and I quite enjoyed it.

[1]
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD09xx/EWD975.html)

------
60secz
Null Hypothesis quarterly needs to be on every coffee table.

------
realradicalwash
this is nice, and a clear step forward. but more is needed to fix a semi-
broken system. because publication bias is one thing, but there is also
something like a _reception bias_.

there are already null result journals and workshops. it is increasingly
possible to publish negative or unsurprising results.

still, imv those results are less likely to get cited, because negative
results are often messy and can be hard to understand. and people don't want
to pick up a paper and then not understand it. so it's no surprise that
positive results, certainly the way they are framed, often come together
neatly. the 'stories' they tell are often easy-to-grasp mono-causal stories.
it's almost like you read them and feel good about yourself, because you feel
like you suddenly understand a difficult problem.

and then, which paper will you cite in your own work? the paper you
understood, the one with the catchy title? or the one you struggled through,
the one that painted a picture of a far more complex, messy reality?

this kind of reception bias will be very hard to fix. it takes more critical
editors, reviewers, and readers to fix this.

------
ineedasername
Building off the example in the article about college students, I can attest
to this: for years colleges have been required to be upfront and disclose
total costs, and financial literacy initiatives at orientations and in
introductory courses have pushed the issue, explaining the additional costs of
taking longer than four years to graduate. The results have been indiscernible
from before.

------
a9h74j
I haven't seen anyone mentioning this scary thing, true when I was at a top-10
university:

 _No significant-enough publication, no Ph.D., even after years of all-in work
and an accepted dissertation._

Things might have changed, but this is up there with the _foreign language
requirement_ in terms of fine print nobody reads going in.

------
cjfd
It is an improvement over the current situation, but the question remains what
these journals are good for, besides gobbling up money that should go to
research instead. There are preprint servers. If they can create a comment
system where scientists can leave comments, what are journals good for
anymore?

------
anigbrowl
Love the idea but wish there was a better writeup. Transcripts of radio
interviews seem stilted and miss the tone context that can indicate whether
something was a joke or a casual aside.

I think the journal itself could be quite valuable and hope it succeeds.
Perhaps this model can be generalized to other fields too.

------
michaelmior
This reminds me of the Failed Aspirations in Database Systems workshop at VLDB
back in 2017. I don't think it turned out to be quite what I was personally
hoping it would be, but the idea is great.

[https://fads.ws/](https://fads.ws/)

------
plaidfuji
There's a name for that: it's called a "low-level journal"

------
kkylin
Reminds me of this:
[https://en.wikipedia.org/wiki/Rejecta_Mathematica](https://en.wikipedia.org/wiki/Rejecta_Mathematica)

which sadly did not last.

------
nurettin
Why isn't "verbal intervention for university students didn't make them
graduate any faster at all" an interesting result?

------
hoseja
I wonder what is the actual, hidden, incentive for academia to stay this
dysfunctional. It can't just be Hanlon's razor, right?

------
punnerud
If every software developer had the same mindset, they would build a lot from
scratch every time.

------
markstos
For those involved in QA, the benefits of boring work are unsurprising.

------
asah
Is there a journal that publishes "beautiful experiments" ?

------
rossdavidh
So, it's a year old, and only one article? Now that is surprising.

~~~
jarmybarmy
An unsurprising result for sure.

------
Udik
Climate change science is in dire need of a journal like this.

------
throwaway590007
That journal already exists lol, it's called PlosONE.

~~~
anticensor
PLOS accepts positive results as well.

~~~
throwaway4747l
To the detriment of the authors, who should know better than to publish there
if they have something interesting.

------
gverrilla
20th century+ science = pyramid scheme

------
nxpnsv
There is only one article.

------
mjfl
The problem with null results and boring results is that there are too many of
them.

------
quixoticelixer-
Huh, this is my university

