
You could publish it in the Journal of Trial and Error (https://journal.trialanderror.org), which I created with a number of colleagues a couple years ago!

Our editor-in-chief was interviewed for this related Nature article a couple months ago (https://www.nature.com/articles/d41586-024-01389-7).

While it’s easy pickings, it’s still always worth pointing out the hypocrisy of Nature publishing pieces like this, given that they are key drivers of this phenomenon by rarely publishing null results in their mainline journals. They have very little incentive to change anything about the way scientific publishing works, as they currently profit the most from the existing structures, so them publishing something like this always leaves a bit of a sour taste.




Providing _places_ to publish the result is only part of the problem. The other part is incentivizing scientists to do so. And similarly, Nature itself is responding to incentives. The core problem is that scientists themselves, the individuals, mostly do not display much interest (in a revealed-preference kind of way) in null results. If scientists were interested and wanted to read articles about null results, then either journals like Nature would publish them, or the numerous journals like yours that have come and gone over the years would have been more successful and widespread.

Because of this revealed lack of interest, high-tier journals don't tend to take them (correctly responding to the lack of demand), and journals like yours that specifically target these kinds of articles A) struggle to succeed and B) remain relatively "low impact", which means the professional rewards for publishing in them are not very high, which in turn means the return on effort of publishing such work is lower.

Don't get me wrong, the scientific community could do a lot more to combat this issue, but the core problem is that right now, the "market" is just following the incentives, and the incentives show that, despite the stream of articles like this one over the past few decades, most scientists don't seem to actually have an interest in reading null-result papers.


What if, with each publication of a non-null result, academics were given the opportunity to publish their nulls as well, if only as an appendix or, better, a counterpublication to their main conclusions? I don't buy the argument that papers need to be capped at some maximum length, now that documents and journals can be easily stored and distributed.

I would love something like Living Papers [0][1] to take off, where the null and non-null results could be compared interactively on a similar footing.

[0]: https://github.com/uwdata/living-papers

[1]: https://idl.uw.edu/living-papers-template/


A null result may be a dead end and so there is no related paper worth publishing it in.

A null result should be published right away in a searchable place, but probably isn't worth a lot of effort in general. I tried X, it didn't work, here is the raw data.


That's my thought exactly--not a related paper but simply providing additional room for discussing the less shiny bits of the same experiment.

Even if the whole thing is a null, the setup, instruments, dependencies, and which methods did and didn't work are worth describing by themselves.


All of that - the setup, the instruments, dependencies, methods - should be pre-submitted to the journal before the experimental results arrive. The journal should be the one that uses the data from the experiment and runs your pre-submitted program over the data to produce a result.

Papers need to be published backwards.


I agree that in an idealized way, this would be much better. But what do you do about going through all this process and ending up with a bad reviewer?* In those cases, how would you handle re-submitting to a different journal without looking like you're creating those artifacts after-the-fact to suit your outcome? Would the pre-submittals need to be handled by some third party?

* the current process still involves a lot of luck in terms of which referees you get assigned. Sometimes you just plain get a bad reviewer who can't be bothered to read the submission carefully and is quick to reject it. I would hate to see a system that only allows for a single shot at publication


The review happens at the experimental design stage, not the publishing stage. Very little actual work has been done at that stage. After such a review, the journal agrees to publish your results regardless of outcome.


Right now you don't even know who will publish your paper until all that is done. Your experiment might be to try some promising molecule/drug in a petri dish and see what happens; if the results are amazing you will get into a different journal than if something happens but the control molecule/drug does better.


Right, I'm saying journals should be picking which to publish before the results arrive.

They should even publish it before the results arrive.

And then publish again after.


From the article: “A 2022 survey of scientists in France, for instance, found that 75% were willing to publish null results they had produced, but only 12.5% were able to do so.”


The question is how many of them are willing to review and read these publications. Of course, as an individual scientist (not me, but someone who does experiments), I'd love to capitalize on my work, even if it is unsuccessful (in the sense of a null result), by publishing it. But do I, and the scientific community in general, care about null results? I'd say mostly no. Null results, if universally published, would overwhelm an already overwhelmed publication system.

If you think it will be helpful for others to know about a specific failure, put it in a blog post or even on arXiv. Or talk about it at a conference (for CS, a workshop).

Also, if we use publications as a measure of a scientist's success, and we do, is a scientist with a lot of null results really successful?


Obviously most scientists are not going to be interested in null results from adjacent subfields, but when it comes to specific questions of interest it is absolutely useful to know what has been tried before and how it was done/what was observed. I know a lab that had documentation not only on their own historical null results but also various anecdotes from colleagues' labs about specific papers that were difficult to replicate, reagents that were often problematic, etc.

That is a non-ideal way for the scientific community at large to maintain such info. Trying to go through the traditional peer review process is probably also non-ideal for this type of work though, for the reasons you cited. We need to be willing to look at publication as something more broadly defined in order to incentivize the creation of and contribution to that sort of knowledge base. It shouldn't be implemented as a normal journal just meant for null results - there's really no need for this sort of thing to be peer reviewed specifically at the prepub stage. But it should still count as a meaningful type of scientific contribution.


In the old days, Science Weekly[1] used to print 4-5 paragraph summaries of published research in a three-column layout. The magazine was dense with information across a huge number of topics.

And in the very old days, newspapers used to publish in tabular form local election results and sports games.

I feel that Nature could dedicate one to two pages to one-paragraph summaries of null results, with links to the published papers.

It's amazingly easy to skim such pages to find interesting things!

[1] I think that was the name; I canceled my subscription when they changed to a Scientific American wannabe. I was looking for breadth not depth! I could always get the original paper if I wanted more information.


I agree incentivization is definitely a big part of the problem, but I think in general a bigger issue is that as a society we tend to reward people who are the first to arrive at a non-null result. This is as true in science as much as in any other area of human endeavor.


Years ago, I came across SURE: Series of Unsurprising Results in Economics with the goal of publishing good, but statistically insignificant, research.

https://blogs.canterbury.ac.nz/surejournal/


I thought "statistically insignificant" meant we couldn't conclude anything. So I was surprised.

[1] says:

> In statistical hypothesis testing,[1][2] a result has statistical significance when a result at least as "extreme" would be very infrequent if the null hypothesis were true

So I understand this journal publishes results for which a hypothesis was tested, found to give insignificant results, which would rule out the hypothesis assuming the research was correctly conducted, without biases in the methodology, with a big enough sample, etc. Which would be worthy to know but no journal usually takes this research because it doesn't make the headlines (which yes, I've always found was a shame).

Do I get this right?

[1] https://en.wikipedia.org/wiki/Statistical_significance


Yes, statistical insignificance doesn't "prove" the null hypothesis, it just fails to reject it. It's a subtle but sometimes misunderstood distinction. Significance reflects how often you'd expect to see an effect at least as big as the one observed just by chance, rather than due to the variables you're measuring. If it's a really extreme difference, we expect it to happen less often by chance alone than if it's a really minuscule difference.
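
To make that concrete, here's a minimal sketch in Python (numbers made up for illustration, assuming numpy and scipy are available): a real but modest effect measured with a small sample will often come out "not significant", which is a failure to reject the null, not evidence that the effect is zero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two groups whose true means differ by 0.3 standard deviations.
    control = rng.normal(loc=0.0, scale=1.0, size=20)
    treated = rng.normal(loc=0.3, scale=1.0, size=20)

    # With only 20 samples per group the test is underpowered, so p > 0.05
    # is common here even though the underlying effect is real.
    t, p = stats.ttest_ind(treated, control)
    print(f"t = {t:.2f}, p = {p:.3f}")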

>Which would be worthy to know but no journal usually takes this research because it doesn't make the headlines.

That's usually correct, which gives rise to all kinds of issues like the ones the article talks about. It can result in a lot of wasted time (when you're conducting what you "think" is a new experiment, but it's been done many times before and never published because it didn't produce statistically significant results). It also provides little incentive for replication, which can lead to stronger conclusions about results than may be warranted, etc.


The flip side of this is that there is almost always a very small effect, even if you are testing a crazy hypothesis (there are very weak correlations between all sorts of things). So you can often get a ‘significant’ result just by using a huge sample, even though the effect size is too small to matter practically.
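
A quick sketch of that too (again made-up numbers, Python with numpy/scipy): a difference of 0.02 standard deviations is practically meaningless, but with 200,000 samples per group a t-test will usually call it "significant" anyway.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # True difference in means is a negligible 0.02 standard deviations.
    a = rng.normal(loc=0.00, scale=1.0, size=200_000)
    b = rng.normal(loc=0.02, scale=1.0, size=200_000)

    t, p = stats.ttest_ind(b, a)

    # The standardized effect size (Cohen's d) stays tiny even though
    # p comes out far below 0.05.
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.2g}, Cohen's d = {d:.3f}")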


Omg I love this. For like 20 years I’ve joked about “The Journal of Null Results and Failed Experiments” and it looks like you and your friends are actually doing it.

There’s so much to learn from these cases.


Good idea but bad name. Error implies mistakes, which will deter people.


Disagree, as we also publish “failed” research, where authors reflect on their experiment so that others may learn from it and the authors themselves still gain something useful academically from it (citations, a publication).

One of our goals is to change the perception of and culture around failure in academia. Research/science is not just a steady upward trend of progress; it comes with a lot of trial and error. Academics’ success and job prospects, however, depend mostly on them publishing in high-impact journals, which in turn only publish “interesting” aka positive results, which creates this very toxic publish-or-perish culture. Having an experiment fail is a natural part of doing science, but academic institutions punish you for not producing positive results. By providing a place to publish these failed experiments, we at least offer some relief for this problem. This is not real change, however; that needs to happen at a much higher level, which is something we are not able to impact ourselves. Ideally our journal will not be necessary in the future, as we detail in our opening editorial: https://doi.org/10.36850/ed1


I see. I guess I see part of the problem you're trying to solve as reflected in the language of "failed" and "error", as opposed to framing, say, non-replication of a prior false positive as a "correction" or "additional evidence" against it. It may not matter to everyone, but some funders might wince at research they've funded being described as "failure" or "error." Just something to consider.


If you publish null results you accelerate the development of competing hypotheses by your competition. It's best to make sure they waste as much time as possible so you can maintain an edge and your reputation. /s


so trueeee! science is ofc a zero-sum game, and we should stifle competition and collaboration as much as possible. how else will i get that 3rd citation on my phd thesis? /s



