I left the field 15 years ago because I didn't want to spend my career measuring zero, and presumably that kind of work will eventually dry up. The conditions for its existence seem to have more to do with having a highly trained group of people who have exhausted all plausible avenues of research in a given area and are left chasing a few scraps. In areas where there are still plenty of positive results to be had, the tendency will always be to emphasize the positive.
As a partial solution to this tendency, in my applied physics work, where I did get positive results, I tried to include a section in papers entitled "Things That Didn't Work So Well" that sketched failed approaches to save other people the trouble of trying stuff that seemed like a good idea but didn't pan out... at the very least we should expect that from the average publication, and be suspicious of any experimental paper that does not include some description of the blind alleys.
In mathematics, being suspicious might be a bit much, but it's a habit that's very valuable for your audience. For example, the book Introduction to the Theory of Computation by Michael Sipser does this a lot: it often explores avenues that lead to dead ends before arriving at the correct proof. This is the way mathematicians work, and it's usually hidden from papers and other textbooks. I'm guessing this is either for economy or because authors want to look like the first thing they tried was the right one.
That sounds incredibly fun. The crazier and zanier the better. Were there awards for 'Most likely to actually be random words'? Those are my favorites.
Also, I just started in neuroscience from an EE/Physics background. You would be AMAZED at what gets published. The rigor of physics is just... incomprehensible in bio. Granted, yes, the experiments are 'squishy' and hard to quantify: you run a gel and you get signal or not; counting it is hard enough as is, let alone trying to measure anything. Still, the papers we read... It's like they tried to do math but gave up halfway through. I've seen papers that start out with 2-decimal accuracy and error bars in Figure 1, drop the errors completely in Figure 2, and then just go to counting by 10s in Figure 4. Somehow they derive significant results that journals will publish from only 20 cells or so. Mind you, nowhere do they quote the temperature, the pressure, what their saline solution actually is, how many failed experiments they ran, etc. The physicist in me is just... pain.

And oh god, the egos. Papers will have just one author on them, the PI, and when you look at the lab's webpage, 20 faces show up for the postdocs alone. I have no idea where they get these fools, but they seem to be in large supply.
Feynman had a lecture (please help, I can't find the link) about what it takes to get a rat to not smell food behind a door. Turns out, it's a lot of work. I've been in smell labs that just plain ignore the research, even when they know it's there. "It's too much work and funding, besides, the data should suss out the real mechanisms." The McGill study earlier this year on pain in male rats being modulated by the sex of the experimenter? Yep, just plain ignored as well.
I'll share some links here on the issues in bio, it's a lot.
"The question was, how did the rats know, because the corridor was so beautifully built and so uniform, that this was the same door as before? Obviously there was something about the door that was different from the other doors. So he painted the doors very carefully, arranging the textures on the faces of the doors exactly the same. Still the rats could tell. Then he thought maybe the rats were smelling the food, so he used chemicals to change the smell after each run. Still the rats could tell. Then he realized the rats might be able to tell by seeing the lights and the arrangement in the laboratory like any commonsense person. So he covered the corridor, and still the rats could tell.
He finally found that they could tell by the way the floor sounded when they ran over it. And he could only fix that by putting his corridor in sand. So he covered one after another of all possible clues and finally was able to fool the rats so that they had to learn to go in the third door. If he relaxed any of his conditions, the rats could tell.
Now, from a scientific standpoint, that is an A-number-one experiment. That is the experiment that makes rat-running experiments sensible, because it uncovers the clues that the rat is really using, not what you think it's using. And that is the experiment that tells exactly what conditions you have to use in order to be careful and control everything in an experiment with rat-running."
That we have done such extensive searches without finding anything is an amazing property of nature. The growing body of experimental null searches serves to provide tighter and tighter constraints on our understanding of nature.
If you put your best new idea to the test, and it fails, it's not a failure, it's a success. Now you know your best idea, the one you thought could lead you forward, is a dead end. Your next steps will be better-guided and take you further forward.
I don't know enough about your field to understand the results you're talking about, but in machine learning "failing to reject the null hypothesis" generally indicates failure to find, not a genuine negative result.
I find strong negative results to be compelling, but have little interest in wading through failure to find.
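The distinction between a "failure to find" and a strong negative result can be made concrete with confidence-interval width. A toy sketch in Python (the effect sizes, noise level, and sample sizes are purely illustrative, not from any study mentioned here): an underpowered experiment can fail to reject the null even when a real effect exists, while a well-powered experiment that pins the interval tightly around zero is an informative null.

```python
import math
import random
import statistics as st

random.seed(0)

def ci95(sample):
    """Normal-approximation 95% confidence interval for the sample mean."""
    m = st.mean(sample)
    half = 1.96 * st.stdev(sample) / math.sqrt(len(sample))
    return m - half, m + half

# "Failure to find": a real effect (true mean 0.5) measured with far too
# few noisy samples -- the interval is wide and tells us almost nothing.
underpowered = [random.gauss(0.5, 2.0) for _ in range(10)]

# A genuine negative: no effect (true mean 0.0) measured with enough
# samples that the interval is pinned tightly around zero.
well_powered = [random.gauss(0.0, 2.0) for _ in range(10_000)]

lo_u, hi_u = ci95(underpowered)
lo_w, hi_w = ci95(well_powered)

print("underpowered CI width:", hi_u - lo_u)  # wide: uninformative
print("well-powered CI width:", hi_w - lo_w)  # narrow: an informative null
```

The first result is the kind of thing I'd have little interest wading through; the second actually constrains what can be true.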
Aside: In NLP + ML, there was a now-defunct publication called the Journal of Interesting Negative Results: http://jinr.org/
I strongly approve.
The Journal of Articles in Support of the Null Hypothesis collects experiments that didn't work. Not much volume, not a huge area of prestige, but there should be no shame in publishing there. The content is very diverse and pretty fun.
Titles like "No Effect of a Brief Music Intervention on Test Anxiety and Exam Scores in College Undergraduates"; "Parenting Style Trumps Work Role in Life Satisfaction of Midlife Women"; "Does Fetal Malnourishment Put Infants at Risk of Caregiver Neglect Because Their Faces Are Unappealing?"; "Is There an Effect of Subliminal Messages in Music on Choice Behavior?". Plenty more cool stuff.
Everyone says you should learn from your mistakes, but then they are never shared for others to learn from.
The example that comes to mind is pluripotent stem cells (see e.g. discussion at http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjo... ) where publications only really advance knowledge about the "dogma" of a field when someone finds a way to publish results that oppose it.
I tend to take the majority of medical research with a grain of salt, for the reasons listed here and in the article, unless there's some very convincing meta-analysis or successfully reproduced evidence. Call me overly cynical, but calculating a parameter or administering a drug or changing our methods because of some article we read last month in NEJM is beyond bogus.
DISCLAIMER: I'm not a biologist myself - this is a second-hand story from a biologist friend so if the story doesn't hold up under close scrutiny, my apologies.
However, ecology is much cheaper than mol. biology. Essentially it's ecology = manpower vs. mol. biology = manpower + expensive technology + expensive chemistry.
Also, ecology has been one of the fields most resistant to sharing data, whereas genomics (the big-data end of mol. biology) has been the most enthusiastic. There was a very noticeable divide in opinion on Twitter when PLOS journals mooted requiring authors to make all their data public.
I think this is precisely because ecologists invest so much in collecting their field data that they become possessive. An ecologist can become a very senior PI essentially by owning a long-term ecological study and assigning students to continue it.
Genomics, and to a lesser extent mol. biol., has recently been keener to share, as there is a strong open-source ethos coming in from the computing side, and the work is often a very expensive multi-centre, multi-agency collaboration.
The incentive to publish frequently is not universal, however: some reviewing committees ask candidates to present only their best 5 papers from the last 10 years, for example.
Probably the best summary post of his is this one: http://www.theguardian.com/commentisfree/2011/nov/04/bad-sci...
Sounds like a simple and effective solution.
John Ioannidis' work is fascinating in that respect as well.. not just in the medical field, but science in general.
Where this may make sense is when Watson grows up and you can aggregate the volume of garbage to fill in the holes of knowledge. But that's more than a couple of years off, I suspect.
> Ask journal editors and scientific peers to review study designs and analysis plans and commit to publish the results if the study is conducted and reported in a professional manner (which will be ensured by a second round of peer review).
If the study design is peer reviewed, I would hazard a guess that there would be less bad science, not more. Currently study design takes second priority to significant results, which is perhaps why we have the problems of inconsistent research in the first place.
Replicating findings should be given higher priority, pre-registering methods and analyses should be encouraged/required, but it's important to stop short of "publish all the things".
I'd also like to know who conducts quality research and who conducts shit research, because that might influence my funding/spending patterns (if I had money to spend).
The first disincentive comes from funding bodies: NIH et al. (NIGMS, NIEHS, ...) don't like to pay for you to do "someone else's science". If you manage to get a grant, and it comes out in a progress report that you repeated too much of other people's work, be prepared to have that funding reduced or cut.
Academic departments strongly discourage new hires from publishing negative results and/or repeating other people's work (mostly because this will likely decrease their chances of getting published and funded).
Academic journals hate to publish negative results, but seemingly have no problem publishing bad science (yes Nature, I'm looking at you: http://retractionwatch.com/2014/09/11/potentially-groundbrea...). Early in my PI's career, she tried to publish a very important negative finding in a high-impact journal. The article's acceptance was accompanied by a personal letter from the editor urging her to consider other journals for negative results.
Another barrier, quite honestly, is ego. While it may sound as if my boss is "one of the good ones", alas, she is not. On occasions when I have asked to repeat other groups' seemingly unbelievable results myself, I've been flatly denied on the grounds that this kind of work does not express the sort of originality of research produced by her lab. In other words, nobody wants to be known as "that lab", the nay-sayers of the field, those who would dare to question a colleague's ideas.
Finally, this leads me to the last barrier I have observed: scientific communities/societies. If you are one of the lucky few who end up publishing negative results of major significance, prepare not to be invited to dinner at next year's Society for X annual meeting. Yes, in many ways life science is stratified just like high school. You have the cool kids on track for the Nobel, the weirdos in their corner pushing the boundaries of what is possible, the "jocks"/career scientists who manage to turn a couple of tricks and some charisma into a living, and finally the tattle-tales who seem to piss everyone off with their negative results. These are HUGE oversimplifications/generalizations, but I really think all of these barriers need to be addressed in some way to fix life science.
"There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries. Acknowledging our debt to the former, we yearn, nonetheless, for the latter."
In any case, I'm pretty excited that it's coming under pressure to improve. Publication is really a method of communication and the revolution in communication of the last generation is a profound step change in human history, in my opinion. To use some terms that our great predecessors would have been comfortable with, science is a way to uncover the truth using light. Experimentation, debate, publication, review: these are all ways of making light.
Bringing modern communication into science, along with the collaborative opportunities inherent in better communication, could be a very bright light.
Nature, and Nature's Laws lay hid in Night.
God said, 'Let Newton be!' and all was Light.
-- Alexander Pope
...but getting sufficient widespread adoption is the big problem. And given that peer reviewers are not particularly effective in improving paper quality except in the most egregious cases, we should wonder whether an entirely different model is more appropriate.
For negative results to be published, they should be held to the same basic standards as positive results: innovative and scientifically rigorous. There are always more possible negative results than positive ones, so a publishable negative result should be something people would intuitively expect to work but that doesn't. For example, dropping an apple from a tree and finding that it fails to float in thin air isn't an interesting negative result, because we all know gravity will pull it to the ground.
Edit - We also have a Journal of Negative Results