
> It's not that the results are completely invalid

Especially if only 11% of them are reproducible, they're only 89% invalid.

I'd go so far as to say: if the reproducibility rate is 11%, you're not doing science, you're just pursuing funding.




You're assuming that only "reproducible" results are valid, which is incorrect. Results that are reproducible within a lab, but not when attempted by other people in different settings, indicate that there's an unaccounted-for variable, and those fall into the 89%. There's still something valuable there; it's just that we don't yet know all the variables.

We could just give up, or we could try to continue the study of something incredibly complex. Given that we are gaining some ground, it's clearly not a useless endeavor; it's just an extremely difficult one.


If it's not reproducible, it is not science.


Define reproducible, in terms of the precise actions that people take. Is it reproducible if the same scientist repeats the experiment in the same lab and gets the same results? Because that's the current bar of reproducibility, and that 89% that is not "reproducible" certainly passed that bar.

What was being tested was a different lab, with different materials, trying to get the "same" results, for some definition of same. If you give 100 programmers an algorithms book, and tell them to produce code for a binary search, and only 25% of the programmers are able to make something that works, does that mean that binary search is only 25% reproducible?
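To make the analogy concrete, here is a minimal, hypothetical sketch of a correct binary search in Python (the function name and example values are purely illustrative, not from the article or the study). The algorithm itself is simple and sound; the boundary handling is exactly the kind of detail where many of the "failed reproductions" in the analogy would come from:

    # Illustrative sketch: iterative binary search over a sorted list.
    def binary_search(items, target):
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2   # classic bug sites: off-by-one errors, or overflow in fixed-width languages
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1                  # target not present

    # e.g. binary_search([1, 3, 5, 7, 9], 7) returns 3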

If five different companies benchmark five different web frameworks for their application, and come to 2-4 different answers about which one is the 'best,' does that mean that the benchmarks are not reproducible? Of course not.

What's being highlighted here in this study is the extreme diversity of biological models. And one doesn't necessarily expect exact reproducibility in other people's hands, because we simply don't have the technology to characterize every single aspect of a biological model, and it's sometimes impossible to even recreate the exact same biological context. Is something "reproducible" if it replicates in 5% of other cell lines? In 25%?


> Is it reproducible if the same scientist repeats the experiment in the same lab and gets the same results? Because that's the current bar of reproducibility, and that 89% that is not "reproducible" certainly passed that bar.

I don't follow you here. The above does not seem to be the current meaning of "reproducible":

http://en.wikipedia.org/wiki/Reproducibility

The same person doing the same experiment is repeatable, not reproducible. And I don't believe even the repeatable bar has been met, as very few projects have funding to do the same experiment twice.

The fact that a given investigator can "repeat" his experiment has very low weight among professional scientists, because we are all human. Irving Langmuir's famous talk about Pathological Science, and especially the sad story of N-rays, is a warning to every scientist.

http://en.wikipedia.org/wiki/Pathological_science

http://www.cs.princeton.edu/~ken/Langmuir/langB.htm#Nrays


So there is a theory that if something doesn't reproduce, it's because the other guy was just incompetent. That may be the case, but just as everyone wants to believe they're above average, everyone will want to go to the theory that the other guys just aren't any good, when I suspect that will be much less of a factor. At any rate, when you start trying these drugs on the wide diversity of the patient population, if they're not super robust, they won't be of much use anyway.


Huge swaths of astronomy are functionally unreproducible. We can argue about the math, but many phenomena exist as a single example and/or are basically static on our time scales. The best we can do is see if the math seems to produce similar-looking structures when (sparsely) simulated.


There's a difference between an observation and an experiment. Lab experiments should be reproducible. The fact that astronomical events are not reproducible does not make the study of them unscientific, but it also doesn't imply that lab experiments should be one time events.


Would there be a problem with just calling it something else other than science then? I don't see the need to bend definitions of words to account for inconvenient circumstances. I am a programmer and do hard stuff, I don't require people to call me a scientist. Mathematicians do hard stuff, they don't complain that they're not called scientists. Engineers, too, are not scientists. It's not pejorative, just a statement of fact. If what you say is correct, then what is the issue with just saying astronomy is not a scientific field?


Well attempting to exclude astronomy/astrophysics from the umbrella of science would be bending the definition far more than the current status quo.

Part of this is that science isn't about experimentation, it's about observation. We perform experiments when possible so that we have more stuff to observe, or more controlled events.

Much of astronomy, atmospheric physics, geology, medicine, the "soft" sciences and I'm sure plenty of other "hard" fields are at the mercy of certain phenomena having sweet FA for data points. And I'm sure they all do what astrophysicists do: make sure that what we do have plenty of data for works; make our extrapolations with as few assumptions/rounding errors as we can; and revisit existing models anytime we find a new data point.

It's as scientific as anything; it's simply going to take longer to sort out in some cases.


I once had a very long and intense argument with a guy who was offended because I thought that I wasn't a scientist even though I studied Software Engineering (not even Computer Science).

Apparently people take that crap seriously.


I think this is a straw man. Astronomers, astrophysicists, etc. go to enormous lengths to address these issues over time and are well aware of the shortcomings of their work. When black holes were predicted, none had been observed. I'd suggest that astronomy is a terrible place to make claims about irreproducibility.

Cosmology on the other hand...


It's really critical that we don't confuse research that's not reproducible with fraud. Very few scientific theories survive unmodified over time, so lack of reproducibility isn't a criticism, and we really need to move the debate past this. Every theory is expected to be inaccurate, as it only explains the data using the understanding of the time, but this isn't an indictment of the research or the researcher, and studies of outright fraud indicate that it actually only happens around 1% of the time.

Reproducibility isn't about calling out people whose work isn't reproducible, it's about identifying and promoting the most robust stuff.


There are lots of parts of science that can't be experimented on (e.g. astronomy). Even for those parts that are experimental, just because you're wrong doesn't mean you're not doing science.

The current test (if I remember my philosophy of science correctly) is about falsifiability: it's not science if its claims can't be disproven. From this perspective, bad experiments are still science: someone predicted that similar experiments would behave similarly, and their prediction was falsified. This is how science is supposed to work.

It gets problematic when any failure to reproduce instinctively gets explained away as experimental error on the part of the second experimenter. Even worse is when experimenters (as in this case) work to have failures to reproduce hidden from the scientific community (the authors of this study had to sign contracts that they would not identify specific failing studies before they were given the necessary data about experimental procedure).



That's not true. Part of science is doing experimentation. If an experiment doesn't reproduce, you need to find out why that is. The hypothesize-and-test part of the scientific process is every bit as much science as the rest, even if your tests show that an idea is wrong, or that more is going on than you thought.


What about the experiments run by using the LHC? No other organization has a similarly sized particle accelerator, so by your definition it is not science because it cannot be reproduced elsewhere?


If 11% of the papers are reproducible and people are trying to reproduce the results, then you're still making progress. Just the slow and expensive kind. Considering how complex the subject matter is, I don't think it's reasonable to expect anything else.

The problem is that people are not trying to reproduce results, which harms the field and slows everything down.


Actually, very few people are trying to reproduce results. It is a lose-lose situation: either you confirm the previous results, which won't get published, or you can't confirm them, which means you are either not as competent as the original researcher or you have to embarrass your colleague and bring shame on your profession. Neither of which gets you published, helps your career, or gets you more funding. The incentives are all screwed up.

These Amgen researchers had to sign agreements that they would not publish the results of their attempts to reproduce these experiments before they could get enough detail to attempt to reproduce them. Clearly this is not how science is supposed to work, but it is exactly how businesses operate. Very sad.


After seeing how much of the research is done, I'd agree with this even more.

This is a problem that is getting worse: as funding is cut more, people feel they need to get a paper out regardless of the results. You get positive results, make up a story about them, then run to publish before trying to reproduce them or look further into the data. While this doesn't happen in every lab, I'm unhappy to say that I've seen it happen in many "high impact" labs.


Why do you say funding is getting cut? NIH's budget has almost doubled in the last decade [1] and many of the other funders have seen similar growth as well as new funders appearing every year.

I don't think the problem is lack of funding but screwed-up incentives. When medical research became focused on funding, the quality of the results suffered. And if the vast majority of landmark cancer research can't be reproduced, much of that money was wasted.

The solution will require a huge cultural change, which may be impossible. However, step one is recognizing the problem. And some efforts are already underway, such as journals like PLoS that publish negative results and, more recently, The Reproducibility Project and the Reproducibility Initiative [2,3]. Still, it will be difficult.

[1] http://officeofbudget.od.nih.gov/pdfs/spending_history/Mecha...

[2] http://www.openscienceframework.org/project/EZcUj/wiki/home

[3] https://www.scienceexchange.com/reproducibility


http://news.sciencemag.org/scienceinsider/2013/05/nih-detail...

Already-funded grants are getting cut ~20% across the board. There are a ton of cuts going on right now to the NIH budget; google it and take a look. This has been happening for years now: trying to get an R01 (large research grant) is becoming more and more difficult, and it isn't helped by the constant changes in requirements.

Every other point I agree with 100%; the culture change has to happen. Nothing is impossible; it just takes the right people to make the right things happen. Everyone recognizes the problem; I can't tell you how many times I hear people complaining about the same problems over and over. The problem is that they aren't taking action, and with no action, nothing is going to change. While there are many efforts in place (my project being one of them, http://omnisci.org), they need to be implemented properly. The same rule of startups applies to science: the idea/concept means nothing without proper execution.

While the culture change may be slow, the academic world is having a really hard time keeping up. NIH is also fighting to stay afloat. I have a few friends who work as program officers and they really have a negative outlook on the future of research funding.


You are right about the sequester cuts. I was looking at the annual numbers on the NIH front page, which didn't include 2013. I wonder why 5.5% overall cuts translate to 20% cuts. The SciMag article makes it seem like they were only cutting the number of grants, not the size, which kinda makes sense. Perhaps they are treating already-funded grants worse, which seems crazy. Wouldn't this potentially waste the money already spent if the project can't be finished on 20% less?

Good luck with omnisci.org; this is the sort of thing that would help: open sharing of data, techniques, and negative results. If this were the norm, things could be very different. But one thing I have learned is that it is very hard to change an organization's culture.


I've seen this happen as well, but I think the problem is more with the "make up a story" part and less with the "run to publish" part. I've seen really, really interesting results that defy explanation get passed over for publication in favor of something more mundane that can "tell a story" because, it seems, stories get funded... intriguing research? Not so much...



