We could just give up, or we could keep studying something incredibly complex. Given that we are gaining some ground, it's clearly not a useless endeavor, it's just an extremely difficult one.
What was being tested was a different lab, with different materials, trying to get the "same" results, for some definition of same. If you give 100 programmers an algorithms book, and tell them to produce code for a binary search, and only 25% of the programmers are able to make something that works, does that mean that binary search is only 25% reproducible?
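For what it's worth, the binary search analogy is apt precisely because a correct implementation is short but notoriously easy to get wrong: off-by-one bounds, a midpoint overflow in fixed-width languages, or a loop that never terminates. A minimal sketch of what those hypothetical 100 programmers were asked to write (names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Python ints don't overflow; in C you'd write lo + (hi - lo) / 2
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1
```

Even with the algorithm spelled out in a textbook, getting every boundary condition right on the first try is harder than it looks, which is the point of the analogy.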
If five different companies benchmark five different web frameworks for their application, and come to 2-4 different answers about which one is the 'best,' does that mean that the benchmarks are not reproducible? Of course not.
What's being highlighted here in this study is the extreme diversity of biological models. And one doesn't necessarily expect exact reproducibility in other people's hands, because we simply don't have technology to characterize every single aspect of a biological model, and it's impossible sometimes to even recreate the exact same biological context. Is something "reproducible" if it means that it replicates in 5% of other cell lines, 25% of other cell lines?
I don't follow you here. The above does not seem to be the current meaning of "reproducible":
The same person doing the same experiment is repeatable, not reproducible. And I don't believe even the repeatable bar has been met, as very few projects have funding to do the same experiment twice.
The fact that a given investigator can "repeat" his own experiment carries very little weight among professional scientists, because we are all human. Irving Langmuir's famous talk about Pathological Science, and especially the sad story of N-rays, is a warning to every scientist.
Part of this is that science isn't about experimentation, it's about observation. We perform experiments when possible so that we have more stuff to observe, or more controlled events.
Much of astronomy, atmospheric physics, geology, medicine, the "soft" sciences, and I'm sure plenty of other "hard" fields are at the mercy of certain phenomena having sweet FA for data points. And I'm sure they all do what astrophysicists do: check that the models hold up wherever we do have plenty of data; make our extrapolations with as few assumptions and rounding errors as we can; and revisit existing models any time we find a new data point.
It's as scientific as anything; it's simply going to take longer to sort out in some cases.
Apparently people take that crap seriously.
Cosmology on the other hand...
Reproducibility isn't about calling out people whose work isn't reproducible, it's about identifying and promoting the most robust stuff.
The current test (if I remember my philosophy of science correctly) is about falsifiability - it's not science if its claims can't be disproven. From this perspective, bad experiments are still science - someone predicted that similar experiments would behave similarly, and their prediction was falsified. This is how science is supposed to work.
It gets problematic when any failure to reproduce is reflexively explained away as experimental error on the part of the second experimenter. Even worse is when experimenters (as in this case) work to have failures to reproduce hidden from the scientific community (the authors of this study had to sign contracts that they would not identify specific failing studies before they were given the necessary data about experimental procedure).