I have long maintained that the NIH should set aside 25% of its budget to fund spot-checking of the research output of the remaining 75% of funded research. Even if this funding is only sufficient to review a handful of papers each year, it stands to have a transformative effect on the level of hype and p-hacking in many fields, and could nearly eliminate the rare cases of actual fraud. It would also improve the overall quality and reliability of the literature.
But who could do the spot-checking? In many fields, there is only one team or a few authors capable of understanding, reproducing, or verifying certain research. In a two-party situation, if you pay the opposing team, they have incentives to exaggerate doubts. Often research papers are written in such a confusing way that it is impossible or very expensive to reproduce or verify the results (e.g. repeat a year-long study).
I think it would be better if there were incentives that rewarded quality over quantity. At the moment, my university always says that quality is of the utmost importance, but then threatens to terminate my job if I cannot publish x number of papers in a given year.
> In many fields, there is only one team or a few authors capable of understanding, reproducing, or verifying certain research
If the research isn't documented well enough to reproduce/verify, then the paper shouldn't pass review in the first place. The NIH could make it a condition of funding that papers are detailed enough to be reproducible.
My sense is that this would prevent 98% of papers in the social sciences from ever being published. How do you decide whether to renew a researcher's position when the average time to get a single paper through peer review is 5+ years (my average is 2-3 contract renewals per year)? This is not compatible with today's pace.
I am not saying that all this is a desirable situation. It is very unfortunate, and I wish there was an easy solution. My first research paper took 5 major revisions and 6 years to get through peer review. All the reviewers criticized was the wording and my unwillingness to conform to the accepted views in that particular community; I almost lost my job over this, but once the paper was accepted, it won several awards. All of this leads me to believe that peer review is very subjective and prone to error, and I don't have a solution for that.
The goal of science isn't to publish papers, it's to investigate hypotheses. If 98% of social science papers cannot be replicated then that's an indictment of social science, not the requirement to replicate.
I suspect a lot of "hard" science papers would be caught as well, so it's a necessary quality-control method
Replication is one of the central issues in any empirical science. To confirm results or hypotheses by a repetition procedure is at the basis of any scientific conception. A replication experiment to demonstrate that the same findings can be obtained in any other place by any other researcher is conceived as an operationalization of objectivity. It is the proof that the experiment reflects knowledge that can be separated from the specific circumstances (such as time, place, or persons) under which it was gained.
Public research that nobody else can do should be considered utterly worthless. Publication should be predicated on independent replication. Papers are supposed to be instructions for replication! If it's impossible to replicate from the paper, the paper fails its primary purpose IMO.
> Often research papers are written in such a confusing way that it is impossible or very expensive to reproduce or verify the results (e.g. repeat a year-long study)
From an economic perspective, is this a very desirable situation?
That's not an accurate comparison, in my experience. Police operate as a thin blue line. Academics operate as jackals and hyenas in the Serengeti, and many would eviscerate literally anyone if they could get another grant. Best admired from afar.
It doesn't change the fact that internal vigilance departments are often strangled by having their funding reduced (often on the grounds that they haven't done much) and by roadblocks that cut off the flow of information to them.
The second one is a little less relevant for published research, although it could take a different form when implemented in Academia.
Income tax departments in my country give a small reward (3-5% of the amount in question) if you report tax fraud.
Maybe we could have something like that, where the vigilance department receives a small amount of money paid off from the penalty imposed on the researcher with bad/fraudulent research.
>An award worth between 15 and 30 percent of the total proceeds that IRS collects could be paid, if the IRS moves ahead based on the information provided
You're going to have to provide evidence, as in actual bona-fide promotions handed out, or tenure committees expressly spelling out and counting this as a factor in their decisions.
Agreed, once you know what to look for and how to reproduce it. What do you do if you can't reproduce? That may mean the original research paper doesn't disclose everything (malicious or not, but malicious is REALLY frequent) or missed an important factor (sometimes doing a reaction in a slightly scratched glass container will change the outcome entirely).
To come back to the malicious part, for many researchers, not publishing the exact way they do things is part of how they protect themselves from people reproducing their work. Some do it for money (they want to start a business from that research), others to avoid competition, others because they believe they own the publicly funded research...
And sometimes you fail to reproduce something because you failed to do it properly. I don't know how often that happens in the field or in the lab, but it's extremely common on the computational side.
Very often, the thing you are trying to reproduce isn't exactly the same as what was published. You have to adapt the instructions to your specific case, which can easily go wrong. Or maybe you made a mistake in following the instructions. Or maybe you mixed up the instructions for two different cases, because you didn't fully understand the subtleties of the topic. Or maybe you made a mistake in translating the provided scripts to your institute's computational environment.
Part of the problem is that methods sections in contemporary journals do not provide enough information for exact replication, and in the most egregious cases let authors write stuff like "cultured cells prepared according to prevailing standards".
From the site: "One of the problems was room temperature was too low elsewhere. Tumors don't grow when mice are chilly because the vessels vasoconstrict, it cuts off the blood supply and drugs don't circulate."
That means there are important validation/verification steps left out of the whole process. Sure, it's impossible to give every detail, and naturally there's always a time constraint, but if there's a hypothesis of action it needs to be verified. (Again easier said than done.)
That's awful. In any field, a writeup of a discovery should include enough information for a peer of the author to reproduce the results. Ideally, it would include enough detail for an enthusiastic amateur to reproduce the results.
This is how we write pen testing reports at work. A pen testing report written that way ~20 years ago is one of the things that got me interested in pen testing. But I apply it to all of my technical writing.
If lack of reproducibility in science is as big a problem as it seems to be, maybe journals should impose a randomized "buddy system" where getting a paper published is conditional on agreeing to repeat at least 2-3 other experiments performed by peers. Have 3 peer researchers/labs repeat the work. If at least 2/3 are successful, publish the paper. If not, the original researchers can revise their instructions once or twice to account for things the peers did differently because the original instructions didn't discuss them.
Hopefully needing to depend on the other organizations for future peer review would be sufficient to keep everyone honest, but maybe throw in a secret "we know this is reproducible" and a secret "we know this is not reproducible" set of instructions every once in a while and ban organizations from the journal if they fail more than a few of those.
For corner cases that require something truly impractical for even a peer to reproduce independently ("our equipment included the Large Hadron Collider and a neutral particle beam cannon in geosynchronous orbit"), the researchers that want to publish can supervise the peers as the peers reproduce the work using the original equipment.
This would obviously be costly up front, but I think it would be less costly in the big picture than thousands of scientists basing decades of research on a single inaccurate paper.
I also think that forcing different teams to work together might help build a collaborative culture instead of the hostile one that's described elsewhere in this discussion, but maybe I'm overly optimistic.
One problem with ideas like this is that academia is a meritocracy (or at least it attempts to be), and a meritocracy needs merits that can be assessed in a timely fashion. If people need publications for jobs, promotions, and grants and proper peer review takes too long, they will create a parallel track of publications with lower standards. Over time, the parallel track will become dominant.
That already happened in CS, which inherited slow and thorough journals from mathematics. Because peer review was taking too long, people elevated abstracts in conference proceedings to the status of papers. The idea was that you submitted an extended abstract to a conference with limited peer review. After receiving feedback, you would write the actual paper and submit it to a journal for proper peer review. But because the work was already published in conference proceedings, people often didn't bother with the full paper.
In some countries, administrators resisted this and only considered journal papers real publications. Those administrators were universally reviled by the CS community. Over time, most of them budged and started accepting conference papers as merits. And so CS became a field with lower than average standards for peer review.
In biochem sometimes there are a lot of skills involved, to the point where it's almost magic intangible qualities making an experiment succeed. Especially for more manual work.
For my master's research I spent 6 years refining a super niche technique until I was able to reproduce my own work.
No, if you want, there is virtually unlimited supplementary information you can attach to your publication. It is really about a mix of doing as little as possible and hiding details so competitors can't do it.
I think part of the reason why nobody seems serious about replication is that it's actually mostly not worth it.
Most research is useless and pointless, with only a few exceptions. We don't have a way to figure out which topics are the exceptions, so someone has to do the research. It's not worth it (or rather: extremely high financial risk) for companies or individuals to do it, so governments have to fund it.
At this point, the current amount of fraud does not justify replicating even 1% of studies. We would get less scientific advancement in total. The current situation likely does justify some small investments in shaping incentives.
>>It's not worth it (or rather: extremely high financial risk) for companies or individuals to do it
The problem is that it's hard to reliably capture value from research. A good example is LLMs. If OpenAI, Google, Meta and AWS had been able to build a wall round GPT-3.5 Turbo and above models, then I expect that they could have captured all the value of the research effort. As it is, I don't think that is/will be the case - it's almost too easy to replicate, as Mistral have shown. Note: I'm not saying it's trivial or something, but if you spend a few $million on it you can get close enough, and then spending a few $million more will get you all the way. Also, I am not talking about building a frontier model today (which requires $100 million or so and some difficult skills/organisation) but rather a model in say 3 years' time with the frontier performance of today's models.
As I said in another comment, I think it'd be much easier to convince Nature to create a new article type, Replication Study.
Nature, like other top journals, does not publish results that report already published findings. Replication Studies would be the exception. These would provide independent validation of a recent prominent article.
People would think twice before misrepresenting findings in top journals, and good ideas would spread out more quickly.
Wait, you don't need to be rich. With pay-to-publish (also known as open access), it is an absolute goldmine.
People generally don't want to do the work of editing and publishing, or lack the academic knowhow to do it. But if that is not an issue, I don't think money will be an issue either.
There isn't enough incentive currently to publish reproductions, starting a new journal using the same general publishing model isn't going to change that. But with money to burn you could add some incentive, or you could at least do things to improve publication quality like actually paying for good peer reviewers.
Do you really think that the current situation poses a >25% cost on scientific productivity? Do you think your system would be able to recapture that?
That assessment does not match up with what any practicing scientist thinks is even within the realm of possibility for harm to science.
Reading these conversations is like listening to C-suite execs at big companies talk about what employees are getting away with via work at home policies.
In the 2000s a friend worked at a top 10 pharma company with the fun job of reproducing results from genetics papers... 25% were reproducible. The rest were not, even after communicating with the authors to get conditions as close as possible.
So, yes, the current situation can safely be assumed to pose at least a 25% cost on science. And "productivity" is the wrong term here. The harm of fraudulent/bad science runs much deeper than productivity
I don't know about your friend, but there was an opinion piece, not a scientific study, that reported about that number of "landmark" cancer research papers as being "not reproducible" in industry hands.
However, it did not have methods, it didn't say how they were not reproducible, as in a figure or an effect etc.
The closest thing to a definition of "reproducible" was a footnote on a table defining it as "sufficient to drive a drug development program," which is not at all the same thing as reproducible.
Which is to say, I'm skeptical of these anecdotes.
Here is an overview of the three papers of the "Reproducibility Project: Cancer Biology" (1). The detailed results are in references 2-3
And if you've only ever encountered a single opinion piece on the reproducibility problem in biology/pre-clinical research, then I highly recommend you do a targeted keyword search.
I have read that paper too, but you were referencing the ~25% from your friend in industry, which corresponds to the editorial I cited, not these far more rigorous studies.
> then I highly recommend you do a targeted keyword search.
There's no reason to be insulting, especially when linking to well known studies.
We can quibble over the number, maybe it is as low as 10%. The cost to reproduce a study will be significantly higher than to produce it in the first place, due to differences in expertise, equipment and so forth. I estimate at least a 10X factor.
And I am intimately familiar with what researchers "get away with" while "working at home". As a researcher who tried to reproduce several research papers only to discover the original scientists were wildly exaggerating their claims or cleverly disguising fundamental shortcomings, I can assure you the cost is quite high to the scientific community, well in excess of 25% of the annual $48B NIH budget.
I hold a healthy disdain for my fellow scientists. The only way to get them to play by the rules in my view is to have a threat of a research audit hanging over them.
This is the key part, even legitimate findings tend to be exaggerated. A great step forward would be to distribute grants to reproduce new key findings.
For example, the NIH could identify the top findings from 2024 that need to be reproduced, and seek expressions of interest to reproduce these and/or other important findings identified by applicants. Perhaps, also reach an agreement with top journals to publish replications as a new article type, and link them to the original one, just like they do with comments/news & views.
It would instantly make those publishing super edgy findings much more careful, just in case, and things would become more efficient.
I love the idea of putting each year's top papers under the microscope. Present-day hucksters publishing in Nature will think thrice about making unsupported claims. Further, automatic reproduction of the research will serve as cement for the fundamental building-block nature of science.
Yes, exactly. It'd act as a deterrent for anyone considering exaggerating or misrepresenting findings. Also, it'd lead to quicker dissemination of true findings.
Currently, academic publications are in a bit of a market-for-lemons situation [1], where the sellers (authors) have much more information than the buyers (readers, funders).
> Do you really think that the current situation poses a >25% cost on scientific productivity? Do you think your system would be able to recapture that?
Yes and yes. I'm 6 years past defending my PhD and I have low confidence in being able to reproduce results from papers in my field (computational biophysics).
I was recently at an industry-heavy biophysics conference that ran a speed dating event, and my conversation starter was "what fraction of papers in our field do you trust?". I probably talked to ~20 people, with a median response of ~25%.
Even a tiny amount invested in reproduction studies and accountability would go a long way. Most papers in _computational_ biophysics still don't publish usable code and data.
It’s bad enough that too often I trust companies over academics nowadays. At the end of the day, a company has to answer to the customer. If what they offer doesn’t actually work, they go out of business. Academics often don’t have to answer to anyone. Just be smart and make the paper look good while being careful not to do something that could get you nailed for outright fraud.
Most stuff in papers can't be replicated so you can't really trust anything and are forced to see what actually works and is worth building upon. This is very expensive both in time and money.
There are oodles of people doing rote labwork/innovation-less but nonetheless skilled scientific-like work. How are they motivated? Academia's perverse reputational economy is the aberration, not the norm.
It would be so interesting if we came to a consensus that "cascading deletes" should apply to research papers. If a paper is retracted 20+ years later, and it has 4,500 references, those references should be retracted non-negotiably in cascading fashion. Perhaps such a practice could lead to better research by escalating the consequences of fraud.
This comment suggests a lack of understanding of the role of references in papers. They aren't like lemmas in proofs. Often an author will reference a work that tried a different approach to solve the same problem the authors are trying to solve, and whether that other paper is problematic or not has nothing to do with the correctness of the paper that refers to the other work.
Now, it's possible that in a particular case, paper B assumes the correctness of a result in paper A and depends on it. But that isn't going to be the case with most references.
If there were grant money for incorrectly claiming "this other thing that isn't a computer behaves just like a computer", well, we wouldn't need VCs anymore.
> If a paper is retracted 20+ years later, and it has 4,500 references, those references should be retracted non-negotiably in cascading fashion.
Imagine you're reading a research paper, and each citation of a retracted paper has a bright red indicator.
Cites of papers that cite retracted papers get orange. Higher degrees of separation might get yellow.
Would that, plus recalculating the citation graph points system, implement the "cascading deletes" you had in mind?
It could be a trivial feature of hypertext, like we arguably should be using already. (Or one could even kludge it into viewers for the anachronistic PDF.)
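Roughly, the recalculation I have in mind is just a breadth-first walk outward from the retracted papers over the citation graph. Here's a minimal sketch in Python; the `cited_by` / `retracted` inputs and the `taint_levels` name are made up for illustration, not taken from any real system.

```python
# Hedged sketch: propagate a "taint" distance outward from retracted papers.
# Distance 1 = cites a retracted paper directly (red), 2 = orange, 3 = yellow.
from collections import deque

def taint_levels(cited_by: dict[str, set[str]],
                 retracted: set[str],
                 max_depth: int = 3) -> dict[str, int]:
    """cited_by maps a paper id to the set of papers that cite it.
    Returns each reachable paper's distance (in citation hops) from the
    nearest retracted paper, up to max_depth."""
    level = {p: 0 for p in retracted}
    queue = deque(retracted)
    while queue:
        paper = queue.popleft()
        if level[paper] >= max_depth:
            continue
        for citer in cited_by.get(paper, set()):
            if citer not in level:  # first (shortest) path wins
                level[citer] = level[paper] + 1
                queue.append(citer)
    return {p: d for p, d in level.items() if p not in retracted}

# Toy usage: B cites A (retracted), C cites B.
print(taint_levels({"A": {"B"}, "B": {"C"}}, {"A"}))
# -> {'B': 1, 'C': 2}, i.e. B flagged red, C orange
```

A real system would probably also want to distinguish supporting citations from critical or passing ones before assigning a colour, but the propagation itself is that simple.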
That would be overwhelming and coarse. You wouldn’t know if an orange or yellow paper actually relies on the retracted citations or just mentions them in passing, unless you dig through the paper to figure this out yourself, and most people won’t do that.
I think a better method would be for someone to look over each paper that cites a retracted paper, see which parts of it depend on the retracted data, and cut and/or modify those parts (perhaps highlight in red) to show they were invalidated. Then if there’s a lot of or particularly important cut or modified parts, do this for the papers that cite the modified paper, and so on.
This may also be tedious. But you can have people who aren’t the original authors do it (ideally people who like to look for retracted data), and you can pay them full-time for it. Then the researchers who work full-time reading papers and writing new ones can dedicate much less of their time to questioning the legitimacy of what they read and amending what they’ve written long ago.
I don't know which way would be better, since I don't know the subtleties of citations in different fields. I'll just note that automatically applying this modest taint to papers that cite retracted papers is some incentive for the person to be discerning in what they cite.
Of course, some papers pretty much have to be cited, because they're obviously very relevant, and you just have to risk an annoying red mark appearing in your paper if that mandatory citation is ever retracted.
But citations that are more discretionary or political, in some subfields (e.g., you know someone from that PI's lab is going to be a reviewer), if you think their pettiness might be matched by the sloppiness/sketchiness of their work, then maybe you don't give them that citation, after all.
If this means everyone in a field has incentive for citations to become lower-risk for this embarrassing taint, then maybe that field starts taking misconduct and reviewing more seriously.
Thus incentivizing authors to add citations to established papers for no reason other than to increase their own trust score. Which already happens to a degree but this would magnify that tenfold.
The question is how many of the citations are actually in support? As in: some might be citations in the form of "Donald Duck's research on coin polishing[1] is not considered due to the controversial nature". Or even "examples of controversial papers on coin polishing include the work of Donald Duck[1]".
I don't think "number of citations" typically make this distinction?
Also for some papers the citation doesn't really matter, and you can exclude the entire thing without really affecting the paper.
Regardless, this seems like a nice idea on the face of it, but practically I foresee a lot of potential problems if done "non-negotiably".
I love the idea. It would also dampen the tendency to over-cite, and disincentivize citation rings. But mainly encourage researchers to actually evaluate the papers they're citing instead of just cherry picking whatever random crap they can find to support their idea.
Maybe negative citations could be categorized separately by the authors and not count towards the cited paper's citation count and be ignored for cascading citations.
If the citation doesn't materially affect the paper, the author can re-publish it with that removed.
> If the citation doesn't materially affect the paper, the author can re-publish it with that removed.
This paper is 22 years old. Some authors have retired. Some are dead.
I really think that at the very least it needs a quick sniff test. Which is boring uninteresting work and with 4,500 citations that will take some effort, but that's why we pay the journals big bucks. Otherwise it's just going to be the academic variant of the Scunthorpe problem.
And/or do something more fine-grained than a binary retraction, such as adding in a clear warning that a citation was retracted and telling readers to double-check that citation specifically.
If you are cherry-picking cites that agree with you, that is a much bigger scandal than you citing a paper that ended up being retracted 22 years later. The point of citations is to cite the relevant literature, pro and con.
I guess those kinds of citations should be put in a different category that doesn't increase the citation count of the referenced paper, in other words raise its prestige. These kinds of citations shouldn't do that anyway.
So now if you want to cite some paper you have to decide which papers you'd live and die with, and consequently your paper's prestige will depend on how many other papers want to live and die with yours.
There's an argument to be made that citing something to disagree with it should increase its prestige but not its credibility (to the extent that those can be separated): you're agreeing that it's important.
Most citations are just noting previous work. Here are some papers citing the retracted one. (Selected randomly).
>Therefore, MSC-based bone regeneration is considered an optimal approach [53]. [0]
>MSC-subtypes were originally considered to contain pluripotent developmental capabilities (79,80). [1]
Both these examples give a single passing mention of the article. It makes no sense for thousands of researchers to go out and remove these citations. Realistically you can't expect people to perform every experiment they read about before they cite it. Meanwhile there has been a lot of development in this field despite the retracted paper.
Jumping in with the others, this is not good. When I've written papers in the past and used peer-reviewed, trusted journals, what else am I supposed to do? Recreate every experiment and analysis all the way down? Even if it's an entirely SW project, where maybe one could do that, the code itself could presumably be maliciously wrong. You'd have to check way too much to make this productive.
> Recreate every experiment and analysis all the way down?
If an experiment or analysis is reliant on the correctness of a retracted paper, then shouldn't it need to be redone? In principle this seems reasonable to me—is there something I'm missing?
EDIT: Maybe I misunderstood... is your point that the criterion of "cites a retracted paper" is too vague on its own to warrant redoing all downstream experiments?
I think usually there's too much building off of each other for this. Standing on the shoulders of giants and whatnot. To me that's the purpose of society and human evolution but I won't get preachy. I didn't read the stem cell paper, but I'll use it for example. Let's say the stem cell paper says "stem cells are one type of cell from the human body" which cites some paper that first found stem cells. Maybe that paper cited the paper that first found any cells. And that one cited a paper about the molecular makeup of some part of the cell. And that cited a paper about what it means for an atom to be in a molecule. And that cited some paper about how atoms can contain electrons, and then that electrons are particles and waves.
I think, personally, it's unrealistic to expect every researcher who mentions anything that has an electron in it (aka most things) to need to recreate the double slit experiment. Or to harvest the stem cells themselves instead of buying them from trusted suppliers. Yes, as I type this out I do see that more re-experimenting would help detect fraud. But crucially, it really doesn't matter what an electron is to people determining that stem cells are in humans. The "non-negotiably" is what worries me. There should be some negotiation to say "hey, your paper uses this debunked article. You have x days to find another, proven paper that supports the argument, or remove the argument entirely, or we'll retract your paper as well." I think that's valid. Especially since the consequences here wouldn't fall on the ones who faked the paper but on the authors citing it (most of the time, I would imagine). I would hesitate to believe that people faking such crucial, potentially lifesaving research care that some nobody they'll never meet might be upset their paper doesn't make it.
I think really what I'd like to see instead is more checking done at the peer review stage. To me that's the whole point of the journal. I'm biased on this having been rejected during the peer review stage and disliking how expensive journal articles can get, but at the end of the day, that's the point of them. They should be doing everything in their power to ensure that the research is accurate. And if we can't trust that, what's the point to the journals at all? May as well just go on blogs or something.
That would certainly lead to people checking their references better. But a lot of references are just in passing, and don't materially affect the paper citing it.
One would hope that if some work really did materially depend on a bogus paper, then they would discover the error sooner rather than later.
It probably makes sense to look over papers that cite retracted papers and see if any part of them rely on the invalidated results. But unless the entire paper is worthless without them, it shouldn’t be outright retracted.
How many papers entirely depend on the accuracy of one cited experiment (even if the experiment is replicated)?
This is not at all what a citation means. If someone writes a math paper with a correct result, and the proof is wrong, then you cite that paper to give a corrected proof. If someone writes a math paper where a result itself is incorrect, then you cite that paper to give your counterexample. A citation just means the paper is related, not that it's right or you agree with it.
Just because you cite a paper doesn't mean you agree with it. At least in CS, often you're citing a paper because you're suggesting problems with it, or because your solution works better. Cascading deletes don't really help here - they'd just encourage you not to criticise weaknesses of earlier work, which is the opposite of what you're trying to achieve.
Depends on the paper; it would still require review mechanisms. “Nuke it from orbit” is an overreaction to this, as the debunked paper may play very little part other than as a reference.
Like very valid research being lost because they mention a retracted paper for some minor point that doesn’t really have a major impact on the final results.
The priority isn't about punishing you, or about your feelings or career at all. It's about the science.
If you cite something that turns out to be garbage, I'd imagine the procedure would be to remove the citation and to remove anything in the paper that depends on it, and to resubmit. If your paper falls apart without it, then it should be binned.
I can't think of a single paper that would fall apart if any of its cited papers were retracted. What field of science operates that way?
Science papers are novel contributions of data, and sometimes of purely computational methods. A data paper will stand on its own. A method paper will usually (or at least should) operate across multiple data sets to compare performance, or if only on a single dataset it's going to be a very well-tested dataset.
If MNIST turned out to be retracted, would we have to remove all the papers that used it over the years? That's about as deep as a citation can get into being fundamental and integral to a paper. And even in that case nearly any paper operating on that dataset will also be using other datasets. Sure, ignore a paper that only evaluates on a single retracted dataset, but why even bother retracting, as the paper would be ignored anyway, because what significant paper would use a single benchmark?
But 99.9% of citations have less bearing on a paper than being a fundamental dataset used to evaluate the claims in the paper. And those citations are inherently well-tested work product already.
So if people actually care about science, they would never even propose such a scheme. They would bother to at least understand what a citation was first.
You might not be at fault but your work depends on that wrong work, so your work is probably wrong too and readers should be aware of that. If it doesn't depend on it, then don't cite it! People cite the most ridiculous crap, especially in introductions listing common sense background knowledge with a random citation for every fact. That stuff doesn't really affect the paper so it could just be couched in one big "in my opinion" instead.
Academic papers have to cite related research to situate their contribution, even if they're not directly building on that research. When researchers can't reproduce a paper's results, they have to cite that paper when reporting that, or no one will know what they're talking about and the bad paper cannot be refuted. The whole system also needs many compare and contrast citations that aren't built on directly or at all, so you know what a paper is doing and not doing.
Yea, I hadn't really considered those kinds of citations. I was thinking of the piles of worthless citations that authors often put in simply because they're supposed to cite every fact, even if it's something that's common sense which they're not treating critically and doesn't affect their own work so they just did a quick search for any paper that made that claim.
> but your work depends on that wrong work, so your work is probably wrong
No, absolutely not, that's pure fallacy.
There might be some small subset of citations that work like a mathematical proof, but how many of these 4500 citations could you find that operate that way?
> There might be some small subset of citations that work like a mathematical proof
And even then, you're just weakening the result, not throwing it out entirely: instead of a proof of X that cites a proof of Y, you have a proof that Y implies X.
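As a toy illustration of that weakening, here's a hedged Lean sketch with placeholder propositions X and Y (nothing here comes from the papers under discussion):

```lean
-- Placeholder propositions, purely for illustration.
variable (X Y : Prop)

-- Before retraction: the paper proves X by combining its own argument
-- with a cited proof of Y.
example (cited_Y : Y) (argument : Y → X) : X :=
  argument cited_Y

-- After the cited proof of Y is retracted: the same argument still
-- yields the weaker, conditional result "Y implies X".
example (argument : Y → X) : Y → X :=
  argument
```

The content of the original argument survives; only its unconditional status depended on the retracted citation.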
Cascading invalidate. I don’t think it should disappear, I think it should be put in deep storage for future researchers doing studies on misinformation propagation.
1- "The errors the authors corrected “do not alter the conclusions of the Article,” they wrote in the notice."
2- "the Blood paper contained falsified images, but Verfaillie was not responsible for the manipulations. Blood retracted the article in 2009 at the request of the authors. "
3- "The university found “no breach of research integrity in the publications investigated.” "
4- "The notice mentions two image duplications Bik wrote about on PubPeer. Because the authors could not retrieve the original images, it states:
the Editors no longer have confidence that the conclusion that multipotent adult progenitor cells (MAPCs) engraft in the bone marrow is supported.
Given the concerns above the Editors no longer have confidence in the reliability of the data reported in this article."
In some fields more than half of the research is somehow not reproducible. Some is attributed to fraud, some incompetence. As a whole it makes science produced by these fields worse than a coin flip. Psychology is by far the worst culprit.
We're at an inflection point in history where the scientific method dictates we shouldn't trust many fields that use the title "science".
I don't think there's a single academic research lab out there that doesn't heavily doctor their data to make it publishable to some extent. The best of them "merely" cherrypick data. The pressures of academia make being honest an extremely bad career choice that can end up with your unemployment.
Publish or perish. You can't publish if your results aren't good.
Straight up fabrication is more common than we'd hope, but probably not systemically threatening. I'm much more concerned about how poorly replication goes even when authors are not malicious/generally following the "status quo" for methodology in their subfield.
Doesn't this paper make a fairly straightforward claim, which is either true or not? Hasn't there been any further research in the past 22 years to either effectively support or undermine the conclusions?
Unfamiliar with academia here, and I can't quite figure it out from TFA - does a retraction always imply wrongdoing, instead of mere "wrongness?" Or are papers sometimes retracted for being egregiously wrong, even if their methods were not intentionally misleading?
I've certainly seen papers retracted over copyright/IP issues with images or other details. Funnily enough, this doesn't mean the article goes away, just that it gets covered with a "Retracted" watermark.
Retractions are primarily associated with wrongdoing, but are sometimes also issued for "honest mistakes". If so it's typically with a very clear explanation, like in the link below.
Also, in biomedical research papers can get retracted if they can't show the subjects consented to have their samples (e.g. removed tumors) used in research even if the science itself is sound.
Not being familiar with her, that isn't telling me anything.
It seems like you're implying she's written exceptionally shoddy papers.
But on the other hand she could also just be exceptionally honest -- one of the very few researchers to retract papers later on when they realize they weren't accurate, as opposed to the 99+% of researchers that wouldn't bother.
Also I would imagine that retraction rates might vary tremendously among fields and subfields. Imagine if a whole subfield had all its results based on a scientific technique believed to be accurate, and then the technique was discovered to be flawed? But the retractions wouldn't have anything to do with honesty or quality of the researchers.
Having been in academia, having felt the pressure, knowing reproduction is not sexy and takes time away from "actual experiments", knowing some theories or groups have cult-like status, knowing that not having papers means not getting a PhD despite working hard and being smart, knowing that this is (experienced as) very unfair, etc... I'm very sure that 4 in 10,000 is the tip of the iceberg.
We need more reproduction. Or have some rule: Check all assumptions. Yes, it's a lot of work, but man will it save a lot of fake stuff from getting out there and causing a lot of useless work.
Having considered it I reckon it could be due to some systemic abuse of the process. Or it could be that she is working in a field where there is a high uncertainty rate.
Why don't you explicitly state which you think it is?
No, many honest researchers retract their own papers because they found a problem that cannot be solved by publishing a correction/errata (a kind of mini publication that corrects the original work).
It is extremely bad to use the number of retracted papers as a judging factor for a researcher. Using the number of papers retracted because of fraud (fabrication of images or data, stealing work, plagiarism...) is a different matter. Self-plagiarism is a slightly different case with a much broader grey area.
I actually retracted one of my papers. It was before it was published, but after I had submitted it. I had discovered a flaw in my methodology the night before that did have material impact on the results. I was so stressed out for 24 hours until I spoke to my advisor.
My advisor was very chill about it. He said that retractions aren't a big deal and was glad I spotted the issue sooner rather than later.
I corrected the experimental methodology and while the results weren't quite as good, they were still quite good and I got published with the correct results.
> I corrected the experimental methodology and while the results weren't quite as good, they were still quite good and I got published with the correct results.
I disagree. Your new results were much better, because they were sound.
The article said there was no finding that the primary author did anything wrong but that the original photos were no longer available so the paper could not be corrected.
NOTE: I DON'T FOLLOW THIS WORK CLOSELY: I am not sure that there are any successful programs using pluripotent somatic (adult) stem cells, if they even really exist, though there's lot of successful work with differentiated stem cells. So I think there's an unstated subtext as you surmise.
This paper was very important and eagerly received because the GW Bush administration had banned federal funding for research using foetal stem cells as a sop to the religious right (all that work moved to Singapore and China, and continued in Europe).
If wrongdoing is the same as intentional deceit, I would guess there are some that were not intentional, but instead driven by incompetence or simple mistakes.
Fraudulent/doctored images don't fall in to the incompetence/mistake category though.
Some types of mistakes/incompetence: improperly applied statistics, poor experiment design, faulty logic, mistakes in data collection.
Okay, I can answer this. Papers are never retracted for a theory being proven wrong, and they are always retracted when wrongdoing is found. This is why high-level research always has researchers recording their data and notes. Before computers (in my experience, the early 1990s), we had to record everything in a notebook and sign it.
That statement is wrong. Papers do get retracted because a major innocent error is found. This often happens at the request of the author (typically with an explanation from the authors). See the comment a bit further up for an example.
There's a difference between "theory proven wrong" and "proof being wrong". A finding that Theory A is wrong is still a valid finding. A wrong finding about Theory A is just a lie, it carries no value, and should thus be retracted.
In practice, wrong findings that aren't due to misconduct and aren't very recent are usually not retracted though. It's just considered part of the history of science that some old papers have proofs or results now known to be false. It is pretty common in mathematics, for example, for people to discover (and publish) errors they found in old proofs, without the journal going back and retracting the old proof. A famous example is Hilbert's (incorrect) sketch of a proof for the continuum hypothesis [1].
I have a friend who got their paper retracted, because it turns out they had made a big mistake in implementing an algorithm -- so big that after fixing it, the results entirely disappeared.
In that case, the retraction didn't really get any publicity, and I'm actually proud of them for doing it, as many people wouldn't bother.
However, in practice I would say the majority of retractions are for wrongdoing on the part of the authors.
I wish, particularly in the case of the modern internet, that it was easier for authors to attach extras to old papers -- I have old papers where I would like to say "there is a typo in Table 2.3", and most journals have basically no way of doing this. I'm not retracting the paper over that of course! This is one advantage of arXiv, you can upload small fixes.
> Or are papers sometimes retracted for being egregiously wrong, even if their methods were not intentionally misleading?
There could be a mistake the authors made which led to a wrong interpretation. Like, someone might write another article commenting on that mistake and wrong conclusions. But that wouldn’t be a reason for retraction. Something should be incredibly wrong for authors or journal to do that. Retractions due to fraud are much more common.
Academic research rarely (if ever) cares about "intentions" of the authors. I'd say papers are exclusively retracted for being "egregiously wrong" (or at least not trustworthy), and never for any "wrongdoing". The wrongdoing just happens to be a pretty good indicator that the conclusions probably aren't trustworthy.
Can confirm. My PI didn't review anything. Sent their journal reviews to students and told us to sign their name at the bottom. Straight up told us to falsify our results on more than one occasion (I refused). I reported them to admin. Admin didn't investigate, didn't even contact the witnesses I named, and gave the professor tenure. This was at UT Austin about ten years ago. Academia is broken.
I know, I'm in academia. But this silence is why these things keep happening. PIs hold power over their students to keep them in line, through letters of recommendation to networking and post doc/job offers. We need to work to correct that.
And this is why it's expected that the PI will take responsibility for any papers published by their lab – if the PI isn't doing their job, they should face the consequences.
We strongly need a "prestigious" journal devoted to publishing reproductions of other studies. Moreover, we need to change our perception of a good scientist. Doing novel research is awesome and great but reproducing other people's work is also important - it is a fundamental pillar of science.
The problem is even more pronounced with more and more specialized and expensive equipment required for doing certain experiments.
> It's trivially obvious that some kinds of stem cell can become any type of cell, given that we all had our beginnings as a single cell.
It’s not that obvious: as the brain grows, certain kinds of cells die off and never come back. For example, at 4-5 years of age the ability to speak different phonemes is lost due to a mass die-off of a certain type of brain cell.
Could be the same for the pair of cells that start a human life. Once their purpose is served they may never exist again.
Do you have a source on the phonemes situation? First time I hear of it, and I'm pretty sure I've seen people learn them past that point. I know they struggle with it, but, as an example, even Japanese people can learn the effective difference between l and r given enough effort.
It’s why certain ethnicities cannot pronounce words in their non-native language. Like people from the US not being able to make certain consonant sounds in German or Chinese being unable to pronounce the R consonant. So go look it up yourself.
You are referring to the critical period [1] of (second) language acquisition, which is generally thought to end with the onset of puberty [2].
Neurodevelopmentally, this period coincides with extensive synaptic and dendritic pruning and increased myelination (of axons) [3], which result in the loss of some connections and the strengthening and acceleration of others. Cell loss is not thought to be a major driver of brain maturation, nor is it thought to occur more frequently during this time window.
> It’s not that obvious: as the brain grows, certain kinds of cells die off and never come back.
I'm not sure what difference that would make. Those brain cells (and all the other brain cells that don't die off) were still formed by successive divisions of the single cell that resulted from sperm meeting egg. Therefore, that original cell is capable of producing any cell type found in the body.
It's important to distinguish between the types of stem cells when referring to pluripotency (i.e., the ability for a cell to differentiate into almost any cell type in the body). Embryonic stem cells are considered pluripotent. Adult stem cells are more correctly referred to as "multipotent" in that they can be coaxed into differentiating into other cell types, but typically into cell types close to their own lineage.
This is a big problem with Belgian scientists. Their culture puts a lot of pressure on publishing and so on, so they tend to falsify flashy results rather than just doing the science.
This statement holds up pretty well if you drop the country from it. Academia follows incentive structures just like everything else.
Say you were a software engineer who was paid by how often you shipped code with a nice title, but you didn't have to give people the binaries, so no one ever ran them. That is, the difference between nice documentation about code that never quite existed and scruffy documentation about code that does really useful things is that you get paid for the first and fired for the second.
Academia isn't quite that extreme but it does have incentives pointed in that direction.
The authors were all employed by the University of Minnesota Medical School, and I think only one is Belgian. Not sure how much Belgian science culture has to do with it.
I hope you realize there’re many different labs with different attitudes. I have one of my degrees from Ghent and what you describe never came up. AFAIK there wasn’t even a requirement to publish a paper for a PhD student to graduate anymore.
I'm guessing you consider "vetting" to be confirming as correct, rather than the vetting that referees already do? In that case, how would anybody know about it to vet it? As soon as it's communicated to other researchers to work on it's effectively public, as trying to keep all research secret like that would lead to enormous amounts of duplicated effort and slow scientific progress almost to a halt. Besides, some research (especially theoretical) takes years or decades to be confirmed and accepted, and some research is reporting extremely rare things that may be unconfirmable but will eventually, because they were published, be part of a larger theory.