Faked Beta-Amyloid Data. What Does It Mean? (science.org)
250 points by hprotagonist on July 25, 2022 | 168 comments



Recent big threads yesterday and a few days ago with 250+ comments each:

https://news.ycombinator.com/item?id=32212719

https://news.ycombinator.com/item?id=32183302

And a dang list with more:

https://news.ycombinator.com/item?id=32213973


Not a dupe, per se. Derek Lowe's summaries of current events are themselves valuable and aren't just "hey look, [this thing]!".


That's a good point. I call this sort of post either a follow-up or a quasidupe, depending on how dupey (?) it is.

The problem is that the HN discussions on a topic cluster tend to be much the same, even if the article itself isn't. But I've taken the [dupe] stigma off this one now.


HN dupes are mostly by value rather than by reference, and there's also a kind of topic repetition limit even when there's more commentary on the topic.


Might as well keep it updated I suppose.

Two decades of Alzheimer’s research was based on deliberate fraud - https://news.ycombinator.com/item?id=32212719 - July 2022 (295 comments)

Potential fabrication in research threatens the amyloid theory of Alzheimer’s - https://news.ycombinator.com/item?id=32183302 - July 2022 (236 comments)

Alzheimer’s amyloid hypothesis ‘cabal’ thwarted progress toward a cure (2019) - https://news.ycombinator.com/item?id=31828509 - June 2022 (307 comments)

How an Alzheimer’s ‘cabal’ thwarted progress toward a cure - https://news.ycombinator.com/item?id=21911225 - Dec 2019 (382 comments)

The amyloid hypothesis on trial - https://news.ycombinator.com/item?id=17618027 - July 2018 (43 comments)

Is the Alzheimer's “Amyloid Hypothesis” Wrong? (2017) - https://news.ycombinator.com/item?id=17444214 - July 2018 (109 comments)


this isn't an HN comment page, but it contains comments from co-authors, alzheimer's researchers, and most critically the lab lead (professor ashe) who oversaw lesne's paper: https://www.alzforum.org/news/community-news/sylvain-lesne-w...

professor ashe declined to comment for the science article but commented here. notably, she claims the journalist conflated two forms of Aβ and drew invalid conclusions.

hat tip to @atombender for surfacing this page.

tldr: many scientists believe the fraud is grave and inexcusable, but that its impact on research has been greatly exaggerated. comments on twitter from other researchers seem to echo this sentiment.


Pubpeer is an incredible project [0], and unless I am mistaken, you could say that this finding might not have occurred without it. For those who do not know, it is a website (with browser extension) that provides a comment section for academic publications, with anonymous commenting, for posting critiques and questions about papers. I suggest all of my peers install the extension as an early-warning system for poor-quality papers. The paper in question can be found here [1], and it is a good example of how simple image-processing techniques can be used to verify, or question, claims. It is unfortunate that while things like NMR, XRD, SEM/TEM images, and Western blots are commonly faked, and found out, not all science is based on data that is verifiable at the reader's end.

[0] https://pubpeer.com/static/about

[1] https://pubpeer.com/publications/8FF7E6996524B73ACB4A9EF5C0A...
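
To make the "simple image-processing techniques" point concrete, here is a minimal sketch of copy-move (duplicated-region) detection in Python. The file name, block size, and stride are illustrative assumptions; this only catches byte-identical regions, whereas real figure forensics relies on more robust perceptual matching, since recompression perturbs pixel values.

    # Rough sketch: flag small grayscale blocks that appear, pixel-for-pixel,
    # at more than one location in an image -- a naive version of the
    # duplicated-band checks applied to Western blot figures.
    # "blot.png" is a placeholder file name.
    from collections import defaultdict

    import numpy as np
    from PIL import Image

    def find_duplicated_blocks(path, block=16, stride=8):
        img = np.asarray(Image.open(path).convert("L"), dtype=np.uint8)
        seen = defaultdict(list)
        h, w = img.shape
        for y in range(0, h - block + 1, stride):
            for x in range(0, w - block + 1, stride):
                seen[img[y:y + block, x:x + block].tobytes()].append((y, x))
        # Blocks found at more than one offset are candidate copy-paste regions.
        return [locs for locs in seen.values() if len(locs) > 1]

    for locations in find_duplicated_blocks("blot.png"):
        print("identical block at:", locations)

In practice you would also filter out low-variance (blank background) blocks, which trivially repeat.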


To quote someone I know in Alzheimer research - “I’m pretty sure it’s all fake”

It’s funny because more than once I’ve had conversations off-the-record with people who had the same concerns.

What I think is most troubling is how long it took and how many papers support the claims.

In other words, there are thousands of papers supporting beta-amyloid and/or building off the data / theory. All of that data, and everyone who produced it, should be in question. Yes, sometimes you’ll see correlations even though the mechanism is not what you’d expect. That said, for 16 years? Lol

No, in reality, this should indicate there is systemic fraud in the industry. I don’t think everyone’s a fraud; it’s as much about what doesn’t get published as what does get published. How does this happen? I commented on incentives here previously: https://news.ycombinator.com/reply?id=32213123

I personally don’t see how we can advance science with the current credentialing and paper citation == academic success we’ve seen grow the last 50 years. In many ways, fundamental research has stalled, with applications of research advancing (giving us the illusion of advancing the fundamentals).

To fix this problem, an overhaul is needed. Most importantly, the gate keeping mechanisms should be removed.


These trends have a long history.

Halsted's 'radical mastectomy' was similar. He had an incorrect theory of cancer (centrifugal spread) that caused a lot of unnecessary harm, but it was not possible for people to push back against it because of his status in the community (iirc pushback eventually came from someone doing research in London where Halsted was less powerful). The Emperor of All Maladies is a great book about cancer generally which touches on this.

Mendel's peas are another example - a breakthrough that went ignored for 40 years because Mendel was a nobody and biologists of the time had their own nonsense theories not backed by real empiricism.

Phlogiston, Elan Vital, etc. - a little different since they weren't even pretending to be empirically true. Just people making stuff up.

Falling into made up tribal nonsense is the default state of humanity - even for scientists, it's hard to think independently to overcome that. At least scientists are supposed to have that as a goal, but focusing on this difficulty and how we're all affected by it does lead to better thinking imo - at least it helps recognize these failures as they're happening more quickly.


Another good example is the deciphering of Maya script. Yuri Knorozov did it[0], but who was he? Some Soviet nobody, so the field treated him with disdain and upheld the nonsense theories of the leading Mayanist, J. Eric Thompson[1]. Until Thompson died, that is.

[0] https://en.wikipedia.org/wiki/Yuri_Knorozov [1] https://en.wikipedia.org/wiki/J._Eric_S._Thompson


hahaha when I saw the name I thought "this the guy with the cat?" and then clicked through and it is! Why did I know about this guy, but only in the context of the cat?


Proof of reincarnation right there. That man came back as Grumpy Cat.


Mendel faked his data. It's ironic; I think you intended him as an example in the opposite direction. He is an example where he is famous for having the right idea, but modern analysis statistically proves his data were too good to be true.

[1] https://pubmed.ncbi.nlm.nih.gov/27578843/


Did he?

From the abstract: "my analysis could not clearly determine whether the bias was caused by misclassifying ambiguous phenotypes or deliberate falsification of the results."


More correctly, it's believed he picked several traits that segregated independently, allowing his demonstration of Mendelian genetics to be clearer, while leaving some murkier evidence unpublished. Whether this is the hallmark of a cheater or a genius scientist is up to the historians to decide, but it's obvious he was correct in the long run, although his understanding of genetics is a far cry from our current understanding of molecular biology.


Isn't "Phlogiston" basically negative oxygen? Which is why it actually had reasonable empirical support at the time.

I've read that the guy we now credit with discovering oxygen was actually trying to (and died believing he did) create 'de-phlogistoned' air, and because there was no phlogiston in it, things burned really well in it as it sucked the phlogiston out of the burning matter even faster than normal air.

The key difference is that for 'negative oxygen' to be a thing it would need negative mass, as oxidised (or de-phlogistoned) materials gained weight.

Just looked it up on Wikipedia, in case I was repeating urban legends:

> During his lifetime, Priestley's considerable scientific reputation rested on his invention of carbonated water, his writings on electricity, and his discovery of several "airs" (gases), the most famous[7] being what Priestley dubbed "dephlogisticated air" (oxygen). Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community.

https://en.wikipedia.org/wiki/Joseph_Priestley


Yes, the individual is frail and easily derailed, which is why we have institutions, on whose shoulders the whole process of science rests. So less individual guilt, more answers regarding institutional failures and errors in process design. Obviously, the Skinner box built from incentives and punishments has been hacked, and is useless at best, dangerous at worst.


I think you have it backwards - often it’s institutional power that delays disruptive results and an individual that discovers new things despite that.


I agree with you that disruption comes from individuals, but the institutions nourish individual types and filter for individuals. Worst of all, some institutions reward the "meta-hack", aka hacking institutions to circumvent the selection pressure (after all, it's what they themselves often do to thrive), and before you know it it's an office full of people who are extremely good at grant hacking and bad at research.

I wonder what would happen if you allowed 20% of the money to be distributed at random.

Like if the grant application fails, you still have a chance for a wildcard.


A sortition-like system for grants would help for sure. The problem is, how many proposals are you permitted to submit? If you flood the process you stack the deck for a random win in your favour.

I think random reviews from professionals outside of a field might be possible as well, at least reviewing to what extent the data supports a hypothesis. For instance, say you had a random set of statisticians review the datasets and accompanying hypotheses for 2 or 3 competing theories in biology and cast their votes on what hypothesis seems most plausible given their assessment of the data. They wouldn't be familiar enough with the field to recognize the "leaders" that might be influencing a field politically rather than empirically. Not a perfect system for sure, so maybe someone has a better idea.


To be fair, there are a lot of exceptions to Mendel's laws and he falsified some of his data so it matched his expectations.


This is one of the primary arguments against immortality as well. If Newton were still alive we'd be arguing about why GPS satellites get clock skew and gravitational lensing.


And, uh... what are the arguments for immortality?


It’s okay to recognize a problem without proposing a solution. I agree that there are alarming levels of fraud. But heading straight to “tear down the journals” seems mistaken.

New progress is rarely made by burning down old structures. New work tends to transcend old work. So the best way to remove the gate keeping mechanisms is to make them irrelevant.

I suspect that a few more generations of tech progress might make that a reality. It seems like a matter of time till some teenage upstart is hacking on their brain the same way I hack on ML, for better or worse. And teenagers tend to notice correlations that the old guard miss.

We’re just not there yet. And that’s fine. Our options are to work within the confines of existing systems, or build new systems. Removing mechanisms is a bit like removing a dam because it’s leaking. The best strategy is probably to fix the leaks.


> But heading straight to “tear down the journals” seems mistaken.

Ultimately, I think journals can be reformed but it’ll break their business model.

Effectively, you can publish without reviews. Then have credentialed people anywhere anonymously making comments / challenges. You can also let the public similarly comment. Similar to movie reviews - people in and out of the industry could both review.

On top of that, allocate 25% of grants for replication, which could be done in a somewhat public manner and attached as a report to the paper. Things like that.

That said, Einstein initially had his work rejected. There weren’t strong journals acting as gatekeepers at the time; people would just publish and publicly debate. I don’t think there would be an issue with that now, particularly with the internet.


> Effectively, you can publish without reviews.

Yes, you can. If you want an academic career, you'll also need to publish in prestigious outlets though.

I would love a better model for academia, but I haven't heard a convincing one, let alone one gaining sufficient traction.


I think one of the fundamental issues is that in science it is absolutely forbidden to accuse someone of fraud. If you do that you will be cancelled, to use a modern term.

And no peer reviewer dares question the integrity of the data or researcher, it's taken as gospel.

So while many might know about it, nobody dares say a thing.


You will not be cancelled as long as you can back your claim, like with everything in this world. The equivalent of cancelling on HN is of course downvoting to oblivion. We have that tool here as well. Controversial opinions are heard as long as they have backing and/or something important to add to the conversation.


You can report fraud to the ORI (Office of Research Integrity) and they won't do a thing about it. They won't open an independent audit of any federally-funded academic researcher until after the fraud has been publicly exposed in the media, as this one has been. Even then:

> "The agency’s reply, which Schrag shared with Science, noted that complaints deemed credible will go to the Department of Health and Human Services Office of Research Integrity (ORI) for review. That agency could then instruct grantee universities to investigate prior to a final ORI review, a process that can take years and remains confidential absent an official misconduct finding. To Science, NIH said it takes research misconduct seriously, but otherwise declined to comment."

https://www.science.org/content/article/potential-fabricatio...

There is simply no willingness on the part of federal funding agencies to investigate their own grantees after they receive credible evidence of fraud.


> So while many might know about, nobody dares say a thing.

Maybe, but it's more likely that compiling solid evidence for fraud is just far too resource intensive and there is no reward for replication--especially negative replication.

And committing the fraud is far too tempting--you got your needed publication for tenure, and you're mostly safe as long as nobody accidentally makes your paper the cornerstone for something that becomes famous.

As for peers, you can spend your time moving your own ideas forward, or you can take a detour to prove one particular important researcher's ideas wrong. It's pretty clear which is going to be more beneficial to your career.

As the article points out, this was triggered by some short sellers looking to make a buck--not anybody in the field itself.


For what it’s worth, this hasn’t been my experience in ML. But I’ve had a limited window into academia.

We were always questioning each other’s results. Especially when something seemed too good to be true. “Integrity” can take many forms, and it’s surprisingly easy to fool yourself when you’re doing work in the field. So the default assumption was that we were all fooling ourselves, not each other, unless proven otherwise.


But were you accusing others in public of intentional fraudulent activities? Because it's a different thing than "you have bad statistics here" or "wrong assumptions here".


That’s true.


ML is a bit different because if the authors provide sufficient data, then any fraud is trivially obvious on replication and if they don't provide sufficient data, then that is a reasonable criticism on its own without having to go into motives.

However, in other areas where you have "real world" experiments, you don't even expect the experiments to replicate - two clinical trials on different sets of patients won't necessarily yield the same results, and different results when repeating a biological experiment does not necessarily imply fraud; we know that in this domain (unlike ML) we sometimes do have unknown confounders that experiments don't control for.


we have peer review... but peer review is human, and in some fields, it's a small world.

I'm not in that particular field of neuroscience (albeit Alzheimer's patients do come in to the clinic...because where else are they going to go?). But when I was in the research world, a big factor was who you knew, and who you could cozy up to, in terms of getting grants, papers published in top journals, etc. Who you ate lunch w/ in big conferences. That being said, fortunately, there was an element of recognition of skill/ability and good work, as well as a decent amount of (sometimes irrelevant) challenge/response and general peacock-ing in terms of the peer review...but it was a reminder that science is above all, a human endeavor and not immune to humanity's sins.


I'm not sure how fewer gatekeeping mechanisms would be helpful? Wouldn't that just lead to more crap being produced, and informal power structures and cliques forming?

It sounds like research needs to be held accountable to some kind of standard for accuracy or quality.


One issue is that the current gatekeepers (peer reviewers for journals, grant proposal scoring committees, promotion committees, etc.) are often the people most prominent in their field. On one hand this makes sense for obvious reasons (an expert is the most equipped to judge their field), but on the other hand things like the amyloid hypothesis get 'baked-in' because, well, it's pretty hard to ask those same individuals to highly rank a large grant proposal that goes against their own theory.

So I think the answer is gatekeeping needs to be different -- not less.


Do note that the more the gate is kept, the stronger the incentive for cheating. Moreover, peer review is overrated.

An experiment by the NIPS conference in 2014 found that ~60% of submissions are in the "gray middle" - not obviously great or obviously crap.

In more detail: they split the PC in two, and had 10% of papers (166) reviewed by both halves. Each half had to accept ~37 papers. The halves disagreed on 21 accepted papers [1].

So yeah, if your submission is sciencey, it's the flip of a coin whether peer review will accept it.

[1] https://hunch.net/?p=467864
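
As a sanity check on those numbers (my own framing of the arithmetic, not taken from the hunch.net write-up):

    # 166 papers were reviewed by both halves; each half accepted ~37;
    # the halves disagreed on 21 of the accepted papers.
    shared, accepts, disagreements = 166, 37, 21

    accept_rate = accepts / shared              # ~0.22
    observed_overlap = accepts - disagreements  # 16 papers accepted by both halves
    random_overlap = accepts * accept_rate      # ~8.2 expected if the halves were independent coin flips
    perfect_overlap = accepts                   # 37 if the halves agreed completely

    print(accept_rate, observed_overlap, random_overlap, perfect_overlap)
    print(disagreements / accepts)              # ~0.57: share of one half's accepts rejected by the other

The observed overlap (16) sits roughly midway between the random baseline (~8) and perfect agreement (37), which is where the "flip of a coin" framing comes from.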


Agree.

I don't think we want /r/science moderators determining what is accepted research. But there must be some way to democratize the process short of waiting for the perfect AGI to moderate publications.


It seems plausible that informal power structures may be easier to disrupt with better data than formal power structures.


> I personally don’t see how we can advance science with the current credentialing and paper citation == academic success we’ve seen grow the last 50 years.

As a scientist I would be thrilled to see someone remove these systems. The question then is: what to replace it with that would work better? None of our resources are infinite.

It's even harder when the fundamental problem is research fraud.


I'm more worried about, "how do we illustrate the clear need for a change without pouring gasoline directly onto the anti-intellectual bonfires that already exist?"

Every one of these little scandals points us to opportunities to improve, but also is confirmation bias for people who think science is just one system of superstitions amongst a field of viable contenders.


> I'm more worried about, "how do we illustrate the clear need for a change without pouring gasoline directly onto the anti-intellectual bonfires that already exist?"

Asking this question is in and of itself directly pouring gasoline onto the bonfires.

Just work on fixing the damn problems. Trying to PR them is part of the problem.


It doesn’t really matter either way. At a meta-level this isn’t about Alzheimer’s, and it isn’t about “fixing science”. Nobody in this discussion has any constructive idea how to cure this horrifying disease, or how to build a scientific establishment that isn’t vulnerable to fraud. If that’s all we cared about this would be a boring policy discussion about how to fix peer review. But nobody here is interested in having that discussion: this discussion is about whether we should discard the input of the scientific community as hopelessly corrupt.


I'm surprised that no one has mentioned prediction markets (PM) as an alternative. This is an old paper, but IMO it is still persuasive.

https://mason.gmu.edu/~rhanson/gamble.html


Back when I used to serve on conference review committees I'd talk about the drunk and the lamppost problem: finding the keys in the dark is too hard, so he looks under the lamppost where the light is better (the authors make a simplifying assumption that allows for an elegant approach but the solution is then irrelevant to solving the original problem).

In this case curing Alzheimer's is too hard, but we can make drugs that clear up the plaques that are associated with Alzheimer's and not pay attention to whether doing that actually helps patients: we can document that the plaques are reduced and publish on that basis.

So I suspect that a lot of the research is correct but of no value, because the wrong problem is being "solved".


Cognitive bias kept me from seeing this at first, but this is really a sign, not just of a broken set of incentives, but _decline_.

If you're old enough, imagine how we (USA-centric) would mock flawed research coming out of the Soviet Union.

Well, here we are. And unfortunately there are plenty of signs of decline once it dawns on you that's what's happening.


Indeed. Our institutions are severely degraded, if not horribly broken. I hope it is not irreparably so, but I am not optimistic. The signs of societal and imperial decline are everywhere these days and we do not seem to be able to meet the challenges, many existential, that lie before us.

I'm not sure what to do.


There's somebody on this thread who works in the field of Alzheimers research who says this story is exaggerated.


Well, you don't need to go through the gatekeeping mechanism; you can just publish a blog. The gatekeeping mechanism is like a web of trust that retains only what is considered valuable by the community, so researchers have a bounded number of articles to read. At least, this is what I think.


It's exactly the same problem 'web of trust' has though - incentives for the gate keepers are NOT aligned with accuracy necessarily.


Yeah, but I think the problem is not really in the web of trust but in the incentives for researchers. If we are still stuck in this publish-or-perish trend, even if we replace those publishers with journals based on 'open communities', this kind of thing will still happen. However, if we abandon the use of such indices (H-index, citation count, whatever), what should we use to evaluate grant applications? Sales pitches, like startups asking for VC money?

I don't know, I think the system is problematic but I can't really think of something else that is feasible.


>In other words, there are thousands of papers supporting beta-amyloid and/or building off the data / theory.

It is important to note that the fraud doesn't encompass all of beta-amyloid, but is specific to a subtype called Aβ*56.

The plaques are there in autopsy tissue. We can see them. What is causing them is still uncertain.

>No, in reality, this should indicate there is systemic fraud in the industry.

Disagree. This indicates that the field can successfully identify and remove bad/fraudulent lines of inquiry in a relatively short amount of time (a decade).

>A complete overhaul is needed

This is extreme. Fraud happens. Scientific process is designed to root out unverifiable theories, by design. The process is working as intended, why burn it down?


"the field can successfully identify and remove bad/fraudulent lines of inquiry in a relatively short amount of time"

It took 16 years and only happened due to a couple of neuroscientists spotting a short selling opportunity, which allowed them to fund an investigation. Phrased differently, the field itself did not find this fraud. Outsiders did, when motivated by non-academic systems.

"The process is working as intended, why burn it down?"

The misallocation of hundreds of millions of dollars on the basis of a Photoshop, with the only outcome being that everyone involved says "no comment", is not by any measure the process working as intended.


It's specific to the most important subtype, which single-handedly rescued the entire beta-amyloid theory from the waste bin it was headed for in the early 2000s.

Even 5-10 years ago I heard stories of researchers criticizing the "beta-amyloid cabal" for blocking every other avenue for research, as they were the ones in control of the journals (the referees, editors) and grants.


The process is not working as intended.

16 years to detect and correct major fraud in the field? That's simply inexcusable, and a sign that things are definitely not working properly.


What do you mean by "the gate keeping mechanisms" should be removed? Do you mean the difficulty of publishing? Or tenure? Or degrees?


All of those gate keeping mechanisms have systemic issues and need to be reformed at the very least. "Removed" might be a bit strong, but I definitely see how one could reach that perspective. Regardless, they certainly have lost the credibility to claim any monopolistic authority over the domains they purport to serve.


a) It's easy to suggest that gatekeeping mechanisms like degrees, tenure, and journals controlling publication "should all be reformed" without proposing any specific alternatives; but they all exist to prevent an obvious set of anti-patterns, and if you eliminate them you just get different anti-patterns. Can you make a specific proposal of a better system to replace them with?

b) In any case, the $$ incentives and root causes are distorted, so there's no point in scapegoating the gatekeeping mechanisms. The US pharmaceutical industry (for example) is 3.2% of GDP and 48% of the global pharmaceutical market. That amount of $$ is obviously going to distort policy at all levels.

c) Scientific fraud should be criminalized; there should be stiff penalties and they should actually be enforced (actual prison, fines, losing job and tenure, retracting PhDs). Otherwise you're playing a race to the bottom in a consequence-free environment, against someone else who's going to do unethical things.

d) Until you address the root causes/incentives, people's behavior won't change. Why would it?

Compare to the recent similar scandal on "self-reported data" for US News college rankings. [https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...]


revolutionary ideas will always come from outside, because institutions naturally can only form around what they already assume to be true... but taking that to mean institutions serve no purpose is terribly shortsighted

one thing that does have to go though is the implication that the peer-review process is 'science' and anything outside of it is 'not science', which is just embarrassing


I think this problem is much more severe in cases where a more accurate assessment would be, "we have no idea what is going on or how to stop it, even though we desperately want to". People are more likely to grasp at straws if there is nothing other than straw available.

This does not, of course, mean that it's not still a big problem; far from it.


One of the big questions I have is what happens if there's no significant financial incentive tied to a therapy. Lithium seems quite promising, for example. However, it's an abundant salt that requires very little research (meager funding opportunities) and can't be patented (non-existent commercialization opportunities.)


You can still IP-protect delivery mechanisms and branding. Insulin and Tums are examples, respectively. I don't see the potential for issues here unless it's an ultra-rare disease where the total market isn't worth the certification costs. That doesn't describe Alzheimer's.


What’s one of the first things they teach when dealing with statistical analysis?

Correlation is not causation


that is quite the accusation, verging on slanderous, based on hearsay.


Interesting article. From a non-researcher's perspective, I find a couple of points extremely disturbing.

- The failure to notice and act on the faked data in the Lesné papers is still a disgrace, and there’s plenty of blame to go around among other researchers in the field as well as reviewers and journal editorial staffs.

- Every single Alzheimer’s trial has failed.

I strongly suspect there is more fraud, just because of human nature. It looks like multiple checkpoints are failing. We also have the replication crisis going on. It's pretty clear at this point that incentives are misaligned at every level of the research pipeline.

It's a bad time for this highly public research failure. The general public's faith in experts is dropping, and maybe for good reason. As the economy and quality of life deteriorate, I think we will see the public demand "results" from experts.


A lot of systems are based on trust. The checkpoints are sometimes in place not so much as a thorough verification but rather as a signaling mechanism, a vote of confidence.

While I do think that a better aligned system of immediate incentives can ameliorate the symptoms, it's only a patch for moral behavior.


It's also possible that Alzheimer's is an incredibly challenging disease and the tools we have to deal with it are suboptimal. People have a hard time distinguishing failure from malice, unfortunately. It's understandable to want someone to blame.


It is a hard disease to study for sure. The hardest part of it is that there's no effective test for it, you just have to check for a somewhat vague constellation of symptoms over a person's entire lifespan.

I think a big part of the obsession with the amyloid plaques hypothesis is that it was the only issue that was consistent among patients that they could actually test for, so rather than continuing the harder work of searching in the dark for something better, researchers latched onto exploring how and why that was associated with Alzheimer's.

And it makes sense in a way, there's obviously something going on and figuring out why the plaques are produced should provide some mechanistic insight into what's going on. And that's true, in the same way that capturing all of the EM emissions from a laptop will provide some insight into what's going on inside the laptop, but that information will likely only be useful if you already have a good mechanistic model of a laptop's inner workings (people have successfully read data across an air gap this way). I suspect that's what will happen with amyloid plaques as well, ie. plaques will only make sense retrospectively.


I agree.

However, I also think we are seeing evidence of malice, greed, and maybe desperation in these acts of scientific fraud. Experts need to quickly define what malice in their respective fields means. Then, most importantly, they need to act on it, restore integrity and discipline, and show results.

Otherwise the general public will define it and act on it themselves.


One way for something to be extremely challenging is for it to be impossible


People want to point the finger at structural problems with academia to explain why this wasn't caught earlier. But what about industry? Didn't they have a huge incentive to catch this sort of thing before they spend boatloads of money on clinical trials, and didn't they have some level of immunity to concerns about tenure committees and journal editors? If this was so easy to catch if it weren't for the troubling incentives of academia, why did drug companies still spend billions on it?


Because of the potential upside.

If your potential market is “all aging people”, that’s a pretty big market. Even better when you realize that as people are older, they have more money, so the market you’re targeting tends to have more money to spend on medical expenses.

Now, couple that with a highly competitive landscape where the first to market will capture the vast majority of that market and you can see how some steps might get skipped.

Finally, even with purely random results in clinical trials, if you had 20 companies each running a trial, you could expect at least one of them to randomly have a positive trial (p <= 0.05). So you could have “promising” results from a small trial that would only fail once you had a larger number of participants.
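
A quick illustration of that point, assuming a therapy that truly does nothing and a standard 5% false-positive threshold per trial:

    alpha, n_trials = 0.05, 20

    p_at_least_one_positive = 1 - (1 - alpha) ** n_trials  # ~0.64
    expected_false_positives = alpha * n_trials            # 1.0 on average

    print(p_at_least_one_positive, expected_false_positives)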

The market incentives and potential for lost opportunities were probably enough to justify a larger risk for some companies.


There were no clinical trials targeting this result. From the linked article:

> Did the *56 Work Lead to Clinical Trials?

> [snip] And the answer is that no, I have been unable to find a clinical trial that specifically targeted the Aβ*56 oligomer itself

It was industry (well, short sellers) that led to this being uncovered:

> He was originally hired by two other neuroscientists who also sell biopharma stocks short - my kind of people, to be honest - to investigate published research related to Cassava Sciences and their drug Simufilam, and that work led him deeper into the amyloid literature.


According to the article, the research in question did not result in any clinical trials.


> why did drug companies still spend billions on it?

Due diligence is hard. Why did no one catch uBeam? Why is Energous still a thing? Why has no one called out LAZR on their nonsense?


> why did drug companies still spend billions on it?

Because they made even more off it.


Only one drug has been even provisionally approved--Aduhelm--and that approval is so mired in controversy that everyone is refusing to pay for it, so I doubt its company has made even a million dollars in revenue off of it.

No, Alzheimer's has been a multibillion dollar sinkhole for drug development cash with absolutely nothing to show for it.


No, they lost money on it. Clinical trials aren't cheap.


How? They aren't getting revenue until they can sell a drug that has passed clinical trials, and no clinical trials have been successful for Alzheimer's.


"First off, I’ve noticed a lot of takes along the lines of “OMG, because of this fraud we’ve been wasting our time on Alzheimer’s research since 2006”. That’s not really the case, as I’ll explain..."

I'm not seeing the explanation. The amyloid hypothesis was weakening in 2006, and a series of apparently ground-breaking but actually fake results gave it new life over a significant period of time. Yes, the author says: "the main inaccuracy in that statement is that we've actually been wasting our time in Alzheimer’s research for even longer than that," but the problem is that testing a false hypothesis and discarding it isn't a waste of time; it's how science should work. Fraud and foregone failures, of course, are how science shouldn't work.

And yes, there's a lot of fraud around, but I don't see how this isn't especially damaging.


> I'm not seeing the explanation.

It's in there:

> But my impression is that a lot of labs that were interested in the general idea of beta-amyloid oligomers just took the earlier papers as validation for that interest, and kept on doing their own research into the area without really jumping directly onto the *56 story itself. The bewildering nature of the amyloid-oligomer situation in live cells has given everyone plenty of opportunities for that! The expressions in the literature about the failure to find *56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so.

So labs wanted to work on this for their own reasons, not because of this result.

> Did the *56 Work Lead to Clinical Trials?

> That’s a question that many have been asking since this scandal broke a few days ago. And the answer is that no, I have been unable to find a clinical trial that specifically targeted the Aβ*56 oligomer itself

So there weren't even any wasted clinical trials.

Basically his point is that maybe this drew some extra researchers into that area, but it was a worthwhile area anyway. The fact that nothing worked isn't particularly relevant because nothing has worked for Alzheimer's anyway.


You can escape the *s with \ (i.e. \*56) to fix the formatting of *56.


Here is a lengthy response from more than 20 researchers in the field [0]. In brief, the claims made in the article above are exaggerated. As someone who works in the field, and who is skeptical of the amyloid hypothesis, I agree.

[0] https://www.alzforum.org/news/community-news/sylvain-lesne-w...


It’s so shocking that they don’t want their field to be discredited.


Not really. There is plenty of criticism of the amyloid hypothesis within the field, and even specifically of the oligomer variant thereof that Schrag focused on. You can even read that criticism in the link I provided.

Schrag's investigation did not (in my opinion) discover any fraud in the Lesné paper, even if the paper perhaps over-states its case.

The amyloid hypothesis has several very strong strands of evidence in its favor, specifically that it is (currently) the only way to reconcile the indisputable genetic and pathological evidence for mutations in the genes APP and presenilin both producing early-onset Alzheimer's disease. Amyloid is derived from the action of presenilin on APP.

Until an alternate way of reconciling this observation is found, the amyloid hypothesis will always have supporters.


Having read it, I don't recognize anything that suggests the Science article is an exaggeration. Perhaps there's something in the comments you're referring to, but if so can you be more specific?


Scientists are often very circumspect in how they phrase things, but this particular phrasing in the main article is notable:

>“This is not a real scientific problem, but it is most unfortunate for general science credibility,” Selkoe wrote to Alzforum.

Edit: I should add that the "oligomers" described in the Lesne article have been found by other researchers, see [0]

[0] https://scholar.google.co.uk/scholar?hl=en&as_sdt=0%2C5&q=Aβ...


I read that it might actually be that these specific oligomers might not be detectable in anything but genetically engineered mice - and detection in humans has not actually been proven outside the papers in question. Is this true?


It's quite difficult for researchers to gain access to human brains, needless to say, so the number of studies attempting to find specific oligomeric species in human brain are limited.

However (and without attempting an exhaustive search), detection of oligomeric amyloid (if perhaps not the specific species reported by Lesné) has been reported, e.g.

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1750-3639....


So you're in complete agreement with the article, then? You've said nothing that wasn't mentioned.


High molecular weight assemblies of Abeta have been repeatedly identified by numerous researchers from human tissue and other sources. However, like me, Derek Lowe didn't investigate the voluminous literature on the topic to see whether the specific Aβ*56 species has been found in extracts of human tissue by others.

With a little extra searching, I found the article below, which does appear to replicate detection of a potential 56 kDa form of Aβ in human tissue by western blot of SDS-PAGE gels, as shown in the Lesné paper (Figure 1), though the authors express concern that its true concentrations may be very low and that it may be an artefact.

https://onlinelibrary.wiley.com/doi/full/10.1111/j.1582-4934...


In the article above? This is in response to the original article by Charles Piller, not this summary by Derek Lowe.


Apologies! Yes, the Piller article. Unfortunately I can't correct the original comment.


Amusing. Second-hand experience[0] with "exciting science areas" led me to this belief already. It's one of those interesting things. For folks outside of science, they think most of this stuff is backed. You can see it in the fact that people will quote "there is evidence that X is Y" using some random study, commonly on HN.

There's a common reaction to this sort of thing where people act like it is some betrayal of a sacred pact, etc. etc. but there is so much fake science out there that it is really hard to believe that there is actually some betrayal and it's not just the reality of the field.

And always there'll be some guy who comes up with a "I've never heard of this and neither has any of the ten guys I talked to about it". Just do the math. At a 10% fake rate (which I would consider extraordinary) there's a 1 in 3 chance that you wouldn't detect it with just independent observations of you and 10 friends (and that's assuming you and your friends are independent here, which is very generous). At a 5% fake rate (which is still horrible), there's a 1 in 10 chance that even you and 40 of your friends would not detect it.

The thing with not-so-rare things is that you can fail to detect it through sheer chance.
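
A quick check of that arithmetic, under the simplifying assumption that each person independently observes one project and fakes occur at rate p:

    def p_nobody_sees_a_fake(fake_rate, observers):
        # Probability that none of the observers happens upon a faked result.
        return (1 - fake_rate) ** observers

    print(p_nobody_sees_a_fake(0.10, 11))  # you + 10 friends -> ~0.31, about 1 in 3
    print(p_nobody_sees_a_fake(0.05, 41))  # you + 40 friends -> ~0.12, close to the 1-in-10 figure above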

0: https://news.ycombinator.com/item?id=25926188

P.S. hn.algolia.com is excellent. I managed to find this comment in a couple of seconds on it.


The "I've never heard of this ..." comment can also be explained by the possibility that the person making the comment is one of the fraudsters. This adds 10% to the 1/3 chance in your example. So no matter how high the rate of fraud is, we would expect to hear comments like this.


I did not think about that, but you are right (with appropriate maths modifications)


Good write up on a very disturbing case of blatant academic fraud.

Some of the methods mentioned that were used to discover it are pretty simple - prior compression artifacts showing borders around sub-areas of a larger image, indicating cutting and pasting of elements, etc.

What I don’t get is why the underlying data doesn’t get included, with any graphs just generated from that. Why is an embedded image on its own considered good enough?


This type of data is 'old school' molecular biology.

Creating those blots is a grind. It takes hundreds of hours to set up those experiments and they often don't work, or go wrong, and you have to do it all over again. The people who are good at making these are usually the people who aren't really up to date with the whole 'computers' thing.

Yes, there would be a hundred ways to easily verify the provenance of data, from an image of a blot all the way to publication. However, this area of research is very slow to incorporate the latest computer tech.


[Be aware that your comments appear to be default-dead. I've vouched for this one, but you may want to contact dang to ask for your account to be unflagged: I can't see anything objectionable in your post history so it's probably a false-positive].


Thanks for letting me know. Mods couldn't figure out why either and fixed it.


Western blots are analog -- the captured image is the underlying data.


Could one imagine imaging equipment cryptographically signing the images it spits out?

(Sorry if this is a super naive HN-esque comment – the topic is far outside my field, and I don't mean to actually suggest this as a solution, I'm just curious.)
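
For what it's worth, here is a minimal sketch of the idea using Python's cryptography library, assuming a device-held Ed25519 key; the file name and key handling are illustrative only, and the hard part in practice would be key custody and tamper-resistant hardware rather than the math.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    # In a real instrument this key would live in (and never leave) the device.
    device_key = ed25519.Ed25519PrivateKey.generate()

    with open("raw_blot_scan.tif", "rb") as f:
        image_bytes = f.read()

    # The signature is exported alongside the raw image file.
    signature = device_key.sign(image_bytes)

    # A journal or reviewer verifies against the manufacturer-published public key;
    # verify() raises InvalidSignature if even one pixel has been altered.
    manufacturer_public_key = device_key.public_key()
    manufacturer_public_key.verify(signature, image_bytes)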


That's definitely possible.

These sorts of images are typically re-arranged to fit nicely in a figure for publication. It's now more common that, for any figure containing a western blot, you also have to provide the raw unaltered image. Though this wasn't standard practice at the time.

As someone in the field, western blots are terrible for a number of reasons. Even if you get nice bands and you're honest about what you show. The companies that make the antibodies for these commonly sell people duds that will bind to a number of unknown things, and people blindly trust that since the website said it's specific for protein-XYZ, that's what they're measuring.


Unfortunately, it seems that in this case it is the author of the article published by Science (Schrag) who is remiss and has written a poorly-researched article. See the link I provided elsewhere in this thread.


I'm not in academia but here's what I don't understand.

Generally, my understanding was that important results typically get replicated by other institutions and teams. Not necessarily to prove the initial result but to work out the limits of whatever the conclusion was. Imagine you discovered that ibuprofen could reduce the risk of stroke by 30%, for example. You would have other teams that might start playing with the factors and asking different questions like:

- What if we raise or lower the dosage?

- What if we mix that with other medications?

In doing so you'd get further confirmation of the original result.

You'd think there'd be some similar studies done of the original result but that doesn't seem to have been the case or this fraud would've been discovered already.

So for anyone who does know about how this works, why didn't this happen?


One neglected issue here is the role of government funding agencies and universities in controlling who gets funded. It's common practice for universities to encourage their own researchers to get government positions at funding agencies, where they can help shepherd grants back to their parent institutions, for example. Something like this seems to have gone on with Alzheimer's funding - some people involved with the source lab ended up directing funding at the federal level.

So, if you're a researcher who decides to publish a refutation of a fundamental claim by the leading stars in the field, you risk getting on a funding blacklist, and your grant application doesn't get approved. Hence, the safer thing to do is simply ignore the research you suspect to be fraudulent and take your own research in a different direction. There's also the concern that they don't want to discredit the entire field by exposing fraud and thereby risk Congress spiking the funding entirely, or something like that.

Incidentally, a lot of the silence on the very plausible notion that SARS-CoV-2 escaped from a lab, and that its transmissibility and virulence likely were increased in the lab as a consequence of various (well-meaning, I suppose) gain-of-function research techniques (serial passage, CRISPR engineering, etc.), appears to be due to similar concerns (i.e. researchers who even discuss the possibility might run afoul of people like Fauci and cohort who still control virology research funding).


Sadly it is the very reasonable assumption that is wrong: important findings often do not get replicated. Usually the setup and execution (even working from great notes) is long and difficult, and often involves something specialized like transgenic mice (engineered for experimenting on this specific thing). And the benefits of being the lab that successfully repeats an experiment are approaching 0, and because there are so many things that could go wrong, a failure to replicate is not often a slam-dunk repudiation.

So there is little incentive for people to do this. This should probably be fixed, maybe with some rite of passage for post-docs being assigned to replicate important works, with explicit funding from the NIH (or someone) earmarked for this as a sort of training budget.


> You'd think there'd be some similar studies done of the original result but that doesn't seem to have been the case or this fraud would've been discovered already.

As Derek Lowe says, this did happen:

> I could be wrong about this, but from this vantage point the original Lesné paper and its numerous follow-ups have largely just given people in the field something to point at when asked about the evidence for amyloid oligomers directly affecting memory. I’m not sure how many groups tried to replicate the findings, although (as just mentioned) when people did it looks like they indeed couldn’t find the 56 oligomer. And judging from the number of faked Westerns, that’s probably because it doesn’t exist in the first place. But my impression is that a lot of labs that were interested in the general idea of beta-amyloid oligomers just took the earlier papers as validation for that interest, and kept on doing their own research into the area without really jumping directly onto the 56 story itself. The bewildering nature of the amyloid-oligomer situation in live cells has given everyone plenty of opportunities for that! The expressions in the literature about the failure to find *56 (as in the Selkoe lab’s papers) did not de-validate the general idea for anyone - indeed, Selkoe’s lab has been working on amyloid oligomers the whole time and continues to do so. Just not Lesné’s oligomer.

In other words, replication of the result was attempted a few times, and no one could replicate it. But the groups working in this space were already pursuing similar paths before this paper, and the results were taken not as "target this one specific thing and you'll be golden" but rather "here's more evidence that targeting this class of thing is useful in this place."

I'll also point out that by the time you're talking dosage questions, you're generally tackling clinical trials (phase II trials are meant to establish the dosing regime), which is far downstream of work that gets published academically.


> He was originally hired by two other neuroscientists who also sell biopharma stocks short

I'm blown away by this investment strategy. What a brilliant way to monetise exposing fraud. Get your proof, short the stock, publish your proof, profit!


In finance there are a number of hedge funds who do this, like Citron Research which specializes in exposing fraud and short selling it for profit.


This incentivisation is one of the classical arguments for allowing short sellers in the market


This is an improvement over the status quo, which is that fraud exists, hurts patients, and continues to exist.


Unfortunately it's more common than you think... I remember this case in particular from some years ago:

https://www.legalreader.com/st-jude-medical-files-lawsuit-ag...


> Unfortunately it's more common

Why is calling out fraud unfortunate?

With respect to St. Jude, Muddy Waters was at least partially right [1].

[1] https://www.hipaajournal.com/fda-confirms-muddy-waters-claim...


Presumably GP is not saying that the calling out of fraud is unfortunate, but that the fraud itself is unfortunate.


Precisely, sorry if it was not clear enough. Pointing out vulnerabilities and betting on the side, effectively becoming a stakeholder, raises some concerns about your true intentions, even if it could be legal to do so.


There are several companies that do this! Hindenburg Research and Muddy Waters Research are two.


Doing well by doing good


The silver lining, I hope, is that this will highlight how important independent verification and reproduction of results is in academia. People "know" that, but funding for it is still scarce, as it's always more exciting to try to find something new than to validate a known result.


Replication is far from trivial. It's still worth pursuing, but it's easy to overlook how challenging it can be to successfully execute scientific procedures.

Labs generally specialize, as there's a long learning curve to climb before you can reliably execute even 'bedrock' molecular biology protocols like immunoassays. Small ambiguities in protocol can lead to failure, and there's always simple human error involved that can tank a result. Generally you'll have positive controls available to tell you whether a protocol was successfully executed, but there are cases where that's simply not practical.

In the end, 'failure to replicate' does not necessarily mean there was anything wrong with the original work. Positively concluding that requires a lot of additional work that could explain the discrepancy.


But OTOH, what even is the scientific value of the original paper if they cannot provide a clear protocol with a substantial chance of replication?

“Here, I did this magic trick, but I’m unable to tell you sufficient detail for how it works!”


While independent verification and reproduction is hard, I wonder if there is any requirement for researchers to at least publish their data set for statistical analysis and further research.

Also, I found it interesting that even though computer science research is usually easier to reproduce, a lot of journals and conferences do not mandate artifact evaluation; it is just considered nice to have for a submission. If we had mandatory artifact evaluation, even something that is not reusable and can only repeat the experiments in the paper, it would be much easier to verify the claims in papers and compare different approaches.


> I wonder if there is any requirement for researchers to at least publish their data set for statistical analysis and further research.

Not generally, though the tide is slowly turning in the right direction. Unfortunately many laws/policies pushing for openness and transparency in research are sidestepped with the classic "data available upon request," a.k.a. "I promise I'll share the Excel files if you email me" (they will not).


I don't understand why they can use this as an excuse. If they can share the data upon request, why can't they just publish it as well? Is that related to some legal/privacy issue?


> Is that related to some legal/privacy issue?

Possibly in some medical or social science fields, I don't know. I know there is not such an issue in chemistry and materials science. There also may be some complications for collaborations with industry, but that's kinda a different situation. For people whose career development is not strongly tied to reproducibility of their work (a.k.a. everybody) it's just another step in the overly complex process of publishing in for-profit journals. Funding agencies generally aren't going to punish people for using this excuse and the watchdogs/groups concerned with reproducibility have no teeth.

Not an excuse, but journals don't make it easy to share files, as hard as that is to believe. Some will only take PDFs for supplemental information and many have garbage UIs, stupidly small file size limits, etc. Just uploading to a repo (or tagged release) on GitHub is common these days because there is much less friction.


No one at any point in the funding cycle benefits from asking questions.

In fact, for most of the people in a position to ask the kinds of questions that need to be asked, they risk their entire career when they do so.

The entire industry has issues.


Independent verification is why scientific fraud is so dangerous.

On the one hand you'll eventually get caught, but only after potentially millions has been spent on said catching.


Isn't that exactly what has NOT happened here? Some combination of:

* People not checking

* People checking and joining the fraud

* People checking, not joining the fraud and then getting their work suppressed.


How did we rule out an underlying pathology that triggers toxic buildup of these substances, either as a byproduct or even as a defense mechanism (perhaps to isolate / destroy neurons that are infected)?

It would seem that if cognitive decline were caused by some neurological disease and the body were fighting that, it might cause all kinds of dangerous looking things to happen to neurons. And, if you do dangerous things to neurons, perhaps you also get cognitive decline.


We didn't rule it out. I worked in an Alzheimer's lab about 15 years ago. In our outside voice, for grant committees, publications, etc., we acknowledged the amyloid hypothesis, but internally, at almost every other group meeting, the postdocs and grad students disclaimed that we still couldn't rule out "common factor X". It was also morbid humor; we would say "oh, maybe the amyloid hypothesis is wrong" whenever something strange would come up. We were only about 10% joking. It's nice to see that skepticism has finally bubbled up across the field.

I like to think we were one of the more honest groups in the field -- I have post-publication corrections on my papers, the lab issued an 11-page retraction (with cautions about what not to do experimentally) when it turned out one of our lab's major findings was an artifact [0].

It's worth noting WHY the amyloid hypothesis sticks around: the prion hypothesis is extremely well supported by the data (even though I think Prusiner is a shady dude), so it's not a stretch to think that other amyloid diseases work in a similar way [1].

[0] the grad student who pushed through this paper is a huge scientific hero.

[1] there are some subtleties though.


I guess I should post a positive example of what the field (should be/should have been) doing: https://onlinelibrary.wiley.com/doi/full/10.1002/pro.2339

Ms. Murray (the aforementioned hero) is doing a startup these days, so if you are, or know, any VCs who talk a good game about funding science, see if they'll throw some money her way.


Definitely hasn't been ruled out. Researchers have hypothesized that the amyloid plaques are a protective mechanism; unfortunately, we still don't know why they form.

One hot area of research is currently focused on herpesviruses such as HSV1. Here [1] is a paper summarizing some of that research.

The thing about viruses is that they are weird things with unexpected behaviours. For example, we are still learning about Epstein-Barr (EBV, which is another type of herpesvirus); only this year did researchers come to suspect that EBV could be responsible for MS.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8234998/


It hasn’t been ruled out; it just hasn’t been identified either.

Some research is ongoing related to viral infection from some common viral species for instance.

The problem is that we have non-destructive and reliable way to image or sample the brain at the resolutions necessary to see what is going on right now.

If you can’t see or sense what’s happening until it’s too late, it’s really hard to figure out what is going on.

If doing post-mortem brain biopsies (the most common method) on Alzheimer’s patients and their plaques is the equivalent of picking through the wreckage of a lost war, it’s pretty hard to figure out who was the enemy and who was collateral damage.


There are some, but research into viral origins and other competing ideas seems to be marginalized and underfunded. I don’t have first-hand insight into it, but I read about it here https://www.statnews.com/2019/06/25/alzheimers-cabal-thwarte... and elsewhere.


> we have non-destructive

I think you meant to say we don't have non-destructive?


Thank you, yes. I can’t edit my comment anymore. :(


It hasn't been ruled out, but as Derek Lowe mentions, there is a good amount of genetic evidence that increased cleaving of APP and thus more amyloid-beta leads to earlier, faster and more severe Alzheimer's.

And certain mutations in the amyloid-beta-producing APP-cleavage machinery lead to familial dominant Alzheimer's, where carriers will almost certainly develop early Alzheimer's.

So there are not many ways around APP/amyloid-beta having some role in Alzheimer's pathology.


Re solutions: now do Project Veritas.

If you're a whistleblower and you're not getting shunned by "respectable" people, you're probably doing something wrong.


Worth rereading: "Time to assume that health research is fraudulent until proven otherwise?" from no less a source than the British Medical Journal [0]. Another example: the 5-HTTLPR studies [1] in psychiatric research (over 450, "proving" the effects of the gene on basically every psychological malady), stretching all the way back to 1996.

[0]: https://blogs.bmj.com/bmj/2021/07/05/time-to-assume-that-hea...

[1]: https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-rev...


Unfortunately, this has become a rational precaution. Capitalism has thoroughly undermined humanity’s greatest tool.


Well, the problem is one of misaligned incentives rather than capitalism per se. From the BMJ article:

"Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies. Funders, universities, and other research institutions similarly have incentives to fund and publish studies and disincentives to make a fuss about fraudulent research they may have funded or had undertaken in their institution—perhaps by one of their star researchers."

Capitalism is certainly the economic context we operate in, but I'd hardly say it is the root cause of most of the above, which could apply equally in other economic funding models.


I think the main issue is that papers and studies with negative results are almost never published. There should be some procedure in place that ensures papers will always be published no matter what the result is. This decision shouldn't be in the hands of the researchers, and even less so in the hands of the investors.

Direct investor oversight of research is very concerning. With the current method we can only blindly trust that scientists will be willing to sacrifice their careers to publish research that could potentially bury their investor. This makes absolutely no sense and will almost always end in a conflict of interest.

I don't know the answer, but the current status quo is deeply flawed and needs to change quickly.


This sounds just like cholesterol to me. Yes, beta amyloids are correlated with Alzheimer’s and cholesterol is correlated with cardiac issues. But correlation is not causation. Why is there seemingly no mainstream discussion that perhaps these correlations are simply the markers of an underlying biological function that has become dysregulated and leads to the presence of higher amyloid/cholesterol/etc.? When we try to treat cancer, do we focus on the symptoms, or do we try to obliterate the cancer? As far as I can tell, we aren’t looking for the “cancer”; we are trying to treat symptoms.


> Why is there seemingly no mainstream discussion that perhaps these correlations are simply the markers of an underlying biological function that has become dysregulated and leads to the presence of higher amyloid

That's a discussion point pretty much every time the amyloid hypothesis is mentioned in the scientific literature.

I'm not sure where you're going to get information about the pathophysiology of Alzheimer's disease, but it doesn't really strike me as a "mainstream" topic where they're going to discuss the finer details or debates.


> That's a discussion point pretty much every time the amyloid hypothesis is mentioned in the scientific literature

I don't feel like that's true. Back when I was a grad student doing this research, it was striking to me how little of that was in the literature considering how much we said it internally.


This is why censoring people on social media platforms when they disagree with "the experts" is so dangerous.

Science is not about listening to experts. It's about the data, and about falsifiable theories.


From the article:

<< Now Cassava is a story of their own, and I have frankly been steering clear of it, despite some requests. To me, it’s an excellent example of a biotech stock with a passionate (and often flat-out irrational) fan club.

I wonder if this is part of the issue. It is now something of a social club, where allegiance is to the clan. I initially wondered if it is a function of the internet, but then I remembered that various cliques existed well before that. The internet just put a spotlight on it.

The only real question is whether the current club can be reformed.


When you have problems like this, they must be viewed from a systemic perspective. The truth is that our modern institutions of science are fraught with perverse incentives: people battling for funding, feeling pressure to publish, and going to war over a few scarce tenured positions. I don't know how we solve this, but I think reforming academia with some well-placed regulation and an increase in publicly funded research would be a place to start.


Two reasons why this hypothesis may not be true:

1) Universal failure of plaque-reduction drug trials.

2) Numerous post-mortem counter-examples: dementia w/o plaques and plaques without dementia.


Where is the "trust science" crowd when stuff like this goes down? It's strangely quiet in here. Shouldn't they be here telling us how this is a mistake because the editorial and peer-review process prevents this? Something about how science brought us all of technology? Oh, that was actually capitalism? Oh, great science, where great minds get paid by industry to submit their doctored conclusions to a journal, attached with a small bribe.


I've never understood "trust science" as "blindly trust scientists". I've always understood it as "trust the scientific process", which is sensible in my opinion, as the process is designed to determine truths.

When people falsify results, what they are doing isn't really a product of the scientific process, they are just committing fraud.


Particularly for heavily politicized topics, "trust the science" usually means something like "trust that the people that the media has designated as scientific authorities always follow the scientific process, understand statistics, and are objective searchers for truth; moreover, don't try to substitute your expertise for theirs."

The scientific community can't simply no-true-scotsman every claim pushed under the banner of Science that turns out to be false or fraudulent, if it wants the true claims to be considered seriously.


So they're still here, and they're brain dead.


Seriously? Science is a process, not a result, and science is literally revealing the fraud here.


We'll have to wait until someone breaks it down into a rhyming Harry Potter reference or something.


Why not read the original article, which actually answers this question? The way we found out about the faked data was also through science.

So, is this a feel-good comment to "stick it to the science" when you are on a tech forum? Or did you find this mistake first and feel vindicated that your position is correct?

Just crowing about capitalism means nothing. After all, the incentive to cheat also came from the same capitalism.


Ask HN: What do you think about a law that criminalizes faking and doctoring research data? Clearly we need greater punitive consequences for this type of behavior, which is becoming an epidemic in research.


Effectively, it's the same reason the death penalty does not deter crime.

The threat of career-ending shame & humiliation is remote and unlikely. As another poster said much further up-thread, there is more fraud going on that doesn't get detected.

Criminal penalties in this realm are a bit draconian. More punishment and punitiveness is not the answer.


That makes it even harder to call out bad science. You hate how your colleague cut corners but do you want to testify against them in court in your free time and have them sent to jail?

I'd go the opposite way and give job security to anyone who is doing solid science, to remove some of the weird incentives. There is upside to being amazing and winning prizes or getting better positions, but there is also space for people to "just" reproduce, or generally do important but non-glamorous work.


Wasn't HSV also implicated in Alzheimer's? Seeing as most of the world is infected with cold sores, I wonder if we should question this research too?


In this age of politically motivated scientific disinformation, the one thing we don't need is fraudulent science. We need to be able to trust which drugs and vaccines work, and which don't. We need to be able to trust the predictions about global warming and its societal impact so we can chart the optimum path with as little disruption and suffering as possible. In short, we cannot afford not to be rational, not to ground our decisions in hard facts and solid models.

Scientists like Lesné should be detected much earlier in their careers.


The situation in academia is similar to child abuse in the churches. These people are often detected early, and get shuffled around.

A lot of people know about this problem, have for a long time. A lot of people continue to do nothing about it.


Just another reminder that scientific consensus should not be blindly accepted with religious fervour.


That the cycle of science works slowly. Often too slowly to be applicable within your lifespan.


Is there a new leading theory?


So the University of Minnesota faked Alzheimer's research and injected Linux kernel bugs on purpose? [1]

Not feeling so proud as an alumnus today.

[1]: https://news.ycombinator.com/item?id=26887670


It probably means that the Bredesen protocol is the cure for Alzheimer's. He has a literally proven track record of success with hundreds of patients.



