The legacy of lies in Alzheimer's science (nytimes.com)
307 points by apsec112 14 days ago | 210 comments




It is hard to describe how painful it was to read about the overwhelming evidence of study manipulation by Masliah and others a few months ago. My father-in-law was diagnosed with this terrible disease in his late fifties, but before that he went through several misdiagnoses, including depression. He lost his job, and I am now convinced it was because he was in the early stages of the disease, which affected his memory and ability to communicate. I later learned this is under study because it is apparently a pattern: https://www.nytimes.com/2024/05/31/business/economy/alzheime... This triggered all kinds of trouble, as you might imagine.

After that, he went through a violent period, and within a few months he could no longer speak or eat on his own. He now wears diapers, and we had to hire a professional caregiver to help with his daily routines. The impact on our family has been dramatic; we are not a large family, so we had to spend a significant amount of resources to help his wife, who is his main caregiver. We have since received some assistance from the public healthcare system, but it took time, and the support did not keep pace with the rapid progression of his symptoms.

I have seen relatives pass away from other causes, but this is by far one of the cruelest ways to die. After a few years of dealing with this disease, I cannot fathom any justification - good or bad - for the massive deception orchestrated, apparently for the sake of Masliah and others' careers. I hope they are held accountable and brought to trial soon for the damage they have caused to society and science.


If you can think of things that you or your family might have done to start managing the consequences earlier, please write them down and share.

Early-onset Alzheimer's is a slow-burn beast, but you can't blame anyone for not seeing what you see, or even for denying the evidence.

My pops always thought his mom was just fucking with him when she showed symptoms, and then she (grandma) laughed about it a few minutes later ...

When I witness others or hear them talk about their parents, I'm quite often reminded of my grandma and wonder how to manage these early symptoms and set up frameworks and strategies to reduce the subliminal and subtextual reinforcement of negative reactions to triggers, both in the caretaker and the Alzheimer's "patient"/loved one ...


Professional advice and care are the best options, but I think I can share some of our personal experience for what it's worth. I would say a certain level of pragmatism is important, without being insensitive to the person with the disease. For example, one thing I wish we had done earlier was to have my father-in-law declared "legally incapacitated". I am not sure of the proper term in English, but here in Spain it means that they cannot legally bind themselves to anything without the consent of a tutor/guardian. This is a painful process because it takes time (judges, doctors, etc. must assess the case), and it acts as a constant reminder to the person with the disease of what is coming, making them feel useless.

Another key aspect is focusing from the beginning on what the healthcare system can provide for your relative as the disease progresses. These bureaucratic processes tend to be slow in my country, and since dementia progresses at a different rate for each individual, it's best to start looking into options early. Symptoms can advance rapidly in younger individuals.

Equally important is being sensitive and getting ready for what is to come. I don't think you can be optimistic, so instead we focused on appreciating the time together. I started having more personal conversations with my father-in-law; we got closer, and somehow that bond remains. We learned to be very patient because he went through a long period of aphasia: if he got stuck on certain phrases, we just gave him a bit of a push by completing the words he was missing. We also took long walks in nature. Being surrounded by a peaceful environment is important.

As the disease advanced, he started to feel very confused. He sometimes couldn't recognize us, and that made him angry. There were usually small hints before an outburst was triggered; again, being patient and trying to calm him down was important. Because he was relatively young, he was physically strong, which was challenging but manageable. I think he needed to feel safe, so focusing on that was a good strategy. Now he is no longer able to communicate. Occasionally he says a few words, and sometimes it can even be funny; it is fascinating how the brain still responds to humor even with dementia. When I sense he is nervous, I gently touch his back, and I can feel how much it reassures him.

Support groups to share these experiences are important; take all the help you can get. When the disease reaches its final stages, seek the help of professionals. Their support can help the family maintain a sense of normalcy and stay functional.

By the way, I just realized that I said it’s hard to be optimistic, but one thing I can truly say is that I value life so much more now. Every day that I wake up and feel present is a gift.


THANK YOU. Pops died of Parkinson's & Mum of dementia.

My sole (selfish) aim on this matter: being 500% certain my son is not burdened by my own demise. Brought a GOOD Lad into a world I struggle to make sense of...

Consult:

https://www.youtube.com/watch?v=8QxIIz1yEsA

Thanks also for reinforcing my silly notion that, with few exceptions, humanity is by nature GOOD, aside from "Under Duress" ...

This is the sort of data (shoved under the rug) which profit-seeking 'studies' overlook. Inclusion could affect the bottom line, ya see.

Fossils (old folk) retired, what tax revenue do we generate?

It is terrifyin' being my age. A "dementia" diagnosis is very profitable to INSURANCE, as in Medicare (EN-US), see?

Invitation for curious minds

Please AUDIT "treatment guidelines". Centers for Medicare & Medicaid Services. FOLLOW THE MONEY

I close with HAY'LL NAW, n0t on our Watch. Should only myself stand alone, no problem at all... WE ARE NEVER ALONE



Make sure you always have some energy left when taking care of him. Don't burn yourself out. Like they say to new parents - regularly get a babysitter no matter what.

My pops died from Lewy body dementia about 2 months ago. It gets much rougher toward the end. The more carers you have the better, it's almost impossible by yourself. You'll never forget the period you're going through now. Make sure you have no regrets when it's all over.


I'm so sorry to hear about what you are going through.

one small piece of advice from someone who had to do something similar with close family: Look into death doulas. Though your FiL is not dying, these people have a lot of resources and experience and may be able to assist you as life is so chaotic.

Again, so sorry to hear about your situation and what you're going through.


Counteradvice. Avoid seeking help from any kind of denominational spirit guides. Illusions do not relieve grief.

Oh no.

No, death doulas aren't woo woo, at least the ones I've used.

They're more: here's the best brand of diapers, here's how to move a 200 lb man about in a bed to clean him, here are good local grief support groups, etc. In my experience, the paid home aides aren't really good at those things.


I work in neurotech and sleep; our focus is slow-wave enhancement, and 3 recent papers have looked at its impact in Alzheimer's.

I'm not a scientist or expert, but we do speak with experts in the field.

What I've gathered from these discussions is that Alzheimer's is likely not a single disease but multiple diseases which are currently being lumped under one label.

The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is, which is why we have the Amyloid Hypothesis, the Diabetes Type 3 hypothesis, etc etc.

I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypotheses, but at the same time, take action against those who falsify research.

Here's some of the recent research in sleep and Alzheimer's

1) Feasibility study with a surprisingly positive result - take with a grain of salt - https://pubmed.ncbi.nlm.nih.gov/37593850/

2) Stimulation in older adults (non-AD) shows positive amyloid response with corresponding improvement in memory - https://pmc.ncbi.nlm.nih.gov/articles/PMC10758173/


> The way Alzheimer's is diagnosed is by ruling out other forms of dementia, or other diseases. There is not a direct test for Alzheimer's, which makes sense, because we don't really know what it is,

Correction here: while other tests are sometimes given to rule out additional factors, there is an authoritative, direct test for Alzheimer's: clinically detectable cognitive impairment in combination with amyloid and tau pathology (as seen in cerebrospinal fluid or PET scan). This amyloid-tau copathology is basically definitional to the disease, even if there are other hypotheses as to its cause.


> likely multiple diseases which are currently being lumped into the one label

You've also just described "cancer" as it was, what, 30 years ago?

We knew the symptoms, and we knew some rough classification based on where it first appeared. It took readily accessible diagnostic scans and genetic typing to really make progress.

And the brain is a much harder imaging target.


This - it's more like an end-stage failure mode of a self-regulating, dynamic system which has drifted into dysfunction.

A case of "lots of things have to work perfectly, or near enough, or a brain drifts into this state". This seems to be very much the case with cancers, essentially unless everything regulates properly, the system would on its own devolve into cancer.

Like a gyroscope which will only spin if balanced, only this gyroscope has 10^LARGE moving parts.

And when you consider the ageing process, you're talking about multiple systems operating at 50%, 75%, 85% effectiveness, all of which interact with one another, so it's inevitable that self-regulating mechanisms start to enter a failure cascade.
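As a crude illustration - a toy model with made-up system names and numbers, not physiology - if several interdependent systems have each degraded somewhat, their compound effectiveness falls off fast:

    # Toy model: compound effectiveness of interdependent systems,
    # assuming (unrealistically) that effectiveness simply multiplies.
    systems = {"circulatory": 0.50, "glymphatic": 0.75, "immune": 0.85}
    combined = 1.0
    for name, effectiveness in systems.items():
        combined *= effectiveness
    print(f"combined effectiveness: {combined:.0%}")  # ~32%, below any single system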

In terms of interventions, a lot of the time it seems like the best fix is to look at which of the critical systems is most deteriorated, and try to bring that one up. So, for example, diet and exercise can restore a degraded circulatory system by a meaningful amount, but you can be an Ironman triathlete and still develop Alzheimer's in your 60s. If we can find reliable ways to do the same for sleep, that will be worthwhile, and likely there are other systems where we might do the same - immune, liver, kidneys and so on.


It sounds like corrosion in metals. There are many different damage mechanisms and protective effects, but at the end of the day you see weakened, oxidised metal.


Yes, except that metal isn't a dynamic, self-regulating structure. The body is in a constant state of actively fighting against its own decay.


Passivation is a similar kind of dynamic balance: the formation of a protective oxide layer. That's how stainless steel and aluminium don't corrode despite being pretty reactive. See also galvanized steel, which is also pretty dynamic, with all the electrochemistry, transfer processes and kinetics going on.

This is spot-on. A corollary is that the best way to fight these diseases is by doing "maintenance" early on. But we currently don't have good enough models of all those multiple systems, and a functional, "good" state may vary significantly from individual to individual. Yet, those computational models could, in due time, save lives.

Be that as it may, I don't believe we can create those models manually, one by one. It's just too complicated and costly, and, as this article shows, prone to turf politics. The good news is that we are seeing great advances in molecular simulation, and there are gargantuan investments coming to develop the world's computational capabilities. Furthermore, even the current crop of AI tools is good enough to locate information and make it accessible to non-experts. So, if you are looking for a hobby in front of a computer, consider giving these computational problems some attention.


Yep, I neglected to consider that the system isn't just self-regulating but also self-calibrating.

Even if we can't _fix_ all those systems, getting reliable predictive/status markers would be a good start. Just being able to scan for ten or twenty blood markers and work out which aspect/s of health someone needs to focus their efforts on improving. It doesn't feel like we're a million miles from being able to do that now.

As to the multiple systems - we probably want to add hormonal and short/medium term regulatory systems to that list (being able to activate even a 40yr old's motivational/dopamine response in a 75yr old would be a significant step). And take into account that there are designed-in trade-offs (slower cellular repair in ageing bodies is very possibly an evolutionary adaptation to deal with higher cancer risk in cells with more mutation damage, for example).


You just described the exact end goal of my overall research program, if we can keep the stroke, AD, and Down syndrome + AD cohorts running. Associating blood biomarkers differentially with individual disease processes and comorbid health factors is the holy grail.

Perverse professional norms and plenty of miscreants have made it a much harder road for other dementia researchers, but it's worth keeping on, and there are so many avenues that haven't been pursued.


The sleep bit is what we are working on. We increase slow-wave delta power, increasing the effectiveness of the glymphatic system to flush metabolic waste from the brain.

There is more than a decade of research into this process. The studies I pointed at earlier focused on older adults, who see a larger improvement than a younger population, but there are lots of studies in university-aged subjects due to the nature of research.

We have links to more of the research papers on our website https://affectablesleep.com/research


So I absolutely think you're on the right track, but also you've got a product to sell.

Certainly piqued my interest at least, depending on price point, how long it takes to demonstrate effectiveness and so on.


If we're talking about effectiveness in AD, there are LOTS more studies to go, so I don't expect to see a result there for at least 5 years.

However, effectiveness is subjectively felt on day one (depending on your current sleep habits, it's more effective in people who are somewhat sleep deprived).

We link to a bunch of the existing research on our website, and more than just "sleep time" we can track effectiveness to biomarkers such as HRV.


So in the clinic - what we usually do is a detailed neuropsychology battery. Also a patient history, but the neuropsych does provide some quantitative measures.

If there's clear amnestic memory loss, verbal fluency decline and visuospatial processing decline, it's more probably Alzheimer's; vs. if there are other features in terms of frontal/dysexecutive functioning, behavioral changes, etc., then you think FTD, or possibly LBD if there are reports of early visual hallucinations.

Amyloid PETs are getting a bit better so there's that. Amyloid-negative PETs w/ amnestic memory loss are being lumped under this new LATE (Limbic-Predominant Age-Related TDP-43 Encephalopathy) but that definition always felt a bit...handwavey to me.

11-32 adults is a good pilot paper but you have to raise funding for Phase II and III trials.


Awesome clarification! Thank you.

We're not the researchers; we are developing the technology to support research. They are somewhat hamstrung by the currently available technology. There are other benefits to slow-wave enhancement, non-clinical and beyond dementia. It has been suspected this could play a role in the prevention of AD, and we strongly suspected (and still do, a bit) that we could have a direct impact in treating AD.

Having said that, the paper that looked at people with AD saw improvement in sleep, so even if we can just help them subjectively feel less exhausted, that could be a quality-of-life benefit, even if non-clinical.

I completely agree that proving effectiveness in treatment is a long road, but we're going the non-clinical-use route first. If the research works out, we can look into clinical use at a later date.


I want to throw something by you: what if dementia-like diseases can be linked to sleep disorders? Several recent papers have identified CSF flow through the brain during sleep (the glymphatic system), which appears to be "cleaning" the brain of waste, like the plaques seen in Alzheimer's. Specifically, what if CSF production is being disrupted by, say, an imbalance of CSF precursors (like electrolytes) caused by something like a hormonal imbalance or low-grade kidney issues that don't get picked up by normal tests? That would short-circuit the glymphatic process by not allowing the body to create as much CSF as it needs. Then the brain won't get as much cleaning as it should.

EDIT: sorry, I did not see your later reply saying that you WERE focusing on sleep disruptions! my bad. And, interesting…


It is actually and very unfortunately both.

One of the researchers we are connected to has posited that increased amyloid build-up decreases the effectiveness of the glymphatic system, which is why we see lower delta power in older adults, which in turn further reduces the ability to remove the build-up. It's a vicious cycle.

We also see this with cortisol levels in under-slept individuals, and it is exacerbated in people with AD.

In response to a lack of sleep, the body increases cortisol, that increase in cortisol introduces challenges in getting good sleep, which increases cortisol.

For most of us this isn't an issue, as we get on top of our sleep within a fairly short period and sort it out, but people with AD have a cortisol dysregulation, and potentially a circadian issue as well.

One hypothesis for why the AD paper I linked to earlier had such a huge impact on AD sufferers is that PTAS has been shown to produce a 15% decrease in early-night cortisol levels.

The cortisol levels were not checked in the AD subjects because it was a feasibility study, but I'll be speaking with the researchers next week, and hopefully that is the plan for the next stage research.


It's also hard to distinguish whether sleep disruptions cause AD or the other way around.

I think it's fair to say that early sleep disruptions are related to the onset of AD, and that AD itself then exacerbates these issues; I commented further above.

> I fear the baby is being thrown out with the bathwater here, and we need to be very careful not to vilify the Amyloid Hypotheses, but at the same time, take action against those who falsify research.

The problem is that because many of these studies build upon each other, many other studies are tainted. A great review would be necessary to sort this mess out - but with US research institutions in complete disarray, we are years away from such progress.


So as a lay person with an active interest in the topic, my reading of 2) is that in the treatment group, some people showed improved sleep physiology AND improved memory, and this was attributed to the treatment, but the group as a whole did not.

If some improve, and the group average score remains unchanged, does that mean some got worse, or is it a case of the group average not being statistically significant?

What this suggests to me is that there is _surely_ a link between sleep quality and memory performance, but that whether or not the proposed treatment makes any difference - that is, whether the treatment caused the sleep improvement - is doubtful. At best it seems to be "it works moderately well for some people, and not at all for others". Am I reading it correctly?
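To make my own question concrete, here's a toy illustration (all numbers invented) of the two readings:

    # Toy data: two ways a subgroup can improve while the group mean
    # looks unimpressive (every number here is invented).
    responders = [2, 3, 2]      # score change in people who improved
    decliners = [-2, -3, -2]    # if others worsen, the mean is exactly 0
    non_movers = [0, 0, 0]      # if others are flat, the mean is small

    flat_mean = sum(responders + decliners) / 6    # 0.0  -> "group unchanged"
    small_mean = sum(responders + non_movers) / 6  # ~1.17 -> possibly not significant
    print(flat_mean, round(small_mean, 2))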


You are reading that correctly, however, it is likely a limitation of the technology they used in the study.

Strangely, they didn't mention how they decided on stimulation volume; however, most studies will either set a fixed volume for the whole study, or measure the user's hearing while they are awake and then set a fixed volume for that person.

Our technology (and we're not the first to do this) adapts the volume based on brain response during sleep in real-time.

When you don't do this, you risk having the volume either so low that it doesn't evoke a response, or so high that you decrease sleep depth and don't get the correct response.

Therefore, anyone who did not get the appropriate volume would end up as a non-responder.

It is also more challenging for previously used algorithms to detect a slow-wave in older adults because the delta power is lower, so some of these participants may have had limited stimulation opportunities.
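To make the closed-loop idea concrete, here is a toy sketch (hypothetical code, not our actual algorithm - the signal names and thresholds are invented):

    def adapt_volume(volume, evoked_delta, sleep_depth,
                     target_delta=1.0, min_depth=0.6, step=0.05):
        # One control step per stimulation epoch; all thresholds are placeholders.
        if sleep_depth < min_depth:
            volume -= step  # too loud: sleep is lightening, back off
        elif evoked_delta < target_delta:
            volume += step  # too quiet: no evoked slow-wave response yet
        return min(1.0, max(0.0, volume))  # clamp to a normalized range

A fixed-volume protocol is the degenerate case with step = 0, which is roughly what the study's non-responders effectively got.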

We've developed methods which improve on the state of the art, but we have not validated those in a study yet.


I feel that sleep will ultimately be the answer and the cure. Personal anecdote - one of my uncles is affected by this disease. One day, my aunt could not wake him up in the morning as hard as she tried. He would mumble and try to go back to sleep. When he finally awoke after 10 minutes of my aunt basically yelling at and slapping him, he was, in her words - "back to the man I used to know". Completely lucid and able to keep up with conversation, remembering everything etc. Two days later he was back to his old confused baseline.


Certainly pop-sci but great on this topic:

https://www.waterstones.com/book/why-we-sleep/matthew-walker...

I'm not as optimistic as you that sleep will be a cure, but I'd be very surprised indeed if sleep quality weren't preventive. (Proving this might be more difficult, though - correlation/causation).

It's almost an argument by process of elimination - why else would literally every living thing with a brain need to spend so much of its time asleep? How is it that we still don't fully know what sleep (as distinct from either rest or unconsciousness) is actually for?

Multiple studies show that night shift work is bad for the brain - and for those with a habit of working nights (probably quite a few of us on HN, from time to time), if a recreational drug made your brain feel as bad as an all-nighter can, that would surely be one you'd put in the "treat with great caution" category, no?

No doubt the glymphatic system (a central part of higher animal physiology which was only discovered in the last 25 years) has a role to play. It may be that, as with cancer, once the degenerative process gets beyond a certain point, it's hard to stop - but I'm hopeful that science will unlock a good deal of understanding around prevention over the next decade or so - even if that's not much more than an approach to sleep hygiene analogous to "eat your 5 fruit and veg a day, don't have too much alcohol or HFCS, and make sure to do a couple of sessions of cardio and a few weights every week".


> Alzheimer's is likely not a single disease but likely multiple diseases which are currently being lumped into the one label

Back at uni 25 years ago they told me the same about schizophrenia and I'm sure it's still valid.


Sometimes you do need to throw the baby out with the bathwater... when you have sufficient evidence that the baby is dead, lol. The amyloid hypothesis sucks ass; we have spent decades and hundreds of billions trying to make it work, but we need to stop trying to make fetch happen!

I was once very skeptical of the amyloid hypothesis, but I think the genetic evidence is very convincing. Most (all?) cases of familial early onset AD are caused by mutations that increase amyloid. And there is a rare variant of the amyloid gene that is completely protective against AD. Plus the efficacy of Leqembi.

A bit like how all tumors were lumped under the same term even though they differed quite a lot biologically at the tissue level.

Science needs an intervention similar to what the CRM process (https://en.wikipedia.org/wiki/Crew_resource_management) did to tamp down cowboy pilots flying their planes into the sides of mountains because they wouldn't listen to their copilots who were too timid to speak up.

> ...on the evening of Dec 28, 1978, they experienced a landing gear abnormality. The captain decided to enter a holding pattern so they could troubleshoot the problem. The captain focused on the landing gear problem for an hour, ignoring repeated hints from the first officer and the flight engineer about their dwindling fuel supply, and only realized the situation when the engines began flaming out. The aircraft crash-landed in a suburb of Portland, Oregon, over six miles (10 km) short of the runway.

It has been applied to other fields:

> Elements of CRM have been applied in US healthcare since the late 1990s, specifically in infection prevention. For example, the "central line bundle" of best practices recommends using a checklist when inserting a central venous catheter. The observer checking off the checklist is usually lower-ranking than the person inserting the catheter. The observer is encouraged to communicate when elements of the bundle are not executed; for example if a breach in sterility has occurred.

Maybe not this system exactly, but a new way of doing science needs to be found.

Journals, scientists, funding sources, universities and research institutions are locked in a game that encourages data hiding, publish or perish incentives, and non-reproducible results.


The current system relies on the marketplace of ideas - i.e., if you publish rubbish, a competitor lab will call you out. It's not the same as the two people in an aircraft cockpit - in the research world, that plane crashing is all part of the market adjustment: weeding out bad pilots/academics.

However it doesn't work all the time for the same reasons that markets don't work all the time - the tendency for people to choose to create cosy cartels to avoid that harsh competition.

In academia this is created around grants, either directly (are you inside the circle?) or indirectly - "the idea obviously won't work, as the 'true' cause is X".

Not sure you can fully avoid this - but I'm sure there might be ways to improve it around the edges.


> The current system lies on the market of ideas - ie if you publish rubbish a competitor lab will call you out.

Does not happen in practice. Unless you're driven by spite, fanaticism about rigor, or just hate their guts, there is zero incentive to call out someone's work. Note that very little of what is published is obvious nonsense. But a lot has issues like "these energy measurements are ten times lower than what I can get, how on earth did they get that?" Maybe they couldn't, or maybe you misunderstood and need to be more careful when replicating? Are you going to spend months verifying that some measurements in a five-year-old paper are implausible, or do you have better things to do?


Sure - such direct contradiction is rare - "call out" was the wrong phrase - that mostly only happens when people try to replicate extraordinary claims.

Much more common is that another paper is published which has a different conclusion in the particular area of science, and which may or may not reference the original paper - i.e., the wrong stuff gets buried over time by the weight of others' findings.

You could say that part of the problem is correction is often incremental.

In the end, the manipulation by Masliah et al. came out - science tends to be incremental rather than all big breakthroughs, and I'd say any system will struggle to deal with bad-faith actors.

In terms of bad-faith actors, you have two approaches: look for better ways to detect them, and look at the properties of the system that perhaps create perverse incentives - but I always think it's a bad idea to focus too much on the bad actors; you risk creating more work for those who operate in good faith.


How is that correction mechanism supposed to work though? Do you mean the peer review process?

Friends in big labs tell me they often find issues with competitor labs' papers, not necessarily nefarious, but like "ah, no, they missed thing x here, so their conclusion is incorrect"... but the effect of that is just that they discard the paper in question.

In other words: the labs I’m aware of filter papers themselves on the “inbound” path in journal clubs, creating a vetted stream of papers they trust or find interesting for themselves.. but that doesn’t provide any immediate signal to anyone else about the quality of the papers


> How is that correction mechanism supposed to work though? Do you mean the peer review process?

No. I meant somebody else publishes the opposite.

One of the things you learn if you are a world expert in a tiny area (a PhD student) is that half the papers published in your area are wrong or misleading in some way (not necessarily knowingly - they might just not know about some niche problem with the experimental technique they used).

I agree peer review is far from perfect, and there is a problem in that a wrong paper is still a paper in your publication stats, but in the end you'd hope the truth will out.

People got all excited about cold fusion - then cold reality set in - I don't think the initial excitement about it was a bad thing - sometimes it takes other people to help you understand how you've fooled yourself.


I expressed the same idea here not too long ago - the value of any one individual paper is exactly 0.0 - and was downvoted for it, but I believe this is almost the second thing that you learn after you publish, and it is what seems to confuse the "masses" the most.

You (as a mortal human being) are not going to be able to extract any knowledge whatsoever from an academic article. They are _only_ of value to (a) the authors, and (b) people/entities who have the means to reproduce/validate/disprove the results.

The system fails when people who can't really verify use the results presented. Which happens frequently... (e.g. the news)


I'm in academia, and I think it has many good points.

The number one issue in my mind is that competitor labs don't call you out. It's extremely unusual for people to say, publicly, "that research was bad". People only get called out in the event of the most extreme misconduct, rather than for just shoddy work.


Yeah I don't think CRM is the correct thing in this case... I just think that there needs to be some new set of incentives put in place such that the culture reinforces the outcomes you want.

There actually are checklists you have to fill out when publishing a paper. You have to certify that you provided all relevant statistics, have not doctored any of your images, have provided all relevant code and data presented in the paper, etc. For every paper I have ever published, every last item on these checklists was enforced rigorously by the journal. Despite this, I routinely see papers from "high-profile" researchers that obviously violate these checklists (e.g., no data released, and not even a statement explaining why data was withheld), so it seems they are not universally enforced. (And this includes papers published in the same journals around the same time, so they definitely had to fill out the same checklist as I did.)

Not to mention that scientists spend a crazy amount of time writing grant proposals instead of doing science. Imagine if programmers spent 40% of their time writing documents asking for money to write code. Madness.


Project managers and consultants do actually write those documents/specifications justifying the work before the programmers get to do it.

Indeed. You do need some idea of what you are going to do before being funded.

The tricky bit is that in research - and this is a bit like the act of programming - you often discover important stuff in the process of doing, and the more innovative the area, the more likely this is to happen.

Big labs deal with this by having enough money to self-fund prospective work, or to support things for extra time - the real problem is that new researchers, who often have the new ideas, are the most constrained.


Kinda making my point :P

If your org does this, that's a problem.


No it's not a problem -- it's necessary.

If you work at a large company, it could consider thousands of different new major features or new products. But it only has the budget to pay for 50 per year.

So obviously there's a whole process of presentations, approvals, refinement, prototypes, and whatnot to ensure that only the best ideas actually make it to the stage where a programmer is working on it.

Same thing with a startup, but it's the founders spending months and months trying to convince VC's to invest more, using data and presentations and whatnot.

It's not a problem -- it's the foundation of any organization that spends money and wants to try new things.


How else would it work? The onus needs to be on someone to make sure we are doing worthwhile things. Like anything else in life, you need to prove you deserve the money before you get it. Often that means you need to refine your ideas and pitches to match what the world thinks it needs. Then once you get a track record it lowers your risk profile and money comes more easily.

Sounds sensible, but the major unasked question it avoids is: was the current funding and organization structure of science in place when the past scientific achievements were achieved?

The impression I get from anecdotes and remarks is that pre-1990s, university departments were the major scientific social institution, providing the organization where science was done, with a feedback cycle measured in careers. Faculty members would socialize and collaborate or compete with other members. Most scientific norms were social, which was possible because the stakes were low (measured in citations, influence and prestige only).

It is quite unlike the current system centered on research groups formed around PIs: a machine optimized for gathering temporary funding for non-tenured staff so that they can produce publications and 'network', using all that to gather more funding before the previous grant runs out. No wonder social norms like "don't falsify evidence; publish when you have true and correct results; write and publish your true opinions; don't participate in citation-laundering circles" can't last. The possibility of failure is much more frequent (every grant cycle), and the environment is highly competitive in a way that gives you only a few shots at a scientific career before you are out.


Imagine if everybody in every software company was an "engineer," including the executives, salespeople, and market researchers. Imagine if they only ever hired people trained as software engineers, and only hired them into software development roles, and staffed every other position in the company from engineering hires who had skill and interest at performing other roles. That's how medical practices, law firms, and some other professions work.

For example -- my wife is an architect, so I'm aware of specific examples here -- there are many architecture firms that have partners whose role consists of bringing in big clients and managing relationships with them. They are never called "sales executives" or "client relationship management specialists." If you meet one at a party, they'll tell you they're an architect.

Apparently it's the same thing with scientific research. When a lab gets big enough, people start to specialize, but they don't get different titles. If you work at an arts nonprofit writing grant applications, they will call you a grant writer, but a scientist is always a scientist or a "researcher" even if all they do is write grant applications.


And Boeing was like that. Before they merged with McDonnell Douglas. Before the MAX disaster. Before the failed Starliner.

> Imagine if programmers spent 40% of their time writing documents asking for money to write code.

The daily standup (which I no longer take part in) at work started today at 9:30 as always, and currently (11:50) has people excusing themselves because they have other meetings...

We need a revolution on exposing bad managers and making sure they lose their jobs. For every kind of manager. But that situation isn't very far from normal.


If this were applied in science, we'd still be flying blind with regard to stomach ulcers, because a lot of 'researchers' thought bacteria couldn't live in the stomach (it's obviously a BS reason).

Yes, CRM procedures are very good in some cases, and I would definitely apply them in healthcare, in things like procedures, the issues mentioned, etc.


The higher-level problem is that there are tons of scientific papers with falsified data and very few people who care about this. When falsified data is discovered, journals are very reluctant to retract the papers. A small number of poorly-supported people examine papers and have found a shocking number of problems. (For instance, Elisabeth Bik, who you should follow: @elisabethbik.bsky.social) My opinion is that the rate of falsified data is a big deal; there should be an order of magnitude more people checking papers for accuracy and much more action taken. This is kind of like the replication crisis in psychology but with more active fraud.


This is why funding replication studies and letting people publish null results and reproductions of important results is fundamental.

It will introduce a strong incentive to be honest. Liars will get caught rather quickly. Right now, it often takes decades to uncover fraud.


Unfortunately as you spend more time investigating this problem it becomes clear that replication studies aren't the answer. They're a bandage over the bleeding but don't address the root causes, and would have nearly no impact even if funded at a much larger scale. Because this suggestion comes up in every single HN thread about scientific fraud I eventually wrote an essay on why this is the case:

https://blog.plan99.net/replication-studies-cant-fix-science...

(press escape to dismiss the banner). If you're really interested in the topic please read it but here's a brief summary:

• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.

• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.

• The idea implicitly assumes that very few researchers are corrupt thus the replicators are unlikely to also be. This isn't the case because replication failures are often due to field-wide problems, meaning replications will be done by the same insiders who benefit from the status quo and who signed off on the bad papers in the first place. This isn't an issue today because the only people who do replication studies are genuinely interested in whether the claims are true, it's not just a procedural way to get grant monies.

• Many papers aren't worth replicating because they make trivial claims. If you punish non-replication without fixing the other incentive problems, you'll just pour accelerant on the problem of academics making obvious claims (e.g. the average man would like to be more muscular), and that just replaces one trust destroying problem with another.

Replication failure is a symptom not a cause. The cause is systematically bad incentives.


Incentives are themselves dangerous. We should treat incentives like guns. Instead, we apply incentives to all manner of problems and are surprised when they backfire and destroy everything.

You have to give people fewer incentives and more time to just do their basic job.


> You have to give people fewer incentives and more time to just do their basic job.

That was the idea behind tenure, but then tenure became the incentive. Job security at a socially prestigious job with solid benefits is a huge incentive to fraud, even for people who don't care about science. And for people who do care about doing science and have been directing their entire adult lives towards that end, they face a cataclysmic career bifurcation: either they get a tenured academic research position and spend their lives doing science, or they leave science behind altogether and at best end up doing science-adjacent product development, which at best, if they join a pharmaceutical company or something like that, might sometimes closely resemble scientific research despite being fundamentally different in its goals.

Given the dramatic consequences on people's lives, fraud should be expected. Academic research should acknowledge the situation and accept that it needs safeguards against fraud as surely as banks need audits and jewelry stores need burglar alarms.


OP here. Perhaps I didn't explain it well, but I think the key is to disincentivize bad behavior and to make sure people publishing have some skin in the game.

Right now, it's the opposite. The system rewards flashy findings with no rigor. And that's a slippery slope towards result misrepresentation and downright fraud.


I think this hits the nail on the head. Academics have been treated like assembly line workers for decades now. So they’ve collectively learned how to consistently place assembled product on the conveyor belt.

The idea that scientific output is a stack of publications is pretty absurd if you think about it for a few minutes. But try telling that to the MBA types who now run universities.


You do need to incentivize something. If you incentivize nothing that's the same thing as an institution not existing and science being done purely as a hobby. You can get some distance that way - it's how science used to work - but the moment you want the structure and funding an institution can provide you must set incentives. Otherwise people could literally just stop turning up for work and still get paid, which is obviously not going to be acceptable to whoever is funding it.

I think it’s an interesting feature of current culture that we take it as axiomatic that people need to be ‘incentivized’. I’m not sure I agree. To me that axiom seems to be at the root of a lot of the problems we’re talking about in this thread. (Yes, everyone is subject to incentives in some broad sense, but acknowledging that doesn’t mean that we have to engineer specific incentives as a means to desired outcomes.)

I think there is some misunderstanding here. Incentives are not some special thing you can opt to not do.

Who do you hire to do science? When do you give them a raise? Under which circumstances do you fire them? Who gets a nicer office? There are a bunch of scientists, each clamouring for some expensive equipment (not necessarily the same piece): who gets their equipment and who doesn't? A scientist wants to travel to a conference: who can travel, and where? We have a bunch of scientists working together: who can tell the others what to do and what not to do?

Depending on your answers to these questions, you set one incentive structure or another. If you hire and promote scientists based on how nicely they do interpretive dance, you will get scientists who dance very well. If you hire and promote scientists based on how articulate they are about their subject matter, you will get very well-spoken scientists. If you don't hire anybody, then you will get approximately nobody doing science (or only the independently wealthy, dabbling here and there out of boredom).

If you pay a lot to the scientists who do computer stuff, but approximately no money to people who do cell stuff, you will get a lot of computer scientists and no cell scientists. Maybe that is what you want, maybe not. These changes don't happen from one day to another. You are not going to get more "cancer research" tomorrow out of the existing cancer researchers if you hike their salary 100-fold today. But on the order of decades, you will definitely see many more (or many fewer) people working on the problem.


I meant to cover that in the last (parenthesized) sentence of my post. There will always be incentives in a broad sense, but it is not necessary to "incentivize" people via official productivity metrics. Academics used to figure out whom to promote without creating a rush for everyone to publish as much as possible, or to maximize other gameable metrics. I don't kid myself that there was ever a golden era of true academic meritocracy, but there really did used to be less of an obsession with silly metrics, and a greater exercise of individual judgment.

Einstein, Darwin, Linnaeus. All did science as a hobby. I don't think we should discount that people will in fact do it as a hobby if they can, and make huge contributions that way.

Einstein spent almost all of his life in academia living off research grants. His miracle year took place at the end of his PhD and he was recruited as a full time professor just a few years later. Yes, he did science as a hobby until that point, but he very much wanted to do it full time and jumped at the chance when he got it.

Still, if you want scientific research to be done either as a hobby or a corporate job, that's A-OK. The incentives would be much better aligned. There would certainly be much less of it though, as many fields aren't amenable to hobbyist work at all (anything involving far away travel, full time work or that requires expensive equipment).


> There would certainly be much less of it though

I doubt that very much. And making such a statement without any evidence seems... unscientific :P


>>>• Replication studies don't solve many of the most common types of scientific fraud. Instead, you just end up replicating the fraud itself. This is usually because the methodology is bad, but if you try to fix the methodology to be scientific the original authors just claim you didn't do a genuine replication.

>>>• Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do. It's not clear what happens to the money if someone gets funded to replicate such a paper. Today it's not a problem because replicators choose which papers to replicate themselves, it's not a systematic requirement.

Isn't this what peer review is for?


Yes but it only works if the field has consistently high standards and the team violating them is an outlier. In a surprisingly large number of cases replications are hard/impossible because the entire field has adopted non-replicable techniques.

> • Many papers can't be replicated by design because the methodology either isn't actually described at all, or doesn't follow logically from the hypothesis. Any attempt would immediately fail after the first five minutes of reading the paper because you wouldn't know what to do.

This is indeed a problem. But it can be solved by journals not accepting any paper that does not 1) describe the work in such a way that it can be fully replicated, and 2) include the code and the data used to generate the published results.

Just that would go a long way.

And then fund replication studies for consequential papers. Not just because of fraud but because of unintended flaws in studies, which can happen of course.


Journals can't police academia because their customers are university libraries. It's just not going to happen: journals are so far gone that they routinely publish whole editions filled with fake AI generated content and nobody even notices, so expecting them to enforce scientific rigor on the entire grant-funded science space is a non-starter ... and very indirect. Why should journals do this anyway? It's the people paying who should be checking that the work is done to a sufficient standard.

You're right, but it's what journals are supposed to do - that's why we pay. Otherwise we don't need journals, and everyone can just self-publish to bioRxiv.

They'll only do it if the customers demand it. But the customers are universities, if they cared they could just fix the problem at the source by auditing and validating studies themselves. They don't need to pay third parties to do it. Journals often don't have sufficient lab access to detect fraud anyway.

I do think the journal ecosystem can just disappear and nobody would care. It only exists to provide a pricing mechanism within the Soviet-style planned economy that academia has created. If that's replaced by a different pricing mechanism academic publishing could be rapidly re-based on top of Substack blogs and nothing of value would be lost.


That's a tremendously expensive way to detect fraud. There's funding, but the people replicating also need to have at least near the expertise of the original authors, and the rate of false positives is clearly going to be high. Mistakes on the part of the replicator will look the same as scientific fraud. Maybe most worryingly, the negative impact of human studies would be doubled. More than doubled, probably - look at how many people claimed to successfully replicate LK-99. A paper may need to be replicated many times to identify unknown flaws.

Maybe first we could just try specifically looking for fraud? Like recording data with 3rd parties (ensuring that falsified data will at least be recorded suspiciously), or a body that looks for fabricated data or photoshopped figures?


> Right now, it often takes decades to uncover fraud.

Where? Many in this thread are talking as if there is a monoculture in academia.


> My opinion is that the rate of falsified data is a big deal

Have anything that backs that up? Other than what you shared here?

I would be very interested in the rate at a per-author level, if you have some evidence. Fraud "impact" vs. the "impact" of the article would be interesting as well.


See, for example, a paper mill that churned out 400 papers with potentially fabricated images: https://www.science.org/content/article/single-paper-mill-ap...

Influential microbiologist Didier Raoult had 7 papers retracted in 2024 due to faking the ethics approvals on the research. https://www.science.org/content/article/failure-every-level-...

Fazlul Sarkar, a cancer researcher at Wayne State University, had 40 articles retracted after evidence of falsified data and manipulated and duplicated images. https://www.liebertpub.com/doi/10.1089/genbio.2024.29132.ebi

Overall, Elisabeth Bik has found thousands of studies that appear to have doctored images. https://www.newyorker.com/science/elements/how-a-sharp-eyed-...


All of those examples have no relative meaning. If there are millions of papers published per year, then 1000 cases over a decade isn't very prevalent (though still bad).


Here's some numbers from insiders with relative meaning.

https://blog.plan99.net/fake-science-part-i-7e9764571422

‘It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgement of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine.’ — Marcia Angell

0.04% of papers are retracted. At least 1.9% of papers have duplicate images “suggestive of deliberate manipulation”. About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud.

“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue.” — Richard Horton, editor of the Lancet

The statcheck program showed that “half of all published psychology papers…contained at least one p-value that was inconsistent with its test”.

The GRIM program showed that of the papers it could verify, around half contained averages that weren’t possible given the sample sizes, and more than 20% contained multiple such inconsistencies.
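As an aside, the GRIM check itself is simple enough to sketch in a few lines of Python. This is a toy reimplementation of the idea, not the published tool: for integer-valued data (e.g., Likert responses), a reported mean must equal some integer total divided by n once rounded to the reported precision.

    def grim_consistent(reported_mean, n, decimals=2):
        # True if some integer total T satisfies round(T/n, decimals) == mean.
        target = round(reported_mean, decimals)
        t0 = round(reported_mean * n)  # nearest candidate total
        # Also check the neighbours, to guard against rounding at the boundary.
        return any(round(t / n, decimals) == target for t in (t0 - 1, t0, t0 + 1))

    # Example: a reported mean of 5.19 from n = 28 integer responses is
    # impossible: 145/28 rounds to 5.18 and 146/28 rounds to 5.21.
    print(grim_consistent(5.19, 28))  # False -> worth flagging
    print(grim_consistent(5.21, 28))  # True  -> arithmetically consistent

Statcheck does the analogous thing for p-values, recomputing them from the reported test statistic and degrees of freedom.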

The fact that half of all papers had incorrect data in them is concerning, especially because it seems to match Richard Horton’s intuitive guess at how much science is simply untrue. And the GRIM paper revealed a deeper problem: more than half of the scientists refused to provide the raw data for further checking, even though they had agreed to share it as a condition for being published.

After some bloggers exposed an industrial research-faking operation that had generated at least 600 papers about experiments that never happened, a Chinese doctor reached out to beg for mercy: “Hello teacher, yesterday you disclosed that there were some doctors having fraudulent pictures in their papers. This has raised attention. As one of these doctors, I kindly ask you to please leave us alone as soon as possible … Without papers, you don’t get promotion; without a promotion, you can hardly feed your family … You expose us but there are thousands of other people doing the same.“


There is no reward for finding falsified data. If there were, we'd find a lot more.

Are you saying publications need a bug bounty?

Yes! Labs should escrow $10k for a year for each paper they publish. If anyone finds fraud, they get to keep it.

The article says:

> Yet despite decades of research, no treatment has been created that arrests Alzheimer's cognitive deterioration, let alone reverses it.

Nowhere in the article does it mention that anti-amyloid therapies such as donanemab and lecanemab have so far successfully slowed decline by about 30%. They may not yet be "arresting" (fully stopping) the disease, but it's pretty misleading for the article to completely omit reference to this huge success.

We are currently in the midst of a misguided popular uprising against the amyloid hypothesis. There were several fraudulent studies on amyloid, and those responsible should be handled severely by the scientific community. But these fraudulent studies do not constitute the foundational evidence for the amyloid hypothesis, which remains very solid.


From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".

If they are so good at cleaning up the amyloid, why don't people have more of an improvement? I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.


> From what I've read, those drugs are very good at removing amyloid, but despite that, they don't seem to make much of a noticeable (clinically meaningful) difference in the people treated with them. I personally would not call that a "huge success".

After many decades of research, we've gone in the last few years from no ability whatsoever to affect the underlying disease, to 30% slowdown. To be clear, that's a 30% slowdown in clinical, cognitive endpoints. Whether you call that "meaningful" is a bit subjective (I think most patients would consider another couple years of coherent thinking to be meaningful), and it has to be weighed against the costs and risks, and there's certainly much work to be done. But it's a huge start.

> If they are so good at cleaning up the amyloid, why don't people have more of an improvement?

No one is expected to improve after neurodegeneration has occurred. The best we hope for is to prevent further damage. Amyloid is an initiating causal agent in the disease process, but the disease process includes other pathologies besides amyloid. So far, the amyloid therapies which very successfully engage their target have not yet been tested in the preclinical phase before the amyloid pathology initiates further, downstream disease processes. This is the most likely reason we've seen only ~30% clinical efficacy so far. I expect much more efficacy in the years to come as amyloid therapies are refined and tested at earlier phases. (I also think other targets are promising therapeutic targets; this isn't an argument against testing them.)

I think everyone agrees amyloid is associated with Alzheimer's, the question is how much of a causative role does it play.

To be clear, the evidence for the amyloid hypothesis is causal. The association between amyloid and Alzheimer's has been known since Alois Alzheimer discovered the disease in 1906. The causal evidence came in the 1990's, which is why the scientific community waited so long to adopt that hypothesis.


Reading between the lines, if we gave people those drugs before they showed any symptoms, we should be able to do even better. Has this been tested? How safe are those drugs? What should the average person be doing to avoid accumulating amyloids in the first place?


Reading between the lines, if we gave people those drugs before they showed any symptoms, we should be able to do even better. Has this been tested?

I do expect early enough anti-amyloid treatment to essentially prevent the disease.

Prevention trials of lecanemab and donanemab (the two antibodies with the clearest proof of efficacy and FDA approval) are ongoing: https://clinicaltrials.gov/study/NCT06384573, https://clinicaltrials.gov/study/NCT04468659, https://clinicaltrials.gov/study/NCT05026866

They have not yet completed.

There were some earlier prevention failures with solanezumab and crenezumab, but these antibodies worked differently and never showed much success at any stage.

How safe are those drugs?

There are some real safety risks from brain bleeding and swelling, seemingly because the antibodies struggle to cross the blood-brain barrier, accumulating in blood vessels and inducing the immune system to attack amyloid deposits in those locations rather than the more harmful plaques in brain tissue. A new generation of antibodies including trontinemab appears likely to be both more effective and much safer, by crossing the BBB more easily.

What should the average person be doing to avoid accumulating amyloids in the first place?

There's not much proven here, and it probably depends on your individualized risk factors. There's some evidence that avoiding/properly treating microbial infection (particularly herpes viruses and P. gingivalis) can help, since amyloid beta seems to be an antimicrobial peptide which accumulates in response to infection. There may also be some benefit from managing cholesterol levels, as lipid processing dysfunction may contribute to increased difficulty of amyloid clearance. Getting good sleep, especially slow wave sleep, can also help reduce amyloid buildup.


What about supplementation with curcumin?


Would it be fair to say that it's causal in terms of process, but perhaps not in terms of initiation?

That is, there's a feedback loop involved (or, likely, a complex web of feedback processes), and if a drug can effectively suppress one of the steps, it will slow the whole juggernaut down to some extent?

Am reminded a little of the processes that happen during/after TBI - initial injury leads to brain swelling leads to more damage in a vicious cycle. In some patients, suppressing the swelling results in a much better outcome, but in others, the initial injury, visible or not, has done too much damage and initiated a failure cascade in which treating the swelling alone won't make any difference to the end result.


I’m not sure I understand the process vs. initiation distinction you’re asking about, but yes I do believe there are other targets besides amyloid itself which make sense even if the amyloid hypothesis is true. Anything in the causal chain before or after amyloid but prior to neurodegeneration is a sensible target.

Sure, I was just talking about a step in a feedback loop or degenerative spiral rather than whatever initiates the feedback loop in the first place.

> If they are so good at cleaning up the amyloid, why don't people have more of an improvement?

I have zero knowledge in this field, but there's a very plausible explanation that I think is best demonstrated by analogy:

If you shoot a bunch of bullets into a computer, and then remove the bullets, will the computer be good as new?


Have you seen the price of ammunition lately? I think we'll need a huge NIH grant to run that experiment.


... Had to wipe the screen.

THANK YOU

> a huge NIH grant

One sentence like a Simo Häyhä round.

NIH & grants are a result -- of what cause? Urgently I encourage curious minds to rigorously & objectively discover "what cause"


Does your computer exhibit any plasticity? After how long are we taking the post-sample?


Those quoting the 30% figure may want to research where that figure comes from and what it actually means:

“Derek Lowe has worked on drug discovery for over three decades, including on candidate treatments for Alzheimer’s. He writes Science’s In The Pipeline blog covering the pharmaceutical industry.

“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'” he told Big Think.

“Although the effect of the drug will be described as being about a third, it consists, on average, of a difference of about 3 points on a 144-point combined scale of thinking and daily activities,” Professor Paresh Malhotra, Head of the Division of Neurology at Imperial College London, said of donanemab.

What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.

“That’s a minimal difference, and people are unlikely to perceive any real alteration in cognitive functioning,” Alberto Espay, a professor of neurology at the University of Cincinnati College of Medicine, told KFF Health News.

At the same time, these potentially invisible benefits come with the risk of visible side effects. Both drugs caused users’ brains to shrink slightly. Moreover, as many as a quarter of participants suffered inflammation and brain bleeds, some severe. Three people in the donanemab trial actually died due to treatment-related side effects.”

https://bigthink.com/health/alzheimers-treatments-lecanemab-...

And here’s a Lowe follow-up on hard data released later:

https://www.science.org/content/blog-post/lilly-s-alzheimer-...


“Amyloid is going to be — has to be — a part of the Alzheimer’s story, but it is not, cannot be a simple ‘Amyloid causes Alzheimer’s, stop the amyloid and stop the disease,'”

It's not quite that simple, and the amyloid hypothesis doesn't claim it to be. It does, however, claim that amyloid is the upstream cause of the disease, and that if you stop it early enough, you stop the disease. But once you're already experiencing symptoms, there are other problems which clearing out the amyloid alone won't stop.

What’s more, lecanemab only improved scores by 0.45 points on an 18-point scale assessing patients’ abilities to think, remember, and perform daily tasks.

As I point out in another comment, the decline (from a baseline of ~3 points worse than a perfect score) during those 18 months is only 1.66 points in the placebo group. It's therefore very misleading to say "this is an 18-point scale, so a 0.45-point benefit isn't clinically meaningful". A miracle drug with 100% efficacy would only have achieved a 1.66-point slowdown.


“But once you're already experiencing symptoms, there are other problem which clearing out the amyloid alone won't stop.”

Ok, maybe we’re just arguing different points here. I’ll grant that amyloids have something to do with all of this. I’m having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer’s patient at a point where it can no longer help.

Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?


I’m having a more difficult time understanding why one would suggest these drugs to a diagnosed Alzheimer’s patient at a point where it can no longer help.

My central claim is that the drugs help quite a lot, by slowing down the disease progression by 30%, and that it's highly misleading to say "only 0.45 points benefit on an 18 point scale", since literally 100% halting of the disease could only have achieved 1.66 points of efficacy in the 18-month clinical trial.

This is like having a 100-point measure of cardiovascular health, where patients start at 90 points and are expected to worsen by 10 points per year, eventually dying after 9 years. If patients given some treatment only worsen by 7 points per year instead of 10, would you say "only 3 points benefit on a 100 point scale"?

Or is the long term thought that drugs like these will eventually be used a lot earlier as a prophylactic to those at high risk?

I do believe that they will be more (close to 100%) efficacious when used in this way, yes.


And that is the core problem with what happened. There may actually be a grain of truth, but now there is a backlash. I'd argue, though, that the mounds of alternative explanations that weren't followed up on should get some priority right now: since we know so little about them, there is a lot to learn, and we are likely to have a lot of surprises there.

I see this as the same problem with UCT (upper confidence for trees) based algorithms. If you get a few initial random rollouts that look positive, you end up dumping a lot of wasted resources into that path, because the act of looking optimizes the tree of possibilities you are exploring (it was definitely easier to study amyloid lines of research than other ideas because of the effort already put into them). Meanwhile, the other possibilities you have barely been exploring slowly become more interesting as you add a few resources to them. Eventually you realize that one of them is actually a lot more promising and ditch the bad rut you were stuck in, but only after a lot of wasted resources. To switch fields, I think something similar happened to AlphaGo when it lost a game because it was very confident in a bad move.

Basically, UCT-type algorithms are built on the idea that every rollout should optimize the long-run return, so they only balance exploration with exploitation. When it comes to research, though, that value signal is wrong: you need to search the solution space, because your goal is not for every trial to find the most effective treatment, it is to eventually find the actual answer and then use that going forward. The individual trial values do not matter. This means you should balance exploration, exploitation AND surprise. If a trial gives you very different results than you expected, you have shown that you don't know much there, and maybe it is worth digging into; even if it returned less value than some other path, its potential value could be much higher. (Yes, I did build this algorithm. Yes, it does crush UCT-based algorithms. Just use variance as your surprise metric, then beat AlphaGo.)

People intrinsically understand these two algorithms. In our day-to-day lives we pretty exclusively optimize exploration and exploitation, because we have to put food on the table while still improving; but when we get to school we often take classes that 'surprise' us, because we know that the goal at the end is to have gained -some- skill that will help us. Research priorities need to take surprise into account to avoid the UCT rut pitfalls. If they had for the amyloid hypothesis, maybe we would have hopped over to other avenues of research faster. 'The last 8 studies showed roughly the same effect, but this other path has varied wildly. Let's look over there a bit more.'
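For the curious, here is a minimal sketch of the "exploration + exploitation + surprise" selection rule described above, in a simple bandit setting rather than a full tree search. All names (Arm, select, c, beta) and the variance-based surprise bonus are illustrative assumptions, not any published algorithm's API:

    import math
    import random

    class Arm:
        """Tracks a running mean and variance of rewards (Welford's algorithm)."""
        def __init__(self):
            self.pulls = 0
            self.mean = 0.0
            self._m2 = 0.0

        def update(self, reward):
            self.pulls += 1
            delta = reward - self.mean
            self.mean += delta / self.pulls
            self._m2 += delta * (reward - self.mean)

        @property
        def std(self):
            return math.sqrt(self._m2 / self.pulls) if self.pulls > 1 else 0.0

    def select(arms, total_pulls, c=1.4, beta=0.5):
        """UCB1 score plus a surprise bonus proportional to observed spread."""
        def score(arm):
            if arm.pulls == 0:
                return float("inf")  # always try an untested direction first
            exploit = arm.mean
            explore = c * math.sqrt(math.log(total_pulls) / arm.pulls)
            surprise = beta * arm.std  # high variance: we don't understand it yet
            return exploit + explore + surprise
        return max(arms, key=score)

    # Toy demo: arm 0 is erratic but occasionally great, arm 1 is steady and
    # mediocre; the surprise term keeps pulling us back to investigate arm 0.
    random.seed(0)
    arms = [Arm(), Arm()]
    payoffs = [lambda: random.choice([0.0, 1.5]), lambda: 0.6]
    for t in range(1, 201):
        arm = select(arms, t)
        arm.update(payoffs[arms.index(arm)]())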


yeeeess...but when you look at the slope of the decline in the NEJM papers describing the clinical trials of lecanemab and donanemab...are you really slowing the decline?


To be clear, I think you're asking whether maybe the drugs just provide a temporary "lift" but then the disease continues on the same basic trajectory, just offset a bit?

The studies aren't statistically powered to know for sure, but on lecanemab figure 2, the between-group difference on CDR-SB, ADAS-Cog14, ADCOMS, and ADCS-MCI-ADL (the four cognitive endpoints) widens on each successive visit. Furthermore, while not a true RCT, the lecanemab-control gap also widens up to 3 years in an observational study: https://www.alzforum.org/news/conference-coverage/leqembi-ca...

On donanemab figure 2, there is generally the same pattern although also some tightening towards the end on some endpoints. This could be due to the development of antidrug antibodies, which occurs in 90% of those treated with donanemab; or it could be statistical noise; or it could be due to your hypothesis.


What kind of soured me on whether to recommend lecanemab in the clinic or not: the effect size and the slope, vs. the risk of hemorrhages/"ARIAs".

I mean, if you're looking at a steady 0.8 pt difference in CDR-SB, but the entire scale is 18 points, yes, it's "statistically significant" w/ good p-values and all, but how much improvement is there really in real life given that effect size?

Plus, if one is really going to hawk something as disease-modifying, I'd want to see a clearer plateauing of the downward slope of progression, but it's pretty much parallel to the control group after a while.

There is some chatter in the Parkinson's world - the issue and maybe the main effort isn't so much clearing out the bad stuff (abnormal amyloid clumps/synuclein clumps) in the cells, it's trying to figure out what biological process converts the normal, functioning form of the protein into the abnormal/insoluble/nonfunctional protein.....at least assuming amyloid or synuclein is the root problem to begin with...


What kind of soured me on whether to recommend lecanemab in the clinic or not: the effect size and the slope, vs. the risk of hemorrhages/"ARIAs".

I don't claim that it's obviously the right move for every Alzheimer patient at the moment. It would be great to increase the effect size and reduce ARIA rates. My central claim, again, is that the amyloid hypothesis is correct, not that we have a cure.

the issue and maybe the main effort isn't so much clearing out the bad stuff (abnormal amyloid clumps/synuclein clumps) in the cells, it's trying to figure out what biological process converts the normal, functioning form of the protein into the abnormal/insoluble/nonfunctional protein

Yes, but it appears that these are one and the same thing. That is, amyloid and tau (mis)conformation seems to be self-replicating via a prion-like mechanism in locally-connected regions. This has been established by cryo-electron microscopy of human proteins, as well as controlled introduction of misfolded proteins into mouse brains.


Downvoters, are you sure you have a rational basis for downvoting this informative post? Do we HNers really know enough to discredit the amyloid hypothesis when 99.9% of us know nothing other than that it's gotten some bad press in recent years?

I googled lecanemab and it does have the clinical support claimed. I don't see anyone questioning the data. I'm as surprised as anyone else, even a little suspicious, but I have to accept this as true, at least provisionally.

For anyone who wants to start grappling with the true complexity of this issue, I found a scholarly review [1] from October 2024.

[1] The controversy around anti-amyloid antibodies for treating Alzheimer’s disease. https://pmc.ncbi.nlm.nih.gov/articles/PMC11624191


https://www.reddit.com/r/medicine/comments/1057sjo/fda_oks_lecanemab_for_alzheimers_disease/

"Lecanemab resulted in infusion-related reactions in 26.4% of the participants and amyloid-related imaging abnormalities *with edema or effusions in 12.6%*."

https://en.wikipedia.org/wiki/Cerebral_edema

"After 18 months of treatment, lecanemab slowed cognitive decline by 27% compared with placebo, as measured by the Clinical Dementia Rating–Sum of Boxes (CDR-SB). This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"

https://www.understandingalzheimersdisease.com/-/media/Files...

Sum of boxes is a 19 point scale. So, for those keeping track at home, this is an incredibly expensive treatment that requires premedication with other drugs to control side effects, as well as continuous MRIs, for a ~2.3% absolute reduction in the progression of dementia symptoms compared to placebo, with a 12% risk of cerebral edema.

Now, I'm no neurologist, but I'd call that pretty uninspiring for an FDA-approved treatment.


"This was an absolute difference of 0.45 points (change from baseline, 1.21 for lecanemab vs 1.66 with placebo; P < .001)"

Sum of boxes is a 19 point scale.

It's an 18 point scale, but more to the point: the decline in the placebo group was only 1.66 points over those 18 months, and the mean score at baseline was just over 3 points. So even 100% efficacy could only possibly have slowed decline by 1.66 out of 18 points (what you would call a 9.2% absolute reduction) in the 18 months of that experiment. And full reversal (probably unattainable) would have only slowed decline by about 3 points.
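To make the relative-vs-absolute arithmetic concrete, here is a quick check using the trial numbers quoted in this thread (a sketch of the arithmetic only, not a reanalysis of the trial data):

    # CDR-SB decline over the 18-month lecanemab trial, as quoted above.
    placebo_decline = 1.66   # points lost by the placebo group
    drug_decline = 1.21      # points lost by the treated group
    scale = 18.0             # full range of the CDR-SB scale

    relative_slowing = (placebo_decline - drug_decline) / placebo_decline
    absolute_vs_scale = (placebo_decline - drug_decline) / scale
    ceiling_vs_scale = placebo_decline / scale  # best any drug could do here

    print(f"{relative_slowing:.0%}")   # ~27% relative slowing of decline
    print(f"{absolute_vs_scale:.1%}")  # ~2.5% of the full 18-point scale
    print(f"{ceiling_vs_scale:.1%}")   # ~9.2% ceiling even at 100% efficacy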

I agree that the side effects of anti-amyloid therapies are a serious concern. The reasons for this are being understood and corrected in the next generation of such therapies. For example, I expect trontinemab to achieve better efficacy with much greater safety, and there is already preliminary evidence of that. Furthermore, there are improved dosing regimens of donanemab which improve side effects significantly.

Note that my claim is not that the existing drugs are stellar, and certainly not that they're panaceas. Simply that the amyloid hypothesis is true and there has been tremendous progress based on that hypothesis as of late.


As much as you're chiding people for being a part of a "misguided popular uprising", you're not really making a good case for anti-amyloid therapies. It started at "wow, 30%!" in this comment chain, and now it's at "barely having an effect over a placebo" being tremendous progress?

It seems like you didn’t understand my comment if you think I’ve changed my position from 30% efficacy.

I don't think you've changed your position. Reading the thread, your mention of 30% is super misleading, and you should've led with how little progress has been made instead of chastising people who are correctly upset with the lack of progress.

You have to understand that CDR-SB is a very sensitive measurement. Yes, it's an 18-point scale, but from 4.5 to 18 it's just measuring how bad the dementia has gotten. The vast, vast majority of healthy people will score 0. Going from 0 to 0.5 is a massive difference in cognitive ability.

To emphasize your point, I don't think anyone will notice if someone's alzheimers is 2.3% better.

These rating scales like CDR-SB (invented by drug companies or researchers who are funded by drug companies) are very good at making the tiniest improvement sound significant.


> Downvoters, are you sure you have a rational basis for downvoting this informative post?

Citing relative improvement (30%) instead of absolute improvement (2%) and not explicitly designating it as such.


“Slowed decline by 30%” is explicitly designating it as such.

I disagree. "Slowed decline by 30%" to me means an absolute reduction of 30% in some rate expressed as unit X over unit time, and that's what I thought you meant until another commenter pointed out that it was a relative reduction. IMHO it's not an explicit callout unless you are using the words 'relative' and/or 'absolute'.

I did not downvote, but OP failed to provide a link to back up his claim, or to make explicit what "slowing decline by about 30%" even means.

In light of the fraudulent and scandalous approval of aducanumab [0] (which also targeted amyloid), such claims must be thoroughly referenced.

[0] https://en.wikipedia.org/wiki/Aducanumab#Efficacy


If it helps, here’s info from Dr. Derek Lowe, a 30+ year pharma chemist and author of In The Pipeline. For further research, he has many other posts on the topic, some of which are linked below.

https://www.science.org/content/blog-post/aduhelm-again

https://www.science.org/content/blog-post/goodbye-aduhelm

https://www.science.org/content/blog-post/alzheimer-s-and-in...


How do you know what the downvote status is?


There is anecdotal evidence and perhaps even some small studies showing that a keto diet can halt and even reverse Alzheimer's symptoms.

Compared to that, reducing the speed of decline isn't terribly impressive. It's better than nothing to be sure! But what people want is BIG progress, and understandably so. Billions have been spent.


Billions have been spent because it's a challenging disease to understand and treat. I want big progress too. But we shouldn't let our desire for big progress cause us to lose our ability to objectively evaluate evidence.

I have no opposition to a properly conducted randomized controlled trial of the keto diet, or of other proposed therapies (many of which have been conducted, and are for targets other than amyloid which are completely compatible with the amyloid hypothesis). Until a proper RCT of keto is conducted, anecdotal claims are worth very little compared to the evidence I referred to.


I'm far, far more interested in anecdotes about completely halting or reversing decline than I am in rock solid data about a 30% reduction in decline speed.

Antibiotics started out as an anecdote about something whose effect was so stark it couldn't be missed. Chasing promising anecdotes is far more valuable (in my opinion) than attempting to take a 30% effect to a 100% effect.

Others are free to feel differently, of course. I'm open to hearing about 100 different times that a tiny effect was grown and magnified into a huge effect that totally changed medicine. I'm just not aware of many at this point.


You can be interested in what you want. But the interest in anti-amyloid therapy came from the basic science indicating amyloid pathology as the critical but-for cause of the disease. It wasn't just a blind shot in the dark.

To my knowledge, there's no such basic science behind a keto diet for Alzheimer's.


Turns out there are enough studies for a meta analysis. Is that basic science? I'm not sure what counts.

https://www.sciencedirect.com/science/article/pii/S127977072...

If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?

https://scopeblog.stanford.edu/2024/03/13/why-alzheimers-pla...


Turns out there are enough studies for a meta analysis. Is that basic science?

Basic science in this context means research investigating the underlying disease process to develop knowledge of how it works mechanistically, as distinguished from (and as a precursor to) developing or testing treatments for the disease. This helps us direct resources in plausibly useful directions rather than merely taking shots in the dark, and it also helps us to interpret later clinical findings: e.g. if we see some cognitive benefit in a three-month trial, is that because the underlying disease process was affected (and hence the benefit might persist or even increase over time), or might it be because there was some symptomatic benefit via a completely separate mechanism but no expectation of a change in trajectory? For example, cholinergic drugs are known to provide symptomatic benefit in Alzheimer disease but not slow the underlying biological processes, so that worsening still continues at the same pace. Or if we see results that are statistically borderline, is it still worth pursuing or was the very slight benefit likely a fluke?

So a meta-analysis of ketogenic diets in Alzheimer disease is not basic science, though that doesn't mean it's useless. But what I'm saying is it's really helpful to have a prior that the treatment you're developing is actually targeting a plausible disease pathway, and the amyloid hypothesis gives us that prior for amyloid antibodies in a way that, to my knowledge, we don't have for ketogenic diets.

https://www.sciencedirect.com/science/article/pii/S127977072...

Thanks, I just took a look at this meta-analysis. The studies with the strongest benefits on the standard cognitive endpoints of MMSE and ADAS-Cog — Taylor 2018, Qing 2019, and Sakiko 2020 — all lasted only three months, which makes me suspect (especially given the context of no theoretical reason to expect this to work that I'm aware of) this is some temporary symptomatic benefit as with the cholinergic drugs I mentioned above.

But it's enough of a hint that I'd support funding a long-term trial just to see what happens.

If amyloid is truly the critical "but for" cause then how on earth is it possible that reducing amyloid burden doesn't really make a difference?

I've argued elsewhere in the thread that it does make quite a difference, but there's still a lot of work to do, and I've said what I think that work is (mainly: improving BBB crossing and administering the drugs earlier).


There was absolutely no theoretical reason that some moldy cheese would kill bacteria but thankfully Fleming noticed what happened and we got antibiotics.

There was no theoretical reason that washing your hands would do anything to combat the spread of disease, and all the smart doctors knew otherwise. Some kooky doctor named Semmelweis proposed that doctors should wash their hands between childbirths in 1847, 14 years before Pasteur published his findings on germ theory in 1861. When some doctors listened to him, maternal mortality dropped from 18% to 2%.

I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.

It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined. That's too bad.

The effect of keto diet interventions in Alzheimer's is pretty strong, to my understanding. That should be aggressively hinting that there's likely some as-yet unknown causality worth investigating. We don't have to spend billions to do that. But we do need more funding for it, which is hard to get while all the amyloid hypothesis folks are really invested and clamoring.


There was absolutely no theoretical reason that some moldy cheese would kill bacteria but thankfully Fleming noticed what happened and we got antibiotics.

Again, I'm in favor of people investigating all sorts of random shit.

I agree that sometimes unexpected things pan out. If you want to run a carefully conducted, large long-term trial on ketogenic diets in Alzheimer's, I support you. I'm just skeptical it'll pan out, and on priors I'll put greater expectation on the approach with a scientifically demonstrated mechanistic theory behind it.

I'm all for basic science when the statistical significance becomes so great it really starts to look like causality and then you start figuring stuff out.

It doesn't seem like the statistical significance of the amyloid theory is strong enough that the direction of the arrow of causality can be determined.

What are you basing this on? The p-value on lecanemab's single phase 3 trial was below 0.0001. And the causal role (not mere association) of amyloid in the disease had been demonstrated for years before significant efforts were invested in developing therapies to target amyloid in the first place; most convincingly in the genetic mutations in APP, PSEN1, and PSEN2.


I agree more science is certainly better than less. But patentable therapies will always get a disproportionate amount of funding for big science (versus a basically-free dietary change).

For ketones and cognition, also look for studies on MCT oil. Such as https://pmc.ncbi.nlm.nih.gov/articles/PMC10357178/


There are certainly theoretical reasons why it might help. There is definitely a link between AD and blood sugar. Having diabetes doubles your risk of AD. The brain regions hit first and worst in AD have the highest levels of aerobic glycolysis (in which cells take glucose only through glycolysis and not oxidative phosphorylation, despite the presence of adequate oxygen).

To the extent a keto diet can reduce resting blood sugar levels and improve insulin sensitivity, there is good reason to think it is a candidate to slow AD.


It's possible you might adopt a different attitude if one day you're diagnosed with rapid-onset Alzheimer's. At that stage you'd be forgiven for muttering 'basic science be blowed'. Keto (or whatever) offered some relief for my friend Bill, so I'll give it a try, given it's my survival at stake.

Continental drift was suggested in 1912 and not supported (to put it politely) at that point by 'basic science'. It took until the 1960s for plate tectonics to be accepted. A paradigm shift was needed, as Kuhn explained.

Meanwhile, this paper (2024) https://www.sciencedirect.com/science/article/pii/S127977072... 'Effects of ketogenic diet on cognitive function of patients with Alzheimer's disease: a systematic review and meta-analysis'

concludes "Research conducted has indicated that the KD can enhance the mental state and cognitive function of those with AD, albeit potentially leading to an elevation in blood lipid levels. In summary, the good intervention effect and safety of KD are worthy of promotion and application in clinical treatment of AD."


There are also studies showing a plant-based diet can reverse Alzheimer's symptoms as well. It has to do with atherosclerosis.


Can you provide a source for this?

I'm not aware of any RCT showing long-term improvement of Alzheimer's symptoms from any treatment. I am aware of 1) long-term slowing of worsening (not improvement) from anti-amyloid therapy, 2) short-term benefits but no change in long-term trajectory from other therapies, and 3) sensational claims without an RCT behind them.


Sadly this stems from a structural problem in biology and medicine, and is far from exclusive to the field of Alzheimer's. Some reforms are urgent; otherwise progress is going to be incredibly slow. The same pattern is repeated again and again: someone publishes something that looks novel, but is either an exaggeration or a downright lie.

This person begins to attract funding, grant reviews and article reviews. Funding is used to expand beyond reasonable size and co-author as many articles as possible. Reviews mean this person now has the power to block funding and/or publication of competing hypotheses. The wheel spins faster and faster. And then, we know what the outcome is.

The solution is to make sure reviews are truly independent plus limitations on funding, group size and competing interests. I think that tenured researchers that receive public funding should have no stock options nor receive "consulting" fees from private companies. If they want to collaborate with industry, that's absolutely fine, but it should be done pro bono.

Furthermore, if you are a professor who publishes 2 articles per week and is simultaneously "supervising" 15 postdocs and 20 PhD students at 2 different institutions then, except in very few cases, you are no longer a professor but a rent seeker that has no clue what is going on. You just have a very well oiled machine to stamp your name into as many articles as possible.


I'm not a fan of many of the practices you complain about here, but I will say this: we get paid too little for what we do, for way too long. 6 years of grad school ($24k/yr) and a 6-year postdoc ($42k/yr) in California, when I was in those positions anyway. Today, at UC Davis, assistant professors in the UC system start at $90,700 [1, for salary scale], which is often around 12 years after their undergraduate degree. That's in California, where a mortgage costs you $3,000 a month, minimum.

[1] https://aadocs.ucdavis.edu/policies/step-plus/salary-scales/...


Why do employees keep voluntarily accepting that type of abuse? Low wages aren't a secret, and the employees doing that work aren't idiots, so they must know what they're getting into. Are they doing it out of some sort of moral duty, or as immigrants seeking permanent resident status, or is there some other reason? Presumably if people stopped accepting those wages then the wages would have to rise.


Those numbers aren't accurate anymore; they're out of date, and pay is now much higher. Also, I voluntarily pay my students and postdocs 2-3x those numbers currently.

But ultimately (1) those are seen as training positions that lead to a tenured faculty position, which pays fairly well, and has a lot of job security and freedom; (2) certain granting agencies limit what you can spend on students and postdocs, to levels that are too low for HCOL areas.


A lot of this is defined by the NIH; K23/R01 grant amounts specify what kind of salary they support and are kind of set by the powers that be...

UC system def has more overhead than usual, and there may be some cost-of-living adjustments but...

Hence why a lot of us went into private practice...


I'll add that it's a buyer's market. There are plenty of postdocs without posts (who want to work in the academic space), so if you don't want the post, there are plenty in line who do.

There's no post-doc-research union to set and enforce reasonable pay scales, but equally a union would have difficulty adjusting rates to local cost of living.

Put another way - supply and demand baby, supply and demand.


UC postdocs are unionized… but it’s not a great union. The higher paid postdocs saw their pay and benefits go down from the union.

You keep bringing up one state in the union as if the whole system is flawed because of this one state.


Picking a state that's not California nor New York, Massachusetts, etc.: https://www.indeed.com/career/assistant-professor/salaries/A...

Says $60,000 for University of Arkansas or 2/3rd of what was listed for California.

I'm not an academic, nor do I live in California (or Arkansas), but $90,000/year after 6 years working below minimum wage and 6 years barely above it doesn't sound that great in economic terms. Hopefully people are getting benefits from teaching or research.


Except that is where the majority of research is done, where the prestigious schools and students are, and where those people want to live. Plenty of good research gets done in Arkansas, and Texas, and North Carolina, but not in the cheap parts. It happens in Research Triangle, or Austin, or Fayetteville. It doesn't happen at Ouachita Baptist University in Arkadelphia.

The somewhat good news is that people get into science and medicine because they believe in them, and they're often willing to work for peanuts so that big pharma can take their work and charge Medicare $80k/yr for a new drug that might work.

There are huge problems in academia and its incentive structure, but I don't think they're related to being in urban vs. rural America (they exist just as badly in Europe, China, and India).


it also depends on the lab.

It's a very small world for various reasons and sometimes, there's a good combination between a PI and a hosting institution. Sometimes there's not. If the guy who's doing what you want to do has his lab at UAB, you go to UAB. That being said, once you get your K23 or R01, because of NIH matching funds, you have more of a choice of where you go.


Those places you list have gone up painfully in a relative sense (like everywhere lately) but are nothing like the absurdity of California. You don't need two highly-paid professional incomes to afford a house with a long commute. There's also Atlanta, Houston, Dallas, Chicago, central Florida, many others.


$60K is very hard to believe. I wouldn't trust indeed.com for faculty salaries.

For one thing, the variance is high between departments. Engineering gets paid a lot more than history, for example.

According to https://www.univstats.com/salary/university-of-arkansas/facu..., assistant professor is $91K. The caveats still apply - some likely get a lot more, and some a lot less.

Fortunately, it's a public university, so we can see actual salaries:

https://app.powerbi.com/view?r=eyJrIjoiZGM3Yzg2YzMtNDY3YS00N...

Looking at some random assistant professors in relevant departments, I see:

One who earns $113K

Another who earns $94K

Another who earns $96K.

These are regular departments, (biochemistry, biology, etc). Not medical departments. I'm sure those ones get paid more (e.g. one I personally knew in Houston got $180K in a public university in Dallas).

So mid $90's would be my guess.

Then note that these are 9 month salaries, and the typical deal is they get up to a third more from grants (their "summer" salary). So total compensation for assistant professors would be about $120K.

Faculty salaries are not that bad, as a rule. What's really bad is the number of years they spend trying to get a postdoc.


for biomedical, it is what your grant stipulates. I.e. I think way back when, K23's paid $98k for 75% time (or maybe 70%) and your institution agreed to pay the rest. Sometimes, they would actually pay you more to try and get close to fair market value, but that is if the department is generous. For famous institutions, like UCLA, or Brigham and Women's, the law of supply and demand is not on your side bc if you don't like the low salary, there's a giant line of wannabes waiting to take your spot.


Are you sure that those are 9 month salaries?


Published historical pay will be 12-month salaries. Most non-tenured professors will refuse the optional summer salary and work all summer for free, because they have to pay for it from their own grants - it means hiring one less student, and less chance of getting tenure.

Fairly certain. It's also in line with salaries I know from other departments. This is the "fixed" amount of the salary. The grant portion is variable, and also not paid by the state, so it's usually not required by the law to disclose.


You mean hire one or get one?


That one state is where the apex of the system is. It's where a lot of, maybe most, research happens, it's where perhaps most tech development happens, it's even where a lot of our popular culture is determined. It's where ~everyone is aiming for, even if only a fraction of them will make it there, so it affects the whole system.


You sincerely believe this?


All that given, most people still live on the East Coast; something like 80% live east of Nevada. Culture is arguable.

For Americans, there is a clear difference between what behavior might be normal in the bay area/silicon valley all the way to LA than it is for NY, Boston, Houston, Miami, Detroit, etc.

I'd even assert "most tech development" is just plain wrong. It's certainly where many companies are HQ'd, but those same companies have offices all over the states, and each one offers/specializes in different products.

It also depends on what you mean by "tech development" of course. R&D projects and new developments, maybe there's an edge. I have a much stronger feeling that more research is performed in the Boston -> DC metropolis than the equivalent (as in distance) metropolis spanning from LA -> Silicon Valley.


I was contextualizing my response because cost of living is higher in California, and some of those numbers may seem more reasonable if it were in Arkansas, for example.


I think NIH does a cost of living adjustment.


Instead of amyloid and tau, we now have a bunch of promising new leads:

- insulin and liver dysregulation impacting the brain downstream via metabolic dysfunction

- herpesviruses crossing the blood brain barrier, eg after light head injury or traveling the nervous system

- gut microbiota imbalance causing immune, metabolic, or other dysregulation

- etc.

These same ideas are also plausible for MS, ADHD, etc.


curious if you could link to relevant papers? Thanks!


There’s a good discussion in the previous article discussed on HN, including links to various papers.

1. https://news.ycombinator.com/item?id=42893627

2. https://pmc.ncbi.nlm.nih.gov/articles/PMC8234998/


The NIH already has total funding limits for grant eligibility, and the issue of competitors blocking your publications is pretty much eliminated by asking them to be excluded as reviewers, because we almost always already know who is going to do that. A competent editor will also see right through that.

I did my postdoc in a very well funded lab that was larger than even your examples- and they legitimately could do big projects nobody else could do, plus postdocs and grad students had a lot more autonomy, which helped them become better scientists. The PI worked at a distant/high level, but he was a good scientist and a skilled leader doing honest and valuable research, and had economies of scale that let him do more with the same research dollars. It was the least toxic and most creative and productive lab I’ve ever seen. Banning that type of lab would be to the massive detriment of scientific progress in my opinion.

I also disagree about banning consulting and startups for PIs- that is arguably where research ends up having the highest impact, because it gets translated to real world use. It also allows scientists to survive in HCOL areas with much less government funding. Frankly, I could make 4x the salary in industry, and if I were banned from even consulting on the side it would be much harder to justify staying an academic while raising a family in a HCOL area.

I am also very upset about academic fraud and have seen it first hand, but I think your proposed solutions would be harmful and ineffective. I'm not sure what the solution is, but usually students, postdocs, and technicians know if their PI is a fraud and would report it if it were safe for them to do so; they just don't have enough power. Fixing that would likely solve this. Even for a junior PI, reporting on a more senior colleague would usually be career-ending for them, but not for the person they are reporting on.


I agree with you that OP's ideas don't work. But I don't agree with you either. Let me point to a far more fundamental problem, which is that even your well-meaning lab is still a Ponzi scheme that survives only because it gets cheap labor in the name of more trainees, when there's no evidence that the academic system can handle MORE people. Even if we have the funding, the peer review systems we depend on are clearly becoming less effective just because of pure volume and breadth.

I have other issues with the system but in the end this is the most important problem I see to be solved first.


We're talking here specifically about how to solve the problem of academic fraud- not how to solve the problem of treating trainees more fairly. That's an important problem also, but not what my comments were addressing. Still, I'll share my thoughts on that also.

In my lab I voluntarily pay competitive industry level salaries- I pay solid 6 figures to postdocs, usually about 2x what other labs pay. I often pay my trainees more than I make myself, and I often only hire one person on a grant that most people would fund a whole lab on.

It's a gamble, but it seems I get more skilled people who stay longer and can actually afford to live, which works better than having a few more people who are all super stressed and looking for a better job. So far, it's paid off for me, and I would recommend it to other academics.

Second, very few of my postdocs and grad students actually want low paying academic jobs. Most want to join the biotech startup scene, where they'll make a lot more than what they'd make as an academic, and the demand for people with their skills far exceeds the supply. I am giving them the training they need to actually get those jobs, and succeed at them while paying very well in the process. I talk with them about what they actually want- and I make sure it happens for them to the best of my abilities.

When I was a grad student/postdoc, I really wanted to become a PI, and was worried it was a Ponzi scheme because so few people ended up doing so. But when I was actually a postdoc, I realized most of my fellow postdocs were highly competitive for faculty positions, but still choosing not to take them. Many were even receiving academic offers and turning them down for industry offers with much higher salaries. I was even co-recruited to a tenure-track position, along with a respected colleague I was really hoping to work with for decades; his co-acceptance of the offers was a big reason I chose this institution. About a week into his academic job, he got an 'offer you cannot refuse' from a startup and left on the spot.

But overall, my situation is somewhat unusual, mostly because my field is currently in extremely high industry demand. There are indeed a lot of grad students and postdocs making pennies, and with no real job prospects beyond a tiny shot at a faculty job.


at this moment, what NIH? No study sections for grants since the inauguration...


devil's advocate - is that not the very definition of scaling?


It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.


I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.


They are joking.


I also can't tell whether Stanford is joking, but the notion that he's a good fit for the job of biology professor is definitely funny!


I'm no expert, but I suspect it is a longer process to remove someone from a tenured professor position than to remove them as President. We don't know that it won't eventually happen.


There are betrayals so severe that a grindingly slow due process is itself an additional betrayal. Not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.


Like most HN readers I'm in computer science, but I do academic research so I review papers - not a lot, but probably 20-25 a year, sometimes more.

I get paid nothing to do this - it's considered "service", i.e. it's rude to submit a lot of papers and not review any in turn. (it turns out there are a lot of rude people out there) In general no one in academic publishing gets paid anything, although an "area editor" who tries to convince people like me to review papers might get paid a bit if they work at a lower-quality for-profit journal. (other fields have high-quality for-profit journals, but not CS)

Some of the papers I review may be fraudulent. I have no way of figuring this out, it's not my job, and I don't have access to the information I'd need to determine whether they are.

The use of images in certain life sciences papers has made it much easier to detect a certain class of fraud, although even these checks would be difficult for an individual reviewer to perform. (the checks could be integrated with the plagiarism checker typically run on submissions before they are reviewed, and I think some journals are starting to do this)
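As a rough illustration of what such an automated image check might look like, here is a sketch that flags near-duplicate figures across submissions using perceptual hashing (the imagehash and Pillow libraries are real; the directory layout and the distance threshold are assumptions):

    # Flag near-duplicate figures across a set of papers via perceptual hashing.
    # Requires: pip install pillow imagehash. The figures directory is hypothetical.
    from pathlib import Path
    from PIL import Image
    import imagehash

    seen = []  # (hash, path) pairs already processed
    for path in sorted(Path("extracted_figures").glob("*.png")):
        h = imagehash.phash(Image.open(path))
        for prev_hash, prev_path in seen:
            if h - prev_hash <= 6:  # small Hamming distance: likely the same image
                print(f"possible reuse: {path} ~ {prev_path}")
        seen.append((h, path))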

In CS it would be much more difficult to detect fraud, because there's no equivalent to the standard and easily-compared western blot and photomicrograph images in life sciences papers. However I'll note that a lot of CS venues are starting to have an "artifact evaluation" phase, where authors of accepted papers submit their software and a team of students try to get it to work themselves. It's not mandatory, but most authors try for it - the main purpose is to encourage reproducible science, but it also creates an environment where fraud is more difficult.

(I'm only aware of conferences which have artifact evaluation - none of the journals I know have tried this)


thank you for your service.

> no one in academic publishing gets paid anything

How come none of those crypto and/or VC bros are closing in on this issue? Better and more peer-reviewed research means more areas in which to raise economically viable ideas, so the gravitational pull would get stronger in coming generations.

And those chain-guys talk a lot about trust, verification, decentralization, (proof of work) and so on ...

It's a "boring niche" that needs some ruckus, if you ask me.


The amyloid hypothesis, as described to me by someone working in the field, is not only wrong but harmful to patients. His research suggests it is more probable that the plaques are actually protective and do not directly cause the memory loss and other disease symptoms. This idea was pushed aside and ridiculed for years, all because of the greed and lies of people like Eliezer Masliah.

https://pmc.ncbi.nlm.nih.gov/articles/PMC2907530/

https://journals.sagepub.com/doi/abs/10.3233/JAD-2009-1151

https://www.utmb.edu/mdnews/podcast/episode/alzheimer's---co...


Yes! This plays into my favorite pet theory - that herpesviruses (and/or other microbes) are the cause of Alzheimers:

https://pmc.ncbi.nlm.nih.gov/articles/PMC5457904/pdf/nihms85...


Now journalists dunk on science with publication-fraud stories, claiming it's wider than it could possibly be! It will certainly get engagement!

More burn-the-bridges science journalism; "the worst that can happen is somehow the commonplace"; erode trust in institutions more, and so on and so forth.

We are currently plummeting into nihilism; see you at the bottom. Hope the clicks were worth it.


What is the incentive to call out a competitor on their lies?

If you want to be truly cynical, it's to your benefit NOT to call them out; then other competitors might spin their wheels following the fraudulent results of your competitor while you're not wasting your time chasing the fiction.


That applies if one were maximizing for patient outcomes, but not if maximizing for grants

Those who would condemn "science" need to explain why their concept is different from listening to the weather report and doing the exact opposite -- that's not a recipe for success, and "science" has gotten far more right than wrong over the years.


Say it again, and loud: no null publications leads to a reproducibility crisis.

Especially in an academic discipline that is fundamentally bowing to its industry counterparts for scraps.

This is coming to you from a field which, in modern times, reinvented the trapezium rule...


Similar well-founded concerns were raised before:

"How an Alzheimer’s ‘cabal’ thwarted progress toward a cure" (2019) https://news.ycombinator.com/item?id=21911225


How can I trust anything now? I recently asked ChatGPT if learning multiple languages could slow dementia, and now I realize there's no way to know the answer to this even if I confirm it isn't hallucinating.

Fwiw, we touched on this topic in some of my linguistics classes. Even then (~7 years ago), the claim was that learning multiple languages _slowed the appearance of some symptoms_. We debated whether that was really the same as slowing the disease, or if it was just hiding the effects. It probably depends who you ask.

Based on the scale and impact of fraudulent results, I wonder if some form of LLM-based approach with supervised fine-tuning could help highlight the actually useful research.

Papers are typically weighted by citations, but in the case of fraud, citations can be misleading. Perhaps there's a way to embed all the known Alzheimer's research, then fine-tune the embeddings using negative labels for known fraudulent studies.

The resulting embedding space (depending on how it's constructed; perhaps with a citation graph under the hood?) might be one way to reweight the existing literature?


Highlighting research that's useful is probably too difficult, but highlighting research that definitely isn't should be well within the bounds of the best existing reasoning LLMs.

There are a lot of common patterns in papers that you learn to look for after a while, and it's now absolutely within reach of automation. For example, papers whose conclusions section doesn't match the conclusions in the abstract are a common problem, likewise papers that contain fake citations (the cited document doesn't support the claim, or has nothing to do with the claim, or sometimes is even retracted).
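As a sketch of how the abstract-vs-conclusions check described above might be automated: the ask_llm helper below is a hypothetical stand-in for whatever chat-completion client you use, not any real provider's API:

    def ask_llm(prompt: str) -> str:
        # Hypothetical: wire in your actual LLM provider here.
        raise NotImplementedError

    def conclusions_match_abstract(abstract: str, conclusions: str) -> bool:
        """Ask the model whether two passages from one paper agree."""
        verdict = ask_llm(
            "Compare these two passages from the same paper. Do the stated "
            "conclusions support the claims made in the abstract? "
            "Answer YES or NO only.\n\n"
            f"ABSTRACT:\n{abstract}\n\nCONCLUSIONS:\n{conclusions}"
        )
        return verdict.strip().upper().startswith("YES")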

For that matter you can get a long way with pre-neural NLP. This database tracks suspicious papers using hand-crafted heuristics:

https://dbrech.irit.fr/pls/apex/f?p=9999:1::::::

What you'll quickly realize though is that none of this matters. Detection happens every day; the problem is that the people who run grant-funded science just don't care. At all. Not even a little bit. You can literally send them 100% bulletproof evidence of scientific fraud and they'll just ignore it unless it goes viral and the sort of media they like to read begins asking questions. If it merely goes viral on social media, or if it gets picked up by FOX News or whatever, then they'll not only ignore it but double down and defend or even reward the perpetrators.


I think you misunderstand my proposal. I am not describing a fraud classifier.

I am describing fine-tuning an embedding space based on papers with known fraud.

The content of the fraudulent paper, which includes information such as authorship and other citations, can be made to exist in an embedding space.

Supervised fine-tuning on labels will alter the shape of that embedding space.

The resulting embeddings that would come from such a fine-tuned model would generate different clusterings of papers than what you'd get if you did not have the labeled fraudulent data at all.


I honestly cannot tell if you're being serious or sarcastic.


Me neither. But it’s very much in keeping with other seriously-intended suggestions I’ve heard. Optimism is fine until it becomes just dreaming and wishing out loud.


Can you explain to me what about the idea I presented is flawed or infeasible?


Sorry, I guess you were not being sarcastic. LLMs are good at vocabulary and syntax, but not good at content (because nothing in their architecture is designed for that). Since the kind of article we're looking for would read exactly the same if it were true, an LLM is not a good match for finding it.

Now there might be algorithms that could help, for example by automatically checking for photo doctoring or for reuse of previously used images that are not attributed. These sorts of things would also not be an LLM's forte.

My apologies again, it's just that LLMs are the subject of so much hype nowadays that I genuinely thought you might be saying this in jest.


I think you misunderstood my proposal.

LLMs are good at producing embeddings, which are latent representations of the content in the text. That content, for research papers, includes things like authorship, research directions, and citations to other papers.

When you fine-tune a model that generates such embeddings with a labeled dataset representing fraud (consisting of say 1000s of samples), the resulting model will produce different embeddings which can be clustered.

The clusterings will be different between the model with the fraudulent information and without the fraudulent information.

Now using this embedding generation model, you (may) have a way to discern what truly significant research looks like, versus research that has been tainted from excess regurgitation of untrustworthy data.
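A minimal sketch of what that fine-tuning step might look like, with synthetic stand-ins for the base embeddings and the fraud labels (in practice you would use a real text encoder and a curated set of documented retractions):

    import torch
    import torch.nn as nn

    # Synthetic stand-ins so the sketch runs end to end.
    torch.manual_seed(0)
    n_papers, dim = 200, 768
    base_embeddings = torch.randn(n_papers, dim)  # from a frozen encoder
    fraud_labels = torch.randint(0, 2, (n_papers,)).float()  # 1 = known fraud

    class ProjectionHead(nn.Module):
        """Learns to warp the frozen embedding space using the fraud labels."""
        def __init__(self, dim_in, dim_out=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                     nn.Linear(256, dim_out))

        def forward(self, x):
            return nn.functional.normalize(self.net(x), dim=-1)

    def contrastive_loss(z, labels, margin=0.5):
        # Pull same-label papers together, push different-label papers apart.
        dist = torch.cdist(z, z)
        same = (labels[:, None] == labels[None, :]).float()
        pos = same * dist.pow(2)
        neg = (1 - same) * (margin - dist).clamp(min=0).pow(2)
        return (pos + neg).mean()

    head = ProjectionHead(dim)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for step in range(200):
        loss = contrastive_loss(head(base_embeddings), fraud_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Clustering head(base_embeddings) now yields different groupings than
    # clustering base_embeddings directly, because the space has been
    # reshaped by the fraud signal.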



Not to cure, but to prevent Alzheimer's, just eat a teaspoon of virgin coconut oil every evening. As a bonus, you'll get more vivid dreams.

The big problem is not science per se but capitalist pharma. Science eventually self-corrects, but capitalism creates a huge inertia, driven by the fact that people do not want to lose money and do everything possible to push things as far as they can. So much investment went into the amyloid hypothesis.

The story of $SAVA is paradigmatic. Every neuroscientist knew that stuff was based 100% on fraudulent results, but they nevertheless managed to capitalise it into billions.


Cassava Sciences is mentioned in the article, and their problem is that the drug they backed was based on fraudulent academic research. The professor has been indicted by the DOJ for defrauding the NIH of $16M in grants. This isn't an indictment of capitalist pharma, because the original fraud wasn't done in a capitalist system.

Adult age is not a disease, any more than childhood is a disease.

The article starts with a non sequitur: "we have tackled some diseases, so we should be able to tackle Alzheimer's, but we haven't."

It would be great if opinion articles never made it to the front page, on any media aggregator, anywhere.

They don't make it to the front page of a newspaper.


HN is not a newspaper. Opinion pieces fit perfectly well within the guidelines as long as they are factual and interesting.


... sure?

There is a trend of flagging many other categories of media on this site, even when they're "factual and interesting". I was making a plea to the void of this community to stop eating up this category of bullshit.


I am going to point out how much the vibes have shifted here. When earthquake scientists gave false reassurances in the run-up to the 2009 L'Aquila earthquake, likely contributing to the many deaths there, Italian criminal proceedings resulted. At that time, though, the usual suspects decided that this was science under attack, and there was high-level protest and indignation that scientists might face legal consequences for their statements. https://en.wikipedia.org/wiki/2009_L%27Aquila_earthquake#Pro...


I think that's an apples-to-oranges comparison.

First, keep in mind that while those scientists were originally convicted, they were exonerated on appeal. The appellate court held that responsibility lay with the public official who announced that the area was safe without first consulting the scientists.

Second, this case isn't clear-cut. Some still fault the scientists for not correctly interpreting the seismological data and for not speaking up against the public official who was supposedly to blame. There's a real question about whether these scientists were correct in their judgement.

At any rate, this is pretty far from scientific fraud. Seismology (as far as I, a layman, understand it) is not a science where you can make exact predictions. Being wrong about the future isn't fraud; it's just unlucky.


You're conflating outright fraud with a difference of scientific opinion?


To be fair, it looks like there was fraud in the Italy case - just by the builders, not the scientists.


The issue isn't whether someone else had a contrary opinion: the issue is that (just going by the linked reports) Bernardo De Bernardinis came out of the meeting with the scientists and informed the public that there was "no danger". Now, either the scientists felt that this was a reasonable summary of what they had said, or they didn't: either of those is bad, in different ways.


Okay, honest question: in what world is a magnitude 5.9 a "major" earthquake? 5.x earthquakes happen multiple times per day somewhere, and the same area had known much worse earthquakes in the preceding century.

How bad do your buildings have to be to get a 3-digit death toll from something that weak? I'd expect a 3-digit toll for "number of fine-china plates broken".


Don't be a dick.

Christchurch was absolutely fucked by a magnitude 6.1 Mw (USGS) earthquake in 2011 - shallow depth and close to the city, so we had severe outcomes. And we have reasonably good earthquake building codes in New Zealand.

Poorer countries with more fragile infrastructure could be devastated by a smaller quake - depending on local circumstances.

The only thing that saved Christchurch from far far worse outcomes was that the most dangerous buildings were already empty because of damage from a larger earthquake in 2010.

I admit the log scale means that 6.1 is far bigger than 5.9... however, you handwaved "5.x", which shows your ignorance of log scales.
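To put rough numbers on that (taking as given that radiated energy on the moment magnitude scale grows as 10^(1.5·M)):

    # Energy ratio between two moment magnitudes: 10^(1.5 * (m2 - m1)).
    def energy_ratio(m1: float, m2: float) -> float:
        return 10 ** (1.5 * (m2 - m1))

    print(round(energy_ratio(5.9, 6.1), 1))  # ~2.0x: 6.1 vs 5.9
    print(round(energy_ratio(5.0, 6.1), 1))  # ~44.7x: why handwaving "5.x" hides a lot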

We had plenty of aftershocks below 6.1 - they are extremely unpleasant and have awful psychological effects on some people.

And we are hundreds of kilometers away from the major Southern Alps fault line - the equivalent of the San Andreas. So we were blessed with some newly discovered minor fault lines.

https://en.m.wikipedia.org/wiki/2011_Christchurch_earthquake

https://en.m.wikipedia.org/wiki/2010_Canterbury_earthquake


Why should scientists be held to higher account than, say, politicians?



