When my grandma fought Alzheimer's over the span of a decade, it was striking to see how much the effects of dehydration and of the disease itself overlapped. When she had a particularly bad day we used to give her a saline IV, and you could immediately see how she cleared up and regained her mental faculties after each one.
I'm not up to date on current research in that direction, but I think hydration plays a very big role in Alzheimer's symptoms in the elderly.
I know it sounds ridiculous, but once I'm rich enough I want to build an elderly care center/hospital with urine-based hydration monitoring for each participant so family members can be sure that their loved ones have a healthy water intake each and every day.
My guess is that fluid intake could be tackled to some extent by a measured drinking device. The difficulties are spillage being counted as consumption, simple and safe operation for people with reduced cognitive, physical and sensory abilities, the need for hospital-level cleanliness of the device, and of course cost. I do not know why we do not have devices like this when people can die from lack of fluids.
A less obvious way technical people can help is to test the accessibility of their products. A device with a black button on a black background, or which in some other way obscures its controls and functions, is a failure in this respect. Make all the functions visible, easy to understand and easy to operate. Test websites with automated tools like https://try.powermapper.com/Demo/SortSite
I'd like to one day have a room-sized analysis unit which screens feces/urine of all attached toilets and performs various tests on it. Analysis results could be assigned to individuals based on the DNA footprint.
It could immediately tell you that your sugar intake is too high or that you should drink more water. With this kind of immediate feedback, people will be able to live healthier because they can correlate what they eat with its effects on their body.
Also we could do so many interesting studies...
1 - https://www.youtube.com/watch?v=DJklHwoYgBQ
Consumers have demonstrated that they're willing to save $5-20 a month in exchange for sending gym check-ins or daily step counts, or uploading digital monitoring of their driving habits, to their insurers.
I don't think it'll ever be mandatory, but I can see a day where the "discount" for agreeing to continuous monitoring and data upload is so great that it'd be cost-prohibitive for the average consumer not to be a part of it. On the personal health front, I think the biggest obstacle is the cost of the data collection devices. That's what's relegated it to higher-end targets like diabetic monitors (for now).
I would be looking to my government to ensure such unwanted and unwelcome intrusions into an individual's private life by private companies aren't required to achieve that, whether through the nationalisation of such services or some other method. Anything else is an affront to the implied social contract which allows such private companies to profit from their activities in society.
Stupid thing is I need a 100M exit first to sponsor this kind of thing.
You're probably referring to gut bacteria, which live in your colon. It's true that a lot of research is being done towards these bacteria, and they appear to play an important role in various systems.
However, stools are usually coated in a thin layer of mucus -- which makes them a bit shiny and serves as a lubricant. It's unlikely that enough stool interacts with the toilet bowl to get much interesting data out of it. And even if it did, it would probably be contaminated by the presence of other people's stool.
This process felt extremely invasive. I would have killed for a toilet like what you describe, and I feel as though my physical/medical treatment would have been more successful.
(Somewhat tangentially, the food given at hospitals doesn't make sense for someone who has been nutritionally deficient for years).
Such a simple solution.
There have been attempts to build a wearable hydration sensor but no one has made the technology work reliably yet. It's a big potential market if someone can find a solution.
One thing that's confusing and distressing to me is the amount of self-inflicted difficulty that the diseases seem to cause people to suffer from. If asking "What have you had to drink? Have you taken your meds? What have you eaten?" can cause them to feel so much better, what is it about aging, about the treatments, and about the disease that makes people neglect taking care of themselves?
It's usually hard to convince someone who is thirsty not to drink. Depending on who you talk to, the level of hydration that results from drinking when you're thirsty may not leave you completely well-hydrated, but staying fully hydrated isn't difficult or painful either - keeping a water bottle with me seems to get close enough. And it's not like you can blame the person with the disease. What's the systemic problem that causes so many elderly to suffer from dehydration?
Here's some recent research you might be interested in:
Last I heard in the US, the Federal government was still making it difficult to run clinical trials for Alzheimer's using marijuana. I think any federally funded medical trial using marijuana needs to be approved by the DEA.
A PET scan is not a biomarker. Amyloid beta (AB)* is the biomarker; the PET scan with a tracer (such as Pittsburgh Compound B) is used to detect the biomarker.
It's also important to note that amyloid beta is only part of the problem. Neurofibrillary tangles (NFTs) are the primary biomarker of Alzheimer's dementia, and they form inside the neurons. NFTs disable the ability of neurons to communicate with each other, and lead to the death of the disabled neurons.
It is completely possible that we can't reverse AD, but can only slow it down / halt progression.
*Was it AB40 / AB42 / or a different Amyloid Beta?
I would expect this test (and therefore the hypothesis) to fail if the plaques themselves are a byproduct rather than the cause of degradation in brain function. That said, I'm surprised I haven't read much in Gates' Notes or other funding sources on potential ties to HSV1 considering all the increasingly supportive research between the two.
According to Fossel, their telomerase gene therapy reverses cognitive decline in animal models, and they're confident the same will apply to humans. Human trials begin in 2018.
I wonder whether Gates has seen Telocyte's work.
It's crazy how much focus has been on clearing amyloid without much luck at all: 224 clinical trials and only 2 drugs getting through.
I’m making this investment on my own, not through the foundation. The first Alzheimer’s treatments might not come to fruition for another decade or more, and they will be very expensive at first. Once that day comes, our foundation might look at how we can expand access in poor countries.
It's intriguing they're able to draw that line, but it makes sense if they want to keep the Gates Foundation focused.
As a rule, when children normally live to adulthood, population growth halts soon after.
There are countries that do worse, but even in the most underdeveloped, war-ravaged hell-holes in the world, living past 5 is not an anomaly. As of 2012, even in sub-Saharan Africa the under-5 mortality rate was down below 100 per 1000 (though still awful - compare with 6 per 1000 for the most developed countries and a world average of 48), and even the very worst - Sierra Leone - was at 234 per 1000. Only a handful of countries are anywhere near 200, and only 16 are above 100.
I agree with your overall point about focus, but exaggerating is not useful.
It has been a disappointment to see the Gates Foundation nearly entirely bypass this area of research until fairly recently, presumably because they hold this strange view of aging as either a non-issue or a problem of wealthy regions.
From a utilitarian perspective, nothing that goes on in the poorest parts of the world rises to the level of harm done by age-related disease. Not even close.
There is no danger that aging will be overlooked.
For an example of the new type of venture emerging now as a result of all of this, you might look at Leucadia Therapeutics, working on a way to restore drainage through the cribriform plate; but there will be more in the years ahead to complement the mainstream immunotherapies for amyloid and tau.
Wow, I did "notice" a strong correlation between gum disease and dementia in my own family. This is obviously my own uneducated guess, but it's humbling to see serious people looking into it! There very well could be some sort of dementia that is indeed related to gum disease!
Also, I wonder how a citizen scientist like Gates does a deep dive on a topic like this. There are tens of thousands of research papers on the subject and countless books.
I've never heard of this before. Some cursory googling only turns up a sole-author publication by Alan MacDonald in Medical Hypotheses and a few news pieces by dodgy organizations like the Spirochaetal Alzheimer's Association.
Despite what a lot of people learn in undergrad psych classes, the evidence linking amyloid beta and tau to AD is quite good and getting stronger with every new molecular paper that comes out.
BTW, there doesn't seem to be a correlation between Lyme disease incidence and AD https://www.ncbi.nlm.nih.gov/pubmed/24840565
You might see https://doi.org/10.3389/fnagi.2017.00336 as a sample review paper that talks about oral bacteria that are capable of spurring amyloid formation in addition to spirochetes.
Lastly one of the complaints I've heard from someone involved in the cerebrospinal fluid drainage side of Alzheimer's research is that it is really, really hard to fight the amyloid clearance by immunotherapy dominant faction to get any funding or attention for alternatives. They just don't want to hear it.
There was also the case of actor/songwriter Kris Kristofferson being misdiagnosed with Alzheimer's for years. He had Lyme. This is anecdotal of course.
(edit) Thinking about it more, I think the anecdote you mentioned is a bit different from the parent posts. The anecdote is about a misdiagnosis (he only had Lyme, not Alzheimer's), while the parent posts are about people actually having both diseases.
1) Develop a drug (or find one to remarket). Find a disease that it treats or invent one.
2) Pump money into advocacy groups devoted to the disease.
3) Find research that supports the use of your drug.
4) The advocacy group spreads awareness of the disease, even if it's made up, and people start asking their doctors to test for it. Some may even receive a diagnosis.
5) Public pressure reaches the FDA, AMA, et al to consider the use of the drug for treatment.
About 100 discontinued clinical trials targeting amyloid beta disagree with you there, https://www.nature.com/articles/nrd.2017.194
Sure, but what causes the amyloid beta? Why wouldn't a long-standing syphilis-like bacterial infection in the CNS cause that sort of protein buildup?
It's a little shocking to see a "former Alzheimer's researcher" presume that tau buildup and bacterial infections are mutually exclusive...
That's like saying heart disease can't be caused by arterial scarring because it's caused by plaque buildup.
Recent research shows that lack of sleep increases both amyloid beta and tau. One of them increases after a single bad night. The other takes a few days of too little sleep to start spiking.
There may be other factors, but the research suggests we need to work on sleep hygiene for at risk populations. This is a non drug intervention that I can wholly get behind.
> "Spinal taps showed that the more deep sleep people missed out on, the higher their levels of A-beta in the morning. Tau levels didn’t budge because of just one night of slow-wave sleep disruption, but people whose activity monitors indicated they had slept poorly the week before the test also had higher levels of that protein."
Love your plaque line - although plaque is compensation and amyloid beta and tau are not, or not per se.
The plaques seem similar to remnants of an incomplete chemical synthesis. For instance, glutamate is synthesized in the brain and is used as a neurotransmitter.
If a bacterial infection prevented glutamate synthesis (potentially by utilizing some crucial component for its own propagation), you'd see a build-up of otherwise useful amino acids that would manifest as plaques.
The plaques should hypothetically be cleaned by neuronal autophagy but something could be throwing that off as well.
And if neurotransmitter production is degraded, of course cognition will suffer. You see similar symptoms in schizophrenia as well.
Your answer is interesting. For those following I should note that compensation means a curative or attempted curative event (as opposed to a disease event such as bacterial infection.)
As what is probably an aside: neurotransmitter production being actively downregulated as a compensatory mechanism (to lower local metabolic needs) is possible, and an interesting thought (whether mine or yours) - just not one that creates a buildup of tau etc. I haven't encountered any evidence that this is compensatory and not just a downstream effect. Your mileage may vary. The evidence that mental activity reduces disease progression might be taken as disconfirmation of such a hypothesis - except that mental practice could be helping mask the underlying progress of the disease by increasing skill/adaptation.
Re schizophrenia, I now think of it as a definition affected by a kind of "publication bias." The underlying disease process (or processes) may have rather random results depending on just what part of the brain is most affected - only when the amygdala is strongly affected is a patient likely to get a diagnosis of schizophrenia, since this can make them (or make them seem) "a danger to self or others."
Edit: When I had the pleasure of watching Bill Gates do this, it was one of the most impactful moments in my life. When he would question a SME on a topic he came across as knowledgeable as the SME who has dedicated years to the topic. Bill Gates literally will have read all around the topic and his mind operates like a poker player or chess player who is thinking on multiple levels. His questions are probes to see how many levels you are thinking on, and I think it also informs him of other angles. He’s a brilliant guy.
I feel like I could improve how I approach learning things in fields that are completely new to me, or where I'm far from an expert. From everything you have said and that I've read elsewhere, Gates and others at his level seem to have really honed their approach to learning and becoming able to have an informed conversation with an SME.
Example (I'm making this up, but it's a similar type of question I saw): someone is explaining how their application stores data. He asks, "Are you encrypting it?" Yes. "How?" Using XYZ. "Why not ZYX?" Well, we haven't thought a lot about that yet. "What's the impact on the processor when encrypting?" Not sure...
The engineer thought he was building an app to do X, but Bill Gates is operating at a different level; perhaps he has a theory on the future of hardware, or on how he wants to deploy this in developing countries. Or he read a recent paper where researchers highlighted the impact of encryption for this specific type of application that he thinks you missed. If he respects you, he will point you in a certain direction and it's a peer-to-peer discussion and sharing of ideas; if he doesn't... it's painful to watch.
Thats not to say that Gates isn't extremely intelligent and hardworking, just that his approach isn't necessarily the best. Also, Spolsky's anecdote is from a few decades ago. Gates will have matured a bit since then.
There is actually quite a lot of lyme disease in Northern California.
That’s why we need research. We don’t know if they have a common cause or just a common mechanism of action. They very well could be the same disease; Lyme Disease is not well understood and has been implicated in all sorts of neurological diseases such as Multiple Sclerosis and Alzheimer’s.
But the link isn't quite obvious enough, which rules out government money; and the potential treatments aren't considered to be profitable, which rules out commercial money. Good for Gates, stepping in here.
In short, they stopped manufacture because demand was not high enough to justify defending the lawsuits - even though the science was on the side of the vaccine (though this is partially hindsight; it takes years to do good science but much less time to file a lawsuit based on a correlation).
There is a new vaccine entering trials now. I hope it works, but only time will tell. (for all I know it may have failed trials - success will take more time, but failure can be quick and I wouldn't always find out)
They aren't just limited to the Eastern US.
It's really hard for a medical layman such as myself to sift through all the science. This source is interesting for the story of its discovery via ethnobotany even if it doesn't end up a smoking gun cause of Alzheimer's or other neurodegenerative diseases.
I doubt there are 400,000 papers worth reading in all of medicine. A major skill in research is understanding what to read, and how carefully.
Prepping to write a review usually involves about 150 articles, for me. About 50 of them just need a cursory look, and the abstract often suffices. Another 80 or so need some more attention; you might be looking for patterns in the methods used, cross-referencing results, figuring out what labs are doing what, tracking authors over time, etc. This is how you understand the state of a field.
The last 20 or so are the bulk of your time. A few are genuinely excellent and will be a highlight of the review. Some are simply solid work that requires a good amount of attention. A fair number are terrible and highly misleading. These are the papers that look okay, but have some flaws that need to be addressed by the field at large. It takes some tact to write about these, but it's usually not too bad.
Finding good authors, especially ones that write good reviews, can get you up to speed in a field in an afternoon. An evening chasing down citations and digging a bit on your own, and you're probably about where Bill was with this. Since Alzheimer's is a pretty big subject requiring familiarity with a variety of topics, I'd give it two weeks.
Very important studies are usually unexpected - that's part of what makes them important, so we're verging on a tautology there. So it's a matter of hindsight, in every field, which contemporary articles are in fact "the classics." Your certainty and trust in current authorities' broad knowledge and open-mindedness is a product of not knuckling down to read the abstracts of a few hundred thousand unrecommended journal articles - as well as a profound ignorance about the history of medicine.
That history more than suggests that if you value quality over the authority of contemporary publications, you'd better start reading, and reading nearly everything.
400,000 hours = 50,000 days (at eight hours a day) = 208 years (at 5 days a week, 48 weeks a year)
If the abstracts weren't really read and understood, just skimmed, perhaps it would be possible to do in a tenth of the time, so about 21 years.
In other words, there's a LOT of dross, but there are real gems lying there unread and uncited, too.
So only one in ten or a hundred articles might take an hour or more.
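The arithmetic above is easy to sanity-check in a few lines. This is just a sketch of the stated assumptions (one hour per paper, 8-hour days, 5-day weeks, 48 working weeks a year):

```python
# Back-of-the-envelope check of the reading-time estimate above.
PAPERS = 400_000
HOURS_PER_PAPER = 1          # assumption: one hour to read a paper properly
HOURS_PER_DAY = 8
WORKING_DAYS_PER_YEAR = 5 * 48  # 5-day weeks, 48 weeks a year = 240 days

hours = PAPERS * HOURS_PER_PAPER
days = hours / HOURS_PER_DAY            # 50,000 days
years = days / WORKING_DAYS_PER_YEAR    # ~208 years

# Skimming abstracts in a tenth of the time cuts this tenfold.
skim_years = years / 10                 # ~21 years

print(round(years), round(skim_years))  # 208 21
```

Even the optimistic skimming scenario lands at two decades of full-time reading, which is the point: nobody reads everything, so filtering is the real skill.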
Average for all but the vital or good articles (after accumulating a good background and vocabulary), when I timed it back then: 100 an hour without losing any significant information. The vital remainder, per 100 abstracts viewed, took me (more of a guess, now) on average not more than double that amount of time; perhaps less. That varied a lot because I was looking most keenly for overlooked knowledge (studies) and pairs of studies that were significant only once linked. I therefore sometimes was willing to follow tangents to tangents just in case, and could speed up at that point because it was nearly all dross.
Back when I was in university, supposedly studying the history and philosophy of science and language, I had to visit medical libraries and use the CDs from the then-private company Pubmed, and walk back and forth between a large printed medical encyclopedia chained to a pedestal and the desk-sized CD reader Pubmed provided. I don't include that reading, as I took no surviving notes and wasn't counting abstracts way back then. Being able to research online when I returned to medical reading a couple of decades later was an immense relief. Switching tabs to get to a medical dictionary isn't nearly as much exercise, but it is a lot faster. Printed medical dictionaries for professionals have a LOT of pages to turn, in addition to the walking.
We could use all sorts of data, not just from research or pharmaceutical companies. Is there an overview of what data is available or planned for, from clinical to commercial? And what data-integration platforms are planned for? I suspect many countries are trying to invent this on their own, while a world-wide approach could yield better results.
That's very encouraging. Because what if it's something else?
If it's something else, then chances are good it'll be far harder to fix, perhaps even requiring genetic engineering or medical nanotechnology. Finding out is still valuable, of course...
You hit the nail on the head, and I think you bring up an interesting point.
Ask anyone who has a condition stigmatized by the medical community, say chronic fatigue, about how the medical system reacts to their symptoms.
You will find that in many cases, doctors will try to deny the existence of the disease. I think this is because it is so complicated as to elude a 'simple solution'.
Perhaps our doctors and medical researchers are too predisposed to look for simple solutions in these cases, because that approach worked so well in the past.
Maybe we've picked all the low hanging fruit, and now it's time to tackle the tougher problems, using a more system-wide approach.
We seem to be stuck in the mindset of: single disease/single cause/single solution.
I'm happy that we're doing so. For a very long time this was treated as just a fact of life, immutable.
I'm basing my statements mainly on what I've heard from people who are, and I've yet to hear about glial cells. If any easily fixable cause turns out to be the truth, then I'll be happy.
Whatever the reason, this has the potential of being life-changing for a lot of people. Would love to see more billionaires involved with this kind of initiative.
Death is terrible, and should be eradicated, but Alzheimer's is a few steps above that. There's nothing worse than seeing a loved one be slowly destroyed by that horrific disease, and I'm so glad there are people willing to work on it.
We are sensitive to death, but it is the only mechanism I know of that controls overconsumption of natural resources. If nobody dies, we would deplete those resources in a few decades.
The worst solution we can come up with is to literally kill everyone. We can find a better way.
Imho it'd only work the other way around: For us to beat death, we first have to solve our sustainability/resource problems.
What point is there to "endlessly living" if the reality of that boils down to being stuck in overcrowded and miserable living conditions? That sounds more like hell than utopia.
In combination, those should dramatically increase the carrying capacity of the world.
Ignoring all of that, though... Don't you think it's likely that people will start caring more about the future once they know they'll live to see it?
I don't think we'll see anything like the exponential progress of the industrial age (which has already slowed down). All the low-hanging fruit has been picked, and at this point there are diminishing returns.
We will have a few breakthroughs here and there but nothing to write home about the way the steam engine, electricity or the computer have been.
>Ignoring all of that, though... Don't you think it's likely that people will start caring more about the future once they know they'll live to see it?
Most people are terrible at managing their own lives a few years ahead, much less decades ahead. So there's no reason to think they'll be any more apt when it's something as vague as centuries ahead.
This conversation is going in circles. Has anyone brought up “death gives life meaning”?
A platitude uttered by people who believe they have no way out, and are desperate to make themselves feel better about that. Not necessarily a bad thing, so long as there really is no way out, but catastrophic once that changes.
Destroying death will lose us something, most certainly; there are any number of great works of art and philosophy that would never have been created without the pressures and emotions it creates. But none of that, I think, can even remotely make up for the sheer awfulness of the downside.
There has to be a better solution than this creeping horror.
Imho it's way more desperate trying to find "a way out of death" by defining it as something that could be "destroyed" (how do you destroy the nonexistent?), instead of simply accepting that literally nothing lasts forever, most certainly not rather fragile biological life. There's nothing horrific about it unless you make it so.
Our biological lifespans are in large parts responsible for forming our subjective views of the universe, making us ask for a purpose of that limited existence. If we'd ever reach a state where that doesn't matter anymore, then we've most likely left our biological shells behind and reached a level of consciousness which we couldn't even imagine right now.
I'm not sure if such a state would even still be perceived by us as "being alive" and not rather as simply "existing".
An AI was "let loose" with a variation of Asimov's laws of robotics, and interpreted the requirement not to let anyone come to harm through inaction as requiring it to literally end death.
The main character is a woman that feels it left her life without meaning, and who in response started gruesome competitions involving designing the most outrageous ways to die. Or rather, getting as close to the moment of death as Prime Intellect will let them before reviving them.
I changed only one parameter, while your assumption changes a whole lot of them in a very optimistic way. Which model do you think is more realistic?
> Don't you think it's likely that people will start caring more about the future once they know they'll live to see it?
Eh, imho people are too diverse for anything universal like that. Might as well pose the question if life actually still has meaning without death? If there's no end, how do we define the beginning and the different phases that used to be in-between?
Those kinds of questions would probably be way more relevant than "Let's all apply some foresight, now that we don't actually need it anymore due to being immortal!".
The idea of those wanting that is to perpetuate themselves (and their current generation) in eternity, screw successors.
Nonetheless, physical bodies of any kind cannot seem to exist as they are for eternity. Even diamonds decay. Abstractions change as well-- ideas thought to be immortalized actually do change, actually do die. And they can be immortalized in time, in the past-- but towards the future, there has never been a concrete conception of such immortality, only in the frozen sense.
Also prices. New kinds of mining are invented to circumvent cost restrictions. Otherwise, certain kinds of metals and gasses would stay forever buried as unusable junk.
Also note that fish that are significantly easier to eat (not a lot of bones, easy to farm, etc.) are more utilized. "Trash fish" are usually just full of bones or too much work to prepare to eat in a tasty way.
I strongly disagree with this statement. We learn through death, and yet we don't know what death is. We do not know the fundamentals of death, yet it occurs on all abstractions.
I think of this poetically, because I don't understand death. But all things being relative, life does not make sense without death.
Edit: On this topic I would also like to point out the eagerness of the immortal mindset. We are living 30+ years longer than our ancestors, but it seems we are approaching an asymptote. With Alzheimer's and other neurodegenerative diseases, we lose our mental faculties. We lose memories, emotions, we lose everything. Yes, this is the topic at hand, but does that not indicate something fundamental as well? We have reached a point where living implies the dissolution of what we used to be, of what we spent so long striving towards the future to do. And at that point of degeneration, we no longer exist in that sense. We can no longer orient ourselves towards the future, and we decay while living.
If there is something after death - if whatever composes our consciousness is actually tangible and autonomous, and not an emergent result of our brains (similarly, if emergence itself is autonomous) - then after the Alzheimer's patient dies in body, he/she will not exist in consciousness either. By that point, his/her consciousness will have dissolved.
And these are rather philosophical points, but this is how I see things. Science and philosophy, science and spirituality even, do coexist in my subjective experience.
Is death really that terrible? There's plenty of things I'd like to try before I die, and I hope to live until I'm much older, but on the other hand becoming immortal doesn't seem like paradise to me. If I were to suggest what we'd be better off eradicating I'd suggest fear of death over death itself. I'd rather live 100 years free from fear of dying than 1000 years in constant fear of dying.
There is nothing wrong with self-interest. It's why all the people posting in this thread are still alive. A given person's spending in the developed world on their food budget in a year, is a scaled equivalent in resources to what Gates is probably going to spend on Alzheimer's. The difference is, if his investment pays off it helps millions of people. That's the best kind of self-interest.
You might not be far off on “he’s getting older”. When you’re younger, you don’t think about the impact of “old people” diseases.
Hearing that a couple billion people will get Alzheimer’s is not nearly as impactful as seeing someone a little older than you with it.
Within reason - the end doesn't justify the means.
For those who haven't listened to the podcast, the treatment is effectively just being exposed to a 40Hz (IIRC) flashing light. Which sounds surprisingly simple.
Alzheimer's is terrible to witness at any age - the elderly included. While it's certainly tragic to see in younger people, it's no less tragic to see older people in a state of fear and panic when they should be enjoying their twilight years.
For those who want to check similar approaches, google the keywords "ketogenic diets and Alzheimer's".
I think this is yet another case for a ketogenic diet/intermittent fasting...
>"Most drug trials to date have targeted amyloid and tau, two proteins that cause plaques and tangles in the brain."
This is assumption, not fact:
edit: oh, there it is, 25 lines beginning at line 8201.
This comment made me laugh with the tragedy that is the modern web.
In other words, what would you do if you were in Gates position?
Message to most websites that fail to load without their cruft - I just don't need you.
It's completely toxic to the green movement.
I think Bezos must be applying some of these hacks on himself; he recently got in shape, while in previous decades he seemed to let himself go. Bill might be thinking about the same, as he is visibly deteriorating and his liver seems to not be functioning well (spots on his skin everywhere). They should add calorie restriction/frequent fasting as well. It's unlikely they will beat death, but it might make their final moments less horrible.
I think he's always had freckles, no?