Hacker News
Why Drugs That Work in Mice Don't Work in Humans (thelri.org)
173 points by apsec112 25 days ago | 111 comments



"Only 14% of drugs that are tested on humans succeed in demonstrating effectiveness[1], and all of these are drugs that have been found efficacious in animals, so successful animal studies are very far from a guarantee by themselves."

Regardless of the reason, this seems fine/workable. Animal studies are a step in the funnel. I assume the step from petri dish to mouse is similar.

I wonder how many false negatives get produced: drugs that would have worked in humans but don't work in mice, for similar reasons.
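A back-of-envelope version of that question, with every number an illustrative assumption (nobody actually knows the mouse model's sensitivity):

```python
# Rough false-negative arithmetic for mouse screening.
# All three numbers below are made-up assumptions, not real data.

candidates = 10_000            # drug candidates entering animal studies
p_works_in_humans = 0.05       # assumed fraction that would actually work in humans
mouse_sensitivity = 0.60       # assumed chance a truly working drug also works in mice

truly_effective = candidates * p_works_in_humans           # 500 drugs
false_negatives = truly_effective * (1 - mouse_sensitivity)

print(f"{false_negatives:.0f} human-effective drugs discarded at the mouse stage")
# → 200 human-effective drugs discarded at the mouse stage
```

Even modestly imperfect sensitivity throws away a large absolute number of winners, and those losses are invisible because the discarded drugs never reach human trials.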


14% is bad when you consider that the NEXT step after preclinical experiments is basically clinical trials (and Phase I does not test efficacy), which effectively cost several hundred million dollars or more.

And the pharmas just add the cost of the 86% of trials that fail to the price of the meds that do pass, so they effectively pass the cost of their bad decisions on to us.

I worked for almost a decade in a lab that studies antibody therapeutics, and I could go on a tirade about all the shitty decisions companies seem to make in candidate identification. Fundamentally, I really believe this number wouldn't be this low if the people making the decisions in pharma companies knew what they were doing. But then, the CEO of GSK used to sell lipsticks, so who am I to talk, right?


I've always assumed that the reason for the low rate of success at that stage was just a result of things that work on rats not working on humans. Your post is suggesting that there is some evidence available before testing on people that would help increase this rate but gets ignored. I haven't seen anyone express this view before and would like to learn more about it. Would you please elaborate on what kind of information isn't being used for decision making but should be?


I am not the parent, but I worked at a few biotech startups.

My impression is that you are _mostly_ correct (a lot of animal models for disease don't accurately reflect the analogous state in humans).

However, I was privy to a situation similar to that described by the person to whom you replied.

* A start-up that I worked at developed a "candidate drug" to treat a condition (I don't want to get too detailed).

* The drug was good at ameliorating the disease in our simple models, but we also had a strong suspicion that the drug would be toxic in humans and other higher animals.

* A decision was made that we would determine whether the drug was toxic in three different species of large animal (I think we chose some type of monkey, dogs, and a third I don't remember).

* Our "go/no-go" decision was: If the drug was NON-toxic in at least 2 of the 3 animal species, we would move to clinical (human) trials.

* 6 months later, we got the results: drug was NON-toxic in 1 of the 3 species.

* This looks like "no-go", right?

* Wrong. Board of directors put pressure on the CEO and senior management ("We put 8 figures of our hard-earned $ into round B and you promised us clinical trials over a year ago.").

* CEO folded like an accordion and got everyone to agree that we should proceed w/ Phase I trials (small n of humans tested, primarily to determine toxicity and maximum tolerated dose) despite the results of the animal studies.

* Phase 1 trial starts. Everyone gets sick (maybe 10 patients) after about 5 days of taking the drug. Trial is halted.

* We go back to square 1.

CEO should have never folded under the pressure because it was a waste of money, time, and it put people at risk.

I've only seen this once (going to trial with a risky drug), so I think it is a rare event.

edit2: Ironically, this is an example of a situation where the animal-studies CORRECTLY predicted what would happen in humans.


> CEO should have never folded under the pressure because it was a waste of money, time, and it put people at risk.

At the same time, the board of directors should never have exerted the pressure, for the same reasons.


Agreed.

That company was unusual in that there always seemed to be a tension between the board and the company (distrust). And both sides (CEO and board) seemed to have a short-term view of everything (we were going to "flip" the company to a large pharma), which caused a lot of bad decisions to be made.

The other (more successful) start-ups that I worked at had a much more harmonious relationship with their boards (e.g., "We are in this together and we are here for the long-term.")


> seemed to have a short-term view of everything (we were going to "flip" the company to a large pharma),

> which caused a lot of bad decisions to be made.

Unfortunately, if they were trying to flip the company, then they may have made the 'right' (from the perspective of payoff) call.

I can imagine that under some models, if you are 86% confident that it won't work and only 14% confident that it will, the millions wasted may have been easily justified by the potential payoff, if that payoff was in the $100 million range.
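A toy expected-value sketch of that gamble. The 14% is the industry-wide success rate cited upthread; the payoff and trial cost are made-up round numbers in the ranges mentioned in the story ("8 figures", "$100 million range"):

```python
# Toy EV sketch of the board's gamble. All figures are assumptions.

p_success = 0.14             # industry-wide chance the drug works in humans
payoff = 100_000_000         # assumed payoff on success ($100M range)
trial_cost = 10_000_000      # assumed cost of forcing a Phase I trial (8 figures)

expected_value = p_success * payoff - trial_cost
print(f"EV of proceeding: ${expected_value:,.0f}")   # positive, so the gamble 'pays'
```

Notice what's missing: the participants' risk doesn't appear anywhere in the formula, which is exactly the ugly part of the utilitarian framing.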

Normally, we wouldn't bat an eye at such a decision; the ugly part here is the high risk imposed on the people participating in the study :(. But even that can be justified by a similar utilitarian calculus. Bleh.

Remind me not to be a Pharma CEO, it sounds like an all around unpleasant job if you aren't a complete sociopath.


Imagine a similar situation but where people died: Who's corporately responsible for that decision? The company, or the individual directors? How about the NEDs?


> Imagine a similar situation but where people died: Who's corporately responsible for that decision?

No idea. Thankfully, all patients stopped the drug and the toxicity went away.

I remember having a company-wide meeting where they showed us a result from the Phase I trial. It was a graph of time on the x-axis and the concentration of an important molecule in each patient's bloodstream on the y-axis. Every patient had the concentration of that molecule drop to dangerously low levels after about 5 days on the drug, and I'm thinking, "Is this a surprise to anyone here?"


No one. Look up Juno Therapeutics. Their trials killed more people than they saved.


I would very much say that if the scientists and stakeholders really, really wanted to pore through the data to understand whether it might work or not, they could get more insight from their experiments.

Most antibody therapies (which form the bulk of currently explored therapies) have to be tested on modified mouse models, simply because these antibodies are raised against human targets that have to be engineered into the mice. But such artificial models introduce lots of complications that people have to spend a lot of time interpreting. On several occasions I've seen, from just the published data, glaring warning signs that a potential therapy might have adverse interactions with the immune system or the like, and the company would just go ahead with trials anyway.


The failure in humans of stuff that worked in mice is actually the biggest driver of the cost of drug development.

According to a widely cited paper on the cost of drug development [0], phase 2 has the lowest probability of success of any stage in the funnel, at 30-40%. Phase 2 is generally the first time a drug's effectiveness is studied in humans. It can cost $50-100M+ to get a drug through a phase 2 study.
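To see why that stage dominates costs, amortize the failures over the successes. The 35% success rate and $75M per-attempt cost used here are rough midpoints of the ranges quoted above, not figures from the paper:

```python
# Amortizing phase 2 failures over phase 2 successes.
# 35% and $75M are assumed midpoints of the ranges quoted above.

p2_success = 0.35
p2_cost = 75_000_000          # per attempt, midpoint of $50-100M

attempts_per_success = 1 / p2_success          # ~2.9 attempts per success
spend_per_success = attempts_per_success * p2_cost

print(f"~${spend_per_success / 1e6:.0f}M of phase 2 spend per phase 2 success")
```

Every approved drug effectively carries the phase 2 bills of the two failures behind it, which is why improving translational predictiveness has such leverage on overall drug costs.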

One of the major advances in drug development in the last few years is reducing the "translational risk", i.e., the risk that animal models and other disease models are not predictive of human outcomes. A recent study suggests that we have actually started to make some improvements in this area, which is a huge step toward lowering drug costs and getting more new medicines.

[0] https://www.nature.com/articles/nrd3078


If the overlap is that small, it raises the question (for me) of how many drugs would work in humans but are ruled out because they fail in mice.


Model animals are used in research mostly for two reasons: cost, which is always a big hurdle for academic studies, and reproducibility, especially with inbred mice. Inbred mice are often used because their behavior and activity are well-characterized, they are highly available, and they are genetically more consistent. However, that last point is often challenged nowadays; research shows they are not as genetically consistent as assumed. For drug development, the very first step is to create an animal model, but this too is difficult: more often than not, one cannot generate an animal model that replicates all the symptoms, which can mean a model without the actual cause. I think this is also why genetic studies of diseases are still a hot field; starting from a gene, one can make an animal model and test with more confidence whether that gene is the real cause.


For those that are interested, the Animal Welfare Act of 1966 (and the many amendments to it) specifically spells out the animals that can be used for research and testing. There aren't that many of them, which is why rats and mice are so popular: they are some of the few animals that can be used for testing. The AWA is structured such that all animals are covered by default, and only a few species are specifically excepted. It's an expansive law that also covers herding, meat processing, cosmetics, hunting, etc.

In general, the AWA is very restrictive in terms of animal testing for scientific research. Getting exceptions to use an animal for research is very intensive and highly regulated, taking years to get the approval. Getting approval for studies that may involve studying intentional pain, harm, etc to an animal can take up to a decade for approval, and with much study and research on other possible methods. Internal Review Boards (IRBs) are required to include local lay people, clergy, other non-scientists, etc in order to approve a study. Vivariums are also highly regulated and frequently inspected.

The US, at least nowadays, takes the welfare of animals extremely seriously with stiff fines and real consequences for breaches and lapses in protocol.

https://www.aphis.usda.gov/animal_welfare/downloads/AC_BlueB...

https://en.wikipedia.org/wiki/Animal_Welfare_Act_of_1966


While I agree that animal research is well-regulated in the US, many of the details in your comment aren't quite right.

The Animal Welfare Act specifies the minimum standards of care for research and exhibition of most species of mammals. Birds, rats, and mice aren't covered by the AWA, nor are "cold-blooded" animals. These are covered by other regulations from the Office of Laboratory Animal Welfare (OLAW; for federally-funded projects) or AAALAC (private funding). Farm animals are also not included, unless they are being exhibited or used for research.

The AWA requires research institutes to form an Institutional Animal Care and Use Committee (IACUC), which includes scientists as well as clergy and members of the community. This is usually different from the Institutional Review Board (IRB), which regulates research involving human subjects. The idea is admittedly similar, though.

The AWA does not limit the species involved in research. In theory, you could propose to set up a grizzly bear colony or something and the University of California has (had?) a hyena colony for a while. The IACUC is responsible for making sure that animals receive the care required by the AWA, OLAW, and other standards, so they might turn you down if suitable resources are not available or other, more humane approaches could be used (e.g., use a species that is less affected by captivity).


Thank you for the clarifications!


Except for maybe the last paragraph, this is not new information to basically anyone performing studies. The problem is we don't have any ethically and scientifically better alternatives.


We are getting there. In-vitro methods, often involving cell cultures, are gaining traction. In-silico, also.

Robotic automation and synthetic biology help with that a lot. Machine learning can help make sense of more limited data, for example predicting toxicity and other parameters for drugs so that you can better choose what to forward to Human trials.


In vitro studies have been done for ages, but are often a worse model than closely related organisms. For instance, there are various human cancer cell lines and while these are routinely used to study various aspects of biology (cancer being just one of them), they are generally a poor model for drug trials. The current research trend goes towards developing organoids — cell cultures with a distinct 3D organisation and biological makeup that mimics actual organs — that more closely resemble actual human tissues. It’s still early days though.


Organoids do count as in vitro. And as I said, synthetic biology, automation and machine learning help in improving these methods.

Nothing can replace certain studies in Humans completely. We can only hope to make those more effective by better choice of candidate drugs.


Absolutely. My phrasing wasn’t clear, I was contrasting organoids with the cell cultures mentioned in your reply, not with in vitro studies in general.


Organoids, organs-on-chips, and 3D cell cultures have all been shown to be roughly equivalent to each other, and better than standard 2D cell culture. Ultimately they are not very complex: definitely more than a monoculture, but nowhere near an organism that evolved from a single cell.

They haven't been shown to be a good proxy for a whole organism.

In terms of disease models, they are typically artificial in the sense that the cells are genetically engineered with a specific mutation that may or may not represent the disease or the population.

And if an experiment doesn't work the way you want, you try a new set, and sometimes it works the way you want.

The only two things they have over mice are that they can be of human origin, and that sometimes that human origin might be in a disease state you want to study, which might be the only way to generate a model for a complex disease...


Where in-vitro and in-silico modelling might be of great use is for known problems that come up for candidate drugs in the early clinical phase: interaction with receptors (blood/brain barrier, QT prolongation, etc.) or liver proteins or the like.

Of course, we are talking about the fruits on the drug tree which are potentially reachable by singular small molecule drugs.


I've been in science quite some time and worked with lots of rats in animal care facilities, and this is the first time I've heard that mice need 30 °C instead of normal room temperature to avoid stress.


There is an alternative: to wake up from the ignorant direction the so-called studies are taking humankind, and to stop calling it science. Dr. B.M. Hegde suggested long ago that all the studies done on animals or humans are wrong and inefficient on so many fronts. His words: according to Ayurveda, human bodies can be classified into around 200 types based on the K/P/H balance of a body. A medicine that works for one may be totally neutral for another. So when you read a paper claiming awesome results in a randomized blinded study, the question Dr. Hegde asks is: for which of the 200 human body types is it efficient? Apparently there is a body type that gives neutral or negative results (ignoring side effects). Talk about working it on animals first! (-ve downvotes welcome)


> According to ayurveda, human body can be classified into around 200 types based on the K/P/H balance of a body.

Maybe you'd first need peer-reviewed research to prove that...


Here's a nice smackdown which I don't have the time or knowledge to offer myself:

"He abandons [Dr. B.M. Hegde] all rational thinking and embraces fantastical belief systems. The sad thing is I personally know many people delaying cancer treatment being swayed by his speeches and ending up with incurable metastasis later."

https://www.quora.com/Do-you-agree-with-the-controversial-vi...


Understood. Maybe the modern world needs another Wim Hof to become a subject of existing science: validating, through the current limited medical tools, that humans can access the autonomic nervous system and heal themselves (which was thought to be impossible in modern times). The approach Dr. Hegde took is not registering with the rest of the world, according to the comments above. Maybe he should have proven his claims by subjecting them to the existing methods, just to give that level of confidence to those who trust them, like Wim Hof did.


It's not about giving confidence to people, the existing methods are part of the fundamental idea of the scientific method. Dr. Hegde's claim of body types doesn't really mean anything until there's evidence to back it up. And modern medical trials are crafted and investigated with a wide variety of statistical tools to determine the exact impact of a treatment method. Only if his claims can stand up to this rigorous scrutiny can they be accepted the way existing models are. The rest of the world will then sit up and take notice, exactly because only then will there be something for them to take notice of.

An appeal to the ancient wisdom of Ayurveda doesn't really stand on its own until it's backed by solid peer-reviewed science. It's really not that hard: a well-designed set of trials proving your point will have the entire world at your feet. Modern science is extremely receptive to new ideas that way.


Agree. Unfortunately, this is never going to happen. The few still practicing Ayurveda properly come from a background where they learned it from their ancestors within their family; they practice it for free and never allow commercialization of their knowledge, though they gladly share that knowledge with people who want to learn. Food, education, and medicine/health are the three things practitioners in Indian culture believe should be kept free. A few decades ago, if you wanted an education, you sought out a guru and became a disciple, and he taught you for free. If you had a health issue, you went to a doctor and he treated you for free. Same with food: every town/village used to have centers where food was served free to anyone. Although that's all gone now, there are still practitioners who treat patients for free and take only donations, not fees. Sounds funny, but it's easy to understand the underlying wisdom: how many studies today are funded by people with commercial interests? Research on eggs is funded by the meat industry, research on milk by the dairy industry, etc. How is that turning out for everyone?


In fact, most studies are funded either by governments or via donations from the public. Some public studies are partially funded by industry, but with a big caveat: industry has no say in how the studies are conducted, or even in what is being studied; industry contributions to public research are not bound to research outcomes. And all this information is readily available, since publicly funded research institutes publish their funding sources.

It’s true that industry is funding some (publicly published) research directly, and that some research publications fail to disclose their funding source and other conflicts of interest. But for the vast majority of biomedical research, especially fundamental research, this simply isn’t the case.

And it requires a complete suspension of one’s critical thinking capacities to imagine that some vast, weird conspiracy encompasses all of public research, to suppress the “truth” that Ayurveda works, contradicting everything we know from modern medicine as well as basic physics and chemistry. And all that just to make a few pharmaceutical company bosses rich? Why would I, lowly researcher on a sub-par salary, contribute to such a conspiracy? It’s completely irrational.


> How is that turning out for everyone?

Dramatically increased life expectancy, reduced childhood mortality, etc.

There are problems with the modern health system, but it does seem to produce much better outcomes than what came before.


I appreciate the honesty in this answer, and your willingness to accept how people see something you clearly seem to care about.

As for commercialization of Ayurveda, I see this as exactly what's happening now with Patanjali etc trying to cash in on people's trust in their culture without providing a proper research base for their claims.

Another reply to your comment above makes very good points about research funding, and in my experience, funding is to a large extent non partisan, and free from industry influence. There are even rules on disclosure of funding about major studies, which makes it possible to criticize them.

I only wish that Ayurveda is held to the same standards as other medicine, and passes through the fire of testing the same way all modern medicine has. It's how we know that antibiotics work, or about interactions between medicines, or about side effects and complications. It will ultimately benefit the field, and medicine as a whole.


> Unfortunately, this is never going to happen. The few still practicing Ayurveda properly come from a background where they learned it from their ancestors within their family and practicing it for free and never allow commercialization of their knowledge. But they can gladly share the knowledge to people who want to learn.

How is it possible that they could have prevented that information from ending up in the hands of profiteers even after all these years? And no profit hungry pharmaceutical company has ever been able to rediscover and commercialize those techniques, even though the details of them are mostly freely available on the internet?


We know that there are things in humans that make them react differently to medications. We know _why_ this is, because of rigorous science.

Typing out that you are prepared for downvotes doesn't really change who has drank the Flavor Aid.


I don't get it. If you can't trust researchers who perform studies to validate their results, then what is it which makes this guy trustworthy to you? He has done strictly less than that to validate his results.


I am not against validity through studies. But the approach/methodology taken in most studies on food/drugs is very limited in nature and often flawed, giving skewed results (e.g., I hope everyone knows about the approach taken in dividing fat into 3 categories, and how it was taken as a reference to ruin the health of so many humans over decades through refined oils). I totally agree with him in that regard.


Dr. Hegde sounds like a quack.


What about drugs that work in humans, but not in mice? Could you ever get them into trials without animal data?


This is more worrying: how many medications did we miss because they don't work in mice? Do we humans only get medications that work in both humans and at least one other species?


The same number we missed because, as it turns out, killing people with unknown medications is a bad thing.


There is some work in this direction that is based on work with human cells (preferably primary cells) and building an otherwise compelling case. You have to be incredibly careful to do the appropriate safety studies and find some way to show efficacy in animals before you put the treatment into humans still due to FDA (and ethics) requirements.


If I'm remembering correctly, aspirin is a drug that doesn't have the intended effect in mouse studies.

I could be wrong about the exact drug, but if it isn't aspirin it's something similarly common. Hard to google, because a million studies pop up.


You test on monkeys, or chimps.


My go-to example is Vitamin C: it's an essential nutrient for humans, but mice, like most other animals, are capable of simply producing it. That's usually a good starting point for the "they're really different" conversation.


For the curious: https://en.wikipedia.org/wiki/L-gulonolactone_oxidase#Conseq...

I couldn't really make sense of the two proposed theories, other than that one suggested it was perhaps an advantageous adaptation in the presence of malaria.


I wonder if it's possible to use machine learning to speed up this process. Rather than doing pure computational biology, can we use a hybrid system? For instance, can we test a drug on mice and then feed some set of parameters into a model to estimate the probability of it working in humans, rather than testing it directly on humans? I'm sure this won't be a perfect model, but I wonder if it could raise the 14% to 20% while using fewer resources.


ML (or anything, really) that could predict which molecules would work in humans without testing them in humans would probably be one of the most important advances in medicine ever, and would massively lower drug costs and open up new areas for drug research.

Phase 2 failure -- i.e., stuff failing in humans after working in mice -- is the biggest cost driver in drug dev. You spend $50-100M+ to get there and only ~35% of things succeed. If you could eliminate that failure mode, you'd cut a massive chunk of the cost and risk of drug dev.

However, I don't think ML is the thing to study if you want to create ML that predicts whether drugs will work in people. There is not enough data about how the human body works to develop a good model. Better to work in biology to develop good models of disease, or to develop tools to better measure the molecular biology of the living human body. It's unclear if predicting drug effectiveness with ML will be possible in our generation.

For more near term applications of ML in drug discovery and development see here: https://www.getrevue.co/profile/nathanbenaich/issues/6-impac...


The short answer is "probably". The use of AI in drug design is only just now starting, we'll see the results in about 10 years. I optimistically expect AI to be able to shave off ~2-3 years of the drug lifecycle.

edit: but it won't do much to optimize P2 trial success specifically, it'll simply help iterate substantially faster in the pre-P2 trial phases.


I'm not arguing for pigs over mice, but I ask: since transgenic pigs are often said to be potentially viable future transplant sources, wouldn't trials in pigs be a better model?

(Ethical nightmares)


Pigs are used in research, but they are expensive. Maybe 1000 times the cost of mice? The techniques described in the article (bad choice of temperature and environment) are cutting corners, but testing on pigs instead would completely change the amount of testing you can do.


Cost. Growing pigs is expensive, it takes more space, more food, and more time.


We are more closely related to rodents than to pigs. Whether this is an argument for one thing or another or for nothing at all, I really have no idea.


This isn’t quite true. The issue is that rodents evolve a lot quicker due to effective population size and generation time. As a consequence, the genetic divergence between rodents and humans is in fact greater than that between humans and pigs (see e.g. https://europepmc.org/abstract/pmc/pmc1142312), even though pigs branched off earlier than rodents.


I had a vague feeling that something like that might be in play. I obviously have some reading up to do. My assumption was that our branch would, until quite recently, have been rapid breeders too. And the pigs' branch as well.


How many of these drugs actually do work in mice? Every study that comes out on the topic shows at least 50% and often closer to 90% of preclinical research does not replicate or even is not replicable in principle.


Research and drug development are in pretty different leagues. A research paper might be 3 researchers with 20 mice. When you're looking at preclinical trials, it's more like 300 people and 5000 mice per trial, and god knows how many trials before something goes through.

The reproducibility crisis is happening in academia. There isn’t that much room for uncertainty when it comes to industrial drug development. You won’t find drugs making it all the way to market only for “another group” to be incapable of reproducing the effect.


> "When you're looking at preclinical trials, it's more like 300 people and 5000 mice per trial, and god knows how many trials before something goes through."

This is sampling to a foregone conclusion. It is guaranteed to yield unreproducible effects.

> "You won’t find drugs making it all the way to market only for “another group” to be incapable of reproducing the effect."

I doubt this from the description above.


While mice are not humans (who would have known), we could do a lot better by moving away from using inbred mice.

More positively, we could make a lot more use of domestic pets in research. If we started treating more pets as research subjects, we could do a lot of great research.


The purpose of inbred strains is to isolate the effects of the drug and the environment from the genetic effects. Drugs are routinely tested on multiple such strains.

There is a huge variety of mouse strains in use and under development. Basically anything you would see in a "fancy mouse" you can have as a strain.

Pets, like cats and dogs, owned by private owners, are used for studies all the time. And not just for veterinary purposes. Dogs have a wide variety of tumors, and studies on these can be used to inform research for Human oncology.

Beagles, kept in laboratory populations, are routinely used to study the cardiological safety of drugs. Such studies are rarely fatal, and dogs often have several of them during their time in the lab. They live in "colonies" of about 10 dogs each, and they look and behave just like very happy pets, even if a lot less trained. In the cases of the colonies I have been in contact with, they live in the lab for about 6 years, then they are spayed/neutered, get a complete dental and are adopted by private owners, often vets and vet students whom they already know.


The problem is that in most cases a new drug is not tested on multiple strains. Often we get the result for the C57BL/6 mouse and nothing more. The end result is often meaningless, since it is strain-specific.

Yes, I know pets are used in studies, but we use pets at only a fraction of their potential. We have tens of millions of pets and we use a few thousand at most every year. I am lamenting the waste.


This works up until the part of the research where you cut the pet open to see what the effects were.


The thing with inbred mice is that the results are more consistent than with regular mice. This allows you to use fewer mice and still get statistical significance. If you do not use them, the sample size you'd need drastically increases. Animal research is plenty expensive as it is.
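The sample-size effect is easy to quantify with the standard two-group formula; the effect size and standard deviations below are illustrative assumptions, not figures from any real study:

```python
# Required mice per group for a two-group comparison (normal approximation):
#   n ≈ 2 * (z_alpha + z_beta)^2 * sigma^2 / delta^2
# alpha = 0.05 (two-sided), power = 0.80; delta and sigma are made up.

z_alpha, z_beta = 1.96, 0.84   # standard critical values for alpha/power above
delta = 1.0                    # assumed true treatment effect (outcome units)

def n_per_group(sigma):
    """Mice needed per group for an outcome with standard deviation sigma."""
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2

print(round(n_per_group(1.0)))  # inbred strain, low variance  → 16
print(round(n_per_group(2.0)))  # outbred, doubled SD          → 63
```

Required n scales with the variance, so doubling the outcome's standard deviation quadruples the mice needed; that quadratic penalty is the whole cost argument for inbred strains.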


Yes, I know why inbred mice are used, but the end result is often a more precise result that is meaningless. Use more outbred mice and do fewer experiments.


Good luck in study section, especially with mouse specialists. I hope you have a mighty compelling argument for why a random sample of (say) house mice is going to be free of unmeasured confounding effects...


Humans have a shit ton of unmeasured confounding effects though. Which means there is an even bigger leap between inbred mice and human studies. Not saying we should abandon the current way of doing things, but a little more diversity in funded research might be a good thing, because there are some modes of failure right now.

A way to evaluate drug efficacy that relies less on comparisons of large groups would be a big step, because there are a lot of disorders that are probably multiple mechanisms manifesting in similar ways

Ultimately there is always going to be some trade-off between making a rigorous statement and a generalizable one, especially because biology seems to have some pretty messy abstractions. As collecting and analyzing large amounts of data becomes more feasible, I don't see why there shouldn't be some effort to consider genetic (or environmental) heterogeneity in animal models. Ideally, I think it'd be cool to approach any given question with parallel methods that address how the hypothesis holds up both in narrow but well-controlled situations and in the "wild".


I hope the person demanding using inbred mice has a compelling argument for why inbred mice are a good model of outbred humans.

Yes, we would do fewer experiments, since you would need more mice per experiment, but the results of those experiments would be far more robust.


Good luck getting people to allow their pets being used for research!


People are actually very motivated to contribute to science, here's an example of research involving pets and extending their healthspan across aging: http://dogagingproject.com

Phase I clinical trial: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/28374166/

Phase II is in progress.

Interview with Matt Kaeberlein from the project: https://www.leafscience.org/dr-matt-kaeberlein-the-dog-aging...


This is a study of a new application of a drug that has already had very extensive trials in both humans and several species of animals that have demonstrated it to be safe (it's FDA approved for human use). I think you will find people dramatically less willing to test brand new drugs on their pets.


This question has already been asked of pet owners, and when the reasons are explained to them, most are willing to let their pets participate in trials of even new drugs.


Beagles are already commonly used in medical research.


People are really keen to allow their pets to be used in research as long as they understand the importance. Pets are a massive resource that is just being wasted.


My cats are certainly not being wasted. Who else would step across my keyboard when I’m typing?


You should consider yourself lucky to be able to use your cat's keyboard while she is taking her afternoon stroll.


I find this attitude astonishing. My pets are members of my family, and I'd generally rail against experimenting on my family.


Would you let your child participate in a clinical trial if they had a disease for which there was no good treatment? Pets suffer from most of the diseases humans suffer from (cancer, heart disease, diabetes, etc) and they also have no good treatment options. We should be using these sick pets in trials to find new cures for pets and humans.


Maybe you weren't clear or I misread you. I understood you to be saying that pets were generally not being used to their potential. (E.g. perfectly healthy pets could be used as research subjects, because they aren't doing anything else valuable at the moment..)


Of course not. No one would think experimenting on healthy pets would be reasonable or ethical, but not using our sick pets to help other pets and humans is a huge waste.


This all makes sense! I’ve heard there is a similar effect for single cell research, where a lot of the study of human cells says that they are full of support structures and making all kinds of chemical signals that actually are a stress response to being placed on a piece of glass rather than floating in liquid. Kudos to these researchers for making an honest attempt to get it right.


I wonder if a similar proportion of drugs that work in Humans wouldn't work in mice. Are we missing out on a ton of potential?


I'm not sure how common the practice is, but I've been told (by a trusted source) that if a med study doesn't get the desired results they just get a new set of mice and try again.

That is, results can vary based on the mice used, for reasons not well understood (gut bacteria, perhaps).


I work in pre-clinical pharma research, and this isn't really true in my experience, but there are a couple caveats. At least in my experience, we wouldn't run the same study again, but we might change the model. There are a few models that all point to a similar indication, so we might try both. If a treatment works in one but not the other, it's definitely seen as less strong evidence of efficacy, unless there's some very compelling mechanistic reason. That's not quite as crazy as it sounds, though, because translation from mouse to human is already poorly understood, so it can be hard to know which model will suggest positive effects in clinical trials.


I'm interested in how compounds are chosen for medical research. Do you just start with a wide range of compounds which you are able to make and fire them at a range of different potential medical issues? That seems staggeringly unlikely to find something useful.

I know some drugs are extracted by isolating a compound from a traditional remedy. That obviously makes sense.


It's just about every way you could imagine. I work at a large company, where my department has something like 75 people in 25 labs working on one disease area. Anything we think we can build a case for is worth considering, and we'll run down lots of leads that don't turn up anything. We get chemists involved to make tool compounds that might hit your target pathway, and eventually, if you build enough evidence to convince management, you'll put together a team. That team will involve a few biology labs, a chemist (could be a biochemist) or two, a toxicologist, and a pharmacokinetics person (ADME).

Then, if it's a chemical target, they'll make thousands of compounds to test in preliminary assays that check, at the most basic level, whether the compound attaches to the target. The ones that make it past that might make it into a cell-based assay, then rodents. The path each treatment takes through specific assays is different for every project depending on the specifics, but it generally follows that progression.


In my (admittedly indirect) experience, there's always a specific medical issue being targeted, and usually a specific biochemical system (i.e. protein binding partner). Very occasionally, they end up with a drug for a condition completely different from what they were trying to treat, but that's the exception. (Viagra was one of these; I would have loved to have been a fly on the wall when they realized the implications of the side effect profile from the original trial.)


This is a huge field of research and development, with a number of different approaches.

Almost always, though, you are starting with a particular medical problem in mind, so the first step is to develop some kind of assay for detecting compounds which might be useful in treating it. Ideally, this would be a simple biochemical reaction, but it might be something involving cell culture.

For example, if you wanted to find new painkillers, you might look for chemicals inhibiting cyclooxygenase (as ibuprofen does). You can buy kits for doing that assay commercially [1], where you prepare a solution of the enzyme, add your test chemical, then add a substrate which emits light when the cyclooxygenase breaks it down, and measure the intensity of light produced.

If you wanted to find new anti-cancer drugs, you might look for drugs which cause proliferating cells to get stuck in the metaphase step of the cell cycle (as paclitaxel does). You would plate out some rapidly proliferating cells, add your test chemical, wait twelve hours, then fix them, stain them with a DNA-specific dye, and use a microscope to count the number of cells in metaphase (which is quite distinctive [2]). This is a lot more tedious than the cyclooxygenase assay, but we have robots that can handle liquids and plates of cells, and operate microscopes, and process images, so it can be highly automated, at a cost.

Then you take your assay and go hunting for molecules.

One approach is indeed just to start with a wide range of compounds. You can get libraries of small molecules [3] [4], so you give them to your robots (or graduate students), and put them all through your assay to find which ones work.

You can also start with mixtures of compounds, perhaps obtained from natural sources. For example, you could go and collect twenty species of fungus or sea sponge, grind them up, and put the extracts of each through your assay. If anything works, you then fractionate the extract somehow (e.g. by chromatography), and put each fraction through your assay. You pick fractions which work, fractionate them further, assay the sub-fractions, and repeat until you have got a pure substance with some activity, which you then characterise. Here, you can use knowledge of ecology and biology to pick likely species - for instance, fungi are a good source of antibiotics, because they have to make antibiotics to defend themselves in their natural habitat.
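That fractionate-assay-repeat loop is essentially a recursive search over the mixture. A toy sketch of the idea (all names hypothetical, assuming an idealized assay that lights up whenever any active compound is present in a fraction):

```python
def assay(fraction, is_active):
    """Toy assay: positive if any compound in the fraction is active."""
    return any(is_active(c) for c in fraction)

def isolate(fraction, is_active):
    """Bioassay-guided fractionation as recursive search:
    split the mixture, assay each half, recurse into active halves."""
    if len(fraction) == 1:
        return fraction  # down to a pure compound
    mid = len(fraction) // 2
    hits = []
    for half in (fraction[:mid], fraction[mid:]):
        if assay(half, is_active):
            hits.extend(isolate(half, is_active))
    return hits

extract = [f"compound_{i}" for i in range(64)]
active = isolate(extract, lambda c: c == "compound_37")
# active == ["compound_37"]
```

With one active compound this behaves like binary search (~log2 of the mixture size assays per split level); real fractionation is far messier, since activity can come from several compounds or from synergy between them.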

Or you could start with some knowledge of the structure and function of the target (from X-ray crystallography, NMR, and good old fashioned biochemistry), and try to rationally design a molecule which will bind to and inhibit it. Computer simulations are useful here. Combinatorial methods let you design hundreds of molecules which might work, and then put them all through the assay.

Or you could hope that an antibody will do the job, and inject your target protein into some mice, wait for them to make an immune reaction to it, then collect their blood, extract B-lymphocytes, culture them in bulk, purify antibodies from the culture, then assay the antibodies. If something works, split the lymphocytes into single-cell clones, and assay each clone's antibodies one by one.

I don't work in this field, so my knowledge of these techniques is from undergraduate study, and one relative who grinds up sponges. It's possible some of the approaches I mention are obsolete, or were only ever speculative.

[1] https://www.abcam.com/cyclooxygenase-cox-activity-assay-kit-...

[2] https://www.le.ac.uk/bl/phh4/roottip.htm

[3] https://www.tdi.ox.ac.uk/small-compound-libraries

[4] https://wiki.nci.nih.gov/display/NCIDTPdata/Compound+Sets


Thanks, that was exactly the answer I was looking for but couldn't quite figure out how to formulate the question so that Google would provide a useful answer!


Wouldn't that introduce the possibility that a study gets the desired results just due to randomness?

If you roll a die enough times, eventually you'll get a 6.


Any mouse trial should use 4 different breeds: one of the academic hallmarks (Black 6 or BALB/c) for publication purposes, one that represents your model (or so you think), because that's the logical thing to do, and a truly wild-type one (captured in the wild) to capture the noise.

Even this longevity piece needs to do that. These 4 breeds will all have different immune systems, cognitive behaviors and lifestyles, all of which are central to aging.


The issue with that approach is that you’d multiply the number of sacrificed animals by 3 (or 4, did you forget a breed?), for limited gains in insight. Your approach would be a lot more defensible if it didn’t involve killing vastly more animals, and a fairly large increase in research cost. These things are always a balance, and there’s no evidence that always using your approach would be worth it.


I agree "any mouse trial" is too strong a statement, and 4 groups is also kind of infeasible. But I do think there is not enough research on how these particular inbred mouse strains might bias research. Even the same strain sourced from different companies has produced different results, due to microbiome differences [1]. It's possible that we are way "overfitting" to these particular academic mice.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5870873/


That makes 3...


The 4th is a group of dead mice, to make sure you don't accidentally start a zombie apocalypse.


Jokes are discouraged on HN, but you did make me snort out loud.


Human !== Mice

We can even argue that Human !== Human, due to a million variables that differentiate them at the microcellular level.


Is the converse true? Are there drugs that would only work in humans but not any animal models?


Short answer: Humans are not mice


Fascinating and very timely for me. I was privileged to sit through a talk by the head of cancer immunotherapy at our institution where he was explaining the past, present and future journey of cancer immunotherapy research, and this exact topic of mouse model applicability to human subjects came up. His TL;DR on this was that differences are likely due to:

1. Controlled lines of mice bred specifically to minimize confounding during pre-clinical studies

2. Immune response activation pathways differing from "in the wild" situations for actual subjects

3. The immense diversity of gut microbiota, their temporal and inherited gene signatures, and their impacts on drug metabolism, which are incredibly hard to control for in experiments

Very nice to see an approachable post out there outlining a lot of these issues.


Because mice aren't humans.


Because we are not mice.


Actually, we're about 75% mouse, genetically speaking.


[flagged]


We detached this subthread from https://news.ycombinator.com/item?id=19269700.

> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

https://news.ycombinator.com/newsguidelines.html


Aside from the obvious reasons why testing on a fetus is more ethically fraught than just ending it, it's also stupidly dangerous to test things in a live human, and a mother counts as a live human.


Ya, obviously that person only wants to talk about abortion.

That joke/idea was only tangentially connected to the topic of this article.

klmr 24 days ago [flagged]

> society has reached the point where we do not apply intristic value to a baby inside the womb, only outside the womb

This is completely false, and a typical mischaracterisation of abortion by anti-choice advocates. If what you said were true, abortion would generally be permitted up until the moment of birth. Yet virtually nobody wants that. Abortions are performed in extremely early stages of gestation. Where exactly to set the cutoff point is a touchy subject, but there’s a broad consensus that (with few exceptions) abortions are only permissible while the foetus hasn’t developed a connected central nervous system yet, has no functioning pain reception and no higher brain functions (in fact, in most countries the cutoff is considerably earlier than even that).

In other words, “society” does not apply an intrinsic value to a clump of cells (if it did, chemotherapy and amputation would be similarly ethically problematic). Instead, value is derived through tangible qualities, such as thought, and pain reception. Disagree all you want but don’t invent irrational reasons because they’re easier to attack. That’s deeply dishonest.


> Abortions are performed in extremely early stages of gestation

Typically, yes. But later abortions are sometimes necessary as well, and when they happen it is typically to a person who wanted to keep the child.

My wife and I had a son that failed to develop heart chambers, noticed at a 24 week check up. We had two options: He was likely to die in utero, but if we did not terminate, there was a possibility he could survive to birth, only to immediately die a painful death. We opted to not increase human suffering with a procedure that is no longer available in the state of Ohio, due to the heartbeat bill. I might also add that women have lost their ability to have children due to that bill, it's god awful.

Ultimately, I think we should accept that abortion is complicated, and get these pointless moralizations out of the law books.


A few (but not many) people would be against aborting a nonviable fetus. Fortunately these cases are rare, and IMO medically necessary abortion should not be conflated with "abortion on demand" in policy/laws.

I'm sorry for you and your wife that you had to go through that, it must have been exceedingly difficult for you.


[flagged]


Look up the statistics. The majority of abortions occur before 21 weeks (only 1.3% after). The vast majority of late-term abortions are for medical reasons, such as non-viability of the fetus or endangerment of the mother. The point of the law is to prevent the government from interfering in a doctor's ability to provide medical care. Northam made a gaffe when speaking, but he is an actual doctor who has witnessed the actual terrible tragedies that can happen during pregnancy, as opposed to armchair quarterbacks who are jumping up and down to get their religious worldviews validated by law.


> After 20 weeks of gestation

Doctors that do optional abortions at this point are, at worst, rare. At best, they'll lose their license.

> You either value life or you don't.

We are at an imbalance point on this planet, and we have overpopulated. Reducing the number of people being born in high carbon usage areas is probably one of the best things that we can do to protect life.



