Medical research is conservative, cautious, and regulated because we strive to move beyond the ugly abuses of our past[2]. It should not be the Wild West.
For insight into modern medical ethics, read the Belmont Report[3]. It's what contemporary researchers must read before conducting any studies involving human subjects. Where I work, we're tested on it.
(This is opinion, but...) The harm today is research guided more by profit than by benefit. Chronic disease is more profitable than cures or prevention. With rapidly emerging antibiotic resistance, there is little private incentive to develop new antibiotics. We must rely on and expand research in the public interest (funded by the NIH, philanthropy, and so on) to balance private biomedical research.
As kauffj alludes to, though, it's also worth considering the implicit harm done through inaction. It should not be the Wild West, but conversely, articles like this are indicative of a bias toward preventing active, explicit harm at the expense of more unseen, implicit harm through inaction.
A happy medium is good, but I'd say that "medium" falls closer to the conservative side.
A lot of the value of "First, do no harm" comes from the point that we have a bias toward action over inaction. Humans, including doctors, want the ability to control our futures. So the temptation is constantly to do something instead of saying "well, that sucks, but we don't have strong reason to believe anything we do will substantially improve things." That's why we perform way too many interventions (particularly invasive and surgical ones) that at best marginally improve average outcomes, at incredible cost and substantial risk. Doctor and patient both want a magic fix. And all too often the fix is more harmful than helpful to the individual patient, even if on average it might be slightly beneficial.
Obviously there's a place for heroic surgical efforts and research into exotic drugs. But we overinvest in those compared to the alternatives of prevention and palliation. Conservatism is useful in medicine.
There's a recent study [1] showing that outcomes for cardiac emergency patients are better (way better) when they're treated at a (good) hospital while the top cardiologists are away (at a conference).
That is, patient outcomes are better for patients who go to a hospital with a good reputation and outstanding doctors (a teaching hospital) -- but they are best when the top doctors are NOT present in the hospital when the patient checks in.
After controlling for many variables, the only explanation the authors of the study came up with for this apparent paradox is that top doctors want to try everything, especially the latest procedure or drug, while non-top doctors are more conservative.
At least in the short term, being conservative saves lives.
I don't think we need to treat medicine as one thing that is too risky or too conservative. We need to evaluate different branches of medicine separately. In the current culture, we have a bias to accept more risk from surgery than from pharmaceuticals.
Imagine you want to appear more muscular. You could get implants under general anesthesia, or you could take steroids. The first is legal, but the second is illegal. It is illegal because society deems taking steroids excessively risky. However, general anesthesia and surgery are orders of magnitude more dangerous.
This same bias is present throughout medicine. Many surgical procedures and tests would never pass the FDA's requirements for pharmaceuticals.
For sure. I don't have a strong opinion either way on whether medicine is too conservative or liberal right now, which probably means it falls within the happy medium ground for some reasonable definition.
Mainly, I think it's worthwhile to point out what appears to be dogma on both sides of any debate.
I'm really surprised that this is news, or that it's being upvoted on HN. Apparently deaths during trials are really rare. From the article:
>A meta-analysis of non-cancer Phase 1 drug trials, published last year in The British Medical Journal, found serious adverse events in only 0.31 percent of participants, and no life-threatening events or deaths.
And the last event comparable to this, according to the article, happened in 2006 in a different country.
I was surprised by this. I would not have expected drug trials to be risk-free; in fact, I thought that was the whole point. Apparently they are already super conservative and safe. I'm not sure why people are alarmed or calling for more regulation.
It's probably impossible to achieve 100% safety, and certainly not desirable. The parent comment is definitely right that we kill more people through inaction and need to be far more liberal. Total lives saved is all that should matter.
Somehow one person dying out of many drug trials over many years is news that's on the front page of HN. But the 22,000 people who died of cancer today aren't.
Part of the issue is, medically, we (where we = the medical community) killed those people, whereas cancer killed the other 22,000.
Medical ethics are expressly meant to protect subjects from experimentation purely on the basis of "This will probably help more people than it hurts." Given the events that prompted the development of those guidelines, I'm not sure they're wrong.
I'm not sure how this is even controversial. We should try to save the most people possible. It doesn't matter who is responsible for a death, it's just as sad whether it's cancer or a failed drug.
Any rational person would take a one in a billion risk of dying in a medical trial if it gave them access to all sorts of cures and treatments for terrible diseases. And that's if we were selecting participants at random from the entire first world population, when it's actually done by consenting volunteers who get paid for the risk they take.
It does matter who is responsible for a death. In the same way we don't pick people off the street and harvest their organs, so long as there are 2+ people in the donor pool who could use them.
To follow that:
1. People aren't rational.
2. You're not talking about "Roll this die and we'll cure all manner of things." You're talking about "Roll this die and we maybe, if everything goes well and it shows efficacy, might see a treatment for a disease on the market in a few decades."
3. It's not 1 in a billion. It's 1 in X, where X is unknown. Asking a volunteer to take that risk is both easier and more ethical if you've done your best to minimize X.
>In the same way we don't pick people off the street and harvest their organs, so long as there are 2+ people on the donor pool who could use them.
Well, ideally we would do that, if it really saved more people. It could be done entirely voluntarily, through a lottery. And it would be rational for everyone to sign up: since each donor's organs can go to several recipients, you are more likely to need an organ at some point in your life than to lose the lottery.
The rest of your comment is just a misunderstanding of statistics. Across decades and hundreds of Phase 1 trials, very, very few people have died. The risk is known. We have also invented many new drugs that have had many benefits and saved many lives, which can also be measured.
I have a PhD in Epidemiology. Studying risk is what I do for a living. And no, the risk is not known. It's known to be "pretty damned low" for the aggregation of all Phase I trials, but for any given patient, and any given drug, we don't know the risk.
Approaching clinical trials in human subjects with caution and care is what has made that risk as low as it is.
That's just not how probability works at all. You absolutely can put a number on risk. You can look at past drugs like it and determine what percentage caused harm in humans after passing animal trials. You can narrow it down to drugs that are similar.
If you want to get really advanced, which isn't necessary, you could fit a Bayesian model to all the data you have, or use a prediction market to determine the risk.
Don't confuse uncertainty with unpredictability. Everything is uncertain, but few things are unpredictable.
As all of these drugs are novel, the best you can do with "similar drugs" is come up with a decent prior for a Bayesian risk model. But since each of these drugs is new and untested, no, you don't have a numerical risk. You cannot say "1 in 167,234" or "1 in 1 billion". You can say "likely to be very, very small", but if you can conjure the risk of side effects for a drug with no data ever generated on it, I'd be impressed.
For the sake of argument, say there have been 100 drugs tested in the past. One of them killed someone, the rest were completely harmless.
Now the next drug comes along to be tested and you know absolutely nothing about it. You need to estimate the risk, and all you know is the information above. Of course your estimate must be 1%. Unless you have more information to update that probability, 1% is the optimal baseline prior.
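To make that concrete, here's a minimal Python sketch of the baseline-rate estimate, using only the made-up numbers above (1 death in 100 past trials), plus the kind of Beta-Binomial refinement the "fit a Bayesian model" comment alludes to. The figures are illustrative, not real trial data:

```python
# Minimal sketch of the baseline-rate argument, with the made-up
# numbers from the example: 100 past trials, 1 of which killed someone.
deaths, trials = 1, 100

# Naive frequency estimate -- the 1% figure above.
freq_estimate = deaths / trials  # 0.01

# A Bayesian refinement (Beta-Binomial): start from a uniform
# Beta(1, 1) prior over the death rate and update on the data.
# The posterior is Beta(1 + deaths, 1 + survivals); its mean is
# Laplace's rule of succession, which stays nonzero even at zero
# observed deaths.
alpha = 1 + deaths
beta = 1 + (trials - deaths)
posterior_mean = alpha / (alpha + beta)  # 2/102, about 1.96%

print(f"frequency estimate:  {freq_estimate:.2%}")
print(f"Beta-posterior mean: {posterior_mean:.2%}")
```

The two numbers differ slightly because the uniform prior hedges against small samples; with more historical trials they converge.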
You seem to be confused about what probability is. It's just a representation of uncertainty.
There is no such thing as "actual probability"! An event either happens with 100% probability or fails to happen with 100% probability; there is nothing else. If you flip a coin, it doesn't have a 50% probability of landing heads. The atoms and their path through the air are predictable, and it is already determined that it will land heads by the time it's left your thumb, and perhaps long before.
But you don't know that, so you can only estimate 50%. All probability is just a representation of your own uncertainty.
If you bet anything other than a 1% risk, over time you will lose money. After thousands of trials, about 1% of them will go bad.
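A rough way to see the betting claim is a toy simulation, assuming the made-up 1% true rate from the running example and using log loss (a proper scoring rule) as a stand-in for the bet; a sketch under those assumptions, not a proof:

```python
# Toy simulation: under a proper scoring rule (log loss), the forecast
# that matches the true rate scores best on average. The 1% "true"
# rate is the made-up figure from the example above.
import math
import random

random.seed(0)
TRUE_RATE = 0.01
N = 100_000  # number of simulated trials

# True/False for each simulated trial: did it "go bad"?
outcomes = [random.random() < TRUE_RATE for _ in range(N)]

def avg_log_loss(p):
    """Average log loss of always forecasting risk p."""
    return -sum(math.log(p) if bad else math.log(1.0 - p)
                for bad in outcomes) / len(outcomes)

for p in (0.001, 0.01, 0.05):
    print(f"forecast {p:.1%}: avg log loss = {avg_log_loss(p):.4f}")
# The forecast closest to the true 1% rate comes out lowest.
```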
Because people are actually kind of shitty at estimating probabilities, and overconfidence in this area is likely to kill or injure a lot of people. I'm a utilitarian by temperament, but a deontologist in practice because I'm aware that my powers of foresight are actually quite limited and I will very rarely be presented with neatly predictable conflicting imperatives as in the trolley problem. Many self-professed utilitarians wildly overestimate their own intellectual and analytical abilities and come to grief on the rocks of failed predictions. Unfortunately this also tends to involve a lot of innocent casualties too.
On a case by case basis you might have a point, but over time we have tons of statistics on how many drug trials fail and hurt people, vs succeed and save people. It would be relatively easy to calculate the risk/benefit objectively.
That's also an ideal use case for prediction markets, if you want to get really good predictions.
Also, I've heard this argument used before in other contexts. Do people really believe that because you can't make perfect predictions, you should default to inaction? That's just silly. You do the best with what you know and believe.
And if you really believe your predictions are wrong, then you can just adjust them accordingly and make them correct. If you don't believe that, then you obviously wouldn't find that argument convincing in the first place.
Because it very, very easily goes some very dark places, especially when you start talking about extreme risk to one person versus a diffuse benefit over several million.
>> "I'm really surprised that this is news[...] Apparently deaths during trials are really rare."
That's why it's news. If it were a regular, accepted occurrence, it wouldn't be news. At the start of the Syrian civil war, deaths and battles were front-page news; now they happen daily and we don't hear about them.
I don't think he is drawing the connection you see there. I read it the way you did the first time.
I think he meant that he is surprised it is newsworthy and is also surprised at how rare they are. He was assuming these were more common, but not being reported on.
I remember reading a paper on this topic many years ago (sorry, I can't find it now): the participants were at higher risk traveling to the trial site than from anything that happened during the trial.
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it's better for 1,000 patients to die of neglect than one from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.[1]
It's entirely possible that medicine would still be conservative, cautious, and regulated even after becoming less conservative, cautious, and/or regulated, depending on its starting state. It's important to regularly assess whether a policy is too conservative or not conservative enough, as it's trivial to show that neither extreme is as useful as the middle ground.
You know the best way to "do no harm"? Do nothing. If you accept that doing nothing is, in fact, doing harm, then the comment you're replying to has a good point.
Gilead offers a cure (well, now, two) for Hepatitis C, yet there is an uproar about the cost (US$60,000 and up).
So, they are under a lot of pressure to drop the price severely and already charge much less outside the U.S.
Now, contrast that with if they had made it a chronic treatment, needing dosing forever, and charged US$10K-15K/year. The media would hardly have batted an eyelash at that cost... and Gilead would have made more money in the long run (at US$12.5K/year, the chronic treatment passes the US$60K cure price after about five years).
Or vaccines, which have a guaranteed market basically forever (or at least until eradication), and are an active part of many drug companies' portfolios.
It's a popular flippant statement, but it's not borne out in fact.
Blindly applying "do no harm" to drug trials ignores opportunity cost. You can kill many thousands of people by delaying the release of a drug for a few months.
It is one thing to be cautious in the medical advice you give to a patient. It is another to legally prohibit the patient from taking bigger risks. The idea that anyone but myself has authority over what goes into my body is absolutely ridiculous.
> The idea that anyone but myself has authority over what goes into my body is absolutely ridiculous.
Casting this as a ridiculous argument completely ignores the viewpoint that there may well be externalities caused by this behavior that don't necessarily affect the individual but do affect other individuals. Take the consumption of illegal drugs as an example: the argument can be made that it should be the choice of the individual and not restricted by society, but there is potential for significant costs (due to the behavior of the individual while under the influence of the drug) to be placed on society, hence the restrictions placed on individuals.
To be clear I am not intending to debate drug policy or be an apologist for the current war on drugs. My point is to illustrate that a perspective that considers the externalities of a behavior is not a ridiculous position to take as you assert.
I'd say that drug prohibition is ridiculous. It's not only an attack on people's freedom, but also a boon to organized crime and terrorism. A lot of people are willing to pay a lot of money for the product, so if the government stops honest businesses from selling it, the guys with the guns will step in.
This seems to go back to the question of whether inaction is an action (or whether it counts as partial action). If acting causes harm but stops greater harm, does it count as more or less harm than not acting? I've never seen a definitive answer either way.
It depends on your values. If you go "cold-hearted" and say each life is worth the same, the mathematics are rather simple. Add morality, religion, and philosophy to the mix, and you're out of luck.
1. https://en.wikipedia.org/wiki/Primum_non_nocere
2. https://en.wikipedia.org/wiki/List_of_medical_ethics_cases
3. http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html