Instead, they say, the standards should take into account the severity of the diseases that the drugs seek to address.
The FDA already does this. Take a look at the Duchenne muscular dystrophy drugs currently up for review. One of them failed to show a statistically significant difference from the control group, and the other was supported by only a single trial of 10 patients.
The likelihood is that one or both of the drugs will be approved since no treatments currently exist.
Then take a look at diabetes drugs. Any hint of cardiovascular side effects and the FDA slaps a black box warning on the package and asks for a 10,000-patient follow-up trial. This makes sense because drugs already exist to control diabetes.
This article seems to lay out a more mathematically driven method for statistical testing, but it's a bit disingenuous to say this is something new for the FDA.
Keep in mind also that as a patient you're allowed to take basically any drug you want; companies just aren't allowed to sell those drugs to other people and claim they treat a specific disease without FDA approval. So this whole article is basically just shilling for even less stringent requirements to market a drug as effective, even though that hurdle is already trivially easy to overcome in these cases.
It is rather hard to take a drug if the company won’t give it to you.
Pharmaceutical companies are very reluctant to even give drugs to desperate patients because the FDA requires them to report all adverse side-effects even though they have no control over the conditions in which the drug is used. You can actually kill a totally fantastic drug by letting people use it outside of a controlled clinical trial.
I think you might have a misunderstanding of what a compounding pharmacy can do. They can't just whip up some chemical on demand. If a new drug is not for sale they can't compound it.
Don't forget the safety aspect of it as well. Even if a company wasn't claiming anything about their drug, if they sell something unsafe, they will get punished.
This applies to all products, not just medicines. If you sell any product that you know (or should have known) is unsafe, the tort lawyers will be after you as quick as a flash. There really is no need for additional product liability regulations for pharmaceuticals.
Another concern with approving drugs more easily: you might then never find out if they really work or not.
The reason we need very large, expensive, and strict clinical trials is because it's hard to see if a drug actually works or not otherwise - too many confounders. So if we pass drugs through with a higher error rate, we'll likely never know if some of them actually work or not. Maybe not a problem for a disease with no treatment at all, but imagine you have 10 treatments and you suspect that 7 of them don't actually work, but you don't know which 7. Is that better than just having 3 available treatments where only 1 of them doesn't actually work?
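For a rough sense of that trade-off, here's a back-of-envelope sketch (the counts are the hypothetical ones from the comment above, and it assumes the prescriber has no way to tell the effective drugs from the ineffective ones, so the pick is effectively random):

    # Back-of-envelope comparison of the two hypothetical scenarios above.
    # Assumes nobody can distinguish effective from ineffective drugs, so a
    # patient's treatment is effectively chosen at random.

    def chance_chosen_drug_works(n_total, n_ineffective):
        """Probability a randomly chosen treatment actually works."""
        return (n_total - n_ineffective) / n_total

    lenient = chance_chosen_drug_works(n_total=10, n_ineffective=7)  # 0.30
    strict = chance_chosen_drug_works(n_total=3, n_ineffective=1)    # ~0.67

    print(f"lenient approval: {lenient:.0%} chance the chosen drug works")
    print(f"strict approval:  {strict:.0%} chance the chosen drug works")

Under those made-up numbers the stricter regime more than doubles the chance a patient ends up on a drug that actually works, which is the point being made.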
This is the heart of it. People like to paint this as a regulatory problem--the FDA keeping all these potentially useful drugs off the market due to its bureaucracy. But the fact is that 85-90% of drugs that hit early Stage I clinical testing don't make it through approval. That means the science is bad at predicting what will work and what won't, and spectacularly so. That's a science problem, not a regulatory one.
People bring up the "more harm than good" issue, but they're only thinking of one kind of harm: drugs that actually hurt people. But having ineffective drugs on the market also has a cost, and it's probably the bigger one. The market for effective drugs is almost certainly not efficient given the difficulty of ascertaining efficacy and the information asymmetries involved. Thus, there is a real risk of people dying because the inefficient market selects for drugs with good marketing over ones that work.
Drugs die for all sorts of reasons other than being ineffective. A major one is that getting a drug approved is now so expensive that you can't make money off it. The vast majority of diseases are effectively beyond treatment because the market for them is not large enough to support the cost of developing a new drug.
The other major reason for failure at Stage I/II is that the preclinical models we are using are not a good match for human disease. This means that a drug that works in a mouse model can't ever work in humans because the mouse model does not reflect the human disease (Alzheimer's is the classic example). We are using preclinical models we know are useless and are then surprised when the drugs found using them don't work in humans.
Precisely. Here's a specific example: infantile spasms. I was helping my wife research this condition yesterday (for work -- thankfully neither of our children were affected). I found this[1] really great summary article from 2006 that synthesized the findings of numerous trials over the past 50 years and presented the data in a very clear and understandable way.
If you want medicine to build on the foundations of history, you need to be able to reference the history.
This has somewhat already been solved. Avastin works really well for colorectal cancer, so it was tried in a really small trial in breast cancer. There was a small signal there, so the FDA gave conditional approval with the agreement that the company would run a larger trial to confirm the results.
Well, they ran the bigger trial and it showed no benefit at all. The FDA rescinded conditional approval.
The crazy thing with the current regulations is that the company would have been better off financially never doing the study. Since doctors can prescribe off-label, if they had not done the larger study they could have kept selling Avastin to breast cancer patients on the sly.
Avastin probably does work for some breast cancer patients if they have a cancer with the right mutations, but if that percentage is small enough, the large study would not be able to detect the effect in the broad population.
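For a sense of scale, here's a rough sketch (entirely made-up numbers, using the standard normal-approximation sample-size formula for comparing two proportions at ~80% power) of how a real effect confined to a small subgroup gets diluted below what a broad trial can detect:

    # Hypothetical: the drug adds 30 points of response, but only in the 10%
    # of patients with the right mutation. In the broad population that looks
    # like a 3-point effect, which needs a vastly larger trial to detect.
    from statistics import NormalDist

    def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        var = p_control * (1 - p_control) + p_treated * (1 - p_treated)
        return (z_a + z_b) ** 2 * var / (p_control - p_treated) ** 2

    p_control = 0.20           # assumed response rate without the drug
    benefit = 0.30             # assumed extra response in true responders
    responder_fraction = 0.10  # assumed share of patients who can respond

    everyone = n_per_arm(p_control, p_control + benefit)
    diluted = n_per_arm(p_control, p_control + benefit * responder_fraction)

    print(f"~{everyone:.0f} patients per arm if every patient can respond")
    print(f"~{diluted:.0f} patients per arm if only 10% can respond")

With these assumptions the required trial goes from roughly 36 patients per arm to roughly 3,000, so a modest subgroup effect can easily come back as "no benefit at all".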
Of course you can - what you do is grant approval conditional on the company showing the drug is effective once it's on the market (this is already done with Stage IV trials). With technology it would be much easier to track all patients using a conditionally approved drug, and we would get far more useful data too.
> A patient with a seriously life-threatening disease like lung cancer is perhaps more willing to gamble on a risky drug in pursuit of a cure, while someone with a disease that has a high survival rate such as diabetes presumably cares more about avoiding adverse side effects.
If he's more willing to gamble, then why not just let him do it? The FDA's job should be to enforce truth-in-labelling (which means drugs with no beneficial use should be labelled as such, and those with horrible side effects should be labelled as such), and patients working in concert with their physicians should determine which drugs to take.
For this to work, we may need to limit the ability of drugs to be advertised. I'm okay with limiting the civil right to free speech of pharmaceutical companies in order to respect patients' fundamental human right to ingest whatever they wish.
> If he's more willing to gamble, then why not just let him do it?
They already do. It's called an expanded access program. If a patient is going to die, they can ask the FDA for permission to take an unapproved drug. The FDA says yes 99% of the time.
You can only get access to the drug if the company gives it to you. The hard-nosed response of the company is to never say yes since you run the risk of the patient having a side-effect unrelated to your drug that can kill it when you try to get FDA approval. If you care about helping the most people then risking your drug this way is crazy.
On a related topic, http://www.fdareview.org argues that much of the FDA is more harmful than useful; in particular the efficacy requirement. Does anyone know good counterarguments?
There aren’t any. The FDA and its related regulatory authorities around the world are responsible for killing more people than any other organisation that has ever existed (conservatively in the hundreds of millions).
You might wonder how this could be. Basically the cost of getting a drug approved is such that large sections of the human population (poor people especially) have diseases for which we have no treatment or no cheap treatment. We need to get the cost of drug development down to the level that pharmaceutical companies can make a profit developing and selling medicines to poor people or people with what are considered rare diseases.
It is a very interesting question. The reason is what is known as regulatory capture. This is normally thought of as the regulators being captured by the industry they are trying to regulate, but it is really more a symbiotic relationship between both sides. Through regulation the large pharmaceutical companies can keep out the competition from any small pharmaceutical companies, while the regulators gain a nice job and importance.
The only losers here are the public since we miss out on cheap effective drugs because the regulations make them too expensive to produce for most diseases and people.
> refuting your BS would take far longer than you vomiting this utter, utter nonsense into HN
This comment violates the HN guidelines badly. We ban accounts that do this repeatedly. Please post civilly and substantively from now on, or not at all.
Glad to see reason is alive and well on HN. Don’t bother trying to provide an argument, just hurl personal abuse.
Since I live in Australia I don’t ever take any drug that is approved by the FDA (that is the job of the TGA). Even if I did live in the USA, I fail to see how my advocating for less regulation of the drug approval process somehow prevents me from using a drug that has been approved in the past. I am advocating for more drugs, not fewer.
This doesn't address the point at all. That site isn't recommending getting rid of the FDA as far as I see, just changing certain parts.
Also, they have an analysis of the advertising aspect.
> In other words, Dr. Pepine thought that the FDA restrictions preventing the advertising and promotion of aspirin for heart attack patients were responsible for the deaths of tens of thousands of people.
Actually you would see far fewer ads, since you would remove both the ability and the need to advertise. The reason drugs are advertised now is that the drug companies have so few new drugs to sell (no off-patent drug is ever advertised). They need to maximise the returns from the few drugs they have. If hundreds of new drugs came out each year, mass marketing would not work since the market for each drug would be too small.
Yes, there would be more “ineffective” drugs, but if you and your doctor had the choice of dozens of drugs for your disease, which one would you choose - the one with no evidence that it worked, or the one for which the drug company had scientific evidence? Even if trials were not needed for the approval process, pharmaceutical companies would still run them to convince doctors and patients.
Per the paper, for a drug application to treat pancreatic cancer, the statistical threshold should be relaxed to a 27.9% false-positive rate.
In other words, if a company wants to make money, it can randomly pick 4 non-toxic compounds and apply for FDA approval to treat pancreatic cancer through clinical trials; on average the company would get 4 × 0.279 ≈ 1 compound approved and could then market that medicine to the world as effective.
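To make that arithmetic concrete, here's a tiny sketch (assuming independent trial outcomes, and treating 27.9% purely as the chance an ineffective drug clears the bar, as quoted below) comparing the proposed threshold with the conventional one-sided 2.5%:

    # Expected approvals of drugs that do nothing, under two false-positive
    # thresholds. 0.279 is the paper's figure as quoted here; 0.025 is the
    # usual one-sided convention. Assumes trial outcomes are independent.

    def expected_false_approvals(n_compounds, false_positive_rate):
        return n_compounds * false_positive_rate

    for alpha in (0.279, 0.025):
        needed = 1 / alpha  # compounds to test to expect one false approval
        print(f"threshold {alpha:.1%}: 4 compounds -> "
              f"{expected_false_approvals(4, alpha):.2f} expected approvals; "
              f"~{needed:.0f} compounds to expect one")

The shotgun strategy works at either threshold; the stricter threshold just means you need to test roughly ten times as many inert compounds.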
They assign weights/costs to type I and type II errors, and their whole analysis changes if the weights change.
I would treat the paper as a research toy, an exercise in self-fulfillment. Trying to change policy based on the paper would be absurd.
You have just pointed out the problem with all effectiveness trials. If you used the current threshold you could still do the same thing; you would just need to test more non-toxic drugs.
A better approach would be to not have any effectiveness trials at all and just collect good data on all patients using the drug. If a drug really worked it would become obvious the more patients used it, while if it was worthless that would also become obvious. A new drug would start out unknown, with only the most desperate trying it, and over time we would learn more and more about its effects and side-effects.
This Bayesian approach was not possible in the past, but with modern technology it now is. We would get better drugs with better-known side-effects at far lower cost.
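A toy sketch of what that could look like (Beta-Binomial updating of an unknown response rate; the numbers are invented, and real post-market surveillance would also have to deal with confounding, selection effects and adverse events, not just response counts):

    # Bayesian accumulation of evidence as post-market data comes in: start
    # with a flat Beta(1, 1) prior on the response rate and update it with
    # each batch of (idealised) patient outcomes.

    def posterior(successes, failures, prior_a=1, prior_b=1):
        a, b = prior_a + successes, prior_b + failures
        mean = a / (a + b)
        sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
        return mean, sd

    true_response_rate = 0.35  # pretend the drug really helps 35% of patients

    for n_patients in (20, 200, 2000, 20000):
        responders = round(n_patients * true_response_rate)
        mean, sd = posterior(responders, n_patients - responders)
        print(f"after {n_patients:>6} patients: estimated response rate "
              f"{mean:.2f} +/- {2 * sd:.2f}")

The uncertainty shrinks steadily as usage grows, which is the sense in which a genuinely effective drug "becomes obvious" and a worthless one does too.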
So, how does the FDA deal with this issue today? Suppose scientists wanted to market (some flavor of) jelly beans as a drug to cure acne (instead of to cause it), and they test 20 flavors and get evidence at p < 0.05 that white jelly beans cure acne. Or 1000 flavors and evidence at p < 0.001.
Does the FDA look for a causal mechanism that would make the drug effective? Or are the companies afraid to market things that they genuinely suspect are ineffective because they'll eventually get sued? Or is the FDA's effectiveness standard so strong that you just can't cost-effectively game it like this today? (The last possibility seems a little bit unlikely.)
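On the first part, the jelly-bean scenario is just the multiple-comparisons problem, and it's easy to quantify how cheap it is to game a fixed significance bar if nothing corrects for the number of attempts (assuming independent trials):

    # If none of the flavors do anything, how often does at least one still
    # clear the significance bar? Assumes the trials are independent.

    def p_at_least_one_false_positive(n_trials, alpha):
        return 1 - (1 - alpha) ** n_trials

    for n, alpha in ((20, 0.05), (1000, 0.001)):
        print(f"{n} inert flavors at p < {alpha}: "
              f"{p_at_least_one_false_positive(n, alpha):.0%} chance of a "
              f"'significant' flavor (expected false positives: {n * alpha:.1f})")

In both quoted scenarios you expect about one spuriously 'significant' flavor, so the real question is whether the review process (e.g. asking for a plausible mechanism or a confirmatory trial) catches it, rather than the p-value threshold itself.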
This is a good question and one that is hard to know. In theory you need to have some mechanism of action, but drugs do get approved without one, or more commonly with a wrong one. Quite a few of the drugs in the psychiatric field are little more than placebos-with-bad-side-effects that got lucky.
As far as I know nobody has ever been sued over a drug that didn’t work. From a purely legal perspective you would be better off selling a sugar pill than a real drug that worked but caused 1 in 100,000 people to drop dead.
> if the drug is not effective (at all), there is 27.9% probability that the drug would be approved by FDA.
That is conditional on the drug being ineffective, which is only one of the two possibilities, and hence it cannot be 'the probability the drug is approved by the FDA', for the same reason that a p-value of 0.05 does not mean a '95% probability the drug works'.
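A small Bayes-rule sketch of that distinction: turning P(approve | ineffective) into P(ineffective | approved) needs a prior on how many submitted drugs actually work, plus the trial's power. The prior and power below are made-up numbers, purely to show that the two quantities are different things:

    # P(ineffective | approved) via Bayes' rule, given a false-positive rate
    # of 27.9%, an assumed trial power, and an assumed prior over submissions.

    def p_ineffective_given_approved(p_effective_prior, power, false_positive_rate):
        p_approve = (p_effective_prior * power
                     + (1 - p_effective_prior) * false_positive_rate)
        return (1 - p_effective_prior) * false_positive_rate / p_approve

    for prior in (0.1, 0.5):
        share = p_ineffective_given_approved(prior, power=0.8,
                                             false_positive_rate=0.279)
        print(f"if {prior:.0%} of submitted drugs truly work: "
              f"{share:.0%} of approvals would be ineffective")

With these particular assumptions the same 27.9% translates into anywhere from about a quarter to about three-quarters of approvals being duds, depending entirely on the prior.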
The real source of the problem is at the preclinical stage. The requirements here are so strict that almost no drugs make it out of mice and dogs and into humans for actual testing.
Pre-clinical only requires that you show a drug is not toxic and has a potential benefit. Most drugs fail because they are either toxic or don't do anything.
If only that was all you had to show in the preclinical stage.
The bigger problem is you can’t show either of these with preclinical testing. If you could then there would be no need for human testing.
The reality is that most preclinical testing, outside of certain drug classes (e.g. antibiotics), is not that helpful for knowing whether a drug will cause problems in humans or will even work. It has got so bad that most drugs on the market would not get through current preclinical testing requirements.