Of the 46 drugs approved under breakthrough designation, 25 were for oncology. The FDA often allows a lot of flexibility with trial design in oncology due to the high unmet need.
The author mentions a lack of trials that were double-blinded, that used a placebo or active comparator, and that measured clinical outcomes rather than surrogate markers.
All of this is quite common outside of the breakthrough drug designation. Nothing new here. Double-blinding a trial isn't as big a deal when what you are measuring is not impacted by observer bias (tumor size). In addition, if you're measuring a surrogate marker (one that is an accepted proxy for clinical outcomes), you don't need a comparator arm.
I don't think any of these findings should be all that surprising or concerning.
Why is this the case? Wouldn’t there be concerns that a study population might be different from the general population and thus require an internal control?
FDA acknowledges that FVIII levels can serve as a proxy for clinical outcome (reduced bleeding episodes) in hemophilia A.
Our understanding of what is "normal" in FVIII blood levels is such that a single arm, surrogate biomarker study is sufficient for approval.
That was the focus of the JAMA article.
Have you ever tried to measure something like this and analyze the data? There are so many ways to skew the results...
That's not even to mention basic stuff like handling the treated rats more carefully so they are less stressed, etc.
On the other hand, in earlier times we had a lot of drugs that were in the "snake oil" category, making a lot of claims but doing nothing at all (or that were actually harmful).
A much larger set of drugs fall into a gradient of "as far as we can tell you'll just excrete it if you take too much" to "it's probably not _good_ if you unnecessarily take it, but it won't really be detectable in 4 weeks" to "oh god why would you ever prescribe that".
So if we were purely grading on safety, we'd be out quite a lot of pharmacology.
And it does a statistical evaluation of the effects of the FDA regulations.
as the author points out, the incentives are always in favor of doing a bad job with regard to efficacy and trial design. a company that can produce a really great and well-proven drug is not going to make more money than a company that produces a shitty drug that is also approved for the same condition.
in fact, with the right marketing efforts, the execs would even see similar sales figures for both of these hypothetical drugs. hence why more is spent on marketing than R&D.
Testing for rare long-term effects requires a huge sample size and a billion-dollar clinical trial budget.
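To put a rough number on that, here's a quick back-of-the-envelope sketch (my own illustration, not from the thread; the event rates are made up): the number of patients a trial needs just to have a good chance of *observing* a rare adverse event even once grows inversely with the event rate.

```python
import math

def patients_needed(event_rate, prob_at_least_one=0.95):
    """Smallest n such that P(at least one event among n patients) >= prob.

    Assumes independent patients: P(no events in n) = (1 - event_rate)**n,
    so we solve (1 - event_rate)**n <= 1 - prob_at_least_one for n.
    """
    return math.ceil(math.log(1 - prob_at_least_one) / math.log(1 - event_rate))

# A 1-in-10,000 adverse event needs roughly 30,000 patients to be seen
# even once with 95% probability -- the classic "rule of three" (n ~ 3/p).
print(patients_needed(1e-4))
```

And that is only the sample size needed to see the event at all; distinguishing it from the background rate with statistical confidence pushes the number higher still, which is where the billion-dollar budgets come from.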
Teams do decide, early on, to try to discover drugs for serious diseases or common diseases, depending on how much money they think they can raise. Usually only proven teams can raise the huge amounts, so most pharma startups target serious niche diseases hoping for approval under the orphan drug program.
Yep, there is only a weak mapping between FDA approval and actual drug usefulness.