There are drugs such as tetracyclines that should never be used past their expiration dates because they degrade into toxic compounds. Certain classes of drugs such as anti-arrhythmics, or drugs like warfarin, are so dosing-critical that I would not want them if they were out of date.
I worked in pharmaceuticals in a medically underserved community for a couple of years. At that time when drug samples expired, sales representatives had to return them to their companies for destruction. One doctor in the area made sure that all the drug reps knew he would accept short-dated (but not outdated, which would have been against policy for reps) samples for a free clinic he ran. Everyone I knew participated when they had short-dated samples. While reps could not distribute outdated samples, doctors had much more latitude in how they dealt with them. It was one of those rare and wonderful situations that was good for patients, created goodwill for reps and was all completely within regulations.
I should say this was some years ago and regulations may have changed since then.
Like a "dangerous to use after 01/01/2019" label on things that actually are dangerous.
Other items could have labels like, "Safe for use until 01/01/2025 - Effectiveness may be degraded after 09/05/2019"
But nobody wants to run long stability studies, because that costs a lot of money (i.e. several batches that have to be analyzed on a regular basis, documented, reported, QA'ed, inspected, etc... - and you need to place them in numerous conditions, too, ambient temperature as well as stressed conditions depending on where you intend to market the drug), and it brings very little return at the end of the day. You'd rather have your engineers work on developing new formulations and your regulatory teams spend time on applications for new drugs rather than adding one or two years of shelf life to an existing drug on the market.
Also know that you can't "extrapolate": you need actual data under the regulations of the big 3 geographies (US, Europe, Japan). So "models" are not accepted. Also, these agencies are extremely conservative about which impurities should be considered acceptable and at what levels. Not saying they are necessarily wrong, but this is another constraint. I have seen cases where it was difficult to extend one drug's shelf life because of a single impurity being a little over the tolerated limit.
Some medication that expires quickly can be extended simply by refrigerating it. I wonder what other environmental controls could extend the shelf life of common, expensive medication?
By storing it for 2 years and testing it after.
> I wonder what other environmental controls could extend the shelf life of common, expensive medication?
"Common, expensive" is an oxymoron. The vast majority of all medication in use is practically free to produce. The packaging of drugs typically costs more than the drugs themselves. Drug discovery has long concentrated on small-molecular-weight drugs that can be mass-produced for a pittance. The reason drugs cost so much is that proving that they are safe and that they work costs so much. Because of this, there is little economic sense in trying to preserve drugs for longer (which would require more extremely expensive trials), when you can just trash the drugs and order up a new batch that has a combined MSRP of $100M for $5k.
In their economics, small molecular weight drugs share more with software than with, say, bread. That is, practically all of the economic cost is upfront investment and the replication cost is non-existent.
Apart from biologic drugs, most small molecule compounds are dirt cheap to produce. So cheap that nobody is even striving to produce them in a cheaper, more efficient way. If you compare this kind of industry with chemical specialty industries, where the mindset is to increase yield as much as possible, in pharma the production cost is not much of a consideration. So, what you pay for your drugs has almost no relationship to how much it costs to produce. You are paying for patents, market exclusivity and the whole system that goes with it (the pharma companies are not the only ones benefiting from it).
> How did they come to an expected shelf-life of 2 years in the first place?
Stability studies supporting 2 years of data (submission to approval takes one year, and compiling all data before submission for filing is a 6-month process, so you can start submission with 1 year of stability data in your hands and tell the regulatory agencies you will provide the 2-year data during the second half of the regulatory review to get 2 years).
Because that's good enough for most drugs. 3 years is usually ideal, but not always necessary. If your drug is used in high volume, most patients won't get batches that were produced long ago anyway.
Any drug submission that supports shelf life via ALT data will be expected to also say "and we're doing real time testing to support this", and FDA will expect the follow up real-time data.
But suppose, in reality, that drug actually lasts a solid 5 years, after which it loses effectiveness, especially in high humidity or under high UV exposure. We could catch this with real-time tests and adjust accordingly, before it actually affects the general public's health.
For the vast majority of manufactured products, HALT makes sense because any uncertainty that remains after testing is not likely to harm users. But with some of these drugs, any uncertainty, even after HALT, could cause major problems for users. And it seems that the FDA is unwilling to accept that risk. But as far as I know, the FDA is actively seeking methods for performing HALT that produce accurate and repeatable results.
Given the amount of money spent on medicines, I really think there is a good moral business case to identify the correct expiration dates and save money for people.
We just went through the Health care ordeal. Money saved anywhere is money saved for people who desperately need healthcare to work for them.
Given that it's the very same party that would be doing the testing for expiration dates, I kind of doubt they would research this out of the goodness of their corporate hearts. Effectively the same outcome could be realised without the effort of extra testing anything: by just lowering the price of medicine a bit; big pharma would make less money, the public would have more medicine for cheaper.
A case could be made for legislation that obliges free replacement of out of date medication for the original purchaser, perhaps with a small surcharge.
The issue here, though, is that it's not in the drug manufacturers' interest to have super long dates. Putting labels that expire sooner means they can push higher volume.
I assume that would be done after bringing a drug to market, otherwise drugs could easily be delayed 10+ years if they're shelf-stable.
If it's a new drug and they can only say for sure that it lasts 6 months then put that on the label and ship it out. But assuming the drug will be out for 10+ years they should be continually testing the expiration date and updating as they've had more time to make that determination.
But like I said initially (and as you nicely pointed out) that costs money so why bother?
This needs regulatory work. And it brings very little return, so your regulatory resources are better spent somewhere else. There are other incentives in place.
It's very likely not favorable to companies to look into this, just as it was revealed that the EpiPen expiration date was not really accurate.
What kind of error rate are you willing to accept in models of pharmaceutical shelf stability? Bearing in mind that errors potentially translate into deaths, probably disproportionately of the less privileged among us?
"able to calculate the longevity of chemicals, food, and other products without having to wait the actual time period"
I'm not anywhere near knowledgeable on pharma, but I worked on an FDA regulated medical device. If we made a claim, we had to test it. Knowing that e.g. every component in the machine was rated safe between temperatures X and Y did not mean that we didn't have to perform real environmental testing.
There is a great comment farther down about how complex label changes can be. Getting a revised label or packaging through the FDA is an enormous undertaking. This is why dosing sometimes seems very weird. If research says we need a 2mg and a 4mg, but later in the process patients need a .5mg, it's a big mess.
Suffice to say it's a deep rabbit hole.
Harm due to unavailability of drugs or higher costs are much harder to track, of course.
Yet another case of a balance being hard to reach due to the visibility of costs on one side versus the difficulty of measuring on the other. So easy for the general public to see the cost of strictly enforcing overly conservative expiration dates as close to zero.
/offtopic: these issues are the Achilles' heels of democracies. Not that there's a better system.
Personally, I take all quotations like the ones in the article with a huge grain of salt. There's often enough of a communication barrier that even good reporters can misunderstand an answer, or for the researcher to be answering a different question than the reporter asks. And there's also the possibility that the researcher just had a failure of memory.
... which is harm. If you've applied appropriate medication to deal with the issue, but that medication has gone bad, then you're in a position where you think you've applied a remedy but haven't.
Even if you later find out that the medication was bad, you have no idea how much effectiveness there was left, which affects future dosage calculations.
I don't think you make people less likely to experiment by making experimentation successful most of the time.
There is already a super long information sheet with each of the drugs I take, and each time I read them to get a better grasp of how I should take them I'm flooded with useless warnings (which could all be summed up as 'you might get worse taking this; if that happens, stop and call a doctor') and disclaimers.
Every time I needed specific practical info I had to go to a pharmacy and ask someone, or search unofficial online resources. Even super basic stuff like 'should it be taken before, during or after a meal?' gets no mention whatsoever in the leaflet.
It's hard to design a middle-ground solution for this, somewhere between no information and too much, overly complex information.
Come on, world, we have machine learning, we can find something.
I would push two prongs: reinforce that expiration dates are serious, and at the same time test and extend any and all dates to match the actual behavior of each drug. Turn one year into 5, 10, 20? Hell yes.
Pretty obvious that big chunks of the health care system are rigged for profit, not efficiency or usefulness.
I think that a better approach to 'expiration dates' should be that highly controlled dispensaries (pharmacies) track lot and production dates. A given medication should have a standard expiration date based on /observed/ potency and reserving a safety factor (which should be known and recorded).
Drugs before the typical expiration date would have very infrequent random samples tested for effectiveness. If there is economic or technical incentive, medications nearing the expiration date would be tested and based on the results a new expiration date and testing schedule for the remaining units set.
What kind of reporting is this? Anything less than 100% is not "as potent as when manufactured", and the sentence implies some of those dozen weren't close to 100%.
> The idea that drugs expire on specified dates goes back at least a half-century, when the FDA began requiring manufacturers to add this information to the label. The time limits allow the agency to ensure medications work safely and effectively for patients. To determine a new drug’s shelf life, its maker zaps it with intense heat and soaks it with moisture to see how it degrades under stress. It also checks how it breaks down over time. The drug company then proposes an expiration date to the FDA, which reviews the data to ensure it supports the date and approves it. Despite the difference in drugs’ makeup, most “expire” after two or three years.
That seems to be the problem. There was a procedure in place to set expiration dates scientifically, and it was ignored for some reason, limiting the legal shelf life of even the most stable compounds to a few years.
They test how it degrades under stress to determine a maximum viability under perfectly stable conditions. They are not accounting for the typical medicine storage places: a bathroom cabinet where the temperature and humidity will vary wildly within a matter of minutes, a cool, damp fridge, or even the reading table next to your bed-- constantly exposed to UV light from the windows.
I'm relaying this secondhand from a former FDA contractor, but I can only surmise that for a company facing the litigation risk that any pharmaceutical entity incurs, the old adage of "Better Safe Than Sorry" is a major part of the story that's missing here.
It's not any more efficient to stockpile drugs, and it doesn't make sense to do so, as your need for prescription drugs is likely not going to be multiple years.
Develop an "indicator pill" that changes color at the same rate as an individual medication.
It would be in (or part of) the bottle with the other pills, and be customizable to react to humidity, temperature and time in the same way as the particular medication.
How feasible would that be for thousands of different drugs with hundreds of thousands of different formulations and chemical compositions and various storage factors?
Do you mean that varying conditions make medicines last longer? You can't mean that medicines would last much longer if stored in perfect conditions, because that's not expiring "earlier than they should", it's expiring exactly on time for the poor conditions they are kept in.
Can you clarify?
Most chemical processes happen faster with a higher temperature. Some drugs decay into harmful substances (e.g. Aspirin).
Other factors, such as humidity or UV, may play a part as well.
Rather than testing every permutation of storage conditions over time, manufacturers put a safety margin. Individual consumers aren't in a position to remember exactly how they treated any given container of pills. Even if the manufacturer knew how a given sequence of storage practices affected their drug, nobody would be the wiser.
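To make the temperature point concrete, here is a minimal sketch of the usual textbook picture: first-order decay whose rate scales with temperature per the Arrhenius equation. All constants below (activation energy, reference rate) are invented for illustration and do not describe any real drug:

```python
import math

# Hypothetical first-order degradation model with Arrhenius
# temperature dependence. All constants are illustrative only,
# not measured values for any real medication.
R = 8.314        # gas constant, J/(mol*K)
EA = 100_000.0   # assumed activation energy, J/mol
K_REF = 0.05     # assumed rate constant at 25 C, per year

def rate_constant(temp_c: float) -> float:
    """Scale the 25 C rate constant to temp_c via Arrhenius."""
    t = temp_c + 273.15
    t_ref = 25.0 + 273.15
    return K_REF * math.exp(-EA / R * (1.0 / t - 1.0 / t_ref))

def potency_remaining(temp_c: float, years: float) -> float:
    """Fraction of label potency left after `years` at temp_c."""
    return math.exp(-rate_constant(temp_c) * years)

for temp in (5, 25, 40):  # fridge, room temperature, hot bathroom
    print(f"{temp:>2} C after 2 years: {potency_remaining(temp, 2.0):.1%}")
```

With these made-up numbers, two years in the fridge leaves the drug nearly intact while two years in a hot bathroom costs roughly half the potency, which is why a single printed date has to assume something close to the worst plausible storage.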
Well, it keeps out humidity, and might prevent air-exposure effects. It won't prevent heat effects and some other environmental effects.
Also, quite a lot are sold in bottles of pills or other non-blister-pack forms.
I remember vaguely having seen pills in bottles maybe 20 or 30 years ago. Again this is for France, I know that e.g. in the US one gets refills.
Which by the way I find great because the waste of pills is tremendous. It would have been much better to get the specific number prescribed.
Over here, benadryl is the main over-the-counter drug I see in blister packs. They say it's to slow down the meth cookers.
I'm a long time pollen allergy sufferer living in the hell that is South Florida. I know my allergy meds.
In Australia the only thing in bottles are vitamins and similar, though it is possible to get small bottles.
* acetaminophen / paracetamol
The only drugs that weren't within 10% of their original concentrations, according to the study, were amphetamines and aspirin. I can't think of any uses off the top of my head for those two where even 25% would make much of a difference to outcomes - amphetamines are mostly used as general purpose stimulants or to treat ADD and aspirin is not the blood thinner of choice for life and death situations. With amphetamines especially, what you eat and small factors like blood pH can have a massive impact on their effectiveness so we're already talking 10-20% differences day by day, let alone person to person.
Dosage studies happen between Phase 2 and Phase 3 trials (and in Phase 4, "post-marketing", too) and are the most expensive, longest and most scientific process that arguably any business engages in anywhere in the world.
Phase 1 is a safety test in healthy (mostly male) volunteers. It seeks to find the maximum dose before adverse effects appear in the healthiest humans.
Phase 2 is a safety/dosage test in a much larger and varied sample. We know dose ranges and begin testing them for efficacy in a varied population.
Phase 3 is the big one. It's a randomized, controlled, double blind test of SPECIFIC dosages in SPECIFIC target populations to scientifically prove efficacy over placebo.
The dose we take is not a "BEST GUESS"
It is the result of a billion dollars of scientific study: that PRECISE NME (new molecular entity) has been studied, at that PRECISE dose, in a target population, for its effectiveness, and it must be better than existing treatments and better than placebo.
It should be highlighted that this process takes on average 10 years, costs about $1 billion, and has a success rate (success meaning FDA approval) of about 1 in 10,000 attempts (hence the price tag).
Source: Drugs: From Discovery to Approval, Second Edition http://onlinelibrary.wiley.com/book/10.1002/9780470403587
No difference in response between people taking 20 mg prozac daily vs. every third day.
Edit: The $1 billion number is meaningless outside the context of large pharmaceutical companies. The cost for phase 3 trials is in the tens or hundreds of millions of dollars so the floor is a wide range and there have been instances of companies getting incredibly lucky and developing a drug for much less than a full billion.
You are the one who is completely wrong.
The dosages most certainly do start out with a "best guess" in the pre-clinical phase. For any given drug we would guess at the doses based on our calculations, and give a range on either end. If we thought the effective dose would be 1 mg/kg, we would run an LD50 of doses much higher than that, 10mg/kg, 100mg/kg. Sometimes our guesses would be wrong, and the low dose animals would die as a result. After figuring out a safe dose, we would then split it further into different dosing groups to test the effectiveness of the compound. Most people in the field are underpaid, overworked, and do sloppy work.
Please talk to someone who has actually worked in the pharma research field; they will tell you most of what they do is just guessing. Why do you think only 1 in 10,000 make it through?
It's all a guess until the end, and even then we're often not sure. Look at anti-depressants. We still don't know how a lot of those work but prescribe them to millions of people regardless. How do you think they determined the dose when they don't know how the drug works?
Hence the 'The dose we take is not a "BEST GUESS"' (emphasis mine). Unless you're a lab animal or a person in a clinical trial, the dose you take is the end result of lots of money spent on everything from figuring out maximum safe dosages (Phase 1) to the prescribing guidelines that doctors are taught (Phase 3).
Antidepressants are a unique group of drugs because the dosage is highly dependent on each person's unique brain chemistry and a huge part of a good psychiatrist's job is working with their patients to find the right dosage and combination. The problem is that many psychiatrists don't have the time (due to insurance billing practices) or the patience to do the work but any decent psychiatrist will tell you that recommended doses for antidepressants are just a safe starting point for most patients, not the dosage that they will eventually find most effective.
For the record, I have worked in the pharmaceutical industry on pre-clinical drug development and Phase 1-3 marketing applications and the GP's description is largely accurate whereas you seem to have a chip on your shoulder. I wouldn't trust a single book about the drug development process that wasn't heavily influenced by the pharmaceutical industry in one way or another just like I wouldn't trust anyone without semiconductor industry experience when talking about Intel's cutting edge fabrication processes.
How does one accomplish this absent trial and error?
This fact undermines the entirety of your argument.
"Best guess" is exactly what they do.
Please take your clearly ignorant bias and anti-pharma prejudice somewhere else - you have no idea what you're talking about.
We know how antidepressants work, the chemical reaction is well known and can be quantified at different doses. We just don't know why they work because we don't understand why/how the brain works.
Pedantic but important difference.
I have a prescription for dexamphetamine, but I don't use/need it a lot, so I have some bottles left that are a bit old--not past the expiration date, which appears to be slightly less than 3 years (after the date of the prescription), on the bottle I'm currently looking at. So I was curious about the amphetamines in particular, and if it's just efficacy deteriorating a bit that's fine, because in my personal experience the effect (which is quickly and clearly noticeable) varies easily by 25% already, depending on so many other factors (like what/how much I eat, how well I slept, stuff like that).
10-25 percent of....what?
Wrong. Therapeutic ratio, not dosage.
LSD's therapeutic index is unknown because its LD50 has never been established in humans or monkeys and you can't extrapolate from non-primate animals to humans (for example, rats can handle a dose of fentanyl that is a thousand times higher per kg than monkeys can, which is where many of our LD50 estimates come from).
I was going to post something long about LSD but it turns out I am just wrong. So thanks twice, and have my upvote.
Likewise, in a clinical setting (i.e, with a trained anesthesiologist in fentanyl's case), the dose at which 50% of patients die from a given drug is usually much higher because trained professionals can quickly receive feedback by monitoring the patient's symptoms and apply drugs or medical devices to keep them alive well beyond what could kill someone on the street.
The therapeutic index is the ratio of the therapeutic dose to the dose at which the drug starts becoming toxic in 50% of the sample. I repeat, for the third time, I was not talking about toxicity. We are discussing drug expiration dates and you will not experience toxicity from a drug that has been degraded over time to a dosage lower than the one you were prescribed, except in extremely rare circumstances. A drug's therapeutic index is entirely irrelevant to what I am talking about - I don't know how I can make that any more clear.
It also does not follow that because a drug is active in small amounts, i.e. micrograms instead of milligrams, that a small percent change in the dosage will be more important. Full stop illogical, and also not true in practice.
So yes, I'm not a doctor and I don't know the jargon. And when I read your comment I forgot the article was about expiration dates, i.e. percent decrease. And yet you were still wrong on that particular point.
In my personal experience (a decade of Burning Man), when you do have a known concentration of liquid LSD (measured with an LCMS) that you carefully handle, store, and dilute for dosages, 100 mcg and 125 mcg can mean the difference between a good trip and a bad one in at least a tenth of my sample size and can drastically change the intensity of hallucinations (going from zero to fractal visuals to aliens) in a good quarter - not to mention the effect on the body high and introspection. If you're buying tabs, the chances that you are even close to the original concentration fall rapidly the further removed you are from the original source. Most people can't tell the difference between a 100mcg and 125mcg tab because those dosages are exaggerated and many people feel stereotypical effects at lower dosages. You can easily test this by eating a significant fraction of a blotter: you are extremely unlikely to experience the effects described by the literature of 50 ng/mL blood concentrations even if you're lucky enough to get to 5 ng/mL effects with a few tabs.
Saying that a 25% change is more significant for more potent drugs is illogical on the face of it, because a % is a dimensionless quantity. If there is some subtle reason why this trend exists I'd love to hear it.
Actually, the reverse is illogical because few biochemical systems have linear responses, especially when you're talking about something as complicated as the neurotransmitter systems that LSD affects. Potency depends on two factors: the affinity of the drug, or how well it binds to its target receptors, and efficacy, or the relationship between the concentration of the drug (and second order neurotransmitters) and the ability of the receptors to initiate a cellular response. Neither of those two factors is linear, and they both change as the concentrations of the drug and its byproducts change. As neuron receptors are activated by LSD, they cause cells to release a flood of other neurotransmitters at varying concentrations (dependent on LSD concentration and individual brain chemistry) that start interacting in complex ways like preventing the LSD and other neurotransmitters from binding as effectively (lowering their affinity), potentiating the cellular response (increasing its strength aka increasing efficacy), or building tolerance (lowering efficacy due to exposure). Each receptor and neurotransmitter pair behaves differently. Neurotransmitters in general can't activate cellular responses before they are above a threshold potential that causes the neuron to react, after which the cellular response is never linear. A 2x increase in concentration will rarely achieve a 2x response unless it falls in a small range where the curve is mostly linear, and even then the complex interactions between all of the neurotransmitters will usually compound into a nonlinear response anyway.
These interactions get so complicated, for example, that you get many cases where an opiate A, which is technically 10x more potent than an opiate B, can actually be less potent at treating mild pain because it has a steep response curve that doesn't ramp up until a certain dose. 10mcg/kg of opiate B might be 10x stronger than 10mcg/kg of opiate A (which could be too low to even feel the painkilling effects of opiate A) but once you hit 50mcg/kg concentrations, opiate A might be 10x more potent than opiate B, whose effects plateau with doses higher than 20mcg/kg. Potency is typically expressed as the [A]50 - the concentration of the drug at which you reach 50% of the maximum effect, which depends on the therapeutic effect you're looking for. So in this example, the [A]50 of opiate A in mild pain scenarios can be 25mcg/kg versus opiate B's 7 mcg/kg while with severe pain, the [A]50 of opiate B can be above its LD50 and opiate A's can be 30 mcg/kg because from 25 to 30 mcg/kg opiate A has an exponential response curve. Like I said, it's very complicated and when measuring potency, you're actually measuring the effects of concentration on "arbitrary units," which could be something concrete like a chemo drug's effectiveness at killing cancer cells or something subjective like pain relief. When it's the latter, especially, potency has a very specific meaning in pharmacology that is very different from how the word is used colloquially.
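The nonlinearity argument can be sketched with a toy Hill-equation model. The EC50 values and Hill coefficients below are invented, purely to illustrate that the same 25% dose loss maps to very different effect changes depending on the curve's steepness:

```python
# Toy Hill-equation dose-response model. All EC50 values and Hill
# coefficients are invented for illustration, not real drug data.
def hill_response(dose: float, ec50: float, hill: float) -> float:
    """Fractional effect (0..1) at `dose` for a Hill-type curve."""
    return dose**hill / (dose**hill + ec50**hill)

# Two hypothetical drugs with the SAME EC50 but different slopes:
# knowing the EC50 alone says nothing about how sensitive the
# effect is to a 25% loss of active compound.
for name, hill in (("shallow (hill=1)", 1.0), ("steep (hill=4)", 4.0)):
    full = hill_response(100.0, ec50=100.0, hill=hill)
    degraded = hill_response(75.0, ec50=100.0, hill=hill)  # 25% dose loss
    print(f"{name}: effect {full:.2f} -> {degraded:.2f} "
          f"(drop {full - degraded:.2f})")
```

With these made-up curves, a 25% dose loss barely moves the shallow curve but cuts the steep curve's effect dramatically, which is the sense in which a fixed percentage change is not equally significant across drugs.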
Please don't talk about pharmacology. Period. You don't know what you are talking about and someone might make the mistake of believing otherwise and do something dangerous.
> There are a few compounds (like fentanyl or LSD) where the dosages are so small and the compound so potent that 10-25% makes a difference but for the majority, precision isn't that critical.
As in, typical dosages for fent or LSD are on the order of micrograms, so we expect them to be sensitive to a 25% change, for other drugs where a typical dose is measured in mg or grams, 25% is less likely to be a big deal, etc. This is the only way to interpret what you wrote, and it doesn't make sense. No such trend exists. None of the "basic facts" which you "explained" above (which I already knew) go anywhere near supporting this assertion.
Put another way, if you know the [A]50 you know one point on the dose-response curve, but in general that tells you nothing about the slope or shape of the curve near that point.
Judging from your comment history, you are not an actual doctor or biomed researcher, just a programmer in that industry. Which makes your teeth-gnashing about what an authority you are a tad less persuasive.
> Twelve of the 14 drug compounds tested (86%) were present in concentrations at least 90% of the labeled amounts, the generally recognized minimum acceptable potency. Three of these compounds were present at greater than 110% of the labeled content. Two compounds (aspirin and amphetamine) were present in amounts of less than 90% of labeled content. One compound (phenacetin) was present at greater than 90% of labeled amounts from 1 medication tested, but less than 90% in another medication that contained that drug.
Sounds like maybe concentrations are allowed to vary a bit from the label at manufacturing time? Not sure, maybe someone who knows more can comment.
Depending on necessity and value they can typically engineer more precision, but it gets very expensive very fast.
Relax, this is the real world, not a computer-based series of 0s and 1s. Any such compound begins drifting off from the original specs as soon as it leaves the factory, and in a very real sense it's not exactly at 100% at any time past the moment of manufacture. The question is what kind of deviation can be tolerated.
The world of logic is built on the shifting ground of reality. Only redundancy and checksums keep it from crashing down in compounded errors.
There was a time in my life where our family lived dollar to dollar and skipped medical care due to cost. We would easily settle for 90% efficacy.
Several of the drugs they tested were > 100%, so it seems reasonable to assume that a lot of this is due to variation in manufacturing rather than to the drug degrading.
producing powerful physical or chemical effects. ( http://www.dictionary.com/browse/potent )
chemically or medicinally effective ( https://www.merriam-webster.com/dictionary/potent )
exerting or capable of exerting strong physiological or chemical effects ( http://www.thefreedictionary.com/Potent )
At the end of the day, there's no really reliable way to speed up time, and it would be a waste of time to add 10 years to a drug development timeline just so you can leave it on a shelf for 10 years to test if it still works.
And yeah, we could probably go through and retroactively test some Ibuprofen, but also remember that these drugs are repackaged / reformulated slightly every couple of years for various reasons, and without a lot of legal protections, no company will consider it worth it to guess about drug lifetime.
...or in case of tablets that come in 2x/4x the dose and you need to cut them in half/four pieces, the precision of your cut and amount of tablet dust that is lost.
It's a lot of guess work.
For many drugs a 2% difference is probably unnoticeable. Why are Ibuprofen pills 200mg? Because it's a nice round number that happens to work for many people. If I have a headache I often break the 200mg pill and only take half of it. It still does the job fine for me with fewer side effects.
So if a 110lb wife gets the same dosage as a 200lb husband, how on earth would even a 20% difference in dosage amount make a big difference?
2% precision can only be achieved in fantasyland, like in a hospital where both you and the medicine are closely weighed and the dose carefully administered.
How do you know that your doctor didn't take your weight into consideration when selecting the dose for a prescription medicine?
OTC drugs are often generally provided at a dose that should be safe for most adults, but may be significantly different in effectiveness based on weight.
Why is this a "surely"? On a scale of 100%, 2% is just 2%: it's not hugely significant.
Disclaimer: this is conjecture.
To investigate how a drug breaks down over a ten-year timespan, you'd need to do at least one ten-year study of the drug. That seems difficult to justify.
It seems like it would be a good long term investment for a group of hospitals to partner together and begin a systematic study of the actual lifetimes of drugs. Instead of destroying all expired drugs, they could send off samples to a lab for testing. They could even intentionally keep expired drugs in an off-limits area for 1, 2, or 3 years past their expiration dates to gather more information. With enough hospitals participating, they could gather a significant amount of data without any one hospital shouldering all of the burden.

Once there is enough evidence that a particular drug is still potent, the FDA could extend the allowed shelf life. This would save the hospitals money over time, since they could stop paying to dispose of and replace as many expired drugs. It would also provide a general benefit to the public, since people would have solid scientific evidence for the actual effective lifetime of particular drugs. This would help poison control centers, the hospitals themselves, etc. to better handle cases where someone has already taken an expired drug.
Of course an alternate way would be to require the drug companies to accept returns from hospitals of small amounts of expired drugs that they would then be responsible for testing and providing the data to the FDA. Limiting the returns to large institutions would keep the costs down (reducing push-back from the pharma companies).
Since the % is of labelled strength, it might still be “as potent as when manufactured” if the drug was at less than labelled strength when manufactured, though that, too, would be a serious issue.
If pill prices were set by some external force, this could at least be an important society-wide transfer. But in fact, in equilibrium pill prices will be affected by the expected rate at which pills expire without being consumed. Even when manufacturers have a monopoly, the manufacturer-surplus maximizing price is determined by the demand curve of the consumer, which takes into account the expiry rate of pills. (If 10% of pills expire before I consume them, they are worth 10% less to me in expectation.)
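That pricing claim can be illustrated with a toy model (a minimal sketch; the uniform valuation distribution, $1 marginal cost, and the expiry rates are all assumed numbers, not data):

```python
# Toy monopoly-pricing model sketching the claim above: the
# surplus-maximizing price already bakes in the expiry rate.
# All numbers are illustrative, not real drug-market data.
#
# Assume consumer valuations for a *consumed* pill are uniform on
# [0, v_max], and a fraction `e` of purchased pills expire unused,
# so a buyer of valuation v pays at most (1 - e) * v per pill.

def monopoly_price(expiry_rate, marginal_cost=1.0, v_max=100.0):
    # Demand at price p: q(p) = 1 - p / ((1 - e) * v_max).
    # Maximizing (p - c) * q(p) gives p* = ((1 - e) * v_max + c) / 2.
    return ((1 - expiry_rate) * v_max + marginal_cost) / 2

print(monopoly_price(0.0))   # 50.5 -- no expiry
print(monopoly_price(0.10))  # 45.5 -- 10% expiry: the price falls to compensate
```

So in this toy world the monopolist, not the consumer, eats the expected expiry loss through a lower price, which is the point: the direction and size of any "transfer" depends on the model.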
Yes, I'm sure there are market models where expiry dates create net economic drag, or a net value transfer between manufacturer and consumer, but it's not even clear which direction the transfer goes. Such an analysis depends on the details of the world and how they are reflected in your model, which are completely absent in this article. Most importantly, the intuition that "letting $100 pills expire for no good reason must cause $100 of damage" is completely false when the marginal cost of manufacturing is low.
(There are exceptions where the marginal manufacturing process is expensive, but the article doesn't focus on these.)
The point of the article is that this is an expensive mess for everyone except the manufacturer, which means approximately everyone. As a consumer, having the manufacturer "transfer" money from your pocket into their own sucks. Seems like the article's logic is actually pretty decent.
Also, even if we accept that the marginal cost of producing a pill is near-zero, that's ignoring the entire rest of the supply chain. It's not like drugs are magically teleported. There are real costs in time, money, and resources for creating the packaging, transporting drugs, stocking/shelving them, unstocking them, sending them to a disposal location, and then disposing of them. All of that is a complete loss. Most of it generates various forms of pollution, waste, and other externalities. This also sucks for ~everyone.
Basically, outside of theoretical econ-land, wasting perfectly good things is wasteful.
As I addressed in my second paragraph, it is not actually a transfer unless you introduce new assumptions into the standard model of a monopoly. Please describe them.
> There are real costs in time, money, and resources for creating the packaging, transporting drugs, stocking/shelving them, unstocking them, sending them to a disposal location, and then disposing of them.
These are, in almost all cases, trivial compared to the development cost of the drug. You can group them under the marginal cost of the pill.
> Basically, outside of theoretical econ-land, wasting perfectly good things is wasteful.
Much worse than theoretical econ-land is non-quantitative land where all "waste" is bad and cost-benefit analyses are never done, not even at the order of magnitude level.
Please, by all means, construct a more accurate model. But just criticizing the imperfections of a model, which always exist, and then making non-quantitative value judgements is how we get recycling programs that cost 10 times the value of the material they recover.
Gee, you get to just hand-wave an entire manufacturing process and supply chain into insignificance, but I need a model, huh?
Fine. Let's use a generic drug (i.e., not paying patent/development costs) as a proxy for how much the entire supply side costs. How about loratadine. You can get that at Walmart (stocked on the shelf, since that's mostly what the article was discussing) for about 10 cents/pill. Of course that's completely ignoring the disposal costs. I strongly doubt that disposing of hospital/pharmacy quantities of drugs is as easy as chucking them in the municipal landfill. Let's say it costs 10% of the purchase price. So 11 cents/pill just to run a pill through the supply chain and then dispose of it.
Intensifying the napkin math...
CDC says there were ~3.4 billion visits that resulted in drugs "provided or ordered". 
Let's say that those were an average of 7 days worth of drugs ordered, so ~30 billion doses.
Let's say that 10% of that much is either wasted by the consumer, or wasted while sitting on a shelf before anybody bought it - 3 billion pills.
3 billion pills/year * $0.11/pill = $330 million/year. I dunno, seems potentially significant, at least to an order of magnitude level. Especially since the cost of the FDA program to extend the shelf life was around 1% of that much.
 - https://www.walmart.com/ip/Equate-Loratadine-10-mg-300-ct/20...
 - https://www.cdc.gov/nchs/fastats/drug-use-therapeutic.htm
Nope, for this you just need a number! (In my comment, the "model" part referred to how monopoly pricing was done.)
> So 11 cents/pill just to run a pill through the supply chain and then dispose of it.
Good, your estimate process is a reasonable one, and I accept this number.
> $330 million/year...I dunno, seems potentially significant, at least to an order of magnitude level.
Ok, let's compare to what the article says:
> Experts estimate such squandering eats up about $765 billion a year — as much as a quarter of all the country’s health care spending.
So we see that our cocktail napkin math has been extremely useful, and that the article has overstated the size of the issue by some three orders of magnitude! (25% --> 0.01%) The many suggestions in the article that this is important on the level of the entire health care system is absolute bunk.
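For anyone who wants to check it, the napkin math reproduces in a few lines (every input below is one of the rough guesses stated upthread, not measured data):

```python
# Reproducing the thread's cocktail-napkin math; every input below is
# a rough guess from the discussion above, not measured data.
doses = 30e9                  # ~3.4e9 visits (CDC) x ~7 days of pills, rounded
wasted_pills = 0.10 * doses   # guess: 10% expire or are discarded
cost_per_pill = 0.11          # $0.10 supply chain (generic) + 10% disposal
waste_dollars = wasted_pills * cost_per_pill

# The article says $765B of waste equals a quarter of health care spending.
total_spending = 765e9 / 0.25

print(f"${waste_dollars / 1e9:.2f}B per year")                     # $0.33B
print(f"share of spending: {waste_dollars / total_spending:.3%}")  # 0.011%
```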
Thank you for engaging in this exercise with me!
ProPublica has been researching why the U.S. health care system is the most expensive in the world. One answer, broadly, is waste — some of it buried in practices that the medical establishment and the rest of us take for granted. We’ve documented how hospitals often discard pricey new supplies, how nursing homes trash valuable medications after patients pass away or move out, and how drug companies create expensive combinations of cheap drugs. Experts estimate such squandering eats up about $765 billion a year — as much as a quarter of all the country’s health care spending.

Tossing such drugs when they expire is doubly hard. One pharmacist at Newton-Wellesley Hospital outside Boston says the 240-bed facility is able to return some expired drugs for credit, but had to destroy about $200,000 worth last year. A commentary in the journal Mayo Clinic Proceedings cited similar losses at the nearby Tufts Medical Center. Play that out at hospitals across the country and the tab is significant: about $800 million per year. And that doesn’t include the costs of expired drugs at long-term care pharmacies, retail pharmacies and in consumer medicine cabinets.
(I do agree with your point that most of the cost, and especially the most egregious examples like the wasted epi-pens, don't represent real costs because of monopolies.)
And expired drugs come down to inventory management. Don't buy more product than you can sell.
And don't blame the victims, which is what your sentence on inventory management does. If a hospital over-estimates patients' usage of a drug and gets stuck with expired drugs due to excessively shortened expiry dates, that's on the maker. The maker knows the real expiry dates; usage can never be anything but an estimate.
I had a friend who worked on the federal Tamiflu stockpiles. The gov't and the manufacturer have a deal where the gov't gets a significant rebate on unused product that is either put back into circulation or the manufacturer buys it back and destroys it.
Remember, the gov't has companies competitively bid on manufacturing these products (particularly for generics which were most of the examples in the article). Is the company going to give a 100% money back guarantee if it's not used? Would you? I wouldn't.
So yes, there are certain scenarios (stockpiling) where manufacturers don't take back product. That's not most scenarios.
Most scenarios are hospitals holding 30-45 days on hand of product. If they screw up their inventory they should pay a penalty not the manufacturer.
Anticipating pill expiration and what will be excess supply is a difficult predictive modeling problem. At a practical level, it's difficult for pill expiration date to make it into the price, especially when the purchasers of the pill are different people than those who throw out the pill years down the line.
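That prediction problem is essentially the classic newsvendor model from inventory theory; here is a minimal sketch with entirely made-up costs and demand:

```python
from statistics import NormalDist

# Newsvendor sketch of the stocking problem above: how much should a
# buyer stock per expiry window?  All costs and the demand distribution
# are made-up illustrative numbers.
unit_cost = 1.00   # cost to stock one pill
disposal = 0.10    # extra cost to dispose of an expired pill
shortage = 5.00    # cost of being one pill short (substitute drug, etc.)

Co = unit_cost + disposal   # overage cost: pill expires unused
Cu = shortage - unit_cost   # underage cost: demand goes unmet

# The optimal stock Q* satisfies F(Q*) = Cu / (Cu + Co), the critical
# fractile; shorter expiry windows effectively raise Co and push Q* down.
fractile = Cu / (Cu + Co)

demand = NormalDist(mu=1000, sigma=200)  # pills demanded per window
Q = demand.inv_cdf(fractile)
print(f"critical fractile = {fractile:.2f}, stock about {Q:.0f} pills")
```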
I am more than a little surprised by this. Fanconi syndrome, a kidney disease that can cause bone damage, has been repeatedly found in the medical literature to be caused by taking expired pills of the antibiotic tetracycline.
That these people completely failed to remember these cases is distressing at best.
(I heard an interview with the author of this story on NPR Morning Edition today, and also scanned this story webpage. The only mention of tetracycline was by a web commenter on the story.)
Otherwise I agree I'm surprised by the reporting ignoring this because this is a classic board question in internal medicine.
Edit: Apologies that the above link has a paywall. When I accessed it via google it worked once, then shot up a paywall. If it doesn't work at all I'll remove it.
In this case, part of the news is that the FDA's 1986 Shelf-Life Extension Program ( https://www.fda.gov/EmergencyPreparedness/Counterterrorism/M... ) has been working fine for decades. This is important for people to know and it's worth re-publishing from time to time as a reminder.
They had another great article like this last month, https://www.propublica.org/article/hundreds-of-judges-new-yo... , which basically reported "local courts in New York state had terrible problems according to an in-depth 2006 New York Times report and are still terrible in 2017". Great stuff.
Likewise a lot of the reporting on "civil forfeiture continues to happen and continues to be unfair, here are more examples" provides a valuable civic service.
I know it can be tough in a news organization to re-report something that everyone already knows is true. It can certainly be tempting to pass over truly important stuff in favor of seeking out brand-new news -- a fair amount of science reporting is driven by "what's in the journals this week" -- but this kind of long-term focus on "what's still true that needs your attention" is equally valuable and great work.
Huh? US hospitals have a shortage of Baking Soda?
There weren't many days where the hospitals I've worked at weren't short on something. (And these are like, big name academic medical centers in big U.S. cities.) I've run into a shortage of normal saline before. Yes... salt water.
Sometimes hospitals will even stockpile basic supplies to keep from running into shortages... which only makes the whole situation worse.
It causes a lot of patient harm, even if it's usually not catastrophic because we usually have redundant meds. Recently my hospital was out of a common, generic, cheap antibiotic. For many infections in many patients, this meant we had to switch to the second-line drug -- which was more expensive, had more side effects, and was less effective against the bug. This type of thing happens to chemo drugs, to drugs you give after a heart attack, to drugs you need for planned surgeries or procedures.
I don't know how to solve this. Less regulation? More regulation? Subsidies for manufacturers of essential medications / supplies? I mean, normal saline is already ridiculously expensive.
For more reading: http://news.medill.northwestern.edu/chicago/hospital-drug-sh...
That doesn't sound like the cost of the saline is much of a problem. It sounds like hospital billing is the problem.
They are in shortage because it takes a long time (six-ish months) to certify a factory as producing something safe for consumption, and in the normal course of things, there is very little point in setting up a second line to produce medical grade sodium bicarb: no company is going to show a profit from it. So you get one place producing the national supply, just large enough to handle the normal demand. Then they have a problem (in this case, Pfizer was unable to source the glass ampules) and now there is no production.
: And with good reason. Producing mass quantities of sterile, consistent things with the correct amount of material is hard.
Source: wife is a pharmacist tasked with doing her hospitals contingency plans on bicarb.
Decentralization and redundancy of medical production lines would make our country safer and more robust to unforeseen events. It would certainly be a benefit. I agree, however, that it would be more expensive.
Most of these medicines are like sodium bicarb: the national population doesn't need that much, but if you need it, it's hard to get a good substitute. Snake antivenom or daraprim are good examples of these niche products that are even more crucial than bicarb, but still probably not used enough for any one organization to be able to justify a second-source contract.
In point of fact, most drug companies have faced what are called patent cliffs, where due to troubles in the R&D pipeline drugs lose patent protections far faster than new drugs come online to replace them. While this is probably bad for society overall- we really want lots of new drugs getting discovered and making us healthier!- it means that patents protect smaller and smaller portions of the overall drug market. IOW, playing games with them has less and less overall value as the years have progressed.
Need to find new levers to have influence on this market.
: A totally different subject with a lot of theories as to why, but not much in the way of solid evidence.
: Yes, the 1k USD/pill Sovaldi is amazingly expensive. By the same token, it actually works, and cures you after three months. This is the sort of medicine that improves humanity, even at that price. There are plenty of other drugs (especially cancer drugs that extend lifespan by averages of <4 months) that do not improve humanity, but Sovaldi does. It just costs a ton.
You can't just take a box from the baking supplies shelf in Walmart.
A very few drugs (like erythromycin) become toxic so should be discarded.
Some drugs become less efficacious over time, but nobody really knows the shape of the curve as they are simply tested to see if they have the same efficacy on day E as they did on day 0. Well of course all drugs will become worthless as t approaches ∞ but you can guess that since tablets have a very low moisture content, if they are kept in a cool dark place it's likely they'll last a very long time. I also happily keep expired drugs in a controlled environment and use them; all drugs in my car's and backpack's first aid kits get replaced annually because they are exposed to harsh environments.
(Stockpiling drugs doesn't prove anything BTW: if you are stockpiling them against an emergency the presumption is that some efficacy is better than none).
Nobody is going to do accelerated life testing beyond what they have of course. I think extending the required lifetime is a good idea, though I question the size of the economic return claimed in the article.
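For what it's worth, the usual working assumption for the unknown degradation curve is first-order kinetics. A minimal sketch, assuming (purely for illustration) a drug that retains 95% potency at its 2-year labeled date:

```python
import math

# A common guess for the unknown "shape of the curve" is first-order
# decay: potency(t) = 100% * exp(-k * t).  The rate k below is a pure
# assumption: a drug that still has 95% potency at its 2-year labeled
# shelf life.
k = -math.log(0.95) / 2.0  # per-year decay rate implied by 95% at 2 years

def potency(years):
    """Remaining potency (% of labeled strength) after `years` in storage."""
    return 100.0 * math.exp(-k * years)

for t in (2, 5, 10, 15):
    print(f"{t:>2} years: {potency(t):5.1f}% of labeled strength")
```

Under that assumed rate the pill would still be at roughly 88% after five years and 77% after ten; accelerated (stressed-condition) stability testing is essentially an attempt to estimate k without waiting that long.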
The expiration dates on food are slightly more scandalous: the FDA doesn't require the same level of testing as it does on the medical side, so they are mainly set stupidly short. Last week I purchased some vacuum-packed lamb that had a manufacturer label with an expiration date a month away, but was prominently labeled to expire this week. And of course US egg producers take steps that reduce the storage time of eggs, which can be months old when you get them -- and then "expire" a week after getting home.
Are you sure this was an expiration dates, and not a "sell by" or "best before" date?
A sell-by date is the date at which the grocery store will voluntarily throw out the product—not because it has gone bad, but rather because its unprepared appearance/texture will have changed enough that people will either avoid the product on the shelf, or will complain and return the product, even though those differences disappear once the product is prepared. I'm talking here about things like "bruised" bananas, "slimy" baby carrots, raw chicken that's turned a slightly deeper shade of pink, and anything that "smells off" when taken out of the package. The grocery store can't make a profit keeping these products on the shelf past a certain point—even though they're perfectly fine and safe to eat—so it does some Business Intelligence stuff to optimize the profit-curve given (shelf-space opportunity-cost + complaint PR-cost + return-cost), and out comes a number they print on the label for when they should remove the thing to replace it with a newer one if they want to make the most money.
You might think that this makes for a lot of waste. But grocery stores are aware of this waste, and how it translates directly to lost profit opportunities, so they're usually on the hunt for ways to solve this. This is why so many groceries have in-store delis that make ready-to-eat meals: it's a way to turn products people aren't willing to buy, into products they are, by doing the preparation step themselves.
Often, also, grocery conglomerates have "up-market" and "down-market" marques they operate under. Down-market customer bases have higher tolerance for these "red-herring signs of spoilage" (probably because they've at some point tried these foods out of necessity and found out they were fine after all), so the conglomerate can take advantage of this when building a centralized logistics pipeline for its stores: when it acquires produce/dairy/meat/etc., the stuff that already has some of these signs (or will likely acquire them sooner) is sent to the down-market-branded stores, while the up-market-branded stores get the products that appear fresher and will continue to appear fresh for longer. This is a necessity—if they didn't do this, they wouldn't be able to keep the up-market stores in stock given said stores' customers' intolerance for signs of spoilage—but it conveniently allows the up-market store brand a cachet of having "the freshest" produce, allowing them to justify charging more for food from the same farms.
And, finally, some grocery chains have deals with e.g. homeless shelters, to provide them these voluntarily-expired products as ingredients for free-meals programs. Grocery stores get people to feel good with "can drives" for food banks, donating non-perishables, but perishables are in much greater demand, and it's often the stores themselves that are doing the most good there. (Why don't they talk this up for PR? Because the fact that this is "expired" food would have to come up, and the public will never bother to read and understand what I just wrote above.)
The deals for grocery chains are usually made with tax deductions as an incentive, as there are logistical costs involved, and they still dump/grind a very large volume of perishables every day. Talking this up for PR could be detrimental to grocery chains, as their clients might not appreciate the "inefficiency" of the distribution chain, since they are the ones paying for the donated/dumped perishables.
In the case of this lamb (sold by Trader Joe's FWIW) both were labeled "use before". Hmm.
I've yet to have a US carton of eggs go bad, and they often spend a few weeks in my fridge.
(Note: This is the only practical experiment on this subject that I am aware of. If anyone knows of a better study, please share the link.)
Eggs can indeed go bad, but not immediately, and last a lot longer if the cuticle is not washed off.
As far as I know eggs will not go bad at all - they will just slowly dry out.
I'm sure they do more - I only have a slim memory from hearing one of their founders speak.
They do have a pretty incredible impact and deserve mention in this conversation - they've been attacking this problem from a "what can we do now" perspective for the last 5 or 6 years while others have been spending time debating what to do or if anything really needs to be done. They are good people.
We help orgs donate surplus meds that are often short-dated. We're tackling the $5B of unused, UNEXPIRED meds that go to waste every year in the U.S.
We'd love to talk to you (in confidence!) about how the PBM part of the world works. Please reach out to me if you'd be interested - contact information is in my HN bio.
It's like how not wearing a seat belt could save your life if you were thrown from the car in certain specific circumstances, as happened to a person I know. But seat belts are there because they work better most of the time, since engineers could design for safety with fewer variables.
My father had a flare-up of an immune problem (one we had already identified, as it is recurrent), and the only relevant drugs in the house had been expired for some years. He took those before we bought new ones, and the old ones had maybe 1/3 the potency of the new ones.
Given that most of the price of drugs comes from intellectual property and patents, and each pill costs mere cents to make, I don't see the urgency of taking expired drugs.
If hospitals throw away expired drugs, then that is a good factor to take into account in the global negotiation process, and I bet they already do in countries that buy drugs in bulk.
In fact, some of them have track records of such "ethical" behavior, it wouldn't even surprise me if they turned them poisonous vs simply inactivating them.
I mean, if I am prescribed oxycontin, can I legally have the drug in my system for the rest of my life?
Christ. That is insane. What are people supposed to do for medicines that are used only in the case of an emergency? I really wonder what goes through some of these prosecutors' heads sometimes.
E.g., with alcohol, the “in the blood is illegal” rule would criminalize auto-brewery syndrome occurring in those legally denied alcohol (e.g., due to age), while the “in the blood is admissible as evidence of use” rule would not, though those with the syndrome might need to present evidence of it to avoid charges.
The myth is that "lie after" should be read as "equals", which is often far from the truth apparently.
Are you implying that you still hold this assumption and are looking for someone to convince you otherwise? If so, I can't even conceive of where you got this idea in the first place, let alone argue against it. Let's slap a little Occam's Razor on this: how would you even begin to enforce this?
But yes, I am interested in a definitive answer.
Laws around prescription pills are sometimes unexpected, like every prescription pill in your Mon-Tues-Wed-etc pill box (outside its prescription container) being a felony. So my grandma has numerous unexpected felonies in her purse.
Off the record, it was generally accepted that we should never, under any circumstances, use any of those drugs, bar adrenalin, for fear of doing more harm than good.
The cost was substantial and one company seemed to have the monopoly on supplying them. They also kept a record of the expiry dates and supplied replacements as stocks went out of date.
The most annoying thing was that they seemed to deliberately supply drugs with most of their lifespan expired.
They of course denied that this was a policy.
I found it quite humorous; I couldn't care less that their supplies were a bit 'expired,' nor that the sealed packages had sat in the trash can for a bit!
The best solution is to require drug makers to replace expired drugs with new, for free. Given that manufacturing costs are typically a tiny fraction of sales price, this is not an expensive warranty. This will also give them incentive to make expiry dates as reasonable as possible.
On the other hand, the economics of pharma industry are such that manufacturing cost is usually a very small proportion of price.
Therefore a better solution would be to introduce a mandatory new-for-old trade-in policy. So they wouldn't lose money on the deal, pharma companies would be rewarded at mfg cost. I.e. sans profit, marketing or research cost.
Why would they sign up for that when they can just as easily charge full price for those replacement drugs (as they do now)? The only people out there who can change this are making huge profits from the system currently in place, so why would they?
There's also the possibility that a person may be suffering from a new undiagnosed condition, which hadn't yet onset when they were initially prescribed the drug. In this case you could think of it less as an expiration for the drug, and more of a suggestion to seek additional medical guidance on continuing after a certain date.
There's also the remote possibility that a new drug is developed which could potentially become dangerous when expired. The general population would have to unlearn little morsels of knowledge such as this, in the meantime people could be harmed.
Edge cases like this are enough in my opinion to warrant not spreading blanket advice like this, even if it's nearly always true. Erring on the side of caution is the best approach with medical affairs, even if it costs a bit extra monetarily. Giving patients potentially dangerous advice so they can save money is ethically questionable at best.
Hospital systems and HMOs are large enough to save millions for a bit of testing, and once enough data is gathered they can easily and safely adjust their retention policies.
Small hospitals and pharmacies probably suffer more from expirations.
Doesn’t sound like it from the article...
They only published extended dates for some lots, since they expect more recent lots to still be in date by the time the shortage is addressed.
I may have spent the better part of a day a couple weekends ago relabeling a bunch of vials with a sharpie...
The FDA requires testing, and testing is expensive... but really, how hard would it be to grab 50 tablets per drug for each environment (high UV, extreme humidity, etc.), let them sit, and once a year grab a pill and run a chemical analysis? Sounds like it could pretty much be automated, except maybe for the maintenance of the environments. Maybe $500k to $1M per year with the work outsourced, plus various statistical validations. (Yes, I realize this is the hard part, damn humans.) Take all the generic tablets and voila..
Drugs are cheap to produce.

Really though, medicine doesn't cost much to make; it costs what it does because society deemed it so. As has been explained above/below, paying for research, marketing, and regulation is probably 90% of the cost.

The harder, more interesting question is how much we can afford to help others when they are sick. If the difference between sickness and health is as trivial as getting the right medicine, I know I'd feel guilty if I didn't do something. This is especially true for chronic disease, as the cost-benefit is tilted.
It incentivizes the manufacturers to accurately measure expiration times, and the marginal cost of replacement is much less to them than it is to the consumer.
W T F.
It seems the problem is that the suppliers' supplier had a problem supplying a component (not the bicarbonate itself, but the vial it comes in, I think) and this caused production to come to a halt while the issue was being sorted out. Presumably this particular component has been certified in some way and replacing it with an alternative requires recertification, though the news articles merely say that the companies could not divulge additional details.
I wonder more about whether such a fragile supply chain is inevitable in a mostly capitalistic society or whether we could do more to ensure stability.