> We conducted the PARACHUTE trial to illustrate the perils of interpreting trials outside of context. When strong beliefs about the standard of care exist in the community, often only low risk patients are enrolled in a trial, which can unsalvageably bias the results, akin to jumping from an aircraft without a parachute. Assuming that the findings of such a trial are generalisable to the broader population may produce disastrous consequences.
> Before you jump to the conclusion that we’re suggesting we jettison RCTs from clinical research, let us clarify that that is not our intention. In an ideal world, new interventions would always be carefully evaluated through rigorous RCTs before widespread adoption. But when pre-existing convictions about an untested intervention affect the population enrolled, even a well conducted RCT can provide misleading results. Without careful attention to context, extrapolating findings from such an RCT to the patient in front of us may be, well, a leap too far.
It's like when someone makes an inspired pun and some dude immediately exclaims "LITERALLY!"
So I'm glad to hear there's now some scientific evidence pointing to the futility of parachutes. Sure, the experiments were carried out in a constrained environment with some unavoidable assumptions baked in, but they do rather convincingly suggest that parachutes consistently offer no help in the event of freefall.
A corollary might be that parachutes could even do more harm, adding weight to the falling person and potentially causing additional, unnecessary injuries upon touchdown.
I'll look forward to follow-up research.
The big question is how altitude would affect the results.
For experiments at higher altitudes they might need to locate an airstrip on a mountain, with enough runway to allow larger planes to land and enough free space on the tarmac for several planes while they carry out the jumps. That would likely shed much more light and build confidence in the applicability of this new research and its obvious conclusion.
Also, I'd like to see jumps done from different kinds of aircraft, by more people, with different brands of parachutes, etc., in case there are any differences there. But it looks like after a few rounds of serious experiments we should quickly conclude that the benefits are thin, see no need for further research, and settle on a firm conclusion.
This coworker of mine left the company soon afterwards, storming out of the meeting room and slamming the door, and then took up playing online poker for a living.
The game is played by groups who agree on a monthly stake. Each month, players offer an interest rate they're willing to pay (more like a tax); the highest bidder is awarded the collective pot, less the interest, which is redistributed to the other 'living' players.
Players who have already received the pot are 'dead' and do not collect their share of interest in future rounds (meaning they must pay the full stake). Once everyone is dead, the game ends. Some will have come out ahead; others may have been able to get a lump of cash when they needed it.
In essence, it's basic banking services for the unbanked.
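For anyone curious about the mechanics, here's a rough simulation of one way such a bidding pool could work (essentially a bidding ROSCA). The player count, stake, bid ranges, and payout rule are all assumptions for illustration, since the description above doesn't pin down every detail:

    import random

    # Toy simulation of a bidding savings pool (ROSCA-style); all numbers are made up.
    random.seed(1)
    players = ["A", "B", "C", "D", "E"]
    stake = 100                        # each player pays this into the pot every month
    balances = {p: 0.0 for p in players}
    dead = set()                       # players who have already taken a pot

    for month in range(len(players)):
        living = [p for p in players if p not in dead]
        # Each living player bids the 'interest' they are willing to give up this month.
        bids = {p: random.uniform(5, 30) for p in living}
        winner = max(bids, key=bids.get)
        interest = bids[winner] if len(living) > 1 else 0.0  # no competition in the last round

        # Everyone pays the stake. The winner takes the pot minus their bid; the bid
        # is redistributed to the other living players. Dead players pay in full.
        rebate = interest / max(len(living) - 1, 1)
        for p in players:
            if p == winner:
                balances[p] += len(players) * stake - interest - stake
            elif p in dead:
                balances[p] -= stake
            else:
                balances[p] -= stake - rebate
        dead.add(winner)

    for p, b in balances.items():
        print(f"{p}: net {b:+.2f}")

Run it a few times and you can see the trade-off: bidding high gets you the lump sum early, but it costs you the most.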
Some things never change.
Edit: Upon further reading, seems like someone is actively trying to make this happen.
I know because one of the founders of the company I work for used to jump, and he had to give up his hobby for that reason.
> 16. Newton SI. Law of Universal Gravitation. Philosophiæ Naturalis Principia Mathematica, 1687.
This is an amusing didactic example of how the "fine print" can invalidate a study's conclusion.
"Parachuting injuries: a study of 110,000 sports jumps", 1987, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1478603/
All 6 deaths they observed were either failures to activate the parachute (4 cases) or parachute malfunctions (2 cases).
That has got to be one of the most tortured acronyms I've ever seen. That's a bit of a stretch.
Biomedicine has gone whole hog into torturing acronyms to make useful-sounding cohort names.
Most of the names read like someone was clearly trying to squeeze out a cute cohort name at all costs.
If you jump with a parachute, the chance of death goes from 0% at zero feet to approximately 100% at 35 feet. From there it remains near 100% until you reach about 600 feet, which is roughly the minimum altitude a parachute needs to open. Above 1,000 feet, the chance of death bottoms out near 0%.
In patients with head and chest injuries, a 50% mortality rate was estimated to occur for falls from 10.5 m, compared with 22.4 m in those without injuries to the head or chest.
They can't just randomly recruit people and have them jump to their near-certain deaths from ever increasing heights due to pesky medical ethics boards.
This matters because a person who's forced to jump from a significant height out of desperation (e.g. to avoid a fire) is likely to take certain precautions that people who accidentally fall don't take, and those trying to kill themselves intentionally avoid.
There was another case where someone "successfully" burned in from 800 ft (where we did all our jumps from) and survived, either because they did a proper landing or, perhaps, through pure luck.
From what I remember, there were only a couple of jump-related fatalities in the 3 years I was on jump status, and the 82nd did something like a million jumps a year... which says a lot, because if we went 82 days without a fatality we would get a day off, and in 3 years we reached that goal exactly once.
Not sure about that. Although chance of death would be significant, there are many cases of people surviving falls from greater heights.
But really, falling sucks. There is a reason working in solar is more deadly than working in nuclear, and the reason is that occasionally an installer falls off a roof.
But overall, I believe it is better to be conscious. If you are falling from 3 miles up, you can glide maybe 1 mile in any direction. This gives you the chance to try to land in a snow drift, in some mud, maybe a hay bale, or even a thick shrubbery, etc.
So, the theory is that if a cat falls from a height above 6 stories, it either has less severe injuries or is simply dead, so the owner just doesn't bring it to the vet?
I would guess that at that height something must've broken the fall such that the cat suffered almost no injuries. I really don't see how the cat would survive the fall otherwise.
This article is making a comment about that.
If you want an RCT, you need a control group who will be subjected to the horrible disease with standard treatment (and sometimes a group with no treatment), and a group subjected to the experimental treatment. Who will volunteer for this trial? Perfectly healthy people? Or people with the disease?
Like, maybe the measles vaccine is all just placebo effect and good sanitation and handwashing practices. Can we do an RCT where we inject some children with saline instead of vaccine, then see what happens?
Because we can't be certain that the children of crunchy hipsters who don't get immunised and then get measles do so because of the lack of immunisation; it might be that they expose themselves to exotic strains by taking exotic holidays.
I love how much detail they go into.
I love this paper. Whenever you see pop news articles saying "Study shows X is good/bad for you," if you dig into the paper, it's often something like this. When you read the specific details, you realize the study doesn't generalize to "You should/shouldn't do X." But not enough people read the details, so it gets circulated into conventional wisdom :(
>Ethical approval: This research has the ethical approval of the Institutional Review Board of the Beth Israel Deaconess Medical Center (protocol no 2018P000441).
Let’s say you’re going to use some causal model, like a regression adjustment technique. You could, for example, assign people to the treatment group (receives parachutes) and the control group (no parachutes), and then observe who lives and dies, along with a bunch of potential confounders like altitude, age, fitness, whatever.
Fit a logistic regression to predict the outcome (survival) from the treatment (parachute), controlling for the other characteristics. Then read off some effect size and statistical significance.
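For concreteness, here's a minimal sketch of that kind of adjusted analysis in Python, on a hypothetical simulated dataset (the column names, effect sizes, and sample size are all made up for illustration):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical simulated data; in this toy world the parachute genuinely helps.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "parachute": rng.integers(0, 2, n),     # treatment indicator
        "altitude": rng.uniform(0, 4000, n),    # potential confounders
        "age": rng.integers(18, 70, n),
        "fitness": rng.normal(0, 1, n),
    })
    log_odds = 4.0 * df["parachute"] - 0.001 * df["altitude"] + 0.02 * df["fitness"]
    df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

    # Logistic regression of survival on treatment, adjusting for the confounders.
    fit = smf.logit("survived ~ parachute + altitude + age + fitness", data=df).fit()
    print(fit.summary())                              # coefficients and p-values
    print("odds ratio:", np.exp(fit.params["parachute"]))

The coefficient on parachute is the (log-odds) effect size, and its p-value is the "statistical significance" you'd read off.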
Or better yet, and here’s the important part, you could make it a Bayesian logistic regression by putting prior distributions on the regression coefficients and sampling draws from the posterior distribution of the coefficients given the data set and your priors.
So what is the prior on the coefficient for the treatment term (parachutes)? Well, probably pretty damn high. Definitely some strongly informative prior, take your pick of historical data or effectiveness rates of physical safety equipment, whatever.
From this prior, and making some neutral assumptions via the priors on the other weights, you could figure out what effective sample size a data set would need in order to disconfirm your prior (e.g. produce a posterior whose mode for the parachute coefficient is far away from your strong prior). Sort of like a power analysis, but assuming a fake data set that shows nothing but failed parachutes. How much of that silly data would you need, given your prior?
What this would tell you is that you’d need such an insane, physically ludicrous amount of data flying in the face of an obvious prior that there would be no point in running the study. You’re just going to confirm your prior.
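As a back-of-the-envelope sketch of that calculation (not anything from the paper): put a normal prior on the parachute coefficient (a log odds ratio), approximate the likelihood from a fake 2x2 table in which parachutes show no effect at all, and see how much of that null data it takes to pull the posterior mean toward zero. The prior values, survival rates, and the normal-normal approximation are all assumptions chosen purely for illustration:

    import numpy as np

    # Assumed prior on the parachute coefficient (log odds ratio for survival):
    # centred on a huge effect and held fairly tightly.
    mu0, sigma0 = 5.0, 0.5        # prior odds ratio ~ e^5 ≈ 148

    def posterior_mean(n_per_arm, p_survive=0.5):
        """Normal-normal approximation: a fake trial where survival is p_survive in
        both arms, so the observed log odds ratio is 0 with the usual Woolf SE."""
        a = n_per_arm * p_survive          # survivors, treated arm
        b = n_per_arm * (1 - p_survive)    # deaths, treated arm
        c = n_per_arm * p_survive          # survivors, control arm
        d = n_per_arm * (1 - p_survive)    # deaths, control arm
        se2 = 1/a + 1/b + 1/c + 1/d        # variance of the observed log odds ratio (which is 0)
        post_var = 1 / (1/sigma0**2 + 1/se2)
        return post_var * (mu0/sigma0**2 + 0.0/se2)

    for n in [10, 100, 1_000, 10_000]:
        print(f"n per arm = {n:>6}: posterior mean log-OR ≈ {posterior_mean(n):.2f}")

The exact numbers are meaningless; the point is just that you can turn a prior's width into an equivalent amount of contrary data, and then ask whether a data set that large and that uniformly contrary to experience could plausibly exist at all.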
So the real question is how often is this a realistic description of other situations when you want to study a treatment?
That’s the thing, right? That’s what the author kind of wants to be snarky about.
But really, it’s pretty fair to say you don’t have such a strong prior that the study would be futile, even in cases where you sort of do feel like the conclusion is obvious (e.g. taking Tylenol leads to less pain, college kids prefer drinking to homework). Passing some gut test of what’s obvious is different from really betting on a prior so one-sided that a study would be futile.
To me it suggests most of the sort of “duh” RCTs carried out are pretty much fine. Whether or not the study is worth it or is informative would be based on other priorities like cost, licensing or certification requirements, whether it’s of value to specialists who care about splitting hairs on accurate effect size measurement, etc.
It's an argument against clinical medicine being fixated on RCTs as the one and only form of evidence that can be taken seriously, even when, as you note, the Bayesian prior for such a situation is extremely high.
This happens more often in the field than many people might think.
- There’s a history to this analogy and paper within biomedicine and this journal (BMJ), going back to a 2003 article, which I’ll get to.
- Parachutes are a dangerous metaphor in medicine, where almost nothing has an absolute risk reduction of >99% (note: not 100% because, yes, a handful of people have survived falling from altitude without a parachute), especially over a time-frame of a matter of hours.
- This should not be a call to stop attempting RCTs (which is the conclusion some commenters have drawn), but an exhortation to find ways to create better ones when conditions are challenging. Frequently, objections to doing an RCT of the form “how could we withhold X from the control arm!” turn out not to be so obvious once the data are in.
The broader point is that the 2003 “parachute” article in the same journal (BMJ) was frequently misunderstood and misused.
While “parachutes” makes an easily understandable headline, the metaphor is almost totally unrelated to the field of medicine, where we rarely have a shot at doing something as obviously lifesaving as making someone hit the ground at 10 mph instead of 120 mph. The problem is that people have cited the 2003 paper mentioned in the thread to justify a number of interventions that ended up not being better than prior care. The interventions were started in good faith because it was “obvious” to their creators that doing X would be helpful (spoiler: it usually wasn’t, or wasn’t that beneficial).
A lot of this is cribbed from Vinay Prasad, who has a twitter thread about this: https://twitter.com/VPplenarysesh/status/1073298754298556416
He is a controversial figure, but I think he does a good job of hammering home some important skepticism about a great deal of medical literature and practice to a broad audience.
I’d appreciate hearing objections to the above, btw.
Given that nobody would ever do this, it's quite obvious that there's some tongue-in-cheek angle to the story.
It isn't misleading if you can read between the lines.
Presumably they meant "less than", but it's hard to trust any of their conclusions with such sloppy attention to detail.
> participants included in the study were on aircraft at significantly lower altitude (mean of 0.6 m for participants v mean of 9146 m for non-participants; P<0.001) and lower velocity (mean of 0 km/h v mean of 800 km/h; P<0.001).
So essentially they had people jump about 2 feet from an airplane sitting at rest. Of course there were no injuries, and that was the point. Few people will participate in a trial in which the control group could die, and such a trial would be highly unethical.
"Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial"
"Parachutes prevent death when jumping from aircraft: randomized controlled trial"
Also, as my grandfather was a paratrooper: parachuting isn't a risk-free activity even when the main canopy deploys properly, e.g., broken limbs on landing. And it's remotely possible to survive a fall without a parachute from terminal-velocity height, e.g., Vesna Vulović. Edit: <- Edge cases contrary to the obvious.