

Parachute use to prevent death and major trauma due to gravitational challenge - gmac
http://www.bmj.com/content/327/7429/1459

======
tokenadult
Some of the comments posted by HN participants during the first hour after
this 2003 article was submitted bring up the main point: clinical trials
should be informed by prior probabilities. That's why some medical
researchers, while
not saying that clinical trials are a bad idea, identify the best practice as
"science-based medicine" (medicine that takes into account prior probabilities
and basic principles of science when evaluating clinical trials) rather than
"evidence-based medicine" (which many observers take to be a designation for
relying on clinical trials to evaluate treatments). The "About" page on the
Science-Based Medicine group blog site puts it well:

<http://www.sciencebasedmedicine.org/index.php/about-science-based-medicine/>

"Good science is the best and only way to determine which treatments and
products are truly safe and effective. That idea is already formalized in a
movement known as evidence-based medicine (EBM). EBM is a vital and positive
influence on the practice of medicine, but it has limitations and problems in
practice: it often overemphasizes the value of evidence from clinical trials
alone, with some unintended consequences, such as taxpayer dollars spent on
'more research' of questionable value. The idea of SBM is not to compete with
EBM, but a call to enhance it with a broader view: to answer the question
'what works?' we must give more importance to our cumulative scientific
knowledge from all relevant disciplines."

The previous comments posted here mentioning what we can know about falling
human bodies from first principles of how free fall in the earth's near
gravitational field works, or by experiments with dummies including
instruments, or by historical experience with aircraft disasters, correctly
point out that randomized controlled trials should be designed with all the
relevant science in mind. Further, clinical trials should be designed both to
gain knowledge of the effectiveness of varying treatments and to minimize risk
to human subjects of treatments (jumping out of an airplane without a
parachute, for example, or using acupuncture, for another example) with no
strong evidence of effectiveness.
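The role of prior probabilities here can be made concrete with a toy
beta-binomial update (a sketch with purely illustrative numbers, not taken
from the thread or any study): with an overwhelming prior, as for parachutes,
even a tiny trial barely moves the posterior, while a speculative treatment
with a skeptical prior needs strong trial evidence to budge.

```python
# Toy beta-binomial update: how prior probabilities interact with trial data.
# All numbers are illustrative, not from any real study.

def posterior_mean(prior_success, prior_failure, successes, failures):
    """Posterior mean of a Beta(prior_success, prior_failure) prior
    after observing the given trial outcomes."""
    a = prior_success + successes
    b = prior_failure + failures
    return a / (a + b)

# "Parachute-like" intervention: overwhelming prior that it works.
strong = posterior_mean(999, 1, successes=4, failures=1)

# Speculative treatment: flat 50/50 prior, same small trial.
flat = posterior_mean(1, 1, successes=4, failures=1)

print(f"strong prior + small trial -> {strong:.3f}")  # prints 0.998
print(f"flat prior   + small trial -> {flat:.3f}")    # prints 0.714
```

The same five-patient trial is nearly irrelevant under the strong prior but
dominates the flat one, which is the asymmetry the parachute joke trades on.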

See my all-time favorite link to share in HN comments, LISP hacker and rocket
scientist (and now director of research at Google) Peter Norvig's article
"Warning Signs in Experimental Design and Interpretation"

<http://norvig.com/experiment-design.html>

on how to interpret scientific research.

AFTER EDIT: In answer to the question, "Where is the evidence that EBM or SBM
helps?", the evidence is all around you, all over the world. I've lived in
more than one country, and I have had access to the research library of a
major university health sciences center since I was in high school. The
incremental improvements in human lifespan at all ages, young and old, and
the reduction in disease burden from all kinds of diseases

<http://www.scientificamerican.com/article.cfm?id=longevity-why-we-die-global-life-expectancy>

are an outcome caused in large part by better understanding of how to prevent
or treat disease, an understanding that comes about from science-based
medicine.

The article I just submitted today

<http://news.ycombinator.com/item?id=4644708>

about systematic reviews of acupuncture research in China illustrates the
drawbacks of doing anything but science-based research on human disease
prevention and treatment. People can imagine that a lot of speculative
treatments work, but the way to demonstrate that one treatment or another
works is to investigate it carefully with the underlying science in mind.

~~~
pella
Where is the evidence that EBM or SBM helps?

~~~
praptak
You mean where's the _evidence_ that _evidence_ based medicine works?

I could produce some evidence but I'm afraid that then you'd ask me where's
the evidence that my evidence actually supports the hypothesis that evidence
based medicine actually helps.

~~~
nandemo
That's the problem of induction, a.k.a. the "how do we know the sun will rise
tomorrow?" problem.

<http://en.wikipedia.org/wiki/Problem_of_induction>

------
cstross
Best understood in context as a combination of medical humour, and a critique
of evidence-based medicine. In this case pointing out the lack of controlled
randomized trials of parachutes, and how this would affect the evaluation of
the efficacy of parachutes as a life-saving medical intervention.

~~~
finnw
This must once have been true of seatbelts too, before all the experiments
that were done with dummies and cadavers.

One could argue that similar experiments should be done with parachutes:
Throwing dummies and cadavers out of aircraft onto various surfaces (rock,
grass, woodland etc.)

Interesting engineering problem too (you need a robot to pull the ripcord, but
then it needs to get out of the way so it does not interfere with the outcome
by causing additional injuries, adding too much weight etc.)

~~~
cstross
The problem here is that we have a condition ("jumping out of aeroplanes")
where, without an intervention ("give the patient a parachute") the
consequences are almost certainly fatal (yes, a handful of people have
survived falls from altitude without parachutes: but a fatal outcome is
sufficiently certain that defenestration has historically been used as a means
of execution).

It's a major ethics no-no to expose healthy patients to potentially-fatal
environments. It's also a major ethics no-no to withhold known life-saving
treatment in the name of continuing a trial. So we can't contrive a proper
randomized double-blind placebo-controlled test of the life-saving efficacy of
parachutes, where we push people out of planes repeatedly to see how many
survive without a parachute vs. _with_ a parachute.

We can't test parachutes any more than we can contrive an evidence-based test
of the efficacy of being-rescued-from-burning-buildings-by-firefighters, by
arranging for a city's fire department to randomly _not_ rescue half the
victims of house fires, so that we can subsequently compare their survival
rate with the survival rate of those who were rescued.

Basically, evidence-based assessment of medical treatments breaks down when it
hits the edge condition defined by the patients automatically dying if not
treated. _At best_ we can compare different types of parachute, or different
types of emergency treatment, and see which has a lower associated mortality
rate. But we can't, ethically, compare treatment/no-treatment if there's a
high likelihood of no-treatment resulting in death.

~~~
knowtheory
But the thing is that you can use proxies to study this to an extent.

Amongst rock climbers there is a naturally occurring population of free
climbers, and it is possible to compare the type and severity of injuries
between different types of rock climbers (although I'm guessing your n is
going to be low).

The safety of parachutes themselves is an interesting question because
they're not used as a preventative mechanism; I think that's the primary
difference between them and climbing ropes/harnesses/pitons.

Edit: I just noticed that I didn't address the randomized component. I'd still
argue that evidence based medicine can actually include arguments that aren't
randomized trials. That said, randomized trials are still one of the strongest
sort of argument that can be made. As noted above, seatbelts are another sort
of device that I don't think we need randomized trials to test the efficacy
of.

p.s. Just finished (and enjoyed) the Apocalypse Codex. The Laundry Files is
one of my favorite series of books as someone who's always believed that
empiricism and existentialism are reconcilable. Please write more :)

~~~
cstross
_Please write more :)_

I'm writing #5 right now. Publication due in mid-2014.

~~~
knieveltech
Please excuse the hijack, anything in the pipeline for release in 2013?

~~~
gjm11
<http://www.antipope.org/charlie/blog-static/2012/10/still-under-siege.html>
mentions "Neptune's Brood", a space opera set 5k years after
"Saturn's Children" but it sounds like it may not actually be coming out until
next year, along with three volumes of Merchant Princes (the existing books,
in omnibus editions with minor changes) and maybe a UK edition of "The rapture
of the nerds".

~~~
cstross
"Neptune's Brood" comes out in July 2013. (It's a high-concept space opera and
a meditation on the 2007-08 liquidity crisis. Also, nominally, a sequel to
2008's Hugo-nominated "Saturn's Children".) "The Rhesus Chart" is in the
pipeline for July 2014. The other stuff is all, effectively, reprints.

------
Zarkonnen
This is kind of making me froth: of course parachutes are tested. Lots and
lots of R&D time has gone into developing better crew/passenger escape system
for aircraft.

The authors are simply whining about evidence-based medicine, which is now
finally being pushed after a realisation that a lot of the medical
interventions we use are based on gut feeling, hearsay, or ancient, tiny,
badly-implemented trials.

Human beings are extremely good at seeing patterns where there are none. We
need good, solid stats to keep us grounded in reality.

~~~
gwern
Yeah. The problem is always that very few medical procedures indeed rise to
the prior that parachutes have... but anti-EBM people somehow have a
surprisingly long list of such procedures, a list that rather resembles the
list they would've made _before_ EBM began insisting on RCTs.

~~~
beagle3
I like EBM. I would love to live in a world in which it is practiced properly.
One of the biggest problems with EBM, though, is that what is often
advertised as EBM actually isn't.

My ex was a medical doctor, and for fun I used to read NEJM and BMJ articles
through her subscription. If you trace through references, you often find a
(reasonably rigorous) study with a reasonable effect observed in a group of
40 Norwegian women aged 40-50; which in a later article referencing it is
assumed to apply to all women in that age group; and through the years is
assumed to apply to all women in all age groups. And that's the part that is
"easy" to trace (except no one ever does, and it's not really easy).

Additionally, the statistical evidence is inherently weaker than is presented
(in a way which no one can actually evaluate) due to the lack of accounting
for negative results and unreported trials.
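That weakening from unreported trials can be sketched with a small
simulation (illustrative numbers only, not from any real literature): if only
"significant" positive results get written up, the published average effect
overstates the true one.

```python
# File-drawer sketch: publish only positive significant trials and the
# published mean effect exceeds the true effect. Illustrative numbers.
import math
import random

random.seed(0)
TRUE_EFFECT, SIGMA, N, TRIALS = 0.2, 1.0, 25, 5000
threshold = 1.96 * SIGMA / math.sqrt(N)  # two-sided z-test cutoff on the mean

all_means, published = [], []
for _ in range(TRIALS):
    m = sum(random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)) / N
    all_means.append(m)
    if m > threshold:  # only positive significant results reach the journals
        published.append(m)

print(f"true effect:                {TRUE_EFFECT}")
print(f"mean of all trials:         {sum(all_means) / len(all_means):.3f}")
print(f"mean of 'published' trials: {sum(published) / len(published):.3f}")
```

The full set of trials averages out near the true effect; the "published"
subset lands well above it, which is the inflation a reader of the journals
never sees.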

Also, the accepted logic in EBM is not acceptable from a math perspective.
Seth Roberts's recent blog post gives a striking example:

<http://blog.sethroberts.net/2012/10/11/jama-jumps-to-conclusions-about-vitamin-d/>

Tell someone about YOUR (n=1, but better controlled than most medical
tests!) experiment with Vitamin D, and they'll likely counter with "Oh, but
that has been refuted. It's just placebo". That's a big percentage of how EBM
is used in practice. And that's the part I don't like.

~~~
gwern
> Seth Roberts's recent blog post gives a striking example:
> <http://blog.sethroberts.net/2012/10/11/jama-jumps-to-conclusions-about-vitamin-d/>
> Tell someone about YOUR (n=1, but better controlled than most medical
> tests!) experiment with Vitamin D, and they'll likely counter with "Oh, but
> that has been refuted. It's just placebo". That's a big percentage of how
> EBM is used in practice.

Actually, it's funny that you bring that up. Yes, my own vitamin D self-
experiments were more rigorous than Roberts's vitamin D experiments/anecdata,
and both of my self-experiments used blinding precisely to avoid the placebo
refutation, so anyone trying to refute them so simplistically would be
engaged in poor reasoning; but Roberts is not himself innocent of poor
reasoning!

I pointed out in his _previous_ post on that study that his criticism makes no
sense since the injection happened once a month, which implies the following
dilemma (if we believe his criticism): the effect from bad timing should
either be undetectable due to contaminating only 1/31 of the data (1 day out
of each month), or if the negative effect is persistent over many days,
renders every single anecdote & self-experiment (including mine) completely
unreliable trash since none of them were expecting an effect which
contaminates so many days. So either his criticism is wrong or his entire
collection of data is worthless. (Besides that, plenty of vitamin D studies
have shown benefits while completely ignoring administration time. What's
sauce for the goose is sauce for the gander.)
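The dilution arithmetic in that dilemma is simple enough to sketch (the
per-day effect size is a made-up placeholder): a bad-timing effect confined
to the injection day is diluted 31-fold in the monthly average, while a
persistent effect contaminates most days of every month.

```python
# Back-of-the-envelope version of the dilemma above. The per-day effect
# of -1.0 is an arbitrary illustrative unit, not a measured quantity.

def average_effect(per_day_effect, affected_days, days_in_month=31):
    """Mean daily effect over a month when only some days are affected."""
    return per_day_effect * affected_days / days_in_month

one_day = average_effect(per_day_effect=-1.0, affected_days=1)
persistent = average_effect(per_day_effect=-1.0, affected_days=20)

print(f"injection-day only: {one_day:.3f} of the full effect")    # prints -0.032
print(f"persistent effect:  {persistent:.3f} of the full effect")  # prints -0.645
```

Either branch is uncomfortable for the criticism: a one-day effect is too
diluted to detect, and a persistent one undermines the self-experiment data
just as badly.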

So maybe EBM is abused in practice... but I don't see your examples showing
this.

~~~
beagle3
> So maybe EBM is abused in practice... but I don't see your examples showing
> this.

I was giving a (paraphrased) example from my history - it was 10 years ago,
and I didn't keep my notes.

But Seth's example is typical: a result (negative in this case) applicable
very narrowly is assumed to apply very broadly (by a top practitioner in the
field, and his view is unlikely to be rejected; most readers will take his
conclusion without trying to confirm or criticize the rationale).

> the effect from bad timing should either be undetectable due to
> contaminating only 1/31 of the data (1 day out of each month), or if the
> negative effect is persistent over many days, renders every single anecdote
> & self-experiment (including mine) completely unreliable trash since none of
> them were expecting an effect which contaminates so many days.

I don't follow you; in this experiment, they gave 100,000 IU once a month,
meaning that there must be a sharp rise in the beginning, followed by a decay
which we believe (from other experiments) to happen slowly over a month.

If the effects are modulated by the _delta_ in vitamin D, rather than the
_level_ (e.g. if there's a phase-locked-loop somewhere), then Seth's reasoning
applies perfectly well: In the JAMA experiment, there is a (very small)
constant negative delta, and nothing for a biological PLL synthesizer to "lock
on" to - whereas with your experiment (and other anecdata collected by Seth),
the sleep cycle can be locked onto the Vitamin D delta (with 12 hour delay),
and all would make sense.

> so anyone trying to refute them so simplistically would be engaged in poor
> reasoning;

Yes, that's exactly my point. Most people who think they follow EBM are
actually engaged in poor reasoning, and even worse meta-reasoning, as in:
"The fact that other people did not notice that this evidence is inadequate
is evidence that YOUR reasoning is flawed". Yes, more than one person with an
MD/PhD told me that if what I claim about how B12 is produced were right,
they would have learned about it in school; even after they relented and
looked it up themselves!

Furthermore, most placebo-controlled double-blinded experiments are not
actually blinded: the substance or procedure being tested is known to have an
effect (e.g. dry mouth) but it is not known if it has the _desired_ effect.
The placebo is known to have no effect (e.g. sugar pills). The patient
therefore knows when they got the real thing (though they're still in the dark
if they got the placebo).

The only way this is ever controlled for (if at all), is by comparing to an
accepted substance/procedure used for the same purpose - which might, itself,
be subject to the same kind of bias.

So, to cut a long story short ... The ideal of EBM is nice. The practice is
very, very far away, but is assumed by most EBM proponents to have all the
properties of the ideal.

~~~
gwern
> In the JAMA experiment, there is a (very small) constant negative delta, and
> nothing for a biological PLL synthesizer to "lock on"

Exactly. With nothing to lock onto, there should be no effect or a 'very
small' effect, since people will continue their normal circadian rhythms and
light exposure levels, which zeitgebers will, just as usual, adjust daily -
if that delta over the entire month even matters, which it probably doesn't,
since vitamin D is fat-soluble and should be released gradually at need from
fat stores. To say that there should be a strong benefit to colds and that
this tiny non-existent effect explains why we don't see it (without also
explaining why this doesn't eliminate all other benefits observed from
vitamin D, since use of injections is common in vitamin D experiments) is
just totally obvious special pleading by Roberts.

(And yes, the relevant factor cannot be the _level_ , because that is
flagrantly inconsistent with my self-experiments where the level was kept
constant and only the timing was varied.)

> Furthermore, most placebo-controlled double-blinded experiments are not
> actually blinded: the substance or procedure being tested is known to have
> an effect (e.g. dry mouth) but it is not known if it has the desired effect.
> The placebo is known to have no effect (e.g. sugar pills). The patient
> therefore knows when they got the real thing (though they're still in the
> dark if they got the placebo).

'Most'? Cite.

~~~
beagle3
I guess I can't support 'most'. It did happen in two out of three experiments
for which I actually had intimate details.

But this is one of those things that will never get a trustworthy citation,
and yet is probable (replace "most" with "significantly many" to increase
probability according to your own probability measure).

A drug tested on humans today has, with high probability, already been shown
to be safe and, most importantly, effective in an animal model - or is
already known to be effective and safe in humans for another use.

Note the term "effective". That means that it is known to have a measurable
effect. For many drugs, it is therefore probable that the (human) subject
receiving the treatment can tell that it isn't placebo.

Specifically, quite a few psychiatric drugs are known to cause obesity,
compulsive thoughts and lactation (even in males). Niacin is known to cause
flushing (at lower levels, it is not externally visible, but the sensation is
still there). Viagra was discovered during an attempt to treat hypertension.
I'm sure the subjects could tell it from the placebo. Similarly for Minoxidil.

To actually assess how prevalent this is, you need at least access to notes
made during the experiment, or worse, you need to do measurements and stats
that weren't made in the first place (and doing so may only make the results
less promising, which is a problem in for-profit development).

------
wac
Can I just point out that the BMJ's Christmas issue always includes
'humourous' pieces of research:

In 2011:

"Is 27 really a dangerous age for famous musicians? Retrospective cohort
study"

<http://www.bmj.com/content/343/bmj.d7799>

or

"Orthopaedic surgeons: as strong as an ox and almost twice as clever?
Multicentre prospective comparative study"

<http://www.bmj.com/content/343/bmj.d7506>

------
raverbashing
Yes.

All evidence of parachute efficacy is anecdotal (as are the occasions where a
parachute failure "allegedly" caused the death of a person).

Enough with this pseudoscientific babble that parachutes work!!

I love this kind of humor. EBM is certainly important, but some people try to
make everything out of it.

------
praptak
Media report: "SCIENTISTS CONFIRM: PARACHUTES INEFFECTIVE" "JUMPING OFF A
PLANE WITHOUT A PARACHUTE CONSIDERED SAFE"

~~~
joelthelion
That's basically how the media report every little doubt in global warming
research.

------
csmattryder
I love our sense of humour, very British:

"Contributors GCSS had the original idea. JPP tried to talk him out of it. JPP
did the first literature search but GCSS lost it. GCSS drafted the manuscript
but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it
serves him right."

------
stereo
This is actually quite relevant for bicycle helmets. There is no evidence that
they actually make you safer, and the debate can get very passionate.

~~~
deltaqueue
Maybe I'm missing the point of this entire thread and don't truly grasp the
argument for EBM vs. correlation, but can someone explain why a statistically
significant correlation for this kind of efficacy is not evidence in and of
itself?

<http://www.sciencedirect.com/science/article/pii/S0001457500000488>

~~~
streptomycin
Those aren't prospective randomized controlled trials; they're retrospective
observational studies. They are evidence of efficacy, no doubt, but not
particularly convincing evidence, for all the normal reasons observational
studies are often unconvincing, and maybe even more so in light of some
contradictory results in the literature as well as the general concept of
risk compensation (i.e., it's not like a parachute, where it's very clearly
obvious that it works).

------
paulsutter
Where's the post from tokenadult about the methodological shortcomings of this
paper?

~~~
tisme
Right here:

<http://news.ycombinator.com/item?id=4644687>

------
wcoenen
I see the point they're trying to make, but randomized controlled trials would
still be valuable to determine the relative effectiveness of different
parachute designs.

------
bicknergseng
If Monty Python were allowed a medical license....

"Contributors GCSS had the original idea. JPP tried to talk him out of it. JPP
did the first literature search but GCSS lost it. GCSS drafted the manuscript
but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it
serves him right."

------
zandor
For those who actually want to see how some canopies are tested, check out
this video from the R&D department of Performance Designs.

<http://www.youtube.com/watch?v=ERRrUcyOiE4>

------
pella
A humorous article in the BMJ that describes evidence-based medicine (EBM) as
a religion:

<http://www.ncbi.nlm.nih.gov/pmc/articles/PMC139053/>

------
linker3000
...for all values of gravity?

~~~
bilbo0s
Hilarious!

Doesn't it get just RIDICULOUS reading some EBM stuff?

------
aaron695
So what's the catch? Why is it wrong?

Is it that we know lesser falls cause death, and as such we can extrapolate
out?

Or is it that the differences are so pronounced there is no need for
randomised placebo trials?

~~~
glenra
It's not really wrong. It makes the correct point that randomized controlled
trials are only one kind of useful knowledge. It's possible to know stuff that
can't be tested that way, so blind insistence on EBM is likely to throw out
some babies along with all that bathwater.

Seth Roberts often makes similar points in his blog - people running
experiments on themselves of size N=1 can often discover stuff that is useful,
even vital, but can't be discovered or demonstrated in a big double-blind
study.

Imagine that we had no scruples and were willing to throw people out of
aircraft with dummy or with real parachutes. The ones who had parachutes
would still _notice_ when the parachute opened and slowed their fall. So we
can't rule out the placebo effect! :-)

~~~
trhtrsh
You could link the ripcord to release a toxin to cause unconsciousness.

------
ck2
<http://en.wikipedia.org/wiki/Annals_of_Improbable_Research#Notable_articles>

------
clueless123
Sign at the loading ramp at Zephyrhills Skydiving center: "Gravity is the law,
obey the law!"

