
Regression to the mean is the main reason ineffective treatments appear to work - Amorymeltzer
http://www.dcscience.net/2015/12/11/placebo-effects-are-weak-regression-to-the-mean-is-the-main-reason-ineffective-treatments-appear-to-work/
======
fiatmoney
Also known as the Drill Sergeant Paradox.

Imagine that shouting has no actual effect on performance, but it is
traditional to shout at underlings when they do something particularly poorly.
When your trainees screw up, you berate them - and afterwards they _actually
do_ tend to do better. Unfortunately, this is because the screwup is more
often than not a random variation, and the improvement is due to regression
to the mean, not the treatment. Conversely, praising them when they do well
(again, assuming no underlying effect) actually seems to _worsen_ their
performance.
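The paradox falls out of a short simulation. A minimal Python sketch, assuming performance is pure noise around a fixed mean (the 1.5-sigma cutoffs for "screwup" and "standout" are illustrative choices, not from the comment):

```python
import random
import statistics

random.seed(0)

def performance():
    # Pure noise around a fixed skill level: shouting and praise do nothing.
    return random.gauss(0.0, 1.0)

bad_first, bad_next = [], []    # runs that earned a berating, and the run after
good_first, good_next = [], []  # runs that earned praise, and the run after

for _ in range(100_000):
    first, second = performance(), performance()
    if first < -1.5:        # unusually bad run -> the sergeant shouts
        bad_first.append(first)
        bad_next.append(second)
    elif first > 1.5:       # unusually good run -> praise
        good_first.append(first)
        good_next.append(second)

# The berated "improve" and the praised "decline", purely because the next
# run regresses back toward the mean of 0.
print(f"berated:  {statistics.mean(bad_first):+.2f} -> {statistics.mean(bad_next):+.2f}")
print(f"praised:  {statistics.mean(good_first):+.2f} -> {statistics.mean(good_next):+.2f}")
```

With no treatment effect at all, the post-shouting average climbs from roughly -2 back toward 0, and the post-praise average falls from roughly +2 back toward 0.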

~~~
brewdad
I wonder, has anyone ever tried berating the successes and praising the
failures?

~~~
nikanj
The VC culture? If you define failure as "didn't make any money" and success
as "made some money". Recently there was an article lamenting the "early exit
syndrome" of startups, where they prefer getting some money now over
potentially getting a lot more (or nothing) later.

~~~
p4wnc6
I think you might be referring to the early exit syndrome of Midwestern
startups [1].

The point of the article was not to say that exits are bad, only that to
develop a cultural milieu of technology in a particular region, there is a
need for at least some companies to stick it out and remain determined to
become long-lived and predicated upon a chosen workplace culture.

If companies _never_ do that, then a given region can't generate enough
inertia to remain a competitive place, and this has all sorts of bad effects
on the employment market in that area.

I thought Michael O. Church had a great extension of that article in his
recent post [2]. He argues that the broader short-term goals of VCs in general
are misaligned with the fundamental value creation premise that undergirds the
idea of start-up culture (or at least what _was_ start-up culture when the
term was new).

[1] [https://news.ycombinator.com/item?id=10579370](https://news.ycombinator.com/item?id=10579370)

[2] [https://michaelochurch.wordpress.com/2015/11/17/its-not-earl...](https://michaelochurch.wordpress.com/2015/11/17/its-not-early-exit-disease-just-exit-disease-thats-killing-innovation-outside-of-and-inside-silicon-valley/)

------
bigchewy
This is, unfortunately, rampant in healthcare. The natural variation amongst
people, as well as the natural variation of our health throughout our lives,
makes actually analyzing healthcare outcomes incredibly difficult.

90%+ of published outcomes can be invalidated by simply looking at the
published data. If you want some chuckles, read blog posts by Al Lewis ripping
into research published by companies touting their own performance. He's
acerbic and condescending but also, mostly, correct.

~~~
api
In a bio class a professor summed it up with "the hardest thing about medicine
is being able to tell whether anything actually works."

I once told a friend of mine who's into alt.med that yes, I think a lot of
alt.med is bunk, but percentage-wise it's probably not _that much_ more bunk
than "mainstream" medicine.

It's easy for emergency medicine. Customer comes in with a bullet wound, leaves
alive and healthy and without the bullet wound. But outside of domains like
that it's crushingly hard to tell the difference between treatment and noise.
Compounding the problem is the fact that humans aren't a standardized item
that can be compared against reference performance metrics. We're all
different, and our genomes and environments are constantly changing.

~~~
bonoboTP
> alt.med is bunk but percentage wise it's probably not that much more bunk
> than "mainstream" medicine

Oh yes it is!

First of all, the actual effort is there in real medicine, while alternative
medicine just doesn't care about evidence.

Also, most real drugs work as intended. The fringe cases get emphasized a lot,
but still, the basic things, like antibiotics, painkillers etc. work very
well. Medicine has advanced A LOT in the last 100-150 years.

Fake medicine, like homeopathy, is nothing like this. Don't mislead people.

~~~
Asbostos
I think it depends on the medicine. Like fixing a bullet wound, some medicines
obviously work and you don't need a study to know that (for blood pressure,
insulin, severe pain, etc). But those are perhaps the ones we take for granted
and don't really include when we think about the idea of medicine being bunk.

What's more suspicious is medicine that people use just because they're in a
culture of taking medicine for everything. Relief for colds, coughs, mild
pain, sore throat, psychological problems, etc. These illnesses have a strong
get-better-anyway effect so it's very easy for people to believe that whatever
they took cured them. Don't believe me? Go to China and you'll see people
taking herb drinks for the same illnesses and having just as much faith in
their effectiveness. Other popular cures include drinking warm water and
"growing a pair".

~~~
scott_s
I'm not sure if your second paragraph is aimed at mainstream medicine, or
"alternative" medicine. Mainstream medicine does not claim to have a cure for
a cold. Mainstream medicine masks the symptoms so that people are less
miserable while the cold runs its natural course.

What's curious with your list is that one is a disease ("cold"; the
rhinovirus) while the others are _symptoms_ ("cough, mild pain, sore throat"),
and another is an entire class ("psychological problems"). It's hard to
respond to what you said because you called them all "illnesses".

------
ashearer
So the research that established the placebo effect—an effect that’s well-
known, and widely regarded as an illustration of the importance of control
groups in research—itself had no control group? That’s incredible.

~~~
_dark_matter_
I don't know about that. There seems to be a lot of research investigating the
difference between placebo and no-treatment. The publication [0] mentioned in
the article surveys all the studies that tested placebo treatment, and they
all included a no-treatment group.

Here's an excerpt:

>We included randomised placebo trials with a no-treatment control group
investigating any health problem

[0] [http://www.ncbi.nlm.nih.gov/pubmed/20091554](http://www.ncbi.nlm.nih.gov/pubmed/20091554)

~~~
ashearer
I meant the original 1955 publication of “The Powerful Placebo” that put the
idea of the “placebo effect” into the public consciousness. (Use of the term
really takes off starting then [1].)

The more recent (2010) study linked from that article supports its conclusion,
which wouldn't need to be stated unless it contrasted with what the author
calls "pharmacological folklore" from the original publication - folklore he
says he "took literally" for many decades after.

The point being that the 1955 version of the idea is currently much more
widely known than modern no-treatment comparisons, and most people don't
realize that its research basis is so flimsy.

[1] [https://books.google.com/ngrams/graph?content=placebo+effect...](https://books.google.com/ngrams/graph?content=placebo+effect&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cplacebo%20effect%3B%2Cc0)

------
ajarmst
At school, I had a friend (studying to be a Physician) who referred to
Chiropractic as "applied regression to the mean".

------
nonbel
>"The only way to measure the size of genuine placebo effects is to compare in
an RCT the effect of a dummy treatment with the effect of no treatment at
all."

This is just wrong; the following procedure would be much more convincing than
an RCT:

Decide what you are measuring, collect a group of people A, measure it, give
them the placebo, measure it again, and record the results. Then repeat under
different circumstances with a different group of people B. Maybe even go back
and do A again. Are you getting consistent measurements? Good.

Now come up with an explanation for why the results have that distribution,
sources of variation, etc. Use that to quantitatively predict what should be
seen in a new group of people C. Now go check group C. Did it match the
predictions? If so, good, keep at it.

Make a prediction for group D. Do the group D results match? If so, you are
probably interpreting the results right.
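The procedure above can be sketched numerically. In this toy Python version every parameter is invented for illustration: pain scores are drawn from a stable population, patients enroll only on an unusually bad day, and the placebo itself does nothing:

```python
import random
import statistics

random.seed(1)

def placebo_study(n=400, pop_mean=50.0, sd=10.0, enroll_above=60.0):
    """Measure, give placebo, measure again. Scores are stable-population
    noise; patients enroll only when today's score is unusually high."""
    pairs = []
    while len(pairs) < n:
        before = random.gauss(pop_mean, sd)
        if before > enroll_above:               # only "bad days" enroll
            after = random.gauss(pop_mean, sd)  # the placebo changes nothing
            pairs.append((before, after))
    return pairs

def mean_change(pairs):
    # Negative = scores dropped, i.e. an apparent "improvement".
    return statistics.mean(after - before for before, after in pairs)

# Groups A and B: are the measurements consistent?
change_a = mean_change(placebo_study())
change_b = mean_change(placebo_study())

# Explanation: enrollment selects bad days, so scores should fall back
# toward the population mean. Quantitative prediction for a fresh group C:
predicted = (change_a + change_b) / 2
change_c = mean_change(placebo_study())
print(f"A {change_a:+.1f}, B {change_b:+.1f}, "
      f"predicted C {predicted:+.1f}, observed C {change_c:+.1f}")
```

Groups A, B, and C all show a large apparent "improvement" from a treatment that does nothing, and group C lands close to the prediction - the consistency check works, but without a no-treatment group the selection effect is easy to mistake for a placebo effect.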

~~~
ashearer
How does that address the get-better-anyway effect, without a group to measure
it?

~~~
nonbel
Because you are predicting precisely how much improvement can be accounted for
by the placebo effect - no more, no less (within error bounds, of course).

The get-better-anyway effect is too vague. Say theory A predicts value x is in
the range [3.1, 3.3] and you observe x = 3.36 +/- 0.1. Also, there is a theory
B that predicts x > 0. Theory A has been much more severely tested, so it is
supported by the evidence. Theory B, meh.

~~~
ashearer
Say theory A (the placebo effect theory) predicts a 25% improvement, because
it's been measured in the past with that result. Then there's also theory B
(the get-better-anyway theory, or null hypothesis), which wasn't measured
initially but in recent studies shows a similar 25% improvement.

I don't understand how more studies testing theory A and getting 25%, while
excluding theory B from testing, can show that A should be preferred over B.
The rationale that A has been tested more thoroughly and therefore has a
smaller confidence interval sounds circular.

~~~
nonbel
>"Say theory A (the placebo effect theory) predicts a 25% improvement, because
it's been measured in the past with that result."

That is not a theory. You have to come up with an explanation (a theory) for
why it should be that value. Then from that theory you deduce predictions for
what it should be under other circumstances. If the predictions are
consistently close to the observations it indicates you are onto something.

~~~
ashearer
The discussion is about how to test the placebo effect (which corresponds to
your example's "theory A"), so the proposed alternative to RCT would have to
include the process of first deducing the effect size. For this problem, how
would you realistically do that without referring to past experimental
results? There are so many unknowns, including psychology. The 1955 paper was
itself a survey of experimental results.

I expect it to be much easier to come up with a statistical prediction for the
get-better-anyway effect (regression to the mean), with more specificity than
x > 0. It would be needed regardless, in order to exclude the null hypothesis.

~~~
nonbel
My proposal includes collecting experimental data. It is just not in RCT form
(which was claimed to be "the only way").

There is no reason to ignore RCT results, but that is not the only way to get
the necessary data (which is what I disagreed with). An RCT is also not
sufficient on its own; you need the theory that explains the results as well.

~~~
nkurz
As an untrained outside observer (but one who has been thinking a lot recently
about the statistics of causality) I'll chime in to say that I'm siding with
'ashearer' here. As I'm reading it, your approach doesn't make sense to me. If
you want to distinguish between two treatments (placebo=treated and do-
nothing=untreated) you need to measure outcomes for both treated and untreated
individuals, and these individuals need to be "exchangeable" to some degree.

Perhaps when you say in the first post "give people placebo", you mean "give
the placebo to some subset of the individuals leaving the remainder
untreated"? I agree that you can often draw useful conclusions from an
observational study as long as some comparable individuals receive each
treatment, and as long as you can control for the initial differences between
the treated and untreated. But if you are saying that you don't need to look
at the results for the untreated group at all, then I think you are mistaken.

~~~
nonbel
>"Perhaps when you say in the first post "give people placebo", you mean "give
the placebo to some subset of the individuals leaving the remainder
untreated"?"

I meant just placebo. I am using the predictive skill of the theory behind the
explanations to distinguish between them; this is common in
physics/astronomy/etc. As noted by ashearer, the placebo effect has been
studied for a long time by seeing how two groups differ. There is nothing
wrong with RCTs, but does near-sole reliance on them (and their observational
analogue) appear to have led to cumulative knowledge and growth of
understanding?

~~~
ashearer
Both hypotheses in this case are plausible. Beyond that, it would be great to
have the tools to numerically calculate a psychological phenomenon such as the
placebo effect the way a physicist calculates the energy of a particle. But
we're very far from having them, and in the meantime, I don't know of a
practical alternative to RCTs that would expand our knowledge at least as
much. (I think we can both agree that over-reliance on significance testing is
a problem, though not one limited to RCTs.)

------
tokenadult
This factual background deserves to be better known by Hacker News
participants, who often avidly discuss statements about medical research
without much background in medical research methodology. You, your friends,
and all of us deserve to have new proposed treatments and old standard
treatments evaluated thoroughly for safety and for effectiveness.

A really good source for all of us to follow about the latest research on
placebos in human medicine is the group-edited blog Science-Based Medicine,[1]
which is edited by several active medical researchers and also includes
lawyers, pharmacists, and even a reformed chiropractor among its contributors.
The blog includes many informative posts about the placebo effects[2] observed
in human medical research that help illuminate issues we all discuss a lot
here on Hacker News.

Some of my favorite recent articles on placebo effects from Science-Based
Medicine include "Are Placebos Getting Stronger?"[3] (21 October 2015) by
Steven Novella, a neurologist; "Placebo by Conditioning"[4] (29 July 2015),
also by Dr. Novella; "Should placebos be used in randomized controlled trials
of surgical interventions?"[5] (25 May 2015) by David Gorski, a surgeon and
cancer researcher; and "Placebo, Are You There?"[6] (24 February 2015), a
French-language article translated by Harriet Hall. All go a
long way toward explaining just what has been shown, and just what has not
been shown, by previous research on placebo effects in human medicine.

"Placebo medicine" has so far only been shown to have any effect at all on
self-reported subjective patient symptoms such as pain and nausea that ebb and
flow in the natural course of untreated disease. If you broke your arm, you
wouldn't seek placebo treatment from your doctor, but actual effective
treatment. If you have an injury that causes chronic pain (as I do), you are
best off looking for the best available medically verified standard treatment,
and not looking for any kind of placebo treatment.

[1] [https://www.sciencebasedmedicine.org/about-science-based-med...](https://www.sciencebasedmedicine.org/about-science-based-medicine/)

[2] [https://www.sciencebasedmedicine.org/?s=placebo](https://www.sciencebasedmedicine.org/?s=placebo)

[3] [https://www.sciencebasedmedicine.org/are-placebos-getting-st...](https://www.sciencebasedmedicine.org/are-placebos-getting-stronger/)

[4] [https://www.sciencebasedmedicine.org/placebo-by-conditioning...](https://www.sciencebasedmedicine.org/placebo-by-conditioning/)

[5] [https://www.sciencebasedmedicine.org/should-placebos-be-used...](https://www.sciencebasedmedicine.org/should-placebos-be-used-in-randomized-controlled-trials-of-surgical-interventions/)

[6] [https://www.sciencebasedmedicine.org/placebo-are-you-there/](https://www.sciencebasedmedicine.org/placebo-are-you-there/)

