The Placebo Effect is something of a misnomer, as there are actually many different placebo effects, which bias trial data in subtle and interesting ways. Effects like regression to the mean and the natural history of the disease, for example, will produce changes in a patient's measured condition even when the patient is acutely aware they are only taking a placebo.
Analyses comparing placebo interventions to no treatment reveal that the apparent power of the placebo may be overstated. No placebo effects are observed, for example, when comparing placebo to no treatment on objective or binary endpoints (Hróbjartsson & Gøtzsche, 2001). They are observed on subjective endpoints (e.g. pain, nausea), where the condition of the patient is filtered through the opinions and biases of the patient and/or the clinician - which makes it quite possible that this aspect of placebo action can be accounted for by the experimenter effect.
All of which leads to my primary problem with this paper. It is a comparison of open-label placebo to no treatment, with a relatively small number of participants (n = 80), studying only subjective endpoints (hello, experimenter effect). The media coverage of this paper (cf. the NPR article) makes the claim that an "honest placebo" was given, with the patients informed they were only taking placebo, which is true. But patients were also told the placebo could "produce significant improvement in IBS symptoms through mind-body self-healing processes", which primes the patient for the experimenter effect just as readily as telling them they're taking a drug.
On top of that, the clinical relevance of the IBS-GIS improvement seen in the placebo arm is questionable: an improvement from "(4) no change" to "(5) slight improvement" on a seven-point scale.
Small effect, small numbers, and potentially flawed methodology.
OK, let's take this in order. Hróbjartsson and Gøtzsche's review actually undermines your point about there being many placebo effects (which is the approach taken by Benedetti), as they pooled wildly different clinical trials (the inclusion criterion was simply possessing both a placebo and a no-treatment arm).
Furthermore, Meissner et al 2007 re-analysed those studies and found large effects where the outcome could be mediated by the nervous system, and very small effects where there was no direct nervous system link.
Additionally, clinical trials create uncertain expectations ("you may receive a drug"), and Vase et al (2002) found that placebo effect sizes are much smaller in that context than when placebos are deceptively administered ("this is a potent painkiller"), as they normally are in clinical practice and experimental research (cf. Kirsch & Weixel 1988, Amanzio et al 2001). Also, certain vs uncertain expectations are associated with different amounts of dopamine release, which has itself been associated with response to placebo (Scott et al 2007; de la Fuente-Fernández 2002, 2004).
I agree with many of your comments around this paper, and essentially it was only published because Kaptchuk and Kirsch are two of the leading names in the non-clinical field.
And the paper's from 2010, and has been getting this treatment for a while...
Though I disagree that Hróbjartsson et al undermines the notion of many placebo effects... or maybe I don't, depending on our definition of placebo effect.
If we define "placebo effect" as any clinically significant change observed in the placebo wing of a clinical trial, there are many placebo effects. As I mentioned before, regression to the mean, natural history of the disease, experimenter effect, and so on.
Alternatively, we could define it as any clinically significant change observed in the placebo wing of a clinical trial which is not also observed in a no-treatment wing. This would eliminate things like regression to the mean and arguably leave behind only a "true" placebo effect. Though even here, we have to account for bias (and perhaps even classical conditioning?). I wonder how much placebo effect is left once these are controlled for.
Within the context of analysing trial data, it is the former definition we are interested in. But within the context of discussing "the placebo effect" as a standalone phenomenon, I'd argue the latter definition is more useful.
Unfortunately, as most trials don't have no treatment arms, any clinically significant changes observed in the placebo arm are chalked up to "the placebo effect", especially by the media (though sometimes by clinicians), even when there is good reason to think many of those same changes would have been observed under no treatment. Which IMHO leads to a distorted view of the clinical relevance of the placebo effect outside the context of a clinical trial. Especially as the placebo effect appears to have a reputation as a bizarre mind-over-matter affair.
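To make concrete why I keep harping on regression to the mean and no-treatment arms, here's a minimal simulation sketch in Python (my own toy illustration, with made-up numbers, not taken from any of the papers under discussion): enrol patients only when their noisy symptom score happens to be high, measure them again later with no treatment whatsoever, and the group average still "improves".

    # My own toy numbers: regression to the mean makes an untreated group
    # look as if it improved, because patients were enrolled only when
    # their (noisy) symptom score happened to be high.
    import random

    random.seed(0)
    NOISE = 10  # day-to-day variability in the observed score

    # Each patient has a stable underlying severity of roughly 50
    severities = [random.gauss(50, 5) for _ in range(100_000)]

    baseline, followup = [], []
    for severity in severities:
        screen = severity + random.gauss(0, NOISE)
        if screen >= 60:  # trial entry criterion: "bad enough to enrol"
            baseline.append(screen)
            # second measurement later, with no treatment of any kind
            followup.append(severity + random.gauss(0, NOISE))

    print("mean score at baseline :", round(sum(baseline) / len(baseline), 1))
    print("mean score at follow-up:", round(sum(followup) / len(followup), 1))
    # Roughly 66 at baseline vs roughly 53 at follow-up: an apparent
    # "improvement" with no intervention at all, which is why a no-treatment
    # arm is needed before crediting the placebo.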
Ah, I see what the issue is here. I typically (used to) work in experimental placebo research, rather than clinical placebo research (I'm a psychologist by trade) so I would be more interested in those studies, rather than clinical trials.
In terms of what's left over after accounting for no treatment, Vase and H&G got into a big academic fight about this, and it appears that the effect size for placebo effects in pain is approximately d = 0.5, which is a relatively large effect (especially within psychology). This doesn't entirely account for experimenter effects, though those can be addressed with a balanced placebo design.
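(For anyone unfamiliar with the notation: d is Cohen's d, the difference between two group means divided by their pooled standard deviation, so d = 0.5 means the groups sit half a standard deviation apart. A quick sketch of the calculation, with entirely made-up pain ratings, just to show what the statistic is:)

    # Made-up numbers, purely to illustrate what Cohen's d measures.
    from statistics import mean, stdev

    def cohens_d(group_a, group_b):
        # Difference in means divided by the pooled standard deviation
        na, nb = len(group_a), len(group_b)
        sa, sb = stdev(group_a), stdev(group_b)
        pooled_sd = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
        return (mean(group_a) - mean(group_b)) / pooled_sd

    # Hypothetical 0-10 pain ratings for a no-treatment and a placebo group
    no_treatment = [6, 7, 5, 6, 8, 7, 6, 5, 7, 6]
    placebo      = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]

    print(round(cohens_d(no_treatment, placebo), 2))
    # ~1.05 with these toy numbers; a d of 0.5 would be half that separation.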
In terms of clinical trials, I would tend to agree with your second definition. It's not perfect, but it's as good as it tends to get.
I think one of the issues with placebo research is this notion of mind-over-matter: that viewpoint is the reason it is perceived as special, and also a reason why people disbelieve in it.
Based on old research (Levine, 1979), many (but not all) placebo effects in pain appear to be mediated by endogenous opioids, which I would take to mean that they are pretty naturally mediated by the brain, and so it's a physical phenomenon. Many people do go a bit crazy with the woo around it though, I do agree.
Funny that this came up today, when I'm currently finalising a hopefully final draft of my thesis on the placebo (PROTIP: never, ever leave your university before submitting a PhD, it tends to go badly).
Suppose that by "placebo effect", we really are identifying a marker for something like "experimenter effect" or "physician effect" and only on subjective end-points.
I believe it would still be correct, and of practical value, to say that the placebo effect is real. Prescribing someone a placebo results in them receiving the experimenter/physician effect - they subjectively report feeling better - that's a good thing.
The problem with "they subjectively report feeling better" is that sometimes people "lie", voluntarily or involuntarily. Perhaps they don't feel better but want to make the experimenter happy. Perhaps they don't feel better but wish to feel better. Too many possible sources of confusion.
To use an exaggerated comparison: if you pay the subjects in one arm $1,000,000 to say that they feel better, you will see a very big improvement, but it doesn't mean that they really feel better. That's an extreme case, but the problem is that there are a lot of more subtle things that can change the self-reported feeling.
I don't know how to measure well-being in a non-subjective way. If I may just make up a nonexistent medical device, perhaps I could attach a 24-hour endorphin monitor to the test subjects and look for a difference in the mean concentration.
This is quite an old paper, and I actually wrote a blog post about it (back when I had a blog). However, my issue with this study is that there was no deceptive placebo condition (i.e. what normally happens). I actually attempted to replicate this finding and didn't find any effect when a deceptive condition was involved, suggesting that IBS may be a special case.
Interestingly enough, IBS patients have been found to have non-opioid mediated relief of pain, which is atypical (normally naloxone blocks these effects) so there may be something weird going on with this condition in more general terms.
Also, it's worth noting that in modern conceptions of placebo, it's part of every treatment. If you have ever felt the effects of a cup of coffee before roughly 30 minutes have passed, that's probably a placebo effect. Ditto for headache tablets that work immediately, before the active substance could have gotten into the bloodstream.
Also, with respect to the drinking of water, that's unlikely to be an explanation as that quantity of water is typically not enough to provide relief from IBS.
>with respect to the drinking of water, that's unlikely...
On my reading, taking water - in gulps to wash down the pill - at particular times and with particular regularity could be a (or the) significant factor being measured in this study. Perhaps there is a threshold effect in taking the water (e.g. it helps people on the threshold of dehydration), or perhaps it's the time of day it is taken. I may be wrong, but I don't think it should just be summarily dismissed yet.
I have definitely experienced the coffee and pain killer placebo more often than not.
Thinking about it now, it makes sense that once you ingest either to alleviate certain symptoms, you let go of a portion of directed anxiety/overt attention in expectation of relief thereafter.
Despite knowing beforehand that the pill is a placebo, I'd hazard a guess that the same brain areas associated with non-placebo oral administration would light up with activity in an MRI scan.
Anyone know if there is any scientific literature on this phenomenon?
For the placebo effects of coffee - Kirsch & Weixel 1988 (can't find it online, annoyingly enough).
There probably are some fMRI studies, but I really don't trust most of those due to the difficulties in avoiding multiple comparisons (see Vul et al 2009, the "voodoo correlations in social neuroscience" paper).
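To see why that worries me, here's a toy sketch of the non-independence problem (my own illustration, not Vul et al's actual analysis): correlate thousands of pure-noise "voxels" with a behavioural score, keep only the ones that pass a threshold, and the correlations you then report in those same voxels look impressively strong despite there being no signal at all.

    # My own toy sketch of the multiple-comparisons / non-independence
    # problem: select "voxels" by their correlation with behaviour, then
    # (wrongly) report the correlation in those same selected voxels.
    import random

    random.seed(1)
    N_SUBJECTS, N_VOXELS, THRESHOLD = 16, 5000, 0.6

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    behaviour = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]

    surviving = []
    for _ in range(N_VOXELS):
        voxel = [random.gauss(0, 1) for _ in range(N_SUBJECTS)]  # pure noise
        r = pearson_r(voxel, behaviour)
        if abs(r) > THRESHOLD:  # voxel "survives" the threshold
            surviving.append(abs(r))

    print(len(surviving), "noise voxels passed the threshold")
    print("mean |r| among them:", round(sum(surviving) / len(surviving), 2))
    # With 16 subjects and 5000 noise voxels, dozens typically pass
    # |r| > 0.6 by chance, and the mean correlation among the survivors
    # looks "strong" even though there is no real signal.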
I was under the impression "placebo" includes the effects of:
- the extra exercise of going to the medical center
- better behavior under monitoring
- talking to the test administrators (important for e.g. depression)
- better organization to follow the intake schedule
Which are not affected by knowing you are in the control group. Is this really a surprise to the medical community, or is the article just going for the "mind over matter" and general woo line?
Also:
> open a door toward ethical use of placebos in daily medical practice
Hasn't this been extensively discussed and ultimately rejected for a long time now?
This study seemed to deal with people experiencing digestion problems. I wonder if it was the act of drinking extra glasses of water (to swallow the placebos) that helped the situation, rather than the psychology here.
Now why did I know that Ted Kaptchuk[1] would be the quoted "expert" (he is not a medical doctor) in this 23 December 2010 story even before I read it? Because he is always the guy pushing this line[2] in press releases[3] that get picked up by the popular media.
Meanwhile, the medical researchers who look at the issue with proper study designs and statistical controls know that placebos are essentially useless, as they at most have influence just on self-reported subjective symptoms, not on any sign that affects the progression of a disease or maintenance of good health.[4] Ladies and gentlemen, you know you aren't going to seek "placebo medicine" if you have cancer or congestive heart failure, and you know that no compassionate parent would seek "placebo medicine" for minor children who have a childhood disease. So why does this topic keep coming up over and over and over here on Hacker News, now most recently from a brand-new participant here? Take the time and effort to learn a bit more about the actual research base before assuming that this story is anything other than the outcome of carefully crafted press release.
Findings on placebo effects by researchers who have considered the issue carefully include
"Despite the spin of the authors – these results put placebo medicine into crystal clear perspective, and I think they are generalizable and consistent with other placebo studies. For objective physiological outcomes, there is no significant placebo effect. Placebos are no better than no treatment at all."[5]
"We did not find that placebo interventions have important clinical effects in general. However, in certain settings placebo interventions can influence patient-reported outcomes, especially pain and nausea, though it is difficult to distinguish patient-reported effects of placebo from biased reporting. The effect on pain varied, even among trials with low risk of bias, from negligible to clinically important. Variations in the effect of placebo were partly explained by variations in how trials were conducted and how patients were informed."[6]
I would say that you should perhaps not shoot the messenger in this case. While I am not a massive fan of Kaptchuk's experimental rigour (and I suspect most of what those articles say is not news to me), the study also had Irving Kirsch, who, whatever else he may be, is a fine experimentalist. Kaptchuk's pretty good at getting funding though, hence his appearance.
Also, if you think that placebo is nonsense, I would humbly suggest that you read Benedetti http://www.amazon.com/Placebo-Effects-Understanding-mechanis.... It's a very good summary of the state of the art in 2008, from someone (Benedetti) who runs extremely tight, well-designed experiments in reasonably valid conditions (typically post-surgery patients). Some of the findings are extremely interesting, and it is all well-referenced and supported.
Thanks for your comments on the quoted researchers. I have been trying to find the DIRECT link to a quotation from Fabrizio Benedetti, a co-author of one of the most cited papers who is also a medical doctor, in which he sums up his view this way: "I am a doctor, it is true, but I am mainly a neurophysiologist, so I use the placebo response as a model to understand how our brain works. I am not sure that in the future it will have a clinical application." (The stuff in the quotation marks appears online in articles on other websites, but I don't know specifically when Benedetti said that, except it was after his most famous paper, co-authored with Kaptchuk.)
The state of the art since 2008 has not been an advance in finding clinically useful placebo effects so much as it has been an advance in finding statistical flaws in previous studies of placebos. I really appreciated your comments in dialogue with another participant in this same thread about what the research shows, and indeed how one might define "placebo effect," and I'll have to digest that for the next time this issue comes up here on HN. Thanks.
No worries, I've really enjoyed your postings on many topics, mostly around hiring techniques (and have always been a big fan of Hunter & Schmidt).
I think that I am somewhat biased, given that I started a PhD in the placebo effect around then, so I actually (sortof) know all of these people. I would argue that there are a few problems with placebo research as currently practiced.
1) Clinical studies without no-treatment arms
2) Relatively small experimental studies with not completely explicit treatment protocols
3) A fascination with colourful brain images at the expense of good experimental design (though that is sadly not limited to placebo research).
Statistics is very, very difficult to get right (and I've often struggled) and the incentives are not lined up in the correct way. For instance, if I find a counter-intuitive result in an experiment, it does not benefit me to engage in rigorous fact-checking; I am more likely to benefit if I just publish it, given the demands of the tenure track. To be honest, it's a wonder any science gets done at all.
Well, now it's known that a placebo works, and with the mind still being mysterious, a placebo is only a known unknown.
But if you give a sugar pill to a person who had never heard of the placebo effect, and tell him/her that the pill is only sugar with no effect whatsoever, perhaps we might find that the placebo effect does not manifest.
I take homoeopathic medicine for hayfever. I was recommended it by a friend when I was much younger, and found it was very successful.
I now know it's homoeopathic and what that means, but I still find it greatly improves my symptoms, and assume it isn't having any negative side effects (as the pills are just tiny sugar pills). Part of me doesn't like taking it, but it does help, and it doesn't seem worth stopping at this point.