Something has always bothered me about the placebo effect. And I don’t just mean the way it is co-opted by advocates of pseudomedical bollocks to justify their claims. I’ve lost count of the number of people I’ve heard defend homeopathy or acupuncture with the statement “It does work – it just works through the placebo effect”.
Well, no. If something works as well as a placebo, that’s the same as saying it doesn’t work. If a plain sugar pill works as well as a homeopathically treated sugar pill then – whatever else might be helping the patient feel better – it is not the homeopathy.
No, my problem is why a placebo helps at all. How can actively doing nothing be different from passively doing nothing? Mind over matter? I’m skeptical.
In 2001 the New England Journal of Medicine published a paper titled “Is the Placebo Powerless? – An Analysis of Clinical Trials Comparing Placebo with No Treatment”. This was a systematic review which looked at clinical trials featuring both a “placebo” and a “no treatment” wing.
The researchers looked at 130 trials, sixteen of which were excluded for lacking the relevant data. Of the remainder, thirty-two trials had binary outcomes (as in, yes it worked, or no it didn’t) and eighty-two had continuous outcomes.
The study concluded that placebos had no effect on binary outcomes. That is to say, when the results were a simple yes/no, black/white, did it work vs did it not, then the placebo performed just as well as no treatment. Which makes sense, of course, as both placebo and nothing do, er, nothing. It’s just that when you take a placebo you get to pretend otherwise.
For objective, continuous outcomes the results were much the same. When we are able to objectively measure the impact of an intervention, like “has the swelling gone down?”, or “is the bone healing?”, then again placebo performs as well as no treatment.
But subjective continuous outcomes were different. Subjective outcomes are outcomes which involve some judgement by the patient or the clinician. “Do you feel better?” or “Are you less nauseous?” We are unable to objectively test how nauseous the patient feels, so we have no choice but to ask them how they are feeling.
The paper concludes:
Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain.
So let’s say we have a patient with a grazed elbow. Pain receptors around the graze will fire, sending nerve impulses up her arm, up her spine, and into her brain, which interprets the impulses as elbow pain. There is a strong psychological component to how she experiences that pain, and to that extent I can readily accept that her expectations – as managed by a placebo – could modify her experience of that pain.
Although the pain receptors are still firing at the same rate, and the signals are still travelling up her spine to her brain, the fact that she has been told the sugar pill she was given is actually a powerful painkiller means that she reports, and may even experience, less pain. Even when the nerve signals arriving at her brain are unchanged.
So far, so good. Nothing here which offends my reductionist sensibilities. But then came Ben Goldacre and Bad Science, a book which should probably be put on the National Curriculum. It’s a magnificent work, though the chapter on the placebo effect left me scratching my head.
In Bad Science, and in many dozens of lectures since, Goldacre enthuses about the amazing power of the placebo. By cunningly comparing one placebo to another, he says, we can discover a lot about the size of placebo effects. For example, we can tell that two placebo sugar pills are better than one sugar pill. We know that large sugar pills are better than small sugar pills, but tiny sugar pills are better than both. We can confidently make statements like pink sugar pills help you concentrate better than blue sugar pills do, even though they are both pharmacologically inert.
But that’s crazy!
Most bizarre of all, to me at least, was his description of a study by the anthropologist Daniel Moerman. Ingeniously, Moerman compared the placebo wings of several gastric ulcer trials to see how the relative placebo effects compared. Gastric ulcers are a good subject for this, because you can stick a camera down the patient’s throat to see if the ulcers have healed. This is a nice, objective outcome – something that the New England Journal of Medicine tells us should be immune to placebo effects.
But Moerman’s study says otherwise. Moerman found that taking four placebo pills makes gastric ulcers heal faster than taking two sugar pills. Let me reiterate that for emphasis: gastric ulcers physically heal and disappear from your gut faster when you take four inert pills, versus two inert pills. That’s not a psychological overlay on a continuous subjective anything, that’s a solid, objective outcome. And that, as Goldacre says, is an outrageous finding.
Is this some kind of mind over matter? Not in the woo-woo sense, of course, but does this show us that the brain can actually direct the body to heal a gastric ulcer faster somehow, because it thinks it has taken medicine to make that happen? Last time I checked, 4 × 0 and 2 × 0 are both zero, so how can four inert pills be different to two? What is the physiological mechanism governing this? What is the brain doing to provoke a physical difference in how ulcers heal?
I don’t understand. How can this work?
To my rescue came Steve Novella and the Science-Based Medicine blog. Last month, the Wall Street Journal published an unfortunately uncritical article about acupuncture, which repeated some crazy claims about how both acupuncture and placebo prompt the body into healing itself, which is why acupuncture and placebo produce similar results in clinical trials.
“I don’t see any disconnect between how acupuncture works and how a placebo works,” says radiologist Vitaly Napadow at the Martinos Center. “The body knows how to heal itself. That’s what a placebo does, too.”
This prompted Novella to write an article called Placebo Effects Revisited, which featured an interesting characterisation of the placebo effect, as observed in the context of a clinical trial.
“Placebo effects, as measured in clinical trials, includes a host of factors – everything other than a physiological response to an active treatment.
“These placebo effects include the bias of the researchers, the desire of the subjects to please the researchers and to get well, non-specific effects of receiving medical intervention and attention, and other artifacts of the research process.”
This means that placebo data published in clinical trials includes all sorts of stuff, not just the physiological effects of inert interventions. Novella goes even further, suggesting that placebo effects don’t just include things like researcher bias, but are mostly comprised of bias in reporting and observation and non-specific effects.
In this context, it suddenly looks to me like Daniel Moerman, and perhaps Ben Goldacre, are comparing apples to oranges.
When you compare the placebo wings of different trials, even for the same illness, you’re doing more than comparing the relative effects of various inert treatments. You’re also comparing the biases of the researchers involved, which will differ across trials.
Could it be that the reason four pills make gastric ulcers heal faster than two pills is not because of some outrageous mind-over-matter effect, but is simply due to researcher bias, or other artifacts which exist only in the data and not in the patient?
I don’t know. I’m not a doctor, nor am I remotely qualified to draw a conclusion.
But I would be very interested in the results if someone who knew what they were doing actually did the research to find out.