Throughout every issue of The Carlat Report, you, poor reader, have been subjected to citations of multiple research studies, and to terms such as “controlled,” “open-label,” and “statistically significant.” In this article, TCR dives headfirst into research methodology, with the aim of helping you become a smarter consumer of statistical trickery.
To begin with, let’s decipher every researcher’s favorite phrase: “A randomized, placebo-controlled, double-blind study.”
“Randomized.” If you want to fairly test whether a medication works better than placebo, or better than another medication, the patients chosen for the different study arms should be as equivalent as possible. Obviously, if the patients in the treatment group are much less depressed than those in the placebo group, a finding in favor of the antidepressant means very little. The easiest way of balancing the two arms of a study is to “randomly assign” or to “randomly allocate” patients to one group or the other, usually by using a computer equivalent of drawing straws.
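For the computationally curious, that “computer equivalent of drawing straws” can be sketched in a few lines of Python. This is a minimal illustration of simple randomization, not any particular trial’s software; the function name and patient IDs are hypothetical.

```python
import random

def randomize(patient_ids, arms=("drug", "placebo"), seed=None):
    """Randomly allocate each patient to a study arm (simple randomization,
    the computer equivalent of drawing straws)."""
    rng = random.Random(seed)  # a fixed seed makes the allocation reproducible
    return {pid: rng.choice(arms) for pid in patient_ids}

# Hypothetical example: allocate four patients to the two arms
assignments = randomize(["pt01", "pt02", "pt03", "pt04"], seed=42)
```

Note that with simple randomization like this, the two arms can end up unequal in size by chance; real trials often use block randomization to keep the arms balanced as enrollment proceeds.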
“Placebo-Controlled.” As clinicians, we see patients improve on medications all the time, but we are savvy enough to realize that many non-medication factors may be at play: positive expectation, changes in the patient’s life, your wonderful “couch-side” manner, the desire of patients to please you by saying they’ve improved even if they haven’t, etc. All of these “non-specific,” or “placebo,” factors come into play in research as well. A placebo control group allows us to separate this nonspecific improvement from the specific effect of the medication.
Note that a medication control group does no such thing: thus, for example, a recent study showed that Zoloft and St. John’s Wort yielded statistically indistinguishable response rates in depression (Hypericum Depression Trial Study Group, JAMA. 2002;287:1807-1814). Good news for the herbal? Not quite: the study also included a placebo group, and guess what? Patients in the placebo group did just as well as those on the two active treatments, meaning that neither Zoloft nor St. John’s Wort demonstrated a specific medication effect on depression.
You’ll often read references to “uncontrolled studies” or “open studies.” These studies have neither a placebo control nor an active drug control. Generally, these uncontrolled studies yield response rates that are much higher than response rates in controlled studies. Why is this so? After all, the presence or absence of a control group shouldn’t affect the response rate of a completely separate group of patients who are given active treatment, should it? But it does, and the reason for this is that studies that include placebo groups are almost always double-blinded.
“Double blind.” The purpose of a placebo group is to see how well patients do when they believe they are getting the treatment, but are actually not. If they knew they were swallowing a placebo, they might very well still improve, whether from a sugar high, the passage of time, or other factors. But you would still not be measuring what is certainly a big part of the cure: the effect of faith in the prescription. So patients have to be fooled, and the way this is done is by “blinding,” a brutal term referring to the benign art of disguising the placebo pill as the active medication.
But keeping patients blind to the treatment is only one part of the story. The “double” in double-blind refers to the need to keep the researcher in the dark about which patient is getting which pill. If a researcher knows that a particular patient is taking active medication, this knowledge may bias the evaluation of the degree of improvement. Thus, double-blinding seeks to improve studies in two ways: first, by making the placebo group a more effective measure of nonspecific effects; and second, by reducing potential research bias.
Double-blind studies yield lower response rates because the effects of researcher bias and patients’ expectations are minimized by the presence of the placebo pill.
You’ll often hear studies referred to as “closed-label”; this is equivalent to double-blinding, in that the “label” identifying the pill is “closed” to patients and researchers. On the other hand, the “open-label” study is one in which patients know exactly what they’re getting, and researchers know exactly what they’re dishing out. Such studies are easier and cheaper to conduct, and are valuable because they help identify promising treatments.
What about single-blind studies? Usually these are studies that compare two active drugs for a condition without including a placebo group. The patients know what they are taking, and the doctor knows what he or she is prescribing. The only one who is “blind” is the person assessing the degree of clinical improvement using structured rating scales. As you can surmise, such a design leaves plenty of room for “placebo” confounding of results, especially when the investigator is being funded by a company that makes one of the drugs in question (see, for example, TCR 1:10, page 2).
Finally, you’ll read about “parallel group” designs; these are the usual research designs in which an active drug group is treated in parallel with a placebo group. A “crossover design” is a cheaper version of this, in which patients are randomly crossed over from drug to placebo and vice-versa midway through the trial; because each patient thereby serves as his or her own control, you can get by with roughly half the number of patients you would otherwise need.
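The crossover idea can also be sketched in Python: each patient is randomly assigned an order for the two treatment periods, so that everyone eventually receives both drug and placebo. Again, this is a schematic illustration with hypothetical names, not a real trial’s allocation software.

```python
import random

def crossover_schedule(patient_ids, seed=None):
    """Assign each patient a random order for the two treatment periods,
    so that every patient receives both drug and placebo (own control)."""
    rng = random.Random(seed)  # seeded for reproducibility
    orders = [("drug", "placebo"), ("placebo", "drug")]
    return {pid: rng.choice(orders) for pid in patient_ids}

# Hypothetical example: two patients, each crossed over midway through the trial
schedule = crossover_schedule(["pt01", "pt02"], seed=7)
```

One caution the article’s brevity glosses over: crossover designs can suffer from “carryover,” where the effect of the first treatment persists into the second period, which is why such trials often insert a washout interval between periods.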