Apparent effectiveness of antidepressants is inflated by publication bias
A former FDA reviewer and his colleagues obtained all FDA data on studies of 12 antidepressants approved between 1987 and 2004. They then scoured the journals to figure out which of these studies had been published. They found that of the 74 FDA-registered trials of antidepressants, 23 (31%) had never been published. The overwhelming majority (94%) of published trials reported positive results, but only 51% of all trials (published and unpublished) were positive. The mean effect size of antidepressants in published data was 0.41, while the effect size derived from all data was 0.31 — meaning the published literature inflated the apparent effect by 32% (Turner EH et al., NEJM 2008;358:252-260).
TCPR’s Take: While the problem of publication bias has been well known, this study paints a particularly disturbing picture. Psychiatrists who read journal articles (and what else do we have easy access to?) have the false impression that most antidepressant trials are positive, when in fact only half of them are. On the other hand, this bleak view of antidepressant efficacy must be tempered by the fact that industry trials have very strict exclusion criteria. Some have estimated that as many as 80% of “real world” patients would be disqualified from the typical antidepressant trial. How relevant are the results to the patients we actually see in our practices? The answer isn’t clear.
Antidepressants are effective primarily for severe depression
Researchers used the Freedom of Information Act to retrieve all clinical trials data from the FDA on four different antidepressants: fluoxetine, venlafaxine, nefazodone, and paroxetine. They did not include sertraline or citalopram because of missing data, nor did they include duloxetine or escitalopram, presumably because they were approved after the study period ended in 1999. The researchers then synthesized the data to determine how effective these drugs appeared to be. They found that the average improvement on the Hamilton Depression Scale was 9.6 points for antidepressants and 7.8 points for placebo, yielding an average difference of 1.8 points. While this small difference was statistically significant, the authors cited British government guidelines specifying that a drug-placebo difference must be 3 points or greater on the HamD to be clinically significant. However, when the researchers looked specifically at patients with very severe depression at baseline (HamD >28), they found a larger separation from placebo, rising to the level of clinical significance. This larger separation was due to more severe patients having a poorer response to placebo rather than a greater improvement on antidepressants (Kirsch I et al., PLOS Medicine 2008;5:260-268, accessible for free at www.plosmedicine.org).
TCPR’s Take: This study has received a lot of press in part because of its provocative conclusion: “There seems little evidence to support the prescription of antidepressant medication to any but the most severely depressed patients.” Although the methodology was legitimate and the modest drug-placebo separation is indeed concerning, a single meta-analysis focusing on mostly small clinical trials (only one study had a placebo arm of greater than 100 subjects) does not negate the substantial body of research supporting antidepressant efficacy (e.g., discontinuation studies, combination studies, post-marketing studies, naturalistic studies, etc.). Our perspective is this: The pharmaceutical industry’s primary goal is to conduct trials that yield just enough drug-placebo separation to win approval. Because the FDA sets the bar very low (it requires only two studies showing any statistical difference between drug and placebo), drug companies aim low. Why pay more to get an “A” when the FDA is satisfied with a “C”? Hopefully, this study will serve as a wake-up call to the field to develop more novel treatments and to test them in more clinically relevant ways.