This is another (edited) recycled post from my old blog; I am reposting it for hexyhex, who was talking about this issue recently.
Conclusions about antidepressant efficacy based on a meta-analysis of FDA clinical trials raise serious questions.
In late February some headlines heralded bad news for those being treated for depression: “Prozac does not work,” declared one headline from the UK Guardian, “and nor do similar drugs….”
The news was in reaction to a meta-analysis of trials of selective serotonin reuptake inhibitors submitted to the U.S. Food and Drug Administration (FDA) from 1987 to 1999. The report appeared in the February 26 issue of PLOS Medicine.
“These findings suggest that, compared with placebo, the new-generation antidepressants do not produce clinically significant improvements in depression in patients who initially have moderate or even very severe depression,” the authors wrote, “but show significant effects for only the most depressed patients.”
In their editorial, Turner and Rosenthal pointed out that the criterion set by NICE is “problematic, because it transforms effect size, a continuous measure, into a yes or no measure, thereby suggesting that drug efficacy is either totally present or absent, even when comparing values as close together as 0.51 and 0.49.”
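Turner and Rosenthal's objection can be seen in a few lines of code. This is just an illustration of the dichotomization problem, not anything from the paper: two effect sizes that are practically identical land on opposite sides of the cutoff.

```python
NICE_CUTOFF = 0.5  # NICE's threshold for a "clinically significant" standardised effect size

# Two nearly identical (hypothetical) effect sizes straddling the cutoff
for d in (0.49, 0.51):
    verdict = "clinically significant" if d >= NICE_CUTOFF else "not clinically significant"
    print(f"d = {d}: {verdict}")
```

A difference of 0.02, well within the noise of any single trial, flips the verdict entirely.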
They also noted that the NICE criterion is not a definitive measure, but a value that could be problematic as a litmus test for drug efficacy.
In the article, Kirsch and his colleagues noted that drug efficacy did not change as a function of initial depression severity, whereas placebo efficacy decreased as initial depression severity increased. “Efficacy reaches clinical significance only in trials involving the most extremely depressed patients,” the authors wrote, “due to a decrease in the response to placebo rather than to an increase in the response to medication.”
See that? Blink and you would miss it. Unless you dig deeper into the data, you might never realise that the change in effectiveness is due to the placebo, not the drugs. This happens all the time. While working as a statistician I have come to the realisation that it is easier to mislead a reader than it is to give them a true picture of what the data says; I honestly struggle not to mislead those I write for. Not because I want to mislead them, but because statistics is hard: it is really easy to miss something important, or to fool yourself that the data is telling a story it really isn’t telling.
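To make the point concrete, here is a tiny sketch with made-up numbers (these are not figures from the Kirsch paper): the drug's response is held flat across severity groups while only the placebo response falls, yet the drug–placebo difference, which is what the trials report, grows anyway.

```python
# Illustrative, invented numbers: mean improvement on a depression
# rating scale, by initial severity group. Note the drug row is flat;
# only the placebo row changes.
drug_improvement    = {"moderate": 9.0, "severe": 9.0, "very severe": 9.0}
placebo_improvement = {"moderate": 7.8, "severe": 6.5, "very severe": 4.5}

for group in drug_improvement:
    diff = drug_improvement[group] - placebo_improvement[group]
    print(f"{group:>11}: drug {drug_improvement[group]:.1f}, "
          f"placebo {placebo_improvement[group]:.1f}, difference {diff:.1f}")
```

The drug "works better" in the sickest group by the usual measure, even though its own numbers never moved, exactly the pattern that is so easy to miss if you only look at the drug–placebo gap.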
Another interesting thing worth noting is that placebos don’t appear to work on badly depressed people. This makes sense to me: in my depression there is a feeling that my life will never get better, and with that mindset it is no wonder depressed people are less moved by the belief that a drug will help. I would love to see if this carries over to treatments for other diseases in depressed people.
I wish the media was more questioning. If I had my way, every journalism student would take courses in which they would have to produce misleading findings from real data, in the hope that they would start asking questions about the way studies are produced: did the researchers consider income, age, education level…? Is the effect due to outliers?
Humans are too willing to believe a man in a white coat, and we need to stop. It’s time to step out of the ’80s and open our eyes, time to be blinded by “science” no more, and instead work on getting to the real answers under the simplistic conclusions.