In 2012, Leucht et al. published an overview of meta-analyses of the efficacy of medication in psychiatry and general medicine, concluding that psychiatric drugs were no less efficacious than other drugs. Our goal was to explore the dissemination of this highly cited paper, which combined a thought-provoking message with a series of caveats.
We conducted a prospectively registered citation content analysis. All papers citing the target paper published before June 1st were independently rated by two investigators. The primary outcome, coded dichotomously, was whether the citation was used to justify a small or modest effect observed for a given treatment. Secondary outcomes included whether any caveats were mentioned when citing the target paper, the point the citation was making (that treatment effectiveness in psychiatry closely resembles that in general medicine, or another point), the type of condition (psychiatric, medical or both), the specific disease, and the treatment category and specific type. We also extracted information about the type of citing paper, any declared financial conflict of interest (COI) and any industry support. The primary analysis was descriptive, tabulating the extracted variables with numbers and percentages where appropriate. Co-authorship networks were constructed to identify possible clusters of citing authors. An exploratory univariate logistic regression was used to explore the relationship between each of a subset of pre-specified secondary outcomes and the primary outcome.
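As a rough sketch of what such a univariate analysis involves: for a single binary predictor, the logistic regression slope equals the log odds ratio of the corresponding 2×2 table, so the association can be illustrated directly from cell counts. The counts below are hypothetical, not the paper's data.

```python
import math

def odds_ratio_2x2(a, b, c, d):
    """Odds ratio for a 2x2 table:
                 outcome+  outcome-
    predictor+      a         b
    predictor-      c         d
    """
    return (a * d) / (b * c)

# Hypothetical counts (illustrative only): citations making the
# "psychiatry resembles general medicine" point, cross-tabulated
# against the primary outcome (quote justifying a small/modest effect).
a, b, c, d = 50, 22, 13, 35
or_ = odds_ratio_2x2(a, b, c, d)
beta = math.log(or_)  # univariate logistic regression slope for a binary predictor
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's standard error of the log OR
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
print(round(or_, 2), round(beta, 2), [round(x, 2) for x in ci])
# → 6.12 1.81 [2.72, 13.76]
```

A confidence interval excluding 1 indicates an association, consistent in spirit with the regression result reported below.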
We identified 135 records and retrieved and analysed 120. Sixty-three (53%) quoted Leucht et al.’s paper to justify a small or modest effect observed for a given therapy, and 113 (94%) did not mention any caveats. Seventy-two (60%) used the citation to claim that treatment effectiveness in psychiatry closely resembles that in general medicine; 110 (91%) papers were about psychiatric conditions. Forty-one (34%) papers quoted it without pointing to any specific treatment category, 28 (23%) were about antidepressants and 18 (15%) about antipsychotics. Forty (33%) of the citing papers included data. COIs were reported in 55 papers (46%). Univariate and multivariate regressions showed an association between a quote justifying small or modest effects and the point that treatment effectiveness in psychiatry closely resembles that in general medicine.
Our evaluation revealed an overwhelmingly uncritical reception and suggests that, beyond defending psychiatry as a discipline, the paper by Leucht et al. served to lend support and credibility to a therapeutic myth: that trivial effects of mental health interventions, most often drugs, are to be expected and therefore accepted.
The standardised mean difference (SMD) is one of the most widely used effect sizes for quantifying the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, which expresses the difference between baseline and post-test within a single (treatment) group.
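The between-group SMD described above can be sketched as follows (Cohen's d with a pooled standard deviation; the group statistics below are hypothetical, for illustration only):

```python
import math

def between_group_smd(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: difference between treatment and comparison group
    means at post-test, divided by the pooled standard deviation."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Hypothetical post-test depression scores (lower = better)
d = between_group_smd(m1=12.0, sd1=6.0, n1=50, m2=16.0, sd2=6.0, n2=50)
print(round(d, 2))  # → -0.67: treatment group 0.67 SDs below the comparison group
```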
In this paper, we argue that pre-post SMDs should be avoided in meta-analyses, and we describe why they can result in biased outcomes.
One important reason why pre-post SMDs should be avoided is that the scores at baseline and post-test are not independent of each other. The value of the pre-post correlation is needed to calculate the SMD, yet this value is typically not known. We used data from an ‘individual patient data’ meta-analysis of trials comparing cognitive behaviour therapy and antidepressant medication to show that this problem can lead to considerable errors in the estimation of the SMDs. Another, even more important, reason why pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these influences cannot be separated from the effects of the intervention. Between-group SMDs are much better because they control for such variables: these variables affect the between-group SMD only when they are related to the effects of the intervention.
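The dependence on the unknown pre-post correlation can be illustrated with a small sketch. When the pre-post SMD is standardised by the standard deviation of the change scores, the assumed correlation r enters the formula directly, so the same data yield very different effect sizes under different guesses for r (the numbers below are hypothetical):

```python
import math

def prepost_smd(m_pre, m_post, sd_pre, sd_post, r):
    """Pre-post SMD standardised by the SD of the change scores,
    which requires the (usually unreported) pre-post correlation r."""
    sd_change = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return (m_pre - m_post) / sd_change

# Hypothetical within-group data: mean drops from 24 to 16, SD = 8 at both times
m_pre, m_post, sd_pre, sd_post = 24.0, 16.0, 8.0, 8.0
for r in (0.2, 0.5, 0.8):
    print(r, round(prepost_smd(m_pre, m_post, sd_pre, sd_post, r), 2))
# → 0.2 0.79
#   0.5 1.0
#   0.8 1.58
```

Identical data produce an SMD anywhere from 0.79 to 1.58 depending only on the assumed correlation, which is the estimation error the argument above refers to.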
We conclude that pre-post SMDs should be avoided in meta-analyses as using them probably results in biased outcomes.
The effects of cognitive behavioural therapy for anxiety disorders on depression have been examined in previous meta-analyses, suggesting that these treatments have considerable effects on depression. In the current meta-analysis we examined whether the effects of treatments for anxiety disorders on depression differ across generalized anxiety disorder (GAD), social anxiety disorder (SAD) and panic disorder (PD). We also compared the effects of these treatments with those of cognitive and behavioural therapies for major depressive disorder (MDD).
We searched PubMed, PsycINFO, EMBASE and the Cochrane database, and included 47 trials on anxiety disorders and 34 trials on MDD.
Baseline depression severity was somewhat lower in anxiety disorders than in MDD, but still mild to moderate in most studies. Baseline severity differed across the three anxiety disorders. The effect sizes found for treatment of the anxiety disorders were g = 0.47 for PD, g = 0.68 for GAD and g = 0.69 for SAD. Differences between these effect sizes and that found in the treatment of MDD (g = 0.81) were not significant in most analyses, and we found few indications that the effects differed across anxiety disorders. We did find that within-group effect sizes were significantly (p < 0.001) larger for depression (g = 1.50) than for the anxiety disorders (g = 0.73–0.91). Risk of bias was considerable in the majority of studies.
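The g values reported above are Hedges' g, i.e. the between-group SMD multiplied by a small-sample correction factor. A minimal sketch of the standard formula (the inputs below are hypothetical, for illustration only):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d with the small-sample correction factor J,
    which shrinks the estimate slightly for small trials."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # approximate correction factor
    return d * j

# Hypothetical trial: 20 patients per arm, uncorrected d = 0.80
print(round(hedges_g(10.0, 5.0, 20, 6.0, 5.0, 20), 2))  # → 0.78
```

The correction matters most in small trials; as sample size grows, J approaches 1 and g converges to d.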
Patients participating in trials of cognitive behavioural therapy for anxiety disorders have high levels of depression. These treatments have considerable effects on depression, and these effects are comparable to those of treatment of primary MDD.
Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy.
You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect?
Methods that can help include a strong allegiance to the therapy, anything that increases expectations and hope in participants, exploiting the weak spots of randomised trials (risk of bias), small sample sizes and waiting-list control groups (rather than comparisons with existing interventions). And if all that fails, one can simply not publish the outcomes and wait for positive trials.
Several methods are available to help you show that your therapy is effective, even when it is not.