Getting an article accepted for publication can be a trying experience, requiring multiple redrafts, considerable persistence in displaying your wares to several editors, or simply accepting that your data are not of a quality suitable for scientific eyes. We might assume this would be particularly true for authors describing RCTs of clinical interventions, where positive outcomes could readily effect changes in clinical practice.
You would think that editors and reviewers would be aware of their responsibility to make sure that any paper suggesting psychologists adopt a new technique is based on sound evidence. You would think. But, according to Cristea, Kok and Cuijpers, you would be quite wrong. Looking into the evidence base for one of a wave of popular new arms of cognitive therapy, they found a reality gap between the flag-waving for Cognitive Bias Modification (CBM) and the pitifully poor-quality studies forming some pretty shoddy flagpoles.
To set the context, a Google search of CBM throws up http://www.biasmodification.com/ with the following paragraph on its main page:
In 2009, the Journal of Abnormal Psychology reported on a study in which 72% of the volunteers were cured of Social Anxiety Disorder after just 2 hours CBM therapy, and in 2010 there were 12 studies which all concluded that Cognitive Bias Modification can be an effective treatment for anxiety.
Meanwhile, respected international publications also seem unable to resist the leap onto the bandwagon (http://www.economist.com/node/18276234).
No wonder Cristea et al were so concerned. And that’s not all they were disturbed by. From a more academic perspective, three previous meta-analyses of CBM studies failed to specify the inclusion criteria they applied, such as whether the papers reviewed used randomised samples. The field was already seen as hampered by the lack of large-scale studies meeting the CONSORT guidelines and by a general failure of studies to attend to moderating factors.
Their aim, therefore, was to appraise the available literature: to select eligible RCTs using the PRISMA statement, to assess publication bias using the Cochrane Collaboration guidelines, and then to appraise the quality of those RCTs in depth. They also wanted to check whether CBM studies reporting positive results were appearing disproportionately in high-impact-factor journals.
Their method starts with a clearly charted selection process producing 49 RCTs for review. These were appraised for their quality: only 5 met all 5 quality criteria, whilst 10 met none. A lack of detail in the papers made it difficult in the majority of cases to assess biasing factors.
Cristea et al applied Hedges’ g to comparisons between CBM treatment and comparison groups on anxiety and depression outcome measures. Their paper details the substantial challenges in this seemingly simple task. Comparison groups could be no treatment or an alternative treatment; the CBM interventions used varied widely; samples could be clinical or volunteer, paid or unpaid; therapy lengths varied considerably, with 21 studies examining only a single session of therapy. Papers were too variable in their reporting of follow-up to allow any examination of longer-term effects.
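For readers unfamiliar with the metric, Hedges’ g is a standardised mean difference between two groups, with a correction for small-sample bias. A minimal sketch of the standard formula follows; the numbers are hypothetical and not drawn from any of the reviewed studies:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return j * d

# Hypothetical example: CBM group vs comparison group on an anxiety scale
g = hedges_g(mean1=10, sd1=4, n1=20, mean2=8, sd2=4, n2=20)
print(round(g, 2))  # prints 0.49 — a "small" effect by conventional benchmarks
```

Small trials inflate Cohen’s d slightly, which is why meta-analyses of fields with many small studies, as here, typically prefer g.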
A plot-by-study of effect sizes illustrates the potential impact of 3 major outliers amongst the 49 studies and justifies the steps taken by Cristea et al to address heterogeneity and extreme values in their analysis. They present effect sizes for all studies, and then anxiety outcomes, depression outcomes and clinical sample outcomes separately. Small effect sizes (approximately g=0.3) across the board drop to negligible levels when outliers are removed, with the exception of depression outcomes (g=0.33 for 17 RCTs across all samples; g=0.24 for 9 RCTs for clinical samples). Even these effect sizes, which might be heralded as positive support for CBM, are small not only in absolute terms but, importantly, also in comparison with more established therapeutic approaches for depression.
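To illustrate why a handful of outliers matters so much, here is a toy inverse-variance pooling of hypothetical effect sizes (none of these numbers come from the paper): a single extreme study can pull the pooled estimate well above what the remaining studies support.

```python
# Hypothetical per-study Hedges' g values and their variances;
# the final study is an extreme outlier.
effects   = [0.10, 0.15, 0.05, 0.20, 1.50]
variances = [0.04, 0.05, 0.04, 0.06, 0.05]

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance weighted mean of study effect sizes."""
    weights = [1 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

with_outlier = pooled_effect(effects, variances)
without_outlier = pooled_effect(effects[:-1], variances[:-1])
print(round(with_outlier, 2), round(without_outlier, 2))  # prints 0.38 0.12
```

Here the pooled estimate falls from a "small but real"-looking 0.38 to a negligible 0.12 once the outlier is dropped, mirroring the pattern Cristea et al report across the 49 RCTs.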
The authors highlight the neglect in the papers of mediating and moderating factors. Cristea et al’s own analyses found significant evidence of higher effect sizes linked to monetary compensation and laboratory (rather than home-based) intervention delivery. Logically, they suggest that the small effects found across the 49 RCTs could well be influenced by these moderators. On top of this, the troubling issue of publication bias was probed by considering the impact factor of the publishing journal, and once this was accounted for, effect sizes became non-significant across the board.
Overall, the authors conclude that the promotion of CBM as an effective therapy for anxiety and depression is unfounded on the basis of current evidence. They suggest that a cluster of early studies with high effect sizes have been followed by diminishing returns, and they profess concern about the race to push CBM into clinical practice, which has left an ‘after the horse has bolted’ situation. They despair at some length about the real lack of quality in the studies assessed, which are all published works and yet lack methodological rigour or proper checks for bias.
The authors suggest that the way forward for CBM would be quality RCTs with clinical populations and thoroughly outlined treatment protocols. This seems generous; on this evidence, one could argue that further research would be an inefficient use of resources at best.
Cristea IA, Kok RN, Cuijpers P. Efficacy of cognitive bias modification interventions in anxiety and depression: meta-analysis. Br J Psychiatry 2015 Jan;206(1):7-16. [PubMed abstract]