Cognitive bias modification for addiction: are we flogging a dead horse?


Cognitive bias modification (CBM) is a computerised intervention that trains people to overcome the automatic cognitive processing biases that play an important role in the development and maintenance of psychological disorders.

The types of biases targeted by cognitive bias modification include:

  • Attentional bias (when stimuli related to the disorder capture the attention)
  • Approach bias (when stimuli related to the disorder evoke approach behaviour automatically)
  • Deficits in response inhibition (when stimuli related to the disorder impair the ability to control behaviour)
  • Interpretive bias (when ambiguous stimuli related to the disorder are interpreted in a way that exacerbates symptoms).

CBM has been applied to various psychological disorders including emotional disorders (e.g. depression and anxiety) and substance use disorders (addictions).

There is some debate about the efficacy of CBM for emotional disorders:

  • One meta-analysis (from the authors of the current target paper) concluded that CBM may have ‘no significant clinically relevant effects’ (Cristea, Kok & Cuijpers, 2015) in this domain,
  • Whereas others meta-analysed the same literature and reached more optimistic conclusions (Linetzky et al., 2015; MacLeod & Grafton, 2016).

In the addiction field, there have been promising results from a number of RCTs of one form of CBM (approach bias modification; Wiers et al., 2011; Manning et al., 2016; see this recent Mental Elf blog), although some researchers (including me and my colleagues!) have been skeptical about other forms of CBM (attentional bias modification; Christiansen et al., 2015). A meta-analysis of CBM for addictions would be a welcome addition to the field.

I first found out about the current meta-analysis over a year ago when I took part in a Mental Elf Campfire alongside the lead author (Ioana Cristea) and other colleagues who work on this topic. Ioana discussed her findings around the virtual campfire, so I was looking forward to seeing the article in print. The paper was published in PLoS ONE in September 2016 (Cristea, Kok, & Cuijpers, 2016).

There are considerable levels of disagreement about CBM amongst researchers and practitioners.


Methods 

The authors performed a comprehensive literature search to identify “randomised controlled trials” (RCTs; see Discussion) that investigated the effects of cognitive bias modification (a single session or multiple sessions) on cognitive biases and a range of addiction-related outcomes including subjective craving and clinician- or self-reported substance use, in comparison to any type of control condition.

They opted to exclude laboratory behavioural measures of alcohol consumption such as bogus ‘taste tests’ from their outcome measures, even though these were identified as primary outcome measures in many of the original studies (see Discussion).

For each comparison between a CBM and control intervention(s), effect sizes (Hedges’ g) were calculated at post-test and at follow-up. Data were analysed using a random effects meta-analysis. The number needed to treat (NNT) was also reported.
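For readers unfamiliar with these quantities, here is a minimal sketch in Python of what they involve: Hedges’ g for a single CBM-versus-control comparison, and a random-effects pool with the I² heterogeneity statistic reported in the Results. The choice of the DerSimonian-Laird estimator is my assumption (the paper will have used dedicated meta-analysis software); this is illustrative only.

```python
import numpy as np

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference with Hedges' small-sample correction."""
    sd_pooled = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)           # small-sample correction factor
    g = j * d
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

def random_effects_pool(g, var):
    """DerSimonian-Laird pooled estimate, 95% CI, and I-squared (%)."""
    g, var = np.asarray(g, float), np.asarray(var, float)
    w = 1 / var                                  # fixed-effect weights
    fe = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - fe)**2)                  # Cochran's Q
    df = len(g) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = 1 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```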

Publication bias was assessed using visual inspection of funnel plots accompanied by Egger’s test of asymmetry, and asymmetry was corrected with the Duval-Tweedie trim-and-fill procedure. 
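Egger’s test amounts to a simple regression: the standardised effect (g divided by its standard error) is regressed on precision (one over the standard error), and an intercept that differs from zero indicates funnel-plot asymmetry. A minimal sketch is below; the trim-and-fill procedure is considerably more involved, and in practice I would use an existing implementation for both.

```python
import numpy as np
import statsmodels.api as sm

def eggers_test(g, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standard normal deviate (g / se) on precision (1 / se);
    the intercept and its p-value index asymmetry."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    X = sm.add_constant(1 / se)                  # column of 1s + precision
    fit = sm.OLS(g / se, X).fit()
    return fit.params[0], fit.pvalues[0]         # intercept, p-value
```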

Twenty-five RCTs, from 24 published studies, were included in the meta-analysis. Study characteristics can be broken down as follows:

  • 18 studies focused on alcohol problems, and 7 on tobacco smoking
  • 12 studies targeted attentional bias, 8 approach bias, 4 response inhibition, and 1 interpretive bias
  • 11 studies investigated effects of a single session of CBM, and 14 assessed the effects of multiple sessions (ranging from 2 to 21)
  • Only 5 studies focused on patients who had been diagnosed with a substance use disorder (SUD); the majority (20) considered ‘consumers’ of those substances (e.g. students who consumed alcohol), in whom SUD status was unknown
  • The majority of CBM interventions were delivered in a University laboratory (15), five in a clinical setting, and five in naturalistic settings such as participants’ own homes
  • Seven studies included a follow-up, the duration of which ranged from one month to one year.

Results

  • Effects of CBM (vs. control) on all outcomes at post-test (soon after receiving the intervention) were small and not statistically significant: g = 0.08 (95% CI = -0.02 to 0.18; NNT = 21.74; see the sketch after this list for how g converts to an NNT). Heterogeneity was low (I² = 0%). Results were comparable for alcohol and tobacco outcomes, and when multiple comparisons from the same studies were considered separately
  • There was no significant effect of CBM on post-test craving: 18 trials; g = 0.05 (95% CI = -0.06 to 0.16; NNT = 35.71). However, the effect on cognitive bias at post-test was statistically significant: 19 trials; g = 0.60 (95% CI = 0.39 to 0.79)
  • At follow-up, the effect of CBM on all outcomes was small but statistically significant: 7 studies; g = 0.18 (95% CI = 0.03 to 0.32)
  • Subgroup and meta-regression analyses revealed that effects of CBM were not moderated by addiction type (alcohol vs. smoking), delivery setting (lab, clinic or naturalistic), CBM type (attentional bias, approach bias, response inhibition, or interpretive bias), or sample type (patients vs. consumers). The number of CBM sessions did not moderate effects on subjective craving or behavioural outcomes, but it had an unexpected and counterintuitive effect on cognitive bias outcomes: effects were larger after a single session of CBM (9 studies; g = 0.86; 95% CI = 0.53 to 1.18) than after multiple sessions (g = 0.35; 95% CI = 0.16 to 0.53)
  • Most studies had high or uncertain risk of bias for most criteria; only 4 (of 25) studies had low risk of bias for at least three of the five criteria that were considered. A meta-regression revealed that risk of bias was related to effect sizes for addiction outcomes (b = -0.11; 95% CI = -0.21 to 0.01) and craving outcomes (b = -0.17; 95% CI = -0.29 to -0.06), but not cognitive bias outcomes; as discussed below, studies at higher risk of bias tended to yield larger effect sizes
  • The magnitude of the effect of CBM on cognitive bias did not predict the effect of CBM on addiction outcomes: b = 0.18 (95% CI = -0.07 to 0.44)
  • There was evidence of publication bias, and adjustment for missing studies reduced the estimated effect size of CBM (on all outcomes at post-test) still further.
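As promised above, here is how a standardised effect size maps onto an NNT. The paper does not state which conversion it used, so this is an assumption on my part, but the Kraemer and Kupfer (2006) formula reproduces the reported values to within rounding:

```python
from scipy.stats import norm

def nnt_from_g(g):
    """Convert Hedges' g to a number needed to treat via the
    'area under the curve' interpretation: AUC = Phi(g / sqrt(2)),
    success rate difference (SRD) = 2 * AUC - 1, NNT = 1 / SRD."""
    srd = 2 * norm.cdf(g / 2**0.5) - 1
    return 1 / srd

print(nnt_from_g(0.05))   # ~35.4 (reported: 35.71)
print(nnt_from_g(0.08))   # ~22.2 (reported: 21.74)
```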

At post-test, there was no significant effect of CBM for addiction or craving, but there was a significant, moderate effect on cognitive bias.

Discussion

At first glance, these findings are very bad news for proponents of CBM. There was no appreciable difference between CBM and control conditions across addiction outcomes at post-test. There was a small (but statistically significant) effect of CBM on addiction outcomes at follow-up; however, most studies were estimated to have a high risk of bias, and the extent of this risk of bias was positively correlated with the effect size: studies at the highest risk of bias tended to yield the largest effect sizes. There was no evidence to suggest that some forms of CBM (e.g. approach bias modification) were more effective than others (e.g. attentional bias modification).

However, some colleagues and I have raised concerns about methodological issues with this meta-analysis that complicate interpretation of its findings (see this comment from me, Paul Christiansen and Andy Jones (all University of Liverpool), and a separate comment from Reinout Wiers (University of Amsterdam)). I won’t repeat these detailed observations here (I suspect that most readers will not be that interested!), but I will briefly discuss some of the main issues, the response from the lead author to these issues, and some insights from others after the inevitable discussion on Twitter that followed.

What constitutes a randomised controlled trial?

The first point is that 11 of the 25 studies included in the meta-analysis were psychology experiments that investigated the causal influence of cognitive bias on substance use or craving in the laboratory. They were never intended as ‘RCTs’, were not portrayed as such in the original papers, and initially we could not understand why Cristea et al. took the decision to treat them as if they were RCTs. True, the techniques used during CBM may be very similar, if not identical, to experimental tools that were designed to manipulate cognitive bias in psychology experiments, but does this mean that both types of research should be described as RCTs? The authors subsequently responded to our comment and pointed out that according to both NICE and Cochrane guidelines, ‘RCTs are identified as such by the existence of a random allocation of participants to two or more groups, with one receiving an intervention, and the other no intervention, a dummy treatment or no treatment’. They went on to state that to distinguish laboratory studies from RCTs would be ‘cherry picking’.

So, do we stand corrected? I’m still not convinced, and I think that it hinges on how one defines an ‘intervention’. The implication of such a broad definition (as applied by Cristea and colleagues) is that any psychology experiment which compares the effects of two experimental manipulations on clinically-relevant outcome measures (e.g. substance craving or consumption, or subjective mood in the case of emotional disorders) should be classified as an RCT. I do not agree. However, I accept the point that we should not deliberately omit potentially valuable data from meta-analyses simply because those data were not labeled as an RCT in the original papers.

This issue provoked some polarised responses on Twitter, and it could rumble on indefinitely. However, it may all be irrelevant because Cristea et al. did investigate whether single-session lab studies (usually with student volunteers) and multiple-session CBM studies (often, but not always, with substance-dependent patients) yielded different findings: they did not. However, this failure to appreciate the difference between laboratory research and RCTs may have had other consequences, as detailed below.

Should laboratory-based psychology experiments be included alongside RCTs in meta-analyses like this one?


Why disregard data from laboratory measures of alcohol consumption, such as the bogus ‘taste test’?

The second point concerns the outcome measures from laboratory studies of alcohol CBM that were selected or excluded from the meta-analysis. In many of these studies, participants completed a bogus ‘taste test’ immediately after completing the CBM (or control) intervention. During the bogus taste test, participants were given alcoholic drinks (often alongside soft drinks) and instructed to drink as much or as little as they wished in order to rate the taste of the drinks. Despite these instructions, the real purpose of bogus taste tests is to record how much alcohol participants choose to drink, the volume of which can be standardised across studies (by expressing it as a percentage of the volume of alcohol provided or, if participants have a choice between alcoholic and soft drinks, by expressing alcohol intake as a percentage of total fluid intake). This task is widely used in laboratory research on addiction, and comparable tasks have been used to investigate influences on food intake in the laboratory for many decades (see Jones et al., 2016, for our recent paper on the construct validity of the task). However, Cristea et al. opted to exclude all outcome measures from bogus taste tests from their meta-analysis because ‘this is a non-standardized and variable task that does not take into account participants’ general preferences, their habitual alcohol consumption, and that is usually carried out without their awareness’.
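To make the standardisation point concrete, here is a minimal sketch of the two normalisations described above (function and variable names are mine, for illustration; they are not taken from any particular study):

```python
def pct_of_alcohol_offered(alcohol_consumed_ml, alcohol_offered_ml):
    """Alcohol intake as a percentage of the alcohol made available."""
    return 100 * alcohol_consumed_ml / alcohol_offered_ml

def pct_of_total_intake(alcohol_consumed_ml, soft_drink_consumed_ml):
    """Alcohol intake as a percentage of total fluid consumed, for
    designs that offer both alcoholic and soft drinks."""
    total = alcohol_consumed_ml + soft_drink_consumed_ml
    return 100 * alcohol_consumed_ml / total if total else 0.0

# e.g. a participant who drinks 120 ml of the 400 ml of beer provided:
# pct_of_alcohol_offered(120, 400) -> 30.0
```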

In our commentary, we noted our objections to the decision to exclude outcome variables from bogus taste tests. One observation is that each of the objections identified by Cristea et al. (in the above quote) is also likely to apply to subjective craving and self-reported alcohol consumption, the outcome measures that Cristea et al. chose to include. For example, subjective craving questionnaires differ widely from each other (in terms of their factor structure etc.); why, then, are these any more ‘standardized’ than bogus taste tests? Also, participants who normally drink heavily and prefer alcoholic drinks over soft drinks are also likely to report higher craving for alcohol, so subjective craving would also be influenced by their ‘general preferences and their habitual alcohol consumption’. Finally, in my opinion the fact that participants are deceived about the real outcome measure when they complete ‘bogus’ taste tests is a methodological strength, rather than a weakness. Perhaps a more important, overarching point is that none of these methodological limitations should really matter if participants are randomly allocated to experimental conditions, so long as the same outcome measures are taken in all participants, which of course they are in both psychology experiments and RCTs (back to this distinction again).

I would like to see this meta-analysis repeated with inclusion of standardised outcome measures from bogus taste tests. For the record, I don’t anticipate this changing the overall conclusions!

Would this evidence be more reliable if bogus taste test outcomes were included in the analysis?


Are risk of bias measures appropriate for psychology experiments?

In a separate commentary, Reinout Wiers identified additional methodological limitations of this meta-analysis. One problem is the way that ‘risk of bias’ was assessed, which again goes back to the failure to distinguish between psychology experiments and RCTs. The issue is that some of the risk of bias criteria (e.g. failure to report the method of randomisation, to report dropouts, or to confirm that intention-to-treat analyses were used) simply do not apply to psychology experiments. This is because randomisation is determined by a random number generator or is automated (e.g. by the computer program that ‘delivers’ CBM; see the sketch below), and student volunteers typically do not drop out of brief psychology experiments, so the dropout rate is often zero (in which case, it is not reported). Therefore, the risk of bias estimates are likely to be inflated, because criteria designed for RCTs were applied to psychology experiments for which they were never intended.
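For illustration, allocation in these experiments typically happens inside the software that delivers the task, along these lines (a hypothetical sketch, not code from any of the included studies):

```python
import random

def allocate(participant_id: int, conditions=("cbm", "sham")) -> str:
    """Assign a participant to CBM or sham training when the session starts.

    Seeding on the participant ID makes the allocation reproducible;
    there is no manual allocation procedure to report or to conceal."""
    rng = random.Random(participant_id)
    return rng.choice(conditions)
```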

Conclusions

The overall conclusions from this meta-analysis are a much-needed tonic to the (often uncritical) wave of enthusiasm for CBM for addiction and other psychological disorders. Unfortunately, in my opinion, the authors’ criterion for what constitutes an RCT is far too broad, and they fail to recognise the distinction between psychology experiments that manipulate candidate psychological processes in order to investigate their influence on subjective craving and substance use, and RCTs that evaluate the effectiveness of psychological interventions for substance use disorders. The consequences include the application of inappropriate risk of bias criteria and the unwarranted exclusion of a key outcome measure. Overall, I believe that this paper does not add much clarity to the literature on CBM for addiction, which is a shame.

To end on a conciliatory note: how are people who do not work in the CBM field supposed to know the difference between a psychology experiment and an RCT? Perhaps what is needed is a framework that enables us to clearly distinguish psychology experiments and experimental medicine from ‘genuine’ RCTs, to develop appropriate risk of bias assessments for these different types of research, and to pay closer attention to the reliability and validity of the outcome measures that are used in these different types of research.

We need a framework that helps distinguish psychology experiments and experimental medicine from ‘genuine’ RCTs.


Conflict of interest

I work in this field, and several studies from our group (some with positive findings, some with null results) were included in the meta-analysis. I admit that I wish that CBM would prove to be an effective intervention for substance use disorders, but I do not consider this to be a conflict of interest and, as noted in the Introduction, I have previously expressed my skepticism about some forms of CBM in print.

Acknowledgements 

I am particularly grateful to Reinout Wiers, Andy Jones, and Paul Christiansen for useful discussions when preparing comments on this article for PLoS ONE, and to Marcus Munafò and Anne-Wil Kruijt for their insights when the discussion continued on social media.

Links

Primary paper

Cristea IA, Kok RN, Cuijpers P (2016) The Effectiveness of Cognitive Bias Modification Interventions for Substance Addictions: A Meta-Analysis. PLoS ONE 11(9): e0162226. doi: 10.1371/journal.pone.0162226

Other references

Christiansen, P., Schoenmakers, T. M. & Field, M. (2015). Less than meets the eye: Reappraising the clinical relevance of attentional bias in addiction (PDF). Addictive Behaviors 44, 43-50.

Cristea I. A., Kok, R. N, & Cuijpers, P. (2015). Efficacy of cognitive bias modification interventions in anxiety and depression: Meta-analysis. British Journal of Psychiatry, 206: 7–16. [Mental Elf blog of this study]

Jones, A., Button, E., Rose, A. K., Robinson, E., Christiansen, P., Di Lemma, L., et al. (2016). The ad-libitum alcohol “taste test”: secondary analyses of potential confounds and construct validity. Psychopharmacology, 233: 917–924.

Linetzky, M., Pergamin-Hight, L., Pine, D. S., & Bar-Haim, Y. (2015). Quantitative evaluation of the clinical efficacy of attention bias modification treatment for anxiety disorders. Depression and Anxiety, 32: 383–391. [PubMed abstract]

MacLeod, C., & Grafton, B. (2016). Anxiety-linked attentional bias and its modification: Illustrating the importance of distinguishing processes and procedures in experimental psychopathology research. Behaviour Research and Therapy, doi:10.1016/j.brat.2016.07.005. [Abstract]


Matt Field


I'm a Professor of Psychology at the University of Liverpool, where I lead a small group of researchers investigating addiction, particularly alcohol problems. Our research is quite broad in scope, and includes laboratory work testing psychological processes in addiction, right through to studies of new treatments in clinics. You can read about our latest research findings on our blog here: http://livuniaddictiongroup.blogspot.co.uk/, or link to our pages on the University of Liverpool site here: http://www.liv.ac.uk/psychology-health-and-society/research/addiction/. I'm quite new to blogging but finding it very enjoyable and a pleasant change from the normal day to day stuff!
