There is a growing evidence base around digital health interventions, defined here as “programs that provide information and support (emotional, decisional, and/or behavioral) for physical and/or mental health problems via a digital platform (e.g. website or computer)”.
Crucially, we know digital interventions can be effective if people use them. But getting people to use them, and to persist in using them, is the tricky part. There is evidence of a ‘dose-response’ relationship with digital interventions, with greater effects associated with greater engagement. The question of how to better engage people with these interventions is therefore an important one.
This review focuses on exactly that: specifically, on prolonging engagement and encouraging people to revisit an intervention (rather than targeting the decision to engage in the first place). The authors aimed to review the effectiveness of technological prompts for encouraging sustained use of digital interventions.
- The review looked at technological methods of prompting, e.g. emails, phone calls, text messages
- They used the Behaviour Change Technique Taxonomy to describe the behavioural change interventions
- The authors assessed risk of bias using the Cochrane tool, and reported according to PRISMA guidelines
- They looked at several comparators:
- Prompts vs. no prompts
- Single prompts vs. multiple prompts (for example, one group receiving only emails, another receiving both emails and telephone calls)
- Prompts vs. non-technological prompts (use of printed materials or face-to-face communication)
- The primary outcome was engagement, “recorded as the number of log-ins/visits, number of pages visited, number of sessions completed, time spent on the digital intervention, and number of digital intervention components/features used.”
- Secondary outcomes were economic cost and, interestingly, ‘adverse outcomes’, defined as frustration/irritation with the prompts or a “loss of self-esteem due to not being able to engage with the digital intervention.”
- They excluded trials that focused on health professionals rather than service users, and also trials where they couldn’t determine whether participants had disengaged from the intervention or from the trial itself
- After exclusions, a total of 14 studies with 8,774 participants were included in the review. All 14 were subjected to a narrative synthesis and 9 were included in a quantitative meta-analysis.
- 8 of the digital interventions focused on mental health problems. The remaining 6 promoted various health behaviours, such as diet and smoking cessation
- Email was the most common form of prompt used. The prompts varied in terms of timing, frequency, duration, and who sent them (e.g. a therapist/researcher/peer)
- The authors report “No paper provided information about any underlying theoretical framework for the use, delivery, or content of strategies”
- They were only able to perform a meta-analysis on trials where the comparator was the absence of a prompt. They found that prompts led to significantly higher engagement when using dichotomous measures, but not when using continuous measures
- None of the studies included data on economic cost or adverse outcomes.
The authors say:
Generally, studies report borderline small-to-moderate positive effects of technology-based strategies on engagement compared to using no strategy, … However, this result should be treated with caution due to the high heterogeneity, small sample sizes, and the lack of statistical significance in the analysis of continuous outcomes… No firm conclusions were drawn about which characteristics of strategies were associated with effectiveness.
The authors state that “clear guidance for the optimal reporting of engagement is urgently needed” and we also need authors to state what they consider “optimal engagement” to be and why.
The authors suggest that qualitative research may be better suited to addressing some of these questions, to understand user experiences and preferences.
- Strictly speaking, the primary outcome is actually several outcomes, although this reflects that different studies reported different measures of engagement. It would have been useful to have some comment on whether all the measures are considered equivalent, or whether some are more important than others (for example, is completing more of an intervention’s components more important than the number of individual visits?). The authors themselves acknowledge that we need to think about what ‘optimal engagement’ looks like.
- Is this a case for COMET (the Core Outcome Measures in Effectiveness Trials initiative)? Do we need to agree a core outcome set for digital interventions, which would include specific engagement measures?
- The included secondary outcome relating to adverse effects is interesting, and I would have liked some discussion from the authors as to why they included this and whether they anticipate a genuine risk of damage to self-esteem.
- I was surprised more wasn’t said about the absence of any theoretical frameworks, especially given the employment of the Behaviour Change Technique Taxonomy. The authors did attempt to code content of the prompts to the taxonomy, though it seems that there was insufficient data to then compare these based on the components employed. The issue of engagement seems highly appropriate for a behaviour change approach, and perhaps more theoretically driven studies would also help with reporting exactly how prompts are expected to work and what outcomes they are intended to achieve.
Alkhaldi G, Hamilton FL, Lau R, Webster R, Michie S, Murray E. (2016) The Effectiveness of Prompts to Promote Engagement With Digital Interventions: A Systematic Review. J Med Internet Res, 18(1), e6. DOI: 10.2196/jmir.4790