Apps to support the mental health of young people: flashy and available versus evidence-based and hidden?

Any mental health practitioner who has been asked to recommend a mental health app for young people will know that finding one which not only appeals to young people but is also effective is a tricky business.

The majority of mental health apps for young people available in app stores are neither based on psychological theory nor evaluated for effectiveness. In contrast, many apps developed in academic settings and positively evaluated in scientific studies fail to be successfully implemented in practice.

Bear et al.’s (2022) systematic review addresses this gap between theory and practice by asking two key questions:

  • How many evidence-based apps for 15-25 year olds have been implemented in real-world settings?
  • What helps and hinders successful implementation?

Finding a mental health app that appeals to young people but also does the job is a tricky business.

Methods

In the first step of their project, Bear et al. (2022) conducted a systematic review to identify studies published between 2011 and 2021 investigating:

  1. an app with the primary aim of promoting wellbeing or preventing mental illness,
  2. in which the primary measure was mental health or wellbeing,
  3. in which the app was efficacious compared to a control group, and
  4. in a sample of young people aged 15-25 years.

In the second step, Bear et al. (2022) assessed implementation success for each app. Importantly, the studies themselves did not have to report implementation outcomes (although some did). Instead, Bear et al. (2022) assessed implementation success themselves: corresponding authors were contacted to determine how many apps were sustained or adopted post-development, and this was supplemented by web-based research to rate each app against a 10-item taxonomy based on Proctor et al. (2011).

Each app was evaluated in terms of:

  • Coproduction (user involvement in development)
  • Acceptability
  • Appropriateness
  • Feasibility
  • Fidelity
  • Adoption
  • Engagement
  • Penetration
  • Implementation cost
  • Sustainability

The quality of studies was assessed using the Mixed Methods Appraisal Tool (MMAT; Hong et al., 2018), although since the primary aim of the review was not to evaluate the effectiveness of the apps per se, but rather their implementation success, the quality of individual study designs was less important than in most systematic reviews.

Results

The authors identified 34 studies of 29 evidence-based apps. Most apps had been evaluated in university students, and mostly as standalone interventions rather than as adjuncts to treatment. Roughly 28% of apps (n=8) were existing commercially available apps (e.g. Headspace), whereas the remaining 72% (n=21) were newly developed. Of the newly developed apps, 43% (n=9) were still available at the time of the review (commercially or otherwise). Thus, in response to their first research question, 17 evidence-based apps for 15-25 year olds had been implemented in real-world settings.

The authors evaluated the apps with regard to 10 specific markers of implementation success. Just 5/29 apps had been coproduced with young people. Acceptability (e.g. user satisfaction) was frequently measured in the studies and was generally high. Similarly, engagement (e.g. retention) was reported in all but one study; a key finding here was that engagement frequently decreased over time. Feasibility (e.g. log-in frequency) was reported in 12/34 studies. Information on sustainability and penetration was rarely available.

The authors describe a number of barriers to implementation success which emerged indirectly from their review. The nature of research funding plays an important role: it is often hard to acquire funding to implement an app which has not yet been scientifically evaluated. Sustaining an app is also extremely costly and generally requires a commercial provider who sees a financial benefit in doing so and has the capacity to do so in the long term. Currently, a handful of existing mental health apps monopolise the market, making it hard for novel evidence-based apps to become established.

There are over 10,000 mental health apps available, but just 17 have a reasonable evidence-base for young people.

Conclusions

This systematic review highlights how few mental health apps for young people (n=17) are both evidence-based and implemented in real life. Co-producing apps with young people is an important step towards ensuring that an app is not only efficacious within the confines of a research study, but also appealing enough for young people to use beyond this scope. Nevertheless, the lack of funding for the implementation of evidence-based apps represents a major barrier to successful sustainability, over which researchers and clinicians themselves have relatively little control. Funding schemes with a purely “implementation” focus could go some way towards springboarding efficacious apps onto the market.

In my personal opinion, what is needed is far more financial support at a governmental level for the delivery of apps, which not only provide personal gain for individual users but also play an important role at a public health level in the prevention of mental health problems. As we now well know, preventing mental illness in young people is cost-effective, and investing here could reduce the economic burden of treating mental illness in the long term.

With evidence to support the efficacy of mental health apps for young people, it’s time to start investing money in their sustainability.

Strengths and limitations

The strengths of this manuscript include the highly-qualified study team who have established expertise regarding the development, delivery and sustainability of mental health apps for young people. The pre-registration of the systematic review protocol, which was then strictly adhered to, speaks to the transparency of the research process. As well as addressing a topic which is highly relevant, the authors provide clear recommendations for research and practice.

The authors themselves acknowledge that the findings are somewhat limited in their generalisability. Firstly, the age range of the sample (15-25 years), which bridges adolescence and young adulthood, may have meant that some studies were left out. Secondly, the included apps were largely evaluated with university students rather than younger age groups or more diverse samples. Finally, the findings regarding implementation success may be an over-estimate, given that they partly reflect apps which were offered as part of treatment rather than standalone apps per se.

I have to say it took me a while to get my head around the fact that the authors were not systematically searching for implementation outcomes, but rather using the systematic review method to identify apps that had been positively evaluated in scientific studies, whose implementation they then assessed themselves. This approach certainly has its merits and was likely chosen because so little published research has been conducted into the implementation outcomes of mental health apps, and because the field is moving so fast.

However, this novel approach also comes with its own limitations. It is hard to assess the quality of the study, since it depends less on the quality of the individual studies included (which the authors report) and more on the methods by which the authors collected data on implementation outcomes. Much of the data collected (e.g. availability in the app store, app-store rating) is relatively objective and less prone to researcher bias. The data collected via a survey of study authors seems more prone to bias. Whilst a standardised survey form was used and the extracted data was checked by a second researcher, a number of potential biases in data collection nevertheless seem possible. Firstly, I could not find information on how many times the survey was completed, i.e. for which apps. Secondly, it was unclear how missing information within submitted survey responses was treated, and what reasons could explain why data could not be collected.

The authors took a novel approach to collecting implementation outcomes which comes with its own strengths and limitations.

Implications for practice

As mental health professionals we are often asked to provide recommendations for trustworthy apps to support young people’s mental health. This review demonstrates why this is a hard task: there are many apps which seem appealing to young people but have no evidence-base, and only a handful of apps with a sound evidence-base that are available to young people. Five to ten years ago one might have said that not enough was known about the efficacy of app-based interventions. We now know that the question is less “can app-based interventions work?” and more “which ones work best, how, and under which conditions?”.

As the authors rightly point out, sustaining the delivery of app-based interventions is expensive. Funding schemes to support implementation research are of course highly valuable. Equally if not more valuable, from my perspective, is governmental funding of app-based prevention and app-assisted treatment. This could be valuable in reducing the burden that mental illness places not only on individuals but also on healthcare services.

This research adds further weight to the call for more funding of app-based mental health prevention and treatment.

Statement of interests

I have worked with the authors of this review as part of the “ECoWeB” project, aimed at developing and disseminating an app to provide engaging and personalized tools and psychological skills to promote emotional wellbeing and prevent mental health problems in adolescents and young adults. I was not involved in the conception, conduct or publication of this manuscript, but was delighted to be invited to blog about it for the Mental Elf.

Links

Primary paper

Bear, H. A., Ayala Nunes, L., DeJesus, J., Liverpool, S., Moltrecht, B., Neelakantan, L., Harriss, E., Watkins, E., & Fazel, M. (2022). Determination of Markers of Successful Implementation of Mental Health Apps for Young People: Systematic Review. Journal of Medical Internet Research, 24(11), e40347.

Other references

Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F., Cargo, M., Dagenais, P., … & Pluye, P. (2018). The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Education for Information, 34(4), 285-291.

Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., … & Hensley, M. (2011). Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65-76.
