Ethics of digital technology for mental health: is this the end of the digital dilettante?

The potential of digital technology to make the lives of people with mental health difficulties better has never been greater. The advent of the smartphone and mobile internet access has created the conditions for an ever-expanding range of opportunities for the use of technology to influence outcomes in health. However, ethical considerations remain for professionals in suggesting the use of such technologies.

Bauer et al.’s (2017) open access paper Ethical perspectives on recommending digital technology for patients with mental illness reviews some of the major ethical concerns presented to medical professionals by this explosion of technological possibilities and explores some of the ways in which new technologies challenge the boundaries between health, commerce and the private and public uses of data.

As the authors state:

To maximize the potential of technology to help patients with mental illness, physicians need education about the basics of the digital economy, and must help patients to understand the limits and benefits.

The potential of digital technology to make the lives of people with mental health difficulties better has never been greater.

Methods

Taking the form of a summary, Bauer et al. present an overview of some of the ethical concerns around emerging digital technologies in health care, in the hope of stimulating discussion amongst mental health professionals and informing that discussion in a fast-moving and ever-developing area.

The paper boasts a lengthy bibliography of health and technology papers and attempts to bring the reader up to speed with the issues at hand.

Results

Bauer et al.’s paper has two main elements: a discussion of the digital economy and the tensions it presents for patients and professionals in mental health; and a more focused section on specific ethical issues that digital tools, apps and services present for mental health care professionals.

In the paper, the authors consider six questions covering specific ethical concerns for mental health professionals:

  1. Should physicians recommend digital technology when patients lack technical skills and understanding of the digital economy?
  2. Can physicians ignore patient use of digital technology?
  3. Do physicians understand mental state monitoring by commercial organisations?
  4. What is the message to patients when physicians recommend passive monitoring of mental health?
  5. Do physicians and healthcare administrators need education about the digital economy?
  6. Should individual physicians validate smartphone apps used to make treatment decisions?

In their overview of the digital economy, the authors allude to a tension between the practices of tech companies and the ethical obligations of health care professionals. They paint a picture of two competing ethical frameworks around the individual and their right to privacy and to not have their data exploited. The authors point out that the emerging digital economy is based upon the monetisation of data, combining multiple sources of data to support analytic decision-making.

Devices, websites and apps regularly capture and transmit data about individuals. As the authors say: “In the past, it was only profitable to collect personal data about the rich and famous (Goldfarb and Tucker 2012). The costs of data capture, storage, and distribution are now so low that it is profitable to collect personal data about everyone… Data from sources that appear harmless and unrelated may be combined to detect highly sensitive information, such as predicting sexual orientation from Facebook Likes (Kosinski et al. 2013).”  The authors quote Eric Schmidt (Executive Chairman of Alphabet, Google’s parent company): “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about. (Saint 2010).”

Companies provide free-to-the-user services by collecting huge amounts of data, which are turned into data products sold on to third parties. This includes data from medical websites and apps. This trade allows for the creation of decision-making tools that operate without human involvement, something that concerns the authors. “The collected data based on tracking behaviors enable automated decision-making, such as consumer profiling, risk calculation, and measurement of emotion,” they write. “These algorithms broadly impact our lives in education, insurance, employment, government services, criminal justice, information filtering, real-time online marketing, pricing, and credit offers” (Yulinsky 2012; Executive Office 2016; Pasquale 2015). The authors worry that these algorithms may compound biases and introduce new inequalities, and that healthcare apps and websites might feed into this. The authors claim that “People divulge information online because they are susceptible to manipulations that promote disclosure” (Acquisti et al. 2015). People are keen to keep their medical data private, but often inadvertently share large amounts of data about themselves as they interact with digital products and services. In the use of medical websites and apps, the line becomes blurred.
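
To make the point concrete, here is a toy sketch of how a simple classifier can combine individually harmless binary signals to predict a sensitive attribute. It is purely illustrative: it is not the method of Kosinski et al. (2013), who applied dimensionality reduction and regression to millions of real Facebook profiles, and every feature, label and number below is invented.

```python
# Purely illustrative: a toy model showing how individually innocuous binary
# signals (e.g. "liked page X: yes/no") can be combined to predict a sensitive
# attribute. This is NOT the method of Kosinski et al. (2013); every number,
# feature and label below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 200 hypothetical users, 50 harmless-looking binary features.
X = rng.integers(0, 2, size=(200, 50))

# An invented sensitive attribute that correlates weakly with a handful of
# those harmless features - exactly the kind of correlation a model can exploit.
signal = X[:, :5].sum(axis=1) + rng.normal(0, 1, size=200)
y = (signal > np.median(signal)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("Accuracy on the invented data:", round(model.score(X, y), 2))
```

The model itself is unremarkable; the point is that the predictive signal comes entirely from data that a user would never think of as medical.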

Do physicians and healthcare administrators need education about the digital economy?

Conclusions

The authors conclude that there will always be disparities between people in their knowledge, proficiency and desire to use digital tools and services, and that we should guard against assumptions. They suggest that even if individual healthcare professionals ignore their patients’ use of digital technologies, those individuals may still come to harm through bad advice, fraudulent or dangerous services, and through self-diagnosis and, potentially, self-treatment.

The authors argue that it is vital for healthcare professionals to be able to draw distinctions between monitoring of individuals done for their benefit and with their consent, and the commercial activity of monitoring people’s mental state for profit. Discussing emerging technology that can discern emotional affect from voice, interactions with smartphones and social media use, the authors say:

At first glance, the use of personal data for commercial profiling and medical monitoring purposes may look identical. But the motivation for using algorithms to define emotions or mental state for commercial organizations is to make money, not to help patients… There must be a clear distinction between the algorithmic findings from the practice of psychiatry, and commercial findings for profit, even though similar analytic approaches are used.

Discussing passive monitoring of location, movement or exercise via smartphone sensors and similar, they point out that such monitoring in a mental health setting may compound stigma rather than alleviate it:

The concept that some individuals require passive monitoring for mental stability may be easily misinterpreted by the general public, who often associate mental illness with violence (Pescosolido 2013).  The situation will become worse if passive monitoring is used as a punishment, such as for non-adherence, or to facilitate the job of healthcare workers.
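
For readers unfamiliar with what passive monitoring might actually produce, the sketch below shows the kind of derived metric (daily distance travelled, computed from location fixes) that such an app could report. It is not taken from the paper or any real product; the coordinates and the metric are simply an assumed illustration.

```python
# Minimal illustration of a passively derived metric: daily distance travelled,
# computed from smartphone location fixes. Not from the paper or any real app;
# the coordinates below are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical GPS fixes captured over one day: (latitude, longitude).
fixes = [(51.5007, -0.1246), (51.5033, -0.1196), (51.5074, -0.1278), (51.5007, -0.1246)]

daily_km = sum(haversine_km(*fixes[i], *fixes[i + 1]) for i in range(len(fixes) - 1))
print(f"Distance travelled today: {daily_km:.2f} km")
```

Even a metric this simple reveals a great deal about a person’s day, which underlines the authors’ point that who computes it, and why, matters.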

The authors recommend that a certifying organisation independent of all vendors of apps and services should exist to certify apps, acknowledging the challenges the NHS has had in creating such certification. They also suggest that, given the complexity of the interaction between device sensors and the apps that rely on them for measurement, medical monitoring apps would require new certification for each device on which they are installed, to ensure that the device gives true readings.

The authors are also at pains to point out that even healthcare professionals who are enthusiastic about implementing digital technologies may not understand the wider digital economy and the potential points of tension between its practices and accepted ethical standards. As such, they recommend that healthcare professionals should have access to education and regular updates on the state of the industry from independent sources, rather than from the sellers of the services themselves.

To date, the NHS has struggled to deliver a credible system for app accreditation.

Strengths and limitations

The paper provides a good starting point for what is a complex and contentious area. The authors acknowledge that they have not provided answers to the complex problems which they raise and that they could not include a full survey of all of the issues.

Issues that they suggest for future consideration include:

  • Whether it is ethical for chatbots (automated conversation software) to deceive patients into thinking they are conversing with a human being;
  • How patient monitoring software should handle data captured inadvertently about people other than the patient;
  • The implications of medication that can sense when it has been taken;
  • The legal issues around timely response to collected data, such as whether there is an obligation to act if the data show that someone is in trouble;
  • And broader questions about evaluating the medical efficacy of technological interventions.

They also raise, but do not cover, the timely spectre of hacking and the protection of data from malicious attention. They touch on the productivity paradox of introducing new technologies, whereby services and practices need front-end reorganisation effort before potential rewards can be reaped. They flag the danger of automation bias, where professionals rely on data produced by sensors and other technology and trust it more than other forms of data or observation, meaning they may fail to notice when the collected data do not match reality.

Is it ethical for chatbots to deceive patients into thinking they are conversing with a human being?

Implications for practice

Technology presents great potential for gains in the treatment and management of health conditions. Digital healthcare technology is a complicated interaction between healthcare professionals and systems, patient desires and wishes, and the commercial imperatives of the mainly private enterprises that create the tech. In the UK, where the delivery of the majority of healthcare remains within the NHS, the tension between such socialised provision and the business models required by private developers remains an issue. Digital healthcare technology is not developed in a vacuum and the possibilities are governed by the interplay of the ethical duty of healthcare professionals to their patients and the obligations of companies to create sustainable and profitable products. As the authors state:

Without an understanding of the digital economy, physician recommendations to patients to use technology may inadvertently lead to harm.

New technologies create new possibilities and with new possibilities come new questions. It is clear that the digital economy will always push technology in the direction of ‘could’. It is up to healthcare professionals in collaboration with patients to establish whether the things that digital technologies and the companies that create them ‘could’ do are, in view of ethical and patient concerns, the things they should do.

Digital technology is the new normal and, for the good of patients and for society as a whole, professionals in mental health need to treat it not as an optional add-on to everyday life, but as an integral part of it. The time, it seems, for digital dilettantes in healthcare is over.


Links

Primary paper

Bauer M, Glenn T, Monteith S, Bauer R, Whybrow PC, Geddes JR. (2017) Ethical perspectives on recommending digital technology for patients with mental illness. International Journal of Bipolar Disorders 2017 5:6 DOI: 10.1186/s40345-017-0073-9 Published: 7 February 2017

Other references

Acquisti A, Brandimarte L, Loewenstein G. (2015) Privacy and human behavior in the age of information. Science. 2015;347:509–14.

Executive Office of the President. (2016) Big data: a report on algorithmic systems, opportunity, and civil rights. 2016. https://www.whitehouse.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf. Accessed 8 Oct 2016.

Goldfarb A, Tucker CE. (2011) Online advertising, behavioral targeting, and privacy. Commun ACM. 2011;54:25–7.

Kosinski M, Stillwell D, Graepel T. (2013) Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci USA. 2013;110:5802–5.

Pasquale F. (2015) The black box society. The secret algorithms that control money and information. Cambridge: Harvard University Press; 2015.

Pescosolido BA. (2013) The public stigma of mental illness: what do we think; what do we know; what can we prove? J Health Soc Behav. 2013 Mar;54(1):1-21. doi: 10.1177/0022146512471197. Epub 2013 Jan 16.

Saint N. (2010) Google CEO: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” 2010. BusinessInsider.com. http://www.businessinsider.com/eric-schmidt-we-know-where-you-are-we-know-where-youve-been-we-can-more-or-less-know-what-youre-thinking-about-2010-10. Accessed 8 Oct 2016.

Yulinsky C. (2012) Decisions, decisions … will ‘Big Data’ have ‘Big’ impact? Financial Times. 2012. http://www.ft.com/cms/s/0/9ee048b6-4612-11e1-9592-00144feabdc0.html. Accessed 8 Oct 2016.
