Very few areas of medicine have undergone such changes in service design as psychiatry. Providing community care to people with severe mental illness is a challenge, owing to the multifaceted ways in which people present to services and the complexity of their needs.
Significant research has therefore taken place, and an accumulating evidence base has developed for services such as Early Intervention Services (EIS) (Correll et al., 2018).
A model of care that has gained publicity in recent years is Open Dialogue (OD) therapy. This therapy originated in Western Lapland in the 1980s; its broad principles are the provision of care at the level of the social network, delivered by staff trained in family, systems and related approaches.
What is Open Dialogue?
Open Dialogue is described as:
A person-centred model of mental healthcare [based on] a recovery-oriented model, in which the emphasis is on the mobilisation of resources within patients and their families, in order to engender a sense of agency from early on.
Organisational principles include a social network perspective, provision of immediate help, responsibility, psychological consistency, flexibility and mobility. Practice principles include dialogism… (tolerating) uncertainty.
(Razzaque and Stockmann, 2016).
The possible use of Open Dialogue as a model of care in the NHS is based on:
Promising data from Finnish non-randomised trials, which demonstrate outcomes far superior to those in the UK. For example, more than 70% of people with first-episode psychosis treated with an Open Dialogue approach returned to study, work or work-seeking within 2 years, despite lower rates of medication and hospital admission compared with treatment as usual, and these outcomes were stable after 5 years.
(Seikkula: Five-year experience of first-episode nonaffect…; Seikkula: The comprehensive open-dialogue approach; Razzaque and Stockmann, 2016).
Further claims made about the effects of Open Dialogue by those who developed it include:
- Patients in the comparison group were hospitalised significantly longer (approximately 117 days, compared to 14 days in the Open Dialogue (OD) group).
- At least one relapse occurred in 71% of comparison group patients, compared to 24% in the OD group.
- Comparison group patients had significantly more residual psychotic symptoms than the OD group: some 50% of comparison group patients had at least occasional mild symptoms, compared to 17% of OD patients.
- The employment outcome was better for OD patients, of whom only 19% were living on a disability pension, compared to 57% of comparison group patients.
- All the patients in the comparison group used neuroleptic drugs, compared to one third in the OD group.
(Seikkula J, Alakare B)
Open Dialogue in the UK
Open Dialogue initiatives have been set up mainly in Western countries (Scandinavia, Italy, Germany, the UK, Australia and the United States). Teaching programmes are available, and, as stated above, initiatives are under way to provide this service within the NHS. Furthermore, the National Institute for Health Research (NIHR) has funded a 5-year programme of research into Open Dialogue therapy, including a randomised controlled trial (ODDESSI, UCL, 2018).
The NHS is perennially under financial strain, and mental health services are acknowledged to have been chronically underfunded. Any intervention that makes bold claims such as those above is therefore appealing. Moreover, its aims in terms of person-centred care are laudable, and are what one would wish for in a modern healthcare setting.
However, any new intervention requires a high degree of scrutiny before valuable time, energy, and funding are put into it. At a basic level, evaluating research for a public health intervention requires us to answer the following questions:
- Is the research of high enough quality to support a decision to implement the intervention?
- What are the research outcomes?
- Is the research transferable to potential recipients (individual/population)?
(Rychetnik et al., 2002).
To address this, studies should measure a representative population, demonstrate the intervention has fidelity to the proposed model of care, and use a valid comparator/control group (controlling for placebo effects). Studies should also measure relevant outcomes, including (and controlling for) potential confounders that could alternatively explain findings.
A recent review, published in Psychiatric Services by Freeman et al (2018), sought to evaluate the evidence for Open Dialogue, to enable clinicians, policymakers and, most importantly, service users and their families to see what the current evidence is.
In this review the authors searched databases for studies of Open Dialogue, identifying 23 studies (8 reporting quantitative data and 16 qualitative; one study reported both). The review took the form of a textual narrative synthesis, rather than a systematic review, owing to the “very low quality of included studies.” Because of study heterogeneity, no single quality measure could be applied, so different measures were used: for qualitative studies, a measure by Pope et al (Pope, Ziebland and Mays, 2000); for quantitative studies, the STROBE criteria (von Elm et al., 2014).
Study selection and study quality
The authors identified 23 studies with a wide range of designs and outcome measures. They noted a lack of consistency in the reporting of study methodology, which increases the risk of bias (see below).
This was exemplified in the 8 quantitative studies: small sample sizes, variable outcome measures, lack of randomisation, a general absence of control groups, and, in the one study with a control group, inadequate information about the study groups. Most studies were conducted by the Open Dialogue (OD) developers, increasing the risk of bias. The authors summarised results from the quantitative studies, reporting design, numbers, follow-up period, outcome measures, fidelity to OD principles, findings and significant reported limitations.
Of these quantitative studies, only one took place outside Finland; 6 took place in Western Lapland (where OD was pioneered) and were authored by proponents of OD. Of the 8 studies, 5 were case series, 2 were cohort studies (1 retrospective), and 1 used a “historical control” design (with a sample size of N=14 for the control group). The majority of studies reported on a cohort from 11-25 years ago. The authors of the review noted that sample sizes changed from study to study, despite the same sample being studied, with variability in data reporting throughout the published literature.
To give some insight into potential problems in interpreting the data, the authors quote an excerpt from one paper, in which OD proponents state that OD “had been helpful-if not in actually preventing schizophrenia, at least in moving the commencement of treatment in a less chronic direction” (Aaltonen, 2011). Freeman et al note that this is overly positive: the “change” could simply have been due to a change in diagnostic practice, and ratings of symptoms and diagnoses were made by those involved in OD (i.e. unblinded assessors).
The difficulty in using historical cohorts is underlined by the review’s authors when they describe how OD was developed from an initial multicentre study in the early 1990s. They comment that principles of OD were in place for some of these participants, though it is unclear for whom. One of the studies states that in 1994 the content of the psychotherapy pertaining to OD was “transformed”, though no details are given as to what this “transformation” involved.
Other Open Dialogue publications then comment that treatment changes from the earlier trial were taken forward in a more systematic way in later OD groups, but no clear description is given.
Reading through the results of these quantitative studies, it is virtually impossible to know precisely what intervention was being delivered, who was receiving it and at which time point, and what the overall outcomes were. Most, if not all, analyses appear to be post-hoc, with selective reporting of samples and outcomes, predominantly with non-blinded assessors.
The authors of this review then comment on the qualitative studies of Open Dialogue (OD), pointing out that, though important in guiding clinicians in terms of themes, the qualitative data are drawn from small samples with a high risk of sampling bias. They point out that it is unclear whether those critical of the intervention were sampled. Most studies do not report sampling procedures or participation rates, increasing the risk of bias. The authors state that a “dearth of methodological information… make it difficult to evaluate the credibility of data or potential bias.”
Qualitative data were also available from network meetings of OD services in Norway and Sweden. The authors also comment on 2 qualitative studies evaluating interviews with professionals within an OD service; these were described as being of good quality, in which sampling and the procedures for data collection were reported. A strength of some qualitative studies was their multi-perspective approach, taking account of service users’ experience of change. The authors also note that the OD service model appears acceptable to service users.
The authors conclude that more rigorous qualitative and quantitative studies are required, that methods of analysis should be standardised, and involve both service users’ and staff experiences.
The authors comment on the lack of high-quality trials and service evaluation, and point out that most studies were highly biased and of low quality.
The authors conclude that few studies reported fidelity criteria (i.e. whether those in the study were adhering to the Open Dialogue (OD) model of care), making the point that the OD approach will be “assessed on its ability to be sustainable, scalable and measurable.”
They also raise the question of whether future effectiveness studies will provide evidence of cost benefit.
Strengths and limitations
A strength of the review is its thorough nature. The authors did not pre-publish a study protocol, though given the literature (and its quality) there is nothing to suggest that they selectively reported results or ignored salient literature.
Another strength is the method of the review and its attempt to measure study quality. It would have been easy to take the data presented in the studies at face value, report effect sizes and make (in our view) misleading generalisations.
In evaluating the quantitative studies the authors use recognised criteria and flag significant problems with study design; the major failings are the lack of a uniform sampling method, the lack of adequate comparator groups, the risk of bias in the assessment of outcomes, and the lack of measurement of factors affecting outcomes that one would reasonably expect in a service evaluation, let alone a controlled study.
It is difficult to understand what was actually done in a number of studies, and the table is at times hard to make sense of; this reflects the state of the literature rather than the quality of this review. The data in the original studies are not presented as one would expect in a peer-reviewed journal (e.g. full demographics, clinical variables, outcome measures, quantitative statistics), and a review can only present the evidence as it exists.
Implications for practice
This is an important review. The quantitative evidence presented is stark and sobering; the absence of clear data on effectiveness is worrying, given the claims made for Open Dialogue in terms of outcome measures.
Quite apart from the numbers (which are small), almost every aspect of study methodology and reporting is weak in the included studies, and, as Professor Mueser notes in his incisive commentary on this review:
The results of this review are underwhelming.
Furthermore, the authors’ observations regarding the reporting of results (varying cohort numbers for the same sample, ratings of outcome by those conducting the study, selective reporting of outcomes) make it difficult to see how anyone could base decisions about further quantitative work on these data.
The qualitative data point to strengths of the model, and are consistent with the way in which Open Dialogue is generally reported.
What does this mean?
Looking at the methods for evaluating healthcare research (above), there can be no doubt that the empirical evidence for Open Dialogue therapy is poor, to a level where it is difficult to see how one could justify its use on the basis of outcome measures such as relapse, hospitalisation and healthcare burden. The quality of the evidence, as presented, would arguably not warrant publication as primary data in a peer-reviewed journal, and should be viewed only as a selectively reported service evaluation (at best).
Questions do have to be asked about how different samples from the same population, with inconsistent outcome measures and unconventional study designs, could be put forward as quantitative evidence of any level, and how this message was disseminated without scrutiny.
Notwithstanding this, the principles, energy and enthusiasm that Open Dialogue has generated are a sign that there is a desire for a fresh approach within existing mental health services.
This will be an issue for policy-makers and key stakeholders, and the decision to conduct rigorous trials may well have been based on aspects of care other than the empirical evidence.
Given that we do have interventions with a significant evidence base (such as Early Intervention Services, noted above), and given the opportunity cost of implementing changes in a stretched healthcare system, as Professor Mueser puts it:
The present data on Open Dialogue are insufficient to warrant calls for further research on the program other than those projects that are currently under way.
Conflicts of interest
Sameer Jauhar is co-investigator on a research study in psychosis funded by Alkermes. King’s College London has received fees from Lundbeck for lectures SJ has given on psychosis.
Mukesh Kripalani and James Chivers report no conflicts of interest.
References
Aaltonen: The comprehensive open-dialogue approach… (Accessed: 26 February 2019).
Correll, C. U. et al. (2018) ‘Comparison of Early Intervention Services vs Treatment as Usual for Early-Phase Psychosis: A Systematic Review, Meta-analysis, and Meta-regression’, JAMA psychiatry, 75(6), pp. 555–565. doi: 10.1001/jamapsychiatry.2018.0623.
von Elm, E. et al. (2014) ‘The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies’, International Journal of Surgery (London, England), 12(12), pp. 1495–1499. doi: 10.1016/j.ijsu.2014.07.013.
Freeman, A. M. et al. (2018) ‘Open Dialogue: A Review of the Evidence’, Psychiatric Services (Washington, D.C.), p. appips201800236. doi: 10.1176/appi.ps.201800236.
Mueser, K. T. (2019) ‘Is More Rigorous Research on “Open Dialogue” a Priority?’, Psychiatric Services, 70(1), pp. 1–1. doi: 10.1176/appi.ps.70101.
Pope, C., Ziebland, S. and Mays, N. (2000) ‘Analysing qualitative data’, BMJ : British Medical Journal, 320(7227), pp. 114–116.
Razzaque, R. and Stockmann, T. (2016) ‘An introduction to peer-supported open dialogue in mental healthcare’, BJPsych Advances, 22(5), pp. 348–356. doi: 10.1192/apt.bp.115.015230.
Rychetnik, L. et al. (2002) ‘Criteria for evaluating evidence on public health interventions’, Journal of Epidemiology & Community Health, 56(2), pp. 119–127. doi: 10.1136/jech.56.2.119.
Seikkula: Five-year experience of first-episode nonaffect… (Accessed: 11 March 2019).
Seikkula, J. and Alakare, B. (no date) Open Dialogues. Berlin: Peter Lehmann Publishing.
Seikkula: The comprehensive open-dialogue approach… (Accessed: 11 March 2019).
UCL (2018) ODDESSI, UCL Psychology and Language Sciences. (Accessed: 26 February 2019)