Clinicians are warmed by the nostalgic sunshine of times past: a geographically-based model with care from local community mental health teams, where staff moved across inpatient and community services, modelling the transition that many of our patients made. It felt, well, sensible: one team, one over-arching plan, and better long-term relationships. Staff liked it, and more importantly, many service users liked it: so what happened?
- Firstly, “liking stuff” doesn’t always cut much ice in a strained healthcare service where the focus is on maximising resources.
- Secondly, inpatient bed numbers were cut, amid increasing recognition that about 50% of the secondary care mental health budget is spent on the roughly 3% of service users occupying those beds at any given time (Burns, 2017).
- Thirdly, community services moved from generic teams hubbed around primary care practices to more specialist functional teams, notably splitting into ‘psychosis’ and ‘non-psychosis’ services.
Optimising services is not a bad thing, especially given growing demand and limited resources. In principle, a focus on specialist inpatient services that emphasises rapid discharge, alongside community services with more specialist staff, should offer both savings and higher quality care. The problem is that none of this is evidence-based. Further, these models measure success by throughput and duration of stay rather than quality of care. And lastly, they inevitably mean that those who use services move through more teams and see more clinicians, thus leading to less relational continuity.
Specifically, funding pressures from 2010 onwards have triggered almost continuous, extensive reconfiguration of mental health services in London in search of efficiencies (Gilburt, 2015), often without adequate attempts to monitor the impact the changes may have had on markers of functional and clinical outcomes.
As Gilburt states:
“…in pursuing financial sustainability, mental health providers have arguably taken a leap in the dark in redesigning services, workforce and operations”
– Gilburt (2015).
So back to the initial challenge: if admittedly “liked” by many, can we show that continuity of care is actually clinically effective or a better clinical model? Previous national surveys have highlighted both just how common changes in staff continuity can be, and how these are associated with worse patient experience and perceived quality of care, particularly for individuals with schizophrenia (Sanatinia et al, 2016). However, beyond survey data with all the caveats they inevitably contain, studies have not robustly addressed the impact of this loss of relational continuity on health or quality of life.
The authors of the current paper (Macdonald et al, 2019) sought to redress this, measuring both continuity of care and routine clinical outcome data over an 11-year period in a South London mental health trust, to evaluate whether the major restructuring of services had an impact on an arguably uniquely vulnerable cohort of service users: those with chronic schizophrenia.
The authors collected electronic patient records from 5,552 individuals diagnosed with schizophrenia or persistent delusional disorder, from 2006 to 2016, in the South London and Maudsley NHS Foundation Trust. They calculated a yearly Modified Modified Continuity Index (MMCI), based on the number of different staff seen and the number of staff contacts, alongside rates of team change (taking into account teams closed and newly formed), as measures of continuity of care. These were then analysed for longitudinal changes over the decade and correlated with changes in the Health of the Nation Outcome Scales (HoNOS), reflecting individuals’ mental health, well-being and social functioning, alongside demographic factors such as age and ethnicity.
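The post does not spell out the exact MMCI variant the authors used, but the index is conventionally defined (Magill and Senf’s Modified Modified Continuity Index) from the number of contacts in a year and the number of different staff seen. A minimal Python sketch of that standard formula follows; returning 0 for years with no contacts is my own assumption, not something stated in the paper:

```python
def mmci(n_contacts: int, n_staff: int) -> float:
    """Modified Modified Continuity Index (standard Magill & Senf form).

    Ranges from 0 (every contact with a different staff member)
    to 1 (all contacts with the same staff member).
    """
    if n_contacts == 0:
        return 0.0  # assumption: treat a year with no contacts as zero continuity
    numerator = 1 - n_staff / (n_contacts + 0.1)
    denominator = 1 - 1 / (n_contacts + 0.1)
    return numerator / denominator

# Ten contacts, all with one member of staff: perfect relational continuity
print(mmci(10, 1))               # 1.0
# Ten contacts, each with a different member of staff: continuity near 0
print(round(mmci(10, 10), 3))    # 0.011
```

On this reading, the Trust-wide decline in mean MMCI reported below corresponds to patients seeing a wider pool of staff for a given number of contacts each year.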
Over the study period, the mean MMCI fell significantly and steadily. In addition, the analysis showed that across the Trust around 100 community mental health teams were closed, while only 58 new ones, aimed at patients with psychosis, were opened, with the most significant closures and reorganisations happening in 2011. High MMCI scores were significantly associated with specific demographic factors, such as older age, and with involvement with fewer community teams.
On the other hand, mean total HoNOS scores saw a significant increase, representing an overall decline in functioning, particularly for mental and behavioural problems, symptoms of mental illness (mood, hallucinations, delusions, cognitive problems), problems with physical health, and non-accidental self-injury. This rise in scores began after 2010, coinciding with the start of the major reorganisations. High HoNOS scores, equating to poor clinical and functional outcomes, were correlated with older age, lower continuity measures and a higher number of teams involved in the patient’s care.
The results show that, over the 11-year period, there was a decline in continuity of care that was significantly (and independently) associated with worsened clinical outcomes.
The authors interpret these results to show that:
Declining relationship continuity disrupts patients, impairs communication and interferes with the best management of schizophrenia.
In addition, the results show that the more teams involved in a patient’s care, the lower the continuity measures and the worse the clinical outcomes. However, this does not rule out the possibility that a worsening mental state required input from more, and more specialised or acute, teams, itself causing a decline in continuity. In other words, it challenges us to consider that the association with worsened outcomes might not be causally driven by changes in continuity.
From the data, older patients were the subgroup who appeared most affected, in terms of clinical outcomes, by the changes in mental health services over the decade; these effects appear cumulative, as HoNOS scores were significantly higher in the later study years.
Strengths and limitations
This study provides a much-needed overview of the longitudinal changes, in terms of both clinical outcomes and continuity of care, that have occurred in a specific subset of patients over a decade. However, the paper is perhaps limited by both the quality of its chosen measure and its ability to inform future policy. The MMCI, originally developed for American family practice settings, is a scale from 0 to 1 calculated from the number of staff members seen and the total number of staff contacts, and thus measures relational continuity. However, it neglects other aspects of continuity of care, such as integration between different services and within teams, continuity of management (for example, adherence to plans during periods of transition), and the subjective experience of continuity from the patient’s point of view. In fact, other studies (Burns et al, 2009) have shown that there are up to seven factors within the phenomenon of continuity of care in mental health, most of which the MMCI does not incorporate.
In addition, the authors argue that the decline in relationship continuity is related to the reconfiguration of the Trust’s mental health services, but they provide the external reader with little information about the specific nature of the service changes or how these may have affected patient outcomes and continuity. No doubt there are sensitivities and challenges to reporting on some of these, but for the study to have greater implications for practice it would be helpful to have a more detailed analysis of which specific changes over the past decade most affected clinical outcomes: for example, changes in staff turnover rates, reconfiguration from inpatient to community teams, or the shift to more specialised services organised by diagnosis (as opposed to geographical catchment areas). Without this, we are still left somewhat in the dark as to how, specifically, the service reconfiguration failed to guarantee high quality patient care; in other words, it is hard to learn from our mistakes if we do not understand where we went wrong.
Implications for practice
This paper helps address and evidence what you perhaps suspected: reorganisation (or, if not done well, what the authors label “redisorganisation”) and decreased continuity of care are associated with worse routine clinical outcomes in those with schizophrenia. Although we may all be tempted to reminisce about the good old days of the NHS, we must recognise that change (seeking improvement) is inevitable in healthcare. Perhaps, instead of such reflections, we must focus on capturing what is good, effective, and even “liked” in a given system. If we fail to do so, we cannot be surprised when those elements are changed or shut down.
In terms of change, we are entering interesting times, including the ambitious strategy set out by the NHS Long Term Plan. Interestingly, some of this looks to be returning us to elements of the older style of care, with an increased emphasis on localism and place-based care within an overarching system of integrated care. Whether this will deliver what it promises remains to be seen (Tracy et al, 2019).
In terms of continuity of care, on the other hand, what we know is that service users and staff like it and, as the data in this paper support, it is clinically meaningful and important, although poorly measured nationally. Thus a shift is perhaps required in how we view this concept, placing it at the forefront of our health policy rather than considering it a luxury for our patients.
To tie this all together, and as a last reflection: if we are to embark on novel large scale changes in how we deliver care, we need to be better at measuring what we do and sharing learning from what is working, and what is not. That is the challenge for all of us.
Conflicts of interest
Derek Tracy is the clinical director of an NHS directorate that is currently undergoing a redesign into an integrated care system. He contributed to a forthcoming Royal College of Psychiatrists’ policy document on integrated care.
Macdonald A, Adamis D, Craig T, Murray R. (2019) Continuity of care and clinical outcomes in the community for people with severe mental illness (PDF). British Journal of Psychiatry 214, 273-278.
Gilburt H (2015). Mental Health Under Pressure (PDF). King’s Fund Briefing. King’s Fund.
Sanatinia R, Cowan V, Barnicot K, Zalewska K, Shiers D, Cooper J et al. (2016) Loss of relational continuity of care in schizophrenia; associations with patient satisfaction and quality of care. BJPsych Open 2: 318-22.
Burns T, Catty J, White S, Clement S, Ellis G, Jones IR, et al. (2009) Continuity of care in mental health: understanding and measuring a complex phenomenon (PDF). Psychol Med 39: 313-23.
This is helpful in focussing my thinking about the length of art therapy groups, which I offer for people with a diagnosis of psychosis.
“The authors collected electronic patient records from 5,552 individuals with schizophrenia or persistent delusional disorder diagnoses, from 2006 to 2016 in the South London and Maudsley NHS mental health trust. They calculated a yearly Modified Modified Continuity Index (MMCI)”
Though I’m hugely critical of psychiatry, medicine, and most scientific research in psychiatry, I am a huge proponent of science.
Even though I do not care much about the result of the study, I care very much about the research infrastructure.
I’m very happy that electronic patient records are used in this way to construct an index.
What I’d like to know is the following: is this kind of data collection and index construction done only for this one study, to be discarded until the next study chooses to address the same issue (for reproducibility, for instance) and reinvents the wheel? Or is it part of an effort to parse records continuously and construct a continuously available metric that will be reusable by authorities and researchers?
To me, the infrastructure of research ends up being more important than the research itself.
Check out the ComPaRe effort.
“ComPaRe is not designed to answer a single given question but to answer a big number of research questions.”
“A scientific comity composed of patients and researchers selects projects that can use ComPaRe data on the basis of their scientific rigor and their relevance to patients.”
Does the study mentioned in the post integrate itself into a wider research infrastructure, or not?