Evaluating integration: instruments to track the complexity

Policy expectations regarding the benefits of integration continue to run high. Beneath the positive experiences that many service users and staff have of receiving or working in a service promoting integrated care lies a considerable range of thorny, contested and uncharted issues. Amongst these is the challenge of how best to evaluate a concept whose definition is often vague and slippery, and which is applied to many contexts, people and services.

In some ways integration is no different from other large and/or complex public service interventions. It shares common issues: a lack of clarity regarding outcomes; difficulties in understanding what would have happened without the intervention; attributing impacts to the intervention in question rather than to other influences; and securing sufficient resources to enable timely gathering and analysis of data.

However, integration has some additional complexities – the need to work across organisational interests and cultures; to engage professionals with differing views of evidence; and to draw on performance data systems that do not easily communicate with each other.

Budding evaluators do not always have to cut a new path, though, as they are following in the tracks of previous studies that have sought to observe, analyse and understand integration in practice. In some cases, these studies have developed validated instruments which identify key elements of the integration process and/or outcomes, together with relevant methods and measures for these elements.

These instruments present an opportunity for evaluators (and indeed those responsible for leading integration initiatives) to deploy ready-made tools in their work. This potentially saves the time that would otherwise be spent developing new frameworks and measures, and could also generate more objective and comparable data.

There are, though, practical reasons why these instruments have not been drawn upon as much as one would expect: in particular, knowing which ones are available, what type of integration they explore, which elements they focus on, and what contexts they are relevant to. Furthermore, it is often hard to get a sense of their robustness, as they may not have been extensively tested following their initial development.

Lyngsø et al (2014) seek to help overcome these practical barriers by providing an overview of instruments designed to measure aspects of integration.

The systematic review aims to provide an overview of instruments seeking to measure aspects of integration.

Methods

This systematic review includes studies focusing on integration and

  • structure (e.g. adequacy of facilities, necessary equipment, qualifications of staff),
  • process (work routines, communication between staff members, user involvement etc) and
  • culture (shared beliefs, norms and values).

A systematic search of relevant databases (including PubMed, CINAHL, PsycINFO, the Cochrane Library and Web of Science) was undertaken to identify articles published between 1980 and 2011. Following full review and application of inclusion and exclusion criteria, 23 instruments were selected for inclusion.

In the article the instruments are presented in tabular form, with concise notes on key dimensions such as the original research objective, the integration elements of interest, the process of the instrument’s development and the target populations.

Results

No instrument covered all organisational elements, but nearly all of the included studies defined structural and process elements.

The following eight organisational elements were found within the 23 measurement instruments:

  • ‘IT/information transfer/communication and access
  • Organisational culture and leadership
  • Commitments and incentives to deliver integrated care
  • Clinical care (teams, clinical guidelines and protocols)
  • Education
  • Financial incentives
  • Patient focus
  • Quality improvement/performance measurement.’

The three elements most commonly highlighted for measurement were:

  • IT/information transfer,
  • commitment and incentives, and
  • clinical care (including multidisciplinary teams, case management and care guidelines).

The authors provide core references to enable the reader to obtain further details if the instrument in question is of potential interest and relevance.

IT and information transfer was one of the three elements most commonly highlighted for measurement.

Conclusions

The authors conclude that:

This review did not identify any measurement instrument covering all aspects of integrated care. Further, a lack of uniform use of the eight organisational elements across the studies was prevalent. It is uncertain whether development of a single ‘all-inclusive’ model for assessing integrated care is desirable. We emphasise the continuing need for validated instruments embedded in theoretical contexts.

Strengths and limitations

A systematic review of instruments to measure integrated care may not sound like the most exciting read for those with only a passing interest in such matters. However, for anyone who works in an integrated setting and wants to use validated measurement tools to review their service, or for independent researchers of such services, the article is highly recommended.

The search and analysis processes are robust, the key concepts of integration are defined, and the instruments are summarised in sufficient detail to enable informed selection of instruments for further exploration.

From a social care perspective, the major drawback is that the article and the instruments are principally focussed on healthcare services, and many of these are in a non-UK setting. This means that the contexts and services that they are designed to study may not be of primary relevance to a social care manager or researcher, and the terminology may not be familiar or comfortable to social care stakeholders.

More fundamentally (and it is not possible to answer this without going back to the original studies), the research teams and participants involved in developing these instruments may have largely been healthcare patients, clinicians and other healthcare stakeholders. If so, the interests and perspectives of social care stakeholders, including service users and carers, may not be represented.

The authors did not find a measurement instrument covering all aspects of integrated care, and were uncertain whether a single ‘all-inclusive’ model for assessing integrated care is desirable.

Summing up

In summary, this is a worthwhile article for anyone looking to undertake evaluation and research into integration, but one that could do with replicating for a UK social care-centric (or at least social care-inclusive) audience.

Link

Lyngsø, A. M., Godtfredsen, N. S., Høst, D., & Frølich, A. (2014). Instruments to assess integrated care: A systematic review. International Journal of Integrated Care, Jul–Sep 2014. URN:NBN:NL:UI:10-1-114794. [Abstract]

Robin Miller

Robin is a Senior Fellow and Director of Consultancy at HSMC, University of Birmingham. He is the social care lead within the Chronic Disease Theme of the West Midlands Collaborations for Leadership in Applied Health Research & Care, and a Fellow of the School for Social Care Research. His research interests build on his practical experiences in the field, and centre on commissioning and management of integrated services, the role and impact of the Third Sector, and personalisation. Robin is Co-Editor of the Journal of Integrated Care. Outside of his University role, Robin is a non-executive director on the Board of Trident Social Investment Group and the chair of the board of trustees of Trident Reach.
