After a brief sojourn to an ivory tower that lies at the heart of the Elven forest, I have recently ventured out, blinking in the sunlight, and made my way back to the coal-face of clinical practice. At this coal-face there are things called ‘guidelines’ and ‘protocols’ which I am expected to learn and follow…
Evidence-based guidelines represent a form of ‘evidence-based practice’; defined 20 years ago as the integration of clinical experience, patient preference and research findings (Sackett et al, 1996). Guidelines, such as those published by NICE, provide a series of ‘quality statements’ that are then enacted in clinical practice. At least that’s the theory.
But, through their publication, guidelines implicitly make a claim regarding their own evidence base; in other words, we assume there is evidence for the production of evidence-based guidelines. This assumption, as explored by Girlanda et al (2016) in a recent systematic review, leads to the use of guidelines as a ‘quality standard’ against which to measure the performance of organisations and clinical teams, with the further implicit assumption that following such guidance will automatically bring patient benefit. But evidence-based practice encourages the critical appraisal of assumptions, and so exploring the role of ‘guidance’ in the enactment of clinical practice becomes pertinent.
The authors therefore set out to explore two questions:
- What is the impact of guideline implementation on provider performance?
- What is the impact on patient outcomes?
The authors conducted a systematic review seeking to identify studies that recruited participants from specialist mental health services and that were:
- Randomised Controlled Trials (RCTs),
- Controlled Clinical Trials, or
- Before-and-after trials.
Trials were sought that considered either active or passive guidance implementation strategies and ultimately the authors made two comparisons:
- Guidance implementation versus treatment as usual
- Guidance implementation strategy A versus Guidance implementation strategy B
The authors defined the primary outcome for their review as ‘provider performance’; how this was measured varied between the identified studies, but it generally related to adherence to the implemented guideline in clinical practice. The secondary outcome was defined as ‘patient outcomes’; specified as a change in psychopathology, as measured on a validated scale.
Two researchers worked together to screen search results for inclusion, comparing their findings and resolving disputes through discussion, or through the involvement of a third researcher. Study quality was judged according to standard criteria and rated as Poor, Fair or Good. Outcome data were extracted from the primary papers and presented in graphical form. Findings from RCTs were combined through meta-analysis.
The search strategy identified 1,750 initial hits, reduced to 82 on title and abstract screening. A further 63 were then excluded on the basis of full text review, leaving a final total of 19 studies; 6 RCTs addressing provider outcomes and 3 addressing patient outcomes were included in the meta-analysis. The majority of the included studies were assessed as being of ‘fair’ methodological quality.
The majority of the identified studies showed small to moderate effect sizes in favour of the implementation of guidelines in terms of provider and service user outcomes. However, these effects were often not statistically significant and some studies also showed no effect, or even a negative effect from guideline implementation.
In the meta-analysis the authors identified no statistically significant impact of guideline implementation on provider performance, but did identify a statistically significant effect on patient outcomes.
The authors conclude:
Guideline implementation does not seem to have an impact on provider performance, nonetheless it may influence patient outcomes positively.
Strengths, limitations and conclusions
This study clearly addresses an important area of discussion. Many quality standards judge healthcare providers according to their adherence to published guidelines, so the finding that attempts at guidance implementation do not necessarily lead to improved provider performance is significant. However, the finding is not necessarily surprising given the complexity of integrating new models of clinical care into practice. A great deal of research has been conducted in this area (e.g. Murray et al., 2010), identifying strategies to manage such implementation, much of which serves to demonstrate the difficulties inherent in the process.
A limitation of the current study is clearly the limited evidence base on which the authors had to draw, including the variations in findings between studies and the lack of high quality research in this area. Heterogeneity between study findings in this review was significant, limiting the strength of the conclusions that can be drawn from it.
Ultimately, it seems that we need more evidence to better understand the way in which we implement evidence-based practice into day-to-day clinical work. This question seems particularly pertinent at a time when questions are being asked regarding the role of mental health services in the support of ‘personal recovery’. The complexity of translating meaningful research findings into high quality practice is demonstrated in this review; where ‘patient benefit’ was demonstrated despite the absence of clear evidence for change in provider practice. As the authors comment:
This possibility highlights the conundrum of how clinical effects come about and what makes clinicians and interventions effective in everyday practice.
My own personal response to this is to call for more observational, ethnographic research in real world clinical scenarios, so that we can better understand the processes of clinical support and the translation of the ‘evidence base’ into the complex world of human interaction. To paraphrase David Pilgrim (2009): without ethnography we are only beginning to scratch the surface of what happens in real life clinical scenarios.
Girlanda F, Fiedler I, Becker T, Barbui C, Koesters M. (2016) The evidence-practice gap in specialist mental healthcare: systematic review and meta-analysis of guideline implementation studies. The British Journal of Psychiatry, 1–7. http://doi.org/10.1192/bjp.bp.115.179093
Murray E, Treweek S, Pope C, MacFarlane A, Ballini L, Dowrick C et al. (2010) Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Medicine, 8(63). http://doi.org/10.1186/1741-7015-8-63
Pilgrim D. (2009). Recovery From Mental Health Problems: Scratching The Surface Without Ethnography. Journal of Social Work Practice, 23(4), 475–487. http://doi.org/10.1080/02650530903375033
Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. (1996) Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023), 71–72.