This blog reviews the first paper in a series of talking points written by “Survivor Researchers” (service users who do research) and sponsored by the McPin Foundation: “Randomised controlled trials: The straitjacket of mental health research? (PDF)” (Faulkner, 2015).
The methodology of the paper suggests a community action approach; however, the individual and collective voices of mental health service users from the McPin Foundation are not heard, other than that of the author, who represents them and writes the paper from what she describes as a survivor/researcher perspective. It is the first in a series, and later Talking Points may include more research with the participants.
So this blog is different from the usual Mental Elf fare: here we discuss the methods and conclusions used to deliver the message. The most effective and meaningful outreach is outreach grounded in accuracy, because accuracy is what mobilises people within and beyond the community.
The questions this paper wanted to raise were:
- Can mental health and distress be measured meaningfully?
- Are there some interventions for which the RCT model is simply inappropriate?
- What outcomes and outcome measures do service users value?
- How can we raise the profile of experiential knowledge?
- How can we work to ensure that different sources of knowledge and evidence are taken into account in mental health research?
Our blog will focus on the fifth question, because without addressing it the other questions lose impact.
Mental health and community action: A great start
We are cheering for this community; we admire their initiative and can only hope that when faced with a mental health issue they could be in our corner. This group did more with this talking points paper than many researchers will do in a lifetime. They held a press conference for the paper in London on 8 Oct 2015 with speakers, ran a social media discussion (which you can follow at #RCTdebate) and worked as a team to reach out. There were some glitches along the way, but they dared to share their vulnerability through the power of community action. The graphics in the talking points are beautiful. When the personal stakes are high it takes courage to communicate. This is a great start.
Mental health issues are likely to touch everyone within a lifetime
Those with mental health issues are numerous and underserved. At least one third of people worldwide will take prescription medicines to resolve mental health issues (Elliott, 2010). Medications and interventions used to treat mental health conditions often bring unwanted side effects, yet there is not much help available for those who experience them. This is also worrisome for friends and family who just want to see their loved ones have a normal and happy life (Price et al, 2015; Evans et al, 2008).
What’s more, mental health research remains criminally underfunded with approximately £9.75 invested in research per UK person affected by mental illness; over 100 times less than the amount spent on cancer research per patient (£1,571) (Join MQ, 2015).
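As a quick sanity check on the “over 100 times less” claim, the two per-patient figures quoted above can be compared directly (a trivial sketch; the numbers are taken from the MQ report as cited, and the variable names are ours):

```python
# Research funding per person affected, as quoted from the MQ landscape report
mental_health_per_person = 9.75   # GBP per UK person affected by mental illness
cancer_per_patient = 1571.0       # GBP per cancer patient

ratio = cancer_per_patient / mental_health_per_person
print(f"Cancer research receives about {ratio:.0f}x more per patient")
# prints: Cancer research receives about 161x more per patient
```

The ratio comes out at roughly 161, consistent with the “over 100 times” figure in the text.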
The Talking Points paper states:
Talking point papers give people with lived experience the opportunity to discuss and debate under-discussed or particularly difficult issues in mental health research. We hope that these papers, and the discussion around them, will aid us in our mission to ‘transform mental health research’.
We agree with the principle of user-led research, and accept that users have for too long had research done “to” them, rather than “with” or “by” them. There is a broad consensus across many areas of research that we need more and better active participation in research by service users. However, it’s important not to throw the baby out with the bathwater.
Criticisms of RCTs and Evidence-Based Medicine
The McPin paper includes some important criticisms and limitations of RCTs, and of Evidence-Based Medicine (EBM) generally. Let’s explore these a little further:
EBM is not just about RCTs
At the root of evidence-based medicine is the often-quoted hierarchy of evidence, which places RCTs, and systematic reviews of RCTs at the top and the views of clinicians and patients at the bottom.
– Alison Faulkner, 2015
This is misleading. It confuses a tool created to help people practise EBM (the “levels of evidence”) with the totality of evidence-based practice. The creators of these tools were explicit that the hierarchy should not be used in a deterministic way.
The “hierarchy” represents the “risk of bias” attached to different study types. It is intended to aid those searching for evidence to identify the types of study that are likely to yield unbiased results. The type of study is dictated by the question we are asking. So, “level 1 evidence” for a question about patient experiences is likely to be a qualitative study.
EBM is (or should be) all about patient values and preferences
In fact, EBM stresses that “evidence” must be integrated with “clinical expertise” and “patient values and preferences”.
It is true that in the past, clinical expertise was all too often out of date or did not take adequate account of the findings of rigorous research. For this reason, EBM has focused on improving how we produce evidence and helping people learn how to make sense of it.
But EBM is not cookbook medicine, slavishly following the dictates of an evidence hierarchy; put another way, if it is cookbook medicine, it is not EBM (Sackett et al, 1996).
EBM urges clinicians to “particularize” evidence to the individual context. Decisions should be driven by patients’ values and preferences and informed by the right evidence, which is not always a randomised trial (Price et al, 2015). In fact, opponents of cookbook medicine “will find the advocates of evidence based medicine joining them at the barricades.” (Sackett et al, 1996)
Marginalisation of patient values and of non-RCT research represents a failure to properly practice EBM. It is not a failure of EBM per se.
So we agree that RCTs are limited in how they can address the lived experience. For such questions, qualitative, patient-led approaches are best.
That said, we cannot turn our backs on RCTs, just as we cannot turn our backs on science. RCTs remain the best method for determining the effects of treatments on a population (Evans et al, 2008). You can download the free Testing Treatments e-book to learn more about this topic.
RCTs are but one tool in the box, and we need them all. Choosing the right tool can mean the difference between quality of life and hell on earth for service users, non-service users, their families and loved ones.
To do an RCT is not enough. To be effective, trials need to:
- Ask the right questions
- Choose the right study type and population for the question asked
- Embrace a common agreed understanding about risks and hazards
- Report clearly on the research methods used
- Clearly describe the implementation of complex interventions
- Consider the application of results in daily life
- Deal with the problem of bias in the conduct of the trial
- Recognise limitations in individual cases and social dimensions
To get these factors right, researchers need to work in partnership with service users at every point. The McPin group is right to draw attention to inadequacies in the way this is done (or not done).
Many EBM proponents are active in improving the ways in which outcomes that are important to patients are measured.
All research must be ethical. RCTs should not be carried out unless there is genuine uncertainty about the benefits or harms of a treatment.
It is true that, in a trial that establishes a treatment is effective, the “control” patients have been harmed because they were denied the treatment. This harm must be balanced against the much greater harm that can befall others by continuing to provide treatments that may be ineffective, or even harmful.
In many trials, if there is benefit, those that were in the control group will also be offered the treatment at no expense once the trial is completed.
So there is a moral imperative, where we encounter genuine uncertainty, to conduct appropriate research to improve our understanding, or else we condemn future generations to the same avoidable harms.
We should not fall into the trap of believing that what we do cannot possibly cause harm. History is littered with examples of untested theories that have harmed patients. It’s important to remember that any treatment powerful enough to have a good effect is also powerful enough to have a bad effect, and this includes talking treatments (Langford and Laws, 2014).
We would also add the ethical imperative that all studies should be published. Anything else represents a betrayal of the trust that patients and their carers put in the research enterprise.
EBM stresses the particularisation of evidence to an individual’s context, not just its generalisability:
- How much benefit can we expect for this particular patient?
- Does the treatment fulfill the patient’s values and preferences for care?
Patients are the most important part of evidence-based medicine, and finding the current best evidence will deliver the best results for them.
RCTs can be biased, of course. We have tools to help us determine how likely they are to be affected by bias. However, they provide vital information for patients, helping them to understand how much benefit they might expect from a treatment and to make informed choices about their care. All research needs to provide explicit, transparent accounts of its methods so that others can judge its validity, importance and applicability.
EBM and research transparency
The pharmaceutical industry does not have a history of transparency about harms or failed trials. Much research on psychiatric drugs that caused harm was not registered and its results were hidden, remaining unreported and unpublished unless they were favourable to industry (Goldacre, 2012). Rightly, patients are angry about the avoidable harm they suffered as a result.
Evidence-based medicine initiatives have done much to improve trial transparency, such as through the AllTrials movement (Gøtzsche, 2011) and through continued efforts to improve the conduct and reporting of research. Once again, the EBM community is on the same side as the service users.
In the nineteenth century health was transformed by clean, clear water. In the twenty-first century, health will be transformed by clean, clear knowledge.
– Sir Muir Gray, Director, UK NHS National Knowledge Service and NHS Chief Knowledge Officer
Many of the criticisms levelled at RCTs and EBM are long standing problems in medicine generally
These problems are not intrinsic to the method of randomised trials or the EBM philosophy of evidence; nevertheless, they are genuine problems that undermine the evidence that randomised trials provide for decision-making.
– Pearce, Raman and Turner, 2015
With around half of published research likely to be affected by bias, the UK National Institute for Health Research rightly prioritises research types that are less likely to be affected by bias.
Other types of research (e.g. local service-level audits) can and should be funded, and all studies should be registered and published.
Will service user researchers and survivor researchers join the EBM collective?
Until recently, the voice of survivors was not heard, and many individuals live as survivors harmed by the medicine and research they trusted to heal them. Medicine has real issues we can tackle together:
- Inconsistency in listening to or engaging with patients
- Failure to address what matters to patients
- Poorly defined impact and outcomes
- Unclear description of interventions, making replication difficult
- Under-reporting of adverse events and harms
- Failure to provide balanced information about the benefits and harms of treatments
- Scant representation of patient experiences in policy, guidance and reviews.
Thought-provoking resources about patients in research
We encourage interested readers to check out some of these openly accessible materials on this topic, which are readable and practical. There is much common ground, such as the importance of patient values and preferences and the need for earlier, more extensive patient engagement in care decision-making and research.
- Brett J, Staniszewska S, Mockford C, Seers K, Herron-Marx S, Bayliss H. (2010) The PIRICOM Study: A systematic review of the conceptualisation, measurement, impact and outcomes of patients and public involvement in health and social care research, 1–292. http://www.ukcrc.org/wp-content/uploads/2014/03/Piricom+Review+Final+2010.pdf
- Crowe S, Fenton M, Hall M, Cowan K, Chalmers I. (2015) Patients’, clinicians’ and the research communities’ priorities for treatment research: there is an important mismatch. Research Involvement and Engagement, 1(1), 2. doi:10.1186/s40900-015-0003-x http://www.researchinvolvement.com/content/1/1/2/
- Price A. (2014) From Junk Science Pawn to Public-Led Trials. J Bahrain Med Soc, 25(2), 113–115. Retrieved from https://www.researchgate.net/publication/265165760_From_Junk_Science_Pawn_to_Public-Led_Trials
- Snow R, Crocker JC, Crowe S. (2015) Missed opportunities for impact in patient and carer involvement: a mixed methods case study of research priority setting. Research Involvement and Engagement, 1(1), 7. doi:10.1186/s40900-015-0007-6 http://www.researchinvolvement.com/content/pdf/s40900-015-0007-6.pdf
RCTs and patient focus
Clinical trials have developed in multiple ways not covered in the McPin paper, such as:
- The running of N-of-1 trials, built around outcomes chosen by an individual patient, alongside the controlled trial (Shamseer, 2015)
- Participatory action research, where patients take part as co-producers in all parts of a trial
- Patient-initiated trials, where patients run the trial with mentoring experts for backup
- Patient organisations sponsoring research they endorse
- Patient and community reviewers taking active roles in deciding the fate of research funding applications.
The field is still emerging, which means there is plenty of room for improvement, but at least we have started bridging the gap between research and service users.
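The idea behind the first item above, an N-of-1 trial, can be illustrated with a minimal sketch (everything here is hypothetical and simulated, not drawn from the CENT guidance): one patient alternates randomised treatment and control periods, and an outcome they chose themselves, such as a daily mood score, is compared across their own periods.

```python
import random
import statistics

# Illustrative sketch only: a simplified N-of-1 design in which a single
# patient is randomised to alternating treatment and control periods.
random.seed(42)

periods = 8  # four treatment and four control blocks, order randomised
assignments = ["treatment"] * (periods // 2) + ["control"] * (periods // 2)
random.shuffle(assignments)

# Hypothetical outcome scores the patient records each period (0-10 scale);
# we simulate a modest benefit of treatment for the sake of the example.
scores = {"treatment": [], "control": []}
for arm in assignments:
    base = 5.0 + (1.5 if arm == "treatment" else 0.0)
    scores[arm].append(base + random.gauss(0, 1))

diff = statistics.mean(scores["treatment"]) - statistics.mean(scores["control"])
print(f"Mean difference (treatment - control): {diff:.2f}")
```

A real N-of-1 trial would add blinding, washout periods between blocks, and a proper statistical analysis; the point of the sketch is only that the comparison happens within one patient, against that patient’s own chosen outcome.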
This McPin paper has accomplished the purpose of starting the conversation, sharing issues and paving the way to transform mental health. Alison Faulkner, who wrote the paper, shared her perceptions, vulnerability and truth as she was led to understand it. This is an important first step in communication to build bridges and take down walls from all sides so that we listen to hear and not just to respond.
The road ahead
The job of the human being [in the digital age] is to become skilled at locating relevant valid data for their needs. In the sphere of medicine, the required skill is to be able to relate the knowledge generated by the study of groups of patients or populations to that lonely and anxious individual who has come to seek help
– Sir Muir Gray, 2001
The communications presented need to be not only heard but acted on, so that we can work together to dispel inaccuracies and make an impact on research and practice. As researchers and service users dare to communicate and look past assumptions, real change will open doors to a brighter future for mental health.
References
Faulkner A. (2015) Talking Point Papers: Randomised controlled trials: The straitjacket of mental health research? (PDF). McPin Foundation, October 2015.
Elliott C. (2010) White Coat, Black Hat: Adventures on the Dark Side of Medicine. Beacon Press, 2010.
Join MQ (2015) Mental Health Research Funding Landscape Report [Internet]. MQ: Transforming Mental Health. 2015 [cited 2015 Oct 21]. Available from: http://www.joinmq.org/pages/mental-health-research-funding-landscape-report
Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. (1996) Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71–2.
Price AI, Djulbegovic B, Biswas R, Chatterjee P. (2015) Evidence-based medicine meets person centred care: a collaborative perspective on the relationship. J Eval Clin Pract [Internet]. 2015. Available from: http://doi.wiley.com/10.1111/jep.12434
Evans I, Thornton H, Chalmers I. (2008) Testing Treatments: Better research for better healthcare. London: Pinter and Martin; 2008. Available from: www.testingtreatments.org
Langford A, Laws K. (2014) Psychotherapy trials should report on the side effects of treatment [Internet]. The Mental Elf. 2014 [cited 2015 Oct 21]. Available from: https://www.nationalelfservice.net/treatment/psychotherapy/psychotherapy-trials-should-report-the-side-effects-of-treatment/
Goldacre B. (2012) Bad Pharma: How drug companies mislead doctors and harm patients. 1st ed. Fourth Estate; 2012.
Gøtzsche PC. (2011) Why we need easy access to all data from all clinical trials and how to accomplish it. Trials. 2011;12:249.
Pearce W, Raman S, Turner A. (2015) Randomised trials in context: practical problems and social aspects of evidence-based medicine and policy. Trials [Internet]. 2015;16(1):394. Available from: http://www.trialsjournal.com/content/16/1/394
Shamseer L, Sampson M, Bukutu C, Schmid CH, Nikles J, Tate R, et al. (2015) CONSORT extension for reporting N-of-1 trials (CENT) 2015: Explanation and elaboration. BMJ [Internet]. 2015;350:h1793. Available from: http://www.bmj.com/cgi/doi/10.1136/bmj.h1793