John Ioannidis might be right!
In a recent review, the authors challenge the foundation of evidence-based practice (EBP), specifically on the grounds that the vast majority of research evidence is of poor quality (Kane et al., 2016).
They analyse the 10 most recent systematic reviews of interventions published in each of 4 major journals (Annals of Internal Medicine, The BMJ, JAMA and Pediatrics). The sample is supplemented with a further 10 recent reviews issued in reports from the Cochrane Collaboration and 16 from the Evidence-based Practice Center (EPC) program.
The authors go on to report on the quality of evidence of 76 included papers; however, adding up the numbers above gives a total of only 66. The authors don’t reference any of the included papers, so it’s not clear where the other 10 came from.
The authors extracted the reported quality of evidence (QOE) score assigned to each intervention/outcome pair and categorised them by intervention type and quality level.
Of the 76 reviews, 34 did not use a systematic quality of evidence rating scheme. From the remaining 42 reviews, a total of 1,472 outcomes linked to a specific intervention were abstracted.
In the first paragraph of the results, the authors state that ‘of the studies that rated QOE, 39 used the methods endorsed…and 13 used GRADE’ (Grading of Recommendations Assessment, Development, and Evaluation). These figures should clearly be 39% and 13% respectively.
Of the 1,472 outcomes, 1,039 included observational studies and 433 did not.
The strength of evidence (SOE) rating was moderate to high for 13.7% of outcomes where observational studies were included and 20.8% where observational studies were not included (p<0.01). The bottom line is that the SOE ratings for the vast majority of outcomes in both groups were low or insufficient (86% and 79% respectively, p<0.01).
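The reported figures can be checked for internal consistency with a few lines of arithmetic. The sketch below (a minimal check, not the authors’ analysis; the group labels are our own) verifies that the two outcome counts sum to 1,472 and that the low/insufficient percentages are simply the complements of the moderate/high ones, rounded down in the text:

```python
# Sanity check of the counts and percentages reported by Kane et al. (2016).
# Group names are illustrative labels, not taken from the paper.

outcome_counts = {
    "with_observational": 1039,
    "without_observational": 433,
}
# The two groups should account for all 1,472 abstracted outcomes.
assert sum(outcome_counts.values()) == 1472

# Reported share of outcomes rated moderate-to-high SOE in each group.
moderate_or_high_pct = {
    "with_observational": 13.7,
    "without_observational": 20.8,
}

for group, pct in moderate_or_high_pct.items():
    # Every outcome not rated moderate/high was rated low or insufficient.
    low_or_insufficient = 100 - pct
    print(f"{group}: {pct}% moderate/high, {low_or_insufficient:.1f}% low/insufficient")
```

This prints 86.3% and 79.2%, consistent with the rounded ‘86% and 79%’ quoted above.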
Meta-analysed interventions were less likely to have a high or moderate QOE rating.
The authors conclude:
Claiming that clinical practice is evidence-based is far from justified.
This paper raises the important issue of how prevalent poor quality of evidence, and poor reporting of that quality, is within systematic reviews. That matters: the utility of science to inform clinical practice depends on quality evidence.
The authors did not review the quality of the included papers; instead, they relied on the reported quality of each paper. It would have been more impactful had they also compared the two. Furthermore, this distinction gets lost throughout the paper, which can misrepresent its purpose and impact.
Additionally, there is a disconnect between the findings and the narrative of the discussion. A more direct connection could have been made by discussing the implications of the findings and how the data brings us closer to solving the problems at hand.
Our thanks to Rachel Playforth, Sadhia Khan, Cynthia Kroeger, Dan Mayer, Paul Dijkstra and Gerd Antes who worked together at our Making #EvidenceLive workshop on 21st June to produce this blog.
Kane RL, Butler M, Ng W. (2016) Examining the quality of evidence to support the effectiveness of interventions: an analysis of systematic reviews. BMJ Open 6(5): e011051. doi:10.1136/bmjopen-2016-011051