Most research is flawed and many findings are false #EvidenceLive


John Ioannidis might be right!

In a recent review, the authors challenge the foundation of evidence-based practice (EBP), specifically on the grounds that the vast majority of research evidence is bad (Kane et al, 2016).


They analyse the 10 most recent systematic reviews of interventions published in 4 major journals (Annals of Internal Medicine, The BMJ, JAMA and Pediatrics). The sample is supplemented with a further 10 recent reviews issued in reports from the Cochrane Collaboration and 16 from the Evidence-based Practice Center (EPC).

The authors go on to report the quality of evidence for 76 included papers; however, adding up the numbers above gives a total of only 66. The authors do not reference any of the included papers, so it is not clear where the other 10 came from.

The authors extracted the reported quality of evidence (QOE) score assigned to each intervention/outcome pair and categorised them by intervention type and quality level.


Of the 76 reviews, 34 did not use a systematic quality of evidence rating scheme. From the remaining 42 reviews, a total of 1,472 outcomes linked to a specific intervention were abstracted.

In paragraph one the authors state that ‘of the studies that rated QOE, 39 used the methods endorsed…and 13 used GRADE’ (Grading of Recommendations Assessment, Development, and Evaluation Working Group). These figures should clearly be 39% and 13% respectively.

Of the 1,472 outcomes, the evidence base for 1,039 included observational studies and for 433 did not.

The strength of evidence (SOE) rating was moderate to high for 13.7% of outcomes where observational studies were included and 20.8% where observational studies were not included (p&lt;0.01). The bottom line is that the SOE rating for the vast majority of outcomes in both groups was low or insufficient (86% and 79% respectively, p&lt;0.01).

Meta-analysed interventions were less likely to have a high or moderate QOE rating.

94% of psychosocial interventions were rated as low or insufficient.


The authors conclude:

Claiming that clinical practice is evidence-based is far from justified.


This paper raises the important, and undeniable, issue of how prevalent poor quality-of-evidence reporting is within systematic reviews. The utility of science to inform clinical practice depends on high-quality evidence.


The authors did not themselves appraise the quality of the included papers; instead, they relied on the quality ratings reported within each review. It would have been more impactful had they also compared their own appraisals with the reported ratings. Furthermore, this distinction gets lost throughout the paper, which risks misrepresenting its purpose and impact.

Additionally, there is a disconnect between the findings and the narrative of the discussion. A more direct connection could have been made by discussing the implications of the findings further and how the data bring us closer to a solution to the problems at hand.

How confident can we be that our practice is evidence-based if the quality of published evidence remains so patchy?


Our thanks to Rachel Playforth, Sadhia Khan, Cynthia Kroeger, Dan Mayer, Paul Dijkstra and Gerd Antes who worked together at our Making #EvidenceLive workshop on 21st June to produce this blog.


Primary paper

Kane RL, Butler M, Ng W. (2016) Examining the quality of evidence to support the effectiveness of interventions: an analysis of systematic reviews. BMJ Open 2016;6(5):e011051. doi:10.1136/bmjopen-2016-011051


Paul Dijkstra

Dr. Paul Dijkstra is the Director of Medical Education at Aspetar Hospital, where he also works as a Specialist Sport and Exercise Medicine Physician. He has extensive experience in elite sport as the UK Athletics Chief Medical Officer to the Beijing and London Olympic Games and Headquarters Doctor for Team England at the 2014 Glasgow Commonwealth Games. He was the Local Organizing Committee Chief Medical Officer to the 12th FINA World Swimming Championships (25m) in Doha, 2014. He chaired the subcommittee on physical activity guidelines for medical conditions as a member of the Technical Advisory Group to the Qatar National Committee on Nutrition and Physical Activity, which developed the physical activity guidelines for the State of Qatar in 2014. He is pursuing further studies in Evidence-based Health Care as a part-time DPhil student at the University of Oxford.
