Robert D. Mootz, DC, Medical Director for the State of Washington Department of Labor and Industries, also penned an interesting review in JMPT titled “When Evidence and Practice Collide” (FULL TEXT) that sheds a lot of light on EBM issues.
Added to this is the complication that, by JAMA's own standards of evaluation, “between 18 and 68 percent of the 264 abstracts evaluated from major medical journals were inaccurate”. Why is this a problem? Meta-analysis starts with a review of potential materials…and what’s initially reviewed is the ABSTRACT, not the full-text article.
Lastly, it has been stated by the editor of BMJ (the prestigious British Medical Journal) that “only about 15% of medical interventions are supported by solid scientific evidence”.
That’s all old news. What’s current?
This is from a more recent (2007) BMJ survey: Of around 2500 (medical) treatments reviewed, 13% were rated as beneficial, 23% as likely to be beneficial, 8% as a trade-off between benefits and harms, 6% as unlikely to be beneficial, 4% as likely to be ineffective or harmful, and 46%, the largest proportion, as of unknown effectiveness (see figure 1).
None of this is an excuse for us to be negligent in developing clinical guidelines and evidence-based practice parameters, but excuse me while I ask: “Why are you expecting US to maintain a higher standard than you do?”