The Need for Evidence-based Medicine


FROM: J Royal Society of Medicine 1995 (Nov); 88: 620–624 ~ FULL TEXT

David L Sackett FRSC MD MSc Epid FRCPC, William M C Rosenberg MA MB BS DPhil MRCP

Nuffield Department of Clinical Medicine,
Level 5, The Oxford Radcliffe NHS Trust,
Headley Way, Headington,
Oxford OX3 3DU, England


As physicians, whether serving individual patients or populations, we have always sought to base our decisions and actions on the best possible evidence. The ascendancy of the randomized trial heralded a fundamental shift in the way that we establish the clinical bases for diagnosis, prognosis, and therapeutics. The ability to track down, critically appraise (for its validity and usefulness), and incorporate this rapidly growing body of evidence into one's clinical practice has been named 'evidence-based medicine' [5, 6] (EBM).



From the Full-Text Article:

Introduction

In selecting a treatment, it was previously considered sufficient to understand the pathophysiological process in a disorder and to prescribe drugs or other treatments that had been shown to interrupt or otherwise modify this process. Thus, the observation that patients with ventricular ectopic beats following myocardial infarction were at high risk of sudden death [1], coupled with the demonstration that these extra beats could be suppressed by specific drugs, formed a sufficient rationale for the widespread prescription of these drugs to post-infarction patients with unstable cardiac rhythms. [2] However, subsequent randomized controlled trials examined outcomes, not processes, and showed that several of these drugs increase, rather than decrease, the risk of death in such patients, and their routine use is now strongly discouraged. [3] Other randomized trials (their total number now between 250,000 and 1,000,000!) have confirmed the efficacy of many treatments and documented the uselessness or harmfulness of many others. For example, of 226 manoeuvres that are carried out in obstetrics and childbirth, Chalmers et al. have documented sound evidence from randomized trials for almost a half (about 20% having been shown to be beneficial and almost 30% found to be of either doubtful value or harmful). [4]

Equally powerful methods have been developed and applied to validate the clinical history and physical examination, diagnostic tests, and prognostic markers. When performed in collaboration with seasoned clinicians, these methods have often made explicit the experts' implicit, non-verbal diagnostic, prognostic and therapeutic reasoning, making it possible for their trainees to replace mere mimicry with understanding, and avoiding the necessity for decades of experience as the only pathway to sound clinical judgement.

Given the extremely rapid growth of randomized trials and other rigorous clinical investigations, the issue is no longer how little of medical practice has a firm basis in such evidence; the issue today is how much of what is firmly based is actually applied in the front lines of patient care. For although we clinicians really do need to keep up to date with clinically-important information, direct observations suggest that we usually fail to do so. For example, a group of general physicians responded to a questionnaire by stating that they needed new and clinically-important information just once or twice a week, and met these needs by consulting their textbooks and journals. [7] However, the direct questioning of these same clinicians as they saw patients identified up to 16 needs for new, clinically-important information in just half a day, at a rate of about two questions for every three patients they saw (about half of their questions were related to therapeutics, and a quarter to diagnosis). As a net result, in a typical half-day of practice, four clinical decisions would have been altered if clinically-useful information about them had been available and employed.

However, only 30% of these information needs were met in the clinics and offices where the clinicians worked, and despite their earlier claim that they predominantly used texts and journals to gain this knowledge, direct observation again showed that most of it was obtained by asking colleagues. On further probing, these clinicians identified three barriers to obtaining clinically-important information: they lacked the time necessary for keeping up to date, their text books were out of date, and their journals were too disorganized to be useful. [7]

Unfortunately, there is growing evidence to support clinicians' claims that our texts are out of date, even when new. When Antman et al. compared the evidence accumulating from randomized trials and systematic reviews of treatments for myocardial infarction with recommendations from contemporaneous textbooks, they found that most texts were failing to recommend thrombolytic therapy, even for specific indications, six years after the first meta-analysis showed it to be efficacious. [8] Moreover, these same texts and reviews were persisting in recommending routine lignocaine prophylaxis for ventricular fibrillation, despite ever-stronger evidence that it was likely to be useless in lowering case-fatality.

What is the net effect of this constant but unfulfilled need for clinically-important new information? Unfortunately, it leads, on average, to progressive declines in our clinical competency following the completion of our formal training. When competency is measured by clinicians' knowledge of even the basics of the care of disorders like hypertension, it has been shown repeatedly that there is a statistically and clinically significant negative correlation between our knowledge of up to date care and the years that have elapsed since our graduation from medical school. [9, 10] Moreover, in one study of actual clinical behaviour, the decision to start antihypertensive drugs was more closely linked to the number of years since medical school graduation in the doctor than to the severity of target organ damage in the patient. [11]



Continuing Medical Education

It is clear from the foregoing that we need far readier access to clinically-important information. No wonder, then, that there is increasing interest in providing, and even requiring, continuing medical education (CME), continuing professional development, and the like. But when the same powerful strategy for determining the efficacy of a therapeutic regimen, the randomized controlled trial, has been used to test the efficacy of CME, the results have been sobering. A number of randomized trials have shown that traditional, instructional CME simply fails to modify our clinical performance and is ineffective in improving the health outcomes of our patients. [12]

For example, a group of us identified 18 conditions whose care, as documented in the clinical record, makes a difference in patient outcomes. [13] We then asked a random sample of general practitioners to rank them into a 'high-preference' set for which they really did want to receive CME, and a 'low-preference' set for which they really did not. Physicians with similar rankings were paired and randomized either into a control group whose CME was postponed for 18 months, or into an experimental group who received CME at once for the 'high-preference' set of conditions, but had to promise to study the CME we provided for their 'low-preference' set as well. The CME 'packages' were portable, available in both written and audio versions, had explicit objectives and several feedback tests, and included all elements of care that were necessary for improved patient outcomes. The clinical records of both control and experimental physicians were examined before and after the experimental ones received their CME, permitting us to determine its effects on the quality of patient care. The results were startling.

Although the knowledge of experimental physicians rose substantially after their CME, the effects on quality of care were both surprising and disappointing: for 'high-preference' conditions, quality of care rose about 6% (statistically significant, but of marginal clinical significance) both among the experimental physicians (who received CME about them) and among control physicians (who didn't). This led one wag on the research team to conclude: 'When you want CME, you don't need it.' By contrast, for the 'low-preference' conditions, quality of care rose by a statistically and clinically significant 10% among the experimental physicians, but fell slightly among control physicians ('CME only works when you don't want it.'). Finally, there were small declines in the quality of care provided for conditions that had been rated as neither high nor low preference ('CME does not cause general improvements in the quality of care.'). Thus, CME and other strategies for Continuing Professional Development that employ just instructional approaches do not address the problem of our declining clinical competence.



Evidence-based Medicine

Does anything work? Recent evidence suggests that three broad strategies based on the principles, strategies, and tactics of 'evidence-based medicine' (EBM) can work. By way of background, evidence-based medicine is a short-hand term for five linked ideas: first, that our clinical and other health care decisions should be based on the best patient- and population-based as well as laboratory-based evidence; second, that the problem determines the nature and source of evidence to be sought, rather than our habits, protocols or traditions; third, that identifying the best evidence calls for the integration of epidemiological and biostatistical ways of thinking with those derived from pathophysiology and our personal experience (examples include using likelihood ratios to increase the power of diagnostic information, considering inception cohorts in making prognoses, incorporating meta-analyses of randomized trials into decisions about therapy, and integrating odds ratios into judgements about iatrogenic disease); fourth, that the conclusions of this search and critical appraisal of evidence are worthwhile only if they are translated into actions that affect our patients; and fifth, that we should continuously evaluate our performance in applying these ideas.
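To make the third of these ideas concrete, the likelihood-ratio arithmetic mentioned above can be shown as a minimal sketch in Python; the pre-test probability and likelihood ratio used here are invented for illustration and do not come from this article:

    # Minimal sketch of the likelihood-ratio arithmetic described above.
    # The pre-test probability and the likelihood ratio below are invented,
    # illustrative numbers, not data taken from this article.

    def post_test_probability(pre_test_prob, likelihood_ratio):
        """Convert pre-test probability to odds, apply the likelihood ratio,
        and convert the post-test odds back to a probability."""
        pre_test_odds = pre_test_prob / (1.0 - pre_test_prob)
        post_test_odds = pre_test_odds * likelihood_ratio
        return post_test_odds / (1.0 + post_test_odds)

    # Example: a 20% pre-test probability and a positive result with LR+ = 8
    # raise the probability of disease to about 67%.
    print(round(post_test_probability(0.20, 8.0), 2))   # 0.67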

The practice of EBM, then, is a process of life-long, self-directed learning in which caring for our own patients creates the need for clinically-important information about diagnosis, prognosis, therapy, decision analysis, cost-utility analysis, and other clinical and health care issues, and in which we:

  1. convert these information needs into answerable questions

  2. track down, with maximum efficiency, the best evidence with which to answer them (whether from the clinical examination, the diagnostic laboratory, the published literature, or other sources)

  3. critically appraise that evidence for its validity (closeness to the truth) and usefulness (clinical applicability)

  4. apply the results of this appraisal in our clinical practice

  5. evaluate our performance

Recent developments and evaluations support the view that three EBM strategies can be successful in keeping us up to date. They consist of learning how to practise evidence-based medicine ourselves, seeking and applying evidence-based medical summaries produced by others, and accepting evidence-based practice protocols, developed by our colleagues and augmented by strategies that help us improve our clinical performance.



Learning evidence-based medicine

The first effective strategy requires that we learn how to become life-long, self-directed learners of evidence-based medicine (EBM) as described above. Developed at McMaster University in Canada, and adopted and adapted at many other institutions around the world, this method of mastering life-long learning skills and habits has been evaluated in two sorts of ways. First, in a short-term trial among clinical clerks nearing graduation from medical school, clerks who received EBM-oriented clinical tutorials showed substantial improvements in their ability to generate and properly defend correct diagnostic and management decisions, while control clerks who received traditional clinical tutorials actually made worse clinical decisions after than before their clerkships (they had become less critical of advice provided by authorities). [14] Moreover, when McMaster graduates of their self-directed, problem-based EBM curriculum were compared with other Canadian medical graduates on their knowledge of clinically-important advances in the detection, evaluation and management of hypertension, the other Canadian graduates exhibited the usual, progressive deterioration in this measure of clinical competence, but the McMaster graduates remained high, level, and up to date, even 15 years after graduation. [15] Other programmes have shown that we can master EBM skills after several years out in practice (e.g. through journal clubs or less traditional, active programmes of continuing professional development).



Seeking and applying evidence-based medical summaries generated by others

The second effective strategy applies to those of us who, although we may not be willing or able to keep up to date by learning evidence-based medicine ourselves, are willing to seek out and apply specific examples of EBM produced by others. This second approach requires that we are predisposed to act, and that we are willing to seek information on what to do, preferably in a compressed, summary format that is direct and practical (this strategy is also used by those of us who practice EBM, but is just one of our means of keeping up to date).

In the past, this second group of clinicians were at the mercy of the throw-away journals, drug 'detailers', and traditional review articles, all of which have been discredited. For example, the traditional review article, in which an 'expert' states opinions about the proper evaluation and management of a condition, supporting key conclusions with selected references, has been shown to be both non-reproducible and, as a scientific exercise, of low mean scientific quality. Oxman and Guyatt found that experts could not agree, even among themselves, about whether other experts who wrote review articles had: (i) conducted a competent search for relevant studies; (ii) generated a bias-free list of citations; (iii) appropriately judged the scientific quality of the cited articles; or (iv) appropriately synthesized their conclusions. Indeed, when these experts' own review articles were subjected to these same simple scientific principles, there was an inverse relationship between adherence to these standards and self-professed expertise (the correlation was -0.52, with an associated P-value of 0.004). [16]

Rather than rely on reviews of highly variable validity, clinicians seeking EBM have two new information sources at their disposal. First is a new type of journal of secondary publication that screens dozens of clinical journals for articles that are both relevant to practice and can pass critical appraisal quality filters, summarizes those that pass muster in 'more informative' abstracts, adds commentaries from seasoned clinicians, and introduces them with declarative titles that give the clinical 'bottom line'. For example, the ACP Journal Club, a publication of the American College of Physicians, screens up to 50 journals each month for articles on diagnosis, prognosis, therapy, aetiology, quality of care, and health economics that are both relevant to general physicians and adhere to rigorous methodological standards for patient-based research (if about therapy, was there random allocation of patients to treatments?; if about diagnosis, was there an independent, blind comparison with a 'gold standard'? [17]; if about prognosis, were patients assembled at an early and uniform point in their illness?). Each of those that pass muster (only about 13 per month!) occupies one page of the journal, and reader surveys have documented extraordinarily high ratings for its relevance and usefulness. Evidence-Based Medicine, a journal with a similar format but expanded to include surgery, obstetrics, paediatrics, and psychiatry, will be launched by the British Medical Journal Publications Group in 1995, jointly edited at McMaster University in Canada and at the Centre for Evidence-Based Medicine at the University of Oxford in the UK.

The second new information source for clinicians seeking EBM is even more systematic. It is an outgrowth of the scientific methods developed to combine (into overviews or 'meta-analyses') the growing numbers of randomized trials of the same or similar treatments for the same health condition. When properly carried out on as high a proportion as possible of all relevant trials (since MEDLINE misses about half the published trials, [17] detailed journal searching, often by hand, is required to avoid bias), these systematic reviews provide the most accurate and authoritative guides to therapy. The performance of systematic reviews of therapy is so logical a step in progress toward evidence-based health care that it has become the focus of a rapidly growing international group of clinicians, methodologists, and consumers who have formed the Cochrane Collaboration [18] (a thousand strong by the start of 1995, and doubling every 6 months). The systematic reviews that are beginning to flow from this unselfish collaboration, updated each time an important new trial is reported, are providing the highest levels of evidence ever achieved on the efficacy of preventive, therapeutic, and rehabilitative regimens. They will be published on computer diskette and compact disk, on the Internet, and in a variety of other forms (including the EBM journals of secondary publication).
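For readers unfamiliar with how trial results are combined, the following Python sketch shows one common approach, inverse-variance (fixed-effect) pooling of log odds ratios; the three trial results are invented numbers, and the method is offered only as an illustration, not as the procedure used in any particular Cochrane review:

    import math

    # Illustrative fixed-effect meta-analysis: pool trial results on the
    # log odds-ratio scale, weighting each trial by the inverse of its variance.
    # The (log_odds_ratio, standard_error) pairs below are invented numbers,
    # not data from any trial cited in this article.
    trials = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.30)]

    weights = [1.0 / se ** 2 for _, se in trials]          # weight = 1 / variance
    pooled_log_or = sum(w * lo for (lo, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # Report the pooled odds ratio with a 95% confidence interval.
    print("pooled OR = %.2f" % math.exp(pooled_log_or))
    print("95%% CI    = %.2f to %.2f" % (math.exp(pooled_log_or - 1.96 * pooled_se),
                                          math.exp(pooled_log_or + 1.96 * pooled_se)))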

Thus, busy clinicians seeking clinical 'bottom lines' will increasingly be able to eschew non-expert 'expert' reviews and self-serving commercial sources and find brief but valid summaries of best evidence on a growing array of clinical topics, appraised according to uniform, established principles. Moreover, when 'clinical guidelines' and other practice recommendations are based on this level of evidence (most to date are not), they become worth following.



Accepting evidence-based practice protocols developed by colleagues

Third, even when we, for whatever reason, fail both to learn evidence-based medicine and to seek out EBM summaries produced by others, we still can practise up-to-date medicine if we acknowledge the problems of becoming out of date, accept evidence-based practice protocols developed by our colleagues, and submit ourselves to some combination of the four strategies that have been proven (in randomized trials, of course!) to alter our clinical practice for the better [19]: first, receiving individualized audit and feedback about what we are doing right and wrong (the growing use of computers in clinical practice enhances the potential effectiveness of this strategy [20]); second, receiving advice from a respected teacher (who has learned EBM); third, being visited by a non-commercial 'detailer' (who is informing and encouraging us about specific evidence-based ways of caring for patients rather than exhorting us to prescribe specific drugs); and fourth, taking a 'mini-sabbatical' or preceptorship in a place where EBM is practised.

These strategies have been shown to be effective in helping us overcome at least some of the barriers imposed by both the lack of clinically important information and the social and professional context within which we practise medicine, and can help us move ourselves from opinion-based practice toward evidence-based medicine.

It is not news that medicine and all other health care are rapidly changing. ('The future is already here; it just isn't evenly distributed yet.') Consequently, the advocacy of yet another change, the adoption of EBM, risks making impossible demands on an already over-burdened profession and health care system. However, many of the other changes we face become easier to enact, not more difficult, through the adoption of EBM:

  1. with reductions in junior doctors' hours comes the need for greater efficiency, both on their part and on the part of the consultants they leave behind; EBM can help here by identifying which time- and resource-intensive manoeuvres should be dropped and which retained

  2. similarly, EBM helps us identify those clinical acts whose performance will meet the growing demands for increased quality, and will help with their appropriate purchasing and provision

  3. as more clinical care is provided by health care teams, EBM provides a common language through which we can communicate and rules of evidence by which we can agree on who will do what and to whom

  4. EBM employs identical strategies and tactics for clinical learning for both undergraduate and postgraduate education, including continuing education and professional development. Not only does this make for far easier (reinforced, 'spiral') learning; it also makes for far easier teaching and resource-development (funding and 'training the trainers'), since EBM approaches meet the recommendations now appearing from commissioning bodies and standing committees addressing the education of both future [21] and current [22] clinicians.

  5. Finally, evidence-based medicine provides us not only with the opportunity to remain up-to-date in our own and related clinical fields, but also with the scientific framework within which to identify and answer priority questions about the effectiveness of the entire range of clinical and other health care.


Acknowledgments

Warm thanks to Iain Chalmers, Muir Gray, Lelia Duley, Tony Hope, Nicholas Hicks, Martin Dawes, Ruairidh Milne, and Douglas Altman for their critiques of earlier versions of this essay.



References:

  1. Ruberman W, Weinblatt E, Goldberg JD, Frank CW, Shapiro S.
    Ventricular premature beats and mortality after myocardial infarction.
    N Engl J Med 1977;297:750-7

  2. Morganroth J, Bigger JT Jr, Anderson JL.
    Treatment of ventricular arrhythmia by United States cardiologists:
    a survey before the Cardiac Arrhythmia Suppression Trial results were available.
    Am J Cardiol 1990;65:40-8

  3. Echt DS, Liebson PR, Mitchell B, et al.
    Mortality and morbidity in patients receiving encainide, flecainide, or placebo:
    The Cardiac Arrhythmia Suppression Trial.
    N Engl J Med 1991;324:781-8

  4. Chalmers I, Enkin M, Keirse MJNC, eds.
    Effective Care in Pregnancy and Childbirth.
    Oxford: Oxford University Press, 1989: vol 2, pp 1471-6

  5. Evidence-Based Medicine Working Group.
    Evidence-based medicine: A new approach to teaching the practice of medicine.
    JAMA 1992;268:2420-5

  6. Sackett DL, Haynes RB, Guyatt GH, Tugwell P.
    Clinical Epidemiology: A Basic Science for Clinical Medicine, 2nd edn.
    Boston: Little, Brown, 1991

  7. Covell DG, Uman GC, Manning PR.
    Information needs in office practice: Are they being met?
    Ann Intern Med 1985;103:596-9

  8. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC.
    A comparison of results of meta-analyses of randomized control trials and recommendations
    of clinical experts.
    JAMA 1992;268:240-8

  9. Ramsey PG, Carline JD, Inui TS, et al.
    Changes over time in the knowledge base of practising internists.
    JAMA 1991;266:1103-7

  10. Evans CE, Haynes RB, Birkett NJ, et al.
    Does a mailed continuing education program improve clinician performance?
    Results of a randomized trial in antihypertensive care.
    JAMA 1986;255:501-4

  11. Sackett DL, Haynes RB, Taylor DW, Gibson ES, Roberts RS, Johnson AL.
    Clinical determinants of the decision to treat primary hypertension.
    Clin Res 1977;24:648

  12. Davis DA, Thomson MA, Oxman AD, Haynes RB.
    Evidence for the effectiveness of CME: A review of 50 randomized controlled trials.
    JAMA 1992;268:1111-7

  13. Sibley JC, Sackett DL, Neufeld V, Gerrard B, Rudnick KV, Fraser W.
    A randomized trial of continuing medical education.
    N Engl J Med 1982;306:511-5

  14. Bennett KJ, Sackett DL, Haynes RB, Neufeld VR.
    A controlled trial of teaching critical appraisal of the clinical literature
    to medical students.
    JAMA 1987;257:2451-4

  15. Shin JH, Haynes RB, Johnston ME.
    Effect of problem-based, self-directed undergraduate education on life-long learning.
    Can Med Assoc J 1993;148:969-76

  16. Oxman A, Guyatt GH.
    The science of reviewing research.
    Ann NY Acad Sci 1993;703:125-34

  17. Dickersin K, Sherer R, Lefebvre C.
    Identifying relevant studies for systematic reviews.
    BMJ 1994;309:1286-91

  18. Cochrane's legacy (Editorial).
    Lancet 1992;340:1131-3

  19. Davis DA, Thomson MA, Oxman AD, Haynes RB.
    Evidence for the effectiveness of CME. A review of 50 randomized controlled trials.
    JAMA 1992;268:1111-7

  20. Johnston ME, Langton KB, Haynes RB.
    Effects of computer-based clinical decision support systems on clinician performance
    and patient outcome. A critical appraisal of research.
    Ann Intern Med 1994;120:135-42

  21. General Medical Council.
    Doctors of the Future.
    Recommendations On Undergraduate Medical Education.
    London: GMC, 1993

  22. Standing Committee on Postgraduate Medical and Dental Education.
    Continuing Professional Development for Doctors and Dentists.
    London: SCPM & DE, 1994
