Those of us old enough to have grown up along with EBM have experienced its promotion and denigration over the years, both cyclic and concurrent. Yet overall, I remain encouraged by our progress and excited for our next decades of improvements. And we have made improvements. Unfortunately, EBM’s growing pains are as public as a celebrity’s private life. That is the curse and the blessing of our methods.
You see, at the heart of evidence-based methods is the requirement of transparency. EBM must be transparent, so that the evaluation of evidence can itself be evaluated. We must be able to assess:
o whether the search for and selection of evidence are thorough and likely to minimize bias
o whether the identified studies were conducted in ways that minimize bias
o whether results are combined and summarized fairly
o whether the information is relevant to practice and patients
We can’t tell unless the methods are clearly presented. This transparency also fosters our ongoing discussion and discovery of better methods. But our public discussions also make us vulnerable to those whom we’ve challenged to abandon eminence for evidence, those who may feel threatened by a movement away from “Trust me, I’m the doctor” to “Let me show you the data”.
Those elders among us know all too well that you can always find an expert to support any opinion. We were trained on the cohort studies of Sir Richard Doll and Sir Richard Peto linking smoking to lung cancer. We then used their studies to train new students in epidemiology methods while we watched tobacco company scientists testify they believed cigarettes were safe.
We also know that you can find, or create, statistics to support any opinion by using confirmation bias to guide your search and summary. This cherry picking of evidence to support an existing belief is EBM hijacked. We must point this out when we see it, and we should easily see it since EBM requires full transparency.
The transparency of our methods also highlights our weaknesses, not only to ourselves as we work to improve our methods but also to others. We know there are many challenges in summarizing evidence that we cannot yet control, no matter how thorough our search or how careful our methods. These challenges include:
o Studies that are withheld, willfully or otherwise, through unpublished and unshared research
o Journals that favor studies with new and dramatic findings, creating a bias away from studies with confirmatory and null findings
o The lack of a universal data repository, leaving multiple databases of published and unpublished studies that challenge our ability to find all relevant data
o The propensity for trials to be conducted in people, often men, with a single disease or condition who are willing to submit to experimentation with new therapies, which limits our ability to extrapolate from those trials to people with multiple risks and diseases, and often to women
o Legacy EBM methods that favored RCTs over good observational designs because some early practice of EBM focused only on answering questions about interventions, missing questions important to the vast clinical practice of prevention, diagnosis, and prognosis
o The role of judgment, essential to the interpretation of evidence, that was initially filled by clinical experts on guideline panels who missed patient preferences when balancing benefit and harm
o Missing evidence for important intervention delivery details needed to individualize treatment, like optimal dosing, duration of therapy, and treatment goals
And these are only some of our challenges. To some, even this list may suggest that we have identified too many challenges, or that the challenges are too large. But they are not impossible. With continued work to improve our methods and to include patients and even naysayers in our processes, we will build a stronger foundation for combining evidence with clinical experience to help inform our patients’ decisions about their care.
TheEvidenceDoc August 25, 2014