Missed my 2013 presentation on #RapidReview at G-I-N? No worries, nothing has changed

In 2013 I presented “If Rapid Reviews are the Answer, What is the Question?” at the Guidelines International Network (G-I-N) conference in San Francisco. My interest in rapid review began in New York the year before at the Evidence-Based Guidelines Affecting Policy, Practice and Stakeholders (EGAPPS) conference. Several colleagues shared that they, too, were concerned about the confusion created by adding rapid review to our nomenclature so soon after the Institute of Medicine's standardization of systematic review methods. That prompted me to search for standards describing:

o   What is a rapid review?

o   When should it, or shouldn’t it, be used?

o   How does it differ from systematic or other evidence reviews?

I shared what I learned at the conference.

Now, two years later, I wonder whether we are any closer to answers.

If you start with the simplest question, “What defines a rapid review?” it’s tempting to provide the simplest answer. It must be speed in completing the review, right? But this doesn’t address how the speed is achieved. What parts of the review become rapid? And what is meant by rapid? As you’ll see, that too is relative.

Several others have looked at what a rapid review might be. At the time of my presentation, I included three systematic examinations of rapid review (references 1–3).

Watt defined rapid review as a health technology assessment or systematic review that takes one to six months to produce. Ganann defined it as a literature review that accelerates the systematic review process, and Harker was most generous, allowing anything that calls itself a rapid review. The results were uniform in finding variability in:

o   Methods

o   Uses

o   Standards

o   Time to completion

Most surprising was the finding that the time frame for “rapid” varies considerably and is relative. Harker reported a mean time to completion (10.42 months) that was not much shorter than that for a systematic review!

All authors found that rapid review developers made different choices about where to achieve speed. Choices such as:

o   Narrowing breadth of review question

o   Limiting sources searched

o   Limiting time-frame for search

o   Eliminating or limiting quality review

o   Limiting depth of analysis

In many instances, the rapid review developers were not fully transparent about the potential for bias introduced by each of these shortcuts.

All three of these authors had built an assumption into their assessments: that a rapid review is a faster systematic review. But is it? I'd heard presentations by organizations that had implemented rapid reviews using a technique considered by others, like Cochrane, to be an overview of reviews. Instead of rapidly assessing the primary literature, they find and synthesize secondary literature, specifically existing systematic reviews.

That led me to conduct an internet search for rapid reviews created by various organizations. I confirmed that many developers of rapid review products were not reviewing primary literature, but were evaluating secondary literature. So it wasn’t always about speedy creation of a de novo systematic evidence review, but was often about creating a user-friendly assessment of existing evidence, usually drawn from prior systematic reviews. The commonality was what precipitated the rapid review: generally a user request for a quick answer, often to support a policy decision.

So a Rapid Review could be a:

o   Type of review that uses shortcuts reducing rigor and increasing the risk of introducing bias

o   Translated, user-friendly product using existing systematic reviews

o   Transformed systematic review process that shortens time spent in production without sacrificing rigor

Rapid review may be a label attached to a quick-and-dirty process that introduces selection bias in the search for evidence and information bias in its evaluation.

Or it may mean a systematic appraisal of secondary research designed to meet user needs.

Or it may mean improved process methods that can accelerate the systematic review timeline without reducing rigor: increasing available resources, implementing leaner processes during the development phase, or automating steps where the technology is ready.

Is rapid review a phase of the systematic review process?

To further complicate the issue, I did find that some organizations use the label rapid review to describe a kind of quick scoping review, used to look for major trends or patterns in research. These developers generally acknowledge that their review was not systematic but rather a pre-systematic review that indicated the need for, or feasibility of, a thorough review. In that context, the rapid review could fit before a systematic review. The more common use by organizations was as a translated or aggregated product based on existing systematic reviews; in that use, the rapid review was more like a knowledge translation product.

In February of this year, the Agency for Healthcare Research and Quality Evidence-based Practice Centers’ Methods Workgroup released “An Exploration of Methods and Context for the Production of Rapid Reviews” (reference 4) to address these issues. They combined a literature review with key informant interviews and confirmed previous findings that rapid review has no standard meaning and that the products are highly variable.

Here are my suggestions based on what I’ve learned.

1.     Stop using the label rapid review. It has no meaning and adds to confusion about evidence reviews.

2.     Use the label scoping review when a user wants a quick first look at what evidence may exist for their question.

3.     Use either the label "overview of reviews" or "systematic review of systematic reviews" when describing an evaluation of existing systematically derived evidence summaries.

4.     Work together to improve systematic review processes so we can create evidence reviews in shorter time frames without sacrificing rigor.

References

1. Watt A, et al. Rapid reviews vs. full systematic reviews. Int J Technol Assess Health Care. 2008;24(2):133–139.

2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

3. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10:397–410.

4. Hartling L et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Research White Paper. AHRQ Publication No. 15-EHC008-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2015. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

 

TheEvidenceDoc July 2015

 

Watson, Artificial Intelligence and the Evidence for Better Healthcare

My local PBS is running the special on IBM Watson and Jeopardy again. As you may recall, Watson dominated the game against Brad Rutter and Ken Jennings with strength on purely factual questions. I've watched the PBS special and the game several times now and never tire of watching Watson learn, just as I never tire of watching children (or adults) learn. Mistakes reveal more about the logic process than correct answers do.

Last month IBM, Memorial Sloan-Kettering and WellPoint announced commercial applications of Watson's work in healthcare that recommend and authorize cancer treatment options, providing confidence estimates for the options along with the supporting evidence Watson used to make the recommendation.

I'd love to be able to watch Watson at this work; the supporting estimates and evidence for the right answers would be as fascinating as those for the wrong ones. I do hope someone is watching and learning along with Watson.

You see, those of us who've spent a lifetime in medical evidence know there's very little that's straightforward, black and white, or factual about our current best evidence. Evidence accumulates, and our interpretation of it changes as the data and our methods for accumulating and evaluating it grow and strengthen. A single new study is not accepted as the right answer, but as a piece of new information to blend with the old. And the blending itself is a challenge, for a wide variety of reasons. Unlike the measurements used in manufacturing, such as lengths, widths and temperatures, measures of disease, and even disease definitions and how we diagnose them, change with new knowledge and new technologies. Multiple treatments and supporting care options are applied and changed simultaneously, making attribution of benefit to a single intervention challenging. And patients themselves react differently both to disease processes and to treatments for the disease.

I describe only some of the challenges, not because I think they are too great for Watson, but because I think perhaps Watson can help. I'm just not sure he's ready for clinical practice yet.

The reality is that for healthcare, we have considerable uncertainty around many diagnostic and care decisions. The evidence is weak, absent or not yet characterized for many conditions. So I would welcome Watson as a partner in the process of developing evidence based guidance for healthcare. Watson could begin by assisting us with evidence synthesis through systematic review of the evidence. This is the foundational step for developing evidence based clinical practice guidelines; see the Institute of Medicine (IOM), the Agency for Healthcare Research and Quality (AHRQ) and the National Institute for Health and Clinical Excellence (NICE) for more detail on systematic evidence review. Watson could search for relevant studies, published and unpublished, using criteria for including and excluding studies based on the specific diagnostic or treatment question and the eligible patients and care settings. Watson would, of course, have to pursue many rounds of "learning" just like those for Jeopardy. I'd love to watch that process.

And once the eligible studies are selected, could Watson be taught to evaluate their quality? Oh, did I forget to mention that not all studies, even those published, are of good quality, and that some have such serious limitations in their design or execution that their findings are invalid? We don't want to include severely biased results in our evidence composite, as they could mislead our understanding of the effectiveness or safety of the clinical action. And we need to evaluate and rate the seriousness of the limitations in studies with only minimal to moderate bias so we can attempt to understand how measurement errors influence the findings. Oh, and at various stages throughout the process of evidence identification, selection, and evaluation, Watson would need clinical input to address importance and relevance to the care of patients.
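If it helps to picture what "teaching" Watson the study-selection step might involve, here is a minimal, purely hypothetical sketch in Python of applying stated inclusion and exclusion criteria to candidate study records. The Study fields, the criteria, the screen function, and the example records are all invented for illustration; this is not how Watson or any real literature database works, and the hard parts remain specifying the criteria well and the quality appraisal that follows.

# Illustrative sketch only: a toy version of the eligibility screening step
# described above. Every field, criterion, and record here is hypothetical;
# this is not a real Watson, PubMed, or AHRQ interface.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class Study:
    title: str
    population: str   # population studied, as reported in the abstract
    design: str       # e.g. "randomized controlled trial", "case series"


# Hypothetical eligibility criteria for one review question
ELIGIBLE_DESIGNS = {"randomized controlled trial", "prospective cohort"}
REQUIRED_POPULATION_TERM = "type 2 diabetes"


def screen(study: Study) -> Tuple[bool, str]:
    """Apply the stated inclusion/exclusion criteria to a single record."""
    if study.design.lower() not in ELIGIBLE_DESIGNS:
        return False, "excluded: ineligible study design ({0})".format(study.design)
    if REQUIRED_POPULATION_TERM not in study.population.lower():
        return False, "excluded: population does not match the review question"
    return True, "included: proceed to full-text review and quality appraisal"


if __name__ == "__main__":
    candidates = [
        Study("Drug A vs placebo for glycemic control",
              "Adults with type 2 diabetes", "randomized controlled trial"),
        Study("Drug A in a single patient",
              "One adult with type 2 diabetes", "case series"),
    ]
    for record in candidates:
        included, reason = screen(record)
        print(record.title, "->", reason)

Even in this toy form, every exclusion carries an explicit reason that can be reported, which is the kind of transparency the IOM standards expect of a systematic review; the real difficulty, as noted above, lies in the rounds of learning, the quality appraisal, and the clinical input around it.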

Clinical specialty societies and other organizations that develop clinical practice guidelines both welcomed and bemoaned the release of the IOM standards for evidence based guideline development. They welcomed the explicit, transparent methods but bemoaned the time and resources needed to follow those standards. Patients and providers (and payers) are waiting for those evidence based guidelines to be developed, maintained, and updated for currency. It can take a year or more to follow the process, all while technology and clinical practice move forward, so guidelines may be outdated by the time they are produced. Meanwhile, health information technology, including clinical decision support systems and electronic health records, is automating current guidelines that may be based on expert opinion rather than an examination of all the evidence.

It's certainly not elementary, my dear Watson. Are you interested?

 TheEvidenceDoc

Applying a dose of evidence to testing quality improvements

Improving healthcare quality requires both knowledge of which clinical actions work and knowledge of how to repeatedly provide those effective clinical actions. Systematically embedding practices that lack evidence for effectiveness and safety may place patients at risk or waste limited resources.

But how do you know if and when a clinical action works? Currently, in the U.S., there are no universal standards for identifying evidence based best practices for quality improvement. The Institute of Medicine (IOM) did provide standards for clinical practice guidelines and for the systematic reviews that provide the evidence summaries supporting guideline development. These standards incorporate epidemiologic research methods that can be applied to best practice identification and testing as well.

This is the first of a multi-post look into evidence based quality improvement, which will include links to resources and tools for organizations on that path. For a brief overview and checklist from TheEvidenceDoc to get you started thinking about how to evaluate best practices for effectiveness and safety, click here. It's the first dose in a series to develop your skills and comfort with evidence.