Missed my 2013 presentation on #RapidReview at G-I-N? No worries, nothing has changed

In 2013 I presented “If Rapid Reviews are the Answer, What is the Question?” at the Guidelines International Network (G-I-N) conference in San Francisco. My interest in rapid review began in New York the year before, at the Evidence-Based Guidelines Affecting Policy, Practice and Stakeholders (EGAPPS) conference. Several colleagues shared that they were also concerned about the confusion from adding rapid review to our nomenclature so soon after the Institute of Medicine's standardization of systematic review methods. This prompted me to search for standards describing:

- What is a rapid review?
- When should it, or shouldn't it, be used?
- How does it differ from systematic or other evidence reviews?

I shared what I learned at the conference.

Now, two years later, I wonder whether we are any closer to answers.

If you start with the simplest question, “What defines a rapid review?” it’s tempting to provide the simplest answer. It must be speed in completing the review, right? But this doesn’t address how the speed is achieved. What parts of the review become rapid? And what is meant by rapid? As you’ll see, that too is relative.

Several others have looked at what a rapid review might be. At the time of my presentation, I included three systematic examinations of rapid review (references 1-3).

Watt defined rapid review as a health technology assessment or systematic review that takes one to six months to produce. Ganann defined it as a literature review that accelerates the systematic review process, and Harker was most generous, allowing anything that calls itself a rapid review. The results were uniform in finding variability in:

- Methods
- Uses
- Standards
- Time to completion

Most surprising was the finding that the time frame for “rapid” varies considerably and is relative. Harker reported a mean time for completion (10.42 months) that was not much shorter than that for systematic review!

All authors found that rapid review developers made different choices about where to achieve speed. Choices such as:

- Narrowing breadth of review question
- Limiting sources searched
- Limiting time frame for search
- Eliminating or limiting quality review
- Limiting depth of analysis

In many instances, the rapid review developers were not fully transparent about the potential for bias introduced by each of these shortcuts.

All three of these authors had built an assumption into their assessments: that a rapid review is a faster systematic review. But is it? I'd heard presentations by organizations that had implemented rapid reviews using a technique considered by others, like Cochrane, to be an overview of reviews. Instead of rapidly assessing primary literature, they find and synthesize secondary literature, specifically existing systematic reviews.

That led me to conduct an internet search for rapid reviews created by various organizations. I confirmed that many developers of rapid review products were not reviewing primary literature but were evaluating secondary literature. So it wasn't always about speedy creation of a de novo systematic evidence review; it was often about creating a user-friendly assessment of existing evidence, usually from prior systematic reviews. The commonality was what precipitated the rapid review: generally a user request for a quick answer, often to support policy decisions.

So a rapid review could be a:

- Type of review that uses shortcuts, reducing rigor and increasing the risk of introducing bias
- Translated, user-friendly product using existing systematic reviews
- Transformed systematic review process that shortens production time without sacrificing rigor

Rapid review may be a label attached to a quick-and-dirty process that introduces selection bias in the search for evidence and information bias in its evaluation.

Or it may mean a systematic appraisal of secondary research designed to meet user needs.

Or it may mean improved process methods that can accelerate the systematic review timeline without reducing rigor: increasing available resources, implementing leaner processes during the development phase, or automating steps where the technology is ready.

Is rapid review a phase of the systematic review process?

To further complicate the issue, I did find that some organizations use the label rapid review to describe a kind of quick scoping review, used to look for major trends or patterns in research. These developers generally acknowledge that their review was not systematic but rather a pre-systematic review indicating the need for, or the feasibility of, a thorough review. In that context, the rapid review could fit before a systematic review. The more common use by organizations, however, was as a translated or aggregated product built from existing systematic reviews, more like a knowledge translation product than a new review.

In February of this year, the Agency for Healthcare Research and Quality Evidence-based Practice Centers' Methods Workgroup released "An Exploration of Methods and Context for the Production of Rapid Reviews" (reference 4) to address these issues. They combined a literature review with key informant interviews and confirmed previous findings that rapid review has no standard meaning and that the products are highly variable.

Here are my suggestions based on what I’ve learned.

1. Stop using the label rapid review. It has no meaning and adds to confusion about evidence reviews.

2. Use the label scoping review when a user wants a quick first look at what evidence may exist for their question.

3. Use either the label "overview of reviews" or "systematic review of systematic reviews" when describing an evaluation of existing systematically derived evidence summaries.

4. Work together to improve systematic review processes so we can create evidence reviews in shorter time frames without sacrificing rigor.

References

1. Watt A, et al. Rapid reviews vs. full systematic reviews. Int J Technol Assess Health Care. 2008;24(2):133–139.

2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

3. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10:397–410.

4. Hartling L, et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Research White Paper. AHRQ Publication No. 15-EHC008-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2015. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

 

TheEvidenceDoc July 2015

 

A Powerball® ticket and relative vs absolute estimates of disease

This week's Powerball® mania seems a good time to talk about relative and absolute estimates of disease. This epi professor learned from her students years ago (especially from a bartender and a professional gambler) that real-world examples can be useful for explaining epidemiologic and biostatistical methods.

Got yours?

Do you have a lottery ticket for Saturday's draw? I'll admit to an occasional purchase when the payout is high, just for the entertainment value of having a ticket in hand in the company of friends as the numbers are called out. But only the one set of numbers. After all, my absolute risk of winning the big payout, according to Powerball®, is just 1 in 175,223,510, or 0.000000005707. If I buy 10 tickets, it's still just 10 in 175,223,510, or 0.00000005707, a barely detectable increase. But according to news reports, that doesn't stop people from buying 1,000 or more tickets to increase their chance of winning.

This lottery ticket scenario is an easy illustration of how changing my relative odds, or relative chance, of winning (a tenfold increase from buying 10 tickets instead of 1) doesn't really change my absolute chance of winning by any appreciable amount, because the underlying chance of winning is so small.
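For readers who like to check the arithmetic, here's a minimal sketch in Python. The jackpot odds are the Powerball® figure quoted above; everything else is just division.

```python
COMBINATIONS = 175_223_510  # Powerball® jackpot odds: one ticket covers one combination

def absolute_chance(tickets: int) -> float:
    """Probability of hitting the jackpot with a given number of distinct tickets."""
    return tickets / COMBINATIONS

one = absolute_chance(1)
ten = absolute_chance(10)

print(f"1 ticket:   {one:.12f}")             # 0.000000005707
print(f"10 tickets: {ten:.12f}")             # 0.000000057070
print(f"relative change: {ten / one:.0f}x")  # 10x
print(f"absolute change: {ten - one:.12f}")  # 0.000000051363, still vanishingly small
```

The relative change looks impressive; the absolute change is what your wallet should care about.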

So why are epidemiologists like me so enamored with relative disease estimates and seemingly less enamored with absolute disease estimates?

These different measurements serve different purposes.

Relative Disease Estimates

One of the big goals of epidemiology is to study patterns of disease and health and, in so doing, to discover associations that may be causal relationships. So we do research like the studies of Sir Richard Doll that uncovered cigarette smoking as a cause of lung cancer, at a time when the suspected major cause was believed to be industrial pollution. Or like the research that uncovered occupational exposure to vinyl chloride as a cause of angiosarcoma of the liver. These kinds of studies compare the occurrence of disease in persons with an exposure to that in persons without it.

Through these relative comparisons, epidemiologists demonstrate 1) strength of association, by comparing exposed to unexposed people, and 2) dose response, by measuring increasing occurrence of disease in persons with increasing levels of exposure. These represent two of Hill's causal criteria, with Hill's Causal Criteria being one of the logic frameworks for assessing causality. We'll talk more about Hill in context in a later blog. For now, we are simply explaining the relative importance of relative measures in an epidemiologist's armamentarium. Epidemiologists like to discover causal relationships; after all, Sir Richard Doll is famous, well, at least among us epidemiologists.
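To make those two relative comparisons concrete, here's a minimal sketch with hypothetical cohort counts. The numbers are purely illustrative (they are not from Doll's studies); the point is the arithmetic of a risk ratio and of a dose-response pattern.

```python
def risk(cases: int, total: int) -> float:
    """Incidence proportion: cases divided by persons at risk."""
    return cases / total

exposed_risk   = risk(cases=90, total=1_000)   # e.g., an exposed group
unexposed_risk = risk(cases=10, total=1_000)   # e.g., an unexposed group

# Strength of association: how many times more often disease occurs
# among the exposed.
risk_ratio = exposed_risk / unexposed_risk
print(f"risk ratio: {risk_ratio:.1f}")  # 9.0 -> a strong association

# Dose response: risk climbing with the level of exposure (hypothetical counts).
for level, cases in [("none", 10), ("light", 30), ("heavy", 90)]:
    print(f"{level:>5}: {risk(cases, 1_000):.3f}")  # 0.010, 0.030, 0.090
```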

Of course, it's pretty obvious that having a ticket is causally related to winning. Your odds of winning with no ticket are zero. But there isn't a dose response; the person with the most tickets isn't guaranteed to win.

So the causal path, or analytic framework, for winning the lottery is having at least one ticket.

Absolute Disease Estimates

Absolute estimates provide an assessment of the frequency of a disease or condition in a population.  Absolute estimates are an important tool for health policy and planning when determining where to place limited resources.  Likewise, they should be used by people with limited resources when deciding how many tickets to buy or even whether to play the lottery.
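Using the same hypothetical numbers as the sketch above, here's how an absolute measure (the risk difference) answers the planner's question that the risk ratio cannot: how many extra cases should we expect?

```python
# Hypothetical risks carried over from the relative-estimate sketch.
exposed_risk, unexposed_risk = 0.090, 0.010

risk_ratio      = exposed_risk / unexposed_risk   # relative measure
risk_difference = exposed_risk - unexposed_risk   # absolute measure

# For policy and planning: expected excess cases in a hypothetical
# exposed population of 10,000 people.
population = 10_000
excess_cases = risk_difference * population

print(f"risk ratio:      {risk_ratio:.1f}")       # 9.0
print(f"risk difference: {risk_difference:.3f}")  # 0.080
print(f"excess cases per {population:,} exposed: {excess_cases:.0f}")  # 800
```

The risk ratio flags a strong association worth investigating; the risk difference tells a planner how big the problem actually is.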

Your doctor will balance information from relative and absolute estimates, which is important in direct patient care, particularly in shared decision making when there are care choices. We'll need to spend a whole blog on that topic. But our goal for now is to start with a clear picture of the difference between relative and absolute estimates.  Got it?

TheEvidenceDoc May 2013

From Cherry Blossoms to Cherry Picking: How to spot reviews that are not evidence based

It's cherry blossom time in our nation's capital.  I visualize the puffy, pink petals and smell the sweet fragrance, before my thoughts shift to summer in Michigan tasting cherry fruits so juicy they drip down my chin. Mixed with all these pleasant memories is an image of cherry-picking, or selecting only the ripest, juiciest fruits from the trees.

What does cherry picking have to do with clinical epidemiology and evidence based medicine? The phrase is used to describe the practice of selecting, or "cherry picking," studies that support a prevailing belief or an individual's personal belief. It is a practice counter to the scientific method of gathering all the evidence, which is the foundation of evidence based medicine. All studies relevant to your clinical question should be gathered, critically evaluated, and summarized, independent of their findings and conclusions.

This cherry picking of evidence is different from the selection of studies based on their relevance to your clinical question and the quality of the research. So it's not selecting studies for a review that is the problem; it's HOW the studies are selected.

Let's say you are interested in whether or not tight control of blood glucose in very sick patients in a hospital intensive care unit leads to better survival. To answer this clinical question, a reviewer would not select studies of blood glucose control in otherwise healthy diabetics living at home. Those studies would not help answer a question about caring for ICU patients. You are looking for a review of studies that has a good chance of answering your question.

So in order to evaluate whether or not evidence based methods were followed in a review of tight control in ICU patients, you first look for clues that the review authors had a focused question close to the one you are asking. The review question should specify the Patients or Population of interest (very sick patients in the ICU, with "sick" being defined), the Intervention (tight control of blood glucose, which should also be defined), the Comparison group (no tight control), and the Outcome (better survival, usually measured in time). These criteria are often referred to as PICO.

When a question is well described, then studies identified through a comprehensive search can be selected to answer that question. The review should clearly provide you with the criteria for including and excluding studies, and these criteria should be developed before the selection process begins. Studies must be selected independently of their results, solely on whether or not they were designed and conducted to answer these questions. This reduces error, like the error from cherry-picking studies for data supporting a conclusion the reviewer wants or expects to find, and from ignoring studies whose results differ from those expectations.
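If it helps to see that logic laid out, here is a minimal sketch of results-blind screening against a pre-specified PICO question. The class, field names, and example studies are hypothetical, not from any real screening tool.

```python
from dataclasses import dataclass

@dataclass
class PICO:
    population: str
    intervention: str
    comparison: str
    outcome: str

# Inclusion criteria fixed BEFORE screening begins, from the focused question.
question = PICO(
    population="very sick adult ICU patients",
    intervention="tight blood glucose control",
    comparison="no tight control",
    outcome="survival",
)

def include(study: PICO) -> bool:
    """Results-blind screen: include only if every PICO element matches.

    Note what is absent: the study's findings play no part in the decision.
    """
    return study == question

# A study of healthy diabetics at home fails on population alone.
candidate = PICO(
    population="otherwise healthy diabetics living at home",
    intervention="tight blood glucose control",
    comparison="no tight control",
    outcome="survival",
)
print(include(candidate))  # False: excluded before any results are read
```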

Evidence based methods should also use another selection process. This one may be harder for the non-scientist to evaluate, but again you are looking for selection that occurs regardless of the study findings, and because of the study methods. This selection is based on the quality of the studies. You don't want to include data from very poor studies, because that data is likely to be compromised and misleading, or just plain wrong. Reviewers should do their best to evaluate the quality of the studies and to include only studies without serious error in their design or conduct.

When we say design of the study here, we are considering much more than just the simplistic categorization of studies as experimental (such as RCTs) or observational (such as a longitudinal cohort study). Whether or not an experimental design is an appropriate choice is certainly one component of quality. Additionally, the study itself, regardless of whether it's an RCT, a cohort, or whatever, must be designed and completed so as to minimize error in what it measures. In short, it needs to be a good RCT, or cohort, or whatever kind of study. Unfortunately, there are many examples of poor quality studies, yes, even RCTs, in the published literature, so studies must be evaluated and not accepted on faith. Methodological standards for good quality studies are available, and we'll talk about them in another post. Evidence based reviews should include assessment against these standards of quality for each study included in the review. Then the review can reject studies that suffer from such serious errors that their reported data is compromised.
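Here is the companion sketch for that second filter. The risk-of-bias labels are illustrative stand-ins for a formal quality appraisal, which uses detailed methodological criteria rather than a single label.

```python
from dataclasses import dataclass

@dataclass
class AppraisedStudy:
    name: str
    relevant: bool     # passed the results-blind PICO screen above
    risk_of_bias: str  # "low", "some concerns", or "high" (illustrative)

studies = [
    AppraisedStudy("Trial A",  relevant=True,  risk_of_bias="low"),
    AppraisedStudy("Trial B",  relevant=True,  risk_of_bias="high"),
    AppraisedStudy("Cohort C", relevant=False, risk_of_bias="low"),
]

# Keep only relevant studies without serious design or conduct errors;
# once again, the findings never enter the decision.
included = [s.name for s in studies if s.relevant and s.risk_of_bias != "high"]
print(included)  # ['Trial A']
```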

So picking studies based on their relevance to the clinical question is good science, and picking studies based on their quality is good science, too. Picking studies because the results agree with the belief of the reviewer is bad science.

We'll talk more about other ways to tell if reviews are really evidence based, but for now, practice looking for cherry-picking. And please share questions, comments or discussion points here.

©TheEvidenceDoc April 15, 2013