Healthy sampling: A very brief primer on selection bias for students of #epidemiology

Selection bias can occur when the process of choosing people to participate in a study isn’t random. This creates a study sample that is not representative of the entire population you want to know about. This systematic error in sampling leads to error in your results.

What if your population is segmented, perhaps into people who have diabetes and those who don’t? Or people who are exposed to an important cause of disease, like cigarette smoking, and those who are not exposed?* Can you see that if you pick a sample mostly from the upper left portion of the circle, you will overestimate the amount of smoking in your population? And if you are studying a disease strongly associated with smoking, you will end up estimating a higher proportion of disease in your population.

*According to the CDC, prevalence of cigarette smoking among U.S. adults is highest among people living in the Midwest (25.4%), where TheEvidenceDoc is located. https://www.cdc.gov/tobacco/disparities/geographic/index.htm
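If you prefer to see the arithmetic, here is a minimal simulation sketch of the idea in Python. The numbers are invented for illustration (a 25% smoking rate and hypothetical disease risks by smoking status); the point is only that a sample drawn mostly from the smoking segment of the circle overstates both smoking and disease prevalence, while a random sample does not.

# A minimal sketch (hypothetical numbers) of how a non-random sample
# distorts prevalence estimates. Assumes 25% of the population smokes and
# that smokers have a higher risk of the disease than non-smokers.
import random

random.seed(1)

POP_SIZE = 100_000
TRUE_SMOKING_RATE = 0.25                   # assumed smoking prevalence
DISEASE_RISK = {True: 0.20, False: 0.05}   # assumed disease risk by smoking status

# Build the population: one (smoker, diseased) pair per person.
population = []
for _ in range(POP_SIZE):
    smoker = random.random() < TRUE_SMOKING_RATE
    diseased = random.random() < DISEASE_RISK[smoker]
    population.append((smoker, diseased))

def prevalence(sample):
    """Return (smoking prevalence, disease prevalence) in a sample."""
    n = len(sample)
    return sum(s for s, _ in sample) / n, sum(d for _, d in sample) / n

# A random sample is representative of the whole circle.
random_sample = random.sample(population, 2_000)

# A biased sample drawn mostly from the smoking segment of the circle.
smokers = [p for p in population if p[0]]
nonsmokers = [p for p in population if not p[0]]
biased_sample = random.sample(smokers, 1_500) + random.sample(nonsmokers, 500)

print("random sample (smoking, disease):", prevalence(random_sample))
print("biased sample (smoking, disease):", prevalence(biased_sample))

Under these made-up numbers, the random sample recovers roughly the true 25% smoking rate and the population disease rate, while the biased sample reports roughly 75% smoking and a correspondingly inflated disease proportion, purely because of how it was selected.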

You can find some examples of biased sampling in the polls on Twitter. Since Twitter uses hashtags to group tweets and make it easier to follow certain topics, some pollsters have used hashtags to direct their polls to certain groups of people. If the intent is to accurately measure a population opinion, how will this segmented reach impact the results of their polls and the generalizability of those findings?

Are your vaccinations up to date?

“Vaccine hesitancy” is the research label for your indecision about whether or not to get a vaccination when offered. In the past, patients rarely questioned the need for vaccines. Perhaps that was due to differences in the doctor-patient relationship. Or perhaps it was due to the constant reminders of what happened without the vaccine. For those of us old enough to have lived through the introduction of some vaccines, there were reminders of the diseases they prevent. I still have scars on my legs from chickenpox. But more haunting are the vivid memories of my friend Jim as he dragged his uncooperative legs around my 4th grade classroom on crutches, wearing leg and body braces. Jim had survived polio.

So I can’t help wondering how sharing our past, telling stories of the bad old days before vaccines, might impact our willingness to be accurately informed about the net benefits of vaccination.

One of my favorite storytellers of disease is Berton Roueche. For years he wrote his mysteries of disease, and the detectives who solved them, for The New Yorker, and he published several collections of those short stories. They are mostly out of print and hard to find, but they are spellbinding. He painted vivid pictures of the effects of the diseases, like his description of tetanus, a vaccine-preventable disease, in his story “A Pinch of Dust.” Using quotes from Hippocrates and another ancient physician, Roueche presents an unforgettable image of a patient whose first symptom, difficulty talking, was followed quickly by locked teeth and jaws, and only three days later by muscle contractions of the back so severe that the head was bent backward between the shoulder blades while the spine arched until it appeared the patient was trying to touch his head to his heels. Six days later the patient was dead.

Roueche also describes the bacterium that causes tetanus, Clostridium tetani, and the incredible strength of the toxin it makes (“one of the most venomous poisons known to man”). And, he explains, it is an essentially incurable disease. Since the time of his writing, intensive care and therapeutic support options have improved. Still, you may want to read the Wikipedia description of treatment for severe tetanus, especially if you aren’t up to date on your Tdap vaccine or don’t know when you had your last dose. Or read the CDC description of your risk of death if you get tetanus. It’s one out of every 10 people, even with the best medical care.


You need the vaccine every ten years.


I’m not recommending fear as a strategy for reducing vaccine hesitancy. You can review the state of the literature on how to address vaccine hesitancy by going to PubMed and entering “vaccine hesitancy”. Or start with this systematic review. Spoiler alert: the causes of vaccine hesitancy are multifactorial and context-specific, and thus likely to need multi-pronged interventions to address them.

But stories from a past that no longer exists due to the success of modern vaccines may help prevent us from having to experience that past once again.

TheEvidenceDoc 2016

If you want to learn more about the vaccinations available to prevent disease in adults, check out the resources available from the CDC.
 

 

Missed my 2013 presentation on #RapidReview at G-I-N? No worries, nothing has changed

In 2013 I presented “If Rapid Reviews are the Answer, What is the Question?” at the Guidelines International Network (G-I-N) conference in San Francisco. My interest in rapid review began in New York the year before, at the Evidence-Based Guidelines Affecting Policy Practice and Stakeholders (EGAPPS) conference. Several colleagues shared that they were also concerned about the confusion created by adding rapid review to our nomenclature so soon after the Institute of Medicine’s standardization of systematic review methods. This prompted me to search for standards describing:

o   What is a rapid review?

o   When should it, or shouldn’t it, be used?

o   How does it differ from systematic or other evidence reviews?

I shared what I learned at the conference.

Now, two years later, I wonder if we are any closer to answers.

If you start with the simplest question, “What defines a rapid review?” it’s tempting to provide the simplest answer. It must be speed in completing the review, right? But this doesn’t address how the speed is achieved. What parts of the review become rapid? And what is meant by rapid? As you’ll see, that too is relative.

Several others have looked at what a rapid review might be. At the time of my presentation, I included three systematic looks at rapid review. (References 1,2,3)

Watt defined a rapid review as a health technology assessment or systematic review that takes one to six months to produce. Ganann defined it as a literature review that accelerates the systematic review process, and Harker was most generous, allowing anything that calls itself a rapid review. The results were uniform in finding variability in:

o   Methods

o   Uses

o   Standards

o   Time to completion

Most surprising was the finding that the time frame for “rapid” varies considerably and is relative. Harker reported a mean time for completion (10.42 months) that was not much shorter than that for systematic review!

All authors found that rapid review developers made different choices about where to achieve speed. Choices such as:

o   Narrowing breadth of review question

o   Limiting sources searched

o   Limiting time-frame for search

o   Eliminating or limiting quality review

o   Limiting depth of analysis

In many instances, the rapid review developers were not fully transparent about the potential for introducing bias with each of these shortcuts.

All three of these authors had built an assumption into their assessments: that a rapid review is a faster systematic review. But is it? I’d heard presentations by organizations that had implemented rapid reviews using a technique considered by others, like Cochrane, to be an overview of reviews. Instead of rapidly assessing primary literature, they find and synthesize secondary literature, specifically existing systematic reviews.

That led me to conduct an internet search for rapid reviews created by various organizations. I confirmed that many developers of rapid review products were not reviewing primary literature but were evaluating secondary literature. So it wasn’t always about speedy creation of a de novo systematic evidence review; it was often about creating a user-friendly assessment of existing evidence, usually from prior systematic reviews. The commonality was what precipitated the rapid review: generally a user request for a quick answer, often to support policy decisions.

So a Rapid Review could be a:

o   Type of review that uses shortcuts reducing rigor and increasing the risk of introducing bias

o   Translated, user-friendly product using existing systematic reviews

o   Transformed systematic review process that shortens time spent in production without sacrificing rigor

Rapid review may be a label attached to a quick and dirty process that introduces selection bias in the search for and information bias in the evaluation of evidence.

Or it may mean a systematic appraisal of secondary research designed to meet user needs.

Or it may mean improved process methods that can accelerate the systematic review timeline without reducing rigor: things like increasing available resources, implementing leaner processes during the development phase, or automating steps where the technology is ready.

Is rapid review a phase of the systematic review process?

To further complicate the issue, I did find that some organizations use the label rapid review when describing a kind of quick scoping review, used to look for major trends or patterns in research. These developers generally acknowledge that their review was not systematic but rather a pre-systematic review that indicated the need for, or feasibility of, a thorough review. In that context, the rapid review could fit before a systematic review. The more common use by organizations was as a translated or aggregated product based on existing systematic reviews. In that use, the rapid review was more like a knowledge translation product built from existing systematic reviews.

In February of this year, the Agency for Healthcare Research and Quality Evidence-based Practice Centers’ Methods Workgroup released “EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews” (reference 4) to address these issues. They combined a literature review with key informant interviews and confirmed previous findings that rapid review has no standard meaning and the products are highly variable.

Here are my suggestions based on what I’ve learned.

1.     Stop using the label rapid review. It has no meaning and adds to confusion about evidence reviews.

2.     Use the label scoping review when a user wants a quick first look at what evidence may exist for their question.

3.     Use either the label "overview of reviews" or "systematic review of systematic reviews" when describing an evaluation of existing systematically derived evidence summaries.

4.     Work together to improve systematic review processes so we can create evidence reviews in shorter time frames without sacrificing rigor.

References

1. Watt A, et al. Rapid reviews vs. full systematic reviews. Int J Technol Assess Health Care. 2008;24(2):133–139.

2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

3. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10:397–410.

4. Hartling L et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Research White Paper. AHRQ Publication No. 15-EHC008-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2015. www.effectivehealthcare.ahrq.gov/reports/final.cfm.

 

TheEvidenceDoc July 2015