I just want #clinicalpracticeguidelines we can trust

Ten years ago, the US Congress, through the Medicare Improvements for Patients and Providers Act, asked what was then the Institute of Medicine (now the National Academies of Sciences, Engineering, and Medicine) to identify the best methods for developing clinical practice guidelines. The IOM did so, producing what is now considered our national standard for clinical guideline development, appropriately titled "Clinical Practice Guidelines We Can Trust".

If you are unfamiliar with this report, you can, and should, read it here.

You will need to read it, understand it, and be prepared to implement it yourself.

You see, the US resource that used these IOM standards to evaluate clinical guidelines and provide free evaluation and information on #clinicalpracticeguidelineswecantrust has lost its funding and will cease to exist after July 16th of this year. You can read about our loss of the National Guideline Clearinghouse here.

While you're there, explore the resources available on the website, particularly the guideline submission kit that provides detail on how the organization implemented the IOM standards. Better get it quick, before the site comes down.

Not all guidelines are to be trusted. An examination of how well guidelines adhered to the new standards shortly after their release found that the guidelines sampled met, on average, only 46.5% of the standards, just less than half.

Developing trustworthy, evidence-based guidelines for clinical care is hard work. The National Guideline Clearinghouse provided more than curated access to guidelines we can trust. Through their detailed methods for guideline inclusion, they also provided training to guideline developers interested in learning and implementing the minimum standards for trustworthy guidelines.

This is a huge loss at a time when we need science more than ever to help us navigate the evidence behind the many medical care choices available.

I don't want access to every clinical practice guideline.

I don't want to waste my time sifting through those that can't meet even half the standards for quality guideline development.

I just want access to clinical practice guidelines we can trust.

TheEvidenceDoc 2018




As you may recall from earlier posts, after assigning the individual evidence ratings for each of the domains for downgrading and upgrading, you then combine them into one summary rating for each outcome. Here's the summary of prior posts for how to get to that overall summary rating.

Remember -

  1. GRADE provides evidence ratings for each outcome.
  2. GRADE uses 5 domains to rate down your confidence in the quality of evidence derived from randomized controlled trials (RCTs).
  3. GRADE starts the overall evidence rating for evidence derived from RCTs at the highest level (HIGH) and then subtracts, or downgrades, for insufficiencies in that evidence. However, there are only 4 levels total, so overall evidence cannot be rated lower than VERY LOW.
  4. The four rating levels in the GRADE approach are High, Moderate, Low, and Very low.
  5. Each domain is considered equal.
  6. Those 5 domains for downgrading are:
    1. Risk of bias = limitations in the design and conduct of the studies that impact validity
    2. Inconsistency = lack of reproducibility of the effect across multiple studies
    3. Indirectness = in any of the PICO elements; if tested in population that differs from the one of interest, if difference in the intervention itself, or use of surrogate outcomes
    4. Imprecision = when confidence intervals around the effect estimate include both benefit and harm and impact the clinical decision threshold
    5. Publication bias = difficult to assess, but GRADE provides some indicators for suspicion that positive studies have been selectively published for the topic
  7. So, for example, if the evidence is downgraded by one level for risk of bias and one level for inconsistency, it would go from an overall rating of High to Low.
  8. After being downgraded by 3 levels when starting at High, the evidence rating cannot be further reduced.
  9. GRADE starts evidence derived from observational studies at an overall rating of Low and allows for upgrading in 3 instances.
  10. Only well-designed and conducted observational studies are eligible for upgrading.
  11. The 3 instances (domains) for upgrading are:
    1. Large effect
    2. Dose-response
    3. All known confounding should be working against the direction of the observed effect
  12. Evidence can be upgraded one or two levels for each domain. 
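The rating arithmetic above can be sketched in a few lines of code. This is just my own illustration of the bookkeeping, not anything published by the GRADE working group; the level names and starting points come from the points above, while the function and variable names are invented for this sketch.

```python
# Illustrative sketch of the GRADE summary-rating arithmetic described above.
# Level names and starting points follow the GRADE approach; everything else
# (function names, parameters) is invented for illustration.

LEVELS = ["Very low", "Low", "Moderate", "High"]  # indices 0..3

def overall_rating(study_design, downgrades=0, upgrades=0):
    """Combine per-domain downgrades/upgrades into one summary rating.

    study_design: "RCT" starts at High; "observational" starts at Low.
    downgrades:   total levels subtracted across the 5 downgrade domains.
    upgrades:     total levels added across the 3 upgrade domains
                  (applies only to well-done observational studies).
    """
    start = 3 if study_design == "RCT" else 1
    # The rating is capped at High and cannot fall below Very low (point 8).
    index = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[index]

# Point 7: RCT evidence downgraded one level for risk of bias and one
# for inconsistency goes from High to Low.
print(overall_rating("RCT", downgrades=2))          # Low
# Point 8: downgrading by 3 or more levels bottoms out at Very low.
print(overall_rating("RCT", downgrades=4))          # Very low
# Observational evidence upgraded one level (e.g., for a large effect).
print(overall_rating("observational", upgrades=1))  # Moderate
```

Note that in practice the judgment lives in deciding how many levels each domain contributes; the arithmetic itself is this simple.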

You know where to go for more detail - the official GRADE handbook

Copyright TheEvidenceDoc December 5, 2017.



It's time for the last domain for upgrading.

I've covered the most straightforward and most common reasons for upgrading evidence from well-done observational studies:

  1. large effect
  2. dose-response

The third and last domain is a little trickier to explain to those without a methodology background. It has to do with confounding, which itself may need explaining. Confounding occurs when something is associated with both the intervention and the outcome of interest. For example, a study I published in the early 90s sought to evaluate the impact of a specific herbicide exposure on a specific eye disease. An earlier, unpublished study had looked at all eye diseases in the cohort and found an association between long-term exposure to the herbicide and development of cataracts. That study did not control for age. Increasing age was a confounder, since cataracts are more common as we age and long-term exposure is also associated with increasing age.

Confounding is a known problem for all studies and especially for observational designs. Randomization is used in trials in an attempt to prevent confounding (which may or may not be successful, but that's another topic for another time). In observational designs, matching is used to prevent confounding. If we control for enough of the known confounders for the outcome, such as age (a common one), and sex, race, SES, or educational level if relevant, we are also hoping to match for confounders we don't know about. Confounding can also be adjusted for in the analysis through stratification or multivariate analysis.

Now that you have a bit of an understanding of confounding, here are the rules for upgrading.

Did you find an effect, even though all the known confounders should have reduced or eliminated an effect? If so, consider upgrading.

A common example is the study of an intervention where the sickest patients got the intervention and yet they improved more than the comparison group. The confounding factor of sicker patients should have reduced the impact of the intervention and yet it did not. So you would upgrade confidence in the evidence.

Of course, consider the converse: did you fail to find an effect even though the known confounding factors should have increased the effect? Again, you may decide to upgrade confidence in the evidence.

So there are 3 reasons to upgrade:

  1. Large effect
  2. Dose-response
  3. All known confounding should be working against the direction of the observed effect

That's it for upgrading in this very brief overview.

You know where to find more detail - GRADE Handbook

TheEvidenceDoc December 1, 2017