Composite Endpoints - Canny or cunning use of #healthoutcomes data?

You are on a guideline panel and you're at the PICO (Population - Intervention - Comparison - Outcome) stage of development. It's time to choose important outcomes.

A reminder - important outcomes are those that matter to the patients affected by the disease or condition. So for studies of diabetes, patient-important outcomes are things like premature mortality or heart attack, but not blood sugar levels. Lowering blood sugar is an intermediate step on the path to better health for diabetics, so it would be considered a surrogate or intermediate outcome. It only indirectly measures what we are interested in, so the evidence would not be rated as strongly as evidence for outcomes of direct importance.

So how do composite endpoints fit into this? What are composite endpoints? Composite endpoints (CEP), or composite outcomes, combine several distinct endpoints (for example, death, heart attack, and stroke) into a single outcome measure. They are used in some clinical trials and are particularly common in cardiology trials.

According to a systematic review by Ferreira-Gonzalez et al, the most common reasons cited for using CEP are the smaller required study size and the ability to evaluate the net effect of an intervention. Avoiding adjustments for multiple comparisons was also cited as a rationale. Cited disadvantages included misinterpretation when the components differed in patient importance or in the size and direction of the treatment effect.
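To make the sample-size rationale concrete, here is a minimal sketch using the standard normal-approximation formula for comparing two proportions. The event rates and effect size are invented for illustration, not drawn from any trial:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided comparison
    of two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treatment) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control)
                        + p_treatment * (1 - p_treatment))) ** 2
    return ceil(num / (p_control - p_treatment) ** 2)

# Invented rates: the same 25% relative risk reduction applied to a
# rare critical endpoint and to a more frequent composite endpoint.
print(n_per_arm(0.02, 0.015))   # mortality alone: ~10,800 per arm
print(n_per_arm(0.10, 0.075))   # composite endpoint: ~2,000 per arm
```

Because the composite occurs roughly five times as often as the rare critical endpoint, the same relative effect can be detected with roughly a fifth of the patients per arm.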

A systematic review by Cordoba et al examined 114 RCTs published in 2008 that used CEP and found that changes in the definition of the composite outcome during the trials were common. Selection of components was often not pre-specified, and definitions were inconsistently described throughout the study reports. A third of the publications failed to report treatment effects for the individual components. The less important components often had higher event rates and larger treatment effects. Cordoba and colleagues recommended that "composite endpoints should generally be avoided, as their use leads to much confusion and bias. If composites are used, trialists should follow published guidance."

Fortunately, there is published guidance to direct decisions on how to create composite endpoints. We can use this guidance to determine whether a composite endpoint is valid and can be used in our guideline development.

Freemantle and colleagues use examples to demonstrate the problems with composite outcomes, including the presumption that the benefit described can be attributed to all the components when, in fact, it is derived from only one. The opposite also occurs: a positive treatment effect on a critical outcome can be diluted by a component with no effect. They also provide data showing that CEP that include clinician-driven outcomes - where physicians order the intervention - were twice as likely to be associated with statistically significant results for the composite outcome. Examples include revascularization, hospitalization, and initiation of new therapy.
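A small numeric sketch of the dilution problem, with invented counts (not Freemantle's data): a treatment that clearly reduces mortality can look unimpressive once an unaffected, more common component is folded into the composite.

```python
# Invented event counts per 1,000 patients per arm, for illustration only.
events = {
    "death":           {"control": 40,  "treatment": 24},   # 40% relative reduction
    "hospitalization": {"control": 200, "treatment": 200},  # no effect
}

for arm in ("control", "treatment"):
    # Simplifying assumption: no patient experiences both events.
    composite = sum(component[arm] for component in events.values())
    print(arm, composite)   # control 240, treatment 224

# Death alone: 40 -> 24, a 40% relative reduction.
# Composite: 240 -> 224, only a ~7% relative reduction; the benefit on
# the critical outcome is diluted by the unaffected common component.
```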

Montori and colleagues have produced an educational paper using examples to summarize three major considerations for evaluating the validity of composite endpoints. They are:

  1. Ensure that the component endpoints are of similar importance to patients. Most patients would not equate serious endpoints like death or heart attack with the need for a change in therapy.
  2. Ensure that the more and less important endpoints occur with similar frequency. If the more important events are uncommon (as is often the case for mortality), the composite measure is likely to be driven by the more common though less important events (see the numeric sketch after this list).
  3. Ensure that the component endpoints are likely to have similar risk reduction. Individual components should be similarly affected by the intervention.
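To illustrate consideration 2, here is a brief sketch with invented numbers: when the critical component is rare and unaffected, the composite can still show a large apparent benefit that comes entirely from the common, less important component.

```python
def relative_risk(control_events, treatment_events, n=1000):
    """Risk in the treatment arm relative to the control arm."""
    return (treatment_events / n) / (control_events / n)

# Invented counts per 1,000 patients per arm, for illustration only.
deaths = {"control": 20,  "treatment": 20}   # rare, no effect (RR = 1.0)
revasc = {"control": 100, "treatment": 60}   # common, large effect (RR = 0.6)

composite_control   = deaths["control"] + revasc["control"]       # 120
composite_treatment = deaths["treatment"] + revasc["treatment"]   # 80

print(relative_risk(composite_control, composite_treatment))  # ~0.67
# The composite shows a 33% relative reduction, driven entirely by
# revascularization; mortality is unchanged.
```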

There's another challenge when systematically collecting and summarizing the evidence on a given topic. Since CEP definitions frequently change, even within studies, it is very difficult to find standard definitions used across studies. This limits your ability to collect and combine the data from multiple studies for your guideline.

The easy answer for many guideline panels will be simply to exclude CEP from outcome selections. But if you decide to consider their importance for your topic, you now have some guidance for evaluating them.

And if you want to ponder that proposed benefit of using CEP to evaluate net effect by accounting for competing risks, I suggest you read this systematic review by Manja and colleagues.

And though this very brief summary is directed at guideline developers, it wouldn't hurt trialists to learn a bit more about CEP.

TheEvidenceDoc August 7, 2017

 

Are you ready to be a panelist in #evidencebased guideline development?

Yesterday I blogged about the difference between experts and expertise on guideline panels. I've been focusing on clinical panelists who may think they've been invited to participate just because they are experts in a clinical field.

WRONG!

IT'S NOT THAT YOU ARE AN EXPERT.

IT'S THAT YOU HAVE EXPERTISE.

You have expertise to contribute to the discovery, evaluation and integration of relevant evidence to address important clinical questions.

LET ME REPEAT - You are not being invited because of your opinions about the best care options for disease x. 

You are invited because during your experience in providing care for patients with disease x you have pondered questions about how to better diagnose people with the disease early enough to intervene in the disease progression. Or you've experimented in many n-of-1 trials with different treatment regimens to eliminate symptoms or slow disease progression.

Do you see where I'm going here? Your expertise in thinking about the best ways to care for people with this disease is what is sought after. Your openness to considering new ways of preventing, diagnosing, and treating disease x, and your ability to help construct questions that clinicians seek to answer, are what guideline developers seek.

Would you like to carefully and critically examine the science of what we currently know about disease x? Would you like to discover the research gaps? Are you innately curious?

If so, you are perfect for a guideline panel.

Guideline developers are not seeking clinicians with answers. They seek clinicians with sufficient experience to have lots of questions.

Bring your questions. Help guide the development of guidelines that will make a difference in patient care.

TheEvidenceDoc June 16, 2017

 

Expert vs. Expertise in #evidencebased guideline development

In the past, guidelines were often developed by a panel of experts who sat around tables, discussed their experiences, and came to consensus on best clinical practices based on those experiences. It was tempting to rely solely on experts, believing their experience leads them to better conclusions. Their authority can be reassuring, but those with years of experience tend to overgeneralize from it and to overlook evidence that contradicts it. The wisdom of experts may be tried and tested, but the evidence is limited to the rather biased experience of those few clinicians. Unfortunately, some things get in the way of a critical and fair appraisal of their observations – things like:

    •    The resilience of the human body, which recovers from many bacterial, viral, physical, and even clinically inflicted assaults. (Bloodletting, anyone?)

    •    The inability of isolated physicians to fairly test interventions through controlled experimentation. A single practice often sees too few cases to study scientifically, so physicians rely on individual case series. “Let’s try this and see if it works.” Unfortunately, a trial of one is just that: it is heavily biased by individual variation, chance occurrences, and human resilience.

    •    The human brain, which looks for and finds patterns even when none exist, and resists changing its belief in those patterns.

We now have greater opportunity to expand on the limited experiences of one or a few clinicians by compiling the collective experiences of clinicians and their patients worldwide. Careful capture of complete data about patients, the concurrent treatments they receive, the setting where care was delivered, and other factors can aid our evaluation. Additionally, we can pool data on diseases and conditions that occur too infrequently in a single clinician’s practice to make meaningful discoveries about care and cure. These larger pools of data, when properly collected, can help us better find and interpret patterns of disease and treatment.

It is important to note that this improvement in our ability to gather data does not mean that an evidence-based approach eliminates expertise.

IMPORTANT - FOLLOWING AN EVIDENCE-BASED APPROACH DOES NOT REQUIRE THAT YOU CHOOSE BETWEEN EXPERTISE AND EVIDENCE.  IT’S NOT EITHER/OR. IT IS AND.

An evidence-based approach builds on clinical expertise. Instead of asking experts for answers, it asks them to help. It asks clinical experts to:

    •    Identify important problems to solve and questions to ask

    •    Build on research evidence by sharing the data from their experience

    •    Evaluate and interpret evidence for clinical relevance and importance

An evidence-based approach multiplies the contributions of clinical experts by combining research evidence from the experience and expertise of many clinicians and their patients. Local clinical experts play an important role in interpreting the accumulated evidence in the context of your local environment. Choosing an evidence-based approach means you can integrate available data from thousands or millions of care interactions with the experience of your available local expertise. It is AND, not OR.

Evidence-based guidance does not rely solely on expert opinion but integrates clinical expertise.

TheEvidenceDoc June 15, 2017