Friday's news announced that the Look AHEAD trial of lifestyle intervention to reduce CVD events in patients with type 2 diabetes was stopped early for futility. Unfortunately, we don't yet have the final report or data with which to evaluate this surprising finding.
Part of the surprise is that the team had published numerous reports that they say show significant improvements in intermediate and patient-centered outcomes (disclaimer: they are so numerous that TheEvidenceDoc has not yet reviewed the studies and is only sharing the author-reported findings from abstracts). A PubMed search for the phrase "look AHEAD research group" returns 82 studies to date. The abstracts of some of these studies report significant improvements in the intervention group compared with the control group in the risk-factor targets that physicians seek to achieve in diabetic patients: significant reductions in systolic and diastolic blood pressure, triglycerides, and body weight, and increases in HDL. Some study abstracts also reported significant improvements in patient-centered outcomes such as increased mobility and decreased incidence of depression and new-onset urinary incontinence.
So what are we to make of the apparent inconsistency between intermediate improvements and no difference in CVD?
Perhaps part of the answer lies in a methods paper published earlier this year by the research group. This paper is the very public oops that most honest epidemiologists will admit to experiencing at some point in their research career: misestimating how often the outcome of interest occurs in the study population when doing the sample size calculations. For non-epidemiologists, this is a critical component of the power calculations that help us decide how many study subjects a study needs in order to detect a meaningful difference in the outcome between the intervention and comparison groups. If you overestimate how often the outcome occurs in the non-intervention group, you won't plan for enough study subjects to detect an important difference.

For this study, the team selected cardiovascular events as their primary outcome, defined as fatal and non-fatal MI and stroke. They estimated that these events would occur at a rate of 3.125% per year in the absence of the intervention, and that to detect an 18% difference between the intervention and comparison groups, they would need a total of 5,000 subjects followed for a maximum of 11.5 years. The oops in this case is that at 3 years, the event rate in the comparison group was only 0.7%. Oops. In retrospect, the team notes that all trial participants were selected to be healthier than the general population of diabetics to ensure they could participate in the lifestyle intervention, which may have lowered CVD event rates. Other improvements in the care of diabetes may also have reduced background CVD event rates.
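As a back-of-the-envelope illustration of why this matters so much, consider a simple two-proportion sample-size formula. This is not the trial's actual method (a trial like this would use time-to-event analyses), and the power, alpha, and average follow-up assumed below are my own assumptions for illustration only. Still, it shows how the required enrollment balloons when the control-group event rate falls from the planned 3.125% per year to something like the observed 0.7% over 3 years:

```python
from statistics import NormalDist  # Python standard library (3.8+)

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Subjects per arm to detect control risk p1 vs intervention risk p2,
    using the standard two-proportion sample-size formula (two-sided test)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~1.28 for 90% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

YEARS = 8  # assumed average follow-up -- an illustrative guess, not the trial's

def cumulative(annual_rate):
    """Cumulative event risk over YEARS at a constant annual rate."""
    return 1 - (1 - annual_rate) ** YEARS

# Planning assumption: 3.125%/year in controls, 18% relative reduction
n_planned = n_per_group(cumulative(0.03125), cumulative(0.03125 * 0.82))

# Observed: ~0.7% over 3 years in the comparison group (~0.23%/year)
observed_annual = 0.007 / 3
n_observed = n_per_group(cumulative(observed_annual),
                         cumulative(observed_annual * 0.82))

print(round(2 * n_planned))   # roughly 5,000 total, close to the planned enrollment
print(round(2 * n_observed))  # tens of thousands -- far beyond what was enrolled
```

Under these (assumed) inputs, the planned event rate yields a total of roughly 5,000 subjects, while the observed rate would demand more than ten times that, which is why the low event rate leaves the trial so underpowered.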
The team did employ rigorous methods, described in the paper, both to change the protocol and to reduce bias from those changes. Ultimately, they chose to extend the length of the trial by 24 months and to add hospitalized angina to the primary outcome.
So this halt is "early" only in reference to that extension. Even with it, the study still lacked sufficient power to detect an important difference in CVD events between the two groups.
There is still value in examining these results alongside all the other findings from the team. I hope we will all get to read the full report, even though many negative trials are never published. -----TheEvidenceDoc
Full disclosure - for an example of an oops in estimating background prevalence rates from Dr. Ireland's own career, see "Evaluation of ocular health among alachlor manufacturing workers." In her defense, it was really hard to find prevalence rates for pigmentary dispersion syndrome.