Watson, Artificial Intelligence and the Evidence for Better Healthcare

My local PBS station is running the special on IBM Watson and Jeopardy again. As you may recall, Watson dominated the game against Brad Rutter and Ken Jennings, with particular strength on purely factual questions. I've watched the PBS special and the game several times now and never tire of watching Watson learn, just as I never tire of watching children (or adults) learn. Mistakes reveal more about the logic process than correct answers do.

Last month IBM, Memorial Sloan-Kettering and WellPoint announced commercial applications of Watson's work in healthcare that recommend and authorize cancer treatment options, providing confidence estimates for each option along with the supporting evidence Watson used to make the recommendation.

I'd love to be able to watch Watson at this work; the supporting estimates and evidence for the right answers would be as fascinating as those for the wrong ones. I do hope someone is watching and learning along with Watson.

You see, those of us who've spent a lifetime in medical evidence know there's very little that's straightforward, black and white, or factual about our current best evidence. Evidence accumulates, and our interpretation of it changes as the data and our methods for accumulating and evaluating it grow and strengthen. A single new study is not accepted as the right answer, but as a piece of new information to blend with the old. And the blending itself is a challenge, for a wide variety of reasons. Unlike the measurements used in manufacturing, such as lengths, widths and temperatures, measures of disease, and even disease definitions and how we diagnose them, change with new knowledge and new technologies. Multiple treatments and supporting care options are applied and changed simultaneously, making attribution of benefit to a single intervention challenging. And patients themselves react differently both to disease processes and to treatments for the disease.
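To make that notion of blending a bit more concrete, here is a minimal sketch of the standard inverse-variance weighting behind a simple fixed-effect meta-analysis, in which a new study's estimate is combined with, rather than substituted for, earlier ones. The study labels and numbers are invented for illustration; real syntheses must also grapple with heterogeneity, random-effects models and study quality.

```python
# Toy illustration of "blending" evidence: a fixed-effect meta-analysis
# pools each study's effect estimate, weighted by its precision (1 / variance).
# The studies and numbers below are invented for illustration only.

studies = [
    # (label, effect estimate, standard error)
    ("older trial A", 0.40, 0.20),
    ("older trial B", 0.25, 0.15),
    ("new study",     0.10, 0.10),  # a new result joins, it does not replace, the old
]

weights = [1 / se**2 for _, _, se in studies]          # precision weights
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                  # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```

Even this toy version shows why one new result rarely overturns the accumulated picture: its influence on the pooled estimate is proportional to its precision, not its novelty.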

I describe only some of the challenges, not because I think they are too great for Watson, but because I think perhaps Watson can help. But I'm not sure he's ready for clinical practice just yet.

The reality is that in healthcare we face considerable uncertainty in many diagnostic and care decisions. The evidence is weak, absent or not yet characterized for many conditions. So I would welcome Watson as a partner in the process of developing evidence-based guidance for healthcare. Watson could begin by assisting us with evidence synthesis through systematic review, the foundational step in developing evidence-based clinical practice guidelines - see the Institute of Medicine (IOM), the Agency for Healthcare Research and Quality (AHRQ) and the National Institute for Health and Clinical Excellence (NICE) for more detail on systematic evidence review. Watson could search for relevant studies, published and unpublished, applying criteria for including and excluding studies based on the specific diagnostic or treatment question and the eligible patients and care settings. Watson would, of course, have to pursue many rounds of "learning" just like those for Jeopardy. I'd love to watch that process.

And once the eligible studies are selected, could Watson be taught to evaluate their quality? Oh, did I forget to mention that not all studies, even those published, are of good quality, and that some have such serious limitations in their design or execution that their findings are invalid? We don't want to include severely biased results in our evidence composite, as they could mislead our understanding of the effectiveness or safety of the clinical action. And for studies with only minimal to moderate bias, we need to evaluate and rate the seriousness of their limitations so we can attempt to understand how those errors influence the findings. Oh, and at various stages throughout the process of evidence identification, selection, and evaluation, Watson would need clinical input to address importance and relevance to the care of patients.
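To give a flavor of the selection and quality-evaluation steps, here is a purely illustrative sketch in code. The clinical question, field names and studies are hypothetical, and a real systematic review (per the IOM, AHRQ and NICE methods noted above) demands far more judgment than any simple filter can capture.

```python
# Purely illustrative sketch of two systematic-review steps:
# (1) screen studies against inclusion/exclusion criteria, and
# (2) set aside studies whose risk of bias is too serious to pool.
# All field names, criteria, and records here are hypothetical.

from dataclasses import dataclass

@dataclass
class Study:
    title: str
    population: str        # e.g. "adults with condition X"
    intervention: str      # the treatment being evaluated
    setting: str           # e.g. "outpatient", "inpatient"
    risk_of_bias: str      # "low", "moderate", or "high"

def meets_criteria(s: Study, question: dict) -> bool:
    """Step 1: include only studies that match the clinical question."""
    return (s.population == question["population"]
            and s.intervention == question["intervention"]
            and s.setting in question["settings"])

def usable_for_synthesis(s: Study) -> bool:
    """Step 2: exclude severely biased studies; the rest still need grading."""
    return s.risk_of_bias in ("low", "moderate")

question = {"population": "adults with condition X",
            "intervention": "drug Y",
            "settings": {"outpatient"}}

candidates = [
    Study("Trial 1", "adults with condition X", "drug Y", "outpatient", "low"),
    Study("Trial 2", "adults with condition X", "drug Y", "outpatient", "high"),
    Study("Trial 3", "children with condition X", "drug Y", "inpatient", "low"),
]

included = [s for s in candidates if meets_criteria(s, question)]
pooled_candidates = [s for s in included if usable_for_synthesis(s)]
print([s.title for s in pooled_candidates])   # only Trial 1 survives both steps
```

The point is not that screening reduces to a filter, but that the criteria can be made explicit enough for a system like Watson to apply them consistently, with clinicians reviewing what falls on either side of the line.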

Clinical specialty societies and other organizations that develop clinical practice guidelines both welcomed and bemoaned the release of the IOM standards for evidence-based guideline development. They welcomed the explicit, transparent methods but bemoaned the time and resources necessary to follow those standards. Patients and providers (and payers) are waiting for those evidence-based guidelines to be developed, maintained and updated for currency. It can take a year or more to follow the process, all while technology and clinical practice move forward, so guidelines may be outdated by the time they are produced. Meanwhile, health information technology, including clinical decision support systems and electronic health records, is automating current guidelines that may be based on expert opinion rather than an examination of all the evidence.

It's certainly not elementary, my dear Watson. Are you interested?

 TheEvidenceDoc