Is your #FallsPrevention Program Maximizing Inpatient Falls Risk Assessment?

Some patients will fall while they are in the hospital. How many? Bouldin and colleagues used National Database of Nursing Quality Indicators (NDNQI) data to find an overall fall rate in U.S. hospitals of 3.53 falls per 1,000 patient days. The highest rates were in medical units, at 4.03 falls per 1,000 patient days. Overall, one-fourth of the falls were associated with injury.(1) Collectively, the problem is large: falls are estimated to affect 2% of all hospitalizations.(1)

Beginning in 2008, the Centers for Medicare and Medicaid Services (CMS) decided to stimulate U.S. hospital efforts to reduce inpatient falls by ending payments to hospitals for the additional costs associated with injury from inpatient falls.(2) As a result, fall prevention is one of the top priorities for most hospitals.

Nearly all fall prevention activities include the use of falls risk assessment tools. In the U.S., wristbands and bed signs are common warning labels used to flag patients these tools identify as being at increased risk of falling. But if risk identification is the main focus of your falls prevention program, you are missing an opportunity to individualize fall prevention activities and reduce inpatient falls.


Fall risk assessments are not very good at risk prediction.

Does this surprise you? Several comprehensive systematic reviews have evaluated the performance of risk assessment tools for predicting an individual inpatient’s risk of falling.(3,4,5) None of the tools perform better than clinical judgment.

How can this be? You may remember from your epidemiology training that sensitivity and specificity respectively measure the proportion of fallers who tested positive with your tool and the proportion of non-fallers who tested negative. But do you remember the other measures of the value of a screening or diagnostic tool? The positive predictive value is a better measure of the likelihood that a person who tests positive will have the condition. The negative predictive value measures the likelihood that a person who tests negative will not have the condition. Both measures depend on the validity of the tool, but they also depend on the prevalence of the condition, which in this case is falls. How prevalent are falls in your organization?
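The prevalence dependence follows directly from Bayes' theorem. Here is a minimal sketch: the 77% sensitivity and 72% specificity match the Hendrich figures discussed below, while the two prevalence values are illustrative assumptions, not data from any study.

```python
def ppv(sens, spec, prev):
    """Positive predictive value: P(faller | flagged high risk)."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    """Negative predictive value: P(non-faller | flagged low risk)."""
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Same tool (77% sensitivity, 72% specificity) at two illustrative
# prevalences: roughly 30% (as in the Hendrich study sample) vs. 2%.
for prev in (0.30, 0.02):
    print(f"prevalence {prev:.0%}: PPV {ppv(0.77, 0.72, prev):.0%}, "
          f"NPV {npv(0.77, 0.72, prev):.0%}")
```

At 30% prevalence the PPV is about 54%, but at 2% it collapses to about 5%: the identical tool, applied in a lower-risk population, flags mostly patients who will never fall.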

Let’s look at some data presented in the NICE guideline to see how a tool with sensitivity and specificity of at least 70% (one of the inclusion criteria for studies for the NICE guideline) can have much lower predictive value.

For the Hendrich Fall Risk Model (data from Hendrich et al. 1995, as extracted for and reported in the NICE guideline)(5):

Of 338 total patients, 102 fell. The tool correctly labeled 79 of the 102 fallers as high risk and 169 of the 236 non-fallers as low risk.

The sensitivity is 79/(79+23) = 77%. The specificity is 169/(169+67) = 72%.

But the positive predictive value is much lower. This measure looks at how many of the patients predicted to fall actually fell.

So the positive predictive value is 79/(79+67) = 54%.

Remember, predictive value depends on the prevalence of the condition; even though falls are among the most common adverse events in the hospital, most inpatients do not fall.

The negative predictive value, 169/(169+23) = 88%, is much better, but it still misses the opportunity to identify and prevent falls in the 12% of people labeled low risk who nonetheless fell.
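The whole worked example can be checked in a few lines of Python, using the four cells of the 2x2 table above (79 correctly flagged fallers, 23 missed fallers, 67 false alarms, 169 correctly cleared non-fallers):

```python
# 2x2 counts for the Hendrich Fall Risk Model (NICE guideline data)
tp, fn = 79, 23    # fallers flagged high risk / fallers missed
fp, tn = 67, 169   # non-fallers falsely flagged / correctly cleared

sensitivity = tp / (tp + fn)   # proportion of fallers correctly flagged
specificity = tn / (tn + fp)   # proportion of non-fallers correctly cleared
ppv = tp / (tp + fp)           # flagged patients who actually fell
npv = tn / (tn + fn)           # cleared patients who did not fall

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")
```

This reproduces the figures above: 77% sensitivity and 72% specificity, but only 54% PPV and 88% NPV.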


Using fall risk assessments only to label patients as high or low fall risk misses the opportunity to individualize care. Should falls risk assessment be abandoned as a patient safety strategy? No, but the use of risk assessment as a screen to simply label patients should be abandoned. That use misses the opportunity to tailor interventions to patient needs and reduce the risk of falling during the hospital stay.

It’s not just that the quality of the evidence for inpatient risk prediction using any risk assessment tool is low or very low. Risk prediction produces a simple label that by itself does not guide staff action. After identifying a patient as being at increased risk, what is the next action step? Inpatient staff not regularly assigned to the patient, along with lab technicians, radiology technicians, and other ancillary staff, don’t know how to assist based on a simple warning alert. Patients, their families, and their friends may not understand why the patient is labeled at risk or what each of them can do to change that risk. Risk predictions and the resulting simple labels don’t provide actionable interventions. There is also potential for alert fatigue if too many patients are labeled with fall risk.

There is moderate evidence from several recent systematic reviews that multi-factorial interventions can reduce inpatient falls, when multi-factorial is defined as interventions individually tailored to each patient’s modifiable risk factors.(6-11) How are the factors determined? Risk factors are identified through the use of a risk assessment, whether a tool or clinical judgment.

If risk assessment is used to identify why the patient is at risk of falling and is then coupled with interventions directed at those specific risks, it can be effective. Some of the review authors have lamented that the interventions vary so much from study to study that it is difficult to determine the essential elements. But that is just the point: the interventions must vary because falls are multi-factorial. Inpatient falls can be due to intrinsic patient factors, like instability from reduced balance, strength, and agility, or from loss of vision. They can arise from factors associated with hospitalization, like unfamiliar surroundings, treatments, and activity restrictions that add to confusion and instability. And they can arise from combinations of these plus additional factors, like reduced bowel and bladder control leading to fear of toileting needs that can’t be met in the hospital environment. To reduce preventable falls, interventions must be tailored to individual needs. The needs and plan of action must be communicated clearly to all staff who come in contact with the patient, and to the patient and their loved ones, so that everyone is empowered to reduce the opportunities for that patient to experience a fall.

Partners HealthCare System has tested this approach and successfully reduced falls from 4.18 to 3.15 per 1,000 patient days. Dykes and colleagues used health information technology to link risk assessment results (they used the Morse scale) with targeted intervention strategies. They developed specific signage and communication tools to clearly convey to patients and staff the specific risks and the resulting actions to take to reduce falls.(12)

We have sufficient evidence from systematic reviews and from field-tested studies to stop using risk assessments for simple prediction and to start using them as the foundation for individualized, multi-factorial patient care plans. Is your organization using that evidence to build your fall prevention program?

TheEvidenceDoc 2015


1. Bouldin ED, Andresen EM, Dunton NE, et al. Falls among adult patients hospitalized in the United States: prevalence and trends. J Patient Saf. 2013;9(1):13-17.

2. CMS Final Rule. Federal Register, August 19, 2008. Accessed July 28, 2015.

3. Oliver D, Daly F, Martin FC, McMurdo MET. Risk factors and risk assessment tools for falls in hospital in-patients: a systematic review. Age Ageing. 2004;33:122-130.

4. Haines TP, Hill K, Walsh W, Osborne R. Design-Related Bias in Hospital Fall Risk Screening Tool Predictive Accuracy Evaluations: Systematic Review and Meta-Analysis. J Gerontol 2007;62A:664-672.

5. National Institute for Health and Care Excellence. Assessment and Prevention of Falls in Older People. Developed by the Centre for Clinical Practice at NICE; June 2013. Accessed July 28, 2015.

6. Cameron ID, Gillespie LD, Robertson MC, et al. Interventions for preventing falls in older people in care facilities and hospitals. Cochrane Database of Systematic Reviews 2012, Issue 12. Art. No.: CD005465. DOI: 10.1002/14651858.CD005465.pub3.

7. DiBardino D, Cohen ER, Didwania A. Meta-analysis: multidisciplinary fall prevention strategies in the acute care inpatient population. J Hosp Med. 2012;7:497-503.

8. Coussement J, De Paepe L, Schwendimann R, et al. Interventions for preventing falls in acute- and chronic-care hospitals: a systematic review and meta-analysis. J Am Geriatr Soc. 2008;56:29-36.

9. Oliver D, Connelly JB, Victor CR, et al. Strategies to prevent falls and fractures in hospitals and care homes and effect of cognitive impairment: systematic review and meta-analyses. BMJ. 2007;334:82.

10. Shekelle PG, Wachter RM, Pronovost PJ, et al. Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Comparative Effectiveness Review No. 211. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2007-10062-I.) AHRQ Publication No. 13-E001-EF. Rockville, MD: Agency for Healthcare Research and Quality; March 2013. Accessed July 28, 2015.

11. Miake-Lye IM, Hempel S, Ganz DA, Shekelle PG. Inpatient fall prevention programs as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:390-396.

12. Dykes PC, Carroll DL, Hurley A, et al. Fall prevention in acute care hospitals: a randomized trial. JAMA. 2010;304:1912-1918.


Missed my 2013 presentation on #RapidReview at G-I-N? No worries, nothing has changed

In 2013 I presented “If Rapid Reviews are the Answer, What is the Question?” at the Guidelines International Network (G-I-N) conference in San Francisco. My interest in rapid reviews began in New York the year before, at the Evidence-Based Guidelines Affecting Policy, Practice and Stakeholders (EGAPPS) conference. Several colleagues shared that they were also concerned about the confusion of adding rapid review to our nomenclature so soon after the Institute of Medicine’s standardization of systematic review methods. This prompted me to search for standards describing:

o   What is a rapid review?

o   When should it, or shouldn’t it, be used?

o   How does it differ from systematic or other evidence reviews?

I shared what I learned at the conference.

Now two years later, I wondered if we are closer to answers.

If you start with the simplest question, “What defines a rapid review?” it’s tempting to provide the simplest answer. It must be speed in completing the review, right? But this doesn’t address how the speed is achieved. What parts of the review become rapid? And what is meant by rapid? As you’ll see, that too is relative.

Several others have looked at what a rapid review might be. At the time of my presentation, I included three systematic looks at rapid reviews.(1-3)

Watt defined a rapid review as a health technology assessment or systematic review that takes one to six months to produce. Ganann defined it as a literature review that accelerates systematic review, and Harker was most generous, allowing anything that calls itself a rapid review. The results were uniform in finding variability in:

o   Methods

o   Uses

o   Standards

o   Time to completion

Most surprising was the finding that the time frame for “rapid” varies considerably and is relative. Harker reported a mean time for completion (10.42 months) that was not much shorter than that for systematic review!

All authors found that rapid review developers made different choices about where to achieve speed. Choices such as:

o   Narrowing breadth of review question

o   Limiting sources searched

o   Limiting time-frame for search

o   Eliminating or limiting quality review

o   Limiting depth of analysis

In many instances, the rapid review developers were not fully transparent about the potential for bias introduced by each of these shortcuts.

All three of these authors had built an assumption into their assessments: that a rapid review is a faster systematic review. But is it? I'd heard presentations by organizations that have implemented rapid reviews using a technique considered by others, like Cochrane, to be an overview of reviews. Instead of rapidly assessing primary literature, they find and synthesize secondary literature, specifically existing systematic reviews.

That led me to conduct an internet search for rapid reviews created by various organizations. I confirmed that many developers of rapid review products were not reviewing primary literature but were evaluating secondary literature. So it wasn’t always about speedy creation of a de novo systematic evidence review; it was often about creating a user-friendly assessment of existing evidence, usually from prior systematic reviews. The commonality was what precipitated the rapid review: generally a user request for a quick answer, often to support policy decisions.

So a Rapid Review could be a:

o   Type of review that uses shortcuts reducing rigor and increasing the risk of introducing bias

o   Translated, user-friendly product using existing systematic reviews

o   Transformed systematic review process that shortens time spent in production without sacrificing rigor

Rapid review may be a label attached to a quick-and-dirty process that introduces selection bias in the search for evidence and information bias in its evaluation.

Or it may mean a systematic appraisal of secondary research designed to meet user needs.

Or it may mean improved process methods that accelerate the systematic review timeline without reducing rigor: increasing available resources, implementing leaner processes during the development phase, or automating steps where the technology is ready.

Is rapid review a phase of the systematic review process?

To further complicate the issue, I did find that some organizations use the label rapid review to describe a kind of quick scoping review, used to look for major trends or patterns in research. These developers generally acknowledge that their review was not systematic but rather a pre-systematic review, indicating the need for, or feasibility of, a thorough review. In that context, the rapid review could fit before a systematic review. The more common use by organizations, however, was as a translated or aggregated product built from existing systematic reviews; in that use, the rapid review was more like a knowledge translation product.

In February of this year, the Agency for Healthcare Research and Quality Evidence-based Practice Centers' Methods Workgroup released "An Exploration of Methods and Context for the Production of Rapid Reviews"(4) to address these issues. They combined a literature review with key informant interviews and confirmed previous findings: rapid review has no standard meaning, and the products are highly variable.

Here are my suggestions based on what I’ve learned.

1.     Stop using the label rapid review. It has no meaning and adds to confusion about evidence reviews.

2.     Use the label scoping review when a user wants a quick first look at what evidence may exist for their question.

3.     Use either the label "overview of reviews" or "systematic review of systematic reviews" when describing an evaluation of existing systematically derived evidence summaries.

4.     Work together to improve systematic review processes so we can create evidence reviews in shorter time frames without sacrificing rigor.


1. Watt A, et al. Rapid reviews vs. full systematic reviews. Int J Technol Assess Health Care. 2008;24(2):133-139.

2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

3. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in Health Technology Assessments. Int J Evid Based Healthc. 2012;10:397–410.

4. Hartling L et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Research White Paper. AHRQ Publication No. 15-EHC008-EF. Rockville, MD: Agency for Healthcare Research and Quality; February 2015.


TheEvidenceDoc July 2015


Watson, Artificial Intelligence and the Evidence for Better Healthcare

My local PBS station is running the special on IBM Watson and Jeopardy again. As you may recall, Watson dominated the game against Brad Rutter and Ken Jennings with strength on purely factual questions. I've watched the PBS special and the game several times now and never tire of watching Watson learn, just as I never tire of watching children (or adults) learn. Mistakes reveal more about the logic process than correct answers do.

Last month, IBM, Memorial Sloan-Kettering and WellPoint announced commercial applications of Watson's work in healthcare that recommend and authorize cancer treatment options, providing confidence estimates for the options along with the supporting evidence Watson used to make the recommendation.

I'd love to be able to watch Watson at this work; the supporting estimates and evidence for the right answers would be as fascinating as those for the wrong. I do hope someone is watching and learning along with Watson.

You see, those of us who've spent a lifetime in medical evidence know there's very little that's straightforward, black and white, or factual about our current best evidence. Evidence accumulates, and our interpretation of it changes as the data and our methods for accumulating and evaluating them grow and strengthen. A single new study is not accepted as the right answer, but as a piece of new information to blend with the old. And the blending itself is a challenge, for a wide variety of reasons. Unlike the measurements used in manufacturing, like lengths and widths and temperatures, measures of disease, and even disease definitions and how we diagnose them, change with new knowledge and new technologies. Multiple treatments and supporting care options are applied and changed simultaneously, making attribution of benefit to a single intervention challenging. And patients themselves react differently both to disease processes and to treatments for the disease.

I describe only some of the challenges, not because I think they are too great for Watson, but because I think perhaps Watson can help. I'm just not sure he's ready for clinical practice yet.

The reality is that in healthcare we face considerable uncertainty in many diagnostic and care decisions. The evidence is weak, absent, or not yet characterized for many conditions. So I would welcome Watson as a partner in the process of developing evidence-based guidance for healthcare.

Watson could begin by assisting us with evidence synthesis through systematic review. This is the foundational step for developing evidence-based clinical practice guidelines; see the Institute of Medicine (IOM), the Agency for Healthcare Research and Quality (AHRQ) and the National Institute for Health and Clinical Excellence (NICE) for more detail on systematic evidence review. Watson could search for relevant studies, published and unpublished, using criteria for including and excluding studies based on the specific diagnostic or treatment question and the eligible patients and care settings. Watson would, of course, have to pursue many rounds of "learning" just like those for Jeopardy. I'd love to watch that process.

And once the eligible studies are selected, could Watson be taught to evaluate their quality? Oh, did I forget to mention that not all studies, even those published, are of good quality, and that some have such serious limitations in their design or execution that their findings are invalid? We don't want to include severely biased results in our evidence composite, as they could mislead our understanding of the effectiveness or safety of the clinical action. And we need to evaluate and rate the seriousness of the limitations in studies with only minimal to moderate bias so we can attempt to understand how the measurement errors influence the findings. Oh, and at various stages throughout the process of evidence identification, selection, and evaluation, Watson would need clinical input to address importance and relevance to the care of patients.

Clinical specialty societies and other organizations that develop clinical practice guidelines both welcomed and bemoaned the release of the IOM standards for evidence-based guideline development. They welcomed the explicit, transparent methods but bemoaned the time and resources necessary to follow them. Patients, providers, and payers are waiting for those evidence-based guidelines to be developed, maintained, and updated for currency. It can take a year or more to follow the process, all while technology and clinical practice move forward, so guidelines may be outdated by the time they are produced. Meanwhile, health information technology, including clinical decision support systems and electronic health records, is automating current guidelines that may be based on expert opinion rather than an examination of all the evidence.

It's certainly not elementary, my dear Watson. Are you interested?