#QualityImprovement and #PDSA cycles - how to decide what to DO

Plan-Do-Study-Act (PDSA) cycles are a quality improvement tool used to test process changes. Not all change leads to improvement. To evaluate a change's impact on your patients and the care you provide them, you need to test the change.

The Institute for Healthcare Improvement (IHI) uses the Model for Improvement developed by Associates in Process Improvement. Both sites provide resources for using this method, including this worksheet from IHI.

To summarize briefly: first, you develop a PLAN for how you will test the change, including how you will measure it. Then you DO the change on a small scale or pilot, measuring the outcome before and after implementing the change. After completing the pilot, you STUDY the results of your test by carefully examining the before and after measurements. Finally, you ACT on what you learned, generally by modifying and re-testing the change in a new PDSA cycle. You repeat until you are convinced that the change consistently creates more benefit than harm, which constitutes improvement.
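As a rough illustration of the STUDY and ACT steps, here is a minimal sketch in Python. The metric (average clinic wait time in minutes), the function name, and all numbers are hypothetical, not from any real program:

```python
def study_results(baseline, pilot):
    """STUDY: compare measurements taken before and during a small-scale pilot."""
    before = sum(baseline) / len(baseline)
    after = sum(pilot) / len(pilot)
    # ACT: adopt the change, adapt it and re-test, or abandon it.
    if after < before:  # here, lower is better (e.g. wait time in minutes)
        decision = "adopt, or expand the pilot"
    else:
        decision = "adapt the change and run another PDSA cycle"
    return before, after, decision

# Hypothetical average wait times (minutes): three weeks before the
# change, then three weeks during the pilot
before, after, decision = study_results([42, 45, 44], [38, 36, 37])
```

In practice, ACT also means checking that the change did not create harm elsewhere before adopting it.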

There are many great resources on many different websites to help guide you in the PDSA process.

But the biggest challenge for you will be deciding WHAT to do.

STEP 1 - Know your organization’s most pressing problems. Hint - start with your data. Data includes your performance metrics, but it also includes your survey data from patients and staff.

STEP 2 - Prioritize - No organization has enough resources to change everything at once. Besides exhausting your staff, you’ll also contaminate your tests of change. When you change everything at once, you can’t determine which change produced the improvement, if there was improvement. And changing everything at once may produce no overall improvement, hiding real improvement that could be generated from one of the changes.

STEP 3 - Find evidence-based innovations to address your priority problems. If you use innovations that have already been tested and found to work, you will have a head start. You will waste less time on extra PDSA cycles, and you are more likely to reduce unintended harm to your patients and staff from the change.

All this is necessary for successful PLANning of what to DO.

In the next post I’ll dig deeper into the process of how to decide what to do.

© TheEvidenceDoc 2018

Healthcare #qualityimprovement is doing the right things right

 We can improve the quality and safety of the healthcare we deliver to patients.

That’s the premise behind quality improvement and patient safety programs, initiatives and interventions. The study of quality improvement interventions, called improvement science, is relatively new. Improvement science is just beginning to evaluate the performance of the tools and techniques. Many of the techniques, such as Plan-Do-Study-Act (PDSA) cycles, Lean, and Six Sigma, come from manufacturing and seek to improve efficiency by doing things right.

While it is important to get better at delivering the right care,

it is essential to first know what is the right care to deliver.

The assumption behind many of the manufacturing improvement processes is that we just need to get better at delivering the right care.  But all too often, we honestly don’t know what the right care is.

The right care should provide each person with care that is effective - that is, care that improves their life compared to what would have happened had they not sought care.

What may surprise many people, even some working in health care, is that many of the clinical actions we consider standard have not been shown to be more effective than other care options, or even no care.

Evidence-based care seeks to fill this gap in knowing by:

First, using data to identify what we know

Then, developing and testing solutions for what we don’t.

We identify what we know using data. We call the end result of the rigorous collection and critical analysis of all relevant data “evidence-based”. Good evidence-based analysis tells us how much confidence to have in a particular clinical action. Is there enough evidence to be reasonably sure that the clinical action works? Is there just enough evidence to make a reasonable guess today, one that may change as we accumulate more and better data? Or is there simply not enough evidence to make any reasonable guess?

We use the science of evidence-based methods to help us determine which are the right clinical actions to deliver. We use manufacturing improvement processes to help us get better at doing the right things.

If your quality improvement toolbox only includes one or the other set of tools, you aren’t maximizing your effectiveness and efficiency. The incorporation of evidence-based methods can improve the quality of the care you deliver and encourage innovation while managing risk. Infusing your quality improvement innovations with evidence-based methods means you can begin to take more calculated risk. You can put evidence-based methods to work in your organization to identify the right things to do so that your manufacturing improvement tools can help you then standardize and systematize the best care.

 © TheEvidenceDoc 2016

Is your #FallsPrevention Program Maximizing Inpatient Falls Risk Assessment?

Some patients will fall while they are in the hospital. How many? Bouldin and colleagues used National Database of Nursing Quality Indicators (NDNQI) data to find an overall fall rate in the U.S. of 3.53 falls per 1,000 patient days. The highest rates were in medical units, at 4.03 falls per 1,000 patient days. Overall, one fourth of the falls were associated with injury.(1) Collectively, the problem is large, and is estimated to impact 2% of all hospitalizations.(1)
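For reference, the NDNQI-style rate cited above is simply falls divided by patient days, scaled to 1,000. A minimal sketch; the unit counts below are made up for illustration:

```python
def fall_rate_per_1000(falls, patient_days):
    """Falls per 1,000 patient days, the denominator used in NDNQI reporting."""
    return falls / patient_days * 1000

# e.g. a hypothetical unit with 12 falls over 3,400 patient days in a quarter
rate = fall_rate_per_1000(12, 3400)  # ~ 3.53, near the national average
```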

Beginning in 2008, the Centers for Medicare and Medicaid Services (CMS) decided to stimulate U.S. hospital efforts to reduce inpatient falls by ending payments to hospitals for the additional costs associated with injury from inpatient falls.(2) As a result, fall prevention is one of the top priorities for most hospitals.

Nearly all fall prevention activities include the use of falls risk assessment tools. In the US, wristbands and bed signs are common warning labels used to flag patients these tools identify as being at increased risk of falling. But if risk identification is the main focus of your falls prevention program, you are missing an opportunity to individualize fall prevention activities and reduce inpatient falls.

FALLS RISK ASSESSMENT AS RISK PREDICTOR

Fall risk assessments are not very good at risk prediction.

Does this surprise you? Several comprehensive systematic reviews have evaluated the performance of risk assessment tools for predicting an individual inpatient’s risk of falling.(3,4,5) None of the tools perform better than clinical judgment.

How can this be? You may remember from your epidemiology training that sensitivity and specificity respectively measure the proportion of fallers who tested positive with your tool and the proportion of non-fallers who tested negative. But do you remember the other measures of the value of a screening or diagnostic tool? The positive predictive value is the better measure of the likelihood that a person who tests positive will have the condition. The negative predictive value measures the likelihood that a person who tests negative will not have the condition. Both measures depend on the validity of the tool, and both also depend on the prevalence of the condition, which in this case is falls. How prevalent are falls in your organization?

Let’s look at some data presented in the NICE guideline to see how a tool with sensitivity and specificity of at least 70% (one of the inclusion criteria for studies for the NICE guideline) can have much lower predictive value.

For the Hendrich Fall Risk Model (data from Hendrich et al. 1995, as extracted and reported in the NICE guideline):(5)

Of 338 total patients, 102 fell. The tool correctly labeled 79 of the 102 fallers as at risk of falling, and 169 of the 236 non-fallers as not at risk.

The sensitivity is 79/(79+23) = 77%. The specificity is 169/(169+67) = 72%.

But the positive predictive value is much lower. This measure looks at how many of the patients predicted to fall actually fell.

So the positive predictive value is 79/(79+67) = 54%.

Remember, predictive value depends on condition prevalence, and even though falls are among the most common adverse events in the hospital, most inpatients do not fall.

The negative predictive value, 169/(169+23) = 88%, is much better, but it still misses the opportunity to identify and prevent falls in the 12% of people who were labeled low risk yet fell.
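The worked example above can be checked in code. This is a small illustrative sketch: the 2x2 counts come from the Hendrich data reported in the NICE guideline, while the function names and the low-prevalence scenario at the end are mine.

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table."""
    sensitivity = tp / (tp + fn)  # fallers correctly flagged as high risk
    specificity = tn / (tn + fp)  # non-fallers correctly labeled low risk
    ppv = tp / (tp + fp)          # flagged patients who actually fell
    npv = tn / (tn + fn)          # low-risk-labeled patients who did not fall
    return sensitivity, specificity, ppv, npv

# Hendrich data as reported in the NICE guideline: 102 of 338 patients fell;
# 79 fallers and 169 of the 236 non-fallers were correctly labeled.
sens, spec, ppv, npv = screening_metrics(tp=79, fn=23, tn=169, fp=67)
# sens ~ 0.77, spec ~ 0.72, ppv ~ 0.54, npv ~ 0.88

def ppv_at_prevalence(sens, spec, prev):
    """PPV for a given sensitivity, specificity, and condition prevalence."""
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

# Hypothetical scenario: the same 77% sensitivity and 72% specificity, but
# falls occurring in only 5% of patients; most positive screens are false alarms.
low_prev_ppv = ppv_at_prevalence(0.77, 0.72, 0.05)  # ~ 0.13
```

Note how PPV drops as prevalence drops even when sensitivity and specificity are unchanged, which is why a tool that looks good on those two measures can still mislabel many patients.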

FALL RISK ASSESSMENT AS A TOOL TO IDENTIFY MODIFIABLE FALL RISK FACTORS FOR INDIVIDUALIZED CARE PLANS

Using fall risk assessments merely to label patients as high or low fall risk misses the opportunity to individualize care. Should falls risk assessment be abandoned as a patient safety strategy? No, but using risk assessment as a screen simply to label patients should be. That use forgoes the chance to tailor interventions to each patient’s needs and reduce their risk of falling during the hospital stay.

It’s not just that the quality of the evidence for inpatient risk prediction using any risk assessment tool is low or very low. Risk prediction produces a simple label that by itself does not guide staff action. After identifying a patient as being at increased risk, what is the next action step? Inpatient staff not regularly assigned to the patient, and other staff such as lab, radiology, and other technicians, don’t know how to assist based on a simple warning alert. Patients, their families, and their friends may not understand why the patient is labeled at risk, nor what each of them can do to change that risk. Risk predictions and the resulting simple labels don’t provide actionable interventions. There is also the potential for alert fatigue if too many patients are labeled as fall risks.

There is moderate evidence from several recent systematic reviews that multi-factorial interventions can reduce inpatient falls, when multi-factorial is defined as interventions individually tailored to each patient’s modifiable risk factors.(6-11) How are the factors determined? Risk factors are identified through a risk assessment, whether by tool or by clinical judgment.

If risk assessment is used to identify why the patient is at risk of falling, and is then coupled with interventions directed at those specific risks, it can be effective. These systematic reviews and overviews have concluded there is moderate evidence of the effectiveness of multi-factorial interventions.(6-11) Some of the review authors have lamented that the interventions vary so much from study to study that it is difficult to determine the essential elements. But that is just the point: the interventions must vary because falls are multi-factorial. Inpatient falls can be due to intrinsic patient factors, like instability from reductions in balance, strength, and agility, or from loss of vision. They can arise from factors associated with hospitalization, like unfamiliar surroundings, treatments, and activity restrictions that add to confusion and instability. And they can arise from a combination of the above plus additional factors, like reduced bowel and bladder control leading to fear of toileting needs that can’t be met in the hospital environment.

To reduce preventable falls, interventions must be tailored to individual needs. The needs and plan of action must be communicated clearly to all staff who come in contact with the patient, and to the patient and their loved ones, so that everyone is empowered to reduce the opportunities for that patient to experience a fall.

Partners HealthCare System has tested this approach and successfully reduced falls from 4.18 to 3.15 per 1,000 patient days. Dykes and colleagues used health information technology to combine risk assessment results (they used the Morse scale) with targeted intervention strategies. They developed specific signage and communication tools to clearly communicate to patients and staff the specific risks and the resulting actions to take to reduce falls.(12)

We have sufficient evidence from systematic review and from field-tested studies to stop using risk assessments for simple prediction and to start using them as the foundation to build individualized, multifactorial patient care plans. Is your organization using that evidence to build your fall prevention program?

© TheEvidenceDoc 2015

References:

1. Bouldin ED, Andresen EM, Dunton NE, et al. Falls among Adult Patients Hospitalized in the United States: Prevalence and Trends. J Patient Saf. 2013 March;9(1):13-17.

2. CMS Final Rule Federal Register August 19, 2008. http://www.gpo.gov/fdsys/pkg/FR-2008-08-19/html/E8-17914.htm accessed July 28, 2015.

3. Oliver D, Daly F, Martin FC, McMurdo MET. Risk factors and risk assessment tools for falls in hospital in-patients: a systematic review. Age and Ageing 2004;33:122-130.

4. Haines TP, Hill K, Walsh W, Osborne R. Design-Related Bias in Hospital Fall Risk Screening Tool Predictive Accuracy Evaluations: Systematic Review and Meta-Analysis. J Gerontol 2007;62A:664-672.

5. National Institute for Health and Care Excellence June 2013 Assessment and Prevention of Falls in Older People Developed by the Centre for Clinical Practice at NICE guidance.nice.org.uk/CG161 accessed July 28, 2015

6. Cameron ID, Gillespie LD, Robertson MC, et al. Interventions for preventing falls in older people in care facilities and hospitals. Cochrane Database of Systematic Reviews 2012, Issue 12. Art. No.: CD005465. DOI: 10.1002/14651858.CD005465.pub3.

7. DiBardino D, Cohen ER, Didwania A. Meta-analysis: multidisciplinary fall prevention strategies in the acute care inpatient population. J Hosp Med. 2012;7:497-503.

8. Coussement J, De Paepe L, Schwendimann R, et al. Interventions for preventing falls in acute- and chronic-care hospitals: a systematic review and meta-analysis. J Am Geriatr Soc. 2008;56:29-36.

9. Oliver D, Connelly JB, Victor CR, et al. Strategies to prevent falls and fractures in hospitals and care homes and effect of cognitive impairment: systematic review and meta-analyses. BMJ. 2007;334:82.

10. Shekelle PG, Wachter RM, Pronovost PJ, et al. Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices. Comparative Effectiveness Review No. 211. (Prepared by the Southern California-RAND Evidence-based Practice Center under Contract No. 290-2007-10062-I.) AHRQ Publication No. 13-E001-EF. Rockville, MD: Agency for Healthcare Research and Quality. March 2013. www.ahrq.gov/research/findings/evidence-based-reports/ptsafetyuptp.html accessed July 28, 2015.

11. Miake-Lye IM, Hempel S, Ganz DA, Shekelle PG. Inpatient Fall Prevention Programs as a Patient Safety Strategy A Systematic Review. Ann Intern Med. 2013;158:390-396.

12. Dykes PC, Carroll DL, Hurley A, et al. Fall Prevention in Acute Care Hospitals: A Randomized Trial. JAMA 2010;304:1912-1918.