
Relationship between the Veterans Healthcare Administration Hospital Performance Measures and Outcomes

Richard A. Robbins, M.D.1

Richard Gerkin, M.D.2

Clement U. Singarajah, M.D.1

1Phoenix Pulmonary and Critical Care Medicine Research and Education Foundation and 2Banner Good Samaritan Medical Center, Phoenix, AZ

Abstract

Health care organizations have been using performance measures to compare hospitals. However, it is unclear whether compliance with these performance measures results in better healthcare outcomes. Using the Veterans Healthcare Administration Quality and Safety Report, we compared compliance with the acute myocardial infarction, congestive heart failure, pneumonia and surgical process of care measures against traditional outcome measures including mortality rates, morbidity rates, length of stay and readmission rates. Disappointingly, increased compliance with the performance measures was not correlated with better outcomes, with the single exception of improved mortality with higher rates of compliance with echocardiography. We also evaluated the hospital level of care and found that higher complexity of care correlated with compliance with the acute myocardial infarction performance measure, but not with the congestive heart failure, pneumonia, or surgical process of care performance measures. However, level of complexity of care strongly correlated with all cause mortality (p<0.001), surgical mortality (p=0.037) and surgical morbidity (p=0.01). These data demonstrate that compliance with the performance measures is not correlated with improved healthcare outcomes, and suggest that if measures are used to compare hospitals, different measures need to be developed.

Introduction

The Joint Commission recently released “Improving America’s Hospitals: The Joint Commission’s Annual Report on Quality and Safety 2011” (1). This report lists the results of hospital compliance with the Joint Commission’s performance measures. The Joint Commission announced not only that compliance is improving but also identified 405 hospitals as “Top Performers on Key Quality Measures”. In a letter at the beginning of the report, Mark Chassin, President of the Joint Commission, said “This program is designed to be an incentive for better performance on accountability measures and to support organizations in their quest to do better”.

However, there have been several criticisms of the report. First, many hospitals recognized as top hospitals by US News & World Report, HealthGrades Top 50 Hospitals, or Thomson Reuters Top Cardiovascular Hospitals were not included (2). Small community hospitals were overrepresented and large academic medical centers were underrepresented in the report. Chassin commented that this should be "a wake-up call to larger hospitals to put more resources into these programs…". This is surprising since teaching hospitals, which are usually large, urban hospitals, have previously been reported to have lower risk-adjusted mortality rates and lengths of stay (3). Second, it has been pointed out that many of the performance measures are not, or are only weakly, associated with traditional outcomes such as mortality (4-7). Therefore, we compared compliance with the Joint Commission performance measures to mortality rates, morbidity rates, length of stay and readmission rates using the Nation’s largest healthcare system, the Department of Veterans Affairs. The results demonstrate that compliance with the performance measures is not correlated with improved outcomes.

Methods

The study was approved by the Western IRB.

Process Performance Measures. We evaluated hospital performance based on publicly available data from the 2010 VHA Facility Quality and Safety Report (9). These measures evaluate quality of care for acute myocardial infarction, congestive heart failure, pneumonia and the surgical care improvement program (SCIP) during fiscal year 2009. For each of the measures, a hospital’s performance is calculated as the proportion of patients who received the indicated care out of all the patients who were eligible for the indicated care. The quality indicators are based on, and in most cases identical to, those used for the Joint Commission’s Hospital Compare (acute myocardial infarction-Appendix 1; congestive heart failure-Appendix 2; pneumonia-Appendix 3; surgical quality-Appendix 4). Data were also available for each component of the congestive heart failure quality measure (see Appendix 2), and each component was evaluated independently.
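
As a concrete illustration of this calculation, the minimal sketch below computes a compliance proportion for one measure over a toy cohort. The record fields and values are hypothetical and do not reflect the VHA's actual data schema.

# Compliance = patients receiving indicated care / patients eligible for it.
# Field names are illustrative only.
records = [
    {"eligible": True,  "received": True},
    {"eligible": True,  "received": False},
    {"eligible": False, "received": False},  # not eligible: excluded from denominator
    {"eligible": True,  "received": True},
]
eligible = [r for r in records if r["eligible"]]
compliance = sum(r["received"] for r in eligible) / len(eligible)
print(f"Compliance = {compliance:.0%}")  # 67% for this toy cohort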

Disease specific mortality. Hospital-specific, risk-standardized rates of mortality within 30 days of discharge are reported for patients hospitalized with a principal diagnosis of heart attack, heart failure, and pneumonia. For each condition, the risk-standardized (also known as "adjusted" or "risk-adjusted") hospital mortality rates are calculated using mathematical models that use administrative data to adjust for differences in patient characteristics that affect expected mortality rates (10).
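
The report does not reproduce the model specification. For orientation, one common construction for this kind of risk standardization, the CMS Hospital Compare style of hierarchical logistic regression that reference 10 adapts to VA administrative data, computes the rate roughly as follows. This is a sketch of that general approach, not necessarily the VA's exact specification:

\[ \mathrm{RSMR}_h \;=\; \frac{\sum_{i \in h} \hat{p}_i^{\,\mathrm{pred}}}{\sum_{i \in h} \hat{p}_i^{\,\mathrm{exp}}} \times \bar{y} \]

where \(\hat{p}_i^{\,\mathrm{pred}}\) is the predicted probability of 30-day death for patient i including the hospital-specific effect, \(\hat{p}_i^{\,\mathrm{exp}}\) is the same prediction with the hospital effect replaced by the average hospital effect, and \(\bar{y}\) is the overall observed 30-day mortality rate across all hospitals.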

Surgical morbidity and mortality. VA’s Surgical Quality Improvement Program (VASQIP) monitors major surgical procedures performed at VHA facilities and tracks risk-adjusted surgical complications (morbidity) and mortality rates. Patient data are collected at each facility by a specially trained nurse and entered into the VA’s electronic health record, including detailed preoperative patient characteristics (chart-abstracted medical conditions, functional status, and recent laboratory tests), information about the surgical procedure performed, and 30-day outcomes data.

The VASQIP program analyzes these patient data using mathematical models to predict an individual patient’s expected outcome based on the patient’s preoperative characteristics and the type and nature of the surgical procedure. Overall patient outcomes for major surgical procedures are expressed by comparing observed rates of mortality and morbidity to the expected rates for those patients undergoing the procedure as observed-to-expected (O/E) ratios. For example, if, based on patient characteristics, a facility expected 5 deaths following major surgery, but only 4 patients died, the O/E ratio would be reported as 0.8.
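
A minimal sketch of the O/E calculation, mirroring the worked example above (4 observed deaths against 5 expected):

def oe_ratio(observed, expected):
    # Observed-to-expected ratio: values below 1.0 mean fewer events than
    # the risk model predicted; values above 1.0 mean more.
    if expected == 0:
        raise ValueError("expected event count must be nonzero")
    return observed / expected

print(oe_ratio(4, 5))  # 0.8, as in the example above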

Medical Surgical Length of Stay (LOS). These data are the VA hospital average length of stay for patients discharged from acute medicine or surgery bed sections. They do not include patients discharged from observation beds or from other areas of the hospital such as mental health.

Readmission rates. A readmission was defined as a patient who, following a recent hospital stay, re-enters the hospital within 30 days. These rates are not adjusted for patient characteristics that affect expected readmission rates, so comparisons among hospitals should be interpreted with caution.

CHF readmissions were reported separately. A CHF readmission is defined as at least one readmission to acute hospital care within 30 days of discharge following an initial hospitalization for CHF.

Hospital level of care. For descriptive purposes, hospitals were grouped into 4 levels of care: highly complex (level 1), complex (level 2), moderate (level 3), and basic (level 4). In general, level 1 facilities and some level 2 facilities represent large, urban, academic teaching medical centers.

Correlation with Outcomes. Pearson’s correlation coefficient was used to assess the correlation of compliance with the performance measures and outcomes. Significance was defined as p<0.05. For comparisons among hospital levels, ANOVA or Kruskal-Wallis testing was done, as appropriate.
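
A minimal sketch of these tests using SciPy; the per-hospital values below are fabricated for illustration only and are not the study's data:

from scipy import stats

# Hospital-level compliance with a measure and the matching outcome rate
compliance = [0.92, 0.88, 0.97, 0.81, 0.95, 0.90]
mortality = [0.055, 0.061, 0.049, 0.058, 0.052, 0.060]

r, p = stats.pearsonr(compliance, mortality)
print(f"Pearson r = {r:.4f}, p = {p:.4f}")  # significant if p < 0.05

# Outcome rates grouped by hospital complexity level (three levels shown)
level1 = [0.050, 0.048, 0.053]
level2 = [0.055, 0.057, 0.054]
level3 = [0.060, 0.058, 0.062]

f_stat, p_anova = stats.f_oneway(level1, level2, level3)
h_stat, p_kw = stats.kruskal(level1, level2, level3)  # nonparametric alternative
print(f"ANOVA p = {p_anova:.4f}; Kruskal-Wallis p = {p_kw:.4f}")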

Results

Disease specific and all cause mortality rates compared to performance measures. Hospital-specific, risk-standardized rates of mortality within 30 days of discharge for patients hospitalized with a principal diagnosis of heart attack, heart failure, and pneumonia were compared to performance measure compliance. There was no correlation for acute myocardial infarction or congestive heart failure (Table 1, p>0.05), and higher pneumonia mortality actually weakly correlated with higher compliance with the pneumonia performance measures (Table 1, p=0.0411). Furthermore, there was no correlation between all cause mortality and the average of the three compliance measures (Table 1, p>0.05). Because each table is large, only the correlation coefficients are presented in the text. The data on which the correlations are based are given at the end of the manuscript. (N=the number of hospitals. NA=not available.)

Table 1. Disease Specific Mortality Correlated with Performance Measure Compliance

Correlation Coefficients                                             r value    N     p value
Acute Myocardial Infarction Mortality and AMI Performance Measure    0.0266     103   0.7897
Congestive Heart Failure Mortality and CHF Performance Measure       0.0992     123   0.2752
Pneumonia Mortality and Pneumonia Performance Measure                0.1844     123   0.0411
All Cause Mortality vs. Average of Performance Measures              0.1118     122   0.2202

Each component of the congestive heart failure performance measure was evaluated individually. Performance of echocardiography correlated with improved mortality (Table 2, p=0.0496), but there was no correlation with use of an angiotensin converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) at discharge, discharge instructions, or smoking cessation advice (Table 2, p>0.05 all comparisons).

Table 2. Heart Failure Mortality Correlated with Compliance to Individual Heart Failure Performance Measures

Correlation Coefficients    r value    N     p value
ACEI or ARB                 -0.1007    112   0.2908
Smoking Cessation            0.0651    112   0.4953
Discharge Instructions       0.1411    111   0.1396
Echocardiography            -0.1860    112   0.0496

Surgical mortality and morbidity rates compared to surgical performance measures. There was no correlation between compliance with the surgical care improvement program (SCIP) and surgical mortality or morbidity (Table 3, p>0.05 both comparisons).

Table 3. Surgical Care Improvement Program (SCIP) Compliance Correlated with Observed/Expected (O/E) Morbidity/Mortality

Correlation Coefficients    r value    N    p value
O/E Mortality                0.0943    99   0.3530
O/E Morbidity                0.0031    99   0.9757

Length of Stay. None of the performance measures correlated with medical-surgical length of stay (Table 4, p>0.05 all comparisons).

Table 4. Length of Stay (LOS) Correlated with Performance Measure Compliance

Correlation Coefficients     r value    N     p value
LOS compared to AMI           0.1047    103   0.2926
LOS compared to CHF          -0.0178    123   0.8451
LOS compared to Pneumonia    -0.1679    123   0.0634
LOS compared to SCIP         -0.0404    106   0.6809
LOS compared to Average       0.0028    123   0.9755

Readmission rates. There was no correlation between all cause readmission rates and the acute myocardial infarction, congestive heart failure, pneumonia or surgical performance measures (Table 5, p>0.05 all comparisons). There was no correlation between heart failure readmission rate and the heart failure performance measure (data not shown, r=0.1525, p=0.0921).

Table 5. Readmission Rate Correlated with Performance Measure Compliance

Correlation Coefficients    r value    N     p value
AMI                          0.1688    103   0.0883
CHF                          0.1505    123   0.0966
Pneumonia                    0.0581    123   0.5233
Average                      0.1281    122   0.1597

Hospital level of care. Compliance with the acute myocardial infarction performance measure inversely correlated with the hospital level of care, i.e., the higher the hospital complexity, the better the compliance (Table 6, p=0.004). However, there was no correlation between the congestive heart failure, pneumonia, or surgical care improvement program measures, or the average of the measures, and the hospital level of care (Table 6).

Table 6. Hospital Level Correlated with Performance Measure Compliance

ANOVA                                        N     p value
Acute Myocardial Infarction (AMI)            103   0.004
Congestive Heart Failure (CHF)               120   0.782
Community Acquired Pneumonia                 120   0.296
Surgical Care Improvement Program (SCIP)     106   0.801
Average of Process of Care Measures          120   0.285

There was no correlation between the level of hospital care and acute myocardial infarction, congestive heart failure, or pneumonia mortality (Table 7, p>0.05 all comparisons). However, there was a strong correlation between level of care and all cause mortality (p<0.001), as well as correlations with surgical observed/expected mortality (Table 7, p=0.037) and surgical observed/expected morbidity (p=0.010).

Table 7. Hospital Level Correlated with Mortality and Surgical Morbidity

ANOVA                                         N     p value
Acute Myocardial Infarction (AMI) Mortality   103   0.835
Congestive Heart Failure (CHF) Mortality      120   0.493
Pneumonia Mortality                           120   0.547
All Cause Mortality                           106   <0.001
Surgical O/E Mortality                        99    0.037
Surgical O/E Morbidity                        99    0.010

Discussion

These data from the Nation’s largest healthcare system demonstrate that increasing compliance with the performance measures prescribed by the Joint Commission does not affect disease specific mortality, all cause mortality, surgical mortality, surgical morbidity, length of stay or readmissions, with the single exception of improved mortality correlating with increased compliance with performance of echocardiography. In contrast to the Joint Commission’s list of top hospitals, in which smaller and rural hospitals were overrepresented, we found that only the acute myocardial infarction performance measure correlated with a higher level of hospital care, which mostly represents large, urban hospitals. We did find that all cause mortality and surgical morbidity correlated strongly with the level of care. This would appear to differ from the Joint Commission’s list of top hospitals, which tended to be small and rural, since VA hospitals with higher levels of care largely represent large, urban, academic teaching medical centers.

There are multiple possible reasons for the lack of correlation between the performance measures and outcomes. Many of the measures are evidence based, but several are not. For example, there are no randomized, multi-center studies evaluating the efficacy of discharge instructions, smoking cessation advice or pneumococcal vaccination. Studies of discharge instructions are retrospective, observational studies and have largely not shown improved outcomes (11,12). Several meta-analyses have failed to demonstrate the efficacy of pneumococcal vaccine in adults (13-15). Advice to quit smoking without follow up support or pharmacologic intervention has not been shown to increase smoking cessation rates (16). Mandating ineffective interventions such as these would not be expected to have a positive effect on outcomes. However, this is where most of the improvement in performance measure compliance has occurred (2).

Most of the interventions are grouped or bundled, and lack of compliance with any one component of the bundle is taken as noncompliance with the whole. However, if the only difference between hospitals is noncompliance with an ineffective performance measure, no improvement in outcomes would be expected, as the sketch below illustrates.
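
A minimal sketch of all-or-none bundle scoring, using hypothetical component results; note how a single failed component, effective or not, marks the whole bundle noncompliant even though item-level compliance is high:

bundle = {
    "ACEI/ARB at discharge": True,
    "echocardiography": True,
    "discharge instructions": False,  # a weakly evidence-based component
    "smoking cessation advice": True,
}
item_compliance = sum(bundle.values()) / len(bundle)  # 0.75 at the item level
bundle_compliant = all(bundle.values())               # False under all-or-none scoring
print(f"Item-level: {item_compliance:.0%}; bundle compliant: {bundle_compliant}")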

Many of the strongly evidence-based measures have very high compliance, usually exceeding 95% (9). It is possible that small improvements of 1 or 2% in effective performance measures have too small an impact on outcomes to be detected even in a database as large as the Veterans Administration's, which included 485,774 acute medical/surgical discharges in 2009.

The performance measures appear to avoid highly technical or costly interventions, and often omit interventions that have been shown to positively affect outcomes. For example, beta blockers and spironolactone have been shown to be effective in heart failure but are not included in the congestive heart failure performance measures (17,18). Furthermore, carvedilol has been shown to be superior to metoprolol in improving survival (19). Why the performance measures include use of an angiotensin converting enzyme inhibitor or angiotensin receptor blocker but not carvedilol and spironolactone is unclear.

Some of the performance measures may have caused inadvertent harm. For example, administration of antibiotics within 4 hours to patients with pneumonia was a previous performance measure. However, studies showed that this measure led to administration of antibiotics to many patients who proved not to have pneumonia or another infectious disease, and a systematic review concluded that “evidence from observational studies fails to confirm decreased mortality with early administration of antibiotics in stable patients with [community acquired pneumonia]” (20-22). The window has since been lengthened to 6 hours, but it is unclear whether this is any better than the original 4 hour timing (7).

We did not confirm the Joint Commission’s finding that the top hospitals are overrepresented by small, rural hospitals. We found no correlation between hospital level of complexity of care and performance measure compliance, with the exception of the acute myocardial infarction measure, for which compliance was higher in hospitals with higher complexity of care. Although we found no correlation of the performance measures with any outcome measures, we did find a strong correlation between the hospital level of complexity of care and both overall survival and surgical morbidity, with the hospitals having the higher level of complexity having improved survival and decreased surgical morbidity. This would seem consistent with the concept that volume of care correlates with outcomes.

It is surprising that performance measures undergo so little scrutiny before they are initiated. In a 2005 editorial, Angus and Abraham (23) addressed the question of when there is sufficient evidence for a concept to be widely applied as a guideline or performance measure. Comparing guidelines to the evaluation of novel pharmacologic therapies, they point out that promising phase II studies are insufficient for regulatory approval. Instead, one, and usually two, large multicenter phase III trials are necessary to confirm reliability. The same principle is echoed in evidence-based medicine, where grade A recommendations are based on two or more large, positive, randomized, multicenter trials. This seems a reasonable standard. Perhaps what is needed is an independent Federal or private agency to review and approve performance measures and, as Angus and Abraham suggest, to require at least two randomized, multicenter trials before implementation.

The data presented in this manuscript do not support the usefulness of increasing compliance with the Veterans Administration’s (or the Joint Commission’s) performance measures for improving outcomes such as mortality, morbidity, length of stay or readmission rates. Until compliance with the performance measures results in improved outcomes, investment to improve compliance seems a poor use of resources. These findings suggest that regulatory oversight is needed in the development and implementation of performance measures. If performance measures are to be used, new, clinically meaningful measures that correlate with outcomes need to be developed.

References

  1. Available at: http://www.jointcommission.org/accreditation/hospitals.aspx (accessed 9-25-11).
  2. Available at: http://www.forbes.com/sites/davidwhelan/2011/09/20/is-the-joint-commission-list-of-top-hospitals-worth-heeding/ (accessed 9-25-11).
  3. Rosenthal GE, Harper DL, Quinn LM. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. JAMA 1997;278:485-90.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA 2006;296:2694-702.
  5. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-20.
  6. Fonarow GC, Yancy CW, Heywood JT; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Intern Med 2005;165:1469-77.
  7. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008;149:29-32.
  8. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479-85.
  9. Available at: http://www.va.gov/health/docs/HospitalReportCard2010.pdf (accessed 9-28-11).
  10. Ross JS, Maynard C, Krumholz HM, Sun H, Rumsfeld JS, Normand SL, Wang Y, Fihn SD. Use of administrative claims models to assess 30 day mortality among Veterans Health Administration hospitals. Medical Care 2010; 48: 652-658.
  11. VanSuch M, Naessens JM, Stroebel RJ, Huddleston JM, Williams AR. Effect of discharge instructions on readmission of hospitalised patients with heart failure: do all of the Joint Commission on Accreditation of Healthcare Organizations heart failure core measures reflect better care? Qual Saf Health Care 2006;15:414-7.
  12. Fonarow GC, Abraham WT, Albert NM, Stough WG, Gheorghiade M, Greenberg BH, O'Connor CM, Pieper K, Sun JL, Yancy C, Young JB; OPTIMIZE-HF Investigators and Hospitals. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA 2007;297:61-70.
  13. Fine MJ, Smith MA, Carson CA, Meffe F, Sankey SS, Weissfeld LA, Detsky AS, Kapoor WN. Efficacy of pneumococcal vaccination in adults. A meta-analysis of randomized controlled trials. Arch Intern Med 1994;154:2666-77.
  14. Dear K, Holden J, Andrews R, Tatham D. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Syst Rev 2003:CD000422.
  15. Huss A, Scott P, Stuck AE, Trotter C, Egger M. Efficacy of pneumococcal vaccination in adults: a meta-analysis. CMAJ 2009;180:48-58.
  16. Rigotti NA, Munafo MR, Stead LF. Smoking cessation interventions for hospitalized smokers: A systematic review. Arch Intern Med 2008;168:1950-1960.
  17. Gottlieb SS, McCarter RJ, Vogel RA. Effect of beta-blockade on mortality among high-risk and low-risk patients after myocardial infarction. N Engl J Med 1998;339:489-97.
  18. Pitt B, Zannad F, Remme WJ, Cody R, Castaigne A, Perez A, Palensky J, Wittes J for the Randomized Aldactone Evaluation Study Investigators. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. N Engl J Med 1999;341:709-17.
  19. Poole-Wilson PA, Swedberg K, Cleland JG, Di Lenarda A, Hanrath P, Komajda M, Lubsen J, Lutiger B, Metra M, Remme WJ, Torp-Pedersen C, Scherhag A, Skene A. Carvedilol Or Metoprolol European Trial Investigators. Comparison of carvedilol and metoprolol on clinical outcomes in patients with chronic heart failure in the Carvedilol or Metoprolol European Trial (COMET): randomised controlled trial. Lancet. 2003;362:7-13.
  20. Kanwar M, Brar N, Khatib R, Fakih MG. Misdiagnosis of community acquired pneumonia and inappropriate utilization of antibiotics: side effects of the 4-h antibiotic administration rule. Chest 2007;131:1865-9.
  21. Welker JA, Huston M, McCue JD. Antibiotic timing and errors in diagnosing pneumonia. Arch Intern Med 2008;168:351-6.
  22. Yu KT, Wyer PC. Evidence-based emergency medicine/critically appraised topic. Evidence behind the 4-hour rule for initiation of antibiotic therapy in community-acquired pneumonia. Ann Emerg Med 2008;51:651-62.
  23. Angus DC, Abraham E. Intensive insulin therapy in critical illness: when is the evidence enough? Am J Respir Crit Care Med 2005;172:1358-9.

Click here for Excel version of Table 1

Click here for Excel version of Table 2

Click here for Excel version of Table 3

Click here for Excel version of Table 4

Click here for Excel version of Table 5

Click here for Excel version of Table 6

Click here for Excel version of Table 7

Click here for Appendix 1

Click here for Appendix 2

Click here for Appendix 3

Click here for Appendix 4

Reference as: Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration Hospital Performance Measures and Outcomes. Southwest J Pulm Crit Care 2011;3:92-133. (Click here for PDF version of manuscript)
