
General Medicine


The Unfulfilled Promise of the Quality Movement
A Comparison Between Hospital Rankings and Outcomes Data
Profiles in Medical Courage: John Snow and the Courage of Conviction
Comparisons between Medicare Mortality, Readmission and Complications
In Vitro Versus In Vivo Culture Sensitivities: An Unchecked Assumption?
Profiles in Medical Courage: Thomas Kummet and the Courage to Fight Bureaucracy
Profiles in Medical Courage: The Courage to Serve and Jamie Garcia
Profiles in Medical Courage: Women’s Rights and Sima Samar
Profiles in Medical Courage: Causation and Austin Bradford Hill
Profiles in Medical Courage: Evidence-Based Medicine and Archie Cochrane
Profiles of Medical Courage: The Courage to Experiment and Barry Marshall
Profiles in Medical Courage: Joseph Goldberger, the Sharecropper’s Plague, Science and Prejudice
Profiles in Medical Courage: Peter Wilmshurst, the Physician Fugitive
Correlation between Patient Outcomes and Clinical Costs in the VA Healthcare System
Profiles in Medical Courage: Of Mice, Maggots and Steve Klotz
Profiles in Medical Courage: Michael Wilkins and the Willowbrook School
Relationship Between The Veterans Healthcare Administration Hospital Performance Measures And Outcomes

 

Although the Southwest Journal of Pulmonary and Critical Care was started as a pulmonary/critical care/sleep journal, we have received and continue to receive submissions that are of general medical interest. For this reason, a new section entitled General Medicine was created on 3/14/12. Some articles were moved from pulmonary to this new section since it was felt they fit better into this category.

-------------------------------------------------------------------------------------

Monday, January 20, 2014

The Unfulfilled Promise of the Quality Movement

Richard A. Robbins, MD

 

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ

 

Abstract

In the latter half of the 20th century, efforts to improve medical care became known as the quality movement. Although these efforts were often touted as “evidence-based”, the evidence was often weak or nonexistent. We review the history of the quality movement. Although patient-centered outcomes were initially examined, these were replaced with surrogate markers. Many of the surrogate markers were interventions with a weak or nonexistent evidence basis. Furthermore, the surrogate markers were often “bundled”, some evidence-based and some not. These guidelines, surrogate markers and bundles were rarely subjected to beta testing, and when carefully scrutinized, rarely correlated with improved patient-centered outcomes. Based on this lack of improvement in outcomes, we conclude that the quality movement has not improved healthcare and will not likely improve clinical performance until recommended or required interventions are tested in randomized trials.

Introduction

The quality movement has been touted as improving patient care at lower costs. However, there are very little data available that “clinically meaningful” outcomes have improved as a result of the quality movement. This manuscript will examine some of the major quality improvement efforts (Table 1).

Table 1. Major quality programs examined in this manuscript.

In addition, the manuscript will point out some key issues with quality improvement measures that are particularly relevant to pulmonary/critical care physicians, such as pneumococcal vaccination in adults and the ventilator-associated pneumonia (VAP) and central line-associated bloodstream infection (CLABSI) bundles.

Early History of the Quality Movement

Origins of the quality-of-care movement can be traced back to the nineteenth century. An early study assessing the efficacy of hospital care was the work of the British nurse, Florence Nightingale (1). She reported that the hospital in Scutari during the Crimean War had an exceedingly high mortality rate.

In 1910, Flexner issued a critical report of the U.S. medical educational system and called for major reforms. In addition to reforms in education, the report called for full time faculty who held appointments in a teaching hospital with adequate space and equipment (2). Shortly after the Flexner report, Codman developed the medical audit, a process for evaluating the overall practice of a physician including the outcomes of surgery (3).

A survey conducted by the American College of Surgeons of 700 hospitals in 1919 concluded that few were equipped to provide patients with even a minimal level of quality of care. The College went on to establish a program of minimum hospital standards (4). Later the program was transferred to the Joint Commission on Accreditation of Hospitals (now the JCAHO or Joint Commission). However, the Joint Commission was largely ineffective until 1965. At that time, the Joint Commission became one of the most powerful accrediting groups through its role in certifying eligibility for receipt of federal funds from Medicare and Medicaid.

As the role of government in paying for medical care has grown, so has the demand for assurance of the quality of the healthcare services. In the mid-1960s, utilization review activities were required to receive reimbursement for in-patient services from Medicare and Medicaid. Despite utilization review, increasing concern for accountability in healthcare arose from two sources. One was the consumer movement (5). This was fueled by continual reports of variation in services offered by different physicians and by different health care institutions. Usually the reports implied or stated that the care was substandard. The second was a dramatic increase in medical malpractice suits further eroding the public confidence in the medical profession.

The most common definition of quality of care used in the latter twentieth century was authored by Donabedian in 1966 (6). He identified three major foci for the evaluation of quality of care: outcome, process, and structure. Outcome referred to the condition of the patient and the effectiveness of healthcare, including traditional outcome measures such as morbidity, mortality, length of stay, readmission, etc. Process of care represented an alternative approach which examined the process of care itself rather than its outcomes. These processes are often referred to as surrogate markers. The structural approach involved examining the physical aspects of health care, including buildings, equipment, and supplies.

Joint Commission (JCAHO)

The structural approach was often emphasized in the Joint Commission surveys in the 1960’s and 70’s. Beginning in the 1970’s the Joint Commission began to address outcomes by requiring hospitals to perform medical audits. However, the Joint Commission soon realized that the audit was “tedious, costly and nonproductive” (7). Efforts to meet audit requirements were too frequently “a matter of paper compliance, with heavy emphasis on data collection and few results that can be used for follow-up activities. In the shuffle of paperwork, hospitals often lost sight of the purpose of the evaluation study and, most important, whether change or improvement occurred as a result of audit”. Furthermore, survey findings and research indicated that patient care and clinical performance had not improved (7).

In response to the ineffectiveness of the audit and the call to improve healthcare, the Joint Commission introduced a new quality assurance standard in 1980. The standard consisted of five elements:

  • The integration or coordination of all quality assurance activities into a comprehensive program;
  • A written plan for the program;
  • A problem-focused approach to review;
  • Annual reassessment of the program;
  • Measurable improvement in patient care or clinical performance.

Hospitals complied with most aspects of these five elements. Over 90% had a written plan, annual reassessment, and integration of the quality assurance activities under the hospital administration by mid-1982 (8). However, the other elements remained largely ignored. Physician involvement was often perfunctory either because of their reluctance to be involved or because of hospital administration reluctance to have them involved. Hospital boards and administrators had little idea of which problems were most important and little idea of how to proceed with evaluation and interpretation of the results. Given these limitations it is not surprising that data demonstrating measurable improvement in patient care was lacking.

A number of superficial name changes occurred over the next few years. These included quality improvement, total quality improvement, risk management, and quality management. Although each was touted as an improvement over quality assurance, none was fundamentally different from the original concept and none demonstrated a convincing improvement in any patient-centered outcome.

Institute of Healthcare Improvement (IHI)

Recognizing weaknesses in the Joint Commission processes, private organizations attempted to develop programs that enhanced quality. One was the Institute for Healthcare Improvement (IHI). Founded by Donald Berwick in 1989, IHI was quite successful in attracting funding from a number of charitable organizations such as Kaiser Permanente Community Benefit, the Josiah Macy, Jr. Foundation, the Rx Foundation, the MacArthur Foundation, the Robert Wood Johnson Foundation, the Bill & Melinda Gates Foundation, the Health Foundation, and the Izumi Foundation. Additional funding was obtained from insurance companies such as the Blue Cross and Blue Shield Association, the Cardinal Health Foundation, the Aetna Foundation, and the Blue Shield of California Foundation. Some pharmaceutical funding was also obtained from Baxter International, Inc. and the Abbott Fund. In addition, through the IHI “Open School”, many hospitals, both academic and private, supported IHI activities. These included the Mayo Clinic, Banner Good Samaritan Medical Center, St. Joseph's Hospital and Medical Center, the University of Arizona Medical Center, the University of Colorado at Denver, and the University of New Mexico - Albuquerque (9).

Under Berwick’s leadership, IHI launched a number of proposals to improve healthcare. A noteworthy initiative from the IHI was the 18-month 100,000 Lives Campaign, which began in January 2005. This campaign encouraged hospitals to adopt six best practices to reduce harm and deaths. The interventions included deployment of rapid response teams, a medication reconciliation process, interventions for acute myocardial infarction, a central line management process, administering antibiotics at a specific time to prevent surgical site infections, and using a ventilator protocol to minimize ventilator-associated pneumonia. Review of the evidence basis for at least 3 of these interventions reveals fundamental flaws. A large cluster-randomized, controlled trial demonstrated that medical emergency teams greatly increased medical emergency team calling, but did not substantially affect the incidence of cardiac arrest, unplanned ICU admissions, or unexpected death (10). Furthermore, the interventions to prevent central line infections and ventilator-associated pneumonia were either non- or weakly evidence-based and unlikely to improve patient outcomes (11,12).

Despite these limitations, IHI announced in June 2006 that the campaign had prevented 122,300 avoidable deaths (13). The flawed methodology and sloppy estimation of the number of lives saved were pointed out by Wachter and Pronovost (14) in the Joint Commission Journal on Quality and Patient Safety. IHI failed to adjust its estimates of lives saved for case-mix, which accounted for nearly three out of four “lives saved.” The actual mortality data were supplied to the IHI by hospitals without audit, and 14% of the hospitals submitted no data at all. Moreover, the reports from even those hospitals that did submit data were usually incomplete. The most striking example is the fact that the IHI announcement of lives saved over 18 months was based on submissions through March, not June, 2006, accounting for only 15 months; the final three months were extrapolated from hospitals’ previous submissions. Although not reported by IHI, it seems likely that there were even more missing data beyond that described above. One important confounder is the fact that the campaign took place against a background of declining inpatient mortality rates (14).

Whether this decline was a result of some of the quality improvement efforts promoted by IHI and others or other factors is unclear. Undeterred, the IHI proceeded with the 5,000,000 Lives Campaign claiming that over 80% of US hospitals were participants (15). However, this campaign ended in 2008 and was apparently not successful (16). Although IHI promised to publish results in major medical journals, a literature search revealed no published outcomes.
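The arithmetic behind the extrapolation critique above can be sketched with deliberately simple, entirely hypothetical numbers: scaling 15 months of incomplete, self-reported data up to 18 months, and then failing to adjust for case-mix (which Wachter and Pronovost estimated explained nearly three of four "lives saved"), compounds the inflation of a campaign's headline figure. None of the values below are IHI's actual inputs.

```python
# Toy illustration of how extrapolation plus missing case-mix adjustment
# inflates a "lives saved" estimate. All numbers are hypothetical.

reported_15_months = 100_000                      # raw "lives saved" reported over 15 months
extrapolated_18 = reported_15_months * 18 / 15    # naive scaling to the full 18-month campaign

# If roughly 3 of every 4 "lives saved" merely reflect case-mix (admitting
# healthier patients), only a quarter of the raw figure is attributable to
# the interventions themselves.
case_mix_adjusted = extrapolated_18 * 0.25

print(round(extrapolated_18))     # 120000
print(round(case_mix_adjusted))   # 30000
```

The point of the sketch is not the specific numbers but the compounding: each unverified assumption multiplies the headline estimate away from anything actually measured.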

Department of Veterans Affairs (VA)

The Department of Veterans Affairs (VA) has played a pivotal role in the quality movement. Although VA hospitals have been required to be Joint Commission accredited since the Reagan Administration, the VA quality movement began with the appointment of Kenneth W. Kizer as the VA's undersecretary of health in 1994. An emergency room physician, Kizer was Director of California Emergency Medical Services, Chief of Public Health for California, and Director of the California Department of Health Services before coming to the VA (18).

Kizer mandated several interventions. One was the installation and utilization of an electronic healthcare record. The second was a set of performance measures which became known as the chronic disease indicators (19). To encourage performance of these interventions, Kizer initiated pay-for-performance, not for the doctors and nurses doing the interventions, but for the top administration of the hospital. The focus changed from meeting the needs of the patient to meeting the performance measures so the administrators could receive their bonuses. From 1994 to 2000 nearly all the performance measures improved. Three improved dramatically: pneumococcal vaccination, annual measurement of hemoglobin A1C, and smoking cessation (19). However, the evidence that these interventions improved patient outcomes was questionable (20). Furthermore, there were no outcome data, such as morbidity, mortality, admission rates, or length of stay, supporting the contention that the health of veterans improved.

Although politics forced Kizer’s resignation in 1999, he was followed by his deputy, Thomas L. Garthwaite, and eventually by his Chief Quality and Performance Officer, Jonathan B. Perlin. Perlin realized that outcome data were needed and promised that these would be forthcoming. On August 11, 2003, at the First Annual VA Preventive Medicine Training Conference in Albuquerque, NM, Perlin claimed that the increase in pneumococcal vaccination saved 3914 lives between 1996 and 1998 (21). Furthermore, he claimed pneumococcal vaccination resulted in 8000 fewer admissions and 9500 fewer days of bed care between 1999 and 2001. However, these figures were not measured but extrapolated from a single, non-randomized, observational study (22). Although no randomized study examining patient outcomes with the 23-polyvalent pneumococcal vaccine has been performed, other studies do not support the efficacy of the vaccine in adults (23-25). Furthermore, there was an overall downward trend in hospital admissions. The reduction in hospital admissions for pneumonia appeared to be nothing more than part of this trend, since the number of outpatient visits for pneumonia increased (26).

Institute of Medicine (IOM)

The Institute of Medicine (IOM) is a non-profit, non-governmental organization founded in 1970 under the congressional charter of the National Academy of Sciences (27). Its purpose is to provide national advice on issues relating to biomedical science, medicine, and health, and its mission is to serve as adviser to the nation to improve health.

The IOM attracted little attention until publication of “To Err is Human” in 1999 (28). This report was authored by the IOM’s committee on quality of health care in America whose members included Donald Berwick from the Institute of Healthcare Improvement and Mark Chassin, now president and chief executive officer of the Joint Commission. The report estimated that 44,000 to 98,000 deaths occur annually in the US due to medical errors. It was presented with drama and an assertion of lack of previous attention. This was followed by a plea to the medical profession to remember its promise to "do no harm" and that "at a very minimum, the health system needs to offer that assurance and security to the public." The clear implication was that doctors were killing their patients at a terrible rate and needed oversight and direction. The Clinton administration clearly heard this message and issued an executive order instructing government agencies that conduct or oversee health-care programs to implement proven techniques for reducing medical errors, and creating a task force to find new strategies for reducing errors. Congress soon launched a series of hearings on patient safety, and in December 2000 it appropriated $50 million to the Agency for Healthcare Research and Quality to support a variety of efforts targeted at reducing medical errors.

Both of the studies on which the IOM based its mortality estimates came from the Harvard School of Public Health, where Harvey Fineberg, president of the IOM, had been dean. The higher estimate was based on a 1991 study published in the New England Journal of Medicine (29). In this study, 30,121 medical records from 51 randomly selected acute care hospitals in New York State were reviewed and population estimates of injuries were made. The records were screened by trained nurses and medical-records analysts; if a record was screened as positive for a potentially adverse event, two physicians independently reviewed the record. The second study, on which the lower mortality estimate was based, was published in Medical Care in 2000 (30). The same group from the Harvard School of Public Health used a similar strategy to examine 15,000 records from acute care hospitals in Utah and Colorado from 1992.

Although examples were given of what constituted an adverse event and whether it was due to negligence, it is “difficult to judge whether a standard of care has been met”, leading to a “relatively low level of reliability…” according to the first article. In both articles, negligence remained undefined, which ultimately means that the determination of negligence relied on judgment. Left unexplained is why the second article showed deaths due to negligence in Utah and Colorado at half the rate of New York.

Hofer et al. (31) examined medical errors in large part as a response to “To Err is Human” and the Harvard School of Public Health studies on which the IOM based their mortality estimates. They made four principal observations. “First, errors have been defined in terms of failed processes without any link to subsequent harm. Second, only a few studies have actually measured errors, and these have not described the reliability of the measurement. Third, no studies directly examine the relationship between errors and adverse events. Fourth, the value of pursuing latent system errors (a concept pertaining to small, often trivial structure and process problems that interact in complex ways to produce catastrophe) using case studies or root cause analysis has not been demonstrated in either the medical or nonmedical literature”.

Patient Safety

The IOM report, “To Err is Human”, resulted in yet another name change for the quality movement: the patient safety movement. A 2005 JAMA article by Leape, Berwick and Bates (32) entitled “What Practices Will Most Improve Safety? Evidence-Based Medicine Meets Patient Safety” examined what the authors considered evidence-based practices that might improve patient safety. They examined patient safety targets and listed patient safety practices to reduce or eliminate adverse outcomes. The evidence for each practice was graded greatest, high, medium, low or lowest. Unfortunately, mistakes were made, and some would disagree with the assigned strengths of evidence. For example, the ventilator bundle from 2011 used by IHI lists 5 interventions:

  • Elevation of the Head of the Bed
  • Daily "Sedation Vacations" and Assessment of Readiness to Extubate
  • Peptic Ulcer Disease Prophylaxis
  • Deep Venous Thrombosis Prophylaxis
  • Daily Oral Care with Chlorhexidine

However, in their JAMA article the intervention with the greatest evidence for prevention of ventilator-associated pneumonia was continuous aspiration of subglottic secretions. Semirecumbent positioning and selective decontamination of the digestive tract were listed as having a high strength of evidence. Continuous oscillation was listed as medium strength evidence. Use of sucralfate was listed as lowest strength of evidence. Sedation vacations and oral care with chlorhexidine were not listed. Deep venous thrombosis prophylaxis was listed as preventing venous thromboembolism, which it clearly does, but it does not reduce mortality or prevent pneumonia (33). Peptic ulcer disease prophylaxis with H2 antagonists was listed as medium strength of evidence. The possibility that H2 antagonists might increase pneumonia rather than prevent it was not raised (34).

Similarly, the 2005 JAMA article listed practices to prevent central line-associated bloodstream infection: antibiotic-impregnated catheters (greatest strength of evidence); chlorhexidine, heparin and catheter tunneling (lower strength of evidence); and routine antibiotic prophylaxis and routinely changing catheters (lowest strength of evidence). However, the IHI central line bundle includes:

  • Hand Hygiene
  • Maximal Barrier Precautions Upon Insertion
  • Chlorhexidine Skin Antisepsis
  • Optimal Catheter Site Selection, with Avoidance of the Femoral Vein for Central Venous Access in Adult Patients
  • Daily Review of Line Necessity with Prompt Removal of Unnecessary Lines

Although disagreement about the level of evidence may be appropriate, the IHI bundles are clearly discordant with the evidence basis of the interventions as listed in the JAMA article. Furthermore, IHI mixed practices with various levels of evidence into a single bundle and encouraged the performance of all in order to receive “credit” for compliance with the bundle.

Department of Health and Human Services (HHS)

When Berwick became director of HHS’ Centers for Medicare and Medicaid Services (CMS), the IHI’s bundles moved with him. With the Agency for Healthcare Research and Quality (AHRQ), another division of HHS, CMS initiated a vigorous program to prevent hospital-acquired infections, backed up with financial penalties for noncompliance. This included bundles mixing practices with various levels of evidence (or no evidence) and requiring compliance with all to receive “credit”, or in this case, avoid financial penalties for noncompliance. CMS referred to this as “value-based performance”.

In an increasingly familiar scenario, AHRQ issued a press release on September 20, 2012 touting the remarkable success of the program to reduce central line-associated bloodstream infections (CLABSIs) (35). The program “…reduced the rate [of CLABSI] … in intensive care units by 40 percent…saving more than 500 lives and avoiding more than $34 million in health care costs” (35). Although the methodology used to determine these numbers was not stated, it seems likely that it followed the model of IHI’s 100,000 Lives Campaign. A program was initiated based on a dubious intervention, in this case a 2-page checklist for central line insertion (36). Data on the incidence of infection were determined from billing, another form of self-reporting. The dollars and lives saved were then extrapolated from other publications.

There are several problems with this approach. First, Meddings et al. (37) determined that data on self-reported urinary tract infections were “inaccurate” and “are not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors proposed that the nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. There is no reason to assume that data reported for CLABSI are any more accurate. Further evidence comes from the observation that compliance with most of these bundles does not appear to correlate with outcomes such as mortality or readmission rates (38).

Second, choosing a single article which does not represent the overall body of evidence can be deceiving. In the pneumococcal vaccine example cited earlier, Perlin chose the only article to show a beneficial effect of the 23-polyvalent pneumococcal vaccine in adults (22). Most reviews and meta-analyses do not support the vaccine’s efficacy (23-25).

Third, many of the articles used for calculation of the mortality and cost savings are not risk adjusted. It may seem obvious, but saying that CLABSIs are associated with higher mortality and costs is not the same as saying the mortality and costs were caused by the CLABSI. Central lines are placed in sicker, more unstable patients, and their underlying disease, rather than the CLABSI, could account for the higher mortality and costs. These extrapolated conclusions are not the same as measurements of mortality and costs in carefully planned and controlled randomized trials.
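This confounding-by-indication problem can be illustrated with a deliberately simple toy model (all rates hypothetical): if central lines, and hence CLABSIs, occur mostly in unstable patients, a crude comparison attributes the excess mortality to the infection even when, by construction, the infection itself adds nothing.

```python
# Toy model of confounding by indication in CLABSI mortality comparisons.
# All rates are hypothetical; by construction, CLABSI causes no deaths here.

mortality = {"stable": 0.02, "unstable": 0.30}   # baseline mortality by severity

# Central lines (and thus CLABSIs) go disproportionately to unstable patients
patients = [("stable", False)] * 900 + [("unstable", True)] * 100

deaths_clabsi = sum(mortality[sev] for sev, clabsi in patients if clabsi)
deaths_no_clabsi = sum(mortality[sev] for sev, clabsi in patients if not clabsi)
n_clabsi = sum(1 for _, clabsi in patients if clabsi)
n_no = len(patients) - n_clabsi

# Crude mortality looks 15-fold higher with CLABSI, yet severity explains
# the entire difference; only risk adjustment (or randomization) reveals this.
print(deaths_clabsi / n_clabsi)      # ≈ 0.30
print(deaths_no_clabsi / n_no)       # ≈ 0.02
```

The toy model overstates the point for clarity, but it shows why unadjusted extrapolations of "deaths caused by CLABSI" are not a substitute for risk-adjusted or randomized data.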

Discussion

Review of the quality movement reveals a dizzying array of pseudo-regulatory organizations and ever-changing programs and guidelines. The National Guidelines Clearinghouse lists in excess of 15,000 guidelines (39). This explosion of directives appears to undergo little to no oversight with no checks to ensure that these guidelines are evidence-based.

This manuscript has reviewed some of the more prominent programs to improve healthcare and found them sadly wanting (Table 1). Overall the science has been poor and evidence of improvement in patient outcomes lacking. It is unclear if the present programs have improved on the tedious, costly and nonproductive medical audits of the 1970’s (7). Like the audits, present quality programs appear to be more a matter of paper compliance. In the shuffle of paperwork, hospitals and regulatory agencies seem to have lost sight of the fact that the purpose is to improve healthcare, not to fulfill a political or financial goal.

As hospitals struggle to decrease complication rates in order to receive better reimbursement for “value-based performance”, several strategies will likely evolve. One is to lie about the data. According to Meddings (37) this is apparently happening with urinary tract infections, and it clearly was happening with ventilator-associated pneumonia (12). The accuracy of the data submitted by a hospital’s quality manager, under intense scrutiny from the CEO or board to demonstrate the hospital’s success in quality improvement, is rarely questioned if it shows an improvement. This may be particularly true now that many CEOs and managers are operating under incentive systems that tie bonuses to quality performance (14). Tying hospital reimbursement to these measures may induce similar discrepancies. Another rather obvious strategy to increase reimbursement would be to prevent complications by not performing interventions such as placing urinary catheters or central lines. Whether this is happening is not clear but seems likely. It is also unclear whether this would be beneficial or harmful to patients.

It is difficult to argue that a complication might be good for a patient. However, some hospital-acquired infections and readmission rates correlate with improved mortality (38). The reason for this is not clear, but these complications could represent minor, infrequent harms of best practices that benefit the majority of patients.

According to Patrick Conway, CMS Chief Medical Officer and Deputy Administrator for Innovation and Quality, CMS will be reorienting and aligning measures around patient-centered outcomes (40). Readmission rates for certain disorders are already part of CMS’ reimbursement formula, and adjustments based on readmissions for COPD will begin in 2014. It is unclear at this juncture whether other traditional outcome measures such as mortality, morbidity, hospital length of stay and cost will also be considered. These would likely be an improvement over the “value-based performance” measures, many of which either do not correlate, or inversely correlate, with outcomes.

The explosion in the number of groups attempting to improve quality and safety raises the question of how target practices should be selected. It is unclear whether private organizations should be setting a national agenda for change. The 100,000 Lives Campaign allowed IHI to receive credit for many things that would have happened anyway (14). The campaign created a landslide of “brand recognition” for IHI, and undoubtedly led to substantial new revenues and philanthropic dollars. A conflict (or, at the very least, an appearance of conflict) is unavoidable. A federal agency or regulator would not be vulnerable to such concerns.

Professional organizations need to do their part in improving the quality of medical care. Many, if not most, professional organizations have rushed to publish guidelines. Unfortunately, the evidence basis for these guidelines has been little better than that of the IHI, Joint Commission or IOM recommendations. Lee and Vielemeyer (41) found that only 14% of the Infectious Diseases Society of America (IDSA) guidelines are based on level I evidence (data from >1 properly randomized controlled trial). Much of this 14%, as well as the 86% based on lower-level evidence, will eventually be proven wrong (42,43). It is doubtful that other medical societies are performing much better.

Medical journals also need to do their part. Reviewers and editors need to evaluate manuscripts regarding “quality medical care” with the same scientific skepticism applied to other articles. Randomized trials should not only be applied to diagnostic and therapeutic interventions but just as vigorously to the formulation and implementation of guidelines and other interventions designed to improve the quality of medical care. Too often interventions based on weak or no evidence become ingrained in medical practice when papers with questionable methods and/or outcomes are published. Journals should not allow authors to claim that an intervention improves the quality of medical care without a definition of quality and without an accompanying demonstration of improvement in patient-centered outcomes.

Lastly, physicians need to do their part. Physicians should reevaluate their participation in and financial support of medical societies that author or support non- or weakly evidence-based guidelines. They should oppose quality programs introduced into medical practice that are not based on level I evidence (at least one randomized trial). That the IHI was able to introduce a program such as the 100,000 Lives Campaign into hospital practice based on weak or non-evidence-based interventions such as rapid response teams, central line insertion guidelines and ventilator-associated pneumonia guidelines is disturbing. That the IHI was able to declare that implementation of its interventions saved 122,300 lives based on the sloppy collection of self-reported data is equally disturbing. That these interventions persist to this day is perhaps most disturbing of all. As taught by Flexner nearly a century ago, only through application of scientific principles and vigorous review of interventions will the quality of medical care improve.

References

  1. Maindonald J, Richardson AM. This passionate study: a dialogue with Florence Nightingale. Journal of Statistics Education 2004;12(1). Available at: www.amstat.org/publications/jse/v12n1/maindonald.html (accessed 12/5/13).
  2. Flexner A. Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching. Bulletin 4. 1910. New York: Carnegie Foundation.
  3. Lembcke PA. Evolution of the medical audit. JAMA. 1967;199(8):543-50. [CrossRef] [PubMed]
  4. Borus ME, Buntz CG, Tash WR. Evaluating the Impact of Health Programs: A Primer. 1982. Cambridge, MA: MIT Press.
  5. Baker F. Quality assurance and program evaluation. Eval Health Prof. 1983;6(2):149-60. [PubMed]
  6. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83(4):691-729. [PubMed]
  7. Affeldt JE. The new quality assurance standard of the Joint Commission on Accreditation of Hospitals. West J Med. 1980;132:166-70. [PubMed]
  8. Affeldt JE, Roberts JS, Walczak RM. Quality assurance. Its origin, status, and future direction--a JCAH perspective. Eval Health Prof. 1983;6(2):245-55. [PubMed]
  9. Institute for Healthcare Improvement Open School. US Directory. Available at: http://www.ihi.org/offerings/IHIOpenSchool/Chapters/Pages/ChapterDirectoryUnitedStates.aspx (accessed 12/7/13).
  10. Hillman K, Chen J, Cretikos M, Bellomo R, Brown D, Doig G, Finfer S, Flabouris A; MERIT study investigators. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet. 2005;365(9477):2091-7. [PubMed]
  11. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care 2012;4:163-73.
  12. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.
  13. Institute for Healthcare Improvement. IHI announces that hospitals participating in 100,000 lives campaign have saved an estimated 122,300 lives. Available at: http://www.ihi.org/about/news/documents/ihipressrelease_hospitalsin100000livescampaignhavesaved122300lives_jun06.pdf (accessed 12/7/13).
  14. Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: A scientific and policy review. Jt Comm J Qual Patient Saf. 2006;32(11):621-7. [PubMed]
  15. Institute for Healthcare Improvement. 5 million lives campaign. Available at: http://www.ihi.org/about/Documents/5MillionLivesCampaignCaseStatement.pdf (accessed 12/7/13).
  16. DerGurahian J. IHI unsure about impact of 5 Million campaign. Available at: http://www.modernhealthcare.com/article/20081210/NEWS/312109976 (accessed 12/7/13).
  17. Ken Kizer transforming the VA. Available at: http://www.medsphere.com/transforming-the-va (accessed 12/19/13).
  18. Robbins RA. Profiles in medical courage: of mice, maggots and Steve Klotz. Southwest J Pulm Crit Care. 2012;4:71-7.
  19. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348(22):2218-27 [CrossRef] [PubMed]
  20. Robbins RA, Klotz SA. Quality of care in US hospitals. N Engl J Med 2005; 353(17):1860-1861 [CrossRef] [PubMed]
  21. Perlin JB. Prevention in the 21st century: using advanced technology and care models to move from the hospital and clinic to the community and caring. Presented at the First Annual VA Preventive Medicine Training Conference, Albuquerque, NM, August 11, 2003.
  22. Nichol KL, Baken L, Wuorenma J, Nelson A. The health and economic benefits associated with pneumococcal vaccination of elderly persons with chronic lung disease. Arch Intern Med. 1999;159(20):2437-42. [CrossRef] [PubMed]
  23. Fine MJ, Smith MA, Carson CA, Meffe F, Sankey SS, Weissfeld LA, Detsky AS, Kapoor WN. Efficacy of pneumococcal vaccination in adults. A meta-analysis of randomized controlled trials. Arch Intern Med. 1994;154:2666-77. [CrossRef] [PubMed]
  24. Dear K, Holden J, Andrews R, Tatham D. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Syst Rev. 2003:CD000422. [PubMed]
  25. Huss A, Scott P, Stuck AE, Trotter C, Egger M. Efficacy of pneumococcal vaccination in adults: a meta-analysis. CMAJ 2009;180:48-58. [CrossRef] [PubMed]
  26. Robbins RA. August 2013 pulmonary journal club: pneumococcal vaccine déjà vu. Southwest J Pulm Crit Care. 2013;7(2):131-4.[CrossRef]
  27. Institute of Medicine. Available at: http://www.iom.edu/About-IOM.aspx (accessed 12/20/13).
  28. Kohn LT, Corrigan J, Donaldson MS, Institute of Medicine (U.S.) Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Pr; 1999.
  29. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, Newhouse JP, Weiler PC, Hiatt HH. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-6. [CrossRef] [PubMed] 
  30. Thomas EJ, Studdert DM, Burstin HR, Orav EJ, Zeena T, Williams EJ, Howard KM, Weiler PC, Brennan TA. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38(3):261-71. [CrossRef] [PubMed] 
  31. Hofer TP, Kerr EA, Hayward RA. What is an error? Eff Clin Pract. 2000;3(6):261-9. [PubMed]
  32. Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA. 2002;288(4):501-7. [CrossRef] [PubMed] 
  33. Dentali F, Douketis JD, Gianni M, Lim W, Crowther MA. Meta-analysis: anticoagulant prophylaxis to prevent symptomatic venous thromboembolism in hospitalized medical patients. Ann Intern Med. 2007;146(4):278-88. [CrossRef] [PubMed] 
  34. Marik PE, Vasu T, Hirani A, Pachinburavan M. Stress ulcer prophylaxis in the new millennium: a systematic review and meta-analysis. Crit Care Med. 2010;38(11):2222-8. [CrossRef] [PubMed] 
  35. AHRQ Patient Safety Project Reduces Bloodstream Infections by 40 Percent. September 2012. Agency for Healthcare Research and Quality, Rockville, MD. Available at: http://www.ahrq.gov/news/newsroom/press-releases/2012/20120910.html  (accessed 12/23/13).
  36. Central Line Insertion Care Team Checklist. May 2009. Agency for Healthcare Research and Quality, Rockville, MD. Available at: http://www.ahrq.gov/professionals/quality-patient-safety/patient-safety-resources/resources/cli-checklist.html (accessed 12/23/13).
  37. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12. [CrossRef] [PubMed]
  38. Robbins RA, Gerkin RD. Comparisons between Medicare mortality, morbidity, readmission and complications. Southwest J Pulm Crit Care. 2013;6(6):278-86. 
  39. National Guideline Clearinghouse. Agency for Healthcare Research and Quality, Rockville, MD. Available at: http://www.guideline.gov/browse/by-topic.aspx (accessed 12/23/13).
  40. Conway PH, Mostashari F, Clancy C. The future of quality measurement for improvement and accountability. JAMA. 2013;309(21):2215-6. [CrossRef] [PubMed]
  41. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind infectious diseases society of America practice guidelines. Arch Intern Med. 2011;171:18-22.  [CrossRef]  [PubMed]
  42. Prasad V, Vandross A, Toomey C, et al. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc. 2013;88(8):790-8. [CrossRef] [PubMed] 
  43. Villas Boas PJ, Spagnuolo RS, Kamegasawa A, et al. Systematic reviews showed insufficient evidence for clinical practice in 2004: what about in 2011? The next appeal for the evidence-based medicine age. J Eval Clin Pract. 2013;19(4):633-7. [CrossRef] [PubMed] 

Reference as: Robbins RA. The unfulfilled promise of the quality movement. Southwest J Pulm Crit Care. 2014;8(1):50-63. doi: http://dx.doi.org/10.13175/swjpcc181-13 PDF

Monday
September 23, 2013

A Comparison Between Hospital Rankings and Outcomes Data

Richard A. Robbins, MD*

Richard D. Gerkin, MD  

 

*Phoenix Pulmonary and Critical Care Research and Education Foundation, Gilbert, AZ

Banner Good Samaritan Medical Center, Phoenix, AZ

 

Abstract

Hospital rankings have become common but the agreement between the rankings and correlation with patient-centered outcomes remains unknown. We examined the ratings of Joint Commission on Healthcare Organizations (JCAHO), Leapfrog, and US News and World Report (USNews), and outcomes from Centers for Medicare and Medicaid Hospital Compare (CMS) for agreement and correlation. There was some correlation among the three “best hospitals” ratings.  There was also some correlation between “best hospitals” and CMS outcomes, but often in a negative direction.  These data suggest that no one “best hospital” list identifies hospitals that consistently attain better outcomes.

Introduction

Hospital rankings are being published by a variety of organizations. These rankings are used by hospitals to market the quality of their services. Although all the rankings hope to identify “best” hospitals, they differ in methodology. Some emphasize surrogate markers; some emphasize safety, i.e., a lack of complications; some factor in the hospital’s reputation; some factor in patient-centered outcomes. However, most do not emphasize traditional outcome measures such as morbidity, mortality, length of stay and readmission rates. None factor in cost or expenditures on patient care.

We examined three common hospital rankings and clinical outcomes. We reasoned that if the rankings are valid then better hospitals should be consistently on these best hospital lists. In addition, better hospitals should have better outcomes.

Methods

CMS

Outcomes data were obtained from the CMS Hospital Compare website from December 2012-January 2013 (1). The CMS website presents data on three diseases: myocardial infarction (MI), congestive heart failure (CHF) and pneumonia. We examined readmissions, complications and deaths for each of these diseases. We did not examine all process of care measures since many have not been shown to correlate with improved outcomes, and patient satisfaction has been shown to correlate with higher hospital admission rates, higher overall health care expenditures, and increased mortality (2). In some instances the CMS website does not present actual data but reports only whether a hospital was higher, lower or no different from the national average. In these cases, the categories were scored 2=higher, 1=no different and 0=lower.
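
The categorical coding just described can be sketched in a few lines. This is an illustrative sketch only; the function and label names below are ours, not CMS's.

```python
# Hypothetical sketch of the scoring described above: CMS reports each
# hospital as higher, lower, or no different from the national average,
# and these categories are coded 2, 0, and 1 respectively.
CMS_SCORE = {"higher": 2, "no different": 1, "lower": 0}

def score_outcomes(labels):
    """Convert CMS category labels to ordinal scores for analysis."""
    return [CMS_SCORE[label.lower()] for label in labels]

print(score_outcomes(["Higher", "No different", "Lower"]))  # [2, 1, 0]
```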

Mortality is the 30-day estimate of deaths from any cause within 30 days of a hospital admission for patients hospitalized with one of several primary diagnoses (MI, CHF, and pneumonia). Mortality was reported regardless of whether the patient died while still in the hospital or after discharge. Similarly, the readmission rates are 30-day estimates of readmission for any cause to any acute care hospital within 30 days of discharge. The mortality and readmission rates were adjusted for patient characteristics including the patient’s age, gender, past medical history, and other diseases or conditions (comorbidities) present at hospital arrival that are known to increase the risk of dying or readmission.

The rates of a number of complications are also listed in the CMS data base (Table 1).

Table 1. Complications examined that are listed in CMS data base.

CMS calculates the rate for each serious complication by dividing the actual number of outcomes at each hospital by the number of eligible discharges for that measure at each hospital, multiplied by 1,000. The composite value reported on Hospital Compare is the weighted average of the component indicators. The measures of serious complications reported are risk adjusted to account for differences in hospital patients’ characteristics. In addition, the rates reported on Hospital Compare are “smoothed” to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than for larger hospitals.

Similar to the serious complication measures, CMS calculates the hospital-acquired infection data from the claims hospitals submit to Medicare. The rate for each hospital-acquired infection measure is calculated by dividing the number of infections that occur within any given eligible hospital by the number of eligible Medicare discharges, multiplied by 1,000. The hospital-acquired infection rates were not risk adjusted.
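
The two calculations described above, the rate per 1,000 eligible discharges and the weighted composite, amount to simple arithmetic. A minimal sketch follows; the function names and numbers are invented for illustration (the actual component weights are CMS's).

```python
def rate_per_1000(events, eligible_discharges):
    """Complication or infection rate per 1,000 eligible discharges."""
    return events / eligible_discharges * 1000

def weighted_composite(rates, weights):
    """Weighted average of component indicator rates."""
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# e.g. 12 infections among 4,000 eligible Medicare discharges
print(rate_per_1000(12, 4000))                 # 3.0 per 1,000 discharges
print(weighted_composite([2.0, 4.0], [1, 3]))  # 3.5
```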

JCAHO

The JCAHO list of Top Performers on Key Quality Measures™ was obtained from its 2012 list (3). The Top Performers designation is based on an aggregation of accountability measure data reported to the JCAHO during the previous calendar year.

Leapfrog

Leapfrog’s Hospital Safety Scores were obtained from their website during December 2012-January 2013 (4). The score utilizes 26 national performance measures from the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), and the Centers for Medicare and Medicaid Services (CMS) to produce a single composite score that represents a hospital’s overall performance in keeping patients safe from preventable harm and medical errors. The measure set is divided into two domains: (1) Process/Structural Measures and (2) Outcome Measures. Many of the outcome measures are derived from the complications reported by CMS (Table 1). Each domain represents 50% of the Hospital Safety Score. The numerical safety score is then converted into one of five letter grades: "A" denotes the best hospital safety performance, followed in order by "B", "C", "D" and "F". For analysis, these letter grades were converted into numerical grades 1-5, corresponding to letter grades A-F.
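
As a sketch of that conversion (names are ours), the letter grades map to 1-5 with lower numbers better, so group summaries like those reported in the Results can be computed directly:

```python
import statistics

# Letter safety grades mapped to numbers: 1 = "A" (best) through 5 = "F".
GRADE_TO_NUM = {"A": 1, "B": 2, "C": 3, "D": 4, "F": 5}

def mean_sd(grades):
    """Mean and sample standard deviation of a group's numeric grades."""
    nums = [GRADE_TO_NUM[g] for g in grades]
    return statistics.mean(nums), statistics.stdev(nums)

m, sd = mean_sd(["A", "B", "B", "C"])  # mean 2.0, sd about 0.82
```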

US News and World Report

US News and World Report’s (USNews) 2012-13 rankings listed 17 hospitals on their honor roll (5). The rankings are based largely on objective measures of hospital performance, such as patient survival rates, and structural resources, such as nurse staffing levels. Each hospital’s reputation, as determined by a survey of physician specialists, was also factored into the ranking methodology. The USNews top 50 cardiology and pulmonology hospitals were also examined.

Statistical Analysis

Categorical variables such as JCAHO and USNews best hospitals were compared with other data using chi-squared analysis. Spearman rank correlation was used to help determine the direction of the correlations (positive or negative). Significance was defined as p<0.05.
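
The Spearman step can be illustrated in a few lines of pure Python; the published analysis presumably used standard statistical software, and the data and names below are invented. Ranks are averaged over ties, and the rank correlation is the Pearson correlation of the ranks:

```python
def ranks(xs):
    """Ranks of xs (1-based), averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# A concordant ranking gives +1, a discordant one gives -1.
print(spearman_rho([1, 2, 3], [10, 20, 30]))  # 1.0
print(spearman_rho([1, 2, 3], [30, 20, 10]))  # -1.0
```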

Results

Comparisons of Hospital Rankings between Organizations

A large database of nearly 3000 hospitals was compiled for each of the hospital ratings (Appendix 1). The “best hospitals” as rated by the JCAHO, Leapfrog and USNews were compared for correlation between the organizations (Table 2).

Table 2. Correlation of “best hospitals” between different organizations

There was significant correlation between the JCAHO and Leapfrog and Leapfrog and USNews but not between JCAHO and USNews.

JCAHO-Leapfrog Comparison

The Leapfrog grades were significantly better for JCAHO “Best Hospitals” compared to hospitals not listed as “Best Hospitals” (2.26 ± 0.95 vs. 1.85 ± 0.91, p<0.0001). However, there were multiple exceptions. For example, of the 358 JCAHO “Best Hospitals” with a Leapfrog grade, 84 were graded “C”, 11 were graded “D” and one was graded “F”.

JCAHO-USNews Comparison

Of the JCAHO “Top Hospitals” only one was listed on the USNews “Honor Roll”. Of the cardiology and pulmonary “Top 50” hospitals only one and two hospitals, respectively, were listed on the JCAHO “Top Hospitals” list.

Leapfrog-USNews Comparison

The Leapfrog grades of the US News “Honor Roll” hospitals did not significantly differ from those of hospitals not listed on the “Honor Roll” (2.21 ± 0.02 vs. 1.81 ± 0.31, p>0.05). However, the US News “Top 50 Cardiology” hospitals had better Leapfrog grades (2.21 ± 0.02 vs. 1.92 ± 0.14, p<0.05). Similarly, the US News “Top 50 Pulmonary” hospitals had better Leapfrog grades (2.21 ± 0.02 vs. 1.91 ± 0.15, p<0.05).

“Best Hospital” Mortality, Readmission and Serious Complications

The data for the comparison between the hospital rankings and CMS’ readmission rates, mortality rates and serious complications for the JCAHO, Leapfrog, and USNews are shown in Appendix 2, Appendix 3, and Appendix 4 respectively. The results of the comparison of “best hospitals” compared to hospitals not listed as best hospitals are shown in Table 3.

Table 3. Results of “best hospitals” compared to other hospitals for mortality and readmission rates for myocardial infarction (MI), congestive heart failure (CHF) and pneumonia.

Red:  Relationship is concordant (better rankings associated with better outcomes)

Blue:  Relationship is discordant (better rankings associated with worse outcomes)

Note that of 21 total p values for relationships, 12 are non-significant, 6 are concordant and significant, and 6 are discordant and significant.  All 4 of the significant readmission relationships are discordant. All 5 of the significant mortality relationships are concordant. This underscores the disjunction of mortality and readmission. All 3 of the relationships with serious complications are significant, but one of these is discordant. Of the 3 ranking systems, Leapfrog has the least correlation with CMS outcomes (5/7 non-significant).  USNews has the best correlation with CMS outcomes (6/7 significant).  However, 3 of these 6 are discordant.

The USNews “Top 50” hospitals for cardiology and pulmonology were also compared to those hospitals not listed as “Top 50” hospitals for cardiology and pulmonology. Similar to the “Honor Roll” hospitals there was a significantly higher proportion of hospitals with better mortality rates for MI and CHF for the cardiology “Top 50” and for pneumonia for the pulmonary “Top 50”. Both the cardiology and pulmonary “Top 50” had better serious complication rates (p<0.05, both comparisons, data not shown).

Discussion

Lists of hospital rankings have become widespread, but whether these rankings identify better hospitals is unclear. We reasoned that if the rankings were meaningful, there should be widespread agreement between the hospital lists. We did find a level of agreement, but there were exceptions. Hospital rankings should also correlate with patient-centered outcomes such as mortality and readmission rates; overall, that level of agreement was low.

One probable cause of the differences in hospital rankings is the differing methodologies used in determining the rankings. For example, JCAHO uses an aggregation of accountability measures. Leapfrog emphasizes safety, or a lack of complications. US News uses patient survival rates, structural resources such as nurse staffing levels, and the hospital’s reputation. However, the exact methodological data used to formulate the rankings are often vague, especially for the JCAHO and US News rankings. Therefore, it should not be surprising that the hospital rankings differ.

Another probable cause of the differing rankings is the use of selected complications in place of patient-centered outcome measures. Complications are most meaningful when they negatively affect ultimate patient outcomes. Some complications, such as objects accidentally left in the body after surgery, air bubbles in the bloodstream or mismatched blood transfusions, are undesirable but very infrequent. It is unlikely that a slight, though significant, increase in these complications would affect more global measures such as mortality or readmission rates. The overall poor correlation of these complications with deaths and readmissions in the CMS database is consistent with this concept.

Some of the surrogate complication rates are clearly evidence-based, but some are clearly not. For example, many of the central line-associated infection and ventilator-associated pneumonia guidelines used are non-evidence based (6,7). Furthermore, overreaction to correct some of the complications, such as “signs of uncontrolled blood sugar”, may be potentially harmful. This complication could be interpreted as calling for tight control of the blood sugar. Unfortunately, when rigorously studied, patients with tight glucose control actually had an increase in mortality (8).

In some instances a complication was associated with improved outcomes. Although the reason for this discordant correlation is unknown, it is possible that the complication may occur as a result of better care. For example, catheterization of a central vein for rapid administration of fluids, drugs, blood products, etc. may result in better outcomes or quality but will increase the central line-associated bloodstream infection rate. In contrast, not inserting a catheter when appropriate might lead to worse outcomes or poorer quality but would improve the infection rate.

Many of the rankings are based, at least in part, on complication data self-reported by the hospitals to CMS. However, the accuracy of these data has been called into question (9,10). Meddings et al. (10) studied urinary tract infections self-reported by hospitals using claims data. According to Meddings (10), the data were “inaccurate” and were “not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors proposed that nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. Inaccurate data may explain the lack of correlation between a complication and outcomes in the CMS database.

The sole source of mortality and readmission data in this study was CMS. This is limited to Medicare and Medicaid patients but is probably representative of the general population in an acute care hospital. However, also included on the CMS website is a dizzying array of measures. We did not analyze every measure but analyzed only those listed in Table 1. Whether examination of other measures would correlate with mortality and readmission rates is unclear.

There are several limitations to our data. First and foremost, the CMS data are self-reported by hospitals, and their validity and accuracy have been called into question. Second, data are missing in multiple instances. For example, much of the data from Maryland was not present, and there were multiple instances when the data were “unavailable” or the “number of cases are too small”. Third, in some instances CMS did not report actual data but only higher, lower or no different from the national average. This loss of information may have led to inaccurate analyses. Fourth, much of the data are from surrogate markers, which is important since surrogate markers have not been shown to predict outcomes. This is also puzzling since patient-centered outcomes are available. Fifth, much of the outcomes data is derived from CMS, which to a large extent excludes Veterans Administration, pediatric, mental health and some other specialty facilities.

It is unclear if any of the hospital rankings should be used by patients or healthcare providers when choosing a hospital. At present the rankings appear to rely too heavily on surrogate markers, many of which are weakly evidence-based. Furthermore, categorizing the data as average, below average or above average may lead to inaccurate interpretation, and the accuracy of the underlying data is unclear. Lastly, the lack of data on length of stay and some major morbidities is a major weakness. We as physicians need to scrutinize these measurement systems and insist on greater methodological rigor and more relevant criteria. Until these shortcomings are addressed, we cannot recommend the use of hospital rankings by patients or providers.

References

  1. http://www.medicare.gov/hospitalcompare/ (accessed 6/12/13).
  2. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405-11. [CrossRef] [PubMed]
  3. http://www.jointcommission.org/annualreport.aspx (accessed 6/12/13).
  4. http://www.hospitalsafetyscore.org (accessed 6/12/13).
  5. http://health.usnews.com/best-hospitals (accessed 6/12/13).
  6. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care. 2011;3:40-8.
  7. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care. 2012;4:163-73.
  8. NICE-SUGAR Study Investigators. Intensive versus conventional insulin therapy in critically ill patients. N Engl J Med. 2009;360:1283-97. [CrossRef] [Pubmed]
  9. Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care. 2012;5:203-5.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12. [CrossRef] [PubMed]

Reference as: Robbins RA, Gerkin RD. A comparison between hospital rankings and outcomes data. Southwest J Pulm Crit Care. 2013;7(3):196-203. doi: http://dx.doi.org/10.13175/swjpcc076-13 PDF

Friday
August 9, 2013

Profiles in Medical Courage: John Snow and the Courage of Conviction

Richard A. Robbins, M.D.1

Stephen A. Klotz, M.D.2 

1Phoenix Pulmonary and Critical Care Research and Education Foundation, Gilbert, AZ

2Division of Infectious Diseases, University of Arizona, Tucson, AZ

 

Abstract

The story of John Snow’s removal of the handle of the Broad Street pump stopping the London cholera outbreak of 1854 has reached near legendary status. In this review we examine Snow’s life and conclude that the removal of the pump handle causing the end of the epidemic is largely myth. However, Snow was a clever man with eclectic medical interests. He not only founded the field of epidemiology but did early pioneering work in resuscitation and anesthesia. It is likely that his self-experimentation with anesthetic gases may have contributed to his early death at age 45. Largely forgotten during his own time, he is now correctly remembered as a smart physician and scientist with the conviction to pursue what he believed to be right.

 

Introduction

A poll taken by a UK medical magazine for hospital doctors named John Snow (Figure 1) the greatest physician of all time (1).

Figure 1. John Snow in 1857.

Hippocrates was second. However, a recent equally unscientific poll of medical students revealed that none had heard of Snow (Robbins RA, unpublished observations), prompting the writing of this Profile in Medical Courage. Snow’s work was largely ignored in his own time (2). The 19th century British medical establishment was in general fiercely opposed to his views on cholera and favored the “miasma”, or bad air, theory of how cholera was spread (2,3). His somewhat less than friendly personality and early death contributed to his lack of recognition. However, starting in the 1930s with the republication of Snow’s most famous text, “On the Mode of Communication of Cholera”, the story of John Snow and cholera has become a founding chronicle of public health (4). Snow’s contributions have been so important that he has been termed the founder of epidemiology (2-4).

Many have heard the famous story of the Broad Street pump and how the removal of the pump handle stopped the cholera epidemic of 1854, although apparently not our medical students. Snow's rise to iconic status originated not only because of his founding the science of epidemiology, but also his pioneering work on resuscitation and anesthesia (2). During his short lifetime, Snow contributed 107 publications to the scientific literature (2). In this review, we examine the life of Snow, his publications and his work on cholera, anesthesia and resuscitation.

 

Early Years

John Snow was born into a family of modest means on March 15, 1813 in York, England (2). His father was a laborer in the neighboring coal yard. Beginning about age 6, Snow attended a private, common day school in York. There were few public schools and common day schools were intended to educate the poor. He attended school until he was 14 and received basic education in reading, writing, arithmetic and the Scriptures. He was noted to be a good student with mathematics and natural history his favorite subjects.

Usually the children of poor families left home as early as possible to earn their own livings. How Snow’s family could afford to send him to a private school, even one intended for the poor, remains a mystery. Some suggest the money may have come from his maternal uncle, Charles Empson, a prosperous book merchant first in Newcastle-upon-Tyne and later in Bath (2). However, others suggest Snow’s ambitious father, William, was the likely source (2). At about the time Snow began school, his father began delivering goods by horse-drawn carriage that arrived by river. He saved his money and purchased rental property and eventually a farm. Such upward financial mobility was unusual in 19th century England.

 

Medical Apprenticeships

Medical education in the early 1800’s was markedly different from today (2). Only two universities, Oxford and Cambridge, granted medical degrees leading to licensure. Snow’s modest means meant he could attend neither. The other route to licensure, and the one taken by Snow, was to apprentice with a surgeon and apothecary (pharmacist). Eventually the apprentice could take the licensing test given by the Royal Colleges of Physicians and of Surgeons and the Worshipful Society of Apothecaries to obtain medical licensure. 

Snow did three apprenticeships beginning at age 14 that lasted nine years in all. His first was in Newcastle-upon-Tyne with William Hardcastle, a general practitioner. Newcastle was 90 miles from York, a fair distance in the first half of the 19th century. Why Snow was sent so far from home for his first apprenticeship is likely explained by Hardcastle’s friendship with Snow's uncle, Charles Empson: Empson was a witness at Hardcastle’s wedding and executor of his will. Hardcastle was an established practitioner of good reputation in the Newcastle area who was thirty-one years old at the outset of Snow's apprenticeship.

Snow's apprenticeship with Hardcastle lasted six years. Not only were the foundations of his medical training developed but the first indications of his independent nature became evident. During the third year of his apprenticeship when he was 17 years old, he became a vegetarian and teetotaler. Snow found time to attend classes at what would eventually become a modern medical school. He also developed his interest in cholera. In 1831, when he was 18 years old, Hardcastle sent Snow to provide medical assistance to the local coal miners and their families who were victims of a cholera outbreak. Years later Snow wrote, "That the men [who work in coal pits] are occasionally attacked whilst at work I know, from having seen them brought up from some of the coal-pits ... after having had profuse discharges from the stomach and bowels, and when fast approaching to a state of collapse” (2).

At age 20 Snow went to Burnop Field, a village near Newcastle, and became an assistant to John Watson, a rural apothecary. He apparently had little in common with Mr. Watson and considered his wages very low. He left Burnop Field after only a year and did his next apprenticeship at Pateley Bridge with Joseph Warburton, also a licensed apothecary. Pateley Bridge was and is a small village in a remote region about 30 miles west of York. Snow lived in the large house that served as both home and surgery to Warburton and his family. Snow viewed Warburton with great respect and friendship, later referring to him as his "old master" (2). He remained with Warburton for 18 months.

 

Formal Medical Education

At the age of 23 Snow began his formal education by enrolling for a year in the Hunterian School of Medicine on Great Windmill Street in London's Soho district (2). During this time Snow rented an inexpensive room at 11 Bateman's Buildings, a narrow alleyway several blocks north of the Hunterian Medical School and just south of Soho Square. In the 19th century Soho could be best described as "dodgy" (5). Respectable families had moved away, and prostitutes, music halls and small theatres had moved in. The Hunterian school was privately run and provided lectures, demonstrations and dissections. Shortly after Snow completed his year at Hunterian, the school closed.

In October 1837 at the age of 24, Snow became a registrar at the Westminster Hospital, thereby gaining experience in a hospital setting (2). The hospital was nearly a mile south of his home on Bateman's Buildings, to which he walked most days. The hospital had been reconstructed and enlarged in 1834, and had a good reputation in surgery. After about 18 months, Snow passed his examination to become a Member of the Royal College of Surgeons of England (MRCS), permitting him to practice general medicine. He ranked 7th among the 114 candidates who passed the examination. Snow also passed his Licentiate of the Society of Apothecaries (LSA), ranking 8th on a list of 10, allowing him to prepare and sell drugs and other medicines.  After 12 years of apprenticeship and education he was certified as a general practitioner.

In the mid-1800s, neither the MB nor the MD was necessary to practice general medicine. However, Snow's curiosity, along with a desire for wealthier, more discerning patients, led him to enroll at the newly created University of London Medical School in 1838. While active as a general practitioner, Snow attended the University of London for five years. When he was 30 years old he received the Bachelor of Medicine (MB) degree. A year later, in December 1844, he obtained the Doctor of Medicine (MD) degree, also from the University of London.

 

Early Practice and Academic Pursuits

Snow continued his general practice in Soho, now at a new address about a block south of Soho Square (2). In 1846 he took a position as lecturer in forensic medicine at the Aldersgate School of Medicine, a private medical school in central London. He remained at the school as Lecturer from 1846 to 1849, when the institution closed for lack of funding. In 1850 he passed the examination to become a Licentiate of the Royal College of Physicians (LRCP) of London, admitting him to the most elite tier of the medical profession.

 

Resuscitation

Beginning in about 1837, Snow joined the Westminster Medical Society. Although not a particularly sociable man, Snow was able to interact with his medical colleagues and to present his scientific theories. Snow regarded membership in this organization as the most influential factor in his professional growth (2).

In the 1840s Snow developed an interest in resuscitation. One of his first communications to the Westminster Medical Society described a device for resuscitation of the newborn (6). The instrument was based on the concept of the "pulmotor". Snow assumed that the stimulus for respiration, including in newborns, was hypoxemia. He also speculated that the propulsive action of the blood was in part due to the capillaries, since it seemed unlikely that the heart alone was sufficiently strong to propel the blood through the arteries, capillaries and veins.

 

Anesthesia

Snow's early practice was not particularly successful, at least in part because he was not very personable (2). During his time at the University of London, Snow became increasingly interested in anesthetics. He conducted numerous experiments, on both animals and himself, and invented an improved ether inhaler (2). Air and ether were mixed as vapor at one side of the apparatus and drawn over and around the spiral chamber, to be inhaled by the patient through a mouth-tube fitted with cedarwood ball valves. His work attracted the attention of Robert Liston, the best-known surgeon of the day. Liston, who performed the first operation in Europe using ether, was impressed with the difference between the results of anesthesia administered by Snow and those of less cautious anesthetists. Liston put his ether practice almost entirely into Snow's hands (2). Soon Snow was recognized as the premier anesthetist in London. Although he had practically introduced the use of ether into English surgery, Snow weighed ether against other anaesthetizing agents, particularly chloroform. He administered the latter to Queen Victoria during the births of her last two children, in 1853 and 1857 (2). Snow published the results of his experience with ether in 1847, including the definition of four stages of anesthesia that continue to be recognized in modern times (7).

 

Epidemiology and the Broad Street Pump

By the middle of the 19th century, London along the Thames was a cesspool (Figure 2).

Figure 2. Cartoon of famous English scientist Michael Faraday who wrote a letter to The Times in 1855 complaining of the foul condition of the Thames, which resulted in this cartoon in Punch.

Both human and animal excrement and garbage were placed in cesspits if not thrown directly into the river. However, the cesspits eventually filled and overflowed, draining into the river. Disposal of waste in the river was further complicated because London is close enough to the North Sea that the river level is affected by the tides: sewage flowed downriver at low tide, but twice a day a wall of water carried it back upstream. Snow's neighborhood of Soho was especially bad (5). It had become an insanitary place of cow-sheds, slaughterhouses, grease-boiling dens and primitive, decaying sewers.

When cholera first hit England in late 1831, it was thought to be spread by "miasma in the atmosphere", or bad air (2). The cholera outbreaks seemed to occur where the stench was worst, which was often next to water sources. This is hardly surprising, since acceptance of the germ theory would await the discoveries of Pasteur and Koch in the latter part of the century (Figure 3).

Figure 3. Timeline of some of the seminal infectious disease discoveries of the 19th century. A: 1847-Semmelweis discovers that hand washing decreases the incidence of puerperal fever. B: 1855-Snow publishes On the Mode of Communication of Cholera. C: 1878-Pasteur publishes Microbes Organized, Their Role in Fermentation, Putrefaction and Contagion. D: 1883-Koch identifies the bacterial cause of cholera.

England suffered numerous cholera outbreaks during the 19th century, and whenever cholera broke out nothing could be done to contain it. The disease rampaged through the industrial cities, leaving tens of thousands dead in its wake. At the beginning of the 1854 London epidemic, Soho suffered only a few isolated cases. However, on the night of August 31st what Dr Snow later called "the most terrible outbreak of cholera which ever occurred in the kingdom" began (8). During the next three days, 127 people living in or around Broad Street died. Within a week, most residents had fled their homes, leaving the shops shuttered, the houses locked and the streets deserted. Only those who could not afford to leave remained.

By September 10th, the number of fatal attacks had reached 500. Snow's previous researches had convinced him that cholera "always commences with disturbances of the functions of the alimentary canal" (2). This led him to conclude that it was spread by sewage-tainted water. Snow had traced a recent outbreak in South London to contaminated water supplied by the Southwark and Vauxhall Water Company (9,10). However, no one believed his theory. The water company pooh-poohed his findings, and the authorities were reluctant to accept a theory of fecal-oral contamination.

From the beginning Snow interviewed the families of the victims. His research led him to a pump on the corner of Broad Street and Cambridge Street, at the epicenter of the epidemic. He mapped the deaths of each of the victims (Figure 4).

Figure 4. Panel A. Snow’s map from his 1855 publication (8). Squares indicate deaths from cholera. The pump is indicated by the red arrow and Snow’s residence on Frith Street by the blue arrow (5). Panel B. Enlargement of the green square from Panel A showing the clustering of the deaths surrounding the Broad Street pump.

Snow would write that “… nearly all the deaths had taken place within a short distance of the pump" (8).  He took a sample of water from the pump, and, on examining it under a microscope, found that it contained "white, flocculent particles” (8). By September 7th, he was convinced that these were the source of infection.

Snow was a prominent physician and considered somewhat of an expert on cholera, having published several articles on the disease (9,10). On September 7th he came uninvited to a meeting of the Board of Guardians of St James's Parish, the district served by the Broad Street pump. In England, the parish is the fundamental tier of local government. Dr. Edwin Lankester, a member of a local group that looked into the causes of the Broad Street outbreak and the first medical officer for the St. James's district, later wrote, "The Board of Guardians met to consult as to what ought to be done. Of that meeting, the late Dr. Snow demanded an audience. He was admitted and gave it as his opinion that the pump in Broad Street, and that pump alone, was the cause of all the pestilence.  He was not believed -- not a member of his own profession, not an individual in the parish believed that Snow was right.  But the pump was closed nevertheless and the plague was stayed" (2). The pump handle was famously removed.

 

Aftermath of the 1854 Cholera Outbreak

By the end of September the outbreak was over, leaving 616 residents of Soho dead. However, there were several deaths from cholera that could not at first be linked to the Broad Street pump water -- notably Susannah Eley, a widow living in Hampstead, north of central London, who had died of cholera on September 2nd, and her niece, who had succumbed the following day (8). Neither of these women had been near Soho. Dr Snow traveled to Hampstead to interview the widow's son. Snow learned that the widow had once lived on Broad Street, and that she had liked the taste of the well-water there so much that she had sent her servant down to Soho every day to bring back a large bottle for her by cart. The last bottle had been brought to Hampstead on August 31st, at the very start of the epidemic.

Snow's persistence in obtaining as much data as possible is a remarkable trait, demonstrated by the following incidents (8). Only 5 of the 530 inmates of the Poland Street workhouse, which was just around the corner from the pump, contracted cholera. Snow discovered that few drank the pump water, since the workhouse had its own well. Similarly, among the 70 workers in a brewery on Broad Street there were no fatalities at all: the workers were given a beer allowance and never drank from the well.

Snow's "Grand Experiment" compared cholera in the 1854 epidemic in neighborhoods receiving water from two different companies (8). The Lambeth Company delivered water from the upper Thames, upstream of urban pollution; the Southwark and Vauxhall Company relied on inlets in the heart of London, where contamination of the water with sewage was common. Snow showed the harmful effect of contaminated water in two nearly equivalent populations, and he suggested intervention strategies to control the epidemic. His ideas and observations, including innovative disease maps, were published in his book On the Mode of Communication of Cholera (8). Beginning in the 1930s, Snow's work was republished as a classic of epidemiology, securing his lasting recognition (4).

Snow wrote, "The experiment, too, was on the grandest scale. No fewer than three hundred thousand people of both sexes, of every age and occupation, and of every rank and station, from gentlefolks down to the very poor, were divided into two groups without their choice, and, in most cases, without their knowledge; one group being supplied with water containing the sewage of London, and, amongst it, whatever might have come from the cholera patients, the other group having water quite free from such impurity” (8).

Using a classic 2x2 design, Snow obtained data on the two sets of London households and found that during the 1854 epidemic there were 315 deaths from cholera per 10,000 homes among those supplied by Southwark and Vauxhall but only 37 per 10,000 among those supplied by Lambeth (2). Snow had taken his numbers from a less than precise Parliamentary report, which drew criticism and Snow's own admission that the results were not strong enough to establish that cholera was related to the water supply.
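Snow's comparison can be reproduced from the figures quoted above. A minimal sketch in Python (the variable names and the rate-ratio calculation are ours, for illustration; they are not part of Snow's published analysis):

```python
# Deaths per 10,000 homes in the 1854 epidemic, as quoted in the text
southwark_vauxhall_rate = 315  # company drawing sewage-contaminated water
lambeth_rate = 37              # company drawing water from the upper Thames

# Relative risk of a cholera death by water supplier
rate_ratio = southwark_vauxhall_rate / lambeth_rate
print(f"rate ratio: {rate_ratio:.1f}")  # roughly 8.5-fold higher risk
```

An approximately 8.5-fold difference in death rates between otherwise similar populations is what made the comparison so persuasive in retrospect.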

Still no one believed Snow. A report by the Board of Health a few months later concluded, "We see no reason to adopt this [Snow's] belief" (2). The pump handle was replaced. However, about a year after the epidemic Snow's theories received support from an unexpected source. The Reverend Henry Whitehead, vicar of St Luke's Church, Berwick Street, conducted his own investigation (2). Although Whitehead was originally a believer in the miasma theory, Snow's data convinced him that the pump was the source. Furthermore, Whitehead helped Snow determine the probable cause of the cholera outbreak. Just before the Soho epidemic, a child living at 40 Broad Street had been taken ill with cholera symptoms. The child's diapers had been steeped in water that was subsequently disposed of in a leaking cesspool situated only three feet from the Broad Street well.

Whitehead's findings were published in the architectural journal, The Builder, along with a report on living conditions in Soho (2). "Even in Broad-street it would appear that little has since been done... In St Anne's-Place, and St Anne's-Court, the open cesspools are still to be seen; in the court, so far as we could learn, no change has been made; so that here, in spite of the late numerous deaths, we have all the materials for a fresh epidemic... In some [houses] the water-butts were in deep cellars, close to the undrained cesspool... The overcrowding appears to increase..."  The Builder went on to recommend "the immediate abandonment and clearing away of all cesspools -- not the disguise of them, but their complete removal”.

Nothing was done; the cesspools were not drained. It was likely "The Great Stink of 1858" that finally prompted action (11). During that summer, warm weather combined with a series of low tides to cause such a stench that Parliament was adjourned for a week. Finally, in 1859, the Metropolitan Board of Works, after rejecting many schemes to diminish the Thames' smell, accepted the proposal of Joseph Bazalgette. The intention of this very expensive scheme was to end the cholera epidemics by eliminating the stench (miasma) believed to cause them. Over the next six years the main elements of the London sewerage system were built. As an unintended consequence, the water supply ceased to be contaminated and the repeated cholera epidemics ended.

 

Later Years and Death

Snow's health had never been the best. After receiving his MD from the University of London, Snow suffered from tuberculosis (2). He recovered by spending a good deal of time in the fresh air away from Soho and the Thames. In 1845 he had an acute attack of renal disease. His physician told him to abandon his strict vegetarian diet and to take wine in small quantities; he improved.

Likely of greater significance to his health was his self-experimentation with anesthetics (2). He was the first to carry out experiments on the physiology of anesthesia, and he did not spare himself in investigating every possible substance that might be employed as an anesthetic. The pathologic effects of most of these agents were not known in Snow's time.

A clue to Snow's ill health is a photograph showing swelling of the index finger of his right hand (Figure 5).

Figure 5. Close up of Snow’s right hand taken from Figure 1 showing a swollen index finger.

Such swelling of the fingers has been associated with chronic renal failure. Exposure to anesthetic gases is now known to have numerous adverse health effects, including severe renal damage. In Snow's case, his swollen fingers were likely due to extensive self-experimentation over nearly a decade with a variety of anesthetic agents.

On the evening of June 9, 1858, John Snow joined a group of colleagues to discuss a new bi-aural (i.e., two-earpiece) stethoscope and the cause of the first Korotkoff sound in measuring blood pressure (2). The next morning he suffered a slight stroke while working on On Chloroform and Other Anaesthetics. He recovered, but a few days later suffered another cerebrovascular accident. His housekeeper found him on the floor. He died a few days later.

At autopsy, Snow's kidneys were found to be "shrunken, granular and encysted” (2). While there was also scar tissue in the kidney from old bouts of tuberculosis, it seems likely that his kidney problems arose from anesthetic experimentation.

On June 26, 1858, the following short notice of death appeared in The Lancet (Figure 6).

Figure 6. Snow’s death notice in the Lancet.

A humble obituary for so great a physician.

 

Legacy

Snow's work in so many fields is well documented, although he was not always right. His life is now commemorated in a characteristically English way: a pub near the original site of the Broad Street pump. The pub was renamed for him in 1955, the centenary of Snow's publication of On the Mode of Communication of Cholera (Figure 7).

Figure 7. John Snow pub on Broadwick (formerly Broad) Street in modern London.

It is ironic that Snow, who did not drink alcohol or eat meat for most of his life, should be commemorated by a public house where the menu is not vegetarian and the libations are alcoholic.

It is difficult to overestimate the historical importance of Snow's work. Gro Harlem Brundtland, former Director-General of the World Health Organization, has said, "In historic terms the marriage between science and health is a relatively recent event. Not long ago superstition, magic and astrology were the only weapons our ancestors had to fight diseases and epidemics that haunted the world. They were seen as divine punishments or unfavorable influence of the heavenly bodies. We owe that marriage to the creators of modern bacteriology, epidemiology and therapeutics - to scientists such as Louis Pasteur, Robert Koch, John Snow, Alexander Fleming and Paul Ehrlich - and their discoveries that shaped modern medicine and public health policies. They helped rescue our civilization from the dark ages of the unknown - and the unknown had names such as plagues, cholera or syphilis" (2).

When David Satcher, the former Surgeon General, was faced with a complex public health issue, he would frequently ask, “Where is the handle on this Broad Street pump?” (2).

We celebrate Snow for not "caving in" to popular opinion, even against his more illustrious colleagues; for his persistence in getting at the truth of the matter; and for the courage to put it in print. Yet his real excellence lies in his creativity: he looked at the same events as all his contemporaries and arrived at a novel (and correct) solution, and he held to that conviction when he believed he was right.

References

  1. http://www.ph.ucla.edu/epi/snow.html (accessed 5/7/13).
  2. http://www.ph.ucla.edu/epi/snow.html (accessed 5/7/13).
  3. http://madisonleighrose.wordpress.com/2012/08/27/john-snow-and-the-cholera-myth/ (accessed 5/7/13).
  4. Snow J. Snow on Cholera -- A Reprint of Two Papers by John Snow, M.D. together with A Biographical Memoir by B.W. Richardson, M.D., and an Introduction by Wade Hampton Frost, M.D., Hafner Publishing Company, London, 1965.
  5. Summers J. Soho -- A History of London's Most Colourful Neighborhood, Bloomsbury, London, 1989, pp. 113-117.
  6. Snow J. On asphyxia and on the still-born. London Med Gaz. 1842;1:222-7.
  7. Snow J. On the inhalation of the vapour of ether in surgical operations. Br J Anaesth. 1953;25:53-4. [CrossRef]
  8. Snow J. On the Mode of Communication of Cholera. London: John Churchill,  New Burlington Street, England, 1855
  9. Snow J. On the pathology and mode of communication of cholera: part 1. London Medical Gazette. 1849;44:745-52.
  10. Snow J. On the pathology and mode of communication of cholera: part 2. London Medical Gazette. 1849;44:923-29.
  11. http://en.wikipedia.org/wiki/Great_Stink (accessed 5/7/13).

Reference as: Robbins RA, Klotz SA. Profiles in medical courage: John Snow and the courage of conviction. Southwest J Pulm Crit Care. 2013;7(2):87-99. doi: http://dx.doi.org/10.13175/swjpcc063-13 PDF

Thursday, June 13, 2013

Comparisons between Medicare Mortality, Readmission and Complications

Richard A. Robbins, MD*

Richard D. Gerkin, MD  

 

*Phoenix Pulmonary and Critical Care Research and Education Foundation, Gilbert, AZ

Banner Good Samaritan Medical Center, Phoenix, AZ

 

Abstract

The Center for Medicare and Medicaid Services (CMS) has been a leading advocate of evidence-based medicine. Recently, CMS has begun adjusting payments to hospitals based on hospital readmission rates and “value-based performance” (VBP). Examination of the association of Medicare bonuses and penalties with mortality rates revealed that the hospitals with better mortality rates for heart attacks, heart failure and pneumonia had significantly greater penalties for readmission rates (p<0.0001, all comparisons). A number of specific complications listed in the CMS database were also examined for their correlations with mortality, readmission rates and Medicare bonuses and penalties. These results were inconsistent and suggest that CMS continues to rely on surrogate markers that have little or no correlation with patient-centered outcomes.

Introduction

Implementation of the Affordable Care Act (ACA) emphasized the use of evidence-based measures of care (1). However, the scientific basis for many of these performance measures and their correlation with patient-centered outcomes such as mortality, morbidity, length of stay and readmission rates have been questioned (2-6). Recently, CMS has begun adjusting payments based on readmission rates and “value-based performance” (VBP) (7). Readmission rates and complications are based on claims submitted by hospitals to Medicare (8).

We sought to examine the correlations between mortality, hospital readmission rates, complications and adjustments in Medicare reimbursement. If the system of determining Medicare reimbursements is based on achievement of better patient outcomes, then one hypothesis is that lower readmission rates would be associated with lower mortality.  An additional hypothesis is that complications would be inversely associated with both mortality and readmission rates. 

Methods

Hospital Compare

Data was obtained from the CMS Hospital Compare website from December 2012 to January 2013 (8). The data reflect composite data from all hospitals that have submitted claims to CMS. Although a number of measures are listed, we recorded only readmissions, complications and deaths, since many of the process of care measures have not been shown to correlate with improved outcomes. Patient satisfaction was not examined, since higher patient satisfaction has been shown to correlate with higher hospital admission rates, higher overall health care expenditures, and increased mortality (9). In some instances data are presented in Hospital Compare as higher than, lower than, or no different from the national average. These categories were scored as 0=higher, 1=no different, and 2=lower.
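The conversion from Hospital Compare's categorical ratings to ordinal scores can be sketched as follows (the function name and mapping are ours, chosen to match the 0/1/2 scheme just described; they are not from the CMS database):

```python
def score_rating(rating: str) -> int:
    """Map a Hospital Compare categorical rating to the ordinal score
    used in this analysis: higher than the national average = 0,
    no different = 1, lower = 2 (for mortality and readmissions,
    lower than the national average is better)."""
    mapping = {"higher": 0, "no different": 1, "lower": 2}
    return mapping[rating.strip().lower()]
```

For example, a hospital rated "Lower" than the national average on 30-day mortality would be scored 2.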

Mortality

Mortality was obtained from Hospital Compare as the 30-day estimate of death from any cause within 30 days of a hospital admission for heart attack, heart failure, or pneumonia, regardless of whether the patient died while still in the hospital or after discharge. The mortality rates are adjusted for patient characteristics, including the patient's age, gender, past medical history, and other diseases or conditions (comorbidities) present at hospital arrival that are known to increase the risk of dying.

Readmission Rates

Similarly, the readmission rates are 30-day estimates of readmission for any cause to any acute care hospital within 30 days of discharge. These measures include patients initially hospitalized for heart attack, heart failure, or pneumonia. As with mortality, the readmission rates are adjusted for patient characteristics, including the patient's age, gender, past medical history, and other diseases or conditions (comorbidities) present at hospital arrival that are known to increase the risk of readmission.

Complications

CMS calculates the rate for each complication by dividing the actual number of self-reported outcomes at each hospital by the number of eligible discharges for that measure at that hospital, then multiplying by 1,000. The composite value reported on Hospital Compare is the weighted average of the component indicators. The measures of serious complications are risk adjusted to account for differences in hospital patients' characteristics. In addition, the rates reported on Hospital Compare are "smoothed" to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than those for larger hospitals.

CMS calculates the hospital acquired infection data from the claims hospitals submit to Medicare. The rate for each hospital acquired infection measure is calculated by dividing the number of infections that occur within any given eligible hospital by the number of eligible Medicare discharges, multiplied by 1,000. The hospital acquired infection rates were not risk adjusted by CMS.
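The rate calculation CMS describes is straightforward; a minimal sketch (the function and variable names are ours, and the example figures below are hypothetical, not CMS data):

```python
def rate_per_1000(events: int, eligible_discharges: int) -> float:
    """CMS-style complication/infection rate: the number of reported
    events divided by the number of eligible discharges, multiplied
    by 1,000."""
    if eligible_discharges <= 0:
        raise ValueError("eligible_discharges must be positive")
    return events / eligible_discharges * 1000
```

For example, 12 catheter-associated infections among 4,800 eligible Medicare discharges would give a rate of 2.5 per 1,000.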

In addition to the composite data, individual complications listed in the CMS database were examined (Table 1).

Table 1. Complications examined that are listed in CMS data base.

Objects Accidentally Left in the Body After Surgery

Air Bubble in the Bloodstream

Mismatched Blood Types

Severe Pressure Sores (Bed Sores)

Falls and Injuries

Blood Infection from a Catheter in a Large Vein

Infection from a Urinary Catheter

Signs of Uncontrolled Blood Sugar

 

Medicare Bonuses and Penalties

The CMS data was obtained from Kaiser Health News which had compiled the data into an Excel database (10).

 

Statistical Analysis

Data was reported as mean ± standard error of the mean (SEM). Outcomes of hospitals rated as better were compared to those of hospitals rated as average or worse using Student's t-test. The relationship between continuous variables was assessed using the Pearson correlation coefficient. Significance was defined as p<0.05. All p values reported are nominal, with no correction for multiple comparisons.
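For illustration, the two statistics named above can be computed with the Python standard library alone. This is a sketch of the standard formulas, not the authors' actual code, and any inputs shown are hypothetical:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance, the form
    used to compare 'better' hospitals against 'average or worse' ones."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))
```

In practice a statistics package would also supply the p value for each statistic; the point here is only the shape of the two calculations.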

Results

A large database was compiled for the CMS outcomes and each of the hospital ratings (Appendix 1). There were over 2500 hospitals listed in the database.

Mortality and Readmission Rates

Hospital mortality ratings for heart attack, heart failure and pneumonia were positively correlated with one another (p<0.001, all comparisons). In other words, hospitals with better mortality rates for heart attack tended also to be better mortality performers for heart failure and pneumonia. Surprisingly, the hospitals with better mortality rates for heart attack, heart failure and pneumonia had higher readmission rates for these diseases (p<0.001, all comparisons).

Examination of the association of Medicare bonuses and penalties with mortality rates revealed that the hospitals with better mortality rates for heart attacks, heart failure and pneumonia received the same compensation for value-based performance as hospitals with average or worse mortality rates (Appendix 2, p>0.05, all comparisons). However, these better hospitals had significantly larger penalties for readmission rates (Figure 1, p<0.0001, all comparisons). 

 

Figure 1.  Medicare bonuses and penalties for readmission rates of hospitals with better, average or worse mortality for myocardial infarction (heart attack, Panel A), heart failure (Panel B), and pneumonia (Panel C).

Because total Medicare penalties are the average of the adjustments for VBP and readmission rates, the reduction in reimbursement was reflected in higher total penalties for hospitals with better mortality rates for heart attacks, heart failure and pneumonia (Figure 2, p<0.001, all comparisons).

Figure 2.  Total Medicare bonuses and penalties for readmission rates of hospitals with better, average or worse mortality for myocardial infarction (heart attack, Panel A), heart failure (Panel B), and pneumonia (Panel C).

Mortality Rates and Complications

The rates of a number of complications are also listed in the CMS database (Table 1). Each complication was correlated with the hospital ratings (better, average or worse) for death and readmission for heart attacks, heart failure and pneumonia (Appendix 3). A positive correlation with better mortality ratings was observed only for falls and injuries, and only among hospitals with better death rates from heart failure (p<0.02). Severe pressure sores also differed in hospitals with better mortality ratings for heart attack and heart failure, but the correlation was negative (p<0.05, both comparisons). In other words, hospitals that performed better on mortality performed worse on severe pressure sores. Similarly, hospitals with better mortality rates for heart failure had higher rates of blood infection from a catheter in a large vein compared to hospitals with average mortality rates (p<0.001). None of the remaining complications differed.

Readmission Rates and Complications

A correlation was also performed between complications and hospitals with better, average and worse readmission rates for myocardial infarction, heart failure, and pneumonia (Appendix 4). Infections from a urinary catheter and falls and injuries were more frequent in hospitals with better readmission rates for myocardial infarction, heart failure, and pneumonia compared to hospitals with the worst readmission rates (p<0.02, all comparisons). Hospitals with better readmission rates for heart failure also had more infections from a urinary catheter compared to hospitals with average readmission rates for heart failure (p<0.001). None of the remaining complications differed significantly.

Discussion

The use of "value-based performance" (VBP) has been touted as having the potential to improve care, reduce complications and save money. However, we identified a negative correlation between mortality and readmissions; that is, the hospitals with better mortality rates received larger financial penalties for readmissions and lower total compensation. Furthermore, correlations between complications and hospital ratings for mortality and readmission were inconsistent.

Our data complement and extend the observations of Krumholz et al. (11). These investigators examined the CMS database from 2005-2008 for the correlation between mortality and readmissions. They identified an inverse correlation between mortality and readmission rates for heart failure but not for heart attacks or pneumonia. However, with financial penalties for readmissions now in place, hospital practices may well have changed.

CMS compensating hospitals for lower readmission rates is disturbing, since higher readmission rates correlated with better mortality. This equates to rewarding hospitals for practices that lower readmission rates but increase mortality. The lack of correlation for the other half of the payment adjustment, so-called “value-based purchasing,” is equally disturbing, since it apparently has little correlation with patient outcomes.

Although there is an inverse correlation between mortality and readmissions, this does not prove cause and effect. The causes of the inverse association between readmission and mortality rates are unclear, but the most obvious explanation would be that readmissions benefit patient survival. The reason for the lack of correlation between mortality and readmission rates and most complication rates is also unclear. VBP appears to rely heavily on complications that are generally infrequent and in some cases may be inconsequential. Furthermore, many of the complications are for all intents and purposes self-reported by the hospitals to CMS, since they are based on claims data. However, the accuracy of these data has been called into question (12,13). Meddings et al. (13) studied urinary tract infections. According to Meddings, the data were “inaccurate” and were “not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors proposed that nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. Inaccurate data may explain the lack of correlation between a complication and outcomes in the CMS database.

According to the CMS website, the complications were chosen by “wide agreement from CMS, the hospital industry and public sector stakeholders such as The Joint Commission (TJC), the National Quality Forum (NQF), and the Agency for Healthcare Research and Quality (AHRQ), and hospital industry leaders” (7). However, some complications, such as an air bubble in the bloodstream or mismatched blood types, are quite rare. Others, such as signs of uncontrolled blood sugar, are not evidence-based (14). Still other complications actually correlated with improved mortality or readmission rates. It seems likely that some of the complications might represent more aggressive treatment or could reflect increased clinical care staffing, which has previously been associated with better survival (15).

There are several limitations to our data. First and foremost, the data are derived from CMS Hospital Compare, where the data have been self-reported by hospitals. The validity and accuracy of these data have been called into question (12,13). Second, data are missing in multiple instances. For example, data from Maryland were not present, and there were multiple instances in which the data were “unavailable” or the “number of cases are too small”. Third, in some instances CMS did not report actual data but only whether a hospital was higher, lower, or no different from the National average. Fourth, much of the data are surrogate markers, a fact which is puzzling when patient-centered outcomes are available. In addition, some of these surrogate markers have not been shown to correlate with outcomes.

It is unclear whether CMS Hospital Compare should be used by patients or healthcare providers when choosing a hospital. At present, the dizzying array of data reported appears to overrely on surrogate markers that are possibly inaccurate. The lack of adequate outcomes data, and even the obfuscation of the data by reporting it only as average, below average, or above average, does little to help stakeholders interpret it. The apparent failure to incorporate mortality rates as a component of VBP is another major limitation. Until these shortcomings are addressed, we cannot recommend the use of Hospital Compare by patients or providers.

References

  1. Obama B. Securing the future of American health care. N Engl J Med. 2012; 367:1377-81.
  2. Showalter JW, Rafferty CM, Swallow NA, Dasilva KO, Chuang CH. Effect of standardized electronic discharge instructions on post-discharge hospital utilization. J Gen Intern Med. 2011;26(7):718-23.
  3. Heidenreich PA, Hernandez AF, Yancy CW, Liang L, Peterson ED, Fonarow GC. Get With The Guidelines program participation, process of care, and outcome for Medicare patients hospitalized with heart failure. Circ Cardiovasc Qual Outcomes. 2012;5(1):37-43.
  4. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care. 2012;4:163-73.
  5. Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration Hospital Performance Measures and Outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  6. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care. 2011;3:40-8.
  7. http://www.medicare.gov/HospitalCompare/Data/linking-quality-to-payment.aspx (accessed 4/8/13).
  8. http://www.medicare.gov/hospitalcompare/ (accessed 4/8/13).
  9. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172:405-11.
  10. http://capsules.kaiserhealthnews.org/wp-content/uploads/2012/12/Value-Based-Purchasing-And-Readmissions-KHN.csv (accessed 4/8/13).
  11. Krumholz HM, Lin Z, Keenan PS, Chen J, Ross JS, Drye EE, Bernheim SM, Wang Y, Bradley EH, Han LF, Normand SL. Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587-93. doi: 10.1001/jama.2013.333.
  12. Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care. 2012;5:203-5.
  13. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12.
  14. NICE-SUGAR Study Investigators. Intensive versus conventional insulin therapy in critically ill patients. N Engl J Med. 2009;360:1283-97.
  15. Robbins RA, Gerkin R, Singarajah CU. Correlation between patient outcomes and clinical costs in the VA healthcare system. Southwest J Pulm Crit Care. 2012;4:94-100.

Reference as: Robbins RA, Gerkin RD. Comparisons between Medicare mortality, morbidity, readmission and complications. Southwest J Pulm Crit Care. 2013;6(6):278-86. PDF

Monday, March 4, 2013

In Vitro Versus In Vivo Culture Sensitivities: An Unchecked Assumption?

Vinay Prasad, MD*

Nancy Ho, MD

*Medical Oncology Branch
National Cancer Institute
National Institutes of Health
Bethesda, Maryland
vinayak.prasad@nih.gov

Department of Medicine
University of Maryland

Case Presentation

A patient presents to urgent care with the symptoms of a urinary tract infection (UTI). The urinalysis is consistent with infection, and a urine culture is sent to the lab. In the interim, a physician prescribes empiric treatment and sends the patient home. Two days later, the culture is positive for E. coli resistant to the drug prescribed (ciprofloxacin, minimum inhibitory concentration (MIC) 64 μg/ml), but attempts to contact the patient by telephone are unsuccessful. The patient returns the call two weeks later to say that the infection resolved without sequelae.

Discussion

Many clinicians have experienced treatment success in the setting of known antibiotic resistance and, conversely, treatment failure in the setting of known sensitivity. Such anomalies, and the empiric research described here, force us to revisit assumptions about the relationship between in vivo and in vitro drug responses.

When it comes to the utility of microbiology cultures, other writers have questioned their cost effectiveness and yield (1). Though considered a quality measure by some groups in the United States, routine blood cultures seldom (3.6%) change antibiotic choice in patients who present to the emergency room with the clinical and radiographic signs of pneumonia (2).

The objection here is different, but fundamental: even when culture sensitivities suggest we should change antibiotics, what empirical evidence is there that such changes are warranted? This is by no means a novel doubt. In 1963, at the dawn of in vitro sensitivity techniques, one group questioned their utility to predict clinical outcomes:

“Several objections may be raised…. First, local or host defense mechanisms may act in synergism or antagonism with the antibiotic.  Second, the concentration of antibiotic in tissue fluids, specifically blood, might bear no relation to the concentration at the site of infection…” (3)

And, while substantial pharmacologic progress has been made to ensure proper tissue concentrations, few empirical studies have sought to address the first concern (4). Recent examples suggest the relationship between in vitro and in vivo outcomes may be questionable.

One study of H. pylori tackled this issue (5). Macrolide and metronidazole resistance were determined in the lab, and a urea breath test assessed clinical response. Interestingly, treatment with a clarithromycin regimen failed in 77% of persons with clarithromycin-resistant H. pylori compared with 13% of those with clarithromycin-susceptible isolates (relative risk, 6.2 [CI, 1.9 to 37.1]; P < 0.001). In contrast, treatment with metronidazole-based therapy failed in 11% of those with metronidazole-resistant isolates and 38% of those with metronidazole-susceptible isolates (P > 0.25).
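A relative risk like the one quoted above is simply the ratio of the failure proportions in the resistant and susceptible groups. A minimal sketch, using made-up counts chosen only so the proportions approximate the 77% and 13% in the text (the actual study counts are not reproduced here):

```python
# Hypothetical 2x2 counts; chosen so the failure proportions roughly match
# the 77% vs. 13% quoted in the text. Not the counts from the cited study.
resistant_failures, resistant_total = 10, 13        # 10/13 ~ 77% failure
susceptible_failures, susceptible_total = 13, 100   # 13/100 = 13% failure

risk_resistant = resistant_failures / resistant_total
risk_susceptible = susceptible_failures / susceptible_total

# Relative risk: how many times more likely failure is when the isolate
# is resistant, compared with when it is susceptible.
relative_risk = risk_resistant / risk_susceptible

print(f"RR = {relative_risk:.1f}")
```

The reported confidence interval additionally depends on the group sizes, which is why only the point estimate is sketched here.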

These results suggest that metronidazole susceptibility testing wholly lacks clinical utility, while clarithromycin sensitivity testing may be useful. To fully prove the utility of clarithromycin sensitivity testing, the authors would need to show a higher cure rate with a different regimen, and then demonstrate that upfront screening is preferable to empiric treatment and observation.

Another study suggests that for some organisms and infections— Acanthamoeba keratitis—there exists no relationship at all between in vitro drug sensitivities and the in vivo response (6).

For some conditions, knowing that a causative organism is susceptible in vitro does in fact predict clinical response. For instance, a large study of gram-negative infections treated with cefotaxime found that as the MIC increased from <4 μg/ml to 64 μg/ml (in vitro), the rate of clinical response fell from 91% to 50% (4). Thus, nearly all patients with susceptible organisms (low MIC) were successfully treated. But perhaps what is most interesting about this study is that even resistant organisms were effectively treated in 50% of patients. This finding is supported by work in urinary tract infections, which similarly found a high rate of clinical response (>80%), even among patients whose causative organisms were resistant to the prescribed agents (7).

Basic studies of this kind are required for bacteremia, pneumonia, urinary tract infections, endocarditis, and other infections. To do this work, we must stop using our terms interchangeably: treatment failure must refer to an independent clinical outcome and not be defined circularly as antibiotic resistance. As of today, faith that in vitro results predict in vivo outcomes remains an unchecked assumption whose treatment implications are vast and far-reaching.

References

  1. Glerant JC, Hellmuth D, Schmit JL, Ducroix JP, Jounieaux V. Utility of blood cultures in community-acquired pneumonia requiring hospitalization: influence of antibiotic treatment before admission. Respir Med. 1999;93:208-12.
  2. Kennedy M, Bates DW, Wright SB, Ruiz R, Wolfe RE, Shapiro NI. Do emergency department blood cultures change practice in patients with pneumonia? Ann Emerg Med. 2005;46:393-400.
  3. Petersdorf RG, Plorde JJ. The usefulness of in vitro sensitivity tests in antibiotic therapy. Annu Rev Med. 1963;14:41-56.
  4. Doern GV, Brecher SM. The Clinical Predictive Value (or Lack Thereof) of the Results of In Vitro Antimicrobial Susceptibility Tests. J Clin Microbiol. 2011;49:S11-S4.
  5. McMahon BJ, Hennessy TW, Bensler JM, et al. The relationship among previous antimicrobial use, antimicrobial resistance, and treatment outcomes for Helicobacter pylori infections. Ann Intern Med. 2003;139:463-9.
  6. Perez-Santonja JJ, Kilvington S, Hughes R, Tufail A, Matheson M, Dart JK. Persistently culture positive acanthamoeba keratitis: in vivo resistance and in vitro sensitivity. Ophthalmology. 2003;110:1593-600.
  7. Alizadeh Taheri P, Navabi B, Shariat M. Neonatal urinary tract infection: clinical response to empirical therapy versus in vitro susceptibility at Bahrami Children's Hospital- Neonatal Ward: 2001-2010. Acta Med Iran. 2012;50:348-52.

Reference as: Prasad V, Ho N. In vitro versus in vivo culture sensitivities: an unchecked assumption? Southwest J Pulm Crit Care. 2013;6(3):125-7. PDF