
General Medicine


Tacrolimus-Associated Diabetic Ketoacidosis: A Case Report and Literature 
Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings
Publish or Perish: Tools for Survival
Is Quality of Healthcare Improving in the US?
Survey Shows Support for the Hospital Executive Compensation Act
The Disruptive Administrator: Tread with Care
A Qualitative Systematic Review of the Professionalization of the 
   Vice Chair for Education
Nurse Practitioners' Substitution for Physicians
National Health Expenditures: The Past, Present, Future and Solutions
Credibility and (Dis)Use of Feedback to Inform Teaching: A Qualitative
   Case Study of Physician-Faculty Perspectives
Special Article: Physician Burnout-The Experience of Three Physicians
Brief Review: Dangers of the Electronic Medical Record
Finding a Mentor: The Complete Examination of an Online Academic 
   Matchmaking Tool for Physician-Faculty
Make Your Own Mistakes
Professionalism: Capacity, Empathy, Humility and Overall Attitude
Professionalism: Secondary Goals 
Professionalism: Definition and Qualities
Professionalism: Introduction
The Unfulfilled Promise of the Quality Movement
A Comparison Between Hospital Rankings and Outcomes Data
Profiles in Medical Courage: John Snow and the Courage of
Comparisons between Medicare Mortality, Readmission and 
In Vitro Versus In Vivo Culture Sensitivities:
   An Unchecked Assumption?
Profiles in Medical Courage: Thomas Kummet and the Courage to
   Fight Bureaucracy
Profiles in Medical Courage: The Courage to Serve
   and Jamie Garcia
Profiles in Medical Courage: Women’s Rights and Sima Samar
Profiles in Medical Courage: Causation and Austin Bradford Hill
Profiles in Medical Courage: Evidence-Based 
   Medicine and Archie Cochrane
Profiles of Medical Courage: The Courage to Experiment and 
   Barry Marshall
Profiles in Medical Courage: Joseph Goldberger,
   the Sharecropper’s Plague, Science and Prejudice
Profiles in Medical Courage: Peter Wilmshurst,
   the Physician Fugitive
Correlation between Patient Outcomes and Clinical Costs
   in the VA Healthcare System
Profiles in Medical Courage: Of Mice, Maggots 
   and Steve Klotz
Profiles in Medical Courage: Michael Wilkins
   and the Willowbrook School
Relationship Between The Veterans Healthcare Administration
   Hospital Performance Measures And Outcomes 


Although the Southwest Journal of Pulmonary and Critical Care was started as a pulmonary/critical care/sleep journal, we have received and continue to receive submissions that are of general medical interest. For this reason, a new section entitled General Medicine was created on 3/14/12. Some articles were moved from pulmonary to this new section since it was felt they fit better into this category.




Nurse Practitioners' Substitution for Physicians

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ USA



Background: To deal with a physician shortage and reduce salary costs, nurse practitioners (NPs) are seeing increasing numbers of patients, especially in primary care. In Arizona, SB1473 has been introduced in the state legislature; it would expand the scope of practice for NPs and nurse anesthetists to make them fully independent practitioners. However, whether nurses provide equal quality of care at similar cost is unclear.

Methods: Relevant literature was reviewed and physician and nurse practitioner education and care were compared. Included were study design and metrics, quality of care, and efficiency of care.

Results: NPs and physicians differ in the length of their education. Most clinical studies comparing NP and physician care were poorly designed, often comparing metrics such as patient satisfaction. While increased care provided by NPs has the potential to reduce direct healthcare costs, achieving such reductions depends on the particular context of care. In a minority of clinical situations, NPs appear to have increased costs compared to physicians. Savings in cost depend on the magnitude of the salary differential between doctors and NPs, and may be offset by lower productivity and more extensive testing by NPs compared to physicians.

Conclusions: The findings suggest that in most primary care situations NPs can provide care of as high quality as primary care physicians. However, this conclusion should be viewed with caution given that the studies assessing equivalence of care were generally underpowered and many had methodological limitations.

Physician Compared to NP Education

Physicians have a longer training process than NPs which is based in large part on history. In 1908 the American Medical Association asked the Carnegie Foundation for the Advancement of Teaching to survey American medical education, so as to promote a reformist agenda and hasten the elimination of medical schools that failed to meet minimum standards (1). Abraham Flexner was chosen to prepare a report. Flexner was not a physician, scientist, or a medical educator but operated a for-profit school in Louisville, KY. At that time, there were 155 medical schools in North America that differed greatly in their curricula, methods of assessment, and requirements for admission and graduation.

Flexner visited all 155 schools and generalized about them as follows: "Each day students were subjected to interminable lectures and recitations. After a long morning of dissection or a series of quiz sections, they might sit wearily in the afternoon through three or four or even five lectures delivered in methodical fashion by part-time teachers. Evenings were given over to reading and preparation for recitations. If fortunate enough to gain entrance to a hospital, they observed more than participated."

At the time of Flexner's survey many American medical schools were small trade schools owned by one or more doctors, unaffiliated with a college or university, and run to make a profit. Only 16 out of 155 medical schools in the United States and Canada required applicants to have completed two or more years of university education. Laboratory work and dissection were not necessarily required. Many of the instructors were local doctors teaching part-time, whose own training often left something to be desired. A medical degree was typically awarded after only two years of study.

Flexner used the Johns Hopkins School of Medicine as a model. His 1910 report, known as the Flexner report, issued the following recommendations:

  • Reduce the number of medical schools (from 155 to 31);
  • Reduce the number of poorly trained physicians;
  • Increase the prerequisites to enter medical training;
  • Train physicians to practice in a scientific manner and engage medical faculty in research;
  • Give medical schools control of clinical instruction in hospitals;
  • Strengthen state regulation of medical licensure.

Flexner recommended that admission to a medical school should require, at minimum, a high school diploma and at least two years of college or university study, primarily devoted to basic science. He also argued that the length of medical education should be four years, and that its content should conform to recommendations made by the American Medical Association in 1905. Flexner recommended that the proprietary medical schools should either close or be incorporated into existing universities. Medical schools should be part of a larger university, because a proper stand-alone medical school would have to charge too much in order to break even financially.

By and large medical schools followed Flexner's recommendations. An important factor driving the mergers and closures of medical schools was that all state medical boards gradually adopted and enforced the Report's recommendations. As a result the following consequences occurred (2):

  • Between 1910 and 1935, more than half of all American medical schools merged or closed. This dramatic decline was in some part due to the implementation of the Report's recommendation that all "proprietary" schools be closed, and that medical schools should henceforth all be connected to universities. Of the 66 surviving MD-granting institutions in 1935, 57 were part of a university.
  • Physicians receive at least six, and usually eight, years of post-secondary formal instruction, nearly always in a university setting;
  • Medical training adhered closely to the scientific method and was grounded in human physiology and biochemistry;
  • Medical research adhered to the protocols of scientific research;
  • Average physician quality increased significantly.

The Report is now remembered because it succeeded in creating a single model of medical education, characterized by a philosophy that has largely survived to the present day.

Today, physicians usually have a college degree, 4 years of medical school and at least 3 years of residency. This totals 11 years after high school.

The history of NP education is much more recent. A Master of Science in Nursing (MSN) is the minimum degree requirement for becoming a NP (3). This usually requires a bachelor of science in nursing and approximately 18 to 24 months of full-time study. Nearly all programs are university-affiliated and most faculty are full-time. The curricula are standardized.

NPs have a Bachelor of Science in Nursing followed by 1 1/2 to 2 years of full-time study. This totals 5 1/2 to 6 years of education after high school.

Differences and Similarities Between Physician and NP Education

Curricula for both physicians and nurses are standardized and scientifically based. The length of training is considerably longer for physicians (about 11 years compared to 5 1/2-6 years). There are also likely differences in clinical exposure. The minimum for a NP is 500 hours of supervised, direct patient care (3). Physicians have considerably more clinical time. All physicians are required to complete at least 3 years of post-graduate education after medical school. Duty time is now limited to 80 hours per week, but older physicians can remember when 100+ hour weeks were common. Given a conservative estimate of 50 hours/week for 48 weeks/year, this would give physicians a total of at least 7,200 clinical hours over 3 years.
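The hour totals above follow from simple arithmetic; the following sketch uses only the estimates stated in the text:

```python
# Back-of-the-envelope comparison of minimum supervised clinical hours,
# using only the estimates stated in the text (500 hours for NPs; a
# conservative 50 h/week for 48 weeks/year over a 3-year residency).
NP_CLINICAL_HOURS = 500

HOURS_PER_WEEK = 50    # conservative; well under the weekly duty-hour cap
WEEKS_PER_YEAR = 48
RESIDENCY_YEARS = 3

resident_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR * RESIDENCY_YEARS  # 7200
ratio = resident_hours / NP_CLINICAL_HOURS  # residents log ~14x the NP minimum
```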

Hours of Education and Outcomes

The critical question is whether the number of hours NPs spend in education is sufficient. No studies were identified examining the effect of number of hours of NP education on outcomes. However, the impact of recent resident duty hour restrictions may be relevant.

Resident Duty Hour Regulations

There are concerns about the reduction in resident duty hours. The idea behind the duty hour restrictions was that well-rested physicians would make fewer mistakes and spend more time studying. The regulations resulted in large part from the infamous Libby Zion case; Zion died in New York at the age of 18 under the care of a resident and an intern because of a drug-drug interaction resulting in serotonin syndrome (4). It was alleged that physician fatigue contributed to Zion's death. In response, New York state initially limited resident duty hours to 80 per week, and in July 2003 the Accreditation Council for Graduate Medical Education adopted similar regulations for all accredited medical training institutions in the United States. The regulations were further tightened in 2011.

The duty hour regulations were adopted despite a lack of studies on their impact, and studies are just beginning to emerge. A recent meta-analysis of 27 studies on duty hour restrictions demonstrated no improvements in patient care or resident well-being and a possible negative impact on resident education (5). Similarly, an analysis of 135 articles also concluded there was no overall improvement in patient outcomes as a result of resident duty hour restrictions; however, some studies suggest increased complication rates in high-acuity patients (6). There was no improvement in education, and performance on certification examinations has declined in some specialties (5,6). Survey studies revealed a perception of worsened education and patient safety, although there were improvements in resident wellness (5,6).

Although the reasons for the lack of improvement (and perhaps decline) in outcomes under the resident duty hour restrictions are unclear, several authors have speculated that the loss of continuity of care resulting from different physicians caring for a patient may be responsible (7). If true, the decline may have little to do with medical education or experience; rather, the duty hour restrictions fragmented care, and that fragmentation caused poorer care.

Comparison Between Physician and NP Care In Primary Care

A 2005 meta-analysis by Laurant et al. (8) compared physician and NP primary care. In five studies the nurse assumed responsibility for first contact care for patients wanting urgent outpatient visits. Patient health outcomes were similar for nurses and doctors but patient satisfaction was higher with nurse-led care. Nurses tended to provide longer consultations, give more information to patients and recall patients more frequently than doctors. The impact on physician workload and direct cost of care was variable. In four studies the nurse took responsibility for the ongoing management of patients with particular chronic conditions. In general, no appreciable differences were found between doctors and nurses in health outcomes for patients, process of care, resource utilization or cost.

However, Laurant et al. (8) advised caution since only one study was powered to assess equivalence of care, many studies had methodological limitations, and patient follow-up was generally 12 months or less. They also noted lower NP productivity compared to physicians (Figure 1).


Figure 1. Median ambulatory encounters per year (9).

The lower number of visits by NPs implies that cost savings would depend on the magnitude of the salary differential between physicians and nurses, and might be offset by the lower productivity of nurses compared to physicians.
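As a hypothetical illustration of this point, a salary differential can shrink substantially on a per-visit basis once lower productivity is taken into account. All figures below are invented for illustration, not drawn from the cited studies:

```python
# Hypothetical illustration: a salary differential can be offset by lower
# productivity. All figures below are invented for illustration only.
physician_salary, physician_visits = 200_000, 4_000   # per year
np_salary, np_visits = 110_000, 2_500                 # per year

cost_per_visit_md = physician_salary / physician_visits  # 50.0
cost_per_visit_np = np_salary / np_visits                # 44.0
# Here a 45% salary saving shrinks to a 12% per-visit saving once the
# lower visit volume is taken into account.
```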

More recent reviews and meta-analyses have come to similar conclusions (10-13). However, consistent with Laurant et al.'s (8) warning, these studies tend to be underpowered, of poor quality, and often biased.

Despite the overall similarity in results, some studies have reported differences in utilization. Hemani et al. (14) reported increased resource utilization by NPs compared to resident physicians and attending physicians in primary care at a Veterans Affairs hospital. The increase in utilization was mostly explained by increased referrals to specialists and increased hospitalizations. A recent study by Hughes et al. (15) using 2010-2011 Medicare claims found that NPs and physician assistants (PAs) ordered imaging in 2.8% of episodes of care compared to 1.9% for physicians. This was especially true as the diagnosis codes became more uncommon. In other words, the more uncommon the disease, the more often NPs and PAs ordered imaging tests.

NPs Outside of Primary Care

Although studies of patient outcomes in NP-directed care in the outpatient setting were few and many had methodological limitations, even fewer studies have examined NPs outside the primary care clinic. Nevertheless, NPs and PAs have long practiced in both specialty care and the inpatient setting. My personal experience goes back into the 1980s with both NPs and PAs in the outpatient pulmonary and sleep clinics, the inpatient pulmonary setting and the ICU setting. Although most articles are descriptive, nearly all articles describe a benefit to physician extenders in these areas as well as other specialty areas.

More recently, NPs may have been hired to fill “hospitalist” roles with scant attention to whether the educational preparation of the NP is consistent with the role (16). According to Arizona law, a NP "shall only provide health care services within the NP's scope of practice for which the NP is educationally prepared and for which competency has been established and maintained” (A.A.C. R4-19-508 C). The Department of Veterans Affairs conducted a study a number of years ago comparing nurse practitioner inpatient care to resident physician care (17). Outcomes were similar, although 47% of the patients randomized to nurse practitioner care were actually admitted to housestaff wards, largely because of attending physician and NP requests. A recent article also examined NP-delivered critical care compared to resident teams in the ICU (18). Mortality and length of stay were similar.


NPs have less education and training than physicians. It would appear that the scientific basis of the curricula is similar, and there is no evidence that the aptitude of nurses and physicians differs. Therefore, the finding that nurses care for patients as well as physicians most of the time is not surprising, especially for common chronic diseases. However, care may diverge for less common diseases, where the NP's more limited training and experience may play a role.

Just as physicians have undergone increased training and certification requirements over the past few decades, nurses are now doing the same. The American Association of Colleges of Nursing seems to be endorsing further education for nurses, encouraging either a PhD or a Doctor of Nursing Practice degree (19). However, the trend in medicine has been contradictory: requiring increased training and certification for physicians while substituting practitioners with less education, training and experience for those same physicians. An extension of this concept is that traditional nursing roles are increasingly being filled by medical assistants or nursing assistants (20). The future will likely bring more of the same. NPs will be substituted for physicians; nurses without advanced training will be hired to substitute for NPs and PAs; and medical assistants will increasingly be substituted for nurses, all to reduce personnel costs. It is likely that studies will be designed to support these substitutions but will frequently be underpowered, use rather meaningless metrics, or have other methodologic flaws that serve to justify the substitution of less qualified healthcare providers.

Much of this "dumbing down" has been driven by shortages of physicians and/or nurses. The justification has always been that substituting cheaper providers will solve the labor shortage while saving money. However, experience over the past few decades in the US has shown that even as education and certification requirements have increased, compensation has decreased for physicians (21). NPs can likely expect the same.

Some are asking whether physicians should abandon primary care. After years of politicians, bureaucrats and healthcare administrators promising increased compensation for primary care, most medical students and resident physicians have realized that this is unlikely. Furthermore, the increasing intrusion of regulatory agencies and insurance companies mandating an array of bureaucratic tasks has led to increasing dissatisfaction with primary care (22). Consequently, most young physicians are seeking training in subspecialty care. It is less a question of whether physicians will choose to abandon primary care in the future; without a dramatic change, the decision has already been made.

Arizona SB1473, the bill that would essentially make NPs equivalent to physicians in the eyes of the law, is an expected extension of the current trends in medicine. Although physicians might object, supporters of the legislation will likely accuse physicians of merely protecting their turf. Personally, I am disheartened by these trends, which seem a throwback to pre-Flexner report days. The poor studies that support them will do little more than allow the unscrupulous to line their pockets by substituting practitioners with less education, experience and training for well-trained, experienced physicians or nurses.


  1. Flexner A. Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching. New York, NY: The Carnegie Foundation for the Advancement of Teaching; 1910. Available at: (accessed 2/6/16).
  2. Barzansky B, Gevitz N. Beyond Flexner: Medical Education in the Twentieth Century. New York, NY: Greenwood Press; 1992.
  3. National Task Force on Quality Nurse Practitioner Education. Criteria for evaluation of nurse practitioner programs. Washington, DC: National Organization of Nurse Practitioner Faculties; 2012. Available at: (accessed 2/6/16).
  4. Lerner BH. A case that shook medicine. Washington Post. November 28, 2006. Available at: (accessed 2/9/16).
  5. Bolster L, Rourke L. The effect of restricting residents' duty hours on patient safety, resident well-being, and resident education: an updated systematic review. J Grad Med Educ. 2015;7(3):349-63. [CrossRef] [PubMed]
  6. Ahmed N, Devitt KS, Keshet I, et al. A systematic review of the effects of resident duty hour restrictions in surgery: impact on resident wellness, training, and patient outcomes. Ann Surg. 2014;259(6):1041-53. [CrossRef] [PubMed]
  7. Denson JL, McCarty M, Fang Y, Uppal A, Evans L. Increased mortality rates during resident handoff periods and the effect of ACGME duty hour regulations. Am J Med. 2015;128(9):994-1000. [CrossRef] [PubMed]
  8. Laurant M, Reeves D, Hermens R, Braspenning J, Grol R, Sibbald B. Substitution of doctors by nurses in primary care. Cochrane Database Syst Rev. 2005 Apr 18;(2):CD001271. [CrossRef]
  9. Medical Group Management Association. NPP utilization in the future of US healthcare. March 2014. Available at: (accessed 2/17/16).
  10. Tappenden P, Campbell F, Rawdin A, Wong R, Kalita N. The clinical effectiveness and cost-effectiveness of home-based, nurse-led health promotion for older people: a systematic review. Health Technol Assess. 2012;16(20):1-72. [CrossRef] [PubMed]
  11. Donald F, Kilpatrick K, Reid K, et al. A systematic review of the cost-effectiveness of nurse practitioners and clinical nurse specialists: what is the quality of the evidence? Nurs Res Pract. 2014;2014:896587. [CrossRef] [PubMed]
  12. Bryant-Lukosius D, Carter N, Reid K, et al. The clinical effectiveness and cost-effectiveness of clinical nurse specialist-led hospital to home transitional care: a systematic review. J Eval Clin Pract. 2015;21(5):763-81. [CrossRef] [PubMed]
  13. Kilpatrick K, Reid K, Carter N, et al. A systematic review of the cost-effectiveness of clinical nurse specialists and nurse practitioners in inpatient roles. Nurs Leadersh (Tor Ont). 2015;28(3):56-76. [PubMed]
  14. Hemani A, Rastegar DA, Hill C, al-Ibrahim MS. A comparison of resource utilization in nurse practitioners and physicians. Eff Clin Pract. 1999;2(6):258-65. [PubMed]
  15. Hughes DR, Jiang M, Duszak R Jr. A comparison of diagnostic imaging ordering patterns between advanced practice clinicians and primary care physicians following office-based evaluation and management visits. JAMA Intern Med. 2015;175(1):101-7. [CrossRef] [PubMed]
  16. Arizona Board of Nursing. Registered nurse practitioner (rnp) practicing in an acute care setting. Available at: (accessed 2/12/16).
  17. Pioro MH, Landefeld CS, Brennan PF, Daly B, Fortinsky RH, Kim U, Rosenthal GE. Outcomes-based trial of an inpatient nurse practitioner service for general medical patients. J Eval Clin Pract. 2001;7(1):21-33. [CrossRef] [PubMed]
  18. Landsperger JS, Semler MW, Wang L, Byrne DW, Wheeler AP. Outcomes of nurse practitioner-delivered critical care: a prospective cohort study. Chest. 2015;148(6):1530-5. [CrossRef] [PubMed]
  19. American Association of Colleges of Nursing. DNP fact sheet. June 2015. Available at: (accessed 2/13/16).
  20. Bureau of Labor Statistics. Occupational outlook handbook: medical assistants. December 17, 2015. Available at: (accessed 2/13/16).
  21. Robbins RA. National health expenditures: the past, present, future and solutions. Southwest J Pulm Crit Care. 2015;11(4):176-85. [CrossRef]
  22. Peckham C. Physician burnout: it just keeps getting worse. Medscape. January 26, 2015. Available at: (accessed 2/13/16).

Cite as: Robbins RA. Nurse practitioners' substitution for physicians. Southwest J Pulm Crit Care. 2016;12(2):64-71.


A Comparison Between Hospital Rankings and Outcomes Data

Richard A. Robbins, MD*

Richard D. Gerkin, MD  


*Phoenix Pulmonary and Critical Care Research and Education Foundation, Gilbert, AZ

Banner Good Samaritan Medical Center, Phoenix, AZ



Hospital rankings have become common, but the agreement between the rankings and their correlation with patient-centered outcomes remain unknown. We examined the ratings of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), Leapfrog, and US News and World Report (USNews), and outcomes from the Centers for Medicare and Medicaid Services' Hospital Compare (CMS), for agreement and correlation. There was some correlation among the three “best hospitals” ratings. There was also some correlation between “best hospitals” and CMS outcomes, but often in a negative direction. These data suggest that no one “best hospital” list identifies hospitals that consistently attain better outcomes.


Hospital rankings are being published by a variety of organizations. These rankings are used by hospitals to market the quality of their services. Although all the rankings hope to identify “best” hospitals, they differ in methodology. Some emphasize surrogate markers; some emphasize safety, i.e., a lack of complications; some factor in the hospital’s reputation; some factor in patient-centered outcomes. However, most do not emphasize traditional outcome measures such as mortality, length of stay and readmission rates. None factors in cost or expenditures on patient care.

We examined three common hospital rankings and clinical outcomes. We reasoned that if the rankings are valid then better hospitals should be consistently on these best hospital lists. In addition, better hospitals should have better outcomes.



Outcomes data were obtained from the CMS Hospital Compare website from December 2012-January 2013 (1). The CMS website presents data on three diseases: myocardial infarction (MI), congestive heart failure (CHF) and pneumonia. We examined readmissions, complications and deaths for each of these diseases. We did not examine all process of care measures, since many have not been shown to correlate with improved outcomes, and patient satisfaction has been shown to correlate with higher admission rates to the hospital, higher overall health care expenditures, and increased mortality (2). In some instances actual data are not presented on the CMS website, only whether the hospital was higher than, lower than, or no different from the national average. In these cases, results were scored 2 (higher), 1 (no different), or 0 (lower).
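The ordinal coding described above can be sketched as a simple lookup (the key strings are illustrative labels, not CMS field names):

```python
# Ordinal coding for CMS measures reported only relative to the national
# average: higher -> 2, no different -> 1, lower -> 0. The key strings
# are illustrative labels, not CMS field names.
CMS_SCORE = {"higher": 2, "no different": 1, "lower": 0}

def score(comparison: str) -> int:
    """Map a relative-to-national-average result to its ordinal code."""
    return CMS_SCORE[comparison.strip().lower()]
```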

Mortality is the 30-day estimate of deaths from any cause within 30 days of a hospital admission for patients hospitalized with one of the three primary diagnoses (MI, CHF, and pneumonia). Mortality was reported regardless of whether the patient died while still in the hospital or after discharge. Similarly, the readmission rates are 30-day estimates of readmission for any cause to any acute care hospital within 30 days of discharge. The mortality and readmission rates were adjusted for patient characteristics including the patient’s age, gender, past medical history, and other diseases or conditions (comorbidities) present at hospital arrival that are known to increase the risk of dying or readmission.

The rates of a number of complications are also listed in the CMS data base (Table 1).

Table 1. Complications examined that are listed in CMS data base.

CMS calculates the rate for each serious complication by dividing the actual number of outcomes at each hospital by the number of eligible discharges for that measure at that hospital, multiplied by 1,000. The composite value reported on Hospital Compare is the weighted average of the component indicators. The measures of serious complications reported are risk adjusted to account for differences in hospital patients’ characteristics. In addition, the rates reported on Hospital Compare are “smoothed” to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than for larger hospitals.
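A minimal sketch of the per-1,000 rate and weighted composite just described (the counts and weights are illustrative; CMS's actual risk adjustment and "smoothing" are not reproduced):

```python
# Sketch of the serious-complication calculation: a per-1,000 rate for
# each component indicator, then a weighted-average composite. Counts and
# weights are illustrative; CMS's risk adjustment and "smoothing" of
# small-hospital rates are not reproduced here.
def rate_per_1000(outcomes: int, eligible_discharges: int) -> float:
    """Events per 1,000 eligible discharges."""
    return outcomes / eligible_discharges * 1000

def composite(rates, weights):
    """Weighted average of component indicator rates."""
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# e.g., 12 events among 8,000 eligible discharges -> 1.5 per 1,000
```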

Similar to serious complications, CMS calculates the hospital acquired infection data from the claims hospitals submit to Medicare. The rate for each hospital acquired infection measure is calculated by dividing the number of infections that occur within any given eligible hospital by the number of eligible Medicare discharges, multiplied by 1,000. The hospital acquired infection rates were not risk adjusted.


The JCAHO list of Top Performers on Key Quality Measures™ was obtained from its 2012 report (3). The Top Performers designation is based on an aggregation of accountability measure data reported to the JCAHO during the previous calendar year.


Leapfrog’s Hospital Safety Scores were obtained from its website during December 2012-January 2013 (4). The score utilizes 26 national performance measures from the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), and CMS to produce a single composite score representing a hospital’s overall performance in keeping patients safe from preventable harm and medical errors. The measure set is divided into two domains: (1) Process/Structural Measures and (2) Outcome Measures. Many of the outcome measures are derived from the complications reported by CMS (Table 1). Each domain represents 50% of the Hospital Safety Score. The numerical safety score is then converted into one of five letter grades: "A" denotes the best hospital safety performance, followed in order by "B", "C", "D," and "F." For analysis, these letter grades were converted into numerical grades 1-5, corresponding to letter grades A-F.

US News and World Report

US News and World Report’s (USNews) 2012-13 rankings listed 17 hospitals on their honor roll (5). The rankings are based largely on objective measures of hospital performance, such as patient survival rates, and structural resources, such as nurse staffing levels. Each hospital’s reputation, as determined by a survey of physician specialists, was also factored into the ranking methodology. The USNews top 50 cardiology and pulmonology hospitals were also examined.

Statistical Analysis

Categorical variables such as JCAHO and USNews best hospitals were compared with other data using chi-squared analysis. Spearman rank correlation was used to help determine the direction of the correlations (positive or negative). Significance was defined as p<0.05.
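For illustration, the Spearman rank correlation used to determine the direction of the associations can be computed from scratch. This is a generic textbook implementation (average ranks for ties, Pearson's r on the ranks), not the software actually used for the analysis:

```python
# Generic Spearman rank correlation: assign average ranks to ties, then
# compute Pearson's r on the ranks. A positive result indicates a
# concordant relationship, a negative result a discordant one.
def _ranks(xs):
    """Ranks (1-based), with tied values receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho for two equal-length sequences."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```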


Comparisons of Hospital Rankings between Organizations

A large database of nearly 3000 hospitals was compiled for each of the hospital ratings (Appendix 1). The “best hospitals” as rated by the JCAHO, Leapfrog and USNews were compared for correlation between the organizations (Table 2).

Table 2. Correlation of “best hospitals” between different organizations

There was significant correlation between the JCAHO and Leapfrog and Leapfrog and USNews but not between JCAHO and USNews.

JCAHO-Leapfrog Comparison

The Leapfrog grades were significantly better for JCAHO “Best Hospitals” compared to hospitals not listed as “Best Hospitals” (2.26 ± 0.95 vs. 1.85 ± 0.91, p<0.0001). However, there were multiple exceptions. For example, of the 358 JCAHO “Best Hospitals” with a Leapfrog grade, 84 were graded “C”, 11 were graded “D” and one was graded “F”.

JCAHO-USNews Comparison

Of the JCAHO “Top Hospitals”, only one was listed on the USNews “Honor Roll”. Of the cardiology and pulmonary “Top 50” hospitals, only one and two hospitals, respectively, were listed on the JCAHO “Top Hospitals” list.

Leapfrog-USNews Comparison

The Leapfrog grades of the US News “Honor Roll” hospitals did not significantly differ from those of hospitals not listed on the “Honor Roll” (2.21 ± 0.02 vs. 1.81 ± 0.31, p>0.05). However, the US News “Top 50 Cardiology” hospitals had significantly better Leapfrog grades (2.21 ± 0.02 vs. 1.92 ± 0.14, p<0.05). Similarly, the US News “Top 50 Pulmonary” hospitals had better Leapfrog grades (2.21 ± 0.02 vs. 1.91 ± 0.15, p<0.05).

“Best Hospital” Mortality, Readmission and Serious Complications

The data for the comparison between the hospital rankings and CMS’ readmission rates, mortality rates and serious complications for the JCAHO, Leapfrog, and USNews are shown in Appendix 2, Appendix 3, and Appendix 4 respectively. The results of the comparison of “best hospitals” compared to hospitals not listed as best hospitals are shown in Table 3.

Table 3. Results of “best hospitals” compared to other hospitals for mortality and readmission rates for myocardial infarction (MI), congestive heart failure (CHF) and pneumonia.

Red:  Relationship is concordant (better rankings associated with better outcomes)

Blue:  Relationship is discordant (better rankings associated with worse outcomes)

Note that of 21 total p values for relationships, 12 are non-significant, 6 are concordant and significant, and 6 are discordant and significant.  All 4 of the significant readmission relationships are discordant. All 5 of the significant mortality relationships are concordant. This underscores the disjunction of mortality and readmission. All 3 of the relationships with serious complications are significant, but one of these is discordant. Of the 3 ranking systems, Leapfrog has the least correlation with CMS outcomes (5/7 non-significant).  USNews has the best correlation with CMS outcomes (6/7 significant).  However, 3 of these 6 are discordant.
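The tallying described above can be sketched as a simple classification rule. This assumes each ranking-outcome pair is summarized by a Spearman rho and p-value, with rankings and outcomes both coded so that lower is better (so a positive rho means better rankings go with better outcomes); the threshold follows the study's p<0.05 definition of significance:

```python
# Sketch of the Table 3 labeling, under the coding assumption above.
def classify(rho, p, alpha=0.05):
    """Label a ranking-outcome relationship as in Table 3."""
    if p >= alpha:
        return "non-significant"
    return "concordant" if rho > 0 else "discordant"
```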

The USNews “Top 50” hospitals for cardiology and pulmonology were also compared to those hospitals not listed as “Top 50” hospitals. Similar to the “Honor Roll” hospitals, there was a significantly higher proportion of hospitals with better mortality rates for MI and CHF among the cardiology “Top 50” and for pneumonia among the pulmonary “Top 50”. Both the cardiology and pulmonary “Top 50” had better serious complication rates (p<0.05, both comparisons, data not shown).


Discussion

Lists of hospital rankings have become widespread, but whether these rankings identify better hospitals is unclear. We reasoned that if the rankings were meaningful, then there should be widespread agreement between the hospital lists. We did find some agreement, but there were exceptions. Hospital rankings should also correlate with patient-centered outcomes such as mortality and readmission rates; overall, that level of agreement was low.

One probable cause of the differences in hospital rankings is the differing methodologies used in determining the rankings. For example, JCAHO uses an aggregation of accountability measures. Leapfrog emphasizes safety, or a lack of complications. US News uses patient survival rates, structural resources such as nurse staffing levels, and the hospital’s reputation. However, the exact methodological data used to formulate the rankings are often vague, especially for the JCAHO and US News rankings. Therefore, it should not be surprising that the hospital rankings differ.

Another probable cause of the differing rankings is the use of selected complications in place of patient-centered outcome measures. Complications are most meaningful when they negatively affect ultimate patient outcomes. Some complications, such as objects accidentally left in the body after surgery, air bubbles in the bloodstream, or mismatched blood types, are undesirable but very infrequent. It is unlikely that a slight, though statistically significant, increase in these complications would affect more global measures such as mortality or readmission rates. The overall poor correlation of these complications with deaths and readmissions in the CMS database is consistent with this concept.

Some of the surrogate complication rates are clearly evidence-based, but some are clearly not. For example, many of the central line-associated infection and ventilator-associated pneumonia guidelines used are not evidence-based (6,7). Furthermore, overreaction to correct some of the complications, such as “signs of uncontrolled blood sugar”, may be potentially harmful. This complication could be interpreted as a mandate for tight control of blood sugar. Unfortunately, when rigorously studied, patients with tight glucose control actually had an increase in mortality (8).

In some instances a complication was associated with improved outcomes. Although the reason for this discordant correlation is unknown, it is possible that the complication may occur as a result of better care. For example, catheterization of a central vein for rapid administration of fluids, drugs, blood products, etc. may result in better outcomes or quality but will increase the central line-associated bloodstream infection rate. In contrast, not inserting a catheter when appropriate might lead to worse outcomes or poorer quality but would improve the infection rate.

Many of the rankings are based, at least in part, on complication data self-reported by the hospitals to CMS. However, the accuracy of these data has been called into question (9,10). Meddings et al. (10) studied urinary tract infections which were self-reported by hospitals using claims data. According to Meddings (10), the data were “inaccurate” and were “not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors proposed that the nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. Inaccurate data may lead to a lack of correlation between a complication and outcomes in the CMS database.

The sole source of mortality and readmission data in this study was CMS. These data are limited to Medicare and Medicaid patients but are probably representative of the general population in an acute care hospital. However, the CMS website also includes a dizzying array of measures. We did not analyze every measure but only those listed in Table 1. Whether other measures would correlate with mortality and readmission rates is unclear.

There are several limitations to our data. First and foremost, the CMS data are self-reported by hospitals, and their validity and accuracy have been called into question. Second, data are missing in multiple instances. For example, much of the data from Maryland was not present, and there were multiple instances when the data were “unavailable” or the “number of cases are too small”. Third, in some instances CMS did not report actual data but only whether a hospital was higher, lower or no different from the National average. This loss of information may have led to inaccurate analyses. Fourth, much of the data are from surrogate markers, which have not been shown to predict outcomes; this is puzzling since patient-centered outcomes are available. Fifth, much of the outcomes data are derived from CMS, which to a large extent eliminates Veterans Administration, pediatric, mental health and some other specialty facilities.

It is unclear if any of the hospital rankings should be used by patients or healthcare providers when choosing a hospital. At present the rankings appear to have an overreliance on surrogate markers, many of which are weakly evidence-based. Furthermore, categorizing the data as average, below average or above average may lead to inaccurate interpretation. In addition, the accuracy of the data is unclear, and the lack of data on length of stay and some major morbidities is a major weakness. We as physicians need to scrutinize these measurement systems and insist on greater methodological rigor and more relevant criteria. Until these shortcomings are overcome, we cannot recommend the use of hospital rankings by patients or providers.


References

  1. (accessed 6/12/13).
  2. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405-11. [CrossRef] [PubMed]
  3. (accessed 6/12/13).
  4. (accessed 6/12/13).
  5. (accessed 6/12/13).
  6. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care. 2011;3:40-8.
  7. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care. 2012;4:163-73.
  8. NICE-SUGAR Study Investigators. Intensive versus conventional insulin therapy in critically ill patients. N Engl J Med. 2009;360:1283-97. [CrossRef] [Pubmed]
  9. Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care. 2012;5:203-5.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12. [CrossRef] [PubMed]

Reference as: Robbins RA, Gerkin RD. A comparison between hospital rankings and outcomes data. Southwest J Pulm Crit Care. 2013;7(3):196-203.