
General Medicine

(Click on title to be directed to posting, most recent listed first)

Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings
Publish or Perish: Tools for Survival
Is Quality of Healthcare Improving in the US?
Survey Shows Support for the Hospital Executive Compensation Act
The Disruptive Administrator: Tread with Care
A Qualitative Systematic Review of the Professionalization of the Vice Chair for Education
Nurse Practitioners' Substitution for Physicians
National Health Expenditures: The Past, Present, Future and Solutions
Credibility and (Dis)Use of Feedback to Inform Teaching: A Qualitative Case Study of Physician-Faculty Perspectives
Special Article: Physician Burnout-The Experience of Three Physicians
Brief Review: Dangers of the Electronic Medical Record
Finding a Mentor: The Complete Examination of an Online Academic Matchmaking Tool for Physician-Faculty
Make Your Own Mistakes
Professionalism: Capacity, Empathy, Humility and Overall Attitude
Professionalism: Secondary Goals
Professionalism: Definition and Qualities
Professionalism: Introduction
The Unfulfilled Promise of the Quality Movement
A Comparison Between Hospital Rankings and Outcomes Data
Profiles in Medical Courage: John Snow and the Courage of
Comparisons between Medicare Mortality, Readmission and
In Vitro Versus In Vivo Culture Sensitivities: An Unchecked Assumption?
Profiles in Medical Courage: Thomas Kummet and the Courage to Fight Bureaucracy
Profiles in Medical Courage: The Courage to Serve and Jamie Garcia
Profiles in Medical Courage: Women’s Rights and Sima Samar
Profiles in Medical Courage: Causation and Austin Bradford Hill
Profiles in Medical Courage: Evidence-Based Medicine and Archie Cochrane
Profiles in Medical Courage: The Courage to Experiment and Barry Marshall
Profiles in Medical Courage: Joseph Goldberger, the Sharecropper’s Plague, Science and Prejudice
Profiles in Medical Courage: Peter Wilmshurst, the Physician Fugitive
Correlation between Patient Outcomes and Clinical Costs in the VA Healthcare System
Profiles in Medical Courage: Of Mice, Maggots and Steve Klotz
Profiles in Medical Courage: Michael Wilkins and the Willowbrook School
Relationship Between the Veterans Healthcare Administration Hospital Performance Measures and Outcomes


Although the Southwest Journal of Pulmonary and Critical Care was started as a pulmonary/critical care/sleep journal, we have received and continue to receive submissions that are of general medical interest. For this reason, a new section entitled General Medicine was created on 3/14/12. Some articles were moved from pulmonary to this new section since it was felt they fit better into this category.



Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ USA


Background: There are conflicting data on whether Nursing Magnet Hospitals (NMH) provide better care.

Methods: NMH in the Southwest USA (Arizona, California, Colorado, Hawaii, Nevada, and New Mexico) were compared to hospitals not designated as NMH using the Centers for Medicare and Medicaid Services (CMS) Hospital Compare star ratings.

Results: NMH had higher star ratings than non-NMH hospitals (3.34 ± 0.78 vs. 2.86 ± 0.83, p<0.001). The NMH were mostly large, urban, non-critical access hospitals. Academic medical centers made up a disproportionately large portion of the NMH.

Conclusions: Although NMH had higher hospital ratings, the data may favor non-critical access academic medical centers which are known to have better outcomes.


Magnet status is awarded by the American Nurses Credentialing Center (ANCC), a part of the American Nurses Association (ANA), to hospitals that meet a set of criteria designed to measure nursing quality. The Magnet designation program was based on a 1983 ANA survey of 163 hospitals, deriving its key principles from the hospitals that had the best nursing performance. The prime intention was to help hospitals and healthcare facilities attract and retain top nursing talent.

There is no consensus on whether Magnet status has an impact on nurse retention or on clinical outcomes. Kelly et al. (1) found that NMH provide better work environments and a more highly educated nursing workforce than non-NMH. In contrast, Trinkoff et al. (2) found no significant difference in working conditions between NMH and non-NMH. To further confuse the picture, Goode et al. (3) reported that NMH generally had poorer outcomes.

The Centers for Medicare and Medicaid Services (CMS) has developed star ratings in an attempt to measure quality of care (4). The ratings are based on five broad categories: 1. Outcomes; 2. Intermediate Outcomes; 3. Patient Experience; 4. Access; and 5. Process. Outcomes and intermediate outcomes are weighted three times as much as process measures, and patient experience and access measures are weighted 1.5 times as much as process measures. Ratings range from 1 to 5 stars, with more stars indicating higher quality.
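The weighting scheme described above amounts to a weighted mean across the five categories. The following is an illustrative sketch only: the category scores are hypothetical, and CMS's actual methodology aggregates many underlying measures per category before weighting.

```python
# Relative weights from the description above: outcomes and intermediate
# outcomes count 3x a process measure; patient experience and access 1.5x.
WEIGHTS = {
    "outcomes": 3.0,
    "intermediate_outcomes": 3.0,
    "patient_experience": 1.5,
    "access": 1.5,
    "process": 1.0,
}

def weighted_composite(scores: dict) -> float:
    """Weighted mean of category scores on the 1-5 star scale."""
    total_weight = sum(WEIGHTS[c] for c in scores)
    return sum(scores[c] * WEIGHTS[c] for c in scores) / total_weight

# Hypothetical category scores for one hospital
example = {
    "outcomes": 4.0,
    "intermediate_outcomes": 3.0,
    "patient_experience": 5.0,
    "access": 4.0,
    "process": 2.0,
}
print(round(weighted_composite(example), 2))  # 3.65
```

Note how the heavier weighting pulls the composite toward the outcome categories: a poor process score drags the result down far less than a poor outcomes score would.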

This study compares the CMS star ratings between NMH and non-NMH in the Southwest USA (Arizona, California, Colorado, Hawaii, Nevada and New Mexico). The results demonstrate that NMH have higher CMS star ratings. However, the NMH have characteristics which have been previously associated with higher quality of care using some measures.


Nursing Magnet Hospitals

NMH were identified from The American Nurses Credentialing Center website (5).

CMS Star Ratings

Star ratings were obtained from the CMS website (4).


Hospitals were included only when data were available for both NMH status and CMS star ratings. Data are expressed as mean ± standard deviation. NMH and non-NMH were compared using Student’s t test. Significance was defined as p<0.05.
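The comparison above can be sketched as a pooled two-sample (Student's) t test. The actual hospital ratings are not reproduced here; the two lists below are hypothetical stand-ins used only to show the computation.

```python
import math
from statistics import mean, variance  # variance() uses the n-1 denominator

def students_t(a, b):
    """Return (t statistic, degrees of freedom) for two independent samples."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se, na + nb - 2

nmh_stars = [3, 4, 4, 3, 4]        # hypothetical NMH star ratings
non_nmh_stars = [2, 3, 3, 2, 3]    # hypothetical non-NMH star ratings
t, df = students_t(nmh_stars, non_nmh_stars)
# For df = 8, |t| > 2.306 corresponds to p < 0.05 (two-tailed)
```

In practice the p value would be obtained from statistical software (e.g., scipy.stats.ttest_ind in Python) rather than from a critical-value table.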


Hospital Characteristics

There were 44 NMH and 415 non-NMH hospitals in the data (see Appendix). California had the most hospitals (287) and the most NMH (28). Arizona had 8 NMH, Colorado 7 and Hawaii 1. Nevada and New Mexico had none. All the NMH were acute care hospitals located in major metropolitan areas. Most were larger hospitals. None were designated critical access hospitals by CMS. Eleven of the NMH were the primary teaching hospitals for medical schools. Many of the others had affiliated teaching programs.

CMS Star Ratings

The CMS star ratings were higher for NMH than non-NMH (3.34 ± 0.78 vs. 2.86 ± 0.83, p<0.001, Figure 1).

Figure 1. CMS star ratings for Nurse Magnet Hospitals (NMH) and non-NMH (p<0.001).


The present study shows that, for hospitals in the Southwest, NMH had higher CMS star ratings than non-NMH. This is consistent with better levels of care in NMH than in non-NMH. However, the NMH were large, urban, non-critical access medical centers and were disproportionately academic medical centers. Previous studies have shown that such hospitals have better outcomes (6,7).

There seems to be little consensus in the literature regarding patient outcomes in NMH. A 2010 study concluded that non-NMH actually had better patient outcomes than NMH (3). Similarly, studies published early in this decade suggested little difference in outcomes (1,2). In contrast, a more recent study suggested improvements in patient outcomes in NMH (8). The present study supports the concept that NMH status might be a marker for better patient outcomes.

Achieving NMH status is expensive. Hospitals pay about $2 million for initial NMH certification, and pay nearly the same amount for re-certification every 4 years. It seems unlikely that small rural hospitals could afford the fee to achieve and maintain NMH status regardless of their quality of care. Therefore, NMH would be expected to be larger, urban medical centers, which is what the present study found.

Although NMH status is not directly linked to reimbursement, a study by the Robert Wood Johnson Foundation suggests that achieving NMH status increased hospital revenue (9). On average, NMH received an adjusted net increase in inpatient income of about $104 to $127 per discharge after earning Magnet status, amounting to about $1.2 million in added revenue each year. The reason(s) for the improvement in hospital fiscal status are unclear.
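As a back-of-envelope check on the figures above (an illustrative sketch only, using the dollar amounts reported in the text), the implied discharge volume and payback period can be computed directly:

```python
# Figures reported in the text
cert_fee = 2_000_000               # approximate initial Magnet certification cost
annual_revenue_gain = 1_200_000    # reported added inpatient revenue per year
gain_per_discharge = (104, 127)    # reported adjusted net gain per discharge ($)

# Annual discharges implied by the per-discharge figures (derived, not reported)
implied_discharges = [annual_revenue_gain / g for g in gain_per_discharge]
# roughly 9,400 to 11,500 discharges per year, i.e., a large hospital

# Years to recoup the initial fee, ignoring the 4-yearly recertification cost
payback_years = cert_fee / annual_revenue_gain  # about 1.7 years
```

The implied volume of roughly ten thousand discharges a year is itself consistent with the observation that NMH tend to be larger, urban medical centers.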

Measuring quality of care is quite complex. The CMS star ratings are an attempt to summarize the quality of care using 5 broad categories: 1. Outcomes; 2. Intermediate Outcomes; 3. Patient Experience; 4. Access; and 5. Process. There are up to 32 measures in each category. Outcomes, patient experience and access seem relatively straightforward. An example of an intermediate outcome is control of blood pressure, because of its link to outcomes. Examples of process measures include colorectal cancer screening, annual flu shots and monitoring physical activity. To further complicate the CMS ratings, each category is weighted.

It is possible that the CMS star ratings might miss or underweight a key element in quality of care. For example, Needleman et al. (10) have emphasized that increased registered nurse staffing reduces hospital mortality. However, a 2011 study concluded that NMH had less total staff and a lower RN skill mix compared with non-NMH hospitals, contributing to poorer outcomes (3).

The present study supports the concept that achieving NMH status is associated with better care as defined by CMS. However, given the complexities of measuring quality of care it is unclear whether this represents a marker of better hospitals or if the process of achieving NMH leads to better care.


  1. Kelly LA, McHugh MD, Aiken LH. Nurse outcomes in Magnet® and non-Magnet hospitals. J Nurs Adm. 2012 Oct;42(10 Suppl):S44-9. [PubMed]
  2. Trinkoff AM, Johantgen M, Storr CL, Han K, Liang Y, Gurses AP, Hopkinson S. A comparison of working conditions among nurses in Magnet and non-Magnet hospitals. J Nurs Adm. 2010 Jul-Aug;40(7-8):309-15. [CrossRef] [PubMed]
  3. Goode CJ, Blegen MA, Park SH, Vaughn T, Spetz J. Comparison of patient outcomes in Magnet® and non-Magnet hospitals. J Nurs Adm. 2011 Dec;41(12):517-23. [CrossRef] [PubMed]
  4. Centers for Medicare and Medicaid. 2017 star ratings. Available at: (accessed 10/15/17).
  5. The American Nurses Credentialing Center. ANCC List of Magnet® Recognized Hospitals. Available at: (accessed 10/15/17).
  6. Burke LG, Frakt AB, Khullar D, Orav EJ, Jha AK. Association Between Teaching Status and Mortality in US Hospitals. JAMA. 2017 May 23;317(20):2105-13. [CrossRef] [PubMed]
  7. Joynt KE, Harris Y, Orav EJ, Jha AK. Quality of care and patient outcomes in critical access rural hospitals. JAMA. 2011 Jul 6;306(1):45-52. [CrossRef] [PubMed]
  8. Friese CR, Xia R, Ghaferi A, Birkmeyer JD, Banerjee M. Hospitals in 'Magnet' program show better patient outcomes on mortality measures compared to non-'Magnet' hospitals. Health Aff (Millwood). 2015 Jun;34(6):986-92. [CrossRef] [PubMed]
  9. Jayawardhana J, Welton JM, Lindrooth RC. Is there a business case for magnet hospitals? Estimates of the cost and revenue implications of becoming a magnet. Med Care. 2014 May;52(5):400-6. [CrossRef] [PubMed]
  10. Needleman J, Buerhaus P, Pankratz VS, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011 Mar 17;364(11):1037-45.[CrossRef] [PubMed]

Cite as: Robbins RA. Nursing magnet hospitals have better CMS hospital compare ratings. Southwest J Pulm Crit Care. 2017;15(5):209-13. doi: PDF 


Publish or Perish: Tools for Survival

Stuart F. Quan, M.D.1

Jonathan F. Borus, M.D.2


1Division of Sleep and Circadian Disorders and 2Department of Psychiatry

Brigham and Women’s Hospital

Harvard Medical School

Boston, MA USA


(Editor's Note: A downloadable PowerPoint presentation accompanies this article and can be accessed by clicking on the following link "Publish or Perish: Tools for Survival". It is 20 Mb and may take some time to download.)

Success in one’s chosen profession is often predicated upon meeting a profession-wide standard of excellence or productivity. In the corporate world, the metric might be sales volume and in clinical medicine it may be patient satisfaction and/or number of patients seen. In academic medicine, including the fields of Pulmonary and Critical Care Medicine, the “coin of the realm” is demonstrable written scholarship. In large part, this is determined by the number and quality of publications in scientific journals. Unfortunately, the skills required to navigate the complexities of how to publish in the scientific literature rarely are taught in either medical school or postgraduate training. To assist the inexperienced academic physician or scientist, the Writing for Scholarship Interest Group of the Harvard Medical School Academy recently published “A Writer’s Toolkit” (1). This comprehensive monograph provides valuable information on all phases of the writing process ranging from conceptualization of a manuscript to understanding of the publication process itself. In today’s society, however, there are alternative methods of disseminating knowledge that may be better received by some learners than traditional prose. Examples include videos, podcasts and online interactive courses.

In order to provide a complementary method of presenting some of the information contained in “A Writer’s Toolkit” for more active learners, we have developed a self-paced interactive learning module to help young authors better understand the submission, review, and response to reviews stages of the publishing process. The module entitled “Publish or Perish: Tools for Survival” is downloadable from this journal’s website. We believe that providing a way for self-learners to better understand these processes will help such inexperienced authors more successfully get published and therefore share their work with others in the field.


  1. Pories S, Bard T, Bell S, et al. A Writer’s Toolkit. MedEdPORTAL, Association of American Medical Colleges; 2012. Available from:

Cite as: Quan SF, Borus JF. Publish or perish: tools for survival. Southwest J Pulm Crit Care. 2017;14(2):67. doi: PDF 


Is Quality of Healthcare Improving in the US?

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ USA



Politicians and healthcare administrators have touted that under their leadership enormous strides have been made in the quality of healthcare. However, the question of how to measure quality remains ambiguous. To demonstrate improved quality that is meaningful to patients, outcomes such as life expectancy, mortality, and patient satisfaction must be validly and reliably measured. Dramatic improvements made in many of these patient outcomes through the twentieth century have not been sustained through the twenty-first. Most studies have shown no, or only modest, improvements in the past several years, and at a considerable increase in cost. These data suggest that the rate of healthcare improvement is slowing and that many of the touted quality improvements have not been associated with improved outcomes.

Surrogate Markers

The most common measures of quality of healthcare come from Donabedian in 1966 (1). He identified two major foci for measuring quality of care: outcome and process. Outcome referred to the condition of the patient and the effectiveness of healthcare, including traditional outcome measures such as morbidity, mortality, length of stay, readmission, etc. Process of care represented an alternative approach that examined the process of care itself rather than its outcomes.

Beginning in the 1970’s the Joint Commission began to address healthcare quality by requiring hospitals to perform medical audits. However, the Joint Commission soon realized that the audit was “tedious, costly and nonproductive” (2). Efforts to meet audit requirements were too frequently “a matter of paper compliance, with heavy emphasis on data collection and few results that can be used for follow-up activities. In the shuffle of paperwork, hospitals often lost sight of the purpose of the evaluation study and, most important, whether change or improvement occurred as a result of audit”. Furthermore, survey findings and research indicated that audits had not resulted in improved patient care and clinical performance (2).

In response to the ineffectiveness of the audit and the call to improve healthcare, the Joint Commission introduced new quality assurance standards in 1980 which emphasized measurable improvement in process of care rather than outcomes. This approach proved popular with both regulatory agencies and healthcare investigators since it was easier and quicker to show improvement in process of care surrogate markers than outcomes.

Although there are many examples of the misapplication of these surrogate markers, one recent example of note is ventilator-associated pneumonia (VAP), a diagnosis without a clear definition. VAP guidelines issued by the Institute for Healthcare Improvement include elevation of the head of the bed, daily sedation vacation, daily assessment of readiness to wean or extubate, daily spontaneous breathing trials, peptic ulcer disease prophylaxis, and deep venous thrombosis prophylaxis. As early as 2011, the evidence basis of these guidelines was questioned (3). Furthermore, compliance with the guidelines had no influence on the incidence of VAP or inpatient mortality (3). Nevertheless, relying on self-reported hospital data, the CDC published data touting declines in VAP rates of 71% and 62% in medical and surgical intensive care units, respectively, between 2006 and 2012 (4,5). However, Metersky and colleagues (6) reviewed Medicare Patient Safety Monitoring System (MPSMS) data on 86,000 critically ill patients between 2005 and 2013 and reported that VAP rates have remained unchanged since 2005.

Hospital Value-Based Purchasing (HVBP)

CMS’ own data might be interpreted as showing no improvement in quality. About 200 fewer hospitals will see bonuses from the Centers for Medicare and Medicaid Services (CMS) under the hospital value-based purchasing (HVBP) program in 2017 than last year (7). The program affects some 3,000 hospitals and compares each hospital both to other hospitals and to its own performance over time.

The reduction in payments is “somewhat concerning,” according to Francois de Brantes, executive director of the Health Care Incentives Improvement Institute (7). One reason given was that fewer hospitals were being rewarded; another was hospitals' lack of movement in the rankings. The HVBP contains inherent design flaws, according to de Brantes. As a "tournament-style" program in which hospitals are stacked up against each other, hospitals don't know how they'll perform until the very end of the tournament. "It's not as if you have a specific target," he said. "You could meet that target, but if everyone meets that target, you're still in the middle of the pack."

Although de Brantes' point is well taken, another explanation is that the HVBP results might reflect declining performance in healthcare. If the HVBP program rewards quality of care, fewer hospitals being rewarded logically indicates poorer care. As noted above, CMS will likely be quick to point out that it has established an ever-increasing litany of "quality" measures, self-reported by the hospitals, that show increasing compliance (8). However, the lack of improvement in patient outcomes (see below) suggests that completing these measures has little meaningful effect.

Life Expectancy

Although life expectancy for the Medicare age group is improving, the increase likely reflects a long-term improvement in life expectancy and may be slowing over the past few years (Figure 1) (9). Since 2005, life expectancy at birth in the U.S. has increased by only 1 year (10).

Figure 1. Life expectancy past age 65 by year.

The reason(s) for the declining improvement in life expectancy in the twenty-first century compared to the dramatic improvements in the twentieth are unclear but likely multifactorial. However, one possible contributing factor to a slowing improvement in mortality is a declining or flattening rate of improvement in healthcare.

Inpatient Mortality

Figueroa et al. (11) examined the association between HVBP and patient mortality in 2,430,618 patients admitted to US hospitals from 2008 through 2013. The main outcome measure was 30-day risk-adjusted mortality for acute myocardial infarction, heart failure, and pneumonia, using a patient-level linear spline analysis to examine the association between the introduction of the HVBP program and 30-day mortality. Non-incentivized medical conditions were the comparators. The difference in mortality trends between the two groups was small and non-significant (difference in difference in trends −0.03% point difference for each quarter, 95% confidence interval −0.08% to 0.13% point difference, p=0.35). In no subgroup of hospitals, including poor performers at baseline, was HVBP associated with better outcomes.
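The comparison described above is essentially a difference-in-differences design: the change in mortality for incentivized conditions is measured against the change for non-incentivized comparator conditions. A minimal sketch, using hypothetical rates rather than Figueroa et al.'s data:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Change in the treated group beyond the change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical 30-day mortality rates (%) before and after HVBP
effect = diff_in_diff(treat_pre=10.0, treat_post=9.0,   # incentivized conditions
                      ctrl_pre=12.0, ctrl_post=11.1)    # comparator conditions
# effect is about -0.1 percentage points: the incentivized conditions
# improved only slightly more than the comparators, despite both improving
```

The point of the design is visible here: both groups improved, but the program's estimated effect is only the excess improvement in the incentivized group, which in Figueroa's study was small and non-significant.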

Consistent with Figueroa’s data, inpatient mortality trends declined only modestly from 2000 to 2010 (Figure 2) (12).

Figure 2. Number of inpatient deaths 2000-10.

Although the decline was significant, the significance appears to be mostly explained by a greater than expected drop in 2010 and may not represent a real ongoing decrease. Consistent with the modest improvements seen in overall inpatient mortality, disease-specific mortality rates for stroke, acute myocardial infarction (AMI), pneumonia and congestive heart failure (CHF) all declined from 2002-12. However, the trend appears to have slowed since 2007, especially for CHF and pneumonia (Figure 3).

Figure 3. Inpatient mortality rates for stroke, acute myocardial infarction (AMI), pneumonia and congestive heart failure (CHF) 2002-12.

Consistent with the trend of slowing improvement, mortality rates for these four conditions declined at −0.13% per quarter from 2008 until Q2 2011 but at only −0.03% per quarter from Q3 2011 until the end of 2013 (12).

Patient Ratings of Healthcare

CMS has embraced the concept of patient satisfaction as a quality measure, even going so far as rating hospitals based on patient satisfaction (13). The Gallup company conducts an annual poll of Americans' ratings of their healthcare (14). In general, these have not improved and may have actually declined in the past 2 years (Figure 4).

Figure 4. Americans’ rating of their healthcare.


There is little doubt that healthcare costs have risen (15). The rising cost of healthcare has been cited as a major factor in Americans’ poor rating of their healthcare. The trend appears to be one of increasing dissatisfaction with the cost of healthcare (Figure 5) (16).

Figure 5. Americans’ satisfaction or dissatisfaction with the cost of healthcare.


Americans have enjoyed remarkable improvements in life expectancy, mortality, and satisfaction with their healthcare over the past 100 years. However, the rate of these improvements appears to have slowed despite an ever-escalating cost. Starting from a much lower life expectancy in the US, primarily due to infectious disease, the dramatic effect of antibiotics and vaccines on overall mortality in the twentieth century would be difficult to duplicate. The current primary causes of mortality in the US, heart disease and cancer, are perhaps more difficult to impact in the same way. However, declining healthcare quality may explain, at least in part, the slowing improvement in healthcare.

The evidence of no, or only modest, improvement in patient outcomes is part of a disturbing trend in quality improvement programs by healthcare regulatory agencies. Under political pressure to “improve” healthcare, these agencies have imposed weak or non-evidence-based guidelines for many common medical disorders. In the case of CMS, hospitals are required to show improvement in compliance under the threat of financial penalties. Not surprisingly, hospitals show an improvement in compliance whether achieved or not (17). The regulatory agency then extrapolates these data from previous observational studies to claim a decline in mortality, cost or other outcomes. However, actual measurement of the outcomes is rarely performed. This difference is important because a reduction in a surrogate marker may not be associated with improved outcomes, or worse, the improvement may be fictitious. For example, many patients die with a hospital-acquired infection, and hospital-acquired infections are certainly associated with increased mortality. However, preventing the infections does not necessarily prevent death. In patients with widely metastatic cancer, for instance, infection is a common cause of death, but preventing or treating the infection may do little other than delay the inevitable. A program to prevent infections in these patients would likely have little effect on any meaningful patient outcome.

There is also a trend of bundling weakly evidence-based, non-patient centered surrogate markers with legitimate performance measures (18). Under threat of financial penalties, hospitals are required to improve these surrogate markers, and not surprisingly their reports indicate they do. The organization mandating compliance with their outcomes reports that under their guidance hospitals have significantly improved healthcare saving both lives and money. However, if the outcome is meaningless or the hospital lies about their improvement, there is no overall quality improvement. There is little incentive for the parties to question the validity of the data. The organization that mandates the program would be politically embarrassed by an ineffective program and the hospital would be financially penalized for honest reporting.

Improvement begins with the establishment of measures that are truly evidence-based. Surrogate markers should only be used when improvement in that marker has been unequivocally shown to improve patient-centered outcomes. The validity of the data also needs to be independently confirmed. Regulatory agency-demanded quality improvement programs that do not meet these criteria need to be regarded for what they are: political propaganda rather than real solutions.

The above data suggest that healthcare is improving little in what matters most: patient-centered outcomes. Claims by regulatory agencies of improved healthcare should be regarded with skepticism unless corroborated by improvement in valid patient-centered outcomes.


  1. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83(4):691-729. [PubMed]
  2. Affeldt JE. The new quality assurance standard of the Joint Commission on Accreditation of Hospitals. West J Med. 1980;132:166-70. [PubMed]
  3. Padrnos L, Bui T, Pattee JJ, et al. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.
  4. Edwards JR, Peterson KD, Andrus ML, et al; NHSN Facilities. National Healthcare Safety Network (NHSN) Report, data summary for 2006, issued June 2007. Am J Infect Control. 2007;35(5):290-301. [CrossRef] [PubMed]
  5. Dudeck MA, Weiner LM, Allen-Bridson K, et al. National Healthcare Safety Network (NHSN) report, data summary for 2012, device-associated module. Am J Infect Control. 2013;41(12):1148-66. [CrossRef] [PubMed]
  6. Metersky ML, Wang Y, Klompas M, Eckenrode S, Bakullari A, Eldridge N. Trend in ventilator-associated pneumonia rates between 2005 and 2013. JAMA. 2016 Dec 13;316(22):2427-9. [CrossRef] [PubMed]
  7. Whitman E. Fewer hospitals earn Medicare bonuses under value-based purchasing. Medscape. November 1, 2016. Available at: (accessed 11/3/16).
  8. Centers for Medicare & Medicaid Services. 2015 national impact assessment of the centers for medicare & medicaid services (CMS). quality measures report. March 2, 2015. Available at: (accessed 11/3/16).
  9. National Center for Health Statistics. Health, United States, 2015: With Special Feature on Racial and Ethnic Health Disparities. Hyattsville, MD. 2016. Available at: (accessed 11/3/16).
  10. Johnson NB, Hayes LD, Brown K, Hoo EC, Ethier KA. CDC National health report: leading causes of morbidity and mortality and associated behavioral risk and protective factors—United States, 2005–2013October 31, 2014/ 63(04);3-27. Available at: (accessed 11/3/16).
  11. Figueroa JF, Tsugawa Y, Zheng J, Orav EJ, Jha AK. Association between the Value-Based Purchasing pay for performance program and patient mortality in US hospitals: observational study. BMJ. 2016 May 9;353:i2214.
  12. Centers for Disease Control. Trends in inpatient hospital deaths: national hospital discharge survey, 2000–2010. March 2013. Available at: (accessed 11/3/16).
  13. CMS. First release of the overall hospital quality star rating on hospital compare. July 27, 2016. Available at: (accessed 11/3/16)
  14. Newport F. Ratings of U.S. healthcare quality no better after ACA. November 19, 2015. Available at: (accessed 11/3/16).
  15. Robbins RA. National health expenditures: the past, present, future and solutions. Southwest J Pulm Crit Care. 2015;11(4):176-85.
  16. Newport F. Ratings of U.S. healthcare quality no better after ACA. November 19, 2015. Available at: (accessed 11/3/16).
  17. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med 2012;157:305-12. [CrossRef] [PubMed]
  18. CMS. Bundled payments for care improvement (BPCI) initiative: general information. November 28, 2016. Available at: (accessed 12/30/16).

Cite as: Robbins RA. Is quality of healthcare improving in the US? Southwest J Pulm Crit Care. 2017;14(1):29-36. doi: PDF 


Survey Shows Support for the Hospital Executive Compensation Act

Richard A. Robbins, MD

Editor, SWJPCC

The Arizona Hospital Executive Compensation Act 2016 was an Arizona state proposition to limit healthcare executive pay to $450,000/year. An anonymous survey was conducted by the Southwest Journal of Pulmonary and Critical Care (SWJPCC) from 8/1/16-8/22/16 on support for the proposition and its possible effect on healthcare (Appendix 1). We obtained 52 responses, of which 49 were from physicians and 3 from other healthcare workers. Eighty-three percent (43 of 52) supported the proposition and only 10% (5 of 52) felt it would make patient care worse. Thirty-five percent (18 of 52) felt it would make patient care better, while the remaining 56% believed it would have no effect. All 5 of those who opposed the proposition felt it would make healthcare worse. These data suggest that, in the opinion of those who answered the SWJPCC survey, the vast majority supported a measure to limit healthcare executive pay and most felt it would either have no effect on patient care or make it better.
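The reported percentages can be verified with simple arithmetic (a quick check using only the counts given above):

```python
# Counts reported in the survey above
responses = 52
supported = 43
worse = 5
better = 18

def pct(n):
    """Percentage of total responses, rounded to the nearest whole number."""
    return round(100 * n / responses)

assert pct(supported) == 83   # "Eighty-three percent (43 of 52)"
assert pct(worse) == 10       # "only 10% (5 of 52)"
assert pct(better) == 35      # "Thirty-five percent (18 of 52)"

no_effect = responses - worse - better  # the 29 remaining respondents
assert pct(no_effect) == 56   # "the remaining 56%"
```

All four reported figures are internally consistent with the raw counts.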

Cite as: Robbins RA. Survey shows support for the hospital executive compensation act. Southwest J Pulm Crit Care. 2016;13:90. doi: PDF


The Disruptive Administrator: Tread with Care

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ



Although the extent of disruptive behavior in healthcare is unclear, the courts are beginning to recognize that administrators can wrongfully restrain a physician's ability to practice. Disruptive conduct is often difficult to prove. However, when administration takes action against an individual physician, the physician is largely powerless, with governing boards and courts usually siding with the administrators. As long as physicians remain vulnerable to retaliation and administrators remain exempt from consequences for inappropriate actions, physicians should carefully consider the consequences before displaying any opposition to an administrative action.


Over the past three decades there have been hundreds of articles published on "disruptive" physicians. These have appeared in prestigious medical journals and have been issued by medical organizations such as the American Medical Association and by regulatory organizations such as the Joint Commission and some state licensing agencies. Although attempts have been made to define disruptive behavior, the definition remains subjective and can be applied to any behavior viewed as objectionable by an administrator. The medical literature on disruptive physician behavior is descriptive, nonexperimental and not evidence based (1). Furthermore, despite claims to the contrary, there is little evidence that "disruptive" behavior harms patient care (1).

Certainly, there are physicians who are disruptive. Most disruptions are due to conflict between physicians and the other healthcare providers with whom they most closely interact, usually nurses. Not surprisingly, many of the authors of these descriptive articles have been nurses, although some have been administrators, lawyers or even other physicians. These articles often give the impression that administrators are merely trying to do their job and that physicians who disagree should be punished. This may often be true, and most administrators are trying their best to have a positive impact on healthcare delivery, but in some instances it is not.

Like disruptive physician behavior, the extent and incidence of disruptive administrative behavior is unknown. A PubMed search and even a Google search on disruptive administrative behavior yielded no relevant articles. However, one type of disruptive behavior is bullying, and a recent survey of obstetrics and gynecology consultants in the United Kingdom suggests the problem may be common. Nearly half of the consultants who responded said they had been persistently bullied or undermined on the job (2). Victims report that those at or near the top of the hierarchy, such as lead clinicians, medical directors, and board-level executives, do most of the bullying and undermining. Pamela Wible MD, an authority on physician suicide prevention, said these results are not unique to the United Kingdom and that the patterns are similar in the United States (3).

A major difference between physician and administrative disruptive behavior is that physician disruptive behavior usually involves a specific individual, whereas most of the examples detailed below are largely system retaliation against physicians who complained. Administrators typically work through committees, thereby diffusing their individual responsibility for a specific action. Wible said the usual long list of perpetrators against physicians often indicates a toxic work environment (3). "I talk to doctors every day who are ready to quit medicine because of this toxic work environment that has to do with this bullying behavior. What I hear most is it's coming from the clinic manager or the administrative team who calls the doctor into the office and beats them up ..." she added.

History of the Recognition of Physician Disruptive Behavior

Isolated articles on disruptive physician behavior first appeared in the medical literature in the 1970s, with scattered reports appearing through the 1980s and 1990s (4). Prompted by these isolated reports and the perception that this might be a growing problem, a Special Committee on Professional Conduct and Ethics was appointed by the Federation of State Medical Boards to investigate physician disruptive behavior. The committee released its report in April 2000 and listed 17 behavioral sentinel events (Table 1) (5).

Table 1. Behavioral sentinel events (5).

As announced in 2008 in an article in The Joint Commission Journal of Quality and Patient Safety and a Joint Commission Sentinel Event Alert, a new Joint Commission accreditation standard requires hospitals to have a disruptive behavior policy in place and to provide resources for its support as one of the leadership standards for accreditation (6,7). Although not stated, it is clear these standards refer to hospital employees and not hospital administration, giving the impression that any disagreement between a physician or other employee and administration is the result of disruptive behavior on the part of the physician or employee. They imply that all adverse actions against physicians for disruptive physician behavior are warranted. However, physicians may be trying to protect their patients from poor administrative decisions while administrators view physician opposition as insubordination. The viewpoint lies in the eyes of the observer.

Disruptive Administrative Behavior Involving Whistleblowing

Klein v University Of Medicine and Dentistry of New Jersey

Sanford Klein was chief of anesthesiology at Robert Wood Johnson University Hospital in New Brunswick, NJ, for 16 years (8). He grew increasingly concerned about patient safety in the radiology department and complained repeatedly to the hospital's chief of staff, citing insufficient staff, space, and resuscitation equipment. After Klein became more vocal, he was required to work under supervision. He refused to accept that restriction and sued. The trial judge granted summary judgment for the defendants, and an appellate court upheld that ruling. Klein is still a tenured professor at the university, but he no longer has privileges at the hospital. "This battle has cost me hundreds of thousands of dollars so far, and it's destroyed my career as a practicing physician," he says. "But if I had to do it over again, I would, because this is an ethical issue."

Lemonick v Allegheny Hospital System

David Lemonick was an emergency room physician at Pittsburgh's Western Pennsylvania Hospital who repeatedly complained to his department chairman about various patient safety problems (8). His department chairman accused him of "disruptive behavior". Lemonick wrote to the hospital's CEO to express his concerns about patient care; the CEO thanked him, promised an investigation, and assured him there would be no retaliation. Nevertheless, Lemonick was terminated. He sued the hospital for violating Pennsylvania's whistleblower protection law and another state law that specifically protects healthcare workers from retaliation for reporting a "serious event or incident" involving patient safety. Lemonick and Allegheny reached an out-of-court settlement, and he is now director of emergency medicine at a small hospital about 50 miles from Pittsburgh. He was named Pennsylvania's emergency room physician of the year in 2007.

Ulrich v Laguna Honda Hospital

John Ulrich protested at a staff meeting when he learned that Laguna Honda Hospital was planning to lay off medical personnel, including physicians (9). He claimed the layoffs would endanger patient care. Ulrich resigned, and the hospital administration reported his resignation to the state board and the National Practitioner Data Bank, noting that it had followed the commencement, unknown to Ulrich, of "a formal investigation into his practice and professional conduct". Although the state board found no grounds for action, the hospital refused to void the NPDB report. Ulrich sued the hospital and its administrators. In 2004, after a long legal battle, Ulrich won a $4.3 million verdict and later settled for about $1.5 million, with the hospital agreeing to retract its report to the NPDB. Still, he spent nearly seven years without a full-time job, doing part-time work as a coder and medical researcher, with a sharply reduced income.

Schulze v Humana

Dr. John Paul Schulze, a longtime family practice doctor in Corpus Christi, Texas, criticized Humana Health Care in 1996 for its decision to have its own doctors care for all patients once they were admitted to Humana hospitals (9). Humana officials alleged that he "was unfit to practice medicine, and represented an ongoing threat of harm to his patients" and reported Schulze to the National Practitioner Data Bank and the Texas State Board of Medical Examiners. Schulze sued, and after several years of legal battles an out-of-court settlement was reached.

Flynn v. Anadarko Municipal Hospital

Dr. John Flynn reported to Anadarko Municipal Hospital administrators that a colleague abandoned a patient (9). After no action was taken, he resigned from the medical staff before reporting the alleged violations to state and federal authorities. Flynn attempted to rejoin the staff after an investigation had found violations, but the medical staff denied him privileges. The public works authority governing the hospital held a lengthy hearing on the case and restored Flynn's privileges.

Kirby v University Hospitals of Cleveland

University Hospitals of Cleveland (UH) which is affiliated with Case Western Reserve University recruited Dr. Thomas Kirby to head up its cardiothoracic surgery and lung transplant divisions in 1998 (9). Not long after he joined UH, Kirby started pressing hospital executives about program changes, particularly for open heart procedures. Kirby said he was alarmed by mounting deaths and complications among intensive care patients after heart surgeries, and took his concerns to hospital administrators and board members.

When he returned from a vacation, Kirby learned he'd been demoted and the two colleagues he'd recruited to the program had been fired. During the subsequent months, acrimony within the department boiled over and eventually led to Kirby filing a slander suit against a fellow surgeon, who Kirby claimed made disparaging remarks to other staff members about his clinical competence. The hospital's reaction was to suspend Kirby. The suspension letter from the hospital chief of staff accused Kirby of being "abusive, arrogant and aggressive" with other hospital staff, including use of profanity and "foul and/or sexual language." Accusers were not named, dates were not supplied and Kirby was not offered the chance to continue practicing surgery. Subsequently, the Accreditation Council for Graduate Medical Education revoked UH's cardiothoracic surgery residency, saying the program no longer met council standards.

However, Kirby sued over another issue that may have been at the heart of the acrimony. Kirby had alleged that UH had entered into improper financial arrangements with doctors to induce them to refer patients and had then billed Medicare for the services provided. The U.S. attorney for the Northern District of Ohio intervened in the suit. UH eventually agreed to pay $13.9 million to settle the federal false claims lawsuit arising from the alleged anti-kickback violations, although it denied any wrongdoing. Kirby was awarded a settlement of $1.5 million.

Fahlen vs. Memorial Medical Center

Between 2004 and 2008, Dr. Mark Fahlen reported to hospital administration that nurses at Memorial Medical Center in Modesto, California, were failing to follow his directions, thus endangering patients' lives (10). However, the nurses complained about Fahlen's behavior, and he was fired. A peer committee consisting of six physicians reviewed the decision and found no professional incompetence, but Memorial's board refused to grant him staff privileges. Subsequently, Fahlen sued. After four years of legal wrangling, an out-of-court agreement reinstated Fahlen's hospital privileges.

Disruptive Administrative Behavior By an Individual Administrator

Vosough vs. Kierce

In Paterson, New Jersey, Khashayar Vosough MD and his partners sued St. Joseph's Regional Medical Center's obstetrics and gynecology department chairman, Roger Kierce MD, for profane language and abusive and demeaning behavior (11). Kierce once told a group of doctors he would "separate their skulls from their bodies" if they disobeyed him. In 2012 a Bergen County jury returned a verdict in less than an hour, awarding Vosough and his colleagues $1,270,000. However, the decision was appealed and overturned in 2014 by the Superior Court of New Jersey, Appellate Division (12).

Medical Staff Collectively Suing a Hospital Administration

Medical Staff of Avera Marshall Regional Medical Center v. Avera Marshall

In rare instances a collection of physicians comes into legal conflict with a hospital. In Minnesota, the medical staff of Avera Marshall Regional Medical Center was charged with physician credentialing, peer review, and quality assurance (13). A two-thirds majority vote was required to change the bylaws, but the hospital administration unilaterally changed them in early 2012. The medical staff sued the hospital.

However, the real source of the dispute may have been patient referrals and income. Conflict arose when doctors not employed by the hospital alleged that the hospital was steering emergency room patients toward its own employed doctors. The case was eventually decided by the Minnesota Supreme Court, which ruled in favor of the medical staff (13).


These cases illustrate that physicians can occasionally win lawsuits against hospital administration for disruptive behavior. However, victory is often hollow, with careers destroyed and years without a professional income as the wheels of justice slowly turn. As one article asked, "Is whistleblowing worth it?" (8).

Dr. Fahlen was fortunate that the peer review found no professional incompetence. In many instances the reviews are conducted by physician administrators with the verdict predetermined. For example, in the Thomas Kummet case presented in the Southwest Journal of Pulmonary and Critical Care, an independent review concluded there was no malpractice (14). However, the Veterans Administration had the case reviewed by a VA-appointed committee, which sided with the VA administration. Kummet's name was subsequently submitted to the National Practitioner Data Bank, and he sued the VA. After the case was dismissed by a Federal court, Kummet left the VA system.

Physicians are particularly vulnerable to retaliation by unfounded accusations. Several examples were given above. In many of these cases, complaints were followed by what appeared to be a sham peer review. Sham peer review is a name given to the abuse of the peer review process to attack a doctor for personal or other non-medical reasons (15,16). The American Medical Association conducted an investigation of medical peer review in 2007 and concluded that it is easy to allege misconduct; 15% of physicians surveyed by the Massachusetts Medical Society indicated that they were aware of peer review misuse or abuse (17). However, cases of malicious peer review proven through the legal system are rare.

Huntoon (18) listed a number of characteristics of sham peer review (Table 2).

Table 2. Characteristics of sham peer review (18).

I first witnessed peer review being used as a weapon as a junior faculty member in the mid-1980s. The then chief of thoracic surgery, a pediatric thoracic surgeon, underwent peer review. It appeared that the underlying reason was that most of his operations were performed at an affiliated children's hospital rather than at the university medical center that conducted the review. How often income rather than medical quality is the real motivation for an administrative action against a physician is unknown, although some of the above cases suggest it is not uncommon. Given the amount of money potentially involved and the lack of consequences for hospital administration, it is naive to believe that false accusations will not continue to occur.

Most disturbing are physicians who falsely accuse other physicians. Although this behavior would clearly be covered by behavioral sentinel events such as those listed in Table 1, hospital boards may decline to act. For example, one physician accused a hospital director, a non-practicing physician, of being disruptive. The hospital board failed to act, stating that its interpretation was that the term disruptive physician applied only to practicing physicians.

The federal Whistleblower Protection Act (WPA) protects most federal employees who work in the executive branch. It also requires that federal agencies take appropriate action. Most individual states have also enacted their own whistleblower laws, which protect state, public and/or private employees. Unlike their federal counterpart, however, these state laws generally do not provide payment or compensation to whistleblowers; instead, the states concentrate on preventing retaliatory action toward the whistleblower. Unlike California's law, which specifically protects physicians, most state laws are not specific to physicians.

Although beyond the scope of this review, it seems likely that disruptive administrative actions also occur against other healthcare workers, including nurses, technicians and other staff; the prevalence and appropriateness of these actions, however, are unclear. As leaders of the healthcare team, and often not employed by the hospital, physicians are unique, as evidenced by the National Practitioner Data Bank. No similar nursing, technician or administrator data bank exists.

Although the few cases cited above suggest that legal action against abusive administrators can be successful, such cases are rare. The consequences of being labeled disruptive can be dire for physicians, who lack any due process in hospitals and often in the courts. Until administration can be held accountable for behavior that is considered disruptive, the sensible physician might avoid conflicts with hospital administration.


  1. Hutchinson M, Jackson D. Hostile clinician behaviours in the nursing work environment and implications for patient care: a mixed-methods systematic review. BMC Nurs. 2013 Oct 4;12(1):25. [CrossRef] [PubMed]
  2. Shabazz T, Parry-Smith W, Oates S, Henderson S, Mountfield J. Consultants as victims of bullying and undermining: a survey of Royal College of Obstetricians and Gynaecologists consultant experiences. BMJ Open. 2016 Jun 20;6(6):e011462. [CrossRef] [PubMed]
  3. Frellick M. Senior physicians report bullying from above and below. Medscape. June 29, 2016. Available at:
  4. Hollowell EE. The disruptive physician: handle with care. Trustee. 1978 Jun;31(6):11-3, 15, 17. [PubMed]
  5. Russ C, Berger AM, Joas T, Margolis PM, O'Connell LW, Pittard JC, Porter GA, Selinger RCL, Tornelli-Mitchell J, Winchell CE, Wolff TL. Report of the Special Committee on Professional Conduct and Ethics. Federation of State Medical Boards of the United States. April 2000. Available at: (accessed 5/3/16).
  6. Rosenstein AH, O’Daniel M. A survey of the impact of disruptive behaviors and communication defects on patient safety. Jt Comm J Qual Patient Saf. 2008;34:464–471. [PubMed]
  7. The Joint Commission. Behaviors That Undermine a Culture of Safety Sentinel Event Alert #40 July 9, 2008: 1-5. Available from: (accessed 5/3/16).
  8. Rice B. Is whistleblowing worth it? Medical Economics. January 20, 2006. Available at: (accessed 5/5/16).
  9. Twedt S. The Cost of Courage: how the tables turn on doctors. Pittsburgh Post-Gazette. October 26, 2003. Available at: (accessed 5/5/16).
  10. Danaher M. Physician not required to exhaust hospital’s administrative review process before suing hospital under state’s whistleblower statute. Employment Law Matters. February 20, 2014. Available at: (accessed 5/3/16).
  11. Washburn L. Doctors win suit against hospital over abuse by boss. The Record. January 11, 2012. Available at: (accessed 5/4/16).
  12. Ashrafi JAD. Vosough v. Kierce. Find Law for Legal Professionals. 2014. Available at: (accessed 5/4/16).
  13. Moore, JD Jr. When docs sue their own hospital-at issue: who has authority to hire, fire, and discipline staff physicians. Medpage Today. January 19, 2015. Available at: (accessed 5/3/16).
  14. Robbins RA. Profiles in medical courage: Thomas Kummet and the courage to fight bureaucracy. Southwest J Pulm Crit Care. 2013;6(1):29-35. Available at: (accessed 8/5/16)
  15. Chalifoux R Jr. So what is a sham peer review? MedGenMed. 2005 Nov 15;7(4):47. [PubMed]
  16. Langston EL. Inappropriate peer review. Report of the board of trustees. 2016. Available at: (accessed 5/15/16).
  17. Chu J. Doctors who hurt doctors. Time. August 07, 2005. Available at:,9171,1090918,00.html (accessed 5/15/16, requires subscription).
  18. Huntoon LR. Tactics characteristic of sham peer review. Journal of American Physicians and Surgeons 2009;14(3):64-6.

Cite as: Robbins RA. The disruptive administrator: tread with care. Southwest J Pulm Crit Care. 2016:13(2):71-9. doi: PDF