
Editorials

Last 50 Editorials

(Click on title to be directed to posting, most recent listed first)

Medicare for All-Good Idea or Political Death?
What Will Happen with the Generic Drug Companies’ Lawsuit: Lessons from
   the Tobacco Settlement
The Implications of Increasing Physician Hospital Employment
More Medical Science and Less Advertising
The Need for Improved ICU Severity Scoring
A Labor Day Warning
Keep Your Politics Out of My Practice
The Highest Paid Clerk
The VA Mission Act: Funding to Fail?
What the Supreme Court Ruling on Binding Arbitration May Mean to
   Healthcare 
Kiss Up, Kick Down in Medicine 
What Does Shulkin’s Firing Mean for the VA? 
Guns, Suicide, COPD and Sleep
The Dangerous Airway: Reframing Airway Management in the Critically Ill 
Linking Performance Incentives to Ethical Practice 
Brenda Fitzgerald, Conflict of Interest and Physician Leadership 
Seven Words You Can Never Say at HHS
Equitable Peer Review and the National Practitioner Data Bank 
Fake News in Healthcare 
Beware the Obsequious Physician Executive (OPIE) but Embrace Dyad
   Leadership 
Disclosures for All 
Saving Lives or Saving Dollars: The Trump Administration Rescinds Plans to
   Require Sleep Apnea Testing in Commercial Transportation Operators
The Unspoken Challenges to the Profession of Medicine
EMR Fines Test Trump Administration’s Opposition to Bureaucracy 
Breaking the Guidelines for Better Care 
Worst Places to Practice Medicine 
Pain Scales and the Opioid Crisis 
In Defense of Eminence-Based Medicine 
Screening for Obstructive Sleep Apnea in the Transportation Industry—
   The Time is Now 
Mitigating the “Life-Sucking” Power of the Electronic Health Record 
Has the VA Become a White Elephant? 
The Most Influential People in Healthcare 
Remembering the 100,000 Lives Campaign 
The Evil That Men Do-An Open Letter to President Obama 
Using the EMR for Better Patient Care 
State of the VA
Kaiser Plans to Open "New" Medical School 
CMS Penalizes 758 Hospitals For Safety Incidents 
Honoring Our Nation's Veterans 
Capture Market Share, Raise Prices 
Guns and Sleep 
Is It Time for a National Tort Reform? 
Time for the VA to Clean Up Its Act 
Eliminating Mistakes In Managing Coccidioidomycosis 
A Tale of Two News Reports 
The Hands of a Healer 
The Fabulous Fours! Annual Report from the Editor 
A Veterans Day Editorial: Change at the VA? 
A Failure of Oversight at the VA 
IOM Releases Report on Graduate Medical Education 
Mild Obstructive Sleep Apnea: Beyond the AHI 

 

For complete editorial listings click here.

The Southwest Journal of Pulmonary and Critical Care welcomes submission of editorials on journal content or on issues relevant to pulmonary, critical care, or sleep medicine.

---------------------------------------------------------------------------------------------


Tuesday, November 1, 2011

Why Is It So Difficult to Get Rid of Bad Guidelines? 

Reference as: Robbins RA. Why is it so difficult to get rid of bad guidelines? Southwest J Pulm Crit Care 2011;3:141-3.

My colleagues and I recently published a manuscript in the Southwest Journal of Pulmonary and Critical Care examining compliance with the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission, JCAHO) guidelines (1). Compliance with the Joint Commission’s acute myocardial infarction, congestive heart failure, pneumonia, and surgical process of care measures had no correlation with traditional outcome measures, including mortality rates, morbidity rates, length of stay, and readmission rates. In other words, increased compliance with the guidelines was ineffectual at improving patient-centered outcomes. Most would agree that ineffectual outcomes are bad. The data were obtained from the Veterans Healthcare Administration Quality and Safety Report and included 485,774 acute medical/surgical discharges in 2009 (2). These data are similar to the Joint Commission’s own data published in 2005, which showed no correlation between guideline compliance and hospital mortality, and to a number of other publications which have failed to show a correlation between the Joint Commission’s guidelines and patient-centered outcomes (3-8). As we pointed out in 2005, the lack of correlation is not surprising since several of the guidelines are not evidence based, and improvement in performance has usually come from increased compliance with these non-evidence-based guidelines (1,9).

The above raises the question: if some of the guidelines are not evidence based and do not seem to have any benefit for patients, why do they persist? We believe that many of the guidelines were formulated with the concept of being easy and cheap to measure and implement, and perhaps more importantly, easy to demonstrate an improvement in compliance. In other words, the guidelines are initiated more to create the perception of an improvement in healthcare than an actual improvement. For example, in the pneumonia guidelines, one of the performance measures which has markedly improved is administration of pneumococcal vaccine. Pneumococcal vaccine is easy and cheap to administer once every 5 years to adult patients, despite the evidence that it is ineffective (10). In contrast, it is probably not cheap and certainly not easy to improve pneumonia mortality rates, morbidity rates, length of stay, and readmission rates.

To understand why these ineffectual guidelines persist, one needs to understand who benefits from guideline implementation and compliance. First, organizations which formulate the guidelines, such as the Joint Commission, benefit. Implementing a program that the Joint Commission can claim shows an improvement in healthcare is self-serving, but admitting that a program provides no benefit would be politically devastating. At a time when some hospitals are opting out of Joint Commission certification, and when the Joint Commission is under pressure from competing regulatory organizations, the Joint Commission needs to show that its programs produce positive results.

Second, programs to ensure compliance with the guidelines directly employ an increasingly large number of personnel within a hospital. At the last VA hospital where I worked, 26 full-time personnel were employed in quality assurance. Since compliance with guidelines to a large extent accounts for their employment, the quality assurance nurses would seem to have little incentive to question whether these guidelines really result in improved healthcare. Rather, their job is to ensure guideline compliance by both hospital employees and nonemployees who practice within the hospital.

Lastly, the administrators within a hospital have several incentives to preserve the guideline status quo. Administrators are often paid bonuses for ensuring guideline compliance. In addition to this direct financial incentive, administrators can often lobby for increases in pay because, with the increased number of personnel employed to ensure guideline compliance, the administrators now supervise more employees, an important factor in determining their salary. Furthermore, success in improving compliance allows administrators to advertise both themselves and their hospital as “outstanding”.

In addition, guidelines allow administrative personnel to direct patient care and indirectly control clinical personnel. Many clinical personnel feel uneasy when confronted with “evidence-based” protocols and guidelines that are clearly not evidence based. Such discomfort is likely to be more intense when the goals are not simply to recommend a particular approach but to judge failure to comply as evidence of substandard or unsafe care. Reporting a physician or a nurse for substandard care to a licensing board or on a performance evaluation may have devastating consequences.

There appears to be a discrepancy between an “outstanding” hospital as determined by the Joint Commission guidelines and as determined by other organizations. Many hospitals which were recognized as top hospitals by US News & World Report, HealthGrades Top 50 Hospitals, or Thomson Reuters Top Cardiovascular Hospitals were not included in the Joint Commission list. Absent are the Mayo Clinic, the Cleveland Clinic, Johns Hopkins University, Stanford University Medical Center, and Massachusetts General. Academic medical centers, for the most part, were noticeably absent. There were no hospitals listed in New York City, none in Baltimore, and only one in Chicago. Small community hospitals were overrepresented and large academic medical centers were underrepresented in the report. However, consistent with previous reports, we found that larger, predominantly urban, academic hospitals had better all-cause mortality, surgical mortality, and surgical morbidity compared to small, rural hospitals (1).

Despite the above, I support both guidelines and performance measures, but only if they clearly result in improved patient-centered outcomes. Formulating guidelines where the only measure of success is compliance with the guideline should be discouraged. We find it particularly disturbing that we can easily find a hospital’s compliance with a Joint Commission guideline but have difficulty finding the hospital’s standardized mortality rates, morbidity rates, length of stay, and readmission rates, measures which are meaningful to most patients. The Joint Commission needs to develop better measures to determine hospital performance. Until that time, the “quality” measures need to be viewed for what they are: meaningless measures which do not serve patients but serve those who benefit from their implementation and compliance.

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Robbins RA, Gerkin R, Singarajah CU. Relationship between the veterans healthcare administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  2. Available at: http://www.va.gov/health/docs/HospitalReportCard2010.pdf (accessed 9-28-11).
  3. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-64.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA 2006;296:2694-702.
  5. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-20.
  6. Fonarow GC, Yancy CW, Heywood JT; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Intern Med 2005;165:1469-77.
  7. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008;149:29-32.
  8. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM.  Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479-85.
  9. Robbins RA, Klotz SA. Quality of care in U.S. hospitals. N Engl J Med. 2005;353:1860-1.
  10. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.