
Editorials

Last 50 Editorials

(Click on title to be directed to posting, most recent listed first)

Blue Shield of California Announces Help for Independent Doctors-A Warning
Medicare for All-Good Idea or Political Death?
What Will Happen with the Generic Drug Companies’ Lawsuit: Lessons from the Tobacco Settlement
The Implications of Increasing Physician Hospital Employment
More Medical Science and Less Advertising
The Need for Improved ICU Severity Scoring
A Labor Day Warning
Keep Your Politics Out of My Practice
The Highest Paid Clerk
The VA Mission Act: Funding to Fail?
What the Supreme Court Ruling on Binding Arbitration May Mean to Healthcare
Kiss Up, Kick Down in Medicine 
What Does Shulkin’s Firing Mean for the VA? 
Guns, Suicide, COPD and Sleep
The Dangerous Airway: Reframing Airway Management in the Critically Ill 
Linking Performance Incentives to Ethical Practice 
Brenda Fitzgerald, Conflict of Interest and Physician Leadership 
Seven Words You Can Never Say at HHS
Equitable Peer Review and the National Practitioner Data Bank 
Fake News in Healthcare 
Beware the Obsequious Physician Executive (OPIE) but Embrace Dyad Leadership
Disclosures for All 
Saving Lives or Saving Dollars: The Trump Administration Rescinds Plans to Require Sleep Apnea Testing in Commercial Transportation Operators
The Unspoken Challenges to the Profession of Medicine
EMR Fines Test Trump Administration’s Opposition to Bureaucracy 
Breaking the Guidelines for Better Care 
Worst Places to Practice Medicine 
Pain Scales and the Opioid Crisis 
In Defense of Eminence-Based Medicine 
Screening for Obstructive Sleep Apnea in the Transportation Industry—The Time is Now
Mitigating the “Life-Sucking” Power of the Electronic Health Record 
Has the VA Become a White Elephant? 
The Most Influential People in Healthcare 
Remembering the 100,000 Lives Campaign 
The Evil That Men Do-An Open Letter to President Obama 
Using the EMR for Better Patient Care 
State of the VA
Kaiser Plans to Open "New" Medical School 
CMS Penalizes 758 Hospitals For Safety Incidents 
Honoring Our Nation's Veterans 
Capture Market Share, Raise Prices 
Guns and Sleep 
Is It Time for a National Tort Reform? 
Time for the VA to Clean Up Its Act 
Eliminating Mistakes In Managing Coccidioidomycosis 
A Tale of Two News Reports 
The Hands of a Healer 
The Fabulous Fours! Annual Report from the Editor 
A Veterans Day Editorial: Change at the VA? 
A Failure of Oversight at the VA 
IOM Releases Report on Graduate Medical Education 

 

For complete editorial listings click here.

The Southwest Journal of Pulmonary and Critical Care welcomes submission of editorials on journal content or on issues relevant to pulmonary, critical care, or sleep medicine.

---------------------------------------------------------------------------------------------

Entries in Joint Commission (5)

Thursday
Mar 16, 2017

Pain Scales and the Opioid Crisis 

In the last year, physicians and nurses have increasingly voiced their dissatisfaction with pain as the fifth vital sign. In June 2016, the American Medical Association recommended that pain scales be removed from professional medical standards (1). In September 2016, the American Academy of Family Physicians did the same (2). A recent Medscape survey reported that over half of surveyed doctors and nurses supported removal of pain assessment as a routine vital sign (3).

In the 1990s there was a widespread impression that pain was undertreated. Whether this was true, or an impression created by a few practitioners and undertreated patients with the support of the pharmaceutical industry, is unclear. Nevertheless, the prevailing thought became that identifying and quantifying pain would lead to more appropriate pain therapy. The American Society of Anesthesiologists and the American Pain Society issued practice guidelines for pain management (4,5). Subsequently, both the Department of Veterans Affairs and the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) mandated a pain scale as the fifth vital sign (6-9). Most commonly these scales ask patients to rate their pain on a scale of 1-10. The JCAHO mandated that "Pain is assessed in all patients” and gave hospitals "Requirements for Improvement" if they failed to meet this standard (9). The JCAHO also published a book in 2000 for purchase as part of required continuing education seminars (9). The book cited studies claiming that "there is no evidence that addiction is a significant issue when persons are given opioids for pain control" and called doctors' concerns about addiction side effects "inaccurate and exaggerated." The book was sponsored by Purdue Pharma, the maker of OxyContin (oxycodone).

Almost as soon as the standards were initiated, suggestions emerged that pain treatment was becoming overzealous. In 2003 a survey of 250 adults who had undergone surgical procedures reported that almost 90% were satisfied with their pain medications. Nevertheless, the authors concluded that “many patients continue to experience intense pain after surgery … additional efforts are required to improve patients’ postoperative pain experience” (8). Concerns about overaggressive treatment of pain increased after Vila et al. (10) reported in 2005 that the incidence of opioid oversedation increased from 11.0 to 24.5 per 100,000 inpatient hospital days after hospitals implemented a numerical pain treatment algorithm. As early as 2002 the Institute for Safe Medication Practices linked overaggressive pain management to a substantial increase in oversedation and fatal respiratory depression events (11). Articles appeared questioning the wisdom of asking every patient to rate their pain, noting that implementation of the scale did not appear to improve pain management (12). The JCAHO removed its standard requiring pain assessment in all patients, but not until 2009.

The US has seen a dramatic increase in the incidence of opioid deaths (13). It is unclear whether adoption of the pain scale and its widespread application to all patients contributed to the increase, although the time frame and the data from Vila et al. (10) suggest that this is likely.

Other factors may also have contributed to the increase in opioid deaths. The Medscape survey mentioned above asked participants how often they feel pressure to prescribe pain medication in order to keep patient satisfaction levels high (3). Specifically mentioned was the Hospital Consumer Assessment of Healthcare Providers and Systems, or HCAHPS, a patient satisfaction survey required of all hospitals in the US. About two-thirds of doctors and nurses felt there was pressure (3). The survey also asked respondents about the influence of patient reviews on opioid prescribing. Forty-six percent of doctors said the reviews were more than slightly influential. The reviews seemed to carry more weight with nurses: seventy-three percent said they were influential. Others have blamed pharmaceutical companies' marketing of opioids as a way of reducing pain and increasing patient satisfaction (14). Clearly, there has been a dramatic increase in narcotic prescriptions, and not surprisingly, pharmaceutical companies have done little to curb the use of their products.

Last year, then-CDC Director Tom Frieden said "The prescription overdose epidemic is doctor-driven…It can be reversed in part by doctors' actions” (15). Some physicians have taken this as blame for the entire opioid crisis, including deaths from heroin and illegal fentanyl. There may be some validity in this belief since abuse of illegal narcotics sometimes evolves out of abuse of prescribed narcotics. However, the actions of the health regulatory agencies that mandated pain scales and created guidelines for pain management were not mentioned by Dr. Frieden. Also not mentioned were the patient satisfaction surveys.

About a year ago the CDC issued guidelines for prescribing opioids for chronic pain (15). These guidelines were developed in collaboration with a number of federal agencies, including the Department of Veterans Affairs, which was one of the first to mandate pain scales, and the Centers for Medicare and Medicaid Services (CMS), which mandated HCAHPS. Pain is a subjective symptom, and its quantification and treatment are imprecise. The goal cannot be to deliver perfect pain management but to reduce the incidence of under- and overtreatment as much as possible. Someone needs to assess patients’ pain complaints and prescribe opioids appropriately. No one is better qualified and prepared than the clinician at the bedside.

No one condones the unethical practice of widespread prescription of opioids without sufficient medical oversight. However, meddling by unqualified bureaucrats, administrators and politicians emphasizes guidelines over appropriate care. As detailed above, the present opioid crisis may be an unintended consequence of the pain scale and opioid prescribing guidelines. Further intrusion by the same groups who created the crisis is unlikely to solve the problem but is likely to create additional problems such as the undertreatment of patients with severe pain. As I write this on the Ides of March, it may be appropriate to paraphrase a line from Julius Caesar: “The fault lies not in our doctors but in our regulators”.

Richard A. Robbins, MD

Editor, SWJPCC

References

  1. Anson P. AMA drops pain as vital sign. Pain News Network. June 16, 2016. Available at: https://www.painnewsnetwork.org/stories/2016/6/16/ama-drops-pain-as-vital-sign (accessed 3/2/17).
  2. Lowes R. Drop pain as the fifth vital sign, AAFP says. Medscape Medical News. September 22, 2016. Available at: http://www.medscape.com/viewarticle/869169 (accessed 3/2/17).
  3. Ault A. Many physicians, nurses want pain removed as fifth vital sign. Medscape Medical News. Medscape Medical News. February 21, 2017. Available at: http://www.medscape.com/viewarticle/875980?nlid=113119_3464&src=WNL_mdplsfeat_170228_mscpedit_ccmd&uac=9273DT&spon=32&impID=1299168&faf=1 (accessed 3/2/17).
  4. Practice guidelines for acute pain management in the perioperative setting. A report by the American Society of Anesthesiologists Task Force on Pain Management, Acute Pain Section. Anesthesiology. 1995 Apr;82(4):1071-81. [CrossRef] [PubMed]
  5. Gordon DB, Dahl JL, Miaskowski C, McCarberg B, Todd KH, Paice JA, Lipman AG, Bookbinder M, Sanders SH, Turk DC, Carr DB. American pain society recommendations for improving the quality of acute and cancer pain management: American Pain Society Quality of Care Task Force. Arch Intern Med. 2005 Jul 25;165(14):1574-80. [CrossRef] [PubMed]
  6. National Pain Management Coordinating Committee. Pain as the 5Th vital sign toolkit. Department of Veterans Affairs. October 2000. Available at: https://www.va.gov/PAINMANAGEMENT/docs/Pain_As_the_5th_Vital_Sign_Toolkit.pdf (accessed 3/2/17).
  7. Baker DW. History of The Joint Commission's Pain Standards: Lessons for Today's Prescription Opioid Epidemic. JAMA. 2017 Mar 21;317(11):1117-8. [CrossRef] [PubMed]
  8. Apfelbaum JL, Chen C, Mehta SS, Gan TJ. Postoperative pain experience: results from a national survey suggest postoperative pain continues to be undermanaged. Anesth Analg. 2003;97(2):534-540. [CrossRef] [PubMed]
  9. Moghe S. Opioid history: From 'wonder drug' to abuse epidemic. CNN. October 14, 2016. Available at: http://www.cnn.com/2016/05/12/health/opioid-addiction-history/ (accessed 3/2/17).
  10. Vila H Jr, Smith RA, Augustyniak MJ, et al. The efficacy and safety of pain management before and after implementation of hospital-wide pain management standards: is patient safety compromised by treatment based solely on numerical pain ratings? Anesth Analg. 2005;101(2):474-480. [CrossRef] [PubMed]
  11. Institute for Safe Medication Practices. Pain scales don’t weigh every risk. July 24, 2002. Available at: https://www.ismp.org/newsletters/acutecare/articles/20020724.asp (accessed 3/2/17).
  12. Mularski RA, White-Chu F, Overbay D, Miller L, Asch SM, Ganzini L. Measuring pain as the 5th vital sign does not improve quality of pain management. J Gen Intern Med. 2006 Jun;21(6):607-12. [CrossRef] [PubMed] 
  13. Rudd RA, Seth P, David F, Scholl L. Increases in drug and opioid-involved overdose deaths - United States, 2010-2015. MMWR Morb Mortal Wkly Rep. 2016 Dec 16;65. Published on-line. [CrossRef] [PubMed]
  14. Cha AE. The drug industry’s answer to opioid addiction: More pills. Washington Post. October 16, 2016. Available at: https://www.washingtonpost.com/national/the-drug-industrys-answer-to-opioid-addiction-more-pills/2016/10/15/181a529c-8ae4-11e6-bff0-d53f592f176e_story.html?utm_term=.36c5992fa62f (accessed 3/2/17).
  15. Lowes R. CDC issues opioid guidelines for 'doctor-driven' epidemic. Medscape. March 15, 2016. Available at: http://www.medscape.com/viewarticle/860452 (accessed 3/2/17).

Cite as: Robbins RA. Pain scales and the opioid crisis. Southwest J Pulm Crit Care. 2017;14(3):119-22. doi: https://doi.org/10.13175/swjpcc033-17 PDF 

Sunday
Mar 16, 2014

Questioning the Inspectors 

In the early twentieth century hospitals were unregulated and care was arbitrary, nonscientific and often poor. The Flexner report of 1910 and the establishment of hospital standards by the American College of Surgeons in 1918 began the process of hospital inspection and improvement (1). The latter program eventually evolved into what we know today as the Joint Commission. Veterans Administration (VA) hospitals have been inspected and accredited by the Joint Commission since the Reagan administration.

VA hospitals often share reports of recent Joint Commission inspections, disseminating them as a "briefing". One of these briefings, from a recent Amarillo VA inspection, was widely distributed as an email attachment and forwarded to me (for a copy of the briefing click here). Several items in the briefing are noteworthy. One was on the first page (highlighted in the attachment), where the briefing stated, "Surveyor recommended teaching people how to smoke with oxygen, not just discuss smoking cessation". However, patients requiring oxygen should not smoke while oxygen is flowing (2,3). Oxygen itself is not explosive, but a patient lighting a cigarette in a high-oxygen environment can ignite the oxygen tubing, resulting in a facial burn (2,3). A very rare but more serious situation can occur when a home fire results from ignition of clothing, bedding, etc. (3).

A quick Google search revealed no data on any program teaching patients to smoke on oxygen. It is possible that the author of the "briefing" misunderstood the Joint Commission surveyor. However, given the lack of physician, nurse and respiratory therapist autonomy, it is easy to envision administrative demands for a program to "teach people how to smoke on oxygen", wasting clinician and technician time on something that is potentially harmful.

Although this is an extreme and absurd example of healthcare directed by bureaucrats, review of the remainder of the "briefing" is only slightly less disappointing. Most of the Joint Commission's recommendations for Amarillo would not be expected to improve healthcare, and even fewer have an evidence basis. The Joint Commission's focus should be on standards demonstrated to improve patient outcomes rather than a series of arbitrary, meaningless metrics. For example, a Joint Commission inspection should assess whether nurse staffing is adequate, whether the major medical specialties and subspecialties are readily accessible, and whether sufficient equipment and space are provided to care for patients (4,5). By ignoring the important and focusing on the insignificant, the Joint Commission is pushing hospitals towards the arbitrary and nonscientific care reminiscent of the last century. These poor hospital inspections will eventually lead to poorer patient outcomes.

Richard A. Robbins, MD*

Editor

References

  1. Borus ME, Buntz CG, Tash WR. Evaluating the Impact of Health Programs: A Primer. 1982. Cambridge, MA: MIT Press.
  2. Robb BW, Hungness ES, Hershko DD, Warden GD, Kagan RJ. Home oxygen therapy: adjunct or risk factor? J Burn Care Rehabil. 2003;24(6):403-6. [CrossRef] [PubMed]
  3. Ahrens M. Fires And Burns Involving Home Medical Oxygen. National Fire Protection. Association. Available at: http://www.nfpa.org/safety-information/for-consumers/causes/medical-oxygen (accessed 3/12/14).
  4. Aiken LH, Clarke SP, Sloane DM, Sochalski J, Silber JH. Hospital nurse staffing and patient mortality, nurse burnout, and job dissatisfaction. JAMA. 2002 Oct 23-30;288(16):1987-93. [CrossRef] [PubMed]
  5. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med. 1999;14(8):499-511. [CrossRef] [PubMed]

*The views expressed are those of the author and do not necessarily reflect the views of the Arizona, New Mexico, Colorado or California Thoracic Societies or the Mayo Clinic.

Reference as: Robbins RA. Questioning the inspectors. Southwest J Pulm Crit Care. 2014;8(3):188-9. doi: http://dx.doi.org/10.13175/swjpcc032-14 PDF

Monday
Oct 8, 2012

The Emperor Has No Clothes: The Accuracy of Hospital Performance Data  

Several studies were announced within the past month dealing with performance measurement. One was the Joint Commission on the Accreditation of Healthcare Organizations (Joint Commission, JCAHO) 2012 annual report on Quality and Safety (1). This includes the JCAHO’s “best” hospital list. Ten hospitals from Arizona and New Mexico made the 2012 list (Table 1).

Table 1. JCAHO list of “best” hospitals in Arizona and New Mexico for 2011 and 2012.

This compares to 2011 when only six hospitals from Arizona and New Mexico were listed. Notably underrepresented are the large urban and academic medical centers. A quick perusal of the entire list reveals that this is true for most of the US, despite larger and academic medical centers generally having better outcomes (2,3).

This raises the question of what criteria are used to measure quality. The JCAHO criteria are listed in Appendix 2 at the end of their report. They are not outcome-based but rather a series of surrogate markers. The Joint Commission calls its criteria “evidence-based”, and indeed some are, but some are not (2). Furthermore, many of the criteria are bundled: failure to comply with one criterion is the same as failing to comply with them all. They are also not weighted, i.e., each criterion is judged to be as important as any other. Pneumonia is an example where this might have an important effect on outcomes. Administering an appropriate antibiotic to a patient with pneumonia is clearly evidence-based. However, administering the 23-valent polysaccharide pneumococcal vaccine to adults is not effective (4-6). By the Joint Commission’s criteria, administering pneumococcal vaccine is just as important as choosing the right antibiotic, and failure to do either results in a judgment of noncompliance.

Previous studies have not shown that compliance with the JCAHO criteria improves outcomes (2,3). Examination of the US Department of Health & Human Services Hospital Compare website is consistent with these results: none of the “best” hospitals in Arizona or New Mexico were better than the US average in readmissions, complications, or deaths (7).

A second announcement was the reported success of the Agency for Healthcare Research and Quality's (AHRQ) program on central line-associated bloodstream infections (CLABSI) (8). According to the press release, the AHRQ program has prevented more than 2,000 CLABSIs, saving more than 500 lives and avoiding more than $34 million in health care costs. This is surprising since, with the possible exception of using chlorhexidine instead of betadine, the bundled criteria are not evidence-based and have not correlated with outcomes (9). Examination of the press release reveals that the reduction in mortality and the savings in healthcare costs were estimated from the hospitals' self-reported reduction in CLABSI.

A clue to the potential source of these discrepancies came from an article published in the Annals of Internal Medicine by Meddings and colleagues (10). These authors studied urinary tract infections that were self-reported by hospitals using claims data. According to Meddings, the data were “inaccurate” and “are not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors propose that Medicare's nonpayment for “reasonably preventable” hospital-acquired complications produced this discrepancy. There is no reason to assume that data reported for CLABSI or ventilator-associated pneumonia (VAP) are any more accurate.

These and other healthcare data seem to follow a trend of bundling weakly evidence-based, non-patient-centered surrogate markers with legitimate performance measures. Under threat of financial penalty, hospitals are required to improve these surrogate markers, and not surprisingly, they do. The organization mandating compliance then joyfully reports how it has improved healthcare, saving both lives and money. These reports are often accompanied by estimates, but not measurements, of patient-centered outcomes such as mortality, morbidity, length of stay, readmission or cost. The result is that there is no real effect on healthcare other than an increase in costs. Furthermore, there would seem to be little incentive to question the validity of the data: the organization that mandates the program would be politically embarrassed by an ineffective program, and the hospital would be financially penalized for honest reporting.

Improvement begins with the establishment of guidelines that are truly evidence-based and have a reasonable expectation of improving patient-centered outcomes. Surrogate markers should be replaced by patient-centered outcomes such as mortality, morbidity, length of stay, readmission, and/or cost. The recent "pay-for-performance" ACA provision on hospital readmissions that went into effect October 1 is a step in the right direction. Guidelines should not be bundled but weighted according to their importance. Lastly, the validity of the data needs to be independently confirmed, and penalties for systematically reporting fraudulent data should be severe. This approach is much more likely to result in improved, evidence-based healthcare than the present self-serving and inaccurate programs, which provide no benefit to patients.

Richard A. Robbins, MD*

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. The Joint Commission. Annual report 2012. Available at: http://www.jointcommission.org/assets/1/18/TJC_Annual_Report_2012.pdf (accessed 9/22/12).
  2. Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  3. Rosenthal GE, Harper DL, Quinn LM. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. JAMA 1997;278:485-90.
  4. Fine MJ, Smith MA, Carson CA, Meffe F, Sankey SS, Weissfeld LA, Detsky AS, Kapoor WN. Efficacy of pneumococcal vaccination in adults. A meta-analysis of randomized controlled trials. Arch Int Med 1994;154:2666-77.
  5. Dear K, Holden J, Andrews R, Tatham D. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Sys Rev 2003:CD000422.
  6. Huss A, Scott P, Stuck AE, Trotter C, Egger M. Efficacy of pneumococcal vaccination in adults: a meta-analysis. CMAJ 2009;180:48-58.
  7. US Department of Health & Human Services. Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/ (accessed 9/22/12).
  8. Agency for Healthcare Research and Quality. Available at: http://www.ahrq.gov/news/press/pr2012/pspclabsipr.htm (accessed 9/22/12).
  9. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care 2012;4:163-73.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med 2012;157:305-12.

*The views expressed are those of the author and do not necessarily represent the views of the Arizona or New Mexico Thoracic Societies.

Reference as: Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care 2012;5:203-5. PDF

Tuesday
Nov 1, 2011

Why Is It So Difficult to Get Rid of Bad Guidelines? 

Reference as: Robbins RA. Why is it so difficult to get rid of bad guidelines? Southwest J Pulm Crit Care 2011;3:141-3. (Click here for a PDF version of the editorial)

My colleagues and I recently published a manuscript in the Southwest Journal of Pulmonary and Critical Care examining compliance with the Joint Commission on the Accreditation of Healthcare Organizations (Joint Commission, JCAHO) guidelines (1). Compliance with the Joint Commission’s acute myocardial infarction, congestive heart failure, pneumonia and surgical process of care measures had no correlation with traditional outcome measures including mortality rates, morbidity rates, length of stay and readmission rates. In other words, increased compliance with the guidelines was ineffectual at improving patient-centered outcomes. Most would agree that ineffectual outcomes are bad. The data were obtained from the Veterans Healthcare Administration Quality and Safety Report and included 485,774 acute medical/surgical discharges in 2009 (2). These data are similar to the Joint Commission’s own data published in 2005, which showed no correlation between guideline compliance and hospital mortality, and to a number of other publications that have failed to show a correlation between the Joint Commission’s guidelines and patient-centered outcomes (3-8). As we pointed out in 2005, the lack of correlation is not surprising since several of the guidelines are not evidence-based, and improvement in performance has usually come from increased compliance with these non-evidence-based guidelines (1,9).

The above raises the question: if some of the guidelines are not evidence-based and do not seem to have any benefit for patients, why do they persist? We believe that many of the guidelines were formulated with the concept of being easy and cheap to measure and implement, and perhaps more importantly, easy to demonstrate an improvement in compliance. In other words, the guidelines are initiated more to create the perception of an improvement in healthcare than an actual improvement. For example, in the pneumonia guidelines, one of the performance measures that has markedly improved is administration of pneumococcal vaccine. Pneumococcal vaccine is easy and cheap to administer once every 5 years to adult patients, despite the evidence that it is ineffective (10). In contrast, it is probably not cheap and certainly not easy to improve pneumonia mortality rates, morbidity rates, length of stay and readmission rates.

To understand why these ineffectual guidelines persist, one needs to understand who benefits from guideline implementation and compliance. First, organizations that formulate the guidelines, such as the Joint Commission, benefit. Implementing a program that the Joint Commission can claim shows an improvement in healthcare is self-serving, but implementing a program that provides no benefit would be politically devastating. At a time when some hospitals are opting out of Joint Commission certification, and when the Joint Commission is under pressure from competing regulatory organizations, the Joint Commission needs to show that its programs produce positive results.

Second, programs to ensure compliance with the guidelines directly employ an increasingly large number of personnel within a hospital. At the last VA hospital where I worked, 26 full-time personnel were employed in quality assurance. Since compliance with guidelines to a large extent accounts for their employment, the quality assurance nurses would seem to have little incentive to question whether these guidelines really result in improved healthcare. Rather, their job is to ensure guideline compliance from both hospital employees and nonemployees who practice within the hospital.

Lastly, the administrators within a hospital have several incentives to preserve the guideline status quo. Administrators are often paid bonuses for ensuring guideline compliance. In addition to this direct financial incentive, administrators can often lobby for increases in pay since, with the increased number of personnel employed to ensure guideline compliance, they now supervise more employees, an important factor in determining their salary. Furthermore, success in improving compliance allows administrators to advertise both themselves and their hospital as “outstanding”.

In addition, guidelines allow administrative personnel to direct patient care and indirectly control clinical personnel. Many clinical personnel feel uneasy when confronted with "evidence-based" protocols and guidelines that are clearly not evidence-based. Such discomfort is likely to be more intense when the goal is not simply to recommend a particular approach but to judge failure to comply as evidence of substandard or unsafe care. Reporting a physician or a nurse for substandard care to a licensing board or on a performance evaluation may have devastating consequences.

There appears to be a discrepancy between an “outstanding” hospital as determined by the Joint Commission guidelines and by other organizations. Many hospitals recognized as top hospitals by US News & World Report, HealthGrades Top 50 Hospitals, or Thomson Reuters Top Cardiovascular Hospitals were not included in the Joint Commission list. Absent are the Mayo Clinic, the Cleveland Clinic, Johns Hopkins University, Stanford University Medical Center, and Massachusetts General. Academic medical centers, for the most part, were noticeably absent. There were no hospitals listed in New York City, none in Baltimore and only one in Chicago. Small community hospitals were overrepresented and large academic medical centers underrepresented in the report. However, consistent with previous reports, we found that larger, predominantly urban, academic hospitals had better all-cause mortality, surgical mortality and surgical morbidity compared to small, rural hospitals (1).

Despite the above, I support both guidelines and performance measures, but only if they clearly result in improved patient-centered outcomes. Formulating guidelines where the only measure of success is compliance with the guideline should be discouraged. I find it particularly disturbing that one can easily find a hospital’s compliance with a Joint Commission guideline but has difficulty finding the hospital’s standardized mortality rates, morbidity rates, length of stay and readmission rates, measures which are meaningful to most patients. The Joint Commission needs to develop better measures of hospital performance. Until then, its “quality” measures need to be viewed as what they are: meaningless measures that serve not patients but those who benefit from their implementation and compliance.

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Robbins RA, Gerkin R, Singarajah CU. Relationship between the veterans healthcare administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  2. Available at: http://www.va.gov/health/docs/HospitalReportCard2010.pdf (accessed 9-28-11).
  3. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-64.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA 2006;296:2694-702.
  5. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-20.
  6. Fonarow GC, Yancy CW, Heywood JT; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Intern Med 2005;165:1469-77.
  7. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008;149:29-32.
  8. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM. Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA 2010;303:2479-85.
  9. Robbins RA, Klotz SA. Quality of care in U.S. hospitals. N Engl J Med 2005;353:1860-1.
  10. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.

Tuesday
June 28, 2011

The Pain of the Timeout

Reference as: Robbins RA. The pain of the timeout. Southwest J Pulm Crit Care 2011;2:102-5. (Click here for a PDF version)

An article in the Washington Post entitled “The Pain of Wrong Site Surgery” (1) caught my eye earlier this month. In 2004 the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission or JCAHO), prompted by media reports of wrong-site surgery, mandated the “universal protocol” or surgical timeout. These rules require preoperative verification of the correct patient and correct site, marking of the surgical site, and a timeout to confirm everything just before the procedure starts. In announcing the rules, Dr. Dennis O’Leary, then president of the Joint Commission, stated “This is not quite ‘Dick and Jane,’ but it’s pretty close,” and that the rules were “very simple stuff” to prevent events such as wrong-site or wrong-patient surgery, which are so egregious and avoidable that they should be “never events” because they should never happen. During the following years, different components have been added to the timeout, and the timeout has been extended to cover most procedures in the hospital.

However, the article goes on to state that “some researchers and patient safety experts say the problem of wrong-site surgery has not improved and may be getting worse, although spotty reporting makes conclusions difficult.” Ninety-three cases were reported to the Joint Commission in 2009, compared with 49 in 2004. Furthermore, the article states that reporting data from Minnesota and Pennsylvania, two states that require reporting, have not shown a decrease over the past few years.

The reason for the increasing incidence of wrong-site or wrong-patient operations is not totally clear. Dr. Mark Chassin, who replaced O’Leary as president of the Joint Commission in 2008, said he thinks such errors are growing in part because of increased time pressures. Preventing wrong-site surgery also “turns out to be more complicated to eradicate than anybody thought,” he said, because it involves changing the culture of hospitals and getting doctors — who typically prize their autonomy, resist checklists and underestimate their propensity for error — to follow standardized procedures and work in teams. Dr. Peter Pronovost, medical director of the Johns Hopkins Center for Innovation in Quality Patient Care, echoed those sentiments by suggesting that doctors only pay lip service to the rules. Studies of wrong-site errors have consistently revealed a failure by physicians to participate in a timeout. Dr. Ken Kizer, former Undersecretary at the Department of Veterans Affairs and President of the National Quality Forum, advocates reporting doctors to a federal agency so that wrong-site or wrong-patient cases can be investigated and the results publicly reported.

Several points made in the article need to be clarified:

  1. The reason it is unclear whether the present Joint Commission mandates actually prevent wrong-site or wrong-patient surgery is that no data were systematically collected prior to implementation of the timeout to ensure that it works, and no data have been collected since implementation. As with most bureaucracies, the Joint Commission’s emphasis has been on ensuring compliance rather than on studying the effectiveness of an intervention.
  2. Although no one condones wrong-site or wrong-patient surgery, it is fortunately relatively rare. Stahel et al. (2) reported 132 wrong-site and wrong-patient cases over a six-and-a-half-year period among over 5,000 physicians. They found only one death, which was attributed to a wrong-sided chest tube placement for respiratory failure (2). This attribution is questionable because a wrong-sided chest tube does not necessarily result in a patient’s death (3). Another 43 patients had significant harm from their wrong-site or wrong-patient procedure and are listed below (Table 1).
  3. Based on the above, most of these wrong-site or wrong-patient operations would appear to occur in the operating room. The surgeon often enters the operating room after the patient is under general anesthesia, prepped, and draped. Unless the surgeon saw the patient in the operating room prior to anesthesia and marked the operative site, it would not be possible for the surgeon to know that the correct site and patient are present. The article does not state how many of the reported operations included a timeout or a surgeon-marked operative site, but it implies that few did. The first author of the manuscript, Philip Stahel, an orthopedic surgeon from the University of Colorado, explained the results by stating that “many doctors resent the rules, even though orthopedists have a 25 percent chance of making a wrong-site error during their career….” Dr. John R. Clarke, a professor of surgery at Drexel University College of Medicine and clinical director of the Pennsylvania Patient Safety Authority, agreed, stating, “There’s a big difference between hospitals that take care of patients and those that take care of doctors…The staff needs to believe the hospital will back them against even the biggest surgeon.”
  4. Dr. Peter Pronovost extends this sentiment, stating, “Health care has far too little accountability for results. . . . All the pressures are on the side of production; that’s how you get paid.” He adds that increased pressure to turn over operating rooms quickly has trumped patient safety, increasing the chance of error.

I would offer some suggestions:

  1. Focus should be on the operating room, since this is where most wrong-site or wrong-patient procedures occur. I am frustrated by the unnecessary timeouts that occur during bronchoscopy. When the patient is known to me, enters the bronchoscopy suite awake and alert, and the biopsies are done under direct vision or under fluoroscopic or CT guidance, there is no real chance of wrong-site or wrong-patient surgery. Such procedures do not need a timeout. The Joint Commission needs to recognize this and stop its “one size fits all” approach.
  2. What is needed is data. Right now it is unclear whether a timeout makes any difference. A scientifically valid study of the timeout procedure is needed, not observational studies designed only to generate political statistics showing that a timeout works. The Joint Commission and other regulatory healthcare organizations need to break the habit of mandating interventions based on little or no evidence.
  3. The Joint Commission mandates have apparently had little impact on reducing wrong-site or wrong-patient operations, so further mandates would seem to offer little hope. If, as Dr. Chassin believes, time pressure is the issue, adding more items to a checklist is unlikely to improve the problem and will probably make it worse.
  4. If time is the culprit in the operating room, then simplifying the process as much as possible might be useful. I have been told of one operating room in Phoenix where the timeout is so extensive that it can take up to 30 minutes. Marking the site by the surgeon should be mandatory, and a simplified, standardized checklist read and confirmed by the nurse, anesthesiologist, and/or surgeon would both streamline the timeout and enhance data collection.
  5. I agree with both Pronovost and Kizer that accountability is needed. However, Kizer’s idea of a federal repository may be ineffectual at improving outcomes. Witness the National Practitioner Data Bank, which has done nothing to improve health care and blames only physicians for lapses in healthcare. It would seem that many of the physicians quoted above do the same, i.e., blame only the doctors. Dr. Chassin suggests a team approach to medicine, i.e., an operating room team. I agree, but it seems inconsistent to call for a team approach while holding only the physicians accountable. Instead, I would suggest a mandatory reporting system with a free, transparent, and searchable database available to everyone. This database should identify not only the surgeon(s) but everyone else in the operating room. Hospitals also need to be identified so that they cannot deflect their accountability by blaming surgeons while emphasizing operating room turnaround over patient safety. This means not only the hospital but also the CEO or administrator needs to accept some responsibility. The CEO or administrator controls the finances and often touts his or her “accountability”; it is time to put some teeth to that claim. Such a transparent database will allow patients to check not only on surgeons but also on hospitals, nurses, and anesthesiologists. Furthermore, it will allow healthcare providers to check on each other as well as on substandard hospitals and their administrators.

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Boodman SG. The pain of wrong site surgery. Washington Post. Published June 20, 2011. Available at: http://www.washingtonpost.com/national/the-pain-of-wrong-site-surgery/2011/06/07/AGK3uLdH_story.html (accessed 6-21-11).
  2. Stahel PF, Sabel AL, Victoroff MS, Varnell J, Lembitz A, Boyle DJ, Clarke TJ, Smith WR, Mehler PS. Wrong-site and wrong-patient procedures in the universal protocol era: analysis of a prospective database of physician self-reported occurrences. Arch Surg 2010;145:978-84.
  3. Singarajah C, Park K. A case of mislabeled identity. Southwest J Pulm Crit Care 2010;1:22-27.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.