
General Medicine

(Most recent listed first)

Tacrolimus-Associated Diabetic Ketoacidosis: A Case Report and Literature
Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings
Publish or Perish: Tools for Survival
Is Quality of Healthcare Improving in the US?
Survey Shows Support for the Hospital Executive Compensation Act
The Disruptive Administrator: Tread with Care
A Qualitative Systematic Review of the Professionalization of the Vice Chair for Education
Nurse Practitioners' Substitution for Physicians
National Health Expenditures: The Past, Present, Future and Solutions
Credibility and (Dis)Use of Feedback to Inform Teaching: A Qualitative Case Study of Physician-Faculty Perspectives
Special Article: Physician Burnout-The Experience of Three Physicians
Brief Review: Dangers of the Electronic Medical Record
Finding a Mentor: The Complete Examination of an Online Academic Matchmaking Tool for Physician-Faculty
Make Your Own Mistakes
Professionalism: Capacity, Empathy, Humility and Overall Attitude
Professionalism: Secondary Goals
Professionalism: Definition and Qualities
Professionalism: Introduction
The Unfulfilled Promise of the Quality Movement
A Comparison Between Hospital Rankings and Outcomes Data
Profiles in Medical Courage: John Snow and the Courage of
Comparisons between Medicare Mortality, Readmission and
In Vitro Versus In Vivo Culture Sensitivities: An Unchecked Assumption?
Profiles in Medical Courage: Thomas Kummet and the Courage to Fight Bureaucracy
Profiles in Medical Courage: The Courage to Serve and Jamie Garcia
Profiles in Medical Courage: Women’s Rights and Sima Samar
Profiles in Medical Courage: Causation and Austin Bradford Hill
Profiles in Medical Courage: Evidence-Based Medicine and Archie Cochrane
Profiles of Medical Courage: The Courage to Experiment and Barry Marshall
Profiles in Medical Courage: Joseph Goldberger, the Sharecropper’s Plague, Science and Prejudice
Profiles in Medical Courage: Peter Wilmshurst, the Physician Fugitive
Correlation between Patient Outcomes and Clinical Costs in the VA Healthcare System
Profiles in Medical Courage: Of Mice, Maggots and Steve Klotz
Profiles in Medical Courage: Michael Wilkins and the Willowbrook School
Relationship Between The Veterans Healthcare Administration Hospital Performance Measures And Outcomes


Although the Southwest Journal of Pulmonary and Critical Care was started as a pulmonary/critical care/sleep journal, we have received and continue to receive submissions that are of general medical interest. For this reason, a new section entitled General Medicine was created on 3/14/12. Some articles were moved from pulmonary to this new section since it was felt they fit better into this category.



The Disruptive Administrator: Tread with Care

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ



Although the extent of disruptive behavior in healthcare is unclear, the courts are beginning to recognize that administrators can wrongfully restrain a physician's ability to practice. Disruptive conduct is often difficult to prove. However, when administration takes action against an individual physician, the physician is largely powerless, with governing boards and courts usually siding with the administrators. As long as physicians remain vulnerable to retaliation and administrators remain exempt from consequences for inappropriate actions, physicians should carefully consider the consequences before displaying any opposition to an administrative action.


Over the past three decades, hundreds of articles have been published on "disruptive" physicians. Publications have appeared in prestigious medical journals and have been issued by medical organizations such as the American Medical Association and by regulatory organizations such as the Joint Commission and some state licensing agencies. Although attempts have been made to define disruptive behavior, the definition remains subjective and can be applied to any behavior an administrator views as objectionable. The medical literature on disruptive physician behavior is descriptive, nonexperimental and not evidence based (1). Furthermore, despite claims to the contrary, there is little evidence that "disruptive" behavior harms patient care (1).

Certainly, there are physicians who are disruptive. Most disruptions arise from conflict between physicians and the healthcare providers with whom they most closely interact, usually nurses. Not surprisingly, many of the authors of these descriptive articles have been nurses, although some have been administrators, lawyers or even other physicians. These articles often give the impression that administrators are merely trying to do their job and that physicians who disagree should be punished. Although most administrators are trying their best to have a positive impact on healthcare delivery, in some instances this is not the case.

Like disruptive physician behavior, the extent and incidence of disruptive administrative behavior is unknown. A PubMed search and even a Google search on disruptive administrative behavior found no relevant articles. However, one type of disruptive behavior is bullying, and a recent survey of obstetrics and gynecology consultants in the United Kingdom suggests the problem may be common. Nearly half of the consultants who responded said they had been persistently bullied or undermined on the job (2). Victims report that those at or near the top of the hierarchy, such as lead clinicians, medical directors, and board-level executives, do most of the bullying and undermining. Pamela Wible MD, an authority on physician suicide prevention, said these results are not unique to the United Kingdom and that the patterns are similar in the United States (3).

A major difference between physician and administrative disruptive behavior is that physician disruptive behavior is usually attributed to a specific individual, whereas most of the examples detailed below are largely system retaliation against physicians who complained. Administrators typically work through committees, thereby diffusing their individual responsibility for a specific action. Wible said the usual long list of perpetrators against physicians often indicates a toxic work environment (3). "I talk to doctors every day who are ready to quit medicine because of this toxic work environment that has to do with this bullying behavior. What I hear most is it's coming from the clinic manager or the administrative team who calls the doctor into the office and beats them up ..." she added.

History of the Recognition of Physician Disruptive Behavior

Isolated articles on disruptive physician behavior first appeared in the medical literature in the 1970s, with scattered reports appearing through the 1980s and 1990s (4). Prompted by these isolated reports and the perception that this might be a growing problem, the Federation of State Medical Boards appointed a Special Committee on Professional Conduct and Ethics to investigate physician disruptive behavior. The committee released its report in April 2000 and listed 17 behavioral sentinel events (Table 1) (5).

Table 1. Behavioral sentinel events (5).

As announced in 2008 in an article in The Joint Commission Journal on Quality and Patient Safety and in a Joint Commission Sentinel Event Alert, a new Joint Commission leadership standard requires hospitals to have a policy on disruptive behavior in place and to provide resources for its support as a condition of accreditation (6,7). Although not stated, these standards clearly refer to hospital employees and not hospital administration, giving the impression that any disagreement between a physician or other employee and administration is the result of disruptive behavior on the part of the physician or employee. They imply that all adverse actions against physicians for disruptive physician behavior are warranted. However, physicians may be trying to protect their patients from poor administrative decisions while administrators view physician opposition as insubordination. The viewpoint lies in the eye of the observer.

Disruptive Administrative Behavior Involving Whistleblowing

Klein v University Of Medicine and Dentistry of New Jersey

Sanford Klein was chief of anesthesiology at Robert Wood Johnson University Hospital in New Brunswick, NJ, for 16 years (8). He grew increasingly concerned about patient safety in the radiology department and complained repeatedly to the hospital's chief of staff, citing insufficient staff, space, and resuscitation equipment. After Klein grew more vocal, he was required to work under supervision. He refused to accept that restriction and sued. The trial judge granted summary judgment for the defendants, and an appellate court upheld that ruling. Klein is still a tenured professor at the university, but he no longer has privileges at the hospital. "This battle has cost me hundreds of thousands of dollars so far, and it's destroyed my career as a practicing physician," he says. "But if I had to do it over again, I would, because this is an ethical issue."

Lemonick v Allegheny Hospital System

David Lemonick was an emergency room physician at Pittsburgh's Western Pennsylvania Hospital who repeatedly complained to his department chairman about various patient safety problems (8). His department chairman accused him of "disruptive behavior". Lemonick wrote to the hospital's CEO to express his concerns about patient care; the CEO thanked him, promised an investigation, and assured him there would be no retaliation. Nevertheless, Lemonick was terminated. He sued the hospital for violating Pennsylvania's whistleblower protection law and another state law that specifically protects healthcare workers from retaliation for reporting a "serious event or incident" involving patient safety. Lemonick and Allegheny reached an out-of-court settlement, and he is now director of emergency medicine at a small hospital about 50 miles from Pittsburgh. He was named Pennsylvania's emergency room physician of the year in 2007.

Ulrich v Laguna Honda Hospital

John Ulrich protested at a staff meeting when he learned that Laguna Honda Hospital was planning to lay off medical personnel, including physicians (9). He claimed the layoffs would endanger patient care. Ulrich resigned, and the hospital administration reported his resignation to the state board and the National Practitioner Data Bank (NPDB), noting that, unknown to Ulrich, it had followed "commencement of a formal investigation into his practice and professional conduct". Although the state board found no grounds for action, the hospital refused to void the NPDB report. Ulrich sued the hospital and its administrators. In 2004, after a long legal battle, Ulrich won a $4.3 million verdict and later settled for about $1.5 million, with the hospital agreeing to retract its report to the NPDB. Still, he spent nearly seven years without a full-time job, doing part-time work as a coder and medical researcher, with a sharply reduced income.

Schulze v Humana

Dr. John Paul Schulze, a longtime family practice doctor in Corpus Christi, Texas, criticized Humana Health Care in 1996 for its decision to have its own doctors care for all patients once they were admitted to Humana hospitals (9). Humana officials alleged that he “was unfit to practice medicine, and represented an ongoing threat of harm to his patients" and reported Schulze to the National Practitioner Data Bank and the Texas State Board of Medical Examiners. Schulze sued, and after several years of legal battles an out-of-court settlement was reached.

Flynn v. Anadarko Municipal Hospital

Dr. John Flynn reported to Anadarko Municipal Hospital administrators that a colleague abandoned a patient (9). After no action was taken, he resigned from the medical staff before reporting the alleged violations to state and federal authorities. Flynn attempted to rejoin the staff after an investigation had found violations, but the medical staff denied him privileges. The public works authority governing the hospital held a lengthy hearing on the case and restored Flynn's privileges.

Kirby v University Hospitals of Cleveland

University Hospitals of Cleveland (UH) which is affiliated with Case Western Reserve University recruited Dr. Thomas Kirby to head up its cardiothoracic surgery and lung transplant divisions in 1998 (9). Not long after he joined UH, Kirby started pressing hospital executives about program changes, particularly for open heart procedures. Kirby said he was alarmed by mounting deaths and complications among intensive care patients after heart surgeries, and took his concerns to hospital administrators and board members.

When he returned from a vacation, Kirby learned he'd been demoted and the two colleagues he'd recruited to the program had been fired. During the subsequent months, acrimony within the department boiled over and eventually led to Kirby filing a slander suit against a fellow surgeon, who Kirby claimed made disparaging remarks to other staff members about his clinical competence. The hospital's reaction was to suspend Kirby. The suspension letter from the hospital chief of staff accused Kirby of being "abusive, arrogant and aggressive" with other hospital staff, including use of profanity and "foul and/or sexual language." Accusers were not named, dates were not supplied and Kirby was not offered the chance to continue practicing surgery. Subsequently, the Accreditation Council for Graduate Medical Education revoked UH's cardiothoracic surgery residency, saying the program no longer met council standards.

However, Kirby sued over another issue which may have been at the heart of the acrimony. Kirby had alleged that UH had entered into improper financial arrangements with doctors to induce them to refer patients and then billed Medicare for the services provided. The U.S. attorney for the Northern District of Ohio intervened in the suit. University Hospital eventually agreed to pay $13.9 million to settle the federal false claims lawsuit arising from alleged anti-kickback violations although they denied any wrongdoing. Kirby was awarded a settlement of $1.5 million.

Fahlen v. Memorial Medical Center

Between 2004 and 2008, Dr. Mark Fahlen reported to hospital administration that nurses at Memorial Medical Center in Modesto, California were failing to follow his directions, thus endangering patients’ lives (10). However, the nurses complained about Fahlen’s behavior, and he was fired. A peer committee consisting of six physicians reviewed the decision and found no professional incompetence, but Memorial’s board refused to grant him staff privileges. Subsequently, Fahlen sued. After four years of legal wrangling, an out-of-court agreement reinstated Fahlen's hospital privileges.

Disruptive Administrative Behavior By an Individual Administrator

Vosough v. Kierce

In Paterson, New Jersey, Khashayar Vosough MD and his partners sued St. Joseph's Regional Medical Center's obstetrics and gynecology department chairman, Roger Kierce MD, for profane language and abusive and demeaning behavior (11). Kierce once told a group of doctors he would "separate their skulls from their bodies" if they disobeyed him. In 2012 a Bergen County jury returned a verdict in less than an hour, awarding Vosough and his colleagues $1,270,000. However, the decision was appealed and overturned in 2014 by the Superior Court of New Jersey, Appellate Division (12).

Medical Staff Collectively Suing a Hospital Administration

Medical Staff of Avera Marshall Regional Medical Center v. Avera Marshall

In rare instances a collection of physicians comes into legal conflict with a hospital. In Minnesota the medical staff of Avera Marshall Medical Center was charged with physician credentialing, peer review, and quality assurance (13). A two-thirds majority vote was required to change the bylaws but the hospital administration unilaterally changed the bylaws in early 2012. The medical staff sued the hospital.

However, the real source of the dispute might have been patient referrals and income. Conflict arose when doctors not employed by the hospital alleged that the hospital was steering emergency room patients toward its own employed doctors. The case was eventually decided by the Minnesota Supreme Court, which ruled in favor of the medical staff (13).


These cases illustrate that physicians can occasionally win lawsuits against hospital administration for disruptive behavior. However, victory is often hollow with careers destroyed and years without a professional income as the wheels of justice slowly turn. As one article said, "Is whistleblowing worth it?" (8).

Dr. Fahlen was fortunate that the peer review found no professional incompetence. In many instances the reviews are conducted by physician administrators with the verdict predetermined. For example, in the Thomas Kummet case presented in the Southwest Journal of Pulmonary and Critical Care, an independent review concluded there was no malpractice (14). However, the Veterans Administration had the case reviewed by a VA-appointed committee, which sided with the VA administration. Kummet's name was subsequently submitted to the National Practitioner Data Bank, and he sued the VA. After the case was dismissed by a Federal court, Kummet left the VA system.

Physicians are particularly vulnerable to retaliation by unfounded accusations. Several examples were given above. In many of these cases, complaints were followed by what appeared to be a sham peer review, a name given to the abuse of the peer review process to attack a doctor for personal or other non-medical reasons (15,16). The American Medical Association investigated medical peer review in 2007 and concluded that it is easy to allege misconduct; 15% of physicians surveyed by the Massachusetts Medical Society indicated that they were aware of peer review misuse or abuse (17). However, cases of malicious peer review proven through the legal system are rare.

Huntoon (18) listed a number of characteristics of sham peer review (Table 2).

Table 2. Characteristics of sham peer review (18).

I first witnessed peer review being used as a weapon as a junior faculty member in the mid-1980s. The then chief of thoracic surgery, a pediatric thoracic surgeon, underwent peer review. It appeared that the underlying reason was that most of his operations were performed at an affiliated children's hospital rather than the university medical center that conducted the review. How often income, as opposed to medical quality, is the real motivation for an administrative action against a physician is unknown, although some of the above cases suggest it is not uncommon. Given the amount of money potentially involved and the lack of consequences for hospital administration, it is naive to believe that false accusations will not continue to occur.

Most disturbing are physicians who falsely accuse other physicians. Although this behavior would clearly be covered by behavioral sentinel events such as those listed in Table 1, hospital boards may decline to act. For example, one physician accused a hospital director, a non-practicing physician, of being disruptive. The hospital board failed to act, stating that in its interpretation the term disruptive physician applied only to practicing physicians.

The federal Whistleblower Protection Act (WPA) protects most federal employees who work in the executive branch. It also requires that federal agencies take appropriate action. Most individual states have also enacted their own whistleblower laws, which protect state, public and/or private employees. Unlike their federal counterpart, however, these state laws generally do not provide payment or compensation to whistleblowers; instead the states concentrate on preventing retaliatory action toward the whistleblower. Unlike California's law specifically protecting physicians, most state laws are not specific to physicians.

Although beyond the scope of this review, it seems likely that disruptive administrative actions also occur against other healthcare workers, including nurses, technicians and other staff, although the prevalence and appropriateness of these actions are unclear. As leaders of the healthcare team who are often not employed by the hospital, physicians are unique, as evidenced by the National Practitioner Data Bank; no similar nursing, technician or administrator data bank exists.

Although the few cases cited above suggest that legal action against abusive administrators can be successful, such cases are rare. The consequences of being labeled disruptive can be dire for physicians, who lack due process in hospitals and often in the courts. Until administration can be held accountable for behavior that is considered disruptive, the sensible physician might avoid conflicts with hospital administration.


  1. Hutchinson M, Jackson D. Hostile clinician behaviours in the nursing work environment and implications for patient care: a mixed-methods systematic review. BMC Nurs. 2013 Oct 4;12(1):25. [CrossRef] [PubMed]
  2. Shabazz T, Parry-Smith W, Oates S, Henderson S, Mountfield J. Consultants as victims of bullying and undermining: a survey of Royal College of Obstetricians and Gynaecologists consultant experiences. BMJ Open. 2016 Jun 20;6(6):e011462. [CrossRef] [PubMed]
  3. Frellick M. Senior physicians report bullying from above and below. Medscape. June 29, 2016. Available at:
  4. Hollowell EE. The disruptive physician: handle with care. Trustee. 1978 Jun;31(6):11-3, 15, 17. [PubMed]
  5. Russ C, Berger AM, Joas T, Margolis PM, O'Connell LW, Pittard JC, Porter GA, Selinger RCL, Tornelli-Mitchell J, Winchell CE, Wolff TL. Report of the Special Committee on Professional Conduct and Ethics. Federation of State Medical Boards of the United States. April, 2000. Available at: (accessed 5/3/16).
  6. Rosenstein AH, O’Daniel M. A survey of the impact of disruptive behaviors and communication defects on patient safety. Jt Comm J Qual Patient Saf. 2008;34:464–471. [PubMed]
  7. The Joint Commission. Behaviors That Undermine a Culture of Safety Sentinel Event Alert #40 July 9, 2008: 1-5. Available from: (accessed 5/3/16).
  8. Rice B. Is whistleblowing worth it? Medical Economics. January 20, 2006. Available at: (accessed 5/5/16).
  9. Twedt S. The Cost of Courage: how the tables turn on doctors. Pittsburgh Post-Gazette. October 26, 2003. Available at: (accessed 5/5/16).
  10. Danaher M. Physician not required to exhaust hospital’s administrative review process before suing hospital under state’s whistleblower statute. Employment Law Matters. February 20, 2014. Available at: (accessed 5/3/16).
  11. Washburn L. Doctors win suit against hospital over abuse by boss. The Record. January 11, 2012. Available at: (accessed 5/4/16).
  12. Ashrafi JAD. Vosough v. Kierce. Find Law for Legal Professionals. 2014. Available at: (accessed 5/4/16).
  13. Moore, JD Jr. When docs sue their own hospital-at issue: who has authority to hire, fire, and discipline staff physicians. Medpage Today. January 19, 2015. Available at: (accessed 5/3/16).
  14. Robbins RA. Profiles in medical courage: Thomas Kummet and the courage to fight bureaucracy. Southwest J Pulm Crit Care. 2013;6(1):29-35. Available at: (accessed 8/5/16)
  15. Chalifoux R Jr. So what is a sham peer review? MedGenMed. 2005 Nov 15;7(4):47. [PubMed]
  16. Langston EL. Inappropriate peer review. Report of the board of trustees. 2016. Available at: (accessed 5/15/16).
  17. Chu J. Doctors who hurt doctors. Time. August 07, 2005. Available at:,9171,1090918,00.html (accessed 5/15/16, requires subscription).
  18. Huntoon LR. Tactics characteristic of sham peer review. Journal of American Physicians and Surgeons 2009;14(3):64-6.

Cite as: Robbins RA. The disruptive administrator: tread with care. Southwest J Pulm Crit Care. 2016;13(2):71-9.


A Qualitative Systematic Review of the Professionalization of the Vice Chair for Education

Guadalupe F. Martinez, PhD 

Kenneth S. Knox, MD


Department of Medicine

University of Arizona

Tucson, Arizona, USA




Pulmonary/critical care physician-faculty are often in academic leadership positions, such as department chair. As chairs are responsible for the success of their education programs, and given the increased complexity involved in evaluating learners and faculty, chairs are turning to colleagues with expertise in education for assistance. As such, vice chairs for education (VCE) are being introduced into the mix of academic executives to respond to the demands for accountability, training requirements, and professional development in a rapidly changing medical education climate. This review synthesizes the published literature on the VCE position.


An advanced electronic database and academic journal search was performed specific to the medical, medical education, and education disciplines. “Vice Chair for Education, Educational Leadership, (specialty) Residency Program Director” terms were used in these search processes. We conducted a qualitative systematic review of VCE literature in the English language published from January 1, 2005 to April 1, 2016.


Of the 6 studies screened, 4 were excluded and 2 full-text articles were eligible and retained for review. Both studies were cross-sectional, were published between March and August of 2012, and had response rates above 70%. Each employed quantitative and qualitative methods. The studies report important demographics and job duties of the vice chair.


The vice chair for education in academic medical departments has emerged as an important position and is undergoing professionalization.

Abbreviation List

AAIM-Alliance for Academic Internal Medicine

PRISMA-Preferred Reporting Items for Systematic Reviews and Meta-Analyses

VCE-Vice Chair for Education


Schuster and Pangaro (1) introduced the pyramid of educators concept in their 2010 book chapter, Understanding Systems of Education. They designate the top of the pyramid as the institutional leaders or “academic executives” of the medical education system. These leaders include positions such as department chairs, deans, and CEOs. Pulmonary/critical care physician-faculty are often in leadership positions such as these. Locally, at our southwest institution and affiliate training hospital, the senior vice president for health sciences, chief medical officer, internal medicine department chair, vice chair for education, vice chair for quality and safety, internal medicine residency director, and one of the three associate residency directors are all pulmonary/critical care physician-faculty. Nationally, according to the Alliance for Academic Internal Medicine (P. Ballou, AAIM email communication, May 2016), 12% (20/172) of the Internal Medicine department chairs belonging to the association to date are pulmonary/critical care/allergy physician-faculty. As chairs are responsible for the educational success of their programs, and given the complexity involved in evaluating learners and faculty, department chairs are turning to colleagues with interest and expertise in education for assistance. Vice chairs for education (VCE) are now being introduced into the mix of academic executives. Although the VCE role may vary by institution, VCEs are likely to respond to the demands for accountability, training requirements, and professional development in a rapidly changing medical education climate.

According to sociologists DiMaggio and Powell (2), one way to respond to external pressures is to create and legitimize new positions intended to better manage changes and demands. They go on to define this process as a professionalization of a position. Despite the emergence of the prominent and potentially pivotal position of the VCE, the formal recognition of this position and clarity of its purview over the educational mission remains obscure. In addition to synthesizing the published literature around the VCE position, we sought to determine two points that could best inform the medical education community about this position and future directions for educational leadership. First, is the role of department VCE defined in the academic literature? Second, what evidence exists that the position has professionalized in academic medicine?


In adherence with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (3) guidelines, we conducted a qualitative systematic review of VCE literature in the English language published from January 1, 2005 to April 1, 2016. The authors adapted the Cochrane Collaboration methodology and developed and followed a specific search protocol a priori (4-5). The protocol is summarized below and detailed in Table 1. Institutional Review Board approval is not necessary for literature reviews.

Table 1. Search protocol in adherence to the Cochrane Collaboration

1. Text of the review

a. Background: As department chairs are responsible for the educational success of their learners and faculty in academic medical centers, changes in how they delegate and manage the educational mission are evident. VCEs are now being introduced into the mix of academic executives to respond to the demands for accountability, expertise and leadership from a changing medical education climate. Despite this important role, the formal recognition of this position and clarity of its purview over the educational mission have remained obscure.

b. Objectives of the study are to:

i) review how well-defined the role of the department vice chair for education is in medical education institutions, and

ii) understand to what extent the position is professionalizing and becoming institutionalized.

iii) gain insight into the above via synthesis and appraisal of relevant literature.

2. Criteria for selected studies for review

Exclusion criteria:

Non-English works

Lone job descriptions

Unpublished or under-review research reports

Inclusion criteria:

English language works

Peer-reviewed published or in-press qualitative, quantitative, or mixed-methods original research reports, or articles with a research component

Book chapters dedicated solely to the role

Written between January 1, 2005* and April 1, 2016

*Average of the Brownfield (9y) and Sanfey (5y) mean years since the establishment of the position as reported in the 2012 publications (12y; 8y)

3. Search strategy

a. Email outreach to national VCE- Internal Medicine and Emergency Medicine interest group and listserv for the purposes of:

i) triangulation

ii) accessing submitted, in press, and unpublished work

iii) accessing grey literature such as white papers and institutional reports.

b. Electronic search consisting of the following relevant journals:

Academic Medicine

American Journal of Medicine

Medical Education

Journal of American Medical Association

American Educational Research Association

Journal of Surgical Education

Medical Teacher

The American Journal of Surgery

c. Ancestry search of inclusive study references for snowball e-searches.

d. Relevant database search of the following:

Cochrane Database of Systematic Reviews

ResearchGate

Science Direct

e. Search engines:

Google Scholar

f. Conference proceedings for specialty educational associations (Internal Medicine, Anesthesia, Surgery, Emergency Medicine, Pediatrics, Dermatology, Family Medicine, Psychiatry)

g. Word search:

Vice Chair for Education, Educational Leadership, (Specialty) Program Director

Search protocol

The first author completed an advanced electronic database and academic journal search that included terms specific to the medical, medical education, and education disciplines. The terms “Vice Chair for Education,” “Educational Leadership,” and “(specialty) residency program director” were used in these searches as well as in the search engine examination. The first author also conducted an ancestry search of the references listed in the screened literature. To combat publication and database bias, and to learn of any existing grey literature, conference proceedings, and unpublished or recently submitted works, the authors reached out to a national interest group made up primarily of VCEs in Internal Medicine via a national VCE email distribution list. Hand searches were not conducted because the ancestry search found the earliest relevant, indexed piece to be from 2012, and most journals have moved their historical volumes from 2005 onward to an online interface.

Inclusion and exclusion criteria

The authors set the inclusion criteria to be qualitative, quantitative, or mixed-methods original research reports, or articles with a research component. Reports were to be full-text, peer-reviewed works published or “in press.” Additionally, book chapters dedicated solely to the VCE role were considered.

Excluded were commentaries, perspectives, newsletters, pure job-description documents, and unpublished research reports or articles, including those in “under review” status.

Data appraisal and extraction

Using framework analysis (6), citations and full-text articles were charted, indexed, identified for themes, and, finally, mapped and interpreted to collect and examine text for review. Appraisal of methodological soundness, reporting, and contribution to knowledge was conducted once full-text articles were identified for review. Validated quality assessment tools for quantitative and qualitative works were implemented and are discussed later in this review.

During the ancestry search, citations were imported into EndNote. Full study documents were imported into QSR NVivo 10 software for analysis. Data categories and coding were developed via consensus building between the authors as part of the analytical framework (Figure 1).

Figure 1. Thematic coding and concept mapping. This concept map, used as a method of structuring qualitative data, illustrates themes that emerged from the data. Concepts are linked to demonstrate the relationships between them. Similarities, differences, strengths, and weaknesses were identified and threaded throughout each domain, or branch, of the map that focuses on a particular aspect.

The first author began the initial coding process and queries followed by member checking by the senior author to improve categorization credibility. No initial categorization discrepancies between the authors occurred.


Search results

Of the 6 screened studies, 4 were excluded and only 2 full-text articles were eligible for review. See Figure 2 for a detailed PRISMA flow diagram.

Figure 2. PRISMA 2009 Flow Diagram. The diagram depicts the flow of information throughout the systematic review. Mapped are the number of records identified, included and excluded.

Study characteristics

Both studies were cross-sectional, were published between March and August of 2012, and had response rates above 70%. Each employed quantitative and qualitative methods, but each favored one method (Table 2).

Table 2. Relevant but excluded literature and justification for final exclusion.

Sanfey et al. (14)

Month/year published: March (web content) and July 2012

Literary type/topic: Web-based and article-based review / VCE scope of duties and qualifications

Focus/justification for final exclusion: Brief website review of the authors’ previous work that delineates VCE qualifications for MD and PhD educators, career development opportunities, and a job description with specific workloads for each mission. The authors offer sections on career advice specific to time management, acquiring a national reputation, funding for educational research activities, and resource sites to find a VCE position. Excluded because this is a review and the career offerings are opinion-based. Via a surgical organization task force, the online material underwent a slight title modification. This online review of the original research was subsequently published in print in the American Journal of Surgery.

Pangaro (15)

Month/year published: August 2012

Literary type/topic: Commentary / VCE and direction for future educational leadership

Focus/justification for final exclusion: Highlights gaps in the nation’s overall approach to medical education. Offers a paradigm shift calling for medical education to use evidence-based data and educational theory to inform future directions and departmental leadership. Innovation and creativity are stressed. In this spirit, there is a call for a specific (collaborative) leadership style on the part of the Chair that could likely empower the VCE role. Insightful and relevant for future directions, but excluded because the commentary is opinion-based.

Wolfsthal et al. (16)

Month/year published: 2013

Literary type/topic: Book chapter / Internal Medicine residency program director job description

Focus/justification for final exclusion: Seven-page chapter in the Internal Medicine association’s textbook for medicine education programs. The chapter outlines the job description of Internal Medicine program directors. One paragraph with 5 bullet points articulates that the VCE role may be combined with that of the Internal Medicine residency program director. The chapter serves as additional evidence of the dual leadership roles that appear as a trend among VCEs in internal medicine departments. However, it was excluded because the chapter is not dedicated solely to the VCE position and is integrated, in depth, with the program director position.


Sanfey et al. (7) is primarily a quantitative work that provides basic descriptive statistics with means. A job description with specific categories is the qualitative element presented. Participants were 20 MD surgeons and 4 PhD educators serving as VCEs in departments of surgery. One data collection instrument was used: an online survey with Likert scales and open-ended questions with comment sections to gather short narrative responses.

Though Brownfield et al. (8) employed both quantitative and qualitative methods, the study was dominated by an inductive qualitative approach. Participants included 59 MDs serving as VCEs in departments of internal medicine. The primary source of data was VCE responses to an online survey comprised of open-ended questions to collect narratives.

Appraisal of studies

Each report was appraised by the authors. We applied Spencer et al.’s (9) appraisal of qualitative work, the National Collaborating Centre for Methods and Tools appraisal tool (10), and Jack et al.’s (11) quality appraisal tool for basic descriptive statistics. After scoring and deliberation, studies were categorized as low, moderate, good, or high quality. This process helped us make an informed decision regarding the quality of the research reports. The qualitative assessment tool was applied to Brownfield et al. (8); scores between the authors ranged from 35 to 44 (maximum score of 72), with a mean score of 39.5 (8). Sanfey et al.’s (7) qualitative scores ranged from 30 to 41, with a mean score of 35.5; quantitative scores ranged from 14 to 15 (maximum score of 18), with a mean score of 14.5 (7). In all, both reports were of good quality and showed methodological rigor, sound reporting, and knowledge contribution. Both studies note important limitations, including relatively small sample sizes, the potential for non-responder bias, and restriction to just two fields: surgery and medicine.

Synthesis of study findings

Although both research reports were related to the VCE role, there was substantial heterogeneity in their study aims, which allowed for a broad conceptualization of the role. One study largely sought to create a career development path for VCEs on a national level, while the other sought to establish, in detail, the roles and responsibilities of VCEs.

Similarities. Both studies had VCEs as the primary data source, with the Brownfield et al. (8) work implementing follow-up member checking with a group of VCEs at a national conference. Both also refer to elevated expectations from institutions and accreditation agencies for evidence-driven education and administrative practices as an external force that has led department chairs to create the VCE role. However, these studies noted that the clerkship and residency director roles have job descriptions and recommended protected time established by national accreditation bodies; notably absent is a formal job description for the VCE role. As such, informed by their data, these studies set precedent by establishing a job description through lists of expected duties and activities. These duties not only centered on program and director oversight, but reflected a value system that appreciated autonomy, educational expertise, promotion of educational scholarship, and investment in the further development of leadership skills.

In terms of demographics, both studies found that VCEs were more likely to be male, senior MD professors with additional training in education. Formal establishment and recognition of the position is difficult to deduce from the studies. Each study identified the position as “relatively new,” and both cite this as a reason why participants reported uncertainty about their responsibilities and the lack of a formal job description. VCEs in both studies had served in the position for a widely variable number of years, ranging from 6 months to 25 years. Distribution of protected time for the role was addressed; however, only Sanfey et al. (7) provide a snapshot of participants’ workload distributions, with percentages ascribed to each of the institutional missions. In terms of preparation for the role, Sanfey et al. went into greater detail about expectations. The investigators note a national increase in educational graduate programs in academic medicine and suggest chairs seek VCEs with backgrounds in graduate medical education in order to meet the demands and expectations of the position.

Differences. Sanfey et al. (7) reviewed academic preparation for the VCE position, terms of employment, and expected scholarly productivity, and took inventory of participants’ job satisfaction as well as the specific leadership skills they desired to acquire and improve upon. This study also compared MD and PhD educators’ time allocations and demographics. Closing their report, Sanfey et al. (7) discussed recruitment strategies for hiring VCEs and stressed the importance of education portfolios and educational research productivity among potential candidates. Furthermore, they recommended that those in hiring positions strongly consider PhD educators for the role, given that PhDs’ scholarly productivity outpaced that of their surgeon peers, who often have time-consuming clinical demands.

Methodologically, Brownfield et al. (8) state that they asked for job descriptions in their data collection, but do not note actually triangulating these documents with survey responses. From survey responses and an in-person group follow-up meeting, Brownfield and colleagues (8) noted in-depth, dominant themes that emerged from those surveyed. Unlike Sanfey et al., they include how participants experienced the role and whether metrics for assessing their success were clearly established at their institutions. Despite a relatively robust set of reported responsibilities, most striking was the theme of uncertainty about the role among their participants, resulting from vague expectations or an ill-defined purview. Brownfield and colleagues (8) provided a set of guidelines for current and prospective VCEs to consider that could potentially mitigate such an experience, including the importance of transparency with the Chair about expectations, delegation, priority setting, and establishing an appropriate infrastructure of support.

Two themes answer our research questions. Both studies a) formally identified and defined VCE duties, and b) documented the establishment and professionalization of the VCE position in departments of surgery and internal medicine in the U.S. Analysis indicated a theme wherein VCE roles and duties were defined in both works; however, the purview was dauntingly broad. As expected, multiple indicators of the professionalization of the VCE role in academic medicine, as defined by DiMaggio and Powell (2), exist within these two published studies. Both studies were published in quality journals: Academic Medicine (Impact Factor 3.292 at the time the study was published) and the Journal of Surgical Education (Impact Factor 1.634 at the time the study was published) (12-13). Moreover, data in these studies contributed to a formalized job description that set out a vast scope of duties, a broad oversight purview, working conditions, and the career development needs of this group at a national level.


The VCE role is designed to help the department navigate an ever-changing, complex, and diverse academic environment in medical education. Because these studies included only two disciplines, we believe the position remains ambiguous and not well defined. It is clear the responsibilities of the position need refinement to maximize its impact within the department.

Both studies provide specific examples of VCE responsibilities and roles, with attention to how VCEs are expected to oversee educational programs. Brownfield et al. (8) list position expectations that include educational program oversight, promotion of scholarship, and service in leadership activities. Sanfey et al. (7) provided examples by subcategorizing responsibilities into i) administration, ii) teaching, and iii) research. Both studies defined oversight as: setting the philosophical tone and course to move programs toward the institutional and/or departmental vision; defining priorities; creating initiatives that would aid in program advancement; playing a key role in redesigning evaluation technologies and methods; developing faculty reward systems; designing faculty development curricula; serving as consultant to all the educational directors in the department; advising the chair on faculty recruitment; chairing educational committees; training education staff regarding accreditation and strategic initiatives; and identifying and securing resources. Though broad, this collective list outlines responsibilities that differ from those presented in Wolfsthal et al.’s (16) job description for Internal Medicine residency directors and Foster and Clive’s (17) chapter on the program director as manager. Unlike the VCE oversight examples, which are illustrative of executive leadership, the current program director literature offers examples of managerial responsibilities within a single program, including: implementing policy and initiatives; setting agendas for meetings; budgeting basics; delegating authority; office personnel management; and time management. This distinguishes the VCE role from other departmental education positions such as the residency director: from the reviewed studies, VCE responsibilities are more vision-driven than managerial in nature (18).

Finally, it was unclear whether the VCE position should be bundled with other administrative leadership roles. According to Brownfield et al. (8), such bundling was pervasive in internal medicine. While we do not believe this is unique to one specialty, Sanfey et al.’s (7) work did not report dual leadership in surgery, perhaps because the survey question was not asked. Regardless, this complexity was not as rich or apparent in the Sanfey et al. (7) article as in Brownfield et al.’s study.

Given the emerging importance of this influential leadership role, we were surprised by the lack of a VCE recruitment strategy. In fact, both studies note that the majority of participants were thrust into the VCE role, with only a small minority being promoted into the position internally. Neither study solicited the perspectives of department chairs on what they expect of the VCE and why particular candidates were chosen for the role. This practice stands in stark contrast to the guidance provided by the articles, which offer discussion points and items to negotiate prior to accepting the VCE position. The data suggest that a formal recruitment process, with negotiation for educational resources, is needed for the VCE position to realize its potential.

Yielding only 2 full-text studies, this review is not robust, which limits recommendations. Other medical disciplines may have similar roles, but no data have been published. Nevertheless, the information in this review is educationally significant. It serves as a critical starting point from which to gain knowledge about more nuanced educational leadership positions and their mobilization toward legitimacy, formal recognition, and time allocations in clinical departments. This review documents the professionalization of the VCE role in the academic community in its infancy.

As many pulmonary/critical care physician-faculty hold the top administrative and educational leadership roles at our institution, we speculate that pulmonary/critical care training and practice lend themselves to leadership in academics. Building relationships with multidisciplinary ICU teams is much like building academic leadership teams. The skills necessary to articulate sensitive information to family members of critically ill patients provide a foundation for the most challenging administrative leadership discussions inherent to academe. Efforts to define successful leaders and study the personality traits of those from the medical specialties would provide further insight and are ongoing.

Scholars are encouraged to consider research pertaining to the VCE role and to move beyond the job description to study the value the position brings to the department. Studies should include department chair perceptions of the position in the changing education and healthcare landscape, and whether these types of roles are more appropriately suited for particular medical disciplines over others. Examining the academic culture of departments to inform the desirable dynamic for the VCE is important. A starting approach can tease out how this role is impacted by departmental relationship dynamics, behaviors, and values. Finally, future studies that include robust examination of the VCE relationship with the chair would triangulate the existing body of work, and could validate what we know about educational leadership and academic executives.


The authors thank Carole Howe, MD, MLS of the Arizona Health Sciences Library for her guidance regarding database searches, and the University of Arizona College of Medicine Department of Medicine for allowing research time to conduct this review.

Authors also thank Ms. Sarah Almodovar for her time preparing and reviewing this work.

Finally, the authors thank the national network of VCEs on the Vice Chairs for Education in Internal Medicine interest group distribution list for responding to inquiries about grey literature, clarification questions, works in press, and unpublished works: Drs. Michael Frank, Stephanie Call, Erica Brownfield, Alan Harris, John Mastronarde, Bradley Allen, Ellis Levin, Lisa Bellini, Gerald Donowitz, Joel Thorp Katz, and Susan Wolfsthal.


  1. Schuster B, Pangaro L. Understanding systems of education: what to expect of, and for, each faculty member. In: Pangaro L, ed. Leadership Careers in Academic Medicine. Philadelphia, PA: ACP Press; 2010.
  2. DiMaggio PJ, Powell W. The iron cage revisited: institutional isomorphism and collective rationality in organizational fields. American Sociological Review. 1983;48:147-60. [CrossRef]
  3. Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009; 6(6): e1000097. [CrossRef] [PubMed]
  4. Cook DA, West CP. Conducting systematic reviews in medical education: a stepwise approach. Med Edu. 2012;46:943-952. [CrossRef] [PubMed] 
  5. Schlosser R. Appraising the quality of systematic reviews. FOCUS. 2007. Technical Brief no. 17.
  6. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. Sep 2013;13:117. [CrossRef] [PubMed]
  7. Sanfey H, Boehler M, DaRosa D, Dunnington GL. Career development needs of vice chairs for education in departments of surgery. J Surg Educ. 2012 Feb;69(2):156-61. [CrossRef] [PubMed]
  8. Brownfield E, Clyburn B, Santen S, Heudebert G, Hemmer PA. The activities and responsibilities of the vice chair for education in U.S. and Canadian departments of medicine. Acad Med. Aug 2012;87:1041–5. [CrossRef] [PubMed] 
  9. Spencer L, Ritchie J, Lewis J, Dillon L; National Centre for Social Research. Quality in qualitative evaluation: a framework for assessing research evidence. 2003. Accessed March 22, 2015.
  10. National Collaborating Centre for Methods and Tools (2012). Qualitative research appraisal tool. Hamilton, ON: McMaster University. (Updated 03 October, 2012) Accessed March 22, 2015.
  11. Jack L, Hayes SC, Jeanfreau SG, Stetson B, Jones-Jack NH, Valliere R, LeBlanc C. Appraising quantitative research in health education: guidelines for public health educators. Health Promotion Practice. 2010;2:161-5. [CrossRef] [PubMed]
  12. Impact factor citation Accessed April 2, 2016.
  13. Impact factor citation Accessed April 2, 2016.
  14. Sanfey H, Boehler M, Darosa D, Dunnington GL. Career development needs of vice chairs for education in departments of surgery. J Surg Educ. 2012 Mar-Apr;69(2):156-61. [CrossRef] [PubMed]
  15. Pangaro LN. Commentary: getting to the next phase in medical education--a role for the vice-chair for education. Acad Med. 2012;87(8):999-1001. [CrossRef] [PubMed] 
  16. Wolfsthal S, Call S, Wood V. Job description of the internal medicine residency program director. In: Ficalora RF, Costa ST, eds. The Toolkit Series: A Textbook for Internal Medicine Education Programs. 11th ed. Alexandria, VA: AAIM; 2013.
  17. Foster RM, Clive DM. Program director as manager. In: Ficalora RF, Costa ST, eds. The Toolkit Series: A Textbook for Internal Medicine Education Programs. 11th ed. Alexandria, VA: AAIM; 2013.
  18. Naylor CD. Leadership in academic medicine: reflections from administrative exile. Clin Med. 2006 Sep-Oct;6(5):488-92. [CrossRef] [PubMed]

Cite as: Martinez GF, Knox KS. A qualitative systematic review of the professionalization of the vice chair for education. Southwest J Pulm Crit Care. 2016;12(6):240-52. doi:


Nurse Practitioners' Substitution for Physicians

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ USA



Background: To deal with a physician shortage and reduce salary costs, nurse practitioners (NPs) are seeing increasing numbers of patients, especially in primary care. In Arizona, SB1473 has been introduced in the state legislature; it would expand the scope of practice for NPs and nurse anesthetists to that of fully independent practitioners. However, whether nurses provide equal quality of care at similar cost is unclear.

Methods: Relevant literature was reviewed and physician and nurse practitioner education and care were compared. Included were study design and metrics, quality of care, and efficiency of care.

Results: NPs and physicians differ in the length of their education. Most clinical studies comparing NP and physician care were poorly designed, often comparing metrics such as patient satisfaction. While increased care provided by NPs has the potential to reduce direct healthcare costs, achieving such reductions depends on the particular context of care; in a minority of clinical situations, NPs appear to have increased costs compared to physicians. Savings depend on the magnitude of the salary differential between doctors and NPs, and may be offset by lower productivity and more extensive testing by NPs compared to physicians.

Conclusions: The findings suggest that in most primary care situations NPs can provide care of as high quality as primary care physicians. However, this conclusion should be viewed with caution, given that few studies were powered to assess equivalence of care and many had methodological limitations.

Physician Compared to NP Education

Physicians have a longer training process than NPs which is based in large part on history. In 1908 the American Medical Association asked the Carnegie Foundation for the Advancement of Teaching to survey American medical education, so as to promote a reformist agenda and hasten the elimination of medical schools that failed to meet minimum standards (1). Abraham Flexner was chosen to prepare a report. Flexner was not a physician, scientist, or a medical educator but operated a for-profit school in Louisville, KY. At that time, there were 155 medical schools in North America that differed greatly in their curricula, methods of assessment, and requirements for admission and graduation.

Flexner visited all 155 schools and generalized about them as follows: "Each day students were subjected to interminable lectures and recitations. After a long morning of dissection or a series of quiz sections, they might sit wearily in the afternoon through three or four or even five lectures delivered in methodical fashion by part-time teachers. Evenings were given over to reading and preparation for recitations. If fortunate enough to gain entrance to a hospital, they observed more than participated."

At the time of Flexner's survey many American medical schools were small trade schools owned by one or more doctors, unaffiliated with a college or university, and run to make a profit. Only 16 out of 155 medical schools in the United States and Canada required applicants to have completed two or more years of university education. Laboratory work and dissection were not necessarily required. Many of the instructors were local doctors teaching part-time, whose own training often left something to be desired. A medical degree was typically awarded after only two years of study.

Flexner used the Johns Hopkins School of Medicine as a model. His 1910 report, known as the Flexner report, issued the following recommendations:

  • Reduce the number of medical schools (from 155 to 31);
  • Reduce the number of poorly trained physicians;
  • Increase the prerequisites to enter medical training;
  • Train physicians to practice in a scientific manner and engage medical faculty in research;
  • Give medical schools control of clinical instruction in hospitals;
  • Strengthen state regulation of medical licensure.

Flexner recommended that admission to a medical school should require, at minimum, a high school diploma and at least two years of college or university study, primarily devoted to basic science. He also argued that the length of medical education should be four years, and that its content should conform to recommendations made by the American Medical Association in 1905. Flexner recommended that proprietary medical schools should either close or be incorporated into existing universities. Medical schools should be part of a larger university, because a proper stand-alone medical school would have to charge too much in order to break even financially.

By and large medical schools followed Flexner's recommendations. An important factor driving the mergers and closures of medical schools was that all state medical boards gradually adopted and enforced the Report's recommendations. As a result the following consequences occurred (2):

  • Between 1910 and 1935, more than half of all American medical schools merged or closed. This dramatic decline was in some part due to the implementation of the Report's recommendation that all "proprietary" schools be closed, and that medical schools should henceforth all be connected to universities. Of the 66 surviving MD-granting institutions in 1935, 57 were part of a university.
  • Physicians receive at least six, and usually eight, years of post-secondary formal instruction, nearly always in a university setting;
  • Medical training adhered closely to the scientific method and was grounded in human physiology and biochemistry;
  • Medical research adhered to the protocols of scientific research;
  • Average physician quality increased significantly.

The Report is now remembered because it succeeded in creating a single model of medical education, characterized by a philosophy that has largely survived to the present day.

Today, physicians usually have a college degree, 4 years of medical school and at least 3 years of residency. This totals 11 years after high school.

The history of NP education is much more recent. A Master of Science in Nursing (MSN) is the minimum degree requirement for becoming an NP (3). This usually requires a Bachelor of Science in Nursing followed by approximately 18 to 24 months of full-time study. Nearly all programs are university-affiliated, most faculty are full-time, and the curricula are standardized.

In total, NPs have 5 1/2 to 6 years of education after high school.

Differences and Similarities Between Physician and NP Education

Curricula for both physicians and nurses are standardized and scientifically based. The length of training is considerably longer for physicians (about 11 years compared to 5 1/2-6 years). There are also likely differences in clinical exposure. The minimum for an NP is 500 hours of supervised, direct patient care (3). Physicians have considerably more clinical time: all physicians are required to do at least 3 years of post-graduate education after medical school. Duty time is now limited to 80 hours per week, but older physicians can remember when 100+ hour weeks were common. Given a conservative estimate of 50 hours/week for 48 weeks/year, this would give physicians a minimum of 7,200 hours over 3 years.

Hours of Education and Outcomes

The critical question is whether the number of hours NPs spend in education is sufficient. No studies were identified that examined the effect of the number of hours of NP education on outcomes. However, the impact of recent resident duty hour restrictions may be relevant.

Resident Duty Hour Regulations

There are concerns about the reduction in resident duty hours. The idea behind the duty hour restrictions was that well-rested physicians would make fewer mistakes and spend more time studying. These regulations resulted in large part from the infamous case of Libby Zion, who died in New York at the age of 18 under the care of a resident and an intern because of a drug-drug interaction resulting in serotonin syndrome (4). It was alleged that physician fatigue contributed to Zion's death. In response, New York state initially limited resident duty hours to 80 per week, and in July 2003 the Accreditation Council for Graduate Medical Education adopted similar regulations for all accredited medical training institutions in the United States. Further restrictions, including a 16-hour shift limit for first-year residents, were added in 2011.

The duty hour regulations were adopted despite a lack of studies on their impact, and such studies are only beginning to emerge. A recent meta-analysis of 27 studies on duty hour restrictions demonstrated no improvements in patient care or resident well-being and a possible negative impact on resident education (5). Similarly, an analysis of 135 articles also concluded there was no overall improvement in patient outcomes as a result of resident duty hour restrictions; however, some studies suggest increased complication rates in high-acuity patients (6). There was no improvement in education, and performance on certification examinations has declined in some specialties (5,6). Survey studies revealed a perception of worsened education and patient safety, but also improvements in resident wellness (5,6).

Although the reasons for the lack of improvement (and perhaps decline) in outcomes with the resident duty hour restrictions are unclear, several authors have speculated that the loss of continuity of care resulting from different physicians caring for a patient may be responsible (7). If this is true, the poorer outcomes may have little to do with medical education or experience; rather, the duty hour restrictions may have fragmented care, and that fragmentation caused poorer care.

Comparison Between Physician and NP Care In Primary Care

A meta-analysis by Laurant et al. (8) in 2005 compared physician with NP primary care. In five studies the nurse assumed responsibility for first-contact care of patients wanting urgent outpatient visits. Patient health outcomes were similar for nurses and doctors, but patient satisfaction was higher with nurse-led care. Nurses tended to provide longer consultations, give more information to patients, and recall patients more frequently than doctors. The impact on physician workload and direct cost of care was variable. In four studies the nurse took responsibility for the ongoing management of patients with particular chronic conditions. In general, no appreciable differences were found between doctors and nurses in health outcomes for patients, process of care, resource utilization or cost.

However, Laurant et al. (8) advised caution since only one study was powered to assess equivalence of care, many studies had methodological limitations, and patient follow-up was generally 12 months or less. They also noted lower NP productivity compared to physicians (Figure 1).


Figure 1. Median ambulatory encounters per year (9).

The lower number of visits by NPs implies that any cost savings would depend on the magnitude of the salary differential between physicians and nurses, and might be offset by the nurses' lower productivity.

More recent reviews and meta-analyses have come to similar conclusions (10-13). However, consistent with Laurant et al.'s (8) warning, these studies tend to be underpowered, of poor quality, and often biased.

Despite the overall similarity in results, some studies have shown a difference in utilization. Hemani et al. (14) reported increased resource utilization by NPs compared to resident and attending physicians in primary care at a Veterans Affairs hospital. The increase in utilization was mostly explained by increased referrals to specialists and increased hospitalizations. A recent study by Hughes et al. (15) using 2010-2011 Medicare claims found that NPs and physician assistants (PAs) ordered imaging in 2.8% of episodes of care compared to 1.9% for physicians. This was especially true as the diagnosis codes became more uncommon; in other words, the more uncommon the disease, the more NPs and PAs ordered imaging tests.

NPs Outside of Primary Care

Although studies of patient outcomes in NP-directed care in the outpatient setting are few and many have methodological limitations, even fewer studies have examined NPs outside the primary care clinic. Nevertheless, NPs and PAs have long practiced in both specialty care and the inpatient setting. My personal experience goes back to the 1980s with both NPs and PAs in outpatient pulmonary and sleep clinics, the inpatient pulmonary setting and the ICU. Although most articles are descriptive, nearly all describe a benefit from physician extenders in these areas as well as in other specialties.

More recently, NPs have been hired to fill “hospitalist” roles with scant attention to whether the educational preparation of the NP is consistent with the role (16). According to Arizona law, a NP "shall only provide health care services within the NP's scope of practice for which the NP is educationally prepared and for which competency has been established and maintained” (A.A.C. R4-19-508 C). The Department of Veterans Affairs conducted a study a number of years ago comparing nurse practitioner inpatient care with resident physician care (17). Outcomes were similar, although 47% of the patients randomized to nurse practitioner care were actually admitted to housestaff wards, largely because of attending physician and NP requests. A recent article also examined NP-delivered critical care compared to resident teams in the ICU (18). Mortality and length of stay were similar.


NPs have less education and training than physicians. It would appear that the scientific bases of the curricula are similar, and there is no evidence that the aptitudes of nurses and physicians differ. Therefore, the finding that nurses care for patients as well as physicians most of the time is not surprising, especially for common chronic diseases. However, care may diverge for less common diseases, where the NP's more limited training and experience may play a role.

Physicians have undergone increasing training and certification requirements over the past few decades, and nurses are now doing the same. The American Association of Colleges of Nursing seems to be endorsing further education for nurses, encouraging either a PhD or a Doctor of Nursing Practice degree (19). However, the trend in medicine has been contradictory: requiring increasing training and certification of physicians while substituting practitioners with less education, training and experience for those same physicians. An extension of this concept is that traditional nursing roles are increasingly being filled by medical assistants or nursing assistants (20). The future will likely be more of the same: NPs will be substituted for physicians; nurses without advanced training will be hired to substitute for NPs and PAs; and medical assistants will increasingly be substituted for nurses, all to reduce personnel costs. It is likely that studies will be designed to support these substitutions but will frequently be underpowered, use rather meaningless metrics, or have other methodological flaws used to justify the substitution of less qualified healthcare providers.

Much of this "dumbing down" has been driven by shortages of physicians and/or nurses. The justification has always been that substitution of cheaper providers will solve the labor shortage while saving money. However, experience over the past few decades in the US has shown that even as education and certification requirements have increased, compensation for physicians has decreased (21). NPs can likely expect the same.

Some are asking whether physicians should abandon primary care. After years of politicians, bureaucrats and healthcare administrators promising increased compensation for primary care, most medical students and resident physicians have realized that this is unlikely. Furthermore, the increasing intrusion of regulatory agencies and insurance companies mandating an array of bureaucratic tasks has led to increasing dissatisfaction with primary care (22). Consequently, most young physicians are seeking training in subspecialty care. The question is less whether physicians will choose to abandon primary care in the future; without a dramatic change, the decision has already been made.

Arizona SB1473, the bill that would essentially make NPs equivalent to physicians in the eyes of the law, is an expected extension of the current trends in medicine. Although physicians might object, supporters of the legislation will likely accuse physicians of merely protecting their turf. Personally, I am disheartened by these trends, which seem a throwback to pre-Flexner report days. The poor studies that support them will do little more than allow the unscrupulous to line their pockets by substituting practitioners with less education, experience and training for well-trained, experienced physicians or nurses.


  1. Flexner A. Medical Education in the United States and Canada: A Report to the Carnegie Foundation for the Advancement of Teaching. New York, NY: The Carnegie Foundation for the Advancement of Teaching; 1910. Available at: (accessed 2/6/16).
  2. Barzansky B, Gevitz N. Beyond Flexner: Medical Education in the Twentieth Century. New York, NY: Greenwood Press; 1992.
  3. National Task Force on Quality Nurse Practitioner Education. Criteria for evaluation of nurse practitioner programs. Washington, DC: National Organization of Nurse Practitioner Faculties; 2012. Available at: (accessed 2/6/16).
  4. Lerner BH. A case that shook medicine. Washington Post. November 28, 2006. Available at: (accessed 2/9/16).
  5. Bolster L, Rourke L. The effect of restricting residents' duty hours on patient safety, resident well-being, and resident education: an updated systematic review. J Grad Med Educ. 2015;7(3):349-63. [CrossRef] [PubMed]
  6. Ahmed N, Devitt KS, Keshet I, et al. A systematic review of the effects of resident duty hour restrictions in surgery: impact on resident wellness, training, and patient outcomes. Ann Surg. 2014;259(6):1041-53. [CrossRef] [PubMed]
  7. Denson JL, McCarty M, Fang Y, Uppal A, Evans L. Increased mortality rates during resident handoff periods and the effect of ACGME duty hour regulations. Am J Med. 2015;128(9):994-1000. [CrossRef] [PubMed]
  8. Laurant M, Reeves D, Hermens R, Braspenning J, Grol R, Sibbald B. Substitution of doctors by nurses in primary care. Cochrane Database Syst Rev. 2005 Apr 18;(2):CD001271. [CrossRef]
  9. Medical Group Management Association. NPP utilization in the future of US healthcare. March 2014. Available at: (accessed 2/17/16).
  10. Tappenden P, Campbell F, Rawdin A, Wong R, Kalita N. The clinical effectiveness and cost-effectiveness of home-based, nurse-led health promotion for older people: a systematic review. Health Technol Assess. 2012;16(20):1-72. [CrossRef] [PubMed]
  11. Donald F, Kilpatrick K, Reid K, et al. A systematic review of the cost-effectiveness of nurse practitioners and clinical nurse specialists: what is the quality of the evidence? Nurs Res Pract. 2014;2014:896587. [CrossRef] [PubMed]
  12. Bryant-Lukosius D, Carter N, Reid K, et al. The clinical effectiveness and cost-effectiveness of clinical nurse specialist-led hospital to home transitional care: a systematic review. J Eval Clin Pract. 2015;21(5):763-81. [CrossRef] [PubMed]
  13. Kilpatrick K, Reid K, Carter N, et al. A systematic review of the cost-effectiveness of clinical nurse specialists and nurse practitioners in inpatient roles. Nurs Leadersh (Tor Ont). 2015;28(3):56-76. [PubMed]
  14. Hemani A, Rastegar DA, Hill C, al-Ibrahim MS. A comparison of resource utilization in nurse practitioners and physicians. Eff Clin Pract. 1999;2(6):258-65. [PubMed]
  15. Hughes DR, Jiang M, Duszak R Jr. A comparison of diagnostic imaging ordering patterns between advanced practice clinicians and primary care physicians following office-based evaluation and management visits. JAMA Intern Med. 2015;175(1):101-7. [CrossRef] [PubMed]
  16. Arizona Board of Nursing. Registered nurse practitioner (rnp) practicing in an acute care setting. Available at: (accessed 2/12/16).
  17. Pioro MH, Landefeld CS, Brennan PF, Daly B, Fortinsky RH, Kim U, Rosenthal GE. Outcomes-based trial of an inpatient nurse practitioner service for general medical patients. J Eval Clin Pract. 2001;7(1):21-33. [CrossRef] [PubMed]
  18. Landsperger JS, Semler MW, Wang L, Byrne DW, Wheeler AP. Outcomes of nurse practitioner-delivered critical care: a prospective cohort study. Chest. 2015;148(6):1530-5. [CrossRef] [PubMed]
  19. American Association of Colleges of Nursing. DNP fact sheet. June 2015. Available at: (accessed 2/13/16).
  20. Bureau of Labor Statistics. Occupational outlook handbook: medical assistants. December 17, 2015. Available at: (accessed 2/13/16).
  21. Robbins RA. National health expenditures: the past, present, future and solutions. Southwest J Pulm Crit Care. 2015;11(4):176-85. [CrossRef]
  22. Peckham C. Physician burnout: it just keeps getting worse. Medscape. January 26, 2015. Available at: (accessed 2/13/16).

Cite as: Robbins RA. Nurse practitioners' substitution for physicians. Southwest J Pulm Crit Care. 2016;12(2):64-71. doi: PDF


National Health Expenditures: The Past, Present, Future and Solutions

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ

"[T]he US health care system … defies the laws of economics, and of gravity. Once the price is high, it just stays there."- Dr. Naoki Ikegami


The costs of health care in the US have been increasing for many years, and the US now spends more on health care than any other developed country. The cost of health care is higher in the US in nearly every category. However, the dramatic rise in health care costs over the past 35 years has occurred during a period when pharmaceutical costs and administrative costs have also risen dramatically. It seems likely that these costs account for much of the increase in health care spending. However, neither is dealt with by the Affordable Care Act (ACA). Until a system of oversight of medical costs is enacted, it seems likely that US health care costs will continue to rise.

The Past

In comparison to other economically developed countries, health care costs have risen dramatically in the US over the past 35 years (Figure 1) (1).

Figure 1. Rise in health care spending in the US and selected other countries.

Myths. The reasons for this rise in spending have been shrouded in myths and accusations. It has been argued that high costs are the price for the best health-care system in the world. However, patient outcomes in the US are mixed. In a 2011 report by the Organization for Economic Co-operation and Development (OECD), the United States ranked 25th in life expectancy (1). Although we do better in cancer survival rates, we are more likely to die of heart disease, and we do not have a good track record on treating chronic diseases such as asthma.

Health care rationing. An argument has been made that because health care is heavily rationed in other countries, Americans use more health-care services in comparison. We do rank high in the use of some expensive tests and procedures (more on this later), but overall the OECD reports that the US is well below other developed countries in average number of doctor visits per year, hospitalizations and hospital length of stay (1). Americans have better-than-average access to specialists, but we lag behind other countries in getting immediate access to a primary care doctor when we're sick, and we are much more likely to forgo health care because of costs (2).

Bad patients. Some have claimed that the US has to spend more on health care because we are fat and lazy. Although this may be true, it does not explain the gap in health care spending between the US and other countries. Obesity rates are higher in the US but the US compares well to other countries in smoking and drinking (1). We also have a younger population compared to many other OECD countries which should actually lower costs (1).

Tort reform. The US has more lawyers and more lawsuits against doctors, but this does not seem to be a major factor in health care costs, and tort reform would probably not go far in bringing them down. A 2009 study by the nonpartisan Congressional Budget Office (CBO) found that implementing tort reform would reduce US health care spending by only 2 percent (3).

Government inefficiency. There is also speculation that US Government inefficiency and spending drive up health care costs. Health care administrative costs in the Veterans Administration (VA) are estimated to be lower than those of private insurance according to the CBO (4). However, as recently discovered in the patient wait times scandal, VA data may be suspect. The Centers for Medicare and Medicaid Services' (CMS) administrative costs are reported to be about 2 percent of claims costs, while private insurance companies' administrative costs are in the 20 to 25 percent range. The argument is that private-industry costs for advertising, collection, and profit are eliminated by CMS, resulting in lower costs. However, this concept has also been challenged. CMS's administrative costs are often hidden or completely ignored by the complex and bureaucratic reporting and tracking systems used by CMS (5). Furthermore, the estimates completely ignore the inefficiencies created by CMS's mandates, which impose an increasingly heavy paperwork burden on physicians and hospitals.

Physician income. Some think that greedy physicians making too much money explain the rising costs of health care. Physician compensation varies widely between specialties, health care settings and regions. Laugesen and Glied (6) concluded that higher physician fees were the main drivers of higher US spending. However, in 1970, the average inflation-adjusted income of general practitioners was $185,000. In 2010, it was $161,000, despite a near doubling of the number of patients that doctors see a day. Furthermore, during the boom years of the 1990s physician incomes remained relatively stagnant, with an actual decline in the early 2000s (7-9). Although physician income is higher in the US than in other countries, it would not appear to explain increasing health care costs, since physician income was predominately stagnant or decreasing while health care costs rose.

Drug costs. Pharmaceutical costs have been increasing in the US (Figure 2) (10).



Figure 2. Total prescription drug spending 1980-2012.

Some have blamed these costs for increasing health care costs in the US. Although the rate of growth appears to be leveling off when adjusted for inflation (Figure 2), pharmaceutical costs remain high in the US.

Administrative costs. In ground-breaking work published in 1991, Woolhandler and Himmelstein (11) found that US administrative health care costs increased 37% between 1983 and 1987. They estimated these costs accounted for nearly a quarter of all health care expenditures. In Canada the administrative costs were about half as much and declined over the same period. They followed their 1983-87 report by examining data from 1999 (12). By then, US administrative costs had risen to 31% of US health care expenditures.

The trend is perhaps best illustrated by the graph below (Figure 3) (13).  

Figure 3. Growth in administrators and physicians 1970-2010 (used with permission of David Himmelstein).

The growth in administrative costs may not be limited to the private sector. CMS' administrative costs are very difficult to determine. Similarly, the VA also has hidden costs. However, during my 30 years at the VA, I saw a disturbing growth in the front office. New assistant directors were continually hired, sometimes during a hiring freeze when needed doctors and nurses were not (Robbins RA, unpublished observations). The growth in VA administration has been staggering at some levels. Regional Veterans Integrated Service Network (VISN) offices were founded in the mid-1990s. These VISNs provide no healthcare, yet they now number nearly 5000 employees (14). VA central office in Washington grew from about 800 employees to 11,000 over the last 15 years (14), a staggering nearly 14-fold increase.

The Present

High Costs. Nearly everyone agrees that health care costs are too high and have continued to rise, albeit more slowly during the Obama administration (1,15). At $8,713 per person, the US has outspent every other OECD country for a number of years, including 2015 (Figure 4) (1,15).

Figure 4. Current expenditure on health, per capita, US$ purchasing power parities. OECD average in green and United States in red.

The next closest was Switzerland at $6,325. The US is a very rich country, but even so, it has devoted a higher percentage of its gross domestic product (GDP) to health than any other country for a number of years, including 2015 (Figure 5).

Figure 5. Current expenditure on health as a % of gross domestic product (GDP). OECD average in green and United States in red.

Switzerland is the next highest at 11.1% of GDP, and the average among economically developed countries, at 8.9%, is almost half that of the US.

High Numbers of Expensive Procedures. There is plenty of blame to spread for the increased cost of health care in the US. Spending on almost every area of health care is higher (Figure 6) (1,2).

Figure 6. Health spending by category in US dollars 2010 or latest year available.    

Because the spending is higher in nearly every category, the reasons for the high costs in the US are likely multifactorial. US health care has a long-standing reputation for excessive numbers of procedures at high costs. The data would seem to back that impression. The numbers of some expensive procedures or operations appear to be higher in the US compared to other countries (Table 1) (1).

Table 1. Numbers of exams or procedures in the US with OECD rank and average.

High Cost per Procedure. Furthermore, the costs of procedures in the US are high compared to other countries (Table 2) (1,16).

Table 2. Cost of common procedures. Highest cost in red.

The average price for a wide range of both medical and surgical services in the US is 85 percent higher than in other OECD countries (16). Both the numbers of expensive procedures and the high cost per procedure undoubtedly contribute to the high cost of health care in the US.

Administrative Costs. In 1999 the administrative costs of health care were estimated to be about one-third of all costs and were rapidly rising. There appears to have been little slowdown in this rapid rise. Himmelstein and Woolhandler (17) estimated that administrative costs could be as much as 45% of health care costs in 2014. There is no line for administrative costs on a medical bill, but these costs are factored into all categories of medical spending.

The Future

As both Niels Bohr and Yogi Berra have said, "it's tough to make predictions, especially about the future". Now that King v. Burwell has been settled, it is apparent that American health care will be directed by the ACA for the foreseeable future. Each year, official National Health Expenditure Projections for the next 10 years are released by the Centers for Medicare and Medicaid Services (CMS)' Office of the Actuary. By examining these projections (which may be overly optimistic) as well as some observational studies, a rough prediction of health care costs can be made.

Economies of Scale. A principle of medical economics central to the Affordable Care Act (ACA) is economies of scale (18). The theory is that larger insurers will have lower prices because they are more administratively efficient. However, a recent study found that the largest insurer in each of the US states studied raised its prices in 2015 by an average of over 10% compared to smaller competitors in the same market (19). Those steeper hikes in monthly premiums did not seem warranted by the level of health claims, which did not significantly differ as a percentage of premiums in 2014.

Provider-Owned Health Plans. Another ACA principle for controlling health care costs is the establishment of provider-owned (usually hospital) health plans. The theory is that provider-owned health plans will lower costs by preventing doctors from overcharging in a fee-for-service model. Although temptingly simple, this theory is not supported by the evidence: a recent study comparing provider-owned with nonprovider-owned plans within twelve counties across the US found the provider-owned plans to be on average 12% more expensive than those of traditional insurers (20).

Drug Costs. Although drug prices remain consistently high in the US compared to other economically developed countries, competition to reduce these prices for CMS patients has been limited by Congress. Most health care plans have focused on formularies to control prices. Under this system, contracts with pharmaceutical manufacturers establish preferred drugs for use by their clients and their contracted physician prescribers. Although this strategy has been in place for some time, it appears to be ineffectual in controlling drug costs (Figure 2). Most countries place price controls on drugs, a strategy for which there seems to be little political will in the US (21). There appears to be little in the ACA that will control drug costs.

Administrative Costs. Himmelstein and Woolhandler (22) calculated new overhead costs from the official National Health Expenditure Projections for 2012-2022 released by the Centers for Medicare and Medicaid Services (CMS)’ Office of the Actuary in July 2014. Between 2014 and 2022, CMS projects $2.757 trillion in spending for private insurance overhead and administering government health programs (mostly Medicare and Medicaid), including $273.6 billion in new administrative costs attributable to the ACA. Nearly two-thirds of this new overhead—$172.2 billion—will go for increased private insurance overhead.
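The arithmetic behind the "nearly two-thirds" figure is straightforward; a minimal sketch using only the CMS projection figures quoted above:

```python
# Projected new ACA-attributable administrative costs, 2014-2022
# (reference 22), in billions of US dollars.
new_aca_overhead = 273.6              # total new overhead attributable to ACA
new_private_insurance_overhead = 172.2  # portion going to private insurers

share = new_private_insurance_overhead / new_aca_overhead
print(f"{share:.1%}")  # 62.9% -- i.e., nearly two-thirds
```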

Most of this soaring private insurance overhead is attributable to rising enrollment in private plans which carry high costs for administration and profits. The rest reflects the costs of running the ACA exchanges.

Insuring 25 million additional Americans, as the ACA is projected to do, is surely worthwhile, but the administrative cost is enormous. The ACA isn’t the first time we’ve seen bloated administrative costs from a federal program that subcontracts for coverage through private insurers. Medicare Advantage plans’ overhead averaged 13.7 percent in 2011, about $1,355 per enrollee. However, both Congress and the White House seem intent on sending more federal dollars to private insurers. Indeed, the House Republicans’ initial budget proposal would have "voucherized" Medicare, eventually diverting almost the entire Medicare budget to private insurers. Fortunately, the measure passed by the House on April 30, 2015 dropped the voucher scheme.


The difficulty with the ACA is that it does not appear to control the two major causes of the rise in health care spending: pharmaceutical costs and, more importantly, administrative costs. Himmelstein and Woolhandler (22) have long advocated a national single-payer system for health care similar to Canada's. They cite the low overhead of Medicare, Medicaid and the VA as demonstrating that such a system can work in the US. Despite the obfuscation of overhead data by US government agencies such as CMS and the VA, it seems likely that a single-payer system would be more efficient than a private system. As Himmelstein and Woolhandler (22) have stated, "public insurance gives much more bang for each buck".

However, a caveat must be added. A lesson that should be learned from the recent VA scandal is that public officials are no more honest than private companies in reporting data. Any system devised will need close oversight by knowledgeable patient care advocates; if not, the dollars intended for health care will be diverted into administrative pockets. This oversight is most likely to work at a local level, performed by health care providers who are not employed or appointed by the administrators they oversee; otherwise, there is no real oversight. The ACA seems to encourage "provider-owned" health plans. These plans should be overseen not by business cronies or administratively appointed physicians and nurses, but by independent health care providers who will look at administrative costs with a suspicious eye and question them at a local level. Otherwise the present system of less care at higher prices will persist.


  1. Organisation for Economic Co-operation and Development. Available at: (accessed 8/4/15).
  2. Stokes B. Health affairs: among 11 nations, American seniors struggle more with health costs. Pew Research Center. December 3, 2014. Available at: (accessed 8/4/15).
  3. Congressional Budget Office. October 9, 2009. Available at: (accessed 8/4/15).
  4. Congressional Budget Office. Comparing the costs of the veterans’ health care system with private-sector costs. December, 2014. Available at: care_Costs.pdf (accessed 8/4/15).
  5. Mathews M. Medicare’s hidden administrative costs: a comparison of Medicare and the private sector. The Council for Affordable Health Insurance. January 10, 2006. Available at: (accessed 8/4/15).
  6. Laugesen MJ, Glied SA. Higher fees paid to US physicians drive higher spending for physician services compared to other countries. Health Aff (Millwood). 2011;30(9):1647-56. [CrossRef] [PubMed]
  7. Ballas C. Why do doctors accept gifts, and what would happen if they didn't? The Last Psychiatrist. October 26, 2010. Available at: (accessed 8/4/15).
  8. Tu HT, Ginsburg PB. Losing ground: physician income, 1995-2003. Track Rep. 2006;(15):1-8. [PubMed]
  9. Medscape physician compensation report 2015. Available at: (accessed 8/4/15).
  10. Centers for Medicare & Medicaid Services, Office of the Actuary. Data released January 7, 2014. Available at: (accessed 8/4/15)
  11. Woolhandler S, Himmelstein DU. The deteriorating administrative efficiency of the US health care system. N Engl J Med. 1991;324(18):1253-8. [CrossRef] [PubMed]
  12. Woolhandler S, Campbell T, Himmelstein DU. Costs of health care administration in the United States and Canada. N Engl J Med. 2003;349(8):768-75. [CrossRef] [PubMed]
  13. Bureau of Labor Statistics, NCHS. Himmelstein and Woolhandler analysis of current population survey. Available at: (accessed 8/4/15).
  14. Kizer KW, Jha AK. Restoring trust in VA health care. N Engl J Med 2014;371:295-7. [CrossRef] [PubMed] 
  15. Kane J. Health costs: how the US compares with other countries. PBS Newshour. 2012. Available at: (accessed 8/4/15).
  16. Koechlin F, Lorenzoni L, Schreyer P. Comparing price levels of hospital services across countries: results of pilot study. OECD. Available at: (accessed 8/4/15).
  17. Himmelstein D, Woolhandler S. The post-launch problem: the affordable care act’s persistently high administrative costs. May 27, 2015. Available at:
  18. Robbins RA. Capture market share, raise prices. Southwest J Pulm Crit Care. 2015;11(2):88-9. [CrossRef]
  19. Wang E, Gee G. Larger issuers, larger premium increases: health insurance issuer competition post-ACA. Technology Science. 2015081104. August 11, 2015. Available at: (accessed 8/31/15).
  20. Coleman K, Gleeson J. Cheapest healthcare provider-owned insurance plans still 12% more expensive than cheapest insurance plans not owned by providers. HealthPocket. August 20, 2015. Available at: (accessed 8/31/15).
  21. US Department of Commerce. Pharmaceutical price controls in OECD countries: implications for U.S. Consumers, pricing, research and development, and innovation. 2004. Available at: (accessed 8/31/2015).
  22. Himmelstein D, Woolhandler S. The post-launch problem: the affordable care act’s persistently high administrative costs. Health Affairs Blog. 5/27/2015. Available at: (accessed 8/31/15).

Cite as: Robbins RA. National health expenditures: the past, present, future and solutions. Southwest J Pulm Crit Care. 2015;11(4):176-85. doi: PDF


Credibility and (Dis)Use of Feedback to Inform Teaching: A Qualitative Case Study of Physician-Faculty Perspectives

Tara F. Carr, MD

Guadalupe F. Martinez, PhD


Division of Pulmonary/Critical Care, Sleep and Adult Allergy

Departments of Medicine and Otolaryngology

University of Arizona College of Medicine

Tucson, AZ



Evaluation plays a central role in teaching in that physician-faculty theoretically use evaluations from clinical learners to inform their teaching. Knowledge about how physician-faculty access and internalize feedback from learners is sparse, which is concerning given its importance in medical training. This study aims to broaden our understanding. Using multiple data sources, this cross-sectional qualitative case study, conducted in the spring of 2014, explored the internalization of learner feedback among physician-faculty teaching medical students, residents and fellows at a southwest academic medical center. Twelve one-on-one interviews were triangulated with observation notes and a national survey. Thematic and document analysis was conducted. Results revealed that the majority accessed and reviewed evaluations of their teaching. Most admitted not using learner feedback to inform their teaching, while a quarter did. Factors influencing participants' use or disuse of learner feedback were a) the reporting metrics and mechanisms, and b) physician-faculty perceptions of learner credibility. Physician-faculty did not regard highly learners' ability to assess and recognize effective teaching skills. To refine feedback for one-on-one teaching in the clinical setting, study participants recommended: a) redesigning evaluation reporting metrics and narrative sections, and b) feedback rubric training for learners.


Teaching is at the heart of academic medicine. Evaluation plays a central role in teaching in that clinical teachers theoretically use evaluations from learners to inform their teaching (1,2). Feedback has been identified as a critical component of evaluation and, by extension, of medical education training (3-6). National accreditation agencies emphasize the need for an ongoing, meaningful exchange of feedback between learners and physician-faculty (7,8).

The learner perspective has dominated feedback research (9-14). These studies examine how physician-faculty deliver feedback, and how learners absorb the content and delivery of that feedback. Physician-faculty also assume the role of learner when medical students and trainees serve as evaluators and provide feedback about physician-faculty teaching. In response, physician-faculty develop perceptions about the quality and context of feedback from learners that shape their receptiveness to that feedback and their teacher self-efficacy (15-18). Yet only four studies consider context and explore factors that influence the feedback receptiveness of physician-faculty (15,19-21). Only one study examines how physician-faculty respond to learner feedback by making adjustments to their teaching (15). Previous studies have also uncovered the important idea of "source credibility" (11,14,20,22). They find that the impetus for both effective learning and teaching adjustment comes from the feedback recipient's trust in the evaluator's credibility. A limitation of these studies is the lack of attention to the feedback reporting mechanisms used by their institutions, learner-teacher contact time, the establishment of relationships, and the various factors that go into trusting or valuing learner feedback. These perceptions play an essential role in how we understand educational exchanges between teacher and learner. As such, the purpose of this study is to examine physician-faculty perceptions about the feedback process in relation to their teaching practice.

Knowledge about how physician-faculty access and internalize feedback from learners is sparse (22), much less faculty recommendations for improving the process. This is concerning given the important role feedback plays in clinical training. This study aims to broaden the understanding of how physician-faculty access and internalize written feedback from learners while considering contextual factors that shape the overall feedback experience. We qualitatively examine if and how learner feedback influences physician-faculty receptivity and the incorporation of feedback critiques into their teaching practice. In supporting inquiries, we ask: To what extent do physician-faculty access and use feedback, and why (or why not)? What factors shape their decisions to incorporate (or not incorporate) learner feedback into their teaching practice?


Exempt from human research approval by the site's Institutional Review Board, this cross-sectional case study explored feedback internalization among medicine physician-faculty at a southwest academic medical center (23). Procedures to maintain anonymity and prevent coercion were followed and articulated to participants. Participation was voluntary and without monetary compensation.

Case study research in the social sciences calls for the use of multiple data sources to gain understanding of an issue within a bounded group (24,25). As such, three data sources were included in the analysis and used to triangulate findings. First, purposeful selection was used to identify physician-faculty whose lived experiences in the department would assist us in understanding the issue (26). Physician-faculty were introduced to the study's purpose at a routine faculty meeting, where voluntary participation was elicited.

Twelve of 15 (80%) full-time medicine subspecialists participated (Table 1). The relatively small sample size is sometimes mistaken as a limitation of qualitative case study design; our interview numbers not only meet the general qualitative research sample size criterion of five to 30 interviews (27-30), but the design also focuses on obtaining information richness in the form of quality, length, and depth of interview data, with supporting evidence from additional sources that answer the research question.

Table 1. Sample Demographics.

Original interview questions were created (Appendix A). Individual semi-structured, open-ended interviews were conducted during the Spring of 2014. Follow-up interviews with two participants were conducted in early February of 2015, once they had been promoted from mid-level to full professor. The same interview protocol was used to capture changes in perspective from full professors in an effort to expand the insight pool of senior professors.

During the preceding three years, all physician-faculty in the department received e-feedback from learners at the end of rotations that included evaluation of their individual teaching. The e-feedback was designed by the college's medical education program directors. Forms used a 9-point Likert scale with an optional written comments section after each question. To gather information regarding the internalization of feedback, we asked physician-faculty to recollect past e-feedback through their tenure at the study site. Interview questions asked participants to describe their access to evaluations and their internalization of feedback. Interviews lasted between 30-60 minutes, were audio recorded, and were transcribed. Transcripts were de-identified, and the demographic information reported was limited. Reporting of narratives was truncated to capture central points and stay within the word count limitation. Participants from outside institutions and departments were not included in this study, as their evaluation tools may include different reporting mechanisms. Additionally, we wanted to capture and understand the current subculture regarding feedback and teaching that is particular to one local clinical department.

Secondary data were observation notes and annual ACGME trainee survey results. Observation notes were taken by the principal investigator to memorialize each interview exchange, physician-faculty education meetings (e.g., faculty meetings, clinical competency committee meetings), and clinic exchanges, also during the Spring of 2014 (31). Given that the principal investigator is also a physician-faculty member, an insider researcher approach (32) allows the design to include her notations, as she is acutely attuned to the daily lived experiences of the participating physician-faculty. The advantage of this approach is that the principal investigator understands the participants' academic values, current work environment, insider language, and cues, allowing for accurate and trustworthy behavioral notes. Observation notes were taken to document behavior at education meetings where program evaluation and physician-faculty development were discussed. Disadvantages of being an insider include potential bias, assumptions about meanings, and the overlooking of routine behaviors that could be important. A quasi-outside researcher and non-physician-faculty member in the department served as a collaborator to counter insider researcher assumptions and bias.

Physician-faculty interviewed also partook in the 2013-2014 annual ACGME anonymous online trainee survey, administered in the Spring of 2014. Trainee ratings of physician-faculty commitment to GME programs, and satisfaction with the program's perceived use of evaluations to improve rotations, could further validate whether or not physician-faculty use evaluations to inform their teaching (Appendix B).

Data were analyzed using qualitative software, QSR NVivo 10©. Using a holistic and cross-case analysis approach (25), thematic coding was used to identify patterns in access to feedback and receptiveness in the interview data and observation notes. Axial coding was then used to hone in on specific challenges/strengths in feedback from learners. Once these were identified, selective coding was conducted to detect themes and redundant assertions so as to ensure that no new information was emerging. Last, document analysis of the ACGME survey results was conducted. Implementing the in-between-triangulation method (33), codes from observation notes and the ACGME survey results were linked through memos to interview data. Member checking between the principal investigator and co-investigator regarding themes, terms, and categorizations occurred to ensure data trustworthiness as defined by Guba (34) (Appendix C).


Access, review, and (dis)use

A significant proportion of physician-faculty accessed and reviewed feedback about their teaching when available (10/12; 83%). The majority of physician-faculty revealed that they do not use learner feedback to make adjustments to their teaching (9/12; 75%). One physician-faculty member summarized the group's sentiments and disclosed,

“Not at all. The verbal feedback from my colleagues and boss makes me more cognizant of my behavior and I modify it appropriately; whether it was a success, I’ll let them judge. The written eval[uation]s from [learners] has never changed [my teaching] because they go from horrible to great and they are not useful.”-S11

Only a quarter, all of whom were junior faculty, reported utilizing learner feedback to alter teaching (3/12; 25%). Evidence that the majority of physician-faculty may not be using learner feedback to adjust teaching is broadly corroborated by the ACGME survey data. Although 100% of trainees in this GME program reported having the opportunity to evaluate physician-faculty, less than 70% (close to the national average) reported satisfaction with the program and physician-faculty using learner evaluations to improve. Despite this rating, these learners also reported at a rate of 100% that physician-faculty were interested in the educational program and created an environment of inquiry (Appendix B). Furthermore, observation notes taken during daily clinical discussions showed that physician-faculty did not discuss their weaknesses with each other, especially regarding their teaching skills. Finally, when conversations regarding national conferences arose in physician-faculty education meetings or informal social settings, physician-faculty did not discuss attending conferences for the specific purpose of improving or learning new teaching skills.

Factors influencing (dis)use

Physician-faculty identified several factors shaping their decisions to incorporate learner feedback into their teaching. To begin, just over half (7/12; 58%) reported that the metric used was problematic. When asked what they found valuable or disposable in the reporting mechanisms, physician-faculty attested:

 “A one-to-one evaluation rather than [the software we use] would be more valuable because…the numerical feedback is not very good. They need directed questions. There are non-substantive comments.” - S11

“The numbers are worthless. I’d rather get comments that say, ‘the bedside teaching was excellent, but he should work on his didactic session and change the graphics on that PowerPoint,’ but I never get that.” - S04

Second, differences in the perception of the learners emerged. Observation notes documenting contact time, relationship establishment, and perspectives on fellows specifically revealed that physician-faculty tended to label learners in “good/bad” categories based on a combination of professional conduct and medical knowledge base. “Good fellows” were the desired learners in the clinical setting. These learners were discussed and seen frequently in the company of physician-faculty at grand rounds, academic half days, and departmental social gatherings. From observation, five physician-faculty had a following of learners who were similar to them in personality traits, interests, or career aspirations. These physician-faculty and learners had a relationship that was apparent at both social and academic gatherings, as evidenced by the quality, duration, and topic of verbal engagement, and by physical proximity. Not all physician-faculty observed had this type of following and engagement.

Expanding on the observation of categorization and relationship establishment, physician-faculty reflected on their overall experience with learners and reported a general concern with learners serving as evaluators. They cited this as a major reason for the disuse of feedback to inform their teaching (9/12; 75%). Concerns were grounded in a) inadequate contact time, b) learners’ teaching fund of knowledge, and c) feedback being grounded in whether or not the learner takes a personal liking to the attending. When asked what their visceral reaction to learner feedback was, physician-faculty stated,

“I think you should limit it to somebody who has prolonged exposure to you. Most [learners] are only exposed to you for a few days…I think it’s more about the person doing the eval[uation] than the faculty member’s teaching ability. So I don’t hold learner feedback in high regard.” - S07

“I don’t think they know what a good teacher is….most [learners] just anchor their eval[uation] based on whether they like someone or not, so there’s not a rigorous evaluation of teaching methods.”- S04

These issues relate to physician-faculty skepticism about learners’ abilities to assess the teaching skills of their attending. There was a perception that learners were either: a) not knowledgeable about teaching methods and feedback, or b) scared to give honest feedback to physician-faculty for fear of retaliation. Nearly all physician-faculty reporting concerns with learners’ feedback knowledge recommended that learners receive a rubric as a tool, not only to guide their feedback, but also to educate them about the evaluation process and help them identify “teaching moments” (7/9; 78%). Physician-faculty remarked,

“They might not know when the teaching is happening… I don’t think they know how that works and what that standard is... they don’t notice it…a lot of the teaching can be seen as unconventional. A rubric for them might be helpful…they need to be educated on evaluation.” - S10

Conversely, only two physician-faculty reported using learner feedback to adjust their teaching (2/9; 22%). They noted,

“[Learners] have been exposed to a lot of teaching and have a sense of what is effective and works for them. So part of our job is to be an effective teacher for different learners so if we’re not an effective teacher for certain learners we need to know about that…in a sense everyone is qualified… It doesn’t mean that one person who says you are not an effective educator is correct. We can’t please everyone, but we can work towards it.”- S11

 “…I try to establish relationships with the residents and fellows, and unfortunately or fortunately, it is easier for me to talk to them that way.”- S01

The physician-faculty quoted above valued learners’ experiences with numerous teachers and teaching styles throughout their physician training. They perceived that learners had enough knowledge and experience to provide valid and competent feedback. Additionally, they saw it as their responsibility to adjust their teaching, and they approached the teacher-learner construct as a bidirectional relationship. This is consistent with the teacher-learner relationships noted in the observation settings.


The implications surrounding learner feedback and how physician-faculty internalize and use feedback to inform their teaching practices are substantial. In sum, physician-faculty in our study did not hold learner feedback in high regard. Extending the work identifying the issue of “source credibility” in feedback (3,11,14,20,22), a key finding that adds dimension to this concept is that physician-faculty in our study used learner feedback to adjust teaching practices based on the specific value they placed on learners’ past educational experiences and competency regarding teaching skills and assessment. Results suggested that source credibility is further shaped by communication and by the existence of a relationship between the two parties, given that study participants discussed viewing the dyad as a “relationship”. Supporting the “educational alliance” framework recently introduced by Telio and colleagues (3), this idea of a relationship implies an investment in, and valuing of, each other’s roles and contributions. The quality of the relationship and communication matters, as it appears to play a role in the development of physician-faculty perceptions about their learners and, by extension, their receptiveness to learner feedback. If such an alliance is developed, physician-faculty could then draw more informed conclusions about learner credibility that could subsequently shape their use of learner feedback. When considering the context of resident and fellow learners, this underscores the importance of national Residents-as-Teachers programs, as the intent of these programs is to build a teaching fund of knowledge for trainees. Research examining their effectiveness from the perspective of seasoned physician-faculty is needed. Additionally, future studies assessing correlations between faculty who place high value on learner feedback and credibility and increased recognition as effective teachers would greatly add to our understanding of this complex issue.

Findings also highlighted the importance of appropriate feedback metrics and mechanisms. Physician-faculty reported dissatisfaction with the metrics of the institution’s online evaluation system and their corresponding narrative sections. They recommended rubric training for learners to refine feedback for one-on-one teaching. Based on our results, we support and propose a feedback rubric deployed via purposeful training. To set the stage for feedback to occur as a process, learners could undergo brief training at their respective orientations on both the use of the rubric and the importance of quality narrative feedback for program improvement and physician-faculty development. A rubric for each metric that incorporates rich descriptions could scaffold and improve the critical thinking process involved in writing constructive feedback narratives. Moreover, comment boxes on evaluation reporting mechanisms, with either prompts or examples of substantive comments, could help learners better articulate meaningful feedback for physician-faculty and make connections with rubric scoring guides. This approach forces a reconceptualization of the role of learner feedback. Training learners in the use of a feedback rubric places them in the role of teacher and expert evaluator. This alters the traditional paradigm, forces physician-faculty to expect more of learners, and establishes a system to further train learners in teaching and evaluation skills.

Finally, rubrics could include moderate tailoring to address abbreviated contact time, ensure anonymity, and review institutional safeguards against physician-faculty retaliation against the learner. A limitation of current feedback frameworks (3) is the lack of attention to how limited contact time and the desire for anonymity could impact quality communication and the establishment of a relationship. Consequently, physician-faculty being evaluated should undergo parallel training to understand the context in which learners have been instructed to reflect on and formatively evaluate their teaching practices, given a varied set of learning/teaching conditions that include the aforementioned obstacles. We encourage the development and testing of such tools as a next step.


A limitation of our study is its restriction to one department and the over-representation of junior faculty. Physician-faculty were not asked to disaggregate feedback by the type of learner; had they been, differences between physician-faculty perceptions of medical students versus residents versus fellows may have emerged. Despite these limitations, the findings provide critical insight into what gives rise to receptiveness to learner feedback while providing an honest report on why physician-faculty use or disuse evaluations to inform their teaching.


Our study evaluates the value physician-faculty place on individual learner feedback about their teaching in the clinical setting. Despite the centrality of feedback in medical education training, physician-faculty predominantly accessed and reviewed, but disused, feedback from learners to inform their teaching. This was due to the reporting mechanisms and to concern over the credibility of the learner; specifically, the learner’s ability to assess and recognize effective teaching skills. The introduction of feedback rubric training for learners could advance learning and contribute to sound evaluation, as rubrics are important sources of information for identifying and improving teaching and evaluation skills (35). Physician-faculty need to be able to trust and value the feedback they receive. Credible feedback shapes the decisions they make when selecting appropriate professional development opportunities, thus shaping the quality of our medical training programs.


We would like to thank Karen Spear Ellinwood, PhD, JD and Gail T. Pritchard, PhD of the Academy of Medical Education Scholars (AMES) Teaching Scholars Program for providing a platform from which to design and conduct the study. We also wish to thank the faculty members who participated in this study for their time and candor.

Declaration of Interest

No declarations of interest.



  1. Vu TR, Marriott DJ, Skeff KM, Stratos GA, Litzelman DK. Prioritizing areas for faculty development of clinical teachers by using student evaluations for evidence-based decisions. Acad Med. 1997;72(10):S7-S9. [CrossRef] [PubMed]
  2. Elzubeir M, Rizk D. Evaluating the quality of teaching in medical education: are we using the evidence for both formative and summative purposes? Med Teach. 2002; 24(3):313-9. [CrossRef] [PubMed]
  3. Telio S, Ajjawi R, Regehr G. The "educational alliance" as a framework for re-conceptualizing feedback in medical education. Acad Med. 2015;90(5):609-14. [CrossRef] [PubMed]
  4. van der Ridder JMM, Stokking KM, McGaghie WC, Ten Cate OTJ. What is feedback in clinical education? Med Educ. 2008;42:189-97. [CrossRef] [PubMed]
  5. Ende J. Feedback in clinical medical education. JAMA. 1983;250(6):777-81. [CrossRef] [PubMed]
  6. Wood BP. Feedback: a key feature of medical training. Radiology. 2000;215:17-9. [CrossRef] [PubMed]
  7. Accreditation Council for Graduate Medical Education. Common Program Requirements - Currently in Effect. Available at: Published July 1, 2014. Accessed December 10, 2014.
  8. Liaison Committee on Medical Education. Functions and structure of a medical school: Standards for Accreditation of Medical Education Programs Leading to the MD Degree. Available at: Published June 2013. Accessed December 10, 2014.
  9. Curtis DA, O'Sullivan P. Does trainee confidence influence acceptance of feedback? Med Educ. 2014;48(10):943-5. [CrossRef] [PubMed]
  10. Arah OA, Heineman MJ, Lombarts, KM. Factors influencing residents' evaluations of clinical faculty member teaching qualities and role model status. Med Edu. 2012;46(4):381-9. [CrossRef] [PubMed]
  11. Watling CJ, Driessen E, van der Vleuten CP, Lingard L. Learning from clinical work: The roles of learning cues and credibility judgments. Med Edu. 2012;46(2):192-200. [CrossRef] [PubMed]
  12. Ferguson P. Student perceptions of quality feedback in teacher education. Assessment & Evaluation in Higher Education. 2011;36(1):51-62. [CrossRef]
  13. Shute VJ. Focus on formative feedback. Review of Educational Research. 2008;78:153-89. [CrossRef]
  14. Bing-You RG, Paterson J, Levine MA. Feedback falling on deaf ears: Residents' receptivity to feedback tempered by sender credibility. Med Teach. 1997;19(1):40-4. [CrossRef]
  15. van der Leeuw RM, Overeem K, Arah OA, Heineman MJ, Lombarts KM. Frequency and determinants of residents' narrative feedback on the teaching performance of faculty: narratives in numbers. Acad Med. 2013;88(9):1324-31. [CrossRef] [PubMed]
  16. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302(12):1330-1. [CrossRef] [PubMed]
  17. Epstein RM, Siegel DJ, Silberman J. Self-monitoring in clinical practice: a challenge for medical educators. Journal of Continuing Education in the Health Professions. 2008;28(1):5-13. [CrossRef]
  18. Bandura A. Self-regulation of motivation and action through goal systems In Hamilton V, Bower GH, Frijda NH, eds. Cognitive perspectives on emotion and motivation. Dordrecht: Kluwer Academic Publishers; 1988: 3-38.
  19. Watling CJ, Kenyon CF, Schulz V, Goldszmidt MA, Zibrowski E, Lingard L. An exploration of faculty perspectives on the in-training evaluation of residents. Acad Med. 2010;85(7):1157-62. [CrossRef]
  20. Sargeant J, Mann K, van der Vleuten C, Metsemakers J. Feedback falling on deaf ears: residents' receptivity to feedback tempered by sender credibility. Journal of Continuing Education in the Health Professions. 2008;28(1):47-54. [CrossRef] [PubMed]
  21. Sargeant J, Armson H, Chesluk B, Dornan T, Eva K, Holmboe E, Lockyer J. The processes and dimensions of informed self-assessment: a conceptual model. Acad Med. 2010;85(7):1212-20. [CrossRef]
  22. Eva KW, Armson H, Holmboe E, Lockyer J, Loney E, Mann K, Sargeant J. Factors influencing responsiveness to feedback: on the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012;17:15-26. [CrossRef]
  23. Thomas G. How to do your case study. Thousand Oaks: SAGE Publications; 2010.
  24. Merriam SB. Case Study Research in Education: A Qualitative Approach. San Francisco: Jossey-Bass Publications; 1988.
  25. Yin RK. Case study research: Design and method (3rd ed.). Thousand Oaks, CA: SAGE Publications; 2003.
  26. Creswell JW, Miller DL. Determining validity in qualitative inquiry. Theory into Practice. 2000;39(3):124-131. [CrossRef]
  27. Marshall B, Cardon P, Amit P, Fontenot R. Does sample size matter in qualitative research?: A review of qualitative interviews in IS research. Journal of Computer Information Systems. 2013;54(1):11-22.
  28. Morrow S. Quality and Trustworthiness in Qualitative Research in Counseling Psychology. Journal of Counseling Psychology. 2005;52(2):250-60. [CrossRef]
  29. Creswell JW. Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks: Sage Publications; 1998.
  30. Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park: SAGE Publications; 1985.
  31. Adler PA, Adler P. Observational techniques. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. Thousand Oaks, CA: SAGE Publications; 1994:377-92.
  32. Unluer S. Being an insider researcher while conducting case study research. Qualitative Report. 2012;17(58):1-14.
  33. Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks: Sage Publications; 2009.
  34. Guba EG. Annual review paper: criteria for assessing the trustworthiness of naturalistic inquiries. Educational Communication and Technology. 1981;29(2):75-91.
  35. Wolf K, Stevens E. The role of rubrics in advancing and assessing student learning. The Journal of Effective Teaching. 2007;7(1):3-14.






Reference as: Carr TF, Martinez GF. Credibility and (dis)use of feedback to inform teaching: a qualitative case study of physician-faculty perspectives. Southwest J Pulm Crit Care. 2015;10(6):352-64. doi: PDF