Antimicrobial stewardship programs (ASPs) are critical infrastructure for improving antibiotic prescribing in hospitals. They are designed to optimize clinical outcomes while minimizing unintended consequences of antibiotic use, including adverse drug events, Clostridioides difficile infections (CDI), and emerging antibiotic resistance.[1]
In 2014, the Centers for Disease Control and Prevention (CDC) called on all US hospitals to implement ASPs and released the Core Elements of Hospital Antibiotic Stewardship Programs (Core Elements) to guide hospitals in achieving this goal.[2] The Core Elements describe structural and process components associated with successful ASPs. In 2015, the US National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) set a goal to implement the Core Elements in all hospitals that receive federal funding.[3] The CDC updated its Core Elements in 2019 to emphasize the importance of hospital leadership commitment, accountability, pharmacy expertise, actions such as prospective audit and feedback (PAF), local guidelines for common conditions, and antibiotic use tracking using the National Healthcare Safety Network (NHSN) Antimicrobial Use option.[4]
To support the National Action Plan for CARB, The Joint Commission established ASP standards for its accredited hospitals effective January 2017.[5] In 2017, the Agency for Healthcare Research and Quality (AHRQ) Safety Program for Improving Antibiotic Use began a pragmatic quality-improvement program that produced free, setting-specific toolkits for ASPs.[6,7] The Centers for Medicare and Medicaid Services (CMS) added federal regulations for hospital antibiotic stewardship programs to the conditions of participation in 2019.[8]
These combined efforts appear to have been successful in establishing ASPs in hospitals; self-reported data from NHSN annual hospital surveys revealed that 91% of acute-care hospitals had all 7 Core Elements in place in 2020, compared to only 41% in 2014.[9] Although most hospitals have a basic infrastructure, it is important to ensure that ASPs are implementing effective approaches that strengthen and advance their existing programs.
To identify promising, evidence-based leading ASP practices, The Joint Commission and The Pew Charitable Trusts convened an in-person meeting of experts and key stakeholder organizations in May 2018.[10] Leading practices can be described as best and emerging interventions that complement, strengthen, or go beyond traditional interventions conducted by ASPs. The group identified 6 leading practices (3 established or emerging practices and 3 measurement-related practices) that top-performing ASPs should be implementing to improve patient care: (1) development and implementation of facility-specific treatment guidelines (FSTGs), (2) interactive prospective audit and feedback (also known as handshake stewardship), (3) optimizing diagnostic testing (also known as diagnostic stewardship), (4) measurement of antimicrobial use using days of therapy per 1,000 days present or patient days, (5) measurement of hospital-onset CDI, and (6) measurement of adherence to FSTGs.
In this study, we assessed the proportion of Joint Commission–accredited hospitals that have implemented these 6 leading practices of antimicrobial stewardship, and we identified hospital characteristics associated with these practices.
This cross-sectional observational study was guided by 9 expert advisors who (1) helped develop the web-based questionnaire, (2) established the minimum necessary requirements to determine whether a hospital had implemented a leading practice, and (3) advised on data interpretation. Table 1 presents the operational descriptions of the leading practices.
We reviewed published literature and previous questionnaires, and we held several advisory panel meetings to establish questionnaire domains and review draft questions.[11] The draft questionnaire was pilot tested at 22 hospitals in fall 2019 (Supplementary Methods 1 online). To calculate prevalence of leading practices, algorithms linked specific combinations of questionnaire items (Supplementary Methods 2 online).
General medical–surgical acute-care hospitals, children’s hospitals, and critical-access hospitals (CAHs) that received accreditation following a full Joint Commission accreditation survey visit in 2018 were eligible for inclusion. Hospitals due for a survey visit in 2019 or 2020 were excluded to reinforce that the study was unrelated to accreditation. The Joint Commission, a not-for-profit organization, accredits ∼3,239 (64.3%) of 5,038 US nonspecialty hospitals: 2,328 (76.9%) of 3,416 general medical–surgical acute-care hospitals, 94 (81.0%) of 116 children’s hospitals, 152 (89.9%) of 169 federally owned hospitals, and 365 (27.3%) of 1,337 CAHs.[12,13]
Following a hardcopy advance letter to hospitals in January 2020, a 50-item questionnaire was sent by e-mail to the designated ASP leader (Supplementary Methods 3 online). The desired minimum sample size, calculated based on 5% precision and confidence intervals (CIs) of 95% after applying a finite population correction factor, was determined to be 274 hospitals.
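The stated minimum sample size can be reconstructed with the standard Cochran formula plus a finite population correction. The inputs below (z = 1.96 for 95% CIs, a conservative p = 0.5, and the eligible pool of 948 hospitals as the population) are assumptions for illustration; the paper does not state them explicitly.

```python
import math

# Reconstructed sample-size calculation (illustrative assumptions: p = 0.5,
# z = 1.96, and N = 948 eligible hospitals as the finite population).
z = 1.96   # z-score for 95% confidence
e = 0.05   # 5% precision (margin of error)
p = 0.5    # conservative proportion, maximizes required n
N = 948    # eligible hospitals invited

n0 = z**2 * p * (1 - p) / e**2   # infinite-population sample size (~384)
n = n0 / (1 + (n0 - 1) / N)      # finite population correction
print(math.ceil(n))  # → 274
```

Under these assumptions the calculation reproduces the reported minimum of 274 hospitals.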
We used R version 3.5 software (R Foundation for Statistical Computing, Vienna, Austria) for data analysis. Sampling weights were used to adjust the results for nonresponse and were applied to the calculation of prevalence for the leading practices. Logistic regression was used to estimate the probability that a sampled hospital had completed the survey as a function of bed-size category (ie, small, ≤100 beds; medium, 100–399 beds; and large, ≥400 beds), location (urban or rural), health-system status (membership in a hospital system or not), and teaching status (major, minor, or nonteaching). The inverse of the predicted probability of response was used as the weight. The mean scores for each practice, both overall and stratified by hospital characteristics, were calculated using these sampling weights. Sampling weights were not applied to frequencies of other descriptive survey findings. P values < .05 were considered significant. We used the χ2 test to examine differences in response rates by hospital characteristics, and we have provided 95% CIs for the mean scores overall and by characteristic.
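The nonresponse-weighting step can be sketched as follows. This is illustrative only: the study used R, the hospital characteristics here are simulated dummy indicators, and a minimal gradient-ascent logistic fit stands in for a full generalized linear model.

```python
import numpy as np

# Simulated data: 948 invited hospitals with dummy-coded characteristics
# (small bed size, rural, system membership, teaching) -- all hypothetical.
rng = np.random.default_rng(42)
n = 948
X = rng.integers(0, 2, size=(n, 4)).astype(float)
X = np.hstack([np.ones((n, 1)), X])      # add intercept column
responded = rng.random(n) < 0.30         # ~30% response rate, as observed

# Fit logistic regression P(responded | X) by gradient ascent on the
# log-likelihood (gradient = X^T (y - p)).
beta = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (responded - p) / n

p_response = 1.0 / (1.0 + np.exp(-X @ beta))

# Sampling weight = inverse of the predicted probability of response,
# computed for respondents only.
weights = 1.0 / p_response[responded]

# Weighted prevalence of a hypothetical leading practice among respondents.
practice = rng.integers(0, 2, size=responded.sum()).astype(float)
weighted_prev = np.average(practice, weights=weights)
print(round(float(weighted_prev), 3))
```

Hospitals with characteristics that predict low response (eg, small, rural) receive larger weights, so their respondents count more in the prevalence estimates.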
This project was reviewed by Ethical and Independent Review Services and was determined to be exempt from institutional review board (IRB) review.
Approximately 1,600 hospitals underwent a full accreditation survey in 2018. Of these, 601 were specialty hospitals and 44 did not have valid contact information. E-mail invitations were sent to 948 eligible hospitals in 48 states. Overall, 288 (30.4%) of 948 hospitals completed the questionnaire, meeting the sample size needed for estimated precision. Respondents came from 47 states.
Among responding hospitals, 82 (28.5%) were small, 162 (56.3%) were medium sized, and 44 (15.3%) were large. Also, 228 (79.2%) hospitals were in urban settings; 26 (9.0%) were major teaching hospitals; 230 (79.9%) belonged to a healthcare system; 25 were CAHs; and 5 were children’s hospitals. Small hospitals (P = .005) and nonteaching hospitals (P = .01) were less likely to respond than were large and teaching hospitals, respectively. Healthcare system membership and location were similar between respondents and nonrespondents (Table 2). Furthermore, 141 respondents (49.0%) reported their role or title as specialists in antimicrobial stewardship or infectious disease (eg, ASP pharmacist, ID clinical pharmacist, or ASP medical director); 125 (43.4%) reported their role or title as nonspecialist pharmacy directors or clinical pharmacists; and 22 (7.6%) reported another role (eg, infection preventionist or director of quality).
Note. ASP, antimicrobial stewardship program; ID, infectious diseases.
Percentages are unweighted. The χ2 test was used to test for the significance of differences in hospital characteristics.
a System indicates whether a hospital is affiliated with a healthcare system. A multihospital health care system is 2 or more hospitals owned, leased, sponsored, or contract managed by a central organization (AHA data dictionary 2018).
b Teaching hospitals are those with Council of Teaching Hospitals designation (COTH). Minor teaching hospitals are those approved to participate in residency and/or internship training by the Accreditation Council for Graduate Medical Education (ACGME), or American Osteopathic Association (AOA) or those with medical school affiliation reported to the American Medical Association. Nonteaching hospitals are those without COTH, ACGME, AOA or medical school (AMA) affiliation (AHA data dictionary 2018).
c Hospital location indicates rural or urban location based on Metropolitan Statistical Area (MSA) designation. A rural location is defined as located outside an MSA, as designated by the US Office of Management and Budget (OMB). An urban area is a geographically defined, integrated social and economic unit with a large population nucleus (AHA data dictionary 2018).
Prevalence of leading practices
Weighted estimates of the prevalence of the leading practices are provided in Table 3, stratified by hospital characteristics. Implementation across all 6 leading practices was as follows: only 3 hospitals (1.0%) had implemented none of the practices; 16 (5.6%) had implemented 1; 37 (12.9%), 2; 69 (24.0%), 3; 68 (23.6%), 4; 56 (19.4%), 5; and 39 (13.5%) had implemented all 6. The median number of leading practices implemented across hospitals was 4 (interquartile range, 3–5).
Note. CAP, community-acquired pneumonia; UTI, urinary tract infection; SSTI, skin and soft-tissue infection; CDI, Clostridioides difficile infection.
a Facility-specific treatment guidelines included CAP, UTI, SSTI, and sepsis.
b Measured as days of therapy per 1,000 days present or 1,000 patient days.
Facility-specific treatment guidelines
Overall, 268 hospitals (93.1%) developed FSTGs for at least 1 inpatient condition. The most frequently addressed conditions were community-acquired pneumonia (CAP) (n = 246 hospitals, 85.4%), sepsis (n = 232 hospitals, 80.6%), urinary tract infection (UTI) (n = 215 hospitals, 74.7%), and skin and soft-tissue infection (SSTI) (n = 199 hospitals, 69.1%). Furthermore, 161 hospitals (55.9%) developed FSTGs for all 4 of these conditions (Supplementary Table 1 online). Hospitals not in a health system were least likely to have met the criteria for this leading practice (34.3%; 95% CI, 27.8%–40.8%; P < .001) (Table 3). Guidelines were generally implemented as treatment algorithms or pathways built into the electronic health record (EHR) via order sets.
Interactive prospective audit and feedback
Overall, 239 hospitals (83.0%) reported having any process for prospective audit and feedback (PAF). Approaches used to provide frontline staff with feedback varied widely. Recommendations were commonly provided by the ASP pharmacist (n = 198, 68.8%) using some combination of telephone (n = 224, 77.8%), face-to-face (n = 198, 68.8%), text message (n = 155, 53.8%), or EHR alert (n = 104, 36.1%). Most hospitals (n = 198, 68.8%) reviewed orders for all units; 142 (49.3%) reviewed orders for all antimicrobials, and 123 (42.7%) reviewed orders 4–5 days per week (Table 4).
Note. ASP, antimicrobial stewardship program; EHR, electronic health record.
Percentages are unweighted.
a Respondents were asked to select all applicable responses.
Regarding the leading practice criteria, 214 hospitals (72.4%) performed interactive PAF, whereby an ASP team member provided feedback by telephone (speaking with the clinician or leaving a voice message), face to face, or both. Small hospitals (61.0%; 95% CI, 56.0%–66.0%; P = .0018), rural hospitals (52.6%; 95% CI, 46.5%–58.6%; P < .001), and nonteaching hospitals (68.2%; 95% CI, 63.9%–72.4%; P = .0076) were less likely to have implemented interactive PAF (Table 3).
Diagnostic testing optimization
Overall, 207 hospitals (71.9%) had procedures in place to optimize the appropriate use of diagnostic tests. Regarding the leading practice criteria, only 105 hospitals (34.9%) had implemented procedures to optimize testing for both C. difficile and UTIs (Table 3). Small hospitals (25.2%; 95% CI, 20.7%–29.6%; P = .030) and nonsystem hospitals (21.0%; 95% CI, 15.5%–26.6%; P = .0077) were less likely to meet this leading practice.
The main strategies used to optimize diagnostic testing for C. difficile were laboratory-initiated interventions (n = 165 hospitals, 57.3%) and clinician education sessions (n = 162, 56.3%). Allowing reflex urine cultures only when specific parameters were met (n = 91, 31.6%) and clinician education (n = 87 hospitals, 30.2%) were strategies commonly used to optimize urine-specimen testing. Hospitals frequently (n = 120, 41.7%) used a clinical decision support system to optimize diagnostic testing for CDI, though fewer (n = 34, 11.8%) did so for urine-specimen testing (Supplementary Table 2 online).
Regarding antimicrobial use, 235 (79.8%) hospitals routinely measured days of therapy (DOT) per 1,000 days present or 1,000 patient days. Small hospitals (67.8%; 95% CI, 63.0%–72.5%; P = .0010), rural hospitals (69.1%; 95% CI 63.5%–74.8%; P = .033), and nonteaching hospitals (74.2%; 95% CI, 70.2%–78.2%; P = .033) were less likely to measure antibiotic DOTs (Table 3).
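For context, the DOT metric counts each antimicrobial agent a patient receives on a calendar day as 1 day of therapy (so a patient on 2 agents for 5 days contributes 10 DOT), normalized to a hospital-size denominator. A minimal worked example with hypothetical numbers:

```python
# Worked example of days of therapy (DOT) per 1,000 days present
# (all numbers are hypothetical, for illustration only).
total_dot = 3200      # aggregate antimicrobial days of therapy in the month
days_present = 9500   # denominator: total patient days present in the month

dot_per_1000 = total_dot / days_present * 1000
print(round(dot_per_1000, 1))  # → 336.8
```

The same arithmetic applies when 1,000 patient days is used as the denominator; only the definition of the denominator changes.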
The overall proportion of hospitals measuring hospital-onset CDI (HO-CDI) was high (n = 258, 88.2%). Small hospitals were least likely (80.3%; 95% CI, 76.2%–84.3%; P = .0038) to measure HO-CDI. The proportion of hospitals monitoring provider adherence to at least 1 FSTG (ie, CAP, UTI, SSTI, or sepsis) was low: only 110 hospitals (37.1%) met this leading practice, with no differences by hospital characteristics (Table 3). Approximately one-fourth assessed adherence to guidelines for UTI (n = 73 hospitals, 25.3%), sepsis (n = 71 hospitals, 24.7%), or CAP (n = 70 hospitals, 24.3%); however, only 46 hospitals (16.0%) assessed adherence to the FSTG for SSTI (Supplementary Table 1 online). Some hospitals (n = 59, 20.5%) collected adherence information manually, and 48 hospitals (16.7%) collected it electronically. Adherence results were disseminated to clinicians in formal meetings such as a pharmacy and therapeutics committee or medical staff meeting (n = 109 hospitals, 37.8%), followed by informal approaches such as PAF (n = 63 hospitals, 21.9%), individual in-person feedback (n = 57 hospitals, 19.8%), in-service educational lectures (eg, grand rounds; n = 44 hospitals, 15.3%), and e-mail distribution (n = 29 hospitals, 10.1%).
In this study, we sought to determine what proportion of Joint Commission–accredited hospitals had implemented the 6 leading practices of antimicrobial stewardship previously identified by an expert group.[10] Overall, these results show encouraging signs that US hospitals are adopting the leading practices of antimicrobial stewardship.
Most hospitals had implemented 1 or more facility-specific guidelines, and slightly more than half had guidelines in place for CAP, UTI, SSTI, and sepsis. Similarly, in most hospitals, ASP team members were performing interactive prospective audit and feedback, a powerful intervention to modify clinician practice and optimize treatment. The most common modes of interaction were phone calls, face-to-face conversations, and text messaging. There was, however, considerable variation across hospitals in how often this was done, how many units were included, and which drugs were reviewed. In a similar study in Colorado hospitals, 55% of respondents were performing handshake stewardship.[14]
Most hospitals measured antibiotic use with the recommended days of therapy metric.[15,16] Enrollment in the CDC NHSN Antimicrobial Use option allows hospitals to electronically capture and submit these data, in partnership with a vendor, and to calculate a standardized antimicrobial administration ratio.[17] Similarly, most hospitals were measuring CDI rates, likely due to the mandatory NHSN measure in the CMS hospital Inpatient Quality Reporting (IQR) program.[18]
However, 2 leading practices remain greatly underutilized. The first is measuring adherence to at least 1 FSTG, which only approximately one-third of hospitals did. Without data on adherence to treatment guidelines, improvement will be difficult for many hospitals. There are several possible reasons for this underutilization. Hospitals often lack the technical support resources or EHR capabilities to electronically capture adherence data; when these resources are unavailable, time-consuming retrospective manual data collection is required. Also, no standardized metrics or guidance for measuring FSTG adherence are available. When nonadherence is identified, it may be difficult to attribute it to individual prescribers for targeted interventions, and changing prescriber behavior is harder to accomplish than making technical changes to electronic systems. Greater understanding of the barriers to assessing adherence to local guidelines, along with readily available tools, is critical to improving this practice.
Finally, only one-third of hospitals reported efforts to optimize diagnostic testing for both C. difficile and UTIs, thereby curbing the unnecessary antibiotic prescribing that results from inappropriate testing. This was the lowest overall percentage among the leading practices. This may be because optimizing diagnostic testing requires a multidisciplinary effort that involves adjusting infection control and/or microbiology laboratory protocols. Optimizing testing for C. difficile was slightly more common than for urine specimens, likely because CDI rates are publicly reported.[18] Clinician education as an intervention to improve diagnostic testing was likewise more common for C. difficile than for urine-specimen collection. This difference may reflect greater clarity on when and whom to test for C. difficile and less clarity on when to obtain a urine specimen. This finding is consistent with an infection preventionist survey that found hospitals frequently rejected formed stool submitted for CDI testing but used urine culture stewardship much less often.[19]
Variation by hospital characteristics
In this study, the implementation of 4 leading practices was less common among small hospitals. The first 2 practices were interactive prospective audit and feedback and diagnostic stewardship, which may reflect more dedicated roles and established expertise in antimicrobial stewardship at larger hospitals. Although hospitals with fewer providers sometimes have a more collaborative environment, they also have fewer ASP staff with ID training.[20] In-person feedback may be more challenging in small hospitals where physicians are present for only a small portion of the day. Small hospitals may also have less information technology (IT) surveillance capability to target reviews, although strategies to overcome these limitations have been recommended.[21–24] The other 2 practices less common in small hospitals were measurement of CDI and antibiotic use. This may be because CAHs are not yet required to participate in the CMS IQR program and are less likely to enroll in the NHSN Antimicrobial Use option.[18]
Hospitals belonging to a health system more frequently performed 2 leading practices. The first was the development of guidelines for all 4 conditions and the implementation of guidelines for CAP, UTI, and SSTI. Health systems can provide centralized resources, including ASP clinical expertise for FSTG development as well as the technical staff needed to incorporate FSTGs into EHRs.[25–27] Similarly, belonging to a system was associated with optimizing diagnostic testing for both C. difficile and UTIs. Diagnostic testing guidelines can often be integrated into EHR order sets at the system level.
As described, our findings indicated that most hospitals have implemented some, but not all, of the leading practices. Oversight organizations and national public health agencies have played a pivotal role in working to establish prioritized requirements for ASPs, driving demonstrable improvement over time, maintaining antibiotic stewardship in the national spotlight, and modifying prioritized requirements with new data. Now may be the right time for oversight organizations to direct increased attention to ASPs and to help reprioritize resources. Several studies have reported that ASP activities decreased when resources shifted to the COVID-19 pandemic response.[28–32]
Our findings underscore the importance of substantive time and financial commitment from clinical and administration leadership for ASPs at both the health-system and local-hospital levels. Such support can create an infrastructure that will facilitate the dissemination and implementation of best practices and build the personnel and technical capacity for ASPs to achieve local goals, assess guideline adherence, and provide interactive prospective audit and feedback, much of which is carried out by pharmacists. When possible, health-system leaders should centralize these capacities and expertise to provide specialized support for smaller hospitals, for example, through antibiotic stewardship telehealth programs.[21,33]
ASP leaders must tailor the implementation of practices or interventions to the local facility environment and its challenges, and should first determine whether the internal environment is receptive to the change.[34–36] ASP leaders can also take advantage of free resources such as the AHRQ toolkits and the CDC antimicrobial stewardship program assessment tool.[7,37]
This study had several limitations. The sample included only hospitals accredited by The Joint Commission. Despite efforts to clarify that this project was unrelated to accreditation, the possibility of a positive response bias exists. A follow-up qualitative study of challenges and facilitators related to implementing these practices in a subsample of respondents will elucidate areas in which the questionnaire was unclear. The overall response rate was likely affected by the COVID-19 pandemic. To adjust for the lower response rate in small hospitals, we weighted the analysis of leading-practice prevalence; even so, nonresponding hospitals may have been less advanced in their ASP practices. Another limitation is the potential positive response bias associated with self-reported data. We did not collect information on the staffing composition of ASP teams, which could confound interpretations related to hospital characteristics. Although the target respondent was the ASP leader, in some cases infection preventionists may have been more familiar with CDI diagnostic stewardship practices and NHSN-related issues because infection preventionists report these data. Finally, we did not address the ASP’s role in outpatient departments, to which hospital ASPs often devote considerable resources. For example, ASP interventions for outpatient respiratory infections may be more salient for smaller hospitals than certain leading practices such as CDI reporting.
Overall, our findings indicate that many hospital ASPs have implemented effective practices such as facility-specific treatment guidelines for common conditions, engaging in interactive prospective audit and feedback, and measuring antibiotic use and CDI. However, advancing diagnostic stewardship activities and assessing compliance with local guidelines will require additional commitment, resources, guidance, and oversight from internal and external partners to maximize the overall impact of ASPs, especially in smaller hospitals.
To view supplementary material for this article, please visit https://doi.org/10.1017/ice.2022.241.
The findings and conclusions in this manuscript are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention. The authors are sincerely grateful to all hospital staff who agreed to participate in the project and completed the questionnaire. We thank the following individuals who pilot tested the questionnaire: Reese Cosimi PharmD; John J. Veillette PharmD, BCPS; Dusten T. Rose PharmD BCIDP AAHIVP; Jason Taylor PharmD; Kyle Piscitello; Elizabeth Cicchetti PharmD; David Coe Silver PharmD BCPS; Dustin Waters PharmD BCPS (AQ-ID); Heather N. Taylor PharmD BCPS; Jessica A. Garcia (Watt) RN BSN; Kelsey Pena PharmD; Lauren H. Huneycutt PharmD; Meagan Godwin PharmD BCPS BCIDP; Leonard B. Johnson MD; Brian Maynard PharmD BCIDP; Mary Hutton PharmD BCIDP; Jason Child PharmD BCIDP; Sarah K. Parker MD; Erin M. Gentry PharmD BCPS; and Allison M. Kane PharmD. We are very grateful to Jason Newland MD MEd and Candace Allen RN MSN for their important contribution to the expert panel activities. Finally, we thank Tasha Mearday, BS, for data collection and editorial assistance, Kristine Donofrio for project coordination, and Scott Williams, PsyD, for guidance and review.
This project was supported in part by The Pew Charitable Trusts (contract ID no. 32952).
Conflicts of interest
Elizabeth S. Dodds-Ashley, PharmD, MHS, reports having received consulting fees from the following institutions: University of Maryland, University of Chicago, American College of Clinical Pharmacy, Hospital Association of New York State, and Sarah Moreland Russell Consulting. She has also received author royalties from UpToDate and personal fees from Joint Commission Resources, Belmont University, and Making a Difference—Infectious Diseases. She reports research grants to her institution from the CDC. All other authors report no conflicts of interest relevant to this article.