Background: Prompt identification of patients colonized or infected with carbapenem-resistant Enterobacterales (CRE) upon admission can help ensure rapid initiation of infection prevention measures and may reduce intrafacility transmission of CRE. The Chicago CDC Prevention Epicenters Program previously created a CRE prediction model using state-wide public health data (doi: 10.1093/ofid/ofz483). We evaluated how well a similar model performed using data from a single academic healthcare system in Atlanta, Georgia, and we sought to determine whether including additional variables improved performance. Methods: We performed a case–control study using electronic medical record data. We defined cases as adult encounters at acute-care hospitals in a 4-hospital academic healthcare system from January 1, 2014, to December 31, 2021, with CRE identified from a clinical culture within the first 3 hospital days. Only the first qualifying encounter per patient was included. We frequency matched cases to control admissions (no CRE identified) from the same hospital and year. Using multivariable logistic regression, we compared 2 models. The “public health model” included 4 variables from the Chicago Epicenters model (age, number of hospitalizations in the prior 365 days, mean length of stay in hospitalizations in the prior 365 days, and hospital admission with an infection diagnosis in the prior 365 days). The “healthcare system model” added 4 additional variables (admission to the ICU in the prior 365 days, malignancy diagnosis, Elixhauser score, and inpatient antibiotic days of therapy in the prior 365 days) to the public health model. We used billing codes to determine Elixhauser score, malignancy status, and recent infection diagnoses. We compared model performance using the area under the receiver operating characteristic curve (AUC). Results: We identified 105 cases and 441,460 controls (Table 1). CRE was most frequently identified in urine cultures (46%).
All 4 variables in the public health model and the 4 additional variables in the healthcare system model were significantly associated with case status in unadjusted analyses (Table 1). The AUC for the public health model was 0.76, and the AUC for the healthcare system model was 0.79 (Table 2; Fig. 1). In both models, a prior admission with an infection diagnosis was the most significant risk factor. Conclusions: A modified CRE prediction model developed using public health data and focused on prior healthcare exposures performed reasonably well when applied to a different academic healthcare system. The addition of variables accessible in large healthcare networks did not meaningfully improve model discrimination.
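The AUC used to compare the two models can be computed from any set of predicted risks and case labels via its rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch, using illustrative scores rather than study data:

```python
# Illustrative sketch: rank-based AUC for comparing the discrimination of
# two risk models. Scores and labels below are hypothetical, not study data.

def auc(scores, labels):
    """AUC = P(score of a random case > score of a random control),
    counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]                        # 1 = case, 0 = control
model_a_scores = [0.9, 0.2, 0.8, 0.1]        # hypothetical model output
model_b_scores = [0.9, 0.8, 0.3, 0.1]        # hypothetical model output
print(auc(model_a_scores, labels))           # 0.75
print(auc(model_b_scores, labels))           # 1.0
```

A higher AUC (closer to 1.0) indicates better discrimination; 0.5 is chance-level.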
Background: Socioeconomic barriers or divergent implementation of prevention measures may impact risk of healthcare-associated infections by racial group. We utilized a previously studied cohort of patients to quantify disparities in central-line–associated bloodstream infection (CLABSI) risk by race, accounting for inherent differences in risk related to device utilization. Methods: In a retrospective cohort of adult patients at 4 hospitals (range, 110–733 beds) from 2012 to 2017, we linked central-line data to patient encounter data: race, age, comorbidities, total parenteral nutrition (TPN), chemotherapy, and CLABSI. Analysis was limited to patients with >2 central-line days and <3 concurrent central lines. Patient exposures were calculated for each central-line episode (defined by insertion and removal dates); analysis of central-line episode-specific risk of CLABSI among Black versus White patients adjusted for clinical factors, duration of central-line episode, and central-line risk category (ie, low: single port, dialysis, or PICC; medium: single temporary or nontunneled; or high: any concurrent central lines) in Cox proportional hazards regression of time to CLABSI. Results: In total, 526 CLABSIs occurred a median of 14 days after insertion among 57,642 central-line episodes in 32,925 patients. CLABSIs occurred with similar frequency across racial groups: 217 (1.7%) among Black patients, 256 (1.6%) among White patients, and 11 (1.6%) among Hispanic patients (an additional 42 occurred among patients of unknown or other race). Duration of central-line episode was similar between racial groups (median, 5 days). Black patients were less likely to have medium-risk central lines (34%) compared to White patients (RR, 0.82; 95% CI, 0.79–0.84), but they had a similar frequency of high-risk central lines (21%; RR, 1.0; 95% CI, 1.0–1.1).
Compared with low-risk central lines, risk of CLABSI was increased among medium-risk central lines (RR, 1.3; 95% CI, 1.0–1.7) and high-risk central lines (RR, 2.2; 95% CI, 1.8–2.7). CLABSIs were more likely in TPN central lines (RR, 2.3; 95% CI, 1.9–2.7) than others, but they were not more likely among Black patients than White patients (RR, 0.9; 95% CI, 0.1–1.1). In survival analysis, there were 24,700 central-line episodes among Black patients compared to 26,648 episodes among White patients; adjusting for central-line risk and TPN, the risk of CLABSI was similar during the first 21 days of central-line use (adjusted hazard ratio, 1.08; 95% CI, 0.88–1.32) (Fig. 1). Conclusions: After accounting for central-line configuration, Black patients did not have a higher risk of CLABSI within 21 central-line days. Further evaluation is warranted to assess racial disparities in risks of other healthcare-associated infections and to determine whether a lack of CLABSI-specific racial disparities can be replicated in other regions and healthcare systems.
We evaluated the impact of test-order frequency per diarrheal episode on Clostridioides difficile infection (CDI) incidence estimates in a sample of hospitals at 2 CDC Emerging Infections Program (EIP) sites.
Inpatients at 5 acute-care hospitals in Rochester, New York, and Atlanta, Georgia, during two 10-workday periods in 2020 and 2021.
We calculated diarrhea incidence, testing frequency, and CDI positivity (defined as any positive NAAT result) across strata. Predictors of CDI testing and positivity were assessed using modified Poisson regression. Population estimates of incidence using modified Emerging Infections Program methodology were compared between sites using the Mantel-Haenszel summary rate ratio.
Surveillance of 38,365 patient days identified 860 diarrhea cases from 107 patient-care units mapped to 26 unique NHSN-defined location types. Incidence of diarrhea was 22.4 per 1,000 patient days (medians, 25.8 for Rochester and 16.2 for Atlanta; P < .01). Similar proportions of diarrhea cases were hospital onset (66%) at both sites. Overall, 35% of patients with diarrhea were tested for CDI, but this differed by site: 21% in Rochester and 49% in Atlanta (P < .01). Regression models identified location type (ie, oncology or critical care) and laxative use as predictive of CDI test ordering. Adjusting for these factors, CDI testing was 49% less likely in Rochester than Atlanta (adjusted rate ratio, 0.51; 95% confidence interval [CI], 0.40–0.63). Population estimates in Rochester had a 38% lower incidence of CDI than Atlanta (summary rate ratio, 0.62; 95% CI, 0.54–0.71).
After accounting for patient-specific factors that influence CDI test ordering, differences in testing practices between sites remained and likely contribute to regional differences in surveillance estimates.
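The Mantel-Haenszel summary rate ratio used to compare the sites pools stratum-specific event counts weighted by person-time. A minimal sketch with hypothetical strata (not the surveillance data):

```python
# Illustrative sketch: Mantel-Haenszel summary rate ratio for person-time
# data. Each stratum is (events_a, persontime_a, events_b, persontime_b);
# counts below are hypothetical.

def mh_rate_ratio(strata):
    """Summary rate ratio of group a vs group b across strata."""
    num = sum(a * pt_b / (pt_a + pt_b) for a, pt_a, _, pt_b in strata)
    den = sum(b * pt_a / (pt_a + pt_b) for _, pt_a, b, pt_b in strata)
    return num / den

# Two hypothetical strata in which site a's rate is half of site b's:
strata = [(5, 100, 10, 100), (10, 200, 20, 200)]
print(mh_rate_ratio(strata))  # 0.5
```

Because each stratum contributes in proportion to its person-time, the summary estimate is not distorted by differing stratum sizes, which is why it suits comparisons across heterogeneous location types.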
Background: Provider-specific prescribing metrics can be used for benchmarking and feedback to reduce unnecessary antibiotic use; however, metrics must be credible. To improve credibility of a recently described risk-adjusted antibiotic prescribing metric for hospital medicine service (HMS) providers, we assessed whether providers who initially prescribed excess antibiotics continued to prescribe antibiotics excessively. Methods: We linked administration and billing data among patients at 4 acute-care hospitals (1,571 beds) to calculate days of therapy (DOT) ordered by individual hospitalists, for each patient day billed from January 2020 to June 2021, in each of 3 NHSN antibiotic groupings: broad-spectrum hospital-onset (BS-HO), broad-spectrum community-onset (BS-CO), and anti-MRSA. To incorporate repeated measures by provider, mixed models adjusted for patient-mix characteristics (eg, % of encounters with urinary tract infection) were used to calculate serial, bimonthly, provider-specific, observed-to-expected ratios (OERs). An OER of 1.25 indicates that the observed prescribing rate was 25% higher than predicted, adjusting for patient mix. We then used log-binomial generalized estimating equations to assess whether a high prescribing rate (defined as an OER ≥1.25) for an individual provider in an earlier bimonthly period was associated with a persistent high rate for that provider in the following period. Results: Overall, 975 bimonthly periods were evaluated from 136 hospitalists. Most (58%) contributed data for the entire 18-month study period. Median OERs were similar across antibiotic groupings: 0.94 (IQR, 0.65–1.28) for BS-HO antibiotic use, 0.99 (IQR, 0.73–1.24) for BS-CO antibiotic use, and 0.95 (IQR, 0.65–1.28) for anti-MRSA antibiotic use. At the individual prescriber level, roughly one-quarter of bimonthly OERs (range varied by group and hospital from 21% to 31%) were categorized as high.
At 3 of the 4 hospitals, a provider with a high OER for either BS-HO or BS-CO antibiotic use in any bimonthly period was more likely to have a high OER in the subsequent period (Fig. 1). These observed risk ratios were statistically significant for BS-HO antibiotic use at only 2 hospitals: hospital A risk ratio (RR) was 1.54 (95% CI, 1.10–2.16); hospital B RR was 1.28 (95% CI, 0.90–1.82); hospital C RR was 0.76 (95% CI, 0.39–1.48); and hospital D RR was 1.71 (95% CI, 1.09–2.68). Conclusions: Our findings suggest that hospitalists with higher-than-expected antibiotic prescribing in a 2-month period are likely to continue to have elevated prescribing rates in the following period, particularly for BS-HO antibiotics. These findings increase the credibility of using a 2-month prescribing metric for BS-HO antibiotic stewardship efforts; further work is needed to evaluate utility for other antibiotic groupings.
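As a hypothetical sketch of the metric itself (not the study's mixed-model estimation, which derived expected DOT from patient-mix-adjusted models), an OER simply divides observed by expected prescribing, and high-to-high persistence can be tallied across consecutive bimonthly periods:

```python
# Hypothetical sketch of the observed-to-expected ratio (OER) metric and a
# simple persistence tally. Expected DOT values are given directly here;
# the study estimated them from patient-mix-adjusted mixed models.

def oer(observed_dot, expected_dot):
    """An OER of 1.25 means observed prescribing was 25% above expected."""
    return observed_dot / expected_dot

def high_to_high(oers, threshold=1.25):
    """Among consecutive bimonthly pairs, return (# high periods followed
    by another high period, # high periods that have a following period)."""
    flags = [x >= threshold for x in oers]
    pairs = list(zip(flags, flags[1:]))
    followed = sum(1 for a, b in pairs if a and b)
    at_risk = sum(1 for a, _ in pairs if a)
    return followed, at_risk

print(oer(50, 40))                         # 1.25
print(high_to_high([1.3, 1.4, 1.0, 1.5]))  # (1, 2)
```

In the study this persistence was estimated as a risk ratio with generalized estimating equations to account for repeated measures per provider; the tally above only illustrates the underlying counting.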
Background: Healthcare facilities have experienced many challenges during the COVID-19 pandemic, including limited personal protective equipment (PPE) supplies. Healthcare personnel (HCP) rely on PPE, vaccines, and other infection control measures to prevent SARS-CoV-2 infections. We describe PPE concerns reported by HCP who had close contact with COVID-19 patients in the workplace and tested positive for SARS-CoV-2. Methods: The CDC collaborated with Emerging Infections Program (EIP) sites in 10 states to conduct surveillance for SARS-CoV-2 infections in HCP. EIP staff interviewed HCP with positive SARS-CoV-2 viral tests (ie, cases) to collect data on demographics, healthcare roles, exposures, PPE use, and concerns about their PPE use during COVID-19 patient care in the 14 days before the HCP’s SARS-CoV-2 positive test. PPE concerns were qualitatively coded as being related to supply (eg, low quality, shortages); use (eg, extended use, reuse, lack of fit test); or facility policy (eg, lack of guidance). We calculated and compared the percentages of cases reporting each concern type during the initial phase of the pandemic (April–May 2020), during the first US peak of daily COVID-19 cases (June–August 2020), and during the second US peak (September 2020–January 2021). We compared percentages using mid-P or Fisher exact tests (α = 0.05). Results: Among 1,998 HCP cases occurring during April 2020–January 2021 who had close contact with COVID-19 patients, 613 (30.7%) reported ≥1 PPE concern (Table 1). The percentage of cases reporting supply or use concerns was higher during the first peak period than the second peak period (supply concerns, 12.5% vs 7.5%; use concerns, 25.5% vs 18.2%). Conclusions: Although lower percentages of HCP cases overall reported PPE concerns after the first US peak, our results highlight the importance of developing capacity to produce and distribute PPE during times of increased demand.
The differences we observed among selected groups of cases may indicate that PPE access and use were more challenging for some, such as nonphysicians and nursing home HCP. These findings underscore the need to ensure that PPE is accessible and used correctly by HCP for whom use is recommended.
Background: Nursing home (NH) residents and staff were at high risk for COVID-19 early in the pandemic; several studies estimated seroprevalence of infection in NH staff to be 3-fold higher among CNAs and nurses compared to other staff. Risk mitigation added in fall 2020 included systematic testing of residents and staff (and furlough if positive) to reduce transmission risk. We estimated risks for SARS-CoV-2 infection among NH staff during the first winter surge, before widespread vaccination. Methods: Between February and May 2021, voluntary serologic testing was performed on NH staff who were seronegative for SARS-CoV-2 in late fall 2020 (during a previous serology study at 14 Georgia NHs). An exposure assessment at the second time point covered the prior 3 months of job activities, community exposures, and self-reported COVID-19 vaccination, including very recent vaccination (≤4 weeks). Risk factors for seroconversion were estimated by job type using multivariable logistic regression, accounting for interval community incidence and interval change in resident infections per bed. Results: Among 203 eligible staff, 72 (35.5%) had evidence of interval seroconversion (Fig. 1). Among 80 unvaccinated staff, interval infection was significantly higher among CNAs and nurses (aOR, 4.9; 95% CI, 1.4–20.7) than other staff, after adjusting for race, interval community incidence, and facility infections. This risk persisted but was attenuated when utilizing the full study cohort, including those with very recent vaccination (aOR, 1.8; 95% CI, 0.9–3.7). Conclusions: Midway through the first year of the pandemic, NH staff with close or common resident contact continued to be at increased risk for infection despite enhanced infection prevention efforts. Mitigation strategies, prior to vaccination, did not eliminate occupational risk for infection. Vaccine utilization is critical to eliminate occupational risk among frontline healthcare providers.
To determine the incidence of severe acute respiratory coronavirus virus 2 (SARS-CoV-2) infection among healthcare personnel (HCP) and to assess occupational risks for SARS-CoV-2 infection.
Prospective cohort of healthcare personnel (HCP) followed for 6 months from May through December 2020.
Large academic healthcare system including 4 hospitals and affiliated clinics in Atlanta, Georgia.
HCP, including those with and without direct patient-care activities, working during the coronavirus disease 2019 (COVID-19) pandemic.
Incident SARS-CoV-2 infections were determined through serologic testing for SARS-CoV-2 IgG at enrollment, at 3 months, and at 6 months. HCP completed monthly surveys regarding occupational activities. Multivariable logistic regression was used to identify occupational factors that increased the risk of SARS-CoV-2 infection.
Of the 304 evaluable HCP who were seronegative at enrollment, 26 (9%) seroconverted for SARS-CoV-2 IgG by 6 months. Overall, 219 participants (73%) self-identified as White race, 119 (40%) were nurses, and 121 (40%) worked on inpatient medical-surgical floors. In a multivariable analysis, HCP who identified as Black race were more likely to seroconvert than HCP who identified as White (odds ratio, 4.5; 95% confidence interval, 1.3–14.2). Increased risk for SARS-CoV-2 infection was not identified for any occupational activity, including spending >50% of a typical shift at a patient’s bedside, working in a COVID-19 unit, or performing or being present for aerosol-generating procedures (AGPs).
In our study cohort of HCP working in an academic healthcare system, <10% had evidence of SARS-CoV-2 infection over 6 months. No specific occupational activities were identified as increasing risk for SARS-CoV-2 infection.
To estimate prior severe acute respiratory coronavirus virus 2 (SARS-CoV-2) infection among skilled nursing facility (SNF) staff in the state of Georgia and to identify risk factors for seropositivity as of fall 2020.
Baseline survey and seroprevalence assessment of the ongoing longitudinal coronavirus disease 2019 (COVID-19) Prevention in Nursing Homes study.
The study included 14 SNFs in the state of Georgia.
In total, 792 SNF staff employed or contracted with participating SNFs were included in this study. The analysis included 749 participants with SARS-CoV-2 serostatus results who provided age, sex, and complete survey information.
We estimated unadjusted odds ratios (ORs) and 95% confidence intervals (95% CIs) for potential risk factors and SARS-CoV-2 serostatus. We estimated adjusted ORs using a logistic regression model including age, sex, community case rate, SNF resident infection rate, working at other facilities, and job role.
Staff working in high-infection SNFs were twice as likely (unadjusted OR, 2.08; 95% CI, 1.45–3.00) to be seropositive as those in low-infection SNFs. Certified nursing assistants and nurses were 3 times more likely to be seropositive than administrative, pharmacy, or nonresident care staff (certified nursing assistants: unadjusted OR, 2.93; 95% CI, 1.58–5.78; nurses: unadjusted OR, 3.08; 95% CI, 1.66–6.07). Logistic regression yielded similar adjusted ORs.
Working at high-infection SNFs was a risk factor for SARS-CoV-2 seropositivity. Even after accounting for resident infections, certified nursing assistants and nurses had a 3-fold higher risk of SARS-CoV-2 seropositivity than nonclinical staff. This knowledge can guide prioritized implementation of safer ways for caregivers to provide necessary care to SNF residents.
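The unadjusted odds ratios above come from 2×2 tables of serostatus by exposure. A minimal sketch of a Wald-type OR and 95% CI, using illustrative counts rather than study data:

```python
import math

# Illustrative sketch: unadjusted odds ratio from a 2x2 table with a Wald
# 95% CI. Counts below are hypothetical, not study data.

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = seropositive/seronegative among exposed;
    c/d = seropositive/seronegative among unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
print(round(or_, 2))  # 3.86
```

A CI excluding 1.0, as in the ratios reported above, indicates a statistically significant association at the 5% level.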
To determine the impact of an inpatient stewardship intervention targeting fluoroquinolone use on inpatient and postdischarge Clostridioides difficile infection (CDI).
We used an interrupted time series study design to evaluate the rate of hospital-onset CDI (HO-CDI), postdischarge CDI (PD-CDI) within 12 weeks, and inpatient fluoroquinolone use from 2 years prior to 1 year after a stewardship intervention.
An academic healthcare system with 4 hospitals.
All inpatients hospitalized between January 2017 and September 2020, excluding those discharged from locations caring for oncology, bone marrow transplant, or solid-organ transplant patients.
Introduction of electronic order sets designed to reduce inpatient fluoroquinolone prescribing.
Among 163,117 admissions, there were 683 cases of HO-CDI and 1,104 cases of PD-CDI. In the context of a 2% month-to-month decline starting in the preintervention period (P < .01), we observed a 21% reduction in fluoroquinolone days of therapy per 1,000 patient days after the intervention (level change, P < .05). HO-CDI rates were stable throughout the study period. In contrast, we also detected a change in the trend of PD-CDI rates from a stable monthly rate in the preintervention period to a monthly decrease of 2.5% in the postintervention period (P < .01).
Our systemwide intervention immediately reduced inpatient fluoroquinolone use but did not reduce HO-CDI. However, a downward trend in PD-CDI occurred. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts.
Background: Effective inpatient stewardship initiatives can improve antibiotic prescribing, but impact on outcomes like Clostridioides difficile infections (CDIs) is less apparent. However, the effect of inpatient stewardship efforts may extend to the postdischarge setting. We evaluated whether an intervention targeting inpatient fluoroquinolone (FQ) use in a large healthcare system reduced incidence of postdischarge CDI. Methods: In August 2019, 4 acute-care hospitals in a large healthcare system replaced standalone FQ orders with order sets containing decision support. Order sets redirected prescribers to syndrome order sets that prioritize alternative antibiotics. Monthly patient days (PDs) and antibiotic days of therapy (DOT) administered for FQs and NHSN-defined broad-spectrum hospital-onset (BS-HO) antibiotics were calculated using patient encounter data for the 23 months before and 13 months after the intervention (COVID-19 admissions occurred during the final 7 months). We evaluated hospital-onset CDI (HO-CDI) per 1,000 PD (defined as any positive test after hospital day 3) and 12-week postdischarge CDI (PDC-CDI) per 100 discharges (any positive test within the healthcare system <12 weeks after discharge). Interrupted time-series analysis using generalized estimating equation models with a negative binomial link function was conducted; a sensitivity analysis with Medicare case-mix index (CMI) adjustment was also performed to control for differences after the start of the COVID-19 pandemic. Results: Among 163,117 admissions, there were 683 HO-CDIs and 1,009 PDC-CDIs. Overall, FQ DOT per 1,000 PD decreased by 21% immediately after the intervention (level change; P < .05) and decreased at a consistent rate throughout the entire study period (−2% per month; P < .01) (Fig. 1). There was a nonsignificant 5% increase in BS-HO antibiotic use immediately after the intervention and a continued increase in use after the intervention (0.3% per month; P = .37).
HO-CDI rates were stable throughout the study period, with a nonsignificant level-change decrease of 10% after the intervention. In contrast, there was a reversal in the trend in PDC-CDI rates from a 0.4% per month increase in the preintervention period to a 3% per month decrease in the postintervention period (P < .01). Sensitivity analysis with adjustment for facility-specific CMI produced similar results but with wider confidence intervals, as did an analysis with a distinct COVID-19 time point. Conclusions: Our systemwide intervention using order sets with decision support reduced inpatient FQ use by 21%. The intervention did not significantly reduce HO-CDI but significantly decreased the incidence of CDI within 12 weeks after discharge. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts; incorporating postdischarge outcomes, such as CDI, should increasingly be considered.
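Conceptually, the level and trend changes reported from the interrupted time-series analysis correspond to a segmented regression with indicator terms at the intervention time. The study used GEE models with a negative binomial link; the sketch below is a simplified linear-scale toy on synthetic data that shows only how the four coefficients are read off:

```python
import numpy as np

# Simplified segmented-regression sketch of an interrupted time series:
# baseline level + trend, plus a level change and slope change at t0.
# Synthetic data; not the study's negative binomial GEE model.

def segmented_fit(y, t0):
    """Return (intercept, pre_slope, level_change, slope_change)."""
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)                   # 1 after intervention
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Synthetic series: slope 1 before t=5, then a level drop of 3 and a
# slope change of +0.5 at t=5.
y = [10, 11, 12, 13, 14, 12, 13.5, 15, 16.5, 18]
print(segmented_fit(y, 5))  # approximately [10, 1, -3, 0.5]
```

The `post` coefficient is the immediate level change (analogous to the 21% FQ drop) and the `(t - t0) * post` coefficient is the change in trend (analogous to the postintervention decline in PDC-CDI).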
In total, 13 facilities changed C. difficile testing to reflexive testing by enzyme immunoassay (EIA) only after a positive nucleic acid-amplification test (NAAT); the standardized infection ratio (SIR) decreased by 46% (range, −12% to −71% per hospital). Changing testing practice greatly influenced a performance metric without changing C. difficile infection prevention practice.
Among 353 healthcare personnel in a longitudinal cohort in 4 hospitals in Atlanta, Georgia (May–June 2020), 23 (6.5%) had severe acute respiratory coronavirus virus 2 (SARS-CoV-2) antibodies. Spending >50% of a typical shift at the bedside (OR, 3.4; 95% CI, 1.2–10.5) and Black race (OR, 8.4; 95% CI, 2.7–27.4) were associated with SARS-CoV-2 seropositivity.
Background: The movement of healthcare professionals (HCPs) induces an indirect contact network: touching a patient or the environment in one area, then again elsewhere, can spread healthcare-associated pathogens from 1 patient to another. Thus, understanding HCP movement is vital to calibrating mathematical models of healthcare-associated infections. Because long-term care facilities (LTCFs) are an important locus of transmission and have been understudied relative to hospitals, we developed a system for measuring contact patterns specifically within an LTCF. Methods: To measure HCP movement patterns, we used badges (credit-card–sized, programmable, battery-powered devices with wireless proximity sensors) worn by HCPs and placed in 30 locations for 3 days. Each badge broadcast a brief message every 8 seconds. When a message was received by other badges within range, the recipients recorded the time, source badge identifier, and signal strength. By fusing the data collected by all badges with a facility map, we estimated when and for how long each HCP was in any of the locations where instruments had been installed. Results: Combining the messages captured by all of our devices, we calculated the dwell time for each job type (eg, nurses, nursing assistants, physical therapists) in different locations (eg, resident rooms, dining areas, nurses’ stations, hallways). Although dwell times over all job and area types averaged ∼100 seconds, the standard deviation was large (115 seconds), and the mean of the per-job-type maximums was ∼450 seconds. For example, nursing assistants spent substantially more time in resident rooms and transitioned across rooms at a much higher rate. Overall, each distribution exhibited a power-law–like characteristic.
By aggregating the data from devices with location data extracted from the floor plan, we were able to produce an explicit trace for each individual (identified only by job type) for each day and to compute cross-table transition probabilities by area for each job type. Conclusions: We developed a portable system for measuring contact patterns in long-term care settings. Our results confirm that frequent interactions between HCPs and LTCF residents occur, but they are not uniform across job types or resident locations. The data produced by our system can be used to better calibrate mathematical models of pathogen spread in LTCFs. Moreover, our system can be easily and quickly deployed to other healthcare settings to similarly inform outbreak investigations.
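Dwell-time estimation from badge receptions can be sketched as grouping consecutive pings by location, closing an interval when the location changes or when the silence exceeds the expected ~8-second broadcast cadence. The gap threshold, location names, and data below are illustrative assumptions, not the study's actual pipeline:

```python
# Illustrative sketch: turning a badge's time-sorted reception log into
# dwell intervals. Badges broadcast roughly every 8 s, so a silence longer
# than two missed broadcasts (16 s, an assumed threshold) or a location
# change closes the current interval. Data below are synthetic.

def dwell_times(pings, gap=16):
    """pings: time-sorted (seconds, location) receptions for one badge.
    Returns [(location, dwell_seconds), ...]."""
    intervals = []
    start = prev_t = prev_loc = None
    for t, loc in pings:
        if start is None:
            start, prev_t, prev_loc = t, t, loc
        elif loc != prev_loc or t - prev_t > gap:
            intervals.append((prev_loc, prev_t - start))  # close interval
            start, prev_t, prev_loc = t, t, loc
        else:
            prev_t = t                                    # extend interval
    if start is not None:
        intervals.append((prev_loc, prev_t - start))
    return intervals

pings = [(0, "room 12"), (8, "room 12"), (16, "room 12"),
         (40, "hallway"), (48, "hallway")]
print(dwell_times(pings))  # [('room 12', 16), ('hallway', 8)]
```

Per-badge interval lists like these can then be aggregated by job type and area to produce the dwell-time distributions and transition probabilities described above.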
Disclosures: Scott Fridkin reports that his spouse receives a consulting fee from the vaccine industry.
Background: Hospitalists play a critical role in antimicrobial stewardship as the primary antibiotic prescriber for many inpatients. We sought to describe antibiotic prescribing variation among hospitalists within a healthcare system. Methods: We created a novel metric of hospitalist-specific antibiotic prescribing by linking hospitalist billing data to hospital medication administration records in 4 hospitals (two 500-bed academic hospitals [AMC1 and AMC2], one 400-bed community hospital [CH1], and one 100-bed community hospital [CH2]) from January 2016 to December 2018. We attributed dates on which a hospitalist electronically billed for a given patient as billed patient days (bPD) and mapped an antibiotic day of therapy (DOT) to a bPD. Each DOT was classified according to National Healthcare Safety Network antibiotic categories: broad-spectrum hospital-onset (BS-HO), broad-spectrum community-onset (BS-CO), anti-MRSA, and highest risk for Clostridioides difficile infection (CDI). DOT and bPD were pooled to calculate hospitalist-specific DOT per 1,000 bPD. Best-subsets regression was performed to assess model fit and generate hospital- and antibiotic category-specific models adjusting for patient-level factors (eg, age ≥65, ICD-10 codes for comorbidities and infections). The models were used to calculate predicted hospitalist-specific DOT and observed-to-expected ratios (O:E) for each antibiotic category. Kruskal-Wallis tests and pairwise Wilcoxon rank-sum tests were used to determine significant differences in median DOT per 1,000 bPD and O:E between hospitals for each antibiotic category. Results: During the study period, 116 hospitalists across 4 hospitals contributed a total of 437,303 bPD. Median DOT per 1,000 bPD varied between hospitals (BS-HO range, 46.7–84.2; BS-CO range, 63.3–100; anti-MRSA range, 48.4–65.4; CDI range, 82.0–129.4).
CH2 had a significantly higher median DOT per 1,000 bPD compared to the academic hospitals (all antibiotic categories P < .001) and CH1 (BS-HO, P = .01; anti-MRSA, P = .02) (Fig. 1A). The 4 antibiotic groups at 4 hospitals resulted in 16 models, with good model fit for CH2 (R2 > 0.55 for all models), modest model fit for AMC2 (R2 = 0.46–0.55), fair model fit for CH1 (R2 = 0.19–0.35), and poor model fit for AMC1 (R2 < 0.12 for all models). Variation in hospitalist-specific O:E was moderate (IQR, 0.9–1.1). AMC1 showed greater variation than other hospitals, but we detected no significant differences in median O:E between hospitals (all antibiotic categories P > .10) (Fig. 1B). Conclusions: Adjusting for patient-level factors reduced much of the variation in hospitalist-specific DOT per 1,000 bPD in some but not all hospitals, suggesting that unmeasured factors may drive antibiotic prescribing. This metric may represent a target for stewardship interventions, such as hospitalist-specific feedback on antibiotic prescribing practices.
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Staphylococcus aureus is the leading cause of joint infections. These infections may arise in native or prosthetic joints. Previous analysis of population-based surveillance has documented racial differences in incidence of invasive S. aureus bloodstream infections. We hypothesized that racial differences in incidence would not persist among S. aureus joint infections. Methods: We utilized data from the Georgia Emerging Infections Program (GA EIP), which conducts CDC-funded active, population-based surveillance for invasive S. aureus (iSA) infections within the 8-county area of Atlanta. Cases were defined as residents of the surveillance area with S. aureus isolated during 2016–2018 from joint fluid or tissue, and cultures within a 30-day period after the initial culture date were considered a single case. Age- and race-specific incidence were calculated using US census data; incidence rate ratios (RR) and adjusted rate ratios (aRR) were calculated using the Mantel-Haenszel method. Results: Between 2016 and 2018, 500 iSA joint infections were identified (iMRSA, 28.2% and iMSSA, 71.8%): 34.4% occurred in Black patients and 65.6% occurred in White patients. Also, 90 cases (18%) had a bloodstream infection (BSI) within 30 days of the joint infection. Incidence of iSA joint infections dropped 22% from 9.4 per 100,000 in 2016 to 7.5 per 100,000 in 2018 (RR, 0.79; 95% CI, 0.7–0.9). Adjusting for year, incidence was 40% lower among Black patients than White patients (RR, 0.6; 95% CI, 0.5–0.7); this finding was attributed to Black patients having 60% lower incidence of iMSSA joint infections compared to White patients (aRR, 0.4; 95% CI, 0.3–0.5) but similar MRSA incidence (aRR, 1.2; 95% CI, 0.8–1.6). The highest incidence was observed among White patients aged >65 years with iMSSA infections (30.2 per 100,000) (Fig. 1). Among cases with a full chart review (n = 138), surgery in the prior 90 days was uncommon (n = 42, 30.4%), and a preceding major orthopedic procedure was even rarer (n = 13, 9.4%).
Antecedent therapeutic injections and arthroscopic procedures are under investigation. Conclusions: Unlike S. aureus bacteremia, where previous analysis demonstrated higher incidence among Black patients, predominantly due to MRSA, our data demonstrate that the incidence of S. aureus joint infections is higher in White patients, predominantly due to MSSA. Investigations into differential practices regarding orthopedic illness and injury should be pursued.
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Certain nursing home (NH) resident care tasks carry a higher risk of multidrug-resistant organism (MDRO) transfer to healthcare personnel (HCP), which can result in transmission to residents if HCP fail to perform recommended infection prevention practices. However, data on HCP–resident interactions are limited and do not account for intrafacility practice variation. Understanding differences in interactions, by HCP role and unit, is important for informing MDRO prevention strategies in NHs. Methods: In 2019, we conducted serial intercept interviews; each HCP was interviewed 6–7 times for the duration of a unit’s dayshift at 20 NHs in 7 states. The next day, staff on a second unit within the facility were interviewed during the dayshift. HCP on 38 units were interviewed to identify HCP–resident care patterns. All unit staff were eligible for interviews, including certified nursing assistants (CNAs), nurses, physical or occupational therapists, physicians, midlevel practitioners, and respiratory therapists. HCP were asked to list which residents they had cared for (within resident rooms or common areas) since the prior interview. Respondents selected from 14 care tasks. We classified units into 1 of 4 types: long-term, mixed, short stay or rehabilitation, or ventilator or skilled nursing. Interactions were classified based on the risk of HCP contamination after task performance. We compared the proportions of interactions associated with each HCP role and performed clustered linear regression to determine the effect of unit type and HCP role on the number of unique task types performed per interaction. Results: Intercept interviews described 7,050 interactions and 13,843 care tasks. Except in ventilator or skilled nursing units, CNAs had the greatest proportion of care interactions (interfacility range, 50%–60%) (Fig. 1). 
In ventilator and skilled nursing units, interactions were shared evenly between CNAs and nurses (43% and 47%, respectively). On average, CNAs in ventilator and skilled nursing units performed the most unique task types (2.5 task types per interaction, Fig. 2) compared to other unit types (P < .05). Compared to CNAs, most other HCP types performed significantly fewer task types (0.6–1.4 task types per interaction, P < .001). Across all facilities, 45.6% of interactions included tasks that were higher risk for HCP contamination (eg, transferring, wound and device care, Fig. 3). Conclusions: Focusing infection prevention education efforts on CNAs may be the most efficient approach for preventing MDRO transmission within NHs because CNAs have the most HCP–resident interactions and complete more tasks per visit. Studies of HCP–resident interactions are critical to improving understanding of transmission mechanisms and to targeting MDRO prevention interventions.
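The risk classification described above (flagging an interaction as higher risk if any of its tasks carries a higher risk of HCP contamination) can be sketched minimally. The task names and sample interactions below are hypothetical; the study's actual 14-task instrument is not reproduced here.

```python
# Hypothetical higher-risk task set; illustrative only, not the study instrument.
HIGH_RISK_TASKS = {"transferring", "wound care", "device care", "bathing"}

def share_high_risk(interactions):
    """Fraction of interactions that include at least one task with a
    higher risk of HCP contamination. Each interaction is a list of tasks."""
    flagged = sum(1 for tasks in interactions if HIGH_RISK_TASKS & set(tasks))
    return flagged / len(interactions)

# Toy sample of 4 interactions; 2 include a higher-risk task.
sample = [
    ["feeding"],                  # lower risk only
    ["transferring", "feeding"],  # includes a higher-risk task
    ["wound care"],               # higher risk
    ["mobility assistance"],      # lower risk only
]
# share_high_risk(sample) -> 0.5
```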
Funding: Centers for Disease Control and Prevention (grant no. U01CK000555-01-00)
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Epidemiological studies have utilized administrative discharge diagnosis codes to identify methicillin-resistant and methicillin-sensitive Staphylococcus aureus (MRSA and MSSA) infections and trends, despite debate regarding the accuracy of using codes for this purpose. We assessed the sensitivity and positive predictive value (PPV) of MRSA- and MSSA-specific diagnosis codes as well as the trends, characteristics, and outcomes of S. aureus hospitalizations by method of identification. Methods: Clinical microbiology results and discharge data from geographically diverse US hospitals participating in the Premier Healthcare Database during 2012–2017 were used to identify monthly rates of MRSA and MSSA. Adult inpatients (aged ≥18 years) with a positive MRSA or MSSA clinical culture and/or an MRSA- or MSSA-specific International Classification of Diseases, Ninth/Tenth Revision, Clinical Modification (ICD-9/10-CM) diagnosis code were included as S. aureus hospitalizations. Septicemia was defined as a positive blood culture or an MRSA or MSSA septicemia code. Sensitivity and PPV of codes were calculated for hospitalizations whose admission status was not listed as transfer; a positive clinical culture was considered true infection. Negative binomial regression models measured trends in rates of MRSA and MSSA per 1,000 hospital discharges. Results: We identified 168,634 MRSA and 148,776 MSSA hospitalizations in 256 hospitals; 17% of MRSA and 21% of MSSA hospitalizations involved septicemia. Less than half of all S. aureus hospitalizations (49% MRSA, 46% MSSA) and S. aureus septicemia hospitalizations (37% MRSA, 38% MSSA) had both a positive culture and a diagnosis code (Fig. 1). The sensitivity of MRSA codes in identifying positive cultures was 61% overall and 56% for septicemia; PPV was 62% overall and 53% for septicemia. MSSA codes had a sensitivity of 49% in identifying MSSA cultures and 52% for MSSA septicemia; PPV was 69% overall and 62% for septicemia. 
Despite low sensitivity, MRSA trends were similar for cultures and codes, whereas MSSA trends diverged (Fig. 2). For hospitalizations with septicemia, mortality was highest among those with a blood culture only (31.3%) compared to hospitalizations with both a septicemia code and a blood culture (16.6%) or a septicemia code only (14.7%). Conclusions: ICD diagnosis code sensitivity and PPV for identifying infections were consistently poor in recent years. Less than half of hospitalizations had concordant microbiology laboratory results and diagnosis codes. Rate and trend estimates for MSSA differed by method of identification. Using diagnosis codes to identify S. aureus infections may not be appropriate for descriptive epidemiology or for assessing trends due to significant misclassification.
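The sensitivity and PPV definitions used above (positive clinical culture as truth, ICD code as the test) amount to simple 2×2 arithmetic. A minimal sketch, with a toy 10-hospitalization cohort rather than Premier data:

```python
def sensitivity_ppv(culture_pos, code_pos):
    """Treat a positive clinical culture as truth and evaluate the ICD code.
    culture_pos / code_pos are parallel 0/1 lists, one entry per hospitalization."""
    tp = sum(c and d for c, d in zip(culture_pos, code_pos))
    sens = tp / sum(culture_pos)  # coded AND cultured / all cultured
    ppv = tp / sum(code_pos)      # coded AND cultured / all coded
    return sens, ppv

# Toy cohort: 5 culture-positive, 5 code-positive, 3 concordant.
cultures = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
codes    = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
sens, ppv = sensitivity_ppv(cultures, codes)  # sens = 0.6, ppv = 0.6
```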
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: The NHSN methods for central-line–associated bloodstream infection (CLABSI) surveillance do not account for the additive CLABSI risk of concurrent central lines. Past studies were small and only modestly risk adjusted but quantified the risk to be ~2-fold. If the attributable risk is this high, facilities that serve high-acuity patients with medically indicated concurrent central-line use may disproportionately incur CMS payment penalties for having high CLABSI rates. We aimed to build evidence, through analysis of a multihospital CLABSI experience with improved risk adjustment, to influence NHSN CLABSI protocols to account for risks attributed to concurrent central lines. Methods: In a retrospective cohort of adult patients at 4 hospitals (range, 110–733 beds) from 2012 to 2017, we linked central-line data to patient encounter data (age, comorbidities, total parenteral nutrition, chemotherapy, CLABSI). Analysis was limited to patients with >2 central-line days, with either a single central line or concurrence of no more than 2 central lines whose insertion and removal dates overlapped by >1 day. Propensity-score matching for likelihood of concurrence and conditional logistic regression modeling were used to estimate the risk of CLABSI attributable to concurrence of >1 day. To evaluate time to CLABSI in Cox proportional hazards regression, we also analyzed patients as unique central-line episodes: low risk (ie, ports, dialysis central lines, or PICCs) or high risk (ie, temporary or nontunneled) and single versus concurrent. Results: In total, 64,575 central lines were used in 50,254 encounters. Among these patients, 517 developed a CLABSI: 438 (85%) with a single central line and 74 (15%) with concurrence. Moreover, 4,657 (9%) patients had concurrence (range, 6%–14% by hospital); of these, 74 (2%) had a CLABSI, compared to 71 of 7,864 propensity-matched controls (1%). Patients with concurrence had a median of 17 NHSN central-line days and 21 total central-line days. 
In multivariate modeling, patients with more concurrence (>2/3 of central-line days concurrent) had a higher risk of CLABSI (adjusted risk ratio, 1.62; 95% CI, 1.1–2.3) compared to controls. In survival analysis, 14,610 concurrent central-line episodes were compared to 31,126 single low-risk central-line episodes; adjusting for comorbidity, total parenteral nutrition, and chemotherapy, the daily excess risk of CLABSI attributable to the concurrent central line was ~80% (hazard ratio, 1.78 for 2 high-risk or 2 low-risk central lines; hazard ratio, 1.80 for a mix of high- and low-risk central lines) (Fig. 1). Notably, the hazard ratio attributed to a single high-risk line compared to a low-risk line was 1.44 (95% CI, 1.13–1.84). Conclusions: Because a concurrent central line nearly doubles the risk of CLABSI compared to a single low-risk line, the CDC should modify NHSN methodology to better account for this risk.
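The propensity-matching step described in the Methods can be sketched as greedy 1:1 nearest-neighbor matching without replacement. This is an illustration only: the study's actual matching algorithm and caliper are not specified in the abstract, and the propensity scores below (probability of concurrent central-line use) are hypothetical and assumed to be precomputed from a logistic model.

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor match on propensity score, without
    replacement; candidate pairs farther apart than the caliper are discarded.
    Returns (treated_index, control_index) pairs."""
    available = dict(enumerate(control_ps))
    pairs = []
    for i, ps in enumerate(treated_ps):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - ps))
        if abs(available[j] - ps) <= caliper:
            pairs.append((i, j))
            del available[j]  # each control is used at most once
    return pairs

# Hypothetical scores; the third treated patient (0.90) has no close control.
treated = [0.30, 0.62, 0.90]
controls = [0.28, 0.33, 0.60, 0.10]
# greedy_match(treated, controls) -> [(0, 0), (1, 2)]
```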
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Due to reliance on hospital discharge data for case identification, the burden of noninvasive and community-acquired S. aureus disease is often underestimated. To determine the full burden of S. aureus infections, we utilized population-based surveillance in a large urban county. Methods: The Georgia Emerging Infections Program (GA EIP) conducted CDC-funded, population-based surveillance by finding cases of S. aureus infection in 8 counties around Atlanta in 2017. Cases were residents with S. aureus isolated either from a normally sterile site within a 30-day period (invasive cases) or from another site within a 14-day period (noninvasive cases). Medical records (all invasive cases and a 1:4 sample of noninvasive cases) among Fulton County residents were abstracted for clinical, treatment, and outcome data. Treated cases were mapped to standard therapeutic site codes. Noninvasive specimens were reviewed and attributed to an invasive case if both occurred within 2 weeks. Incidence rates were calculated using the 2017 census population and a weight-adjusted cohort to account for sampling. Results: In total, 1,186 noninvasive (1:4 sample) and 529 invasive cases of S. aureus in Fulton County were reviewed. Only 35 of 1,186 (2.9%) noninvasive cases were temporally linked to invasive cases, resulting in 5,133 cases after extrapolation (529 invasive, 4,604 noninvasive). All invasive cases and 3,776 of 4,604 noninvasive cases (82%) were treated (4,305 total). Treatment was most frequent for skin (90%) and abscess (97%) sites, least frequent for urine (62%) and sputum (60%), and consisted of antibacterial agents alone (65%) or combined with drainage procedures (35%). Overall, 41% of all cases were hospitalized, 12% required ICU admission, and 2.7% died, almost exclusively with bloodstream and pulmonary infections. 
Attribution of noninvasive infection was most often outside healthcare settings (87%); only 341 (7.9%) were hospital-onset cases. However, 34% of cases had healthcare exposure in the preceding year, most often inpatient hospitalization (75%) or recent surgery (35%). Estimated countywide incidence was 414 per 100,000 (130 for MRSA and 284 for MSSA); the incidence of invasive infection was 50 per 100,000. Among treated cases, 57% were SSTIs, and the proportion of cases caused by MRSA was ~33% but varied slightly by therapeutic site (Fig. 1). Conclusions: The incidence of treated S. aureus infection in our large urban county is estimated to be 414 per 100,000 persons, which exceeds previously estimated rates based on hospital discharge data. Only 12% of treated infections were invasive, and <1 in 10 were hospital onset. Also, two-thirds of treated disease was MSSA, and most cases were SSTIs.
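The weight-adjusted extrapolation behind the case counts above can be reproduced directly from the reported numbers: the 35 sampled noninvasive cases linked to an invasive case are removed before scaling the 1:4 sample by a weight of 4. A minimal sketch:

```python
def extrapolate_cases(invasive, noninv_sampled, linked_sampled, weight=4):
    """Scale a 1:4 sample of noninvasive cases to the full population,
    first removing noninvasive cultures attributed to an invasive case."""
    noninvasive = (noninv_sampled - linked_sampled) * weight
    return invasive + noninvasive, noninvasive

total, noninvasive = extrapolate_cases(529, 1186, 35)
# noninvasive -> 4604 and total -> 5133, matching the abstract's counts
```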
Funding: Pfizer.
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Historically, metronidazole was first-line therapy for Clostridioides difficile infection (CDI). In February 2018, the Infectious Diseases Society of America (IDSA) and the Society for Healthcare Epidemiology of America (SHEA) updated the clinical practice guidelines for CDI. The new guidelines recommend oral vancomycin or fidaxomicin for treatment of an initial episode of CDI in adults. We examined changes in the treatment of CDI during 2018 across all types of healthcare settings in metropolitan Atlanta. Methods: Cases were identified through the Georgia Emerging Infections Program (funded by the Centers for Disease Control and Prevention), which conducts active population-based surveillance in an 8-county area including Atlanta, Georgia (population, 4,126,399). An incident case was a resident of the catchment area with a positive C. difficile toxin test and no additional positive test in the previous 8 weeks. Recurrent CDI was defined as >1 incident CDI episode in 1 year. Clinical and treatment data were abstracted for a random 33% sample of adult (>17 years) cases. Definitive treatment categories were defined by the single antibiotic agent, metronidazole or vancomycin, used to complete a course. We examined the effect of time of infection, location of treatment, and number of CDI episodes on the use of metronidazole only. Results: We analyzed treatment information for 831 sampled adult cases. Overall, cases were treated at 29 hospitals (568 cases), 4 nursing homes (6 cases), and 101 outpatient providers (257 cases). The median age was 60 years (IQR, 34–86), and 111 (13.4%) had recurrent infection. Moreover, ~28% of first-incident CDI episodes, 8% of second episodes, and 6% of third episodes were treated with metronidazole only. Compared to facility-based providers, outpatient providers were more likely to treat initial CDI episodes with metronidazole only (44% vs 21%; relative risk [RR], 2.1; 95% CI, 1.7–2.7). 
Treatment changed over time, from 56% metronidazole only in January to 10% in December (Fig. 1). First-incident cases in the first quarter of 2018 were more likely to be treated with metronidazole only than those in the fourth quarter (RR, 2.76; 95% CI, 1.91–3.97). Conclusions: Preferential use of vancomycin for initial CDI episodes increased throughout 2018 but was not universal. CDI episodes treated in the outpatient setting and nonrecurrent episodes were more likely to be treated with metronidazole only. Additional studies on persistent barriers to prescribing oral vancomycin, such as cost, are warranted.
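The relative risk comparing outpatient and facility-based prescribing above is a ratio of two proportions with a log-scale Wald interval. The denominators below are hypothetical (the abstract does not report them), chosen only so the proportions equal the reported 44% and 21%:

```python
import math

def relative_risk(a, n1, b, n2):
    """RR of an outcome between two groups (a/n1 vs b/n2) with a 95% Wald CI,
    using SE[ln RR] = sqrt(1/a - 1/n1 + 1/b - 1/n2)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 88/200 (44%) outpatient vs 105/500 (21%) facility-based.
rr, lo, hi = relative_risk(88, 200, 105, 500)
# rr is ~2.1, consistent with the reported RR
```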
Disclosures: Scott Fridkin reports that his spouse receives a consulting fee from the vaccine industry.