We evaluated the impact of test-order frequency per diarrheal episode on Clostridioides difficile infection (CDI) incidence estimates in a sample of hospitals at 2 CDC Emerging Infections Program (EIP) sites.
Inpatients at 5 acute-care hospitals in Rochester, New York, and Atlanta, Georgia, during two 10-workday periods in 2020 and 2021.
We calculated diarrhea incidence, testing frequency, and CDI positivity (defined as any positive NAAT result) across strata. Predictors of CDI testing and positivity were assessed using modified Poisson regression. Population estimates of incidence using modified Emerging Infections Program methodology were compared between sites using the Mantel-Haenszel summary rate ratio.
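As a rough illustration of the two methods named above, the sketch below fits a modified Poisson model (a Poisson GLM with robust standard errors, a common way to estimate rate ratios for a binary outcome) and computes a Mantel-Haenszel summary rate ratio from stratified person-time data. The variable names and simulated data are assumptions for illustration, not the study's actual analysis.

```python
# A minimal sketch, not the study's code: modified Poisson regression and
# a Mantel-Haenszel summary rate ratio. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "tested": rng.integers(0, 2, n),          # 1 = CDI test ordered (outcome)
    "laxative": rng.integers(0, 2, n),        # laxative use
    "oncology": rng.integers(0, 2, n),        # oncology location type
    "site_rochester": rng.integers(0, 2, n),  # 1 = Rochester, 0 = Atlanta
})

# Modified Poisson: Poisson family + robust (sandwich) covariance yields
# rate ratios for a binary outcome without log-binomial convergence issues.
X = sm.add_constant(df[["laxative", "oncology", "site_rochester"]])
fit = sm.GLM(df["tested"], X, family=sm.families.Poisson()).fit(cov_type="HC1")
print(pd.concat([np.exp(fit.params).rename("RR"),
                 np.exp(fit.conf_int())], axis=1))

def mh_rate_ratio(cases1, pt1, cases0, pt0):
    """Mantel-Haenszel summary rate ratio across strata (person-time data).

    cases1/pt1: exposed cases and person-time; cases0/pt0: unexposed.
    """
    cases1, pt1 = np.asarray(cases1, float), np.asarray(pt1, float)
    cases0, pt0 = np.asarray(cases0, float), np.asarray(pt0, float)
    t = pt1 + pt0
    return (cases1 * pt0 / t).sum() / (cases0 * pt1 / t).sum()

# Hypothetical strata (eg, location types) with cases and patient-days:
print(mh_rate_ratio([12, 8], [4000, 2500], [20, 15], [4200, 2600]))
```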
Surveillance of 38,365 patient days identified 860 diarrhea cases from 107 patient-care units mapped to 26 unique NHSN-defined location types. Incidence of diarrhea was 22.4 per 1,000 patient days (medians, 25.8 for Rochester and 16.2 for Atlanta; P < .01). Similar proportions of diarrhea cases were hospital onset (66%) at both sites. Overall, 35% of patients with diarrhea were tested for CDI, but this differed by site: 21% in Rochester and 49% in Atlanta (P < .01). Regression models identified location type (ie, oncology or critical care) and laxative use as predictors of CDI test ordering. Adjusting for these factors, CDI testing was 49% less likely in Rochester than Atlanta (adjusted rate ratio, 0.51; 95% confidence interval [CI], 0.40–0.63). Population estimates in Rochester had a 38% lower incidence of CDI than Atlanta (summary rate ratio, 0.62; 95% CI, 0.54–0.71).
Even after accounting for patient-specific factors that influence CDI test ordering, differences in testing practices between sites remained and likely contribute to regional differences in surveillance estimates.
To determine the incidence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among healthcare personnel (HCP) and to assess occupational risks for SARS-CoV-2 infection.
Prospective cohort of healthcare personnel (HCP) followed for 6 months from May through December 2020.
Large academic healthcare system including 4 hospitals and affiliated clinics in Atlanta, Georgia.
HCP, including those with and without direct patient-care activities, working during the coronavirus disease 2019 (COVID-19) pandemic.
Incident SARS-CoV-2 infections were determined through serologic testing for SARS-CoV-2 IgG at enrollment, at 3 months, and at 6 months. HCP completed monthly surveys regarding occupational activities. Multivariable logistic regression was used to identify occupational factors that increased the risk of SARS-CoV-2 infection.
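The sketch below shows the general shape of such a multivariable logistic model in Python with statsmodels; the covariate names and simulated data are illustrative assumptions, not the study's variables or results.

```python
# A minimal sketch of a multivariable logistic model for seroconversion.
# Covariates and data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 304
df = pd.DataFrame({
    "seroconverted": rng.integers(0, 2, n),
    "bedside_gt50pct": rng.integers(0, 2, n),  # >50% of shift at bedside
    "covid_unit": rng.integers(0, 2, n),       # worked in a COVID-19 unit
    "agp_exposure": rng.integers(0, 2, n),     # present for AGPs
})
X = sm.add_constant(df.drop(columns="seroconverted"))
fit = sm.Logit(df["seroconverted"], X).fit(disp=False)
# exp(coef) gives odds ratios; exp of the interval bounds gives 95% CIs
print(pd.concat([np.exp(fit.params).rename("OR"),
                 np.exp(fit.conf_int())], axis=1))
```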
Of the 304 evaluable HCP who were seronegative at enrollment, 26 (9%) seroconverted for SARS-CoV-2 IgG by 6 months. Overall, 219 participants (73%) self-identified as White race, 119 (40%) were nurses, and 121 (40%) worked on inpatient medical-surgical floors. In a multivariable analysis, HCP who identified as Black race were more likely to seroconvert than HCP who identified as White (odds ratio, 4.5; 95% confidence interval, 1.3–14.2). Increased risk for SARS-CoV-2 infection was not identified for any occupational activity, including spending >50% of a typical shift at a patient’s bedside, working in a COVID-19 unit, or performing or being present for aerosol-generating procedures (AGPs).
In our study cohort of HCP working in an academic healthcare system, <10% had evidence of SARS-CoV-2 infection over 6 months. No specific occupational activities were identified as increasing risk for SARS-CoV-2 infection.
To estimate prior severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among skilled nursing facility (SNF) staff in the state of Georgia and to identify risk factors for seropositivity as of fall 2020.
Baseline survey and seroprevalence assessment of the ongoing longitudinal coronavirus disease 2019 (COVID-19) Prevention in Nursing Homes study.
The study included 14 SNFs in the state of Georgia.
In total, 792 SNF staff employed or contracted with participating SNFs were included in this study. The analysis included 749 participants with SARS-CoV-2 serostatus results who provided age, sex, and complete survey information.
We estimated unadjusted odds ratios (ORs) and 95% confidence intervals (95% CIs) for potential risk factors and SARS-CoV-2 serostatus. We estimated adjusted ORs using a logistic regression model including age, sex, community case rate, SNF resident infection rate, working at other facilities, and job role.
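For the unadjusted estimates, an odds ratio and its Woolf (log-based) 95% CI can be computed directly from a 2x2 table, as in this minimal sketch; the counts shown are hypothetical, not the study's data.

```python
# Minimal sketch: unadjusted odds ratio with a Woolf (log-based) 95% CI
# from a 2x2 table. The example counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed seropositive/seronegative; c, d = unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: staff at high- vs low-infection SNFs by serostatus
print(odds_ratio_ci(120, 180, 90, 280))
```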
Staff working in high-infection SNFs were twice as likely (unadjusted OR, 2.08; 95% CI, 1.45–3.00) to be seropositive as those in low-infection SNFs. Certified nursing assistants (unadjusted OR, 2.93; 95% CI, 1.58–5.78) and nurses (unadjusted OR, 3.08; 95% CI, 1.66–6.07) were roughly 3 times as likely to be seropositive as administrative, pharmacy, or nonresident care staff. Logistic regression yielded similar adjusted ORs.
Working at high-infection SNFs was a risk factor for SARS-CoV-2 seropositivity. Even after accounting for resident infections, certified nursing assistants and nurses had a 3-fold higher risk of SARS-CoV-2 seropositivity than nonclinical staff. This knowledge can guide prioritized implementation of safer ways for caregivers to provide necessary care to SNF residents.
To determine the impact of an inpatient stewardship intervention targeting fluoroquinolone use on inpatient and postdischarge Clostridioides difficile infection (CDI).
We used an interrupted time series study design to evaluate the rate of hospital-onset CDI (HO-CDI), postdischarge CDI (PD-CDI) within 12 weeks, and inpatient fluoroquinolone use from 2 years prior to 1 year after a stewardship intervention.
An academic healthcare system with 4 hospitals.
All inpatients hospitalized between January 2017 and September 2020, excluding those discharged from locations caring for oncology, bone marrow transplant, or solid-organ transplant patients.
Introduction of electronic order sets designed to reduce inpatient fluoroquinolone prescribing.
Among 163,117 admissions, there were 683 cases of HO-CDI and 1,104 cases of PD-CDI. In the context of a 2% month-to-month decline starting in the preintervention period (P < .01), we observed a 21% reduction in fluoroquinolone days of therapy per 1,000 patient days after the intervention (level change, P < .05). HO-CDI rates were stable throughout the study period. In contrast, we also detected a change in the trend of PD-CDI rates from a stable monthly rate in the preintervention period to a monthly decrease of 2.5% in the postintervention period (P < .01).
Our systemwide intervention reduced inpatient fluoroquinolone use immediately, but not HO-CDI. However, a downward trend in PD-CDI occurred. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts.
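A minimal sketch of this kind of interrupted time-series (segmented) regression follows, using a negative binomial GLM with patient-days as the exposure. The intervention month, column names, and simulated counts are assumptions for illustration, not the study's data.

```python
# A minimal sketch of segmented (interrupted time-series) regression of
# monthly CDI counts: negative binomial GLM with patient-days as exposure.
# The intervention month and all data are simulated assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = np.arange(45)                        # eg, Jan 2017 - Sep 2020
k = 32                                        # assumed intervention month
df = pd.DataFrame({
    "time": months,                           # underlying secular trend
    "post": (months >= k).astype(int),        # immediate level change
    "time_post": np.maximum(0, months - k),   # change in slope after k
    "cdi": rng.poisson(15, months.size),
    "patient_days": rng.integers(3000, 4000, months.size),
})
X = sm.add_constant(df[["time", "post", "time_post"]])
fit = sm.GLM(df["cdi"], X, family=sm.families.NegativeBinomial(alpha=0.1),
             exposure=df["patient_days"]).fit()
# exp(coef): 'post' is the level-change rate ratio; 'time_post' is the
# postintervention change in the monthly trend.
print(np.exp(fit.params))
```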
Background: Effective inpatient stewardship initiatives can improve antibiotic prescribing, but their impact on outcomes like Clostridioides difficile infections (CDIs) is less apparent. However, the effect of inpatient stewardship efforts may extend to the postdischarge setting. We evaluated whether an intervention targeting inpatient fluoroquinolone (FQ) use in a large healthcare system reduced the incidence of postdischarge CDI.

Methods: In August 2019, 4 acute-care hospitals in a large healthcare system replaced standalone FQ orders with order sets containing decision support. Order sets redirected prescribers to syndrome order sets that prioritize alternative antibiotics. Monthly patient days (PDs) and antibiotic days of therapy (DOT) administered for FQs and NHSN-defined broad-spectrum hospital-onset (BS-HO) antibiotics were calculated using patient encounter data for the 23 months before and 13 months after the intervention (COVID-19 admissions occurred during the final 7 months). We evaluated hospital-onset CDI (HO-CDI) per 1,000 PD (defined as any positive test after hospital day 3) and 12-week postdischarge CDI (PDC-CDI) per 100 discharges (any positive test within the healthcare system <12 weeks after discharge). Interrupted time-series analysis using generalized estimating equation models with a negative binomial link function was conducted; a sensitivity analysis with Medicare case-mix index (CMI) adjustment was also performed to control for differences after the start of the COVID-19 pandemic.

Results: Among 163,117 admissions, there were 683 HO-CDIs and 1,009 PDC-CDIs. Overall, FQ DOT per 1,000 PD decreased by 21% immediately after the intervention (level change; P < .05) and decreased at a consistent rate throughout the entire study period (−2% per month; P < .01) (Fig. 1). There was a nonsignificant 5% increase in BS-HO antibiotic use immediately after the intervention and a continued increase in use thereafter (0.3% per month; P = .37). HO-CDI rates were stable throughout the study period, with a nonsignificant level-change decrease of 10% after the intervention. In contrast, there was a reversal in the trend in PDC-CDI rates from a 0.4% per month increase in the preintervention period to a 3% per month decrease in the postintervention period (P < .01). Sensitivity analysis with adjustment for facility-specific CMI produced similar results but with wider confidence intervals, as did an analysis with a distinct COVID-19 time point.

Conclusion: Our systemwide intervention using order sets with decision support reduced inpatient FQ use by 21%. The intervention did not significantly reduce HO-CDI but significantly decreased the incidence of CDI within 12 weeks after discharge. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts, and postdischarge outcomes, such as CDI, should increasingly be incorporated.
In total, 13 facilities changed C. difficile testing to reflexive testing by enzyme immunoassay (EIA) only after a positive nucleic acid amplification test (NAAT); the standardized infection ratio (SIR) decreased by 46% (range, −12% to −71% per hospital). Changing testing practice greatly influenced a performance metric without changing C. difficile infection prevention practice.
Among 353 healthcare personnel in a longitudinal cohort in 4 hospitals in Atlanta, Georgia (May–June 2020), 23 (6.5%) had severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies. Spending >50% of a typical shift at the bedside (OR, 3.4; 95% CI, 1.2–10.5) and Black race (OR, 8.4; 95% CI, 2.7–27.4) were associated with SARS-CoV-2 seropositivity.
Background: Hospitalists play a critical role in antimicrobial stewardship as the primary antibiotic prescribers for many inpatients. We sought to describe antibiotic prescribing variation among hospitalists within a healthcare system.

Methods: We created a novel metric of hospitalist-specific antibiotic prescribing by linking hospitalist billing data to hospital medication administration records in 4 hospitals (two 500-bed academic hospitals [AMC1 and AMC2], one 400-bed community hospital [CH1], and one 100-bed community hospital [CH2]) from January 2016 to December 2018. We attributed dates on which a hospitalist electronically billed for a given patient as billed patient days (bPD) and mapped each antibiotic day of therapy (DOT) to a bPD. Each DOT was classified according to National Healthcare Safety Network antibiotic categories: broad-spectrum hospital-onset (BS-HO), broad-spectrum community-onset (BS-CO), anti-MRSA, and highest risk for Clostridioides difficile infection (CDI). DOT and bPD were pooled to calculate hospitalist-specific DOT per 1,000 bPD. Best subsets regression was performed to assess model fit and generate hospital- and antibiotic category-specific models adjusting for patient-level factors (eg, age ≥65, ICD-10 codes for comorbidities and infections). The models were used to calculate predicted hospitalist-specific DOT and observed-to-expected ratios (O:E) for each antibiotic category. Kruskal-Wallis tests and pairwise Wilcoxon rank-sum tests were used to determine significant differences in median DOT per 1,000 bPD and O:E between hospitals for each antibiotic category.

Results: During the study period, 116 hospitalists across 4 hospitals contributed a total of 437,303 bPD. Median DOT per 1,000 bPD varied between hospitals (BS-HO range, 46.7–84.2; BS-CO range, 63.3–100; anti-MRSA range, 48.4–65.4; CDI range, 82.0–129.4). CH2 had a significantly higher median DOT per 1,000 bPD compared to the academic hospitals (all antibiotic categories P < .001) and CH1 (BS-HO, P = .01; anti-MRSA, P = .02) (Fig. 1A). The 4 antibiotic groups at 4 hospitals resulted in 16 models, with good model fit for CH2 (R2 > 0.55 for all models), modest model fit for AMC2 (R2 = 0.46–0.55), fair model fit for CH1 (R2 = 0.19–0.35), and poor model fit for AMC1 (R2 < 0.12 for all models). Variation in hospitalist-specific O:E was moderate (IQR, 0.9–1.1). AMC1 showed greater variation than other hospitals, but we detected no significant differences in median O:E between hospitals (all antibiotic categories P > .10) (Fig. 1B).

Conclusions: Adjusting for patient-level factors reduced much of the variation in hospitalist-specific DOT per 1,000 bPD in some but not all hospitals, suggesting that unmeasured factors may drive antibiotic prescribing. This metric may represent a target for stewardship interventions, such as hospitalist-specific feedback on antibiotic prescribing practices.
Disclosures: Scott Fridkin, consulting fee - vaccine industry (various) (spouse)
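A minimal sketch of the metric's arithmetic follows: DOT per 1,000 bPD, an O:E ratio against model-predicted DOT, and a Kruskal-Wallis comparison across hospitals. All names and data are simulated assumptions, not the study's records.

```python
# Minimal sketch: hospitalist-specific DOT per 1,000 bPD, O:E ratios,
# and a Kruskal-Wallis test across hospitals. All data are simulated.
import numpy as np
import pandas as pd
from scipy.stats import kruskal

rng = np.random.default_rng(3)
n = 116
df = pd.DataFrame({
    "hospital": rng.choice(["AMC1", "AMC2", "CH1", "CH2"], n),
    "dot": rng.poisson(250, n),             # observed antibiotic DOT
    "bpd": rng.integers(2500, 5000, n),     # billed patient days
})
df["expected_dot"] = df["dot"] * rng.normal(1.0, 0.1, n)  # model-predicted
df["dot_per_1000_bpd"] = 1000 * df["dot"] / df["bpd"]
df["o_to_e"] = df["dot"] / df["expected_dot"]

groups = [g["dot_per_1000_bpd"].to_numpy()
          for _, g in df.groupby("hospital")]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, P = {p:.3f}")
```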
Fluoroquinolones (FQs) and extended-spectrum cephalosporins (ESCs) are associated with higher risk of Clostridioides difficile infection (CDI). Decreasing the unnecessary use of FQs and ESCs is a goal of antimicrobial stewardship. Understanding how prescribers perceive the risks and benefits of FQs and ESCs is needed.
We conducted interviews with clinicians from 4 hospitals. Interviews elicited respondent perceptions about the risk of ESCs, FQs, and CDI. Interviews were audio recorded, transcribed, and analyzed using a flexible coding approach.
Interviews were conducted with 64 respondents (38 physicians, 7 nurses, 6 advanced practice providers, and 13 pharmacists). ESCs and FQs were perceived to have many benefits, including infrequent dosing, breadth of coverage, and greater patient adherence after hospital discharge. Prescribers stated that it was easy to make decisions about these drugs, so they were especially appealing to use in the context of time pressures. They described having difficulty discontinuing these drugs when prescribed by others due to inertia and fear. Prescribers were skeptical about targeting specific drugs as a stewardship approach and felt that the risk of a negative outcome from undertreatment of a suspected bacterial infection was a higher priority than the prevention of CDI.
Prescribers in this study perceived many advantages to using ESCs and FQs, especially under conditions of time pressure and uncertainty. In making decisions about these drugs, prescribers balance risk and benefit, and they believed that the risk of CDI was acceptable compared with the risk of undertreatment.
To determine the effect of an electronic medical record (EMR) nudge at reducing total and inappropriate orders testing for hospital-onset Clostridioides difficile infection (HO-CDI).
An interrupted time series analysis of HO-CDI orders 2 years before and 2 years after the implementation of an EMR intervention designed to reduce inappropriate HO-CDI testing. Orders for C. difficile testing were considered inappropriate if the patient had received a laxative or stool softener in the previous 24 hours.
Four hospitals in an academic healthcare network.
All patients with a C. difficile order after hospital day 3.
Orders for C. difficile testing in patients administered a laxative or stool softener within the previous 24 hours triggered an EMR alert defaulting to cancellation of the order (“nudge”).
Of the 17,694 HO-CDI orders, 7% were inappropriate (8% preintervention vs 6% postintervention; P < .001). Monthly HO-CDI orders decreased by 21% postintervention (level-change rate ratio [RR], 0.79; 95% confidence interval [CI], 0.73–0.86), and the rate continued to decrease (postintervention trend-change RR, 0.99; 95% CI, 0.98–1.00). The intervention was not associated with a level change in inappropriate HO-CDI orders (RR, 0.80; 95% CI, 0.61–1.05), but the postintervention inappropriate order rate decreased over time (RR, 0.95; 95% CI, 0.93–0.97).
An EMR nudge to minimize inappropriate ordering for C. difficile was effective at reducing HO-CDI orders, and likely contributed to decreasing the inappropriate HO-CDI order rate after the intervention.
To determine which nursing home facility characteristics are best suited for aggregating antibiotic susceptibility testing results across nursing homes to produce a useful annual antibiogram that nursing homes can use in their antimicrobial stewardship programs.
Derivation cohort study.
Centers for Medicare and Medicaid Services (CMS)-certified skilled nursing facilities in Georgia (N = 231).
All residents of eligible facilities submitting urine culture specimens for microbiologic testing at a regional referral laboratory.
Crude and adjusted metrics of antibiotic resistance prevalence (percent of isolates testing susceptible) for 5 bacterial species commonly recovered from urine specimens were calculated using mixed linear models to determine which facility characteristics were predictive of antibiotic susceptibility.
In a single year, most facilities had an insufficient number of isolates tested to create facility-specific antibiograms: 49% of facilities had sufficient Escherichia coli isolates tested, but only about 1 in 10 had sufficient isolates of Klebsiella pneumoniae, Proteus mirabilis, Enterococcus faecalis, or Pseudomonas aeruginosa. After accounting for the antibiotic tested and patient age, the facility characteristics predictive of susceptibility were as follows: for E. coli, region, year, and average length of stay; for K. pneumoniae, region and bed size; for P. mirabilis, region; and for E. faecalis and P. aeruginosa, no facility parameter remained in the model.
Nursing homes often have insufficient data to create facility-specific antibiograms; aggregating data across nursing homes in a region is a statistically sound approach to overcoming data shortages in nursing home stewardship programs.
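As an illustration of the aggregation step, the sketch below pools urine-isolate results across facilities within a region and flags combinations meeting a minimum denominator. The ≥30-isolate threshold reflects commonly cited CLSI M39 antibiogram guidance, and the table layout is an assumption, not the study's data structure.

```python
# Minimal sketch: pooling urine-isolate susceptibility results across
# facilities within a region to reach a workable antibiogram denominator.
import pandas as pd

isolates = pd.DataFrame({
    "region": ["A", "A", "A", "B", "B"],
    "facility": ["f1", "f2", "f3", "f4", "f5"],
    "species": ["E. coli"] * 5,
    "antibiotic": ["ciprofloxacin"] * 5,
    "n_tested": [12, 9, 14, 8, 7],
    "n_susceptible": [9, 6, 10, 5, 5],
})

pooled = (isolates.groupby(["region", "species", "antibiotic"])
          [["n_tested", "n_susceptible"]].sum())
pooled["pct_susceptible"] = 100 * pooled["n_susceptible"] / pooled["n_tested"]
pooled["reportable"] = pooled["n_tested"] >= 30   # CLSI M39-style minimum
print(pooled)
```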
We utilized publicly available data from the Centers for Disease Control and Prevention to explore possible causes of state-to-state variability in antibiotic-resistant healthcare-associated infections. Outpatient antibiotic prescribing rates of fluoroquinolones and cephalosporins explained some of the variability in extended-spectrum cephalosporin-resistant Escherichia coli after adjusting for differences in age and healthcare facility composition.
In 2017, we surveyed 101 SHEA Research Network hospitals regarding Legionnaires’ disease (LD). Of the 29 respondents, 94% had or were developing a water management plan, though plan characteristics and the personnel engaged varied. Most LD diagnostic testing was limited to urine antigen testing. Many opportunities to improve LD prevention and diagnosis exist.
To develop a probabilistic method for measuring central line–associated bloodstream infection (CLABSI) rates that reduces the variability associated with traditional, manual methods of applying CLABSI surveillance definitions.
Multicenter retrospective cohort study of bacteremia episodes among patients hospitalized in adult patient-care units; the study evaluated the presence of CLABSI.
Hospitals that used the SafetySurveillor software system (Premier) and also reported to the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN).
Patients were identified from a stratified sample of all eligible blood culture isolates from all eligible hospital units to generate a final set with an equal distribution (ie, 20%) from each unit type. Units were divided a priori into 5 major groups: medical intensive care units, surgical intensive care units, medical-surgical intensive care units, hematology units, and general medical wards.
Episodes were reviewed by 2 experts, and a selection of discordant reviews was re-reviewed. These data were joined with NHSN data for hospitals during in-plan months. A predictive model was created; model performance was assessed using the c statistic in a validation set and by comparison with NHSN-reported rates for in-plan months.
A final model was created from predictors of CLABSI. The c statistic for the final model was 0.75 (0.68–0.80). Rates from regression modeling correlated better with expert review than NHSN-reported rates did.
A regression model based on the clinical characteristics of the bacteremia outperformed traditional infection-preventionist surveillance when compared against an expert-derived reference standard.
Infect Control Hosp Epidemiol 2016;37(2):149–155
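For orientation, the c statistic of a binary predictive model is the area under the ROC curve. A minimal sketch of computing it on a held-out validation set follows, with simulated data standing in for the study's episode-level predictors and expert-review labels.

```python
# Minimal sketch: the c statistic equals the area under the ROC curve,
# here computed on a held-out validation set. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(664, 5))     # clinical predictors per episode
y = (X[:, 0] + rng.normal(size=664) > 1).astype(int)  # expert-review label

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
c_stat = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
print(f"c statistic (validation set): {c_stat:.2f}")
```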
Case mix index (CMI) has been used as a facility-level indicator of patient disease severity. We sought to evaluate the potential for CMI to be used for risk adjustment of National Healthcare Safety Network (NHSN) healthcare-associated infection (HAI) data.
NHSN facility-wide laboratory-identified Clostridium difficile infection event data from 2012 were merged with the fiscal year 2012 Inpatient Prospective Payment System (IPPS) Impact file by CMS certification number (CCN) to obtain a CMI value for hospitals reporting to NHSN. Negative binomial regression was used to evaluate whether CMI was significantly associated with healthcare facility-onset (HO) CDI in univariate and multivariate analysis.
Among 1,468 acute care hospitals reporting CDI data to NHSN in 2012, 1,429 were matched by CCN to a CMI value in the Impact file. CMI (median, 1.49; interquartile range, 1.36–1.66) was a significant predictor of HO CDI in univariate analysis (P<.0001). After controlling for community-onset CDI prevalence rate, medical school affiliation, hospital size, and CDI test type, CMI remained highly significant (P<.0001), with an increase of 0.1 point in CMI associated with a 3.4% increase in the HO CDI incidence rate.
CMI was a significant predictor of NHSN HO CDI incidence. Additional work to explore the feasibility of using CMI for risk adjustment of NHSN data is necessary.
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention or the Agency for Toxic Substances and Disease Registry.
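The reported effect size maps directly to the model coefficient: if a 0.1-point CMI increase multiplies the HO CDI rate by 1.034, then exp(0.1 × β) = 1.034. The small worked example below recovers β and projects other increments; it is arithmetic on the published figure, not the study's model output.

```python
# Worked example: recovering the negative binomial coefficient implied by
# "a 0.1-point CMI increase -> 3.4% higher HO CDI incidence rate".
import math

irr_per_0_1 = 1.034                   # rate ratio per 0.1 CMI points
beta = math.log(irr_per_0_1) / 0.1    # coefficient per 1.0 CMI point
print(f"beta = {beta:.3f}")
print(f"implied rate ratio per 0.5-point increase: {math.exp(0.5 * beta):.3f}")
```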
To describe the impact of standardizing state-specific summary measures of antibiotic resistance that inform regional interventions to reduce transmission of resistant pathogens in healthcare settings.
Analysis of public health surveillance data.
Central line–associated bloodstream infection (CLABSI) data from intensive care units (ICUs) of facilities reporting to the National Healthcare Safety Network in 2011 were analyzed. For CLABSI due to methicillin-resistant Staphylococcus aureus (MRSA), extended-spectrum cephalosporin (ESC)-nonsusceptible Klebsiella species, and carbapenem-nonsusceptible Klebsiella species, we computed 3 state-level summary measures of nonsusceptibility: crude percent nonsusceptible, model-based adjusted percent nonsusceptible, and crude infection incidence rate.
Overall, 1,791 facilities reported CLABSIs from ICU patients. Of 1,618 S. aureus CLABSIs with methicillin-susceptibility test results, 791 (48.9%) were due to MRSA. Of 756 Klebsiella CLABSIs with ESC-susceptibility test results, 209 (27.7%) were due to ESC-nonsusceptible Klebsiella, and among 661 Klebsiella CLABSIs with carbapenem-susceptibility test results, 70 (10.6%) were due to carbapenem-nonsusceptible Klebsiella. All 3 state-specific measures demonstrated variability in magnitude by state. Adjusted measures, with few exceptions, were not appreciably different from crude values for any phenotype. When linking values of crude and adjusted percent nonsusceptible by state, a state’s absolute rank shifted slightly for MRSA in 5 instances and only once each for ESC-nonsusceptible and carbapenem-nonsusceptible Klebsiella species. Infection incidence measures correlated strongly with both percent-nonsusceptibility measures.
Crude state-level summary measures, based on existing NHSN CLABSI data, may suffice to assess geographic variability in antibiotic resistance. As additional variables related to antibiotic resistance become available, risk-adjusted summary measures are preferable.
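A minimal sketch of the crude summary measures: percent nonsusceptible and infection incidence aggregated by state from event-level rows. The columns, counts, and line-day totals are illustrative assumptions.

```python
# Minimal sketch: crude state-level percent nonsusceptible and CLABSI
# incidence from event-level surveillance rows. All data are illustrative.
import pandas as pd

events = pd.DataFrame({
    "state": ["GA", "GA", "GA", "NY", "NY"],
    "nonsusceptible": [1, 0, 1, 0, 0],   # eg, MRSA among S. aureus CLABSIs
})
line_days = pd.Series({"GA": 120_000, "NY": 90_000})  # central line-days

by_state = events.groupby("state")["nonsusceptible"].agg(["sum", "count"])
by_state["pct_nonsusceptible"] = 100 * by_state["sum"] / by_state["count"]
by_state["incidence_per_1000_line_days"] = 1000 * by_state["sum"] / line_days
print(by_state)
```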
Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance.
Retrospective cohort study.
Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers.
Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004–2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days).
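The agreement statistic used here, Cohen's kappa, can be sketched as below; the three label vectors are simulated stand-ins for the audit, traditional, and algorithm determinations, not the study's data.

```python
# Minimal sketch: episode-level agreement with the audit reference
# measured by Cohen's kappa. The label vectors are simulated stand-ins.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)
audit = rng.integers(0, 2, 664)                          # reference calls
traditional = np.where(rng.random(664) < 0.80, audit, 1 - audit)
algorithm = np.where(rng.random(664) < 0.88, audit, 1 - audit)

print("traditional vs audit kappa:",
      round(cohen_kappa_score(audit, traditional), 2))
print("algorithm vs audit kappa:",
      round(cohen_kappa_score(audit, algorithm), 2))
```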
We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ, 0.44; 95% CI, 0.37–0.51) than for computer algorithm surveillance (95% CI for κ, 0.52–0.64; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates.
Compared with traditional surveillance of bloodstream infections, computer automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
Infect Control Hosp Epidemiol 2014;35(12):1483–1490
To quantify historical trends in rates of central line-associated bloodstream infections (CLABSIs) in US intensive care units (ICUs) caused by major pathogen groups, including Candida spp., Enterococcus spp., specified gram-negative rods, and Staphylococcus aureus.
Active surveillance in a cohort of participating ICUs reporting to the Centers for Disease Control and Prevention through the National Nosocomial Infections Surveillance system during 1990–2004 and the National Healthcare Safety Network during 2006–2010.
Patients who were admitted to participating ICUs.
The CLABSI incidence density rate for S. aureus decreased annually starting in 2002 and remained lower than for other pathogen groups. Since 2006, the annual decrease for S. aureus CLABSIs in nonpediatric ICU types was −18.3% (95% confidence interval [CI], −20.8% to −15.8%), whereas the incidence density rate for S. aureus among pediatric ICUs did not change. The annual decrease for all ICUs combined since 2006 was −17.8% (95% CI, −19.4% to −16.1%) for Enterococcus spp., −16.4% (95% CI, −18.2% to −14.7%) for gram-negative rods, and −13.5% (95% CI, −15.4% to −11.5%) for Candida spp.
Patterns of ICU CLABSI incidence density rates among major pathogen groups have changed considerably during recent decades. CLABSI incidence has declined steeply since 2006, except for CLABSI due to S. aureus in pediatric ICUs. There is a need to better understand the CLABSIs that still occur, on the basis of microbiological and patient characteristics. New prevention approaches may be needed in addition to central line insertion and maintenance practices.
Recent studies have demonstrated that central line-associated bloodstream infections (CLABSIs) are preventable through implementation of evidence-based prevention practices. Hospitals have reported CLABSI data to the Centers for Disease Control and Prevention (CDC) since the 1970s, providing an opportunity to characterize the national impact of CLABSIs over time. Our objective was to describe changes in the annual number of CLABSIs in critical care patients in the United States.
Monte Carlo simulation.
US acute care hospitals.
Nonneonatal critical care patients.
We obtained administrative data on patient-days for nearly all US hospitals and applied CLABSI rates from the National Nosocomial Infections Surveillance and the National Healthcare Safety Network systems to estimate the annual number of CLABSIs in critical care patients nationally during the period 1990–2010 and the number of CLABSIs prevented since 1990.
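A minimal sketch of this kind of Monte Carlo estimate follows: draw a plausible CLABSI rate for each hospital stratum, apply it to that stratum's central line-days, and repeat to obtain an uncertainty interval. The strata, rate bounds, and line-day totals here are illustrative assumptions, not the study's inputs.

```python
# Minimal sketch of a Monte Carlo estimate of national CLABSI counts:
# sample a plausible rate per stratum, apply it to line-days, repeat.
# Strata, rate bounds, and line-day totals are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
line_days = {"small": 2.0e6, "medium_large_teaching": 5.5e6, "other": 3.5e6}
# plausible (low, high) CLABSI rates per 1,000 central line-days by stratum
rate_bounds = {"small": (0.8, 1.6), "medium_large_teaching": (1.2, 2.4),
               "other": (1.0, 2.0)}

totals = []
for _ in range(10_000):
    total = sum(rng.uniform(*rate_bounds[s]) * line_days[s] / 1000
                for s in line_days)
    totals.append(total)
lo, hi = np.percentile(totals, [2.5, 97.5])
print(f"estimated annual CLABSIs: {lo:,.0f} to {hi:,.0f}")
```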
We estimated that there were between 462,000 and 636,000 CLABSIs in nonneonatal critical care patients in the United States during 1990–2010. CLABSI rate reductions led to between 104,000 and 198,000 fewer CLABSIs than would have occurred if rates had remained unchanged since 1990. There were an estimated 15,000 hospital-onset CLABSIs in nonneonatal critical care patients in 2010; 70% occurred in medium and large teaching hospitals.
Substantial progress has been made in reducing the occurrence of CLABSIs in US critical care patients over the past 2 decades. The concentration of critical care CLABSIs in medium and large teaching hospitals suggests that a targeted approach may be warranted to continue achieving reductions in critical care CLABSIs nationally.