We sought to determine whether increased antimicrobial use (AU) at the onset of the coronavirus disease 2019 (COVID-19) pandemic was driven by greater AU in COVID-19 patients only, or whether AU also increased in non–COVID-19 patients.
In this retrospective observational ecological study from 2019 to 2020, we stratified inpatients by COVID-19 status and determined relative percentage differences in median monthly AU in COVID-19 patients versus non–COVID-19 patients during the COVID-19 period (March–December 2020) and the pre–COVID-19 period (March–December 2019). We also determined relative percentage differences in median monthly AU in non–COVID-19 patients during the COVID-19 period versus the pre–COVID-19 period. Statistical significance was assessed using Wilcoxon signed-rank tests.
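The core comparison above can be sketched in a few lines. The monthly AU values and names below are hypothetical (not the study's data), and the Wilcoxon signed-rank test itself is omitted because it needs a statistics package such as SciPy:

```python
# Sketch: relative percentage difference in median monthly antimicrobial use
# (AU) between two patient strata. All values are hypothetical.
from statistics import median

def relative_pct_diff(group_a, group_b):
    """Relative % difference of median(a) vs median(b): (a - b) / b * 100."""
    med_a, med_b = median(group_a), median(group_b)
    return (med_a - med_b) / med_b * 100.0

# Hypothetical monthly AU rates (e.g., DOT per 1,000 patient days, Mar-Dec):
covid_au     = [520, 560, 540, 530, 510, 550, 545, 535, 525, 515]
non_covid_au = [300, 310, 305, 295, 315, 300, 298, 302, 308, 304]

print(round(relative_pct_diff(covid_au, non_covid_au), 1))  # → 75.7
```

A paired significance test (eg, `scipy.stats.wilcoxon` on the month-by-month differences) would then assess whether the median difference is distinguishable from zero.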
The study was conducted in 3 acute-care hospitals in Chicago, Illinois.
Facility-wide AU for broad-spectrum antibacterial agents predominantly used for hospital-onset infections was significantly greater in COVID-19 patients versus non–COVID-19 patients during the COVID-19 period (with relative increases of 73%, 66%, and 91% for hospitals A, B, and C, respectively), and during the pre–COVID-19 period (with relative increases of 52%, 64%, and 66% for hospitals A, B, and C, respectively). In contrast, facility-wide AU for all antibacterial agents was significantly lower in non–COVID-19 patients during the COVID-19 period versus the pre–COVID-19 period (with relative decreases of 8%, 7%, and 8% in hospitals A, B, and C, respectively).
AU for broad-spectrum antimicrobials was greater in COVID-19 patients compared to non–COVID-19 patients at the onset of the pandemic. AU for all antibacterial agents in non–COVID-19 patients decreased in the COVID-19 period compared to the pre–COVID-19 period.
We quantified hospital-acquired coronavirus disease 2019 (COVID-19) during the early phases of the pandemic, and we evaluated the use of solely temporal criteria to determine hospital acquisition.
Retrospective observational study during the early phases of the COVID-19 pandemic, March 1–November 30, 2020. We identified laboratory-detected severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from 30 days before admission through discharge. All cases detected after hospital day 5 were categorized by chart review as community or unlikely hospital-acquired cases, or possible or probable hospital-acquired cases.
The study was conducted in 2 acute-care hospitals in Chicago, Illinois.
The study included all hospitalized patients, including those on an inpatient rehabilitation unit.
Each hospital implemented infection-control precautions soon after identifying COVID-19 cases, including patient and staff cohort protocols, universal masking, and restricted visitation policies.
Among 2,667 patients with SARS-CoV-2, detection before hospital day 6 was most common (n = 2,612; 98%); detection during hospital days 6–14 was uncommon (n = 43; 1.6%); and detection after hospital day 14 was rare (n = 16; 0.6%). By chart review, most cases after day 5 were categorized as community acquired, usually because SARS-CoV-2 had been detected at a prior healthcare facility (68% of cases on days 6–14 and 53% of cases after day 14). The incidence rates of possible and probable hospital-acquired cases per 10,000 patient days were similar for ICU and non-ICU patients at hospital A (1.2 vs 1.3; difference, 0.1; 95% CI, −2.8 to 3.0) and hospital B (2.8 vs 1.2; difference, 1.6; 95% CI, −0.1 to 4.0).
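The rate comparison above reduces to a simple normalization; the case counts and patient-day denominators below are hypothetical, chosen only to illustrate the calculation:

```python
# Sketch: incidence rate per 10,000 patient days, as used to compare
# ICU vs non-ICU hospital-acquired cases. All counts are hypothetical.
def incidence_per_10k(cases, patient_days):
    return cases / patient_days * 10_000

icu_rate     = incidence_per_10k(3, 25_000)    # hypothetical: 3 cases / 25,000 days
non_icu_rate = incidence_per_10k(13, 100_000)  # hypothetical: 13 cases / 100,000 days
print(round(icu_rate, 1), round(non_icu_rate, 1))  # → 1.2 1.3
```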
Most patients were protected by early and sustained application of infection-control precautions modified to reduce SARS-CoV-2 transmission. Using solely temporal criteria to discriminate hospital versus community acquisition would have misclassified many “late onset” SARS-CoV-2–positive cases.
Ventilator-capable skilled nursing facilities (vSNFs) are critical to the epidemiology and control of antibiotic-resistant organisms. During an infection prevention intervention to control carbapenem-resistant Enterobacterales (CRE), we conducted a qualitative study to characterize vSNF healthcare personnel beliefs and experiences regarding infection control measures.
A qualitative study involving semistructured interviews.
One vSNF in the Chicago, Illinois, metropolitan region.
The study included 17 healthcare personnel representing management, nursing, and nursing assistants.
We used face-to-face, semistructured interviews to assess healthcare personnel experiences with infection control measures at the midpoint of a 2-year quality improvement project.
Healthcare personnel characterized their facility as a home-like environment, yet they recognized that it is a setting where germs were ‘invisible’ and potentially ‘threatening.’ Healthcare personnel described elaborate self-protection measures to avoid acquisition or transfer of germs to their own household. Healthcare personnel were motivated to implement infection control measures to protect residents, but many identified structural barriers such as understaffing and time constraints, and some reported persistent preference for soap and water.
Healthcare personnel in vSNFs, from management to frontline staff, understood germ theory and the significance of multidrug-resistant organism transmission. However, their ability to implement infection control measures was hampered by resource limitations and mixed beliefs regarding the effectiveness of infection control measures. Self-protection from acquiring multidrug-resistant organisms was a strong motivator for healthcare personnel both outside and inside the workplace, and it could explain variation in adherence to infection control measures, such as higher hand hygiene adherence after resident care than before resident care.
Background: During a 2017–2019 intervention in Chicago-area vSNFs to control carbapenem-resistant Enterobacteriaceae, healthcare worker adherence to hand hygiene and personal protective equipment was stubbornly inadequate (hand hygiene adherence, ~16% on room entry and 56% on exit), despite educational and monitoring efforts. Little is known about vSNF staff understanding of multidrug-resistant organism (MDRO) transmission. We conducted a qualitative study of staff members at a vSNF that included assessment of staff perceptions of personal MDRO acquisition risk and associated personal hygiene routines when transitioning from work to home. Methods: Between September 2018 and November 2018, a PhD-candidate medical anthropologist conducted semistructured interviews with management (N = 5), nursing staff (N = 6), and certified nursing assistants (N = 6) at a vSNF in the Chicago region (Illinois) who had already received 1 year of MDRO staff education and hand hygiene adherence monitoring. More than 11 hours of semistructured interviews were collected and transcribed. Data collection and analysis included identifying how staff members related to their own risk of MDRO acquisition/infection and what personal hygiene routines they followed. Transcriptions of the data were analyzed using thematic coding aided by MAXQDA qualitative analysis software. Results: Staff members at all levels were able to describe their perceptions related to the risk of acquiring an MDRO and personal hygiene in great detail. The risk of acquiring an MDRO was perceived as a constant threat by staff members, who described germs as bad and everywhere (Table 1). The perceived threat of MDRO acquisition was connected to individual personal hygiene routines (eg, changing shoes before leaving work), which were considered important by staff members (Table 2).
Nursing staff and certified nursing assistants noted that personal hygiene was a critical factor in keeping their residents, themselves, and their families free from MDROs. Conclusions: In the context of a quality improvement campaign, vSNF healthcare workers were aware of the transmissibility of microscopic MDROs and were highly motivated to prevent transmission of MDROs to themselves. Such perceptions may explain why workers are differentially adherent to infection control interventions (eg, more likely to perform hand hygiene when leaving a room than when entering one, or less likely to change gowns between residents in multibed rooms if they believe they are already personally protected by a gown). Our findings suggest that interventions to improve staff adherence to infection control measures may need to address factors beyond knowledge deficits (eg, understaffing) and may need to acknowledge self-protection as a driving motivator for staff adherence.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are endemic in the Chicago region. We assessed the regional impact of a CRE control intervention targeting high-prevalence facilities; that is, long-term acute-care hospitals (LTACHs) and ventilator-capable skilled nursing facilities (vSNFs). Methods: In July 2017, an academic–public health partnership launched a regional CRE prevention bundle: (1) identifying patient CRE status by querying Illinois’ XDRO registry and periodic point-prevalence surveys reported to public health; (2) cohorting or private rooms with contact precautions for CRE patients; (3) hand hygiene adherence monitoring combined with general infection control education and guidance from project coordinators and public health; and (4) daily chlorhexidine gluconate (CHG) bathing. Informed by epidemiology and modeling, we targeted LTACHs and vSNFs in a 13-mile radius from the coordinating center. Illinois mandates CRE reporting to the XDRO registry, which can also be manually queried or generate automated alerts to facilitate interfacility communication. The regional intervention promoted increased automation of alerts to hospitals. The prespecified primary outcome was incident clinical CRE culture reported to the XDRO registry in Cook County by month, analyzed by segmented regression modeling. A secondary outcome was colonization prevalence measured by serial point-prevalence surveys for carbapenemase-producing organism colonization in LTACHs and vSNFs. Results: All eligible LTACHs (n = 6) and vSNFs (n = 9) participated in the intervention. One vSNF declined CHG bathing. vSNFs that implemented CHG bathing typically bathed residents 2–3 times per week instead of daily. Overall, there were significant gaps in infection control practices, especially in vSNFs. Also, 75 Illinois hospitals adopted automated alerts (56 during the intervention period).
Mean CRE incidence in Cook County decreased from 59.0 cases per month during baseline to 40.6 cases per month during intervention (P < .001). In a segmented regression model, there was an average reduction of 10.56 cases per month during the 24-month intervention period (P = .02) (Fig. 1), and an estimated 253 incident CRE cases were averted. Mean CRE incidence also decreased among the stratum of vSNF/LTACH intervention facilities (P = .03). However, evidence of ongoing CRE transmission, particularly in vSNFs, persisted, and CRE colonization prevalence remained high at intervention facilities (Table 1). Conclusions: A resource-intensive public health regional CRE intervention was implemented that included enhanced interfacility communication and targeted infection prevention. There was a significant decline in incident CRE clinical cases in Cook County, despite high persistent CRE colonization prevalence in intervention facilities. vSNFs, where understaffing and underresourcing were common and lengths of stay ranged from months to years, faced a major prevalence challenge, underscoring the need for aggressive infection control improvements in these facilities.
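As a rough sketch of the segmented (interrupted time-series) regression used for the primary outcome, the model below estimates a level change at the intervention month by ordinary least squares. All counts, the intervention month, and the function names are hypothetical; a real analysis would use a statistics package with proper standard errors and autocorrelation handling:

```python
# Sketch: segmented regression, cases = b0 + b1*t + b2*post + b3*t_since,
# fit by ordinary least squares in pure Python. Data are hypothetical.
def fit_ols(X, y):
    k = len(X[0])
    # Normal equations: (X'X) b = X'y
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][j] * beta[j] for j in range(r + 1, k))) / xtx[r][r]
    return beta

t0 = 12                       # hypothetical: intervention begins at month 12
y = [60] * 12 + [45] * 12     # hypothetical monthly incident CRE counts
X = [[1, t, int(t >= t0), max(0, t - t0)] for t in range(24)]
b0, b1, b2, b3 = fit_ols(X, y)
print(round(b2, 1))           # immediate level change at the intervention
```

Here `b2` captures the immediate drop in monthly cases and `b3` any change in slope after the intervention, which is the structure a segmented regression of the Fig. 1 series would estimate.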
Funding: The Centers for Disease Control and Prevention (SHEPheRD Contract No. 200-2011-42037)
Disclosures: M.Y.L. has received research support in the form of contributed product from OpGen and Sage Products (now part of Stryker Corporation), and has received an investigator-initiated grant from CareFusion Foundation (now part of BD).
Background: During 2017–2019 in the Chicago region, several ventilator-capable skilled nursing facilities (vSNFs) participated in a quality improvement project to control the spread of highly prevalent carbapenem-resistant Enterobacteriaceae (CRE). With guidance from regional project coordinators and public health departments that involved education, assistance with implementation, and adherence monitoring, the facilities implemented a CRE prevention bundle that included a hand hygiene campaign that promoted alcohol-based hand rub, contact precautions (personal protective equipment with glove/gown) for care of CRE-colonized residents, and 2% chlorhexidine gluconate (CHG) wipes for routine resident bathing. We conducted a qualitative study to better understand the ways that vSNF employees engage with the implementation of such infection control measures. Methods: A PhD-candidate medical anthropologist conducted semistructured interviews with management (N = 5), nursing staff (N = 6), and certified nursing assistants (N = 6) at a vSNF in the Chicago region (Illinois) between September 2018 and November 2018. More than 11 hours of semistructured interviews were collected and transcribed. Data collection and analysis focused on identifying healthcare worker experiences during an infection control intervention. Transcriptions of the data were analyzed using thematic coding aided by MAXQDA qualitative analysis software. Results: Healthcare workers described the facility using language associated with a family environment (Table 1). Furthermore, healthcare workers demonstrated motivation to implement infection control policies (Table 2). However, healthcare workers expressed cultural and structural challenges encountered during implementation, such as their belief that some infection control measures discouraged maintenance of a home-like environment, lack of time, and understaffing. 
Some healthcare workers perceived that alcohol-based hand rub was ineffective over time and left unpleasant textures on the skin. Additionally, some workers did not trust the available gown and gloves used to prevent transmission. Lastly, healthcare workers typically did not prefer 2% CHG wipes over soap and water, citing residual resident postbathing smell as one indicator of CHG ineffectiveness. Conclusions: In a vSNF we found both considerable support and challenges implementing a CRE prevention bundle from the healthcare worker perspective. Healthcare workers were dedicated to recreating a home-like environment for their residents, which sometimes felt at odds with infection control interventions. Residual misconceptions (eg, alcohol-based hand rub is not effective) and negative worker perceptions (eg, permeability of contact precaution gowns and/or residue from alcohol-based hand rub) suggest that ongoing education and participation by healthcare workers in evaluating infection control products for interventions is critical.
Background: Reducing inappropriate antibiotic use is critical for fighting antibiotic resistance. Quantifying the amount and diversity of antibiotic use in US hospitals is foundational to these efforts but hampered by limited national surveillance. The current study aims to address this knowledge gap by examining adult inpatient antibiotic usage, including regional, facility, and case-mix differences, across 576 hospitals and nearly 12 million encounters in 2016–2017. Methods: We conducted a retrospective cohort study of patients aged ≥18 years discharged from hospitals in the Premier Healthcare Database, a repository of nearly 1 of every 4 annual US hospitalizations, between January 1, 2016, and December 31, 2017. Detailed hospital- and patient-level data were extracted for each admission. Facilities were classified geographically by census division. Using daily antibiotic charge data, we mapped antibiotics to 18 mutually exclusive classes and to categories based upon spectrum of activity. Patient-level data were transformed into hospital case-mix variables (eg, hospital mean patient age), and relationships between antibiotic days of therapy (DOTs), and these and other facility-level variables were evaluated in negative binomial regression models. Results: The study included 11,701,326 adult admissions, totaling 64,064,632 patient days across 576 US hospitals. Overall, antibiotics were used in 65% of all hospitalizations, at a rate of 870 DOTs per 1,000 patient days. The most commonly used classes per patient days were
β-lactam/β-lactamase inhibitor combinations (206 DOTs), third- and fourth-generation cephalosporins (128 DOTs), and glycopeptides (113 DOTs) (Fig. 1). By spectrum of activity, antipseudomonal agents (245 DOTs) were the most common. Crude usage rates varied by geographic region (Fig. 2). In multivariable analyses, teaching status and/or larger bed size were independently associated with lower use across a range of antibiotic classes (adjusted IRR ranges, 0.90–0.94 and 0.96–0.98, respectively). Significant regional differences also persisted. Compared to the South Atlantic region (chosen as the reference category because it had the largest representation in the cohort), rates of total antibiotic use were 6%, 15%, and 18% lower on average in the Pacific, New England, and the Middle Atlantic regions, respectively. By class, carbapenems showed the greatest geographic variability. Conclusions: In a large, diverse cohort of US hospitals, adult inpatients received antibiotics at a rate similar to, but higher than, previously published estimates. In adjusted models, lower antibiotic use was frequently associated with facilities likely to have robust antibiotic stewardship programs—those with teaching status and larger bed size. Further research is needed to understand other reasons for regional differences in antibiotic use, such as different rates of resistance.
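The headline usage rate above is simply total days of therapy normalized to patient days. A minimal sketch (the DOT total below is hypothetical, chosen only to reproduce the reported rate of 870 against the cohort's patient-day denominator):

```python
# Sketch: antibiotic days of therapy (DOT) per 1,000 patient days.
# The DOT total is hypothetical; the patient-day total echoes the cohort size.
def dot_rate(total_dot, patient_days, per=1_000):
    return total_dot / patient_days * per

print(round(dot_rate(55_736_000, 64_064_632)))  # → 870
```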
Funding: This work was supported by the Agency for Healthcare Research and Quality (AHRQ) (R01-HS026205 to A.D.H.).
To evaluate probiotics for the primary prevention of Clostridium difficile infection (CDI) among hospital inpatients.
A before-and-after quality improvement intervention comparing 12-month baseline and intervention periods.
A 694-bed teaching hospital.
We administered a multispecies probiotic comprising Lactobacillus acidophilus (CL1285), L. casei (LBC80R), and L. rhamnosus (CLR2) to eligible antibiotic recipients within 12 hours of initial antibiotic receipt through 5 days after the final dose. We excluded (1) all patients on neonatal, pediatric, and oncology wards; (2) all perioperative prophylactic antibiotic recipients; (3) all those restricted from oral intake; and (4) those with pancreatitis, leukopenia, or posttransplant status. We defined CDI by symptoms plus C. difficile toxin detection by polymerase chain reaction. Our primary outcome was hospital-onset CDI incidence on eligible hospital units, analyzed using segmented regression.
The study included 251 CDI episodes among 360,016 patient days during the baseline and intervention periods, and the incidence rate was 7.0 per 10,000 patient days. The incidence rate was similar during baseline and intervention periods (6.9 vs 7.0 per 10,000 patient days; P=.95). However, compared to the first 6 months of the intervention, we detected a significant decrease in CDI during the final 6 months (incidence rate ratio, 0.6; 95% confidence interval, 0.4–0.9; P=.009). Testing intensity remained stable between the baseline and intervention periods: 19% versus 20% of stools tested were C. difficile positive by PCR, respectively. From medical record reviews, only 26% of eligible patients received a probiotic per the protocol.
Despite poor adherence to the protocol, CDI incidence decreased during the intervention; the reduction emerged ~6 months after the probiotic was introduced for primary prevention.
Because antibacterial history is difficult to obtain, especially when the exposure occurred at an outside hospital, we assessed whether infection-related diagnostic billing codes, which are more readily available through hospital discharge databases, could infer prior antibacterial receipt.
Retrospective cohort study.
This study included 121,916 hospitalizations representing 78,094 patients across the 3 hospitals.
We obtained hospital inpatient data from 3 Chicago-area hospitals. Encounters were categorized as “infection” if at least 1 International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code indicated a bacterial infection. From medication administration records, we categorized antibacterial agents and calculated total therapy days using Centers for Disease Control and Prevention (CDC) definitions. We evaluated bivariate associations between infection encounters and 3 categories of antibacterial exposure: any, broad spectrum, or surgical prophylaxis. We constructed multivariable models to evaluate adjusted risk ratios for antibacterial receipt.
Of the 121,916 inpatient encounters (78,094 patients) across the 3 hospitals, 24% had an associated infection code, 47% received an antibacterial, and 13% received a broad-spectrum antibacterial. Infection-related ICD-9-CM codes were associated with a 2-fold increase in antibacterial administration compared to those lacking such codes (RR, 2.29; 95% confidence interval [CI], 2.27–2.31) and a 5-fold increased risk for broad-spectrum antibacterial administration (RR, 5.52; 95% CI, 5.37–5.67). Encounters with infection codes had 3 times the number of antibacterial days.
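A risk ratio of the kind reported above can be sketched as follows, with a Wald-type 95% CI computed on the log scale. The 2×2 counts below are hypothetical, not the study's data:

```python
# Sketch: risk ratio (RR) with a 95% CI on the log scale.
# a/n1 = risk in exposed (infection-coded), b/n2 = risk in unexposed.
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """RR with Wald CI: exp(log(RR) ± z * SE), SE = sqrt(1/a - 1/n1 + 1/b - 1/n2)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical: 20,000/29,000 infection-coded encounters received an
# antibacterial vs 37,000/93,000 encounters without an infection code.
rr, lo, hi = risk_ratio_ci(20_000, 29_000, 37_000, 93_000)
print(round(rr, 2))  # → 1.73
```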
Infection diagnostic billing codes are strong surrogate markers for prior antibacterial exposure, especially to broad-spectrum antibacterial agents; such an association can be used to enhance early identification of patients at risk of multidrug-resistant organism (MDRO) carriage at the time of admission.
To determine the impact of recurrent Clostridium difficile infection (RCDI) on patient behaviors following illness.
Using a computer algorithm, we searched the electronic medical records of 7 Chicago-area hospitals to identify patients with RCDI (2 episodes of CDI within 15 to 56 days of each other). RCDI was validated by medical record review. Patients were asked to complete a telephone survey. The survey included questions regarding general health, social isolation, symptom severity, emotional distress, and prevention behaviors.
In total, 119 patients completed the survey (32%). On average, respondents were 57.4 years old (standard deviation, 16.8); 57% were white, and ~50% reported hospitalization for CDI. At the time of their most recent illness, patients rated their diarrhea as high severity (58.5%) and their exhaustion as extreme (30.7%). Respondents indicated that they were very worried about getting sick again (41.5%) and about infecting others (31%). Almost 50% said that they have washed their hands more frequently (47%) and have increased their use of soap and water (45%) since their illness. Some of these patients (22%–32%) reported eating out less, avoiding certain medications and public areas, and increasing probiotic use. Most behavioral changes were unrelated to disease severity.
Having had RCDI appears to increase prevention-related behaviors in some patients. While some behaviors are appropriate (eg, handwashing), others are not supported by evidence of decreased risk and may negatively impact patient quality of life. Providers should discuss appropriate prevention behaviors with their patients and should clarify that other behaviors (eg, eating out less) will not affect their risk of future illness.
Risk adjustment is needed to fairly compare central-line–associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes.
Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type and for ICU type plus patient case mix were built and compared using discrimination and the standardized infection ratio (SIR). Hospitals were ranked by SIR for each model to examine and compare the changes in rank.
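Ranking hospitals by SIR, as described above, amounts to dividing each hospital's observed CLABSI count by its model-predicted (expected) count; the counts below are hypothetical:

```python
# Sketch: standardized infection ratio (SIR) = observed / expected CLABSIs,
# then rank hospitals from best (lowest SIR) to worst. Counts are hypothetical.
def sir(observed, expected):
    return observed / expected

hospitals = {"A": (8, 10.0), "B": (12, 9.5), "C": (5, 6.2)}  # (observed, expected)
ranked = sorted(hospitals, key=lambda h: sir(*hospitals[h]))
print(ranked)  # → ['A', 'C', 'B']
```

Because the expected count comes from the risk-adjustment model, adding comorbidity terms changes the denominators, which is why nearly half of the hospitals change rank in the results.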
Overall, 85,849 ICU patients were analyzed and 162 (0.2%) developed CLABSI. The significant variables added to the ICU model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51–0.59) for the ICU-type model and 0.64 (95% CI, 0.60–0.69) for the ICU-type plus patient case-mix model. When the hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model.
Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than did the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals.
To determine which comorbid conditions are considered causally related to central-line associated bloodstream infection (CLABSI) and surgical-site infection (SSI) based on expert consensus.
Using the Delphi method, we administered an iterative, 2-round survey to 9 infectious disease and infection control experts from the United States.
Based on our selection of components from the Charlson and Elixhauser comorbidity indices, 35 different comorbid conditions were rated from 1 (not at all related) to 5 (strongly related) by each expert separately for CLABSI and SSI, based on perceived relatedness to the outcome. To assign expert consensus on causal relatedness for each comorbid condition, all 3 of the following criteria had to be met at the end of the second round: (1) a majority (>50%) of experts rating the condition at 3 (somewhat related) or higher, (2) interquartile range (IQR)≤1, and (3) standard deviation (SD)≤1.
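The three prespecified consensus criteria above translate directly into a check like the following (the ratings shown are hypothetical):

```python
# Sketch of the consensus rule: a condition reaches expert consensus if
# (1) a majority rate it >= 3, (2) IQR <= 1, and (3) SD <= 1.
from statistics import quantiles, stdev

def has_consensus(ratings):
    majority = sum(r >= 3 for r in ratings) > len(ratings) / 2
    q1, _, q3 = quantiles(ratings, n=4)  # default 'exclusive' quartiles
    return majority and (q3 - q1) <= 1 and stdev(ratings) <= 1

# Hypothetical ratings from 9 experts for two conditions:
print(has_consensus([4, 4, 5, 4, 3, 4, 4, 5, 4]))  # → True
print(has_consensus([1, 5, 2, 4, 3, 1, 5, 2, 4]))  # → False
```

Note that the IQR depends on the quartile convention; `statistics.quantiles` uses the exclusive method by default, and a published analysis would state which convention was applied.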
From round 1 to round 2, the IQR and SD, respectively, decreased for ratings of 21 of 35 (60%) and 33 of 35 (94%) comorbid conditions for CLABSI, and for 17 of 35 (49%) and 32 of 35 (91%) comorbid conditions for SSI, suggesting improvement in consensus among this group of experts. At the end of round 2, 13 of 35 (37%) and 17 of 35 (49%) comorbid conditions were perceived as causally related to CLABSI and SSI, respectively.
Our results have produced a list of comorbid conditions that should be analyzed as risk factors for and further explored for risk adjustment of CLABSI and SSI.
To compare interrater reliabilities for ventilator-associated event (VAE) surveillance, traditional ventilator-associated pneumonia (VAP) surveillance, and clinical diagnosis of VAP by intensivists.
A retrospective study nested within a prospective multicenter quality improvement study.
Intensive care units (ICUs) within 5 hospitals of the Centers for Disease Control and Prevention Epicenters.
Patients who underwent mechanical ventilation.
We selected 150 charts for review, including all VAEs and traditionally defined VAPs identified during the primary study and randomly selected charts of patients without VAEs or VAPs. Each chart was independently reviewed by 2 research assistants (RAs) for VAEs, 2 hospital infection preventionists (IPs) for traditionally defined VAP, and 2 intensivists for any episodes of pulmonary deterioration. We calculated interrater agreement using κ estimates.
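Pairwise agreement of the kind reported in the results is conventionally summarized with Cohen's κ, which discounts agreement expected by chance. A minimal sketch with hypothetical reviewer labels:

```python
# Sketch: Cohen's kappa for two reviewers' calls on the same charts.
# kappa = (p_observed - p_chance) / (1 - p_chance). Labels are hypothetical.
def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n                    # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

ra1 = ["VAE", "no", "VAE", "no", "no", "VAE", "no", "no", "VAE", "no"]
ra2 = ["VAE", "no", "VAE", "no", "VAE", "VAE", "no", "no", "no", "no"]
print(round(cohens_kappa(ra1, ra2), 2))  # → 0.58
```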
The 150 selected episodes spanned 2,500 ventilator days. In total, 93–96 VAEs were identified by RAs; 31–49 VAPs were identified by IPs; and 29–35 VAPs were diagnosed by intensivists. Interrater reliability between RAs for VAEs was high (κ, 0.71; 95% CI, 0.59–0.81). Agreement between IPs using traditional VAP criteria was slight (κ, 0.12; 95% CI, −0.05 to 0.29). Agreement between intensivists was slight regarding episodes of pulmonary deterioration (κ, 0.22; 95% CI, 0.05–0.39) and fair regarding whether episodes of deterioration were attributable to clinically defined VAP (κ, 0.34; 95% CI, 0.17–0.51). Correlation between VAE surveillance and intensivists’ clinical assessments was poor.
Prospective surveillance using VAE criteria is more reliable than traditional VAP surveillance and clinical VAP diagnosis; the correlation between VAEs and clinically recognized pulmonary deterioration is poor.
To develop a probabilistic method for measuring central line–associated bloodstream infection (CLABSI) rates that reduces the variability associated with traditional, manual methods of applying CLABSI surveillance definitions.
Multicenter retrospective cohort study of bacteremia episodes among patients hospitalized in adult patient-care units; the study evaluated presence of CLABSI.
Hospitals that used the SafetySurveillor software system (Premier) and that also reported to the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN).
Patients were identified from a stratified sample of all eligible blood culture isolates from all eligible hospital units to generate a final set with an equal distribution (ie, 20%) from each unit type. Units were divided a priori into 5 major groups: medical intensive care unit, surgical intensive care unit, medical-surgical intensive care unit, hematology unit, or general medical wards.
Episodes were reviewed by 2 experts, and a selection of discordant reviews were re-reviewed. Data were joined with NHSN data for hospitals for in-plan months. A predictive model was created; model performance was assessed using the c statistic in a validation set and comparison with NHSN reported rates for in-plan months.
A final model was created with predictors of CLABSI. The c statistic for the final model was 0.75 (95% CI, 0.68–0.80). Rates from regression modeling correlated better with expert review than NHSN-reported rates did.
The use of a regression model based on the clinical characteristics of the bacteremia outperformed traditional infection preventionist surveillance compared with an expert-derived reference standard.
Infect Control Hosp Epidemiol 2016;37(2):149–155
Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance.
Retrospective cohort study.
Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers.
Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes from the period 2004–2007 in which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days).
We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37–0.51]) than for computer algorithm surveillance (κ 95% CI, 0.52–0.64; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates.
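The episode-level agreement measure used above, Cohen's κ, is observed agreement corrected for the agreement two raters would reach by chance alone. A minimal sketch for binary per-episode BSI calls (the example ratings are hypothetical, not from the study):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters: observed agreement
    corrected for the agreement expected by chance alone."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa = sum(rater_a) / n  # rater A's marginal probability of a "BSI" call
    pb = sum(rater_b) / n
    p_exp = pa * pb + (1 - pa) * (1 - pb)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical episode-level calls: audit review vs a surveillance method
audit = [1, 1, 1, 0, 0, 0, 0, 0]
surveillance = [1, 1, 0, 0, 0, 0, 0, 1]
print(f"kappa = {cohens_kappa(audit, surveillance):.2f}")  # ~0.47
```

κ of 0 indicates chance-level agreement and 1 indicates perfect agreement, which is why κ near 0.44–0.58, as in the abstract, represents only moderate agreement with the audit standard.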
Compared with traditional surveillance of bloodstream infections, computer-automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
Infect Control Hosp Epidemiol 2014;35(12):1483–1490
Electronic surveillance for healthcare-associated infections (HAIs) is increasingly widespread. This is driven by multiple factors: a greater burden on hospitals to provide surveillance data to state and national agencies, financial pressures to be more efficient with HAI surveillance, the desire for more objective comparisons between healthcare facilities, and the increasing amount of patient data available electronically. Optimal implementation of electronic surveillance requires that specific information be available to the surveillance systems. This white paper reviews different approaches to electronic surveillance, discusses the specific data elements required for performing surveillance, and considers important issues of data validation.
Infect Control Hosp Epidemiol 2014;35(9):1083–1091
Previous work has shown that daily skin cleansing with chlorhexidine gluconate (CHG) is effective in preventing infection in the medical intensive care unit (MICU). A colorimetric, semiquantitative indicator was used to measure CHG concentration on the skin (neck, antecubital fossae, and inguinal areas) of patients bathed daily with CHG during their MICU stay and after discharge from the MICU, when CHG bathing stopped.
Patients and Setting.
MICU patients at Rush University Medical Center.
CHG concentration on skin was measured and skin sites were cultured quantitatively. The relationship between CHG concentration and microbial density on skin was explored in a mixed-effects model using gram-positive colony-forming unit (CFU) counts.
For 20 MICU patients studied (240 measurements), the lowest CHG concentrations (0–18.75 μg/mL) and the highest gram-positive CFU counts were on the neck (median, 1.07 log10 CFUs; P = .014). CHG concentration increased postbath and decreased over the following 24 hours (P < .001). In parallel, median log10 CFUs decreased from 0.78 prebath to 0 postbath and then increased over 24 hours back to the baseline of 0.78 (P = .001). A CHG concentration above 18.75 μg/mL was associated with decreased gram-positive CFUs (P = .004). CHG was detected on patient skin throughout the entire interbath period (approximately 24 hours) in all but 2 instances (18 [90%] of 20 patients). In 11 patients studied after MICU discharge (80 measurements), CHG skin concentrations fell below effective levels after 1–3 days.
In MICU patients bathed daily with CHG, CHG concentration was inversely associated with microbial density on skin; residual antimicrobial activity on skin persisted up to 24 hours. Determination of CHG concentration on the skin of patients may be useful in monitoring the adequacy of skin cleansing by healthcare workers.
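The microbial-density outcome in this study is a log10-transformed CFU count compared against the 18.75 μg/mL CHG cutpoint reported above. A small sketch of that bookkeeping, assuming zero counts are floored at a 1-CFU detection limit (a convention assumed here, not stated in the abstract; the paired readings are hypothetical):

```python
import math

def log10_cfu(count, floor=1):
    """Log-transform a colony-forming-unit count; zero counts are
    floored at a 1-CFU detection limit (an assumed convention) so
    the logarithm is defined and a sterile site maps to 0."""
    return math.log10(max(count, floor))

# Hypothetical paired readings: (CHG concentration in ug/mL, gram-positive CFUs)
readings = [(0, 12), (4.7, 6), (18.75, 3), (75, 1), (300, 0)]
THRESHOLD = 18.75  # ug/mL; above this, lower CFU counts were observed

for conc, cfu in readings:
    group = "above" if conc > THRESHOLD else "at/below"
    print(f"{conc:>7} ug/mL ({group:>8} threshold): log10 CFU = {log10_cfu(cfu):.2f}")
```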
To develop prediction algorithms for the presence of a central vascular catheter in hospitalized patients with use of data present in an electronic health record. Such algorithms could be used for measurement of device utilization rates and for clinical decision support rules.
John H. Stroger, Jr, Hospital of Cook County, a 464-bed public hospital in Chicago, Illinois.
Patients admitted to the medical intensive care unit from May 31, 2005, through June 26, 2006 (derivation data set, May 31, 2005–September 28, 2005; validation data set, September 29, 2005–June 28, 2006).
Covariates were collected from the electronic medical record for each patient; the outcome variable was the presence of a central vascular device. Multivariate models were developed using the derivation set and generalized estimating equations. Three models, each with increasing database requirements, were validated using the validation set. Device utilization ratios and performance characteristics were calculated.
Although Charlson score and duration of intensive care unit stay were significant predictors in all models, factors that indicated use or presence of a central line were also important. Device utilization rates derived from the algorithmic models were as accurate as those obtained using manual sampling.
Automated calculation of central vascular catheter use is both feasible and accurate, providing estimates statistically similar to those obtained using manual surveillance. Prediction modeling of central vascular catheter use may enable automated surveillance of bloodstream infections and enhance important prevention interventions, such as timely removal of unnecessary central lines.
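The device utilization ratio above, and the BSI rate denominator used earlier in these abstracts, follow the standard NHSN-style definitions: device-days divided by patient-days, and infections per 1,000 central line–days. A minimal sketch with hypothetical monthly counts for a single ICU:

```python
def device_utilization_ratio(central_line_days, patient_days):
    """NHSN-style device utilization ratio: device-days / patient-days."""
    return central_line_days / patient_days

def bsi_rate_per_1000(infections, central_line_days):
    """Central line-associated BSI rate per 1,000 central line-days."""
    return 1000 * infections / central_line_days

# Hypothetical monthly counts for a single ICU (not from the study)
line_days, pt_days, infections = 412, 650, 3
print(f"utilization ratio = {device_utilization_ratio(line_days, pt_days):.2f}")
print(f"BSI rate = {bsi_rate_per_1000(infections, line_days):.1f} per 1,000 line-days")
```

Automating the numerator and denominator from electronic data, as the abstract describes, replaces the manual daily counting of lines that these ratios otherwise require.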