We sought to determine whether increased antimicrobial use (AU) at the onset of the coronavirus disease 2019 (COVID-19) pandemic was driven by greater AU in COVID-19 patients only, or whether AU also increased in non–COVID-19 patients.
In this retrospective observational ecological study from 2019 to 2020, we stratified inpatients by COVID-19 status and determined relative percentage differences in median monthly AU in COVID-19 patients versus non–COVID-19 patients during the COVID-19 period (March–December 2020) and the pre–COVID-19 period (March–December 2019). We also determined relative percentage differences in median monthly AU in non–COVID-19 patients during the COVID-19 period versus the pre–COVID-19 period. Statistical significance was assessed using Wilcoxon signed-rank tests.
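A minimal sketch of the comparison described above, using hypothetical monthly AU values (the study's actual data and analysis software are not specified here). Significance testing of the paired monthly values could use a Wilcoxon signed-rank test (e.g., `scipy.stats.wilcoxon`):

```python
from statistics import median

def relative_pct_diff(current, reference):
    """Relative % difference in median monthly AU:
    100 * (median(current) - median(reference)) / median(reference)."""
    m_cur, m_ref = median(current), median(reference)
    return 100.0 * (m_cur - m_ref) / m_ref

# Hypothetical monthly AU values (e.g., days of therapy per 1,000 patient days)
covid_period = [110, 120, 130, 125, 115]
pre_covid_period = [100, 100, 100, 105, 95]
print(round(relative_pct_diff(covid_period, pre_covid_period), 1))  # 20.0
```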
The study was conducted in 3 acute-care hospitals in Chicago, Illinois.
Facility-wide AU for broad-spectrum antibacterial agents predominantly used for hospital-onset infections was significantly greater in COVID-19 patients versus non–COVID-19 patients during the COVID-19 period (with relative increases of 73%, 66%, and 91% for hospitals A, B, and C, respectively), and during the pre–COVID-19 period (with relative increases of 52%, 64%, and 66% for hospitals A, B, and C, respectively). In contrast, facility-wide AU for all antibacterial agents was significantly lower in non–COVID-19 patients during the COVID-19 period versus the pre–COVID-19 period (with relative decreases of 8%, 7%, and 8% in hospitals A, B, and C, respectively).
AU for broad-spectrum antimicrobials was greater in COVID-19 patients compared to non–COVID-19 patients at the onset of the pandemic. AU for all antibacterial agents in non–COVID-19 patients decreased in the COVID-19 period compared to the pre–COVID-19 period.
We quantified hospital-acquired coronavirus disease 2019 (COVID-19) during the early phases of the pandemic, and we evaluated solely temporal determinations of hospital acquisition.
Retrospective observational study during the early phases of the COVID-19 pandemic, March 1–November 30, 2020. We identified laboratory-detected severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) from 30 days before admission through discharge. All cases detected after hospital day 5 were categorized by chart review as community or unlikely hospital-acquired cases, or as possible or probable hospital-acquired cases.
The study was conducted in 2 acute-care hospitals in Chicago, Illinois.
The study included all hospitalized patients, including those on an inpatient rehabilitation unit.
Each hospital implemented infection-control precautions soon after identifying COVID-19 cases, including patient and staff cohort protocols, universal masking, and restricted visitation policies.
Among 2,667 patients with SARS-CoV-2, detection before hospital day 6 was most common (n = 2,612; 98%); detection during hospital days 6–14 was uncommon (n = 43; 1.6%); and detection after hospital day 14 was rare (n = 16; 0.6%). By chart review, most cases after day 5 were categorized as community acquired, usually because SARS-CoV-2 had been detected at a prior healthcare facility (68% of cases on days 6–14 and 53% of cases after day 14). The incidence rates of possible and probable hospital-acquired cases per 10,000 patient days were similar for ICU and non-ICU patients at hospital A (1.2 vs 1.3; difference, 0.1; 95% CI, −2.8 to 3.0) and hospital B (2.8 vs 1.2; difference, 1.6; 95% CI, −0.1 to 4.0).
Most patients were protected by early and sustained application of infection-control precautions modified to reduce SARS-CoV-2 transmission. Using solely temporal criteria to discriminate hospital versus community acquisition would have misclassified many “late onset” SARS-CoV-2–positive cases.
Ventilator-capable skilled nursing facilities (vSNFs) are critical to the epidemiology and control of antibiotic-resistant organisms. During an infection prevention intervention to control carbapenem-resistant Enterobacterales (CRE), we conducted a qualitative study to characterize vSNF healthcare personnel beliefs and experiences regarding infection control measures.
A qualitative study involving semistructured interviews.
One vSNF in the Chicago, Illinois, metropolitan region.
The study included 17 healthcare personnel representing management, nursing, and nursing assistants.
We used face-to-face, semistructured interviews to measure healthcare personnel experiences with infection control measures at the midpoint of a 2-year quality improvement project.
Healthcare personnel characterized their facility as a home-like environment, yet they recognized that it is a setting where germs were ‘invisible’ and potentially ‘threatening.’ Healthcare personnel described elaborate self-protection measures to avoid acquisition or transfer of germs to their own household. Healthcare personnel were motivated to implement infection control measures to protect residents, but many identified structural barriers such as understaffing and time constraints, and some reported persistent preference for soap and water.
Healthcare personnel in vSNFs, from management to frontline staff, understood germ theory and the significance of multidrug-resistant organism transmission. However, their ability to implement infection control measures was hampered by resource limitations and mixed beliefs regarding the effectiveness of infection control measures. Self-protection from acquiring multidrug-resistant organisms was a strong motivator for healthcare personnel both outside and inside the workplace, and it could explain variation in adherence to infection control measures such as higher hand hygiene adherence after resident care than before resident care.
To evaluate probiotics for the primary prevention of Clostridium difficile infection (CDI) among hospital inpatients.
A before-and-after quality improvement intervention comparing 12-month baseline and intervention periods.
A 694-bed teaching hospital.
We administered a multispecies probiotic comprising Lactobacillus acidophilus (CL1285), L. casei (LBC80R), and L. rhamnosus (CLR2) to eligible antibiotic recipients within 12 hours of initial antibiotic receipt through 5 days after the final dose. We excluded (1) all patients on neonatal, pediatric, and oncology wards; (2) all recipients of perioperative prophylactic antibiotics; (3) all those restricted from oral intake; and (4) those with pancreatitis, leukopenia, or posttransplant status. We defined CDI by symptoms plus C. difficile toxin detection by polymerase chain reaction. Our primary outcome was hospital-onset CDI incidence on eligible hospital units, analyzed using segmented regression.
The study included 251 CDI episodes among 360,016 patient days during the baseline and intervention periods, and the incidence rate was 7.0 per 10,000 patient days. The incidence rate was similar during baseline and intervention periods (6.9 vs 7.0 per 10,000 patient days; P=.95). However, compared to the first 6 months of the intervention, we detected a significant decrease in CDI during the final 6 months (incidence rate ratio, 0.6; 95% confidence interval, 0.4–0.9; P=.009). Testing intensity remained stable between the baseline and intervention periods: 19% versus 20% of stools tested were C. difficile positive by PCR, respectively. From medical record reviews, only 26% of eligible patients received a probiotic per the protocol.
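The reported rates follow the standard incidence calculation, which can be checked against the overall figures above (a minimal sketch; the study's segmented regression analysis is not reproduced here):

```python
def incidence_rate(events, patient_days, per=10_000):
    """Incidence rate expressed per `per` patient days."""
    return events / patient_days * per

# Worked check against the reported overall rate:
# 251 CDI episodes over 360,016 patient days
print(round(incidence_rate(251, 360_016), 1))  # 7.0 per 10,000 patient days
```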
Despite poor adherence to the protocol, CDI incidence decreased during the intervention; the reduction emerged approximately 6 months after the probiotic was introduced for primary prevention.
Because antibacterial history is difficult to obtain, especially when the exposure occurred at an outside hospital, we assessed whether infection-related diagnostic billing codes, which are more readily available through hospital discharge databases, could be used to infer prior antibacterial receipt.
Retrospective cohort study.
This study included 121,916 hospitalizations representing 78,094 patients across the 3 hospitals.
We obtained hospital inpatient data from 3 Chicago-area hospitals. Encounters were categorized as “infection” if at least 1 International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code indicated a bacterial infection. From medication administration records, we categorized antibacterial agents and calculated total therapy days using Centers for Disease Control and Prevention (CDC) definitions. We evaluated bivariate associations between infection encounters and 3 categories of antibacterial exposure: any, broad spectrum, or surgical prophylaxis. We constructed multivariable models to evaluate adjusted risk ratios for antibacterial receipt.
Of the 121,916 inpatient encounters (78,094 patients) across the 3 hospitals, 24% had an associated infection code, 47% received an antibacterial, and 13% received a broad-spectrum antibacterial. Infection-related ICD-9-CM codes were associated with a 2-fold increase in antibacterial administration compared to those lacking such codes (RR, 2.29; 95% confidence interval [CI], 2.27–2.31) and a 5-fold increased risk for broad-spectrum antibacterial administration (RR, 5.52; 95% CI, 5.37–5.67). Encounters with infection codes had 3 times the number of antibacterial days.
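For illustration, an unadjusted risk ratio from 2×2 counts can be computed as below; the counts shown are hypothetical, and the study's adjusted estimates came from multivariable models:

```python
def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted risk ratio: risk in the exposed group
    divided by risk in the unexposed group."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical counts: 60 of 100 infection-coded encounters received an
# antibacterial vs 30 of 100 encounters without an infection code.
print(risk_ratio(60, 100, 30, 100))  # 2.0
```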
Infection diagnostic billing codes are strong surrogate markers for prior antibacterial exposure, especially to broad-spectrum antibacterial agents; such an association can be used to enhance early identification of patients at risk of multidrug-resistant organism (MDRO) carriage at the time of admission.
To determine the impact of recurrent Clostridium difficile infection (RCDI) on patient behaviors following illness.
Using a computer algorithm, we searched the electronic medical records of 7 Chicago-area hospitals to identify patients with RCDI (2 episodes of CDI within 15 to 56 days of each other). RCDI was validated by medical record review. Patients were asked to complete a telephone survey. The survey included questions regarding general health, social isolation, symptom severity, emotional distress, and prevention behaviors.
In total, 119 patients completed the survey (32%). On average, respondents were 57.4 years old (standard deviation, 16.8); 57% were white, and ~50% reported hospitalization for CDI. At the time of their most recent illness, patients rated their diarrhea as high severity (58.5%) and their exhaustion as extreme (30.7%). Respondents indicated that they were very worried about getting sick again (41.5%) and about infecting others (31%). Almost 50% said that they had washed their hands more frequently (47%) and had increased their use of soap and water (45%) since their illness. Some of these patients (22%–32%) reported eating out less, avoiding certain medications and public areas, and increasing probiotic use. Most behavioral changes were unrelated to disease severity.
Having had RCDI appears to increase prevention-related behaviors in some patients. While some behaviors are appropriate (eg, handwashing), others are not supported by evidence of decreased risk and may negatively impact patient quality of life. Providers should discuss appropriate prevention behaviors with their patients and should clarify that other behaviors (eg, eating out less) will not affect their risk of future illness.
Risk adjustment is needed to fairly compare central-line–associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes.
Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Edition, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type and for ICU type plus patient case mix were built and compared using discrimination and standardized infection ratio (SIR). Hospitals were ranked by SIR for each model to examine and compare the changes in rank.
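A minimal sketch of the SIR calculation used to rank hospitals: observed events divided by the model-predicted expectation. The predicted risks shown are hypothetical; the study's expectations came from the fitted risk-adjustment models:

```python
def standardized_infection_ratio(observed, predicted_risks):
    """SIR = observed infections / sum of model-predicted infection
    risks (the expected count) for the same patient population."""
    expected = sum(predicted_risks)
    return observed / expected

# Hypothetical: 3 observed CLABSIs where the model expected 2.0
print(standardized_infection_ratio(3, [0.5, 0.5, 0.5, 0.5]))  # 1.5
```

An SIR above 1 indicates more infections than the model predicts for that hospital's case mix; an SIR below 1 indicates fewer.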
Overall, 85,849 ICU patients were analyzed and 162 (0.2%) developed CLABSI. The significant variables added to the ICU model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51–0.59) for the ICU-type model and 0.64 (95% CI, 0.60–0.69) for the ICU-type plus patient case-mix model. When the hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model.
Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than did the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals.
To determine which comorbid conditions are considered causally related to central-line–associated bloodstream infection (CLABSI) and surgical-site infection (SSI) based on expert consensus.
Using the Delphi method, we administered an iterative, 2-round survey to 9 infectious disease and infection control experts from the United States.
Based on our selection of components from the Charlson and Elixhauser comorbidity indices, 35 different comorbid conditions were rated from 1 (not at all related) to 5 (strongly related) by each expert separately for CLABSI and SSI, based on perceived relatedness to the outcome. To assign expert consensus on causal relatedness for each comorbid condition, all 3 of the following criteria had to be met at the end of the second round: (1) a majority (>50%) of experts rating the condition at 3 (somewhat related) or higher, (2) an interquartile range (IQR) ≤1, and (3) a standard deviation (SD) ≤1.
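The three consensus criteria can be expressed directly in code. The sketch below assumes a sample standard deviation and the exclusive quartile method, neither of which the abstract specifies, and the ratings shown are hypothetical:

```python
from statistics import stdev, quantiles

def has_consensus(ratings):
    """Apply the three consensus criteria: majority (>50%) rating
    the condition >= 3, IQR <= 1, and SD <= 1."""
    n = len(ratings)
    majority = sum(r >= 3 for r in ratings) / n > 0.5
    q1, _, q3 = quantiles(ratings, n=4)  # quartiles of the 1-5 ratings
    iqr_ok = (q3 - q1) <= 1
    sd_ok = stdev(ratings) <= 1
    return majority and iqr_ok and sd_ok

# Hypothetical ratings from 9 experts for one comorbid condition
print(has_consensus([3, 3, 3, 4, 4, 4, 4, 4, 5]))  # True
print(has_consensus([1, 1, 1, 2, 2, 3, 5, 5, 5]))  # False
```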
From round 1 to round 2, the IQR and SD, respectively, decreased for ratings of 21 of 35 (60%) and 33 of 35 (94%) comorbid conditions for CLABSI, and for 17 of 35 (49%) and 32 of 35 (91%) comorbid conditions for SSI, suggesting improvement in consensus among this group of experts. At the end of round 2, 13 of 35 (37%) and 17 of 35 (49%) comorbid conditions were perceived as causally related to CLABSI and SSI, respectively.
Our results yield a list of comorbid conditions that should be analyzed as risk factors for CLABSI and SSI and further explored for risk adjustment of these outcomes.
To compare interrater reliabilities for ventilator-associated event (VAE) surveillance, traditional ventilator-associated pneumonia (VAP) surveillance, and clinical diagnosis of VAP by intensivists.
A retrospective study nested within a prospective multicenter quality improvement study.
Intensive care units (ICUs) within 5 hospitals of the Centers for Disease Control and Prevention Epicenters.
Patients who underwent mechanical ventilation.
We selected 150 charts for review, including all VAEs and traditionally defined VAPs identified during the primary study and randomly selected charts of patients without VAEs or VAPs. Each chart was independently reviewed by 2 research assistants (RAs) for VAEs, 2 hospital infection preventionists (IPs) for traditionally defined VAP, and 2 intensivists for any episodes of pulmonary deterioration. We calculated interrater agreement using κ estimates.
The 150 selected episodes spanned 2,500 ventilator days. In total, 93–96 VAEs were identified by RAs; 31–49 VAPs were identified by IPs; and 29–35 VAPs were diagnosed by intensivists. Interrater reliability between RAs for VAEs was high (κ, 0.71; 95% CI, 0.59–0.81). Agreement between IPs using traditional VAP criteria was slight (κ, 0.12; 95% CI, −0.05 to 0.29). Agreement between intensivists was slight regarding episodes of pulmonary deterioration (κ, 0.22; 95% CI, 0.05–0.39) and fair regarding whether episodes of deterioration were attributable to clinically defined VAP (κ, 0.34; 95% CI, 0.17–0.51). The clinical correlation between VAE surveillance and intensivists’ clinical assessments was poor.
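As an illustration of the agreement statistic reported throughout, a minimal Cohen's κ for two raters making binary calls; the reviewer calls shown are hypothetical:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement)
    / (1 - chance agreement), for two raters over the same episodes."""
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    p_chance = sum(
        (rater1.count(lab) / n) * (rater2.count(lab) / n) for lab in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical VAE calls (1 = event, 0 = no event) by two reviewers
r1 = [1, 1, 0, 0, 1, 0, 0, 0]
r2 = [1, 1, 0, 0, 0, 1, 0, 0]
print(round(cohens_kappa(r1, r2), 2))
```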
Prospective surveillance using VAE criteria is more reliable than traditional VAP surveillance and clinical VAP diagnosis; the correlation between VAEs and clinically recognized pulmonary deterioration is poor.
Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance.
Retrospective cohort study.
Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers.
Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004–2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days).
We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37–0.51]) than for computer algorithm surveillance (κ [95% CI] [0.52–0.64]; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates.
Compared with traditional surveillance of bloodstream infections, computer automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
Infect Control Hosp Epidemiol 2014;35(12):1483–1490
Electronic surveillance for healthcare-associated infections (HAIs) is increasingly widespread. This is driven by multiple factors: a greater burden on hospitals to provide surveillance data to state and national agencies, financial pressures to be more efficient with HAI surveillance, the desire for more objective comparisons between healthcare facilities, and the increasing amount of patient data available electronically. Optimal implementation of electronic surveillance requires that specific information be available to the surveillance systems. This white paper reviews different approaches to electronic surveillance, discusses the specific data elements required for performing surveillance, and considers important issues of data validation.
Infect Control Hosp Epidemiol 2014;35(9):1083-1091
We compared strategies to increase the rate of influenza vaccination. A written standing-orders policy that enabled nurses to vaccinate patients was compared with augmentation of the standing-orders policy with either electronic opt-out orders for physicians or electronic reminders to nurses. Use of opt-out orders yielded the highest vaccination rate (12% of patients), followed by use of nursing reminders (6%); use of the standing-orders policy alone was ineffective.
We surveyed house staff who had participated in a trial that compared influenza vaccination strategies for inpatients. House staff who were exposed to computer-generated vaccination orders were more likely to report that they recommended vaccination to their inpatients and outpatients, compared with house staff who were not exposed to a vaccination intervention. Also, house staff did not recognize pregnant women as a high-priority population for influenza vaccination.
To evaluate infection control and hand hygiene understanding at 3 public hospitals, we surveyed 4,345 healthcare workers (HCWs) 3 times during a 5-year infection control intervention. The preference for alcohol hand rub for hand hygiene increased dramatically: from 14% to 34% among nurses, from 4.3% to 51% among physicians, and from 12% to 44% among allied HCWs. Study year, attendance at interactive infection control education sessions, infection control knowledge, and being a physician or allied HCW independently predicted a preference for alcohol hand rub.
To determine whether a multimodal intervention could improve adherence to hand hygiene and glove use recommendations and decrease the incidence of antimicrobial resistance in different types of healthcare facilities.
Prospective, observational study performed from October 1, 1999, through December 31, 2002. We monitored adherence to hand hygiene and glove use recommendations and the incidence of antimicrobial-resistant bacteria among isolates from clinical cultures. We evaluated trends in and predictors for adherence and preferential use of alcohol-based hand rubs, using multivariable analyses.
Three intervention hospitals (a 660-bed acute and long-term care hospital, a 120-bed community hospital, and a 600-bed public teaching hospital) and a control hospital (a 700-bed university teaching hospital).
At the intervention hospitals, we introduced or increased the availability of alcohol-based hand rub, initiated an interactive education program, and developed a poster campaign; at the control hospital, we only increased the availability of alcohol-based hand rub.
We observed 6,948 hand hygiene opportunities. The frequency of hand hygiene performance or glove use significantly increased during the study period at the intervention hospitals but not at the control hospital; the maximum quarterly frequency of hand hygiene performance or glove use at intervention hospitals (74%, 80%, and 77%) was higher than that at the control hospital (59%). By multivariable analysis, preferential use of alcohol-based hand rubs rather than soap and water for hand hygiene was more likely among workers at intervention hospitals compared with nonintervention hospitals (adjusted odds ratio, 4.6 [95% confidence interval, 3.3-6.4]) and more likely among physicians (adjusted odds ratio, 1.4 [95% confidence interval, 1.2-1.8]) than among nurses at intervention hospitals. A significantly reduced incidence of antimicrobial-resistant bacteria among isolates from clinical culture was found at a single intervention hospital, which had the greatest increase in the frequency of hand hygiene performance.
During a 3-year period, a multimodal intervention program increased adherence to hand hygiene recommendations, especially to the use of alcohol-based hand rubs. In one hospital, a concomitant reduction was found in the incidence of antimicrobial-resistant bacteria among isolates from clinical cultures.
We developed criteria for justifiable CVC use and evaluated CVC use in a public hospital. Unjustified CVC-days were more common for non-ICU patients compared with ICU patients. Also, insertion-site dressings were less likely to be intact on non-ICU patients. Interventions to reduce CVC-associated bloodstream infections should include non-ICU patients.
To evaluate whether a natural language processing system, SymText, was comparable to human interpretation of chest radiograph reports for identifying the mention of a central venous catheter (CVC), and whether use of SymText could detect patients who had a CVC.
To identify patients who had a CVC, we performed 2 cross-sectional surveys of hospitalized patients. We then obtained available reports from 104 patients who had a CVC during one of the surveys (ie, case-patients) and from 104 randomly selected patients who did not have a CVC (ie, control-patients).
A 600-bed public teaching hospital.
Chest radiograph reports were available from 124 of the 208 participants. Compared with human interpretation, SymText had a sensitivity of 95.8% and a specificity of 98.7%. The use of SymText to identify case- and control-patients resulted in a sensitivity of 43% and a specificity of 98%. Successful application of SymText varied significantly by venous insertion site (eg, a sensitivity of 78% for subclavian and a sensitivity of 3.7% for femoral). Twenty-six percent of the case-patients had a femoral CVC.
Compared with human interpretation, SymText performed well in interpreting whether a report mentioned a CVC. In patient populations with less frequent CVC placement in femoral veins, the sensitivity for CVC detection likely would be higher. Applying a natural language processing system to chest radiograph reports may be a useful adjunct to other data sources to automate detection of patients who had a CVC.
We observed adherence with hand hygiene in 14 units at 4 hospitals with varying sink-to-bed ratios (range, 1:1 to 1:6). Adherence was less than 50% in all units and there was no significant trend toward improved hand hygiene with increased sink-to-bed ratios.