To determine whether the order in which healthcare workers perform patient care tasks affects hand hygiene compliance.
Design:
For this retrospective analysis of data collected during the Strategies to Reduce Transmission of Antimicrobial Resistant Bacteria in Intensive Care Units (STAR*ICU) study, we linked consecutive tasks that healthcare workers (HCWs) performed into care sequences and identified task transitions: 2 consecutive task sequences and the intervening hand hygiene opportunity. We compared hand hygiene compliance rates and used multiple logistic regression to estimate the adjusted odds that HCWs transitioned in a direction that increased or decreased risk to patients if hand hygiene was not performed before the next task, and the adjusted odds that HCWs contaminated their hands.
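For illustration only, the following minimal sketch shows the kind of multiple logistic regression described above; the data frame, its column names, and the simulated data are hypothetical and are not drawn from the STAR*ICU dataset.

```python
# Sketch only: illustrates the type of multiple logistic regression described
# above. The data frame and its column names are hypothetical, not STAR*ICU data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
transitions = pd.DataFrame({
    "hcw_type": rng.choice(["nurse", "physician", "other"], size=n),
    "glove_use": rng.integers(0, 2, size=n),
})
# Simulated outcome: 1 = HCW moved from a dirtier to a cleaner task
logit = -0.6 + 0.4 * (transitions["hcw_type"] == "physician") + 0.2 * transitions["glove_use"]
transitions["dirty_to_clean"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Adjusted odds of transitioning dirtier-to-cleaner, with nurses as the reference group
model = smf.logit(
    "dirty_to_clean ~ C(hcw_type, Treatment('nurse')) + glove_use",
    data=transitions,
).fit(disp=False)
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```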
Setting:
The study was conducted in 17 adult surgical, medical, and medical-surgical intensive care units.
Participants:
HCWs in the STAR*ICU study units.
Results:
HCWs moved from cleaner to dirtier tasks during 5,303 transitions (34.7%) and from dirtier to cleaner tasks during 10,000 transitions (65.4%). Physicians (odds ratio [OR], 1.50; P < .0001) and other HCWs (OR, 2.15; P < .0001) were more likely than nurses to move from dirtier to cleaner tasks. Glove use was associated with moving from dirtier to cleaner tasks (OR, 1.22; P < .0001). Hand hygiene compliance was lower when HCWs transitioned from dirtier to cleaner tasks than when they transitioned in the opposite direction (adjusted OR, 0.93; P < .0001).
Conclusions:
HCWs did not organize patient care tasks in a manner that decreased risk to patients, and they were less likely to perform hand hygiene when transitioning from dirtier to cleaner tasks than the reverse. These practices could increase the risk of transmission or infection.
To compare the prevalence of select cardiovascular risk factors (CVRFs) in patients with mild cognitive impairment (MCI) versus lifetime history of major depression disorder (MDD) and a normal comparison group using baseline data from the Prevention of Alzheimer’s Dementia with Cognitive Remediation plus Transcranial Direct Current Stimulation (PACt-MD) study.
Design:
Baseline data from a multi-centered intervention study of older adults with MCI, history of MDD, or combined MCI and history of MDD (PACt-MD) were analyzed.
Setting:
Community-based, multi-centered study conducted across 5 academic sites in Toronto.
Participants:
Older adults with MCI, a history of MDD, or combined MCI and history of MDD, and healthy controls.
Measurements:
We examined the baseline distribution of smoking, hypertension, and diabetes in 3 groups of participants aged 60+ years in the PACt-MD cohort study: MCI (n = 278), MDD (n = 95), and healthy older controls (n = 81). Generalized linear models were fitted to examine the effect of CVRFs on MCI and MDD status, as well as on neuropsychological composite scores.
Results:
In unadjusted analysis, the odds of hypertension were higher in the MCI cohort than in healthy controls (p < .05); this association lost statistical significance after adjusting for age, sex, and education (p > .05). In a pooled cohort of participants with MCI or MDD, a history of hypertension was associated with lower composite executive function scores (p < .05) and lower overall composite neuropsychological test scores (p < .05).
Conclusions:
This study reinforces the importance of treating modifiable CVRFs, specifically hypertension, as a means of mitigating cognitive decline in patients with at-risk cognitive conditions.
To develop a fully automated algorithm that uses data from the Veterans’ Affairs (VA) electronic medical record (EMR) to identify deep-incisional surgical site infections (SSIs) after cardiac surgeries and total joint arthroplasties (TJAs) for use in research studies.
Design:
Retrospective cohort study.
Setting:
This study was conducted in 11 VA hospitals.
Participants:
Patients who underwent coronary artery bypass grafting or valve replacement between January 1, 2010, and March 31, 2018 (cardiac cohort) and patients who underwent total hip arthroplasty or total knee arthroplasty between January 1, 2007, and March 31, 2018 (TJA cohort).
Methods:
Relevant clinical information and administrative code data were extracted from the EMR. The outcomes of interest were mediastinitis, endocarditis, or deep-incisional or organ-space SSI within 30 days after surgery. Multiple logistic regression analysis with a repeated regular bootstrap procedure was used to select variables and to assign points in the models. Sensitivities, specificities, positive predictive values (PPVs), and negative predictive values (NPVs) were calculated by comparison with outcomes collected by the Veterans’ Affairs Surgical Quality Improvement Program (VASQIP).
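As a rough illustration of how the diagnostic accuracy measures above can be computed against a reference standard such as VASQIP, the following sketch uses placeholder counts rather than study data.

```python
# Sketch: confusion-matrix metrics for an SSI-detection algorithm compared
# against a reference standard (e.g., VASQIP chart review). Counts are
# placeholders, not the study's results.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV, and NPV from 2x2 counts."""
    return {
        "sensitivity": tp / (tp + fn),  # flagged among true SSIs
        "specificity": tn / (tn + fp),  # not flagged among non-SSIs
        "ppv": tp / (tp + fp),          # true SSIs among flagged cases
        "npv": tn / (tn + fn),          # non-SSIs among unflagged cases
    }

print(diagnostic_metrics(tp=30, fp=27, fn=19, tn=12900))
```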
Results:
Overall, 49 (0.5%) of the 13,341 cardiac surgeries were classified as mediastinitis or endocarditis, and 83 (0.6%) of the 12,992 TJAs were classified as deep-incisional or organ-space SSIs. With at least 60% sensitivity, the PPVs of the SSI detection algorithms after cardiac surgeries and TJAs were 52.5% and 62.0%, respectively.
Conclusions:
Considering the low prevalence of SSIs, our algorithms successfully identified a majority of patients with a true SSI while simultaneously reducing false-positive cases. As a next step, these algorithms will need to be validated in other hospital systems with EMRs.
Healthcare-associated infections (HAIs) remain a major challenge. Various strategies have been tried to prevent or control HAIs. Positive deviance, a strategy that has been used in the last decade, is based on the observation that a few at-risk individuals follow uncommon, useful practices and that, consequently, they experience better outcomes than their peers who share similar risks. We performed a systematic literature review to measure the impact of positive deviance in controlling HAIs.
Methods:
A systematic search strategy was used to search PubMed, CINAHL, Scopus, and Embase through May 2020 for studies evaluating positive deviance as a single intervention or as part of an initiative to prevent or control healthcare-associated infections. The risk of bias was evaluated using the Downs and Black score.
Results:
Of 542 articles potentially eligible for review, 14 were included for further analysis. All were observational, quasi-experimental (before-and-after intervention) studies. Hand hygiene was the outcome in 8 studies (57%), and an improvement was observed in association with implementation of positive deviance as a single intervention in all of them. Overall HAI rates were measured in 5 studies (36%), and positive deviance was associated with an observed reduction in 4 (80%) of them. Methicillin-resistant Staphylococcus aureus infections were evaluated in 5 studies (36%), and bundles that included positive deviance were successful in all of them.
Conclusions:
Positive deviance may be an effective strategy to improve hand hygiene and control HAIs. Further studies are needed to confirm this effect.
Background: Cardiovascular implantable electronic device (CIED) infections are highly morbid, yet infection control resources dedicated to preventing them are limited. Infection surveillance in outpatient care is also challenging because there are no infection reporting mandates, and monitoring patients after discharge is difficult. Objective: We therefore sought to develop a replicable electronic infection detection methodology that integrates text mining with structured data to expand surveillance to outpatient settings. Methods: Our methodology was developed to detect 90-day CIED infections. We tested an algorithm to accurately flag only cases with a true CIED-related infection using diagnostic and therapeutic data derived from the Veterans Affairs (VA) electronic medical record (EMR), including administrative data fields (visit and hospital stay dates, diagnoses, procedure codes), structured data fields (laboratory microbiology orders and results; pharmacy orders and dispensed name, quantity, and fill dates; vital signs), and text files (clinical notes organized by date and type containing unstructured text). We evenly divided a national dataset of CIED procedures from 2016–2017 to create development and validation samples. We iteratively tested various infection flag types to estimate a model predicting a high likelihood of a true infection, defined using chart review, to test criterion validity. We then applied the model to the validation data and reviewed cases with high and low likelihood of infection to assess performance. Results: The algorithm development sample included 9,606 CIED procedures in 67 VA hospitals. Iterative testing over 381 chart-reviewed cases with 47 infections produced a final model with a C-statistic of 0.95 (Table 1). We applied the model to the 9,606 CIED procedures in our validation sample and found 100 infections among the 245 cases the model identified as having a high likelihood of infection. We identified no infections among cases the model classified as having a low likelihood. The final model included congestive heart failure and coagulopathy as comorbidities, surgical site infection diagnosis, a blood or cardiac microbiology order, and keyword hits for infection diagnosis and history of infection from clinical notes. Conclusions: Evolution of infection prevention programs to include ambulatory and procedural areas is crucial as healthcare delivery is increasingly provided outside traditional settings. Our method of algorithm development and validation for outpatient healthcare-associated infections using EMR-derived data, including text-note searching, has broad application beyond CIED infections. Furthermore, as integrated healthcare systems employ EMRs in more outpatient settings, this approach to infection surveillance could be replicated in non-VA care.
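The following is a hedged sketch of the general approach described above, combining structured-data flags with keyword hits from clinical notes in a logistic model; the flag names, keyword list, and simulated data are hypothetical and do not reproduce the VA algorithm.

```python
# Illustrative sketch: combine structured-data flags with keyword hits from
# clinical notes to model the likelihood of a CIED infection. Flag names,
# keywords, and data are hypothetical, not the VA algorithm itself.
import re
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

INFECTION_KEYWORDS = re.compile(r"pocket infection|device infection|endocarditis", re.I)

def note_flag(notes: list[str]) -> int:
    """1 if any clinical note mentions an infection keyword."""
    return int(any(INFECTION_KEYWORDS.search(n) for n in notes))

print(note_flag(["CIED pocket infection suspected; blood cultures drawn."]))  # 1

# Hypothetical development sample: one row per CIED procedure.
rng = np.random.default_rng(1)
n = 500
cases = pd.DataFrame({
    "chf": rng.integers(0, 2, n),           # comorbidity flag
    "coagulopathy": rng.integers(0, 2, n),
    "ssi_dx": rng.integers(0, 2, n),        # surgical-site infection diagnosis code
    "micro_order": rng.integers(0, 2, n),   # blood/cardiac microbiology order
    "note_hit": rng.integers(0, 2, n),      # keyword hit from a search like note_flag()
})
score = -4 + 1.5 * cases["ssi_dx"] + 1.2 * cases["micro_order"] + 1.0 * cases["note_hit"]
cases["true_infection"] = rng.binomial(1, 1 / (1 + np.exp(-score)))  # chart-review label

model = smf.logit(
    "true_infection ~ chf + coagulopathy + ssi_dx + micro_order + note_hit",
    data=cases,
).fit(disp=False)
print(model.summary())  # discrimination (e.g., a C-statistic) would be assessed separately
```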
Background: Daptomycin is considered an effective alternative to vancomycin in patients with methicillin-resistant Staphylococcus aureus bloodstream infection (MRSA BSI). Objective: We investigated the real-world effectiveness of recommended daptomycin doses compared with vancomycin. Methods: This nationwide retrospective cohort study included patients from 124 Veterans’ Affairs hospitals who had an MRSA BSI and were initially treated with vancomycin during 2007–2014. Patients were categorized into 3 groups by daptomycin dose calculated using adjusted body weight: low (<6 mg/kg/day), standard (6–8 mg/kg/day), and high (≥8 mg/kg/day). International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes were used to identify other prior or concurrent infections and comorbidities. Multivariate Cox regression was used to compare 30-day all-cause mortality, the primary outcome, between patients receiving low-, standard-, or high-dose daptomycin and those continuing vancomycin. Hazard ratios (HRs) and 95% confidence intervals (CIs) were reported. Results: Of the 7,518 patients in the cohort, 683 (9.1%) were switched to daptomycin after initial treatment with vancomycin for their MRSA BSI episode. A low dose of daptomycin was administered to 181 patients (26.5%), a standard dose to 377 patients (55.2%), and a high dose to 125 patients (18.3%). Dose groups differed significantly in body mass index (BMI), presence of an osteomyelitis diagnosis, and diagnosis of diabetes. Thirty-day mortality was significantly lower in daptomycin patients than in those given vancomycin (11.3% vs 17.6%; P < .0001). Treatment with daptomycin was associated with improved 30-day survival compared with vancomycin (HR, 0.66; 95% CI, 0.53–0.84) after adjusting for age, BMI, diagnoses of endovascular infection, skin and soft-tissue infection, and osteomyelitis, hospitalization in the prior year, immunosuppression, diagnosis of diabetes, and vancomycin minimum inhibitory concentration (MIC). Treatment with a standard dose of daptomycin was associated with lower mortality compared with vancomycin (HR, 0.63; 95% CI, 0.46–0.86). The high- and low-dose daptomycin groups showed a trend toward improved 30-day survival compared with vancomycin (Fig. 1). In 2 separate sensitivity analyses excluding vancomycin patients, there was no difference in 30-day mortality between a standard dose and a high dose (HR, 1.01; 95% CI, 0.51–1.97). However, we detected a trend toward poorer survival with a low dose compared with a standard dose (HR, 1.21; 95% CI, 0.73–2.02). Conclusions: A standard dose of daptomycin was significantly associated with lower 30-day mortality compared with continued vancomycin treatment. Accurate dosing of daptomycin and avoidance of low-dose daptomycin should be part of good antibiotic stewardship practice.
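As an illustrative sketch only (not the study's analysis), the snippet below shows how daptomycin doses could be categorized per kilogram of adjusted body weight and how a Cox proportional hazards model for 30-day mortality could be fitted with the lifelines package; the data frame and covariates are invented.

```python
# Sketch only: daptomycin dose categorization and a Cox proportional hazards
# model for 30-day mortality, using the lifelines package. The data frame and
# its columns are hypothetical, not the VA cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def dose_category(total_daily_dose_mg: float, adjusted_weight_kg: float) -> str:
    """Categorize daptomycin dose (mg/kg/day) calculated with adjusted body weight."""
    mg_per_kg = total_daily_dose_mg / adjusted_weight_kg
    if mg_per_kg < 6:
        return "low"
    if mg_per_kg < 8:
        return "standard"
    return "high"

print(dose_category(total_daily_dose_mg=500, adjusted_weight_kg=75))  # "standard"

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "time_to_event_days": rng.integers(1, 31, n),  # follow-up capped at 30 days
    "died": rng.integers(0, 2, n),                 # 1 = all-cause death within 30 days
    "standard_dose": rng.integers(0, 2, n),        # vs continued vancomycin
    "age": rng.integers(40, 90, n),
    "bmi": rng.normal(28, 5, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event_days", event_col="died")
cph.print_summary()  # hazard ratios with 95% CIs
```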
Background: Catheter-related bloodstream infections (CRBSIs) are associated with significant morbidity and mortality. We aimed to determine the effectiveness of chlorhexidine (CHG) dressings in preventing incident CRBSI across settings and catheter types. Methods: We searched PubMed, Cochrane Library, CINAHL, Embase, and ClinicalTrials.gov through March 2019 for studies with the following inclusion criteria: (1) the population consisted of patients requiring short- or long-term catheters; (2) a CHG dressing was used in the intervention group and a nonantimicrobial dressing in the control group; and (3) CRBSI was reported as an outcome. Randomized controlled trials (RCTs) and quasi-experimental studies were included. We used random-effects models to obtain pooled risk ratio (pRR) estimates. Heterogeneity was evaluated with the I2 test and the Cochran Q statistic. Results: The review included 21 studies (17 RCTs). The use of CHG dressings was associated with a lower incidence of CRBSI (pRR, 0.63; 95% CI, 0.53–0.76). There was no evidence of publication bias. In stratified analyses, CHG dressings reduced CRBSI in adult ICU patients (9 studies; pRR, 0.52; 95% CI, 0.38–0.72) and in adults with onco-hematological disease (3 studies; pRR, 0.53; 95% CI, 0.35–0.81) but not in neonatal and pediatric populations (6 studies; pRR, 0.90; 95% CI, 0.57–1.40). When stratified by type of catheter, CHG dressings remained protective against CRBSI in short-term venous catheters (11 studies; pRR, 0.65; 95% CI, 0.48–0.88) but not in long-term catheters (3 studies; pRR, 0.76; 95% CI, 0.19–3.06). Other subgroup analyses are shown in Table 1. Conclusions: CHG dressings reduce the incidence of CRBSI, particularly in adult ICU patients and adults with an onco-hematological disease. Future studies need to evaluate the benefit of CHG in non-ICU settings, in neonatal and pediatric populations, and in long-term catheters.
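For readers interested in the pooling step, the following sketch computes a DerSimonian-Laird random-effects pooled risk ratio with Cochran Q and I2 from per-study estimates; the inputs are placeholders, not the studies in this review.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study-level risk ratios,
# with Cochran Q and I^2 for heterogeneity. Inputs are placeholder values,
# not the studies in this review.
import numpy as np
from scipy import stats

log_rr = np.log(np.array([0.55, 0.70, 0.48, 0.90, 0.62]))  # per-study risk ratios
var_log_rr = np.array([0.04, 0.06, 0.09, 0.05, 0.07])       # per-study variances

# Fixed-effect weights, Cochran Q, and I^2
w = 1 / var_log_rr
fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - fixed) ** 2)
df = len(log_rr) - 1
I2 = max(0.0, (Q - df) / Q) * 100

# Between-study variance (tau^2) and the random-effects pooled estimate
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var_log_rr + tau2)
pooled = np.sum(w_re * log_rr) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)

print(f"pooled RR {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), "
      f"Q={Q:.2f} (p={1 - stats.chi2.cdf(Q, df):.3f}), I2={I2:.0f}%")
```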
Background: Studies of interventions to decrease rates of surgical site infections (SSIs) must include thousands of patients to be statistically powered to demonstrate a significant reduction. Therefore, it is important to develop methodology to extract data available in the electronic medical record (EMR) to accurately measure SSI rates. Prior studies have created tools that optimize sensitivity to prioritize chart review for infection control purposes. However, for research studies, positive predictive value (PPV) with reasonable sensitivity is preferred to limit the impact of false-positive results on the assessment of intervention effectiveness. Using information from the prior tools, we aimed to determine whether an algorithm using data available in the Veterans Affairs (VA) EMR could accurately and efficiently identify deep-incisional or organ-space SSIs found in the VA Surgical Quality Improvement Program (VASQIP) dataset for cardiac and orthopedic surgery patients. Methods: We conducted a retrospective cohort study of patients who underwent cardiac surgery or total joint arthroplasty (TJA) at 11 VA hospitals between January 1, 2007, and April 30, 2017. We used EMR data recorded in the 30 days after surgery on inflammatory markers; microbiology; antibiotics prescribed after surgery; International Classification of Diseases (ICD) and Current Procedural Terminology (CPT) codes for reoperation for an infection-related purpose; and ICD codes for mediastinitis, prosthetic joint infection, and other SSIs. These metrics were used in an algorithm to determine whether a patient had a deep-incisional or organ-space SSI. Sensitivity, specificity, PPV, and negative predictive value (NPV) were calculated for the algorithm through comparison with 30-day SSI outcomes collected by nurse chart review in the VASQIP dataset. Results: Among the 11 VA hospitals, there were 18,224 cardiac surgeries and 16,592 TJAs during the study period. Of these, 20,043 were evaluated by VASQIP nurses and were included in our final cohort. Of the 8,803 cardiac surgeries included, manual review identified 44 (0.50%) mediastinitis cases. Of the 11,240 TJAs, manual review identified 71 (0.63%) deep-incisional or organ-space SSIs. Our algorithm identified 32 of the mediastinitis cases (73%) and 58 of the deep-incisional or organ-space SSI cases (82%). Sensitivity, specificity, PPV, and NPV are shown in Table 1. Of the patients whom our algorithm identified as having a deep-incisional or organ-space SSI, only 21% (the PPV) actually had an SSI after cardiac surgery or TJA. Conclusions: The algorithm can identify most complex SSIs (73%–82%), but other data are necessary to separate false-positive from true-positive cases and to improve the efficiency of case detection to support research questions.
Background: Antimicrobial prophylaxis is a proven strategy for reducing procedure-related infections; however, measuring this key quality metric typically requires manual review because of the way antimicrobial prophylaxis is documented in the electronic medical record (EMR). Our objective was to combine structured and unstructured data from the Veterans’ Health Administration (VA) EMR to create an electronic tool for measuring preincisional antimicrobial prophylaxis. We assessed this methodology in cardiac device implantation procedures. Methods: With clinician input and review of clinical guidelines, we developed a list of antimicrobial names recommended for the prevention of cardiac device infection. Next, we iteratively combined positive flags for an antimicrobial order or drug fill from structured data fields in the EMR with hits from text-string searches of antimicrobial names documented in electronic clinical notes to optimize an algorithm that flags preincisional antimicrobial use with high sensitivity and specificity. We trained the algorithm using existing data from fiscal years (FY) 2008–2015 in the VA Clinical Assessment Reporting and Tracking-Electrophysiology (CART-EP) database, which contains manually determined information about antimicrobial prophylaxis. We then validated the performance of the final version of the algorithm using a national cohort of VA patients who underwent cardiac device procedures in FY2016 or FY2017. Discordant cases underwent expert manual review to identify reasons for algorithm misclassification and potential future implementation barriers. Results: The CART-EP dataset included 2,102 procedures at 38 VA facilities with manually identified antimicrobial prophylaxis in 2,056 cases (97.8%). The final algorithm combining structured EMR fields and text-note search results flagged 2,048 of the CART-EP cases (97.4%). Algorithm validation identified antimicrobial prophylaxis in 16,334 of 19,212 cardiac device procedures (87.9%). Misclassifications occurred because of EMR documentation issues. Conclusions: We developed a methodology with high accuracy to measure guideline-concordant use of antimicrobial prophylaxis before cardiac device procedures using data fields present in modern EMRs, without relying on manual review. In addition to broad applicability in the VA and other healthcare systems with EMRs, this method could be adapted for other procedural areas in which antimicrobial prophylaxis is recommended but comprehensive measurement has been limited to resource-intensive manual review.
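A minimal sketch of the general idea, combining a structured medication-order flag with a text-string search of clinical notes, is shown below; the drug list, note text, and function are illustrative assumptions, not the CART-EP algorithm.

```python
# Minimal sketch: flag preincisional antimicrobial prophylaxis by combining a
# structured medication-order flag with a text search of clinical notes.
# The drug list, note text, and logic are illustrative, not the VA algorithm.
import re

PROPHYLAXIS_AGENTS = ["cefazolin", "vancomycin", "clindamycin"]  # example agents only
PATTERN = re.compile("|".join(PROPHYLAXIS_AGENTS), re.IGNORECASE)

def prophylaxis_flag(has_structured_order: bool, preprocedure_notes: list[str]) -> bool:
    """True if either the structured EMR fields or the note text indicate
    an antimicrobial was given before incision."""
    text_hit = any(PATTERN.search(note) for note in preprocedure_notes)
    return has_structured_order or text_hit

notes = ["Cefazolin 2 g IV administered in pre-op holding prior to incision."]
print(prophylaxis_flag(has_structured_order=False, preprocedure_notes=notes))  # True
```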
To evaluate the effectiveness of chlorhexidine (CHG) dressings to prevent catheter-related bloodstream infections (CRBSIs).
Design:
Systematic review and meta-analysis.
Methods:
We searched PubMed, CINAHL, EMBASE, and ClinicalTrials.gov for studies (randomized controlled and quasi-experimental trials) with the following criteria: patients with short- or long-term catheters; CHG dressings were used in the intervention group and nonantimicrobial dressings in the control group; CRBSI was an outcome. Random-effects models were used to obtain pooled risk ratios (pRRs). Heterogeneity was evaluated using the I2 test and the Cochran Q statistic.
Results:
In total, 20 studies (18 randomized controlled trials; 15,590 catheters), mainly performed in intensive care units (ICUs) and without evidence of publication bias, were included. CHG dressings significantly reduced CRBSIs (pRR, 0.71; 95% CI, 0.58–0.87), independent of the CHG dressing type used. Benefits were limited to adults with short-term central venous catheters (CVCs), including onco-hematological patients. For long-term CVCs, CHG dressings decreased exit-site/tunnel infections (pRR, 0.37; 95% CI, 0.22–0.64). Contact dermatitis was associated with CHG dressing use (pRR, 5.16; 95% CI, 2.09–12.70), especially in neonates and pediatric populations, in whom severe reactions occurred. In addition, 2 studies evaluated acquired CHG resistance and found none.
Conclusions:
CHG dressings prevent CRBSIs in adults with short-term CVCs, including patients with an onco-hematological disease. CHG dressings might reduce exit-site and tunnel infections in long-term CVCs. In neonates and pediatric populations, proof of CHG dressing effectiveness is lacking and there is an increased risk of serious adverse events. Future studies should investigate CHG effectiveness in non-ICU settings and monitor for CHG resistance.
Pregabalin is indicated for the treatment of generalised anxiety disorder (GAD) in adults in Europe. The efficacy and safety of pregabalin for the treatment of adults and elderly patients with GAD have been demonstrated in 6 of 7 short-term clinical trials of 4 to 8 weeks' duration.
Aims/objectives
To characterise the long-term efficacy and safety of pregabalin in subjects with GAD.
Methods
Subjects were randomised to double-blind treatment with high-dose pregabalin (450-600 mg/d), low-dose pregabalin (150-300 mg/d), or lorazepam (3-4 mg/d) for 3 months. Treatment was then extended with active drug or blinded placebo for a further 3 months.
Results
At 3 months, the mean change from baseline in Hamilton Anxiety Rating Scale (HAM-A) score for high- and low-dose pregabalin and for lorazepam ranged from -16.0 to -17.4. Mean change from baseline in Clinical Global Impression-Severity (CGI-S) score ranged from -2.1 to -2.3, and mean CGI-Improvement (CGI-I) scores were 1.9 for each active treatment group. At 6 months, improvement was retained in all 3 active-drug groups, even when switched to placebo. HAM-A and CGI-S change-from-baseline scores ranged from -14.9 to -19.0 and from -2.0 to -2.5, respectively. Mean CGI-I scores ranged from 1.5 to 2.3. The most frequently reported adverse events were insomnia, fatigue, dizziness, headache, and somnolence.
Conclusions
Efficacy was observed at 3 months, with maintained improvement in anxiety symptoms over 6 months of treatment. These results are consistent with previously reported efficacy and safety trials of shorter duration with pregabalin and lorazepam in subjects with GAD.
Pregabalin is indicated for the treatment of generalised anxiety disorder (GAD) in adults in Europe. When pregabalin is discontinued, a 1-week (minimum) taper is recommended to prevent potential discontinuation symptoms.
Aims/objectives
To evaluate whether a 1-week pregabalin taper, after 3 or 6 months of treatment, is associated with the development of discontinuation symptoms (including rebound anxiety) in subjects with GAD.
Methods
Subjects were randomised to double-blind treatment with low-dose (150-300 mg/d) or high-dose (450-600 mg/d) pregabalin or lorazepam (3-4 mg/d) for 3 months. After 3 months, ~25% of subjects in each group (per the original randomisation) underwent a double-blind, 1-week taper with substitution of placebo. The remaining subjects continued on active treatment for another 3 months and underwent the 1-week taper at 6 months.
Results
Discontinuation after 3 months was associated with low mean changes in Physician Withdrawal Checklist (PWC) scores (range: +1.4 to +2.3) and Hamilton Anxiety Rating Scale (HAM-A) scores (range: +0.9 to +2.3) for each pregabalin dose and lorazepam. Discontinuation after 6 months was associated with low mean changes in PWC scores (range: -1.0 to +3.0) and HAM-A scores (range: -0.8 to +3.0) for all active drugs and placebo. Incidence of rebound anxiety during pregabalin taper was low and did not appear related to treatment dose or duration.
Conclusions
A 1-week taper following 3 or 6 months of pregabalin treatment was not associated with clinically meaningful discontinuation symptoms as evaluated by changes in the PWC and HAM-A rating scales.
Clostridioides difficile infection (CDI) is the most frequently reported hospital-acquired infection in the United States. Bioaerosols generated during toilet flushing are a possible mechanism for the spread of this pathogen in clinical settings.
Objective:
To measure the bioaerosol concentration from toilets of patients with CDI before and after flushing.
Design:
In this pilot study, bioaerosols were collected 0.15 m, 0.5 m, and 1.0 m from the rims of the toilets in the bathrooms of hospitalized patients with CDI. Inhibitory, selective media were used to detect C. difficile and other facultative anaerobes. Room air was collected continuously for 20 minutes with a bioaerosol sampler before and after toilet flushing. Wilcoxon rank-sum tests were used to assess the difference in bioaerosol production before and after flushing.
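As a brief illustration of the statistical comparison described above, the sketch below applies a Wilcoxon rank-sum test to hypothetical pre- and post-flush particle concentrations (not study measurements).

```python
# Sketch: Wilcoxon rank-sum test comparing bioaerosol particle concentrations
# before and after toilet flushing. Values are placeholders, not study data.
from scipy.stats import ranksums

preflush_10um = [12, 8, 15, 9, 11, 7, 14, 10]      # particles per sample, hypothetical
postflush_10um = [20, 18, 25, 16, 22, 19, 30, 21]

stat, p_value = ranksums(preflush_10um, postflush_10um)
print(f"rank-sum statistic = {stat:.2f}, P = {p_value:.4f}")
```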
Setting:
Rooms of patients with CDI at University of Iowa Hospitals and Clinics.
Results:
Bacteria were positively cultured from 8 of 24 rooms (33%). In total, 72 preflush and 72 postflush samples were collected; 9 of the preflush samples (13%) and 19 of the postflush samples (26%) were culture positive for healthcare-associated bacteria. The predominant species cultured were Enterococcus faecalis, E. faecium, and C. difficile. Compared to the preflush samples, the postflush samples showed significant increases in the concentrations of the 2 large particle-size categories: 5.0 µm (P = .0095) and 10.0 µm (P = .0082).
Conclusions:
Bioaerosols produced by toilet flushing potentially contribute to hospital environmental contamination. Prevention measures (eg, toilet lids) should be evaluated as interventions to prevent toilet-associated environmental contamination in clinical settings.
We report a novel interface-engineering strategy that renders stainless steel (SS) a more versatile substrate for preparing electrodes for efficient hydrogen evolution. Our strategy involves growing carbon nanotubes (CNTs) by atmospheric-pressure chemical vapor deposition (APCVD) as the interface material on the SS surface. We optimized the procedure for preparing CNTs/SS and demonstrate that CNTs/SS prepared at 700 °C has higher activity for the hydrogen evolution reaction (HER) than samples prepared at other temperatures, which can be attributed to the larger number of defects and the higher pyrrolic-N content obtained at this temperature. Our strategy offers a new approach to employing SS as a substrate for preparing highly efficient electrodes and has the potential to be widely used in electrochemistry.
We examined Clostridioides difficile infection (CDI) prevention practices and their relationship with hospital-onset healthcare facility-associated CDI rates (CDI rates) in Veterans Affairs (VA) acute-care facilities.
Design:
Cross-sectional study.
Methods:
From January 2017 to February 2017, we conducted an electronic survey of CDI prevention practices and hospital characteristics in the VA. We linked survey data with CDI rate data for the period January 2015 to December 2016. We stratified facilities according to whether their overall CDI rate per 10,000 bed days of care was above or below the national VA mean CDI rate. We examined whether specific CDI prevention practices were associated with an increased risk of a CDI rate above the national VA mean CDI rate.
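For illustration, the sketch below computes an odds ratio with a Wald 95% CI from a 2x2 table of facilities cross-classified by self-reported CDI-rate change and by whether their rate was above the national mean; the counts are placeholders, not survey data.

```python
# Sketch: odds ratio (with a Wald 95% CI) for having a CDI rate above the
# national mean among facilities reporting an increase vs a decrease after
# implementing prevention practices. Counts are placeholders, not survey data.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int):
    """2x2 table: a, b = above/below the mean among 'increase' facilities;
    c, d = above/below the mean among 'decrease' facilities."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

print(odds_ratio_ci(a=14, b=8, c=24, d=36))
```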
Results:
All 126 facilities responded (100% response rate). Since implementing CDI prevention practices in July 2012, 60 of 123 facilities (49%) reported a decrease in CDI rates; 22 of 123 facilities (18%) reported an increase, and 41 of 123 (33%) reported no change. Facilities reporting an increase in the CDI rate (vs those reporting a decrease) after implementing prevention practices were 2.54 times more likely to have CDI rates that were above the national mean CDI rate. Whether a facility’s CDI rates were above or below the national mean CDI rate was not associated with self-reported cleaning practices, duration of contact precautions, availability of private rooms, or certification of infection preventionists in infection prevention.
Conclusions:
We found considerable variation in CDI rates. We were unable to identify which particular CDI prevention practices (i.e., bundle components) were associated with lower CDI rates.
We used a survey to characterize contemporary infection prevention and antibiotic stewardship program practices across 64 healthcare facilities, and we compared these findings to those of a similar 2013 survey. Notable findings include decreased frequency of active surveillance for methicillin-resistant Staphylococcus aureus, frequent active surveillance for carbapenem-resistant Enterobacteriaceae, and increased support for antibiotic stewardship programs.
Healthcare-associated infections (HAIs) are a significant burden on healthcare facilities. Universal gloving is a horizontal intervention to prevent transmission of pathogens that cause HAI. In this meta-analysis, we aimed to identify whether implementation of universal gloving is associated with decreased incidence of HAI in clinical settings.
Methods:
A systematic literature search was conducted to find all relevant publications using search terms for universal gloving and HAIs. Pooled incidence rate ratios (IRRs) and 95% confidence intervals (CIs) were calculated using random-effects models. Heterogeneity was evaluated using the Woolf test and the I2 test.
Results:
In total, 8 studies were included. These studies were moderately to substantially heterogeneous (I2 = 59%) and had varied results. Stratified analyses showed a nonsignificant association between universal gloving and incidence of methicillin-resistant Staphylococcus aureus (MRSA; pooled IRR, 0.94; 95% CI, 0.79–1.11) and vancomycin-resistant enterococci (VRE; pooled IRR, 0.94; 95% CI, 0.69–1.28). Studies that implemented universal gloving alone showed a significant association with decreased incidence of HAI (IRR, 0.77; 95% CI, 0.67–0.89), but studies implementing universal gloving as part of intervention bundles showed no significant association with incidence of HAI (IRR, 0.95; 95% CI, 0.86–1.05).
Conclusions:
Universal gloving may be associated with a small protective effect against HAI. Despite limited data, universal gloving may be considered in high-risk settings, such as pediatric intensive care units. Further research should be performed to determine the effects of universal gloving on a broader range of pathogens, including gram-negative pathogens.
Objectives: Guidelines on return to driving after traumatic brain injury (TBI) are scarce. Because driving requires the coordination of multiple cognitive, perceptual, and psychomotor functions, neuropsychological testing may offer an estimate of driving ability. To examine this, a meta-analysis of the relationship between neuropsychological testing and driving ability after TBI was performed. Methods: Hedges' g and 95% confidence intervals were calculated using a random-effects model. Analyses were performed on cognitive domains and individual tests. Meta-regressions examined the influence of study design, demographic, and clinical factors on effect sizes. Results: Eleven studies were included in the meta-analysis. Executive functions had the largest effect size (g = 0.60 [0.39–0.80]), followed by verbal memory (g = 0.49 [0.27–0.71]), processing speed/attention (g = 0.48 [0.29–0.67]), and visual memory (g = 0.43 [0.14–0.71]). Of the individual tests, Useful Field of View (UFOV) divided attention (g = 1.12 [0.52–1.72]), Trail Making Test B (g = 0.75 [0.42–1.08]), and UFOV selective attention (g = 0.67 [0.22–1.12]) had the largest effects. The effect sizes for the Choice Reaction Time test and Trail Making Test A were g = 0.63 (0.09–1.16) and g = 0.58 (0.10–1.06), respectively. Years post injury (β = 0.11 [0.02–0.21]) and age (β = 0.05 [0.009–0.09]) emerged as significant predictors of effect sizes (both p < .05). Conclusions: These results provide preliminary evidence of associations between neuropsychological test performance and driving ability after moderate to severe TBI and highlight moderating effects of demographic and clinical factors.
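For reference, here is a short sketch of how Hedges' g (a standardized mean difference with a small-sample correction) could be computed for a single hypothetical study comparing test scores between groups; the numbers are invented.

```python
# Sketch: Hedges' g (standardized mean difference with small-sample correction)
# for one hypothetical study comparing neuropsychological test scores between
# two groups. Numbers are invented for illustration only.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d scaled by Hedges' small-sample correction factor J."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    return d * j

# Hypothetical comparison of mean test scores between two groups of drivers
print(round(hedges_g(mean1=52.0, sd1=10.0, n1=30, mean2=45.0, sd2=11.0, n2=28), 2))
```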