We analyzed 2017 healthcare facility-onset (HO) vancomycin-resistant Enterococcus (VRE) bacteremia data reported to the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) multidrug-resistant organism and Clostridioides difficile reporting module to identify hospital-level factors that significantly predict HO-VRE bacteremia. We developed a risk-adjusted model that can be used to calculate the number of predicted HO-VRE bacteremia events in a facility, thus enabling the calculation of VRE standardized infection ratios (SIRs).
Acute-care hospitals reporting at least 1 month of 2017 VRE bacteremia data were included in the analysis. Various hospital-level characteristics were assessed to develop a best-fit model and subsequently derive the 2018 national and state SIRs.
In 2017, 470 facilities in 35 states participated in VRE bacteremia surveillance. Inpatient VRE community-onset prevalence rate, average length of patient stay, outpatient VRE community-onset prevalence rate, and presence of an oncology unit were all significantly associated (all 95% likelihood ratio confidence limits excluded the nominal value of zero) with HO-VRE bacteremia. The 2018 national SIR was 1.01 (95% CI, 0.93–1.09) with 577 HO bacteremia events reported.
The creation of an SIR enables national-, state-, and facility-level monitoring of VRE bacteremia while controlling for individual hospital-level factors. Hospitals can compare their VRE burden to a national benchmark to help them determine the effectiveness of infection prevention efforts over time.
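As a rough illustration of the arithmetic behind an SIR, the sketch below divides observed by predicted events and attaches a generic exact Poisson confidence interval. This is a textbook interval, not necessarily the NHSN's published procedure, and the predicted count is back-calculated from the reported SIR of 1.01 and 577 observed events.

```python
# Sketch: SIR = observed / predicted, with a generic exact Poisson CI.
# Not necessarily the exact procedure NHSN uses for published intervals.
from scipy.stats import chi2

def sir_with_ci(observed: int, predicted: float, alpha: float = 0.05):
    sir = observed / predicted
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * predicted) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * predicted)
    return sir, lower, upper

# 577 observed HO-VRE events; ~571 predicted events back-calculated from SIR = 1.01
print(sir_with_ci(577, 577 / 1.01))  # ~(1.01, 0.93, 1.09), matching the abstract
```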
This study aimed to investigate general factors associated with prognosis, regardless of the type of treatment received, for adults with depression in primary care.
We searched Medline, Embase, PsycINFO, and Cochrane Central (inception to 12/01/2020) for RCTs of depression treatment in primary care that included the Revised Clinical Interview Schedule (CIS-R), the most commonly used comprehensive measure of depressive and anxiety disorder symptoms and diagnoses in primary care depression RCTs. Two-stage random-effects meta-analyses were conducted.
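For readers unfamiliar with the two-stage approach, the sketch below illustrates stage 2: pooling per-study estimates with DerSimonian-Laird random effects. In stage 1 each trial would contribute its own adjusted association; the coefficients and variances below are invented placeholders, not the paper's data.

```python
# Sketch of stage 2 of a two-stage IPD meta-analysis:
# DerSimonian-Laird random-effects pooling of per-study estimates.
import numpy as np

def dersimonian_laird(estimates, variances):
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)                # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical stage-1 coefficients from 12 studies (not the paper's data)
est = [0.28, 0.35, 0.31, 0.25, 0.33, 0.30, 0.36, 0.27, 0.29, 0.34, 0.26, 0.32]
var = [0.004] * 12
print(dersimonian_laird(est, var))
```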
Twelve (n = 6024) of thirteen eligible studies (n = 6175) provided individual patient data. There was a 31% (95% CI 25 to 37) difference in depressive symptoms at 3–4 months per standard deviation increase in baseline depressive symptoms. Four additional factors (duration of anxiety, duration of depression, comorbid panic disorder, and a history of antidepressant treatment) were also independently associated with poorer prognosis. There was evidence that the difference in prognosis when these factors were combined could be of clinical importance. Adding these variables improved the variance explained in 3–4 month depressive symptoms from 16% using depressive symptom severity alone to 27%. Risk of bias (assessed with QUIPS) was low in all studies and quality (assessed with GRADE) was high. Sensitivity analyses did not alter our conclusions.
When adults seek treatment for depression, clinicians should routinely assess the duration of anxiety, the duration of depression, comorbid panic disorder, and any history of antidepressant treatment alongside depressive symptom severity. This could provide clinicians and patients with useful and desired information to elucidate prognosis and aid the clinical management of depression.
The rapid spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) throughout key regions of the United States in early 2020 placed a premium on timely, national surveillance of hospital patient censuses. To meet that need, the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), the nation’s largest hospital surveillance system, launched a module for collecting hospital coronavirus disease 2019 (COVID-19) data. We present time-series estimates of critical hospital capacity indicators from April 1 to July 14, 2020.
From March 27 to July 14, 2020, the NHSN collected daily data on hospital bed occupancy, number of hospitalized patients with COVID-19, and the availability and/or use of mechanical ventilators. Time series were constructed using multiple imputation and survey weighting to allow near–real-time daily national and state estimates to be computed.
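A minimal sketch of the two generic building blocks named here, survey weighting and multiple imputation, is shown below. The weights, imputed values, and within-imputation variances are invented, and this is not the NHSN production pipeline; it only illustrates a weighted total per completed dataset and Rubin's rules for combining the m estimates.

```python
# Sketch: survey-weighted totals from m imputed datasets, combined
# with Rubin's rules. All numbers are fabricated for illustration.
import numpy as np

def weighted_total(values, weights):
    """Survey-weighted total for one completed (imputed) dataset."""
    return float(np.sum(np.asarray(weights) * np.asarray(values)))

def rubins_rules(estimates, variances):
    """Combine point estimates and variances across m imputations."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()                                  # pooled point estimate
    total_var = var.mean() + (1 + 1 / m) * est.var(ddof=1)
    return qbar, np.sqrt(total_var)

# Hypothetical: 3 hospitals, 5 imputations of one day's COVID-19 census
weights = [120.0, 80.0, 200.0]                         # survey weights (assumed)
imputed = [[210, 150, 95], [205, 155, 98], [215, 148, 96],
           [208, 152, 97], [212, 149, 94]]
totals = [weighted_total(v, weights) for v in imputed]
print(rubins_rules(totals, [1.0e6] * 5))
```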
During the pandemic’s April peak in the United States, among an estimated 431,000 total inpatients, 84,000 (19%) had COVID-19. Although the number of inpatients with COVID-19 decreased from April to July, the proportion of occupied inpatient beds increased steadily. COVID-19 hospitalizations increased from mid-June in the South and Southwest regions after stay-at-home restrictions were eased. The proportion of inpatients with COVID-19 on ventilators decreased from April to July.
The NHSN hospital capacity estimates served as important, near–real-time indicators of the pandemic’s magnitude, spread, and impact, providing quantitative guidance for the public health response. Use of the estimates detected the rise of hospitalizations in specific geographic regions in June after they declined from a peak in April. Patient outcomes appeared to improve from early April to mid-July.
The majority of psychological treatment research is dedicated to investigating the effectiveness of cognitive behavioural therapy (CBT) across different conditions, populations, and contexts. We aimed to summarise the current systematic review evidence and evaluate the consistency of CBT's effect across different conditions. We included reviews of CBT randomised controlled trials in any population, condition, format, or context, with any type of comparator, published in English. We searched DARE, Cochrane, MEDLINE, EMBASE, PsycINFO, CINAHL, CDAS, and OpenGrey between 1992 and January 2019. Reviews were quality assessed and their data extracted and summarised. The effects upon health-related quality of life (HRQoL) were pooled within condition groups. Where across-condition heterogeneity was I² < 75%, we pooled effects using a random-effects panoramic meta-analysis. We summarised 494 reviews (221,128 participants) representing 14 of 20 physical and 13 of 20 mental conditions (per the World Health Organisation's International Classification of Diseases). Most reviews were lower quality (351/494), investigated face-to-face CBT (397/494), and involved adults (378/494). Few reviews included trials conducted in Asia, South America, or Africa (45/494). CBT produced a modest across-condition benefit on HRQoL (standardised mean difference 0.23; 95% confidence interval 0.14–0.33; I² = 32%). The associated prediction interval of −0.05 to 0.50 suggested that CBT will remain effective in conditions for which evidence is not currently available. While some gaps remain in the completeness of the evidence base, the consistent evidence for the general benefit that CBT offers should be recognised.
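The reported prediction interval can be approximately reproduced from the pooled SMD, its standard error, and the between-condition variance τ² using the standard random-effects formula. In the sketch below the SE is back-derived from the reported CI, while τ² and the number of pooled conditions k are assumed values chosen for illustration, not figures from the paper.

```python
# Sketch: 95% prediction interval for a random-effects pooled SMD
# (Higgins/Thompson/Spiegelhalter formula). Inputs are approximations.
import numpy as np
from scipy.stats import t

def prediction_interval(pooled, se, tau2, k, level=0.95):
    tcrit = t.ppf(0.5 + level / 2, df=k - 2)
    half = tcrit * np.sqrt(tau2 + se**2)
    return pooled - half, pooled + half

se = (0.33 - 0.14) / (2 * 1.96)   # back-derived from the reported 95% CI
tau2 = 0.015                      # assumed between-condition variance
print(prediction_interval(0.23, se, tau2, k=20))  # roughly (-0.05, 0.50)
```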
To evaluate differences in antibiotic prescribing across levels of resident training and attending physician types.
Observational, retrospective study.
Tertiary-care, academic medical center in Madison, Wisconsin.
We measured antibiotic utilization from January 1, 2016, through December 31, 2018, in our general medicine (GM) and hospitalist services. The GM1 service is staffed by outpatient internal medicine physicians, the GM2 service is staffed by geriatricians and hospitalists, and the GM3 service is staffed by only hospitalists. The GMA service is led by junior resident physicians, and the GMB service is led by senior resident physicians. We measured utilization using days of therapy (DOT) per 1,000 patient days (PD). In a secondary analysis based on antibiotic spectrum, we used average DOT per 1,000 PD.
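As a worked example of the primary metric, the sketch below computes days of therapy per 1,000 patient days from raw counts; the counts themselves are hypothetical.

```python
# Sketch: days of therapy (DOT) per 1,000 patient days (PD).
def dot_per_1000_pd(days_of_therapy: int, patient_days: int) -> float:
    return 1000 * days_of_therapy / patient_days

# Hypothetical counts for one service over one period
print(dot_per_1000_pd(days_of_therapy=950, patient_days=1414))  # ~671.9
```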
Teaching services prescribed more antibiotics than nonteaching services (671.6 vs 575.2 DOT per 1,000 PD; P < .0001). Junior resident–led services used more antibiotics than senior resident–led services (740.9 vs 510.0 DOT per 1,000 PD; P < .0001). Overall, antibiotic prescribing was numerically similar across attending physician backgrounds. A secondary analysis showed that the GM services prescribed more broad-spectrum, anti-MRSA, and antipseudomonal antibiotics than the hospitalist services. GM junior resident–led services prescribed more broad-spectrum, anti-MRSA, and antipseudomonal therapy than their senior counterparts.
Antibiotics were prescribed at a significantly higher rate in services associated with trainees than those without. Services led by a junior resident physician prescribed antibiotics at a significantly higher rate than services led by a senior resident. Interventions to reduce unnecessary antibiotic exposure should be targeted toward resident physicians, especially junior trainees.
Background: The NHSN is the nation’s largest surveillance system for healthcare-associated infections. Since 2011, acute-care hospitals (ACHs) have been required to report intensive care unit (ICU) central-line–associated bloodstream infections (CLABSIs) to the NHSN pursuant to CMS requirements. In 2015, this requirement was extended to general medical, surgical, and medical-surgical wards. Also in 2015, the NHSN implemented a repeat infection timeframe (RIT) requiring that repeat CLABSIs, in the same patient and admission, be excluded if onset was within 14 days. This analysis is the first at the national level to describe repeat CLABSIs. Methods: Index CLABSIs reported in ACH ICUs and select wards during 2015–2018 were included, in addition to repeat CLABSIs occurring at any location during the same period. CLABSIs were stratified into 2 groups: single and repeat CLABSIs. The repeat CLABSI group included the index CLABSI and subsequent CLABSI(s) reported for the same patient. Up to 5 CLABSIs were included for a single patient. Pathogen analyses were limited to the first pathogen reported for each CLABSI, which is considered the most important cause of the event. Likelihood ratio χ² tests were used to determine differences in proportions. Results: Of the 70,214 CLABSIs reported, 5,983 (8.5%) were repeat CLABSIs. Of 3,264 nonindex CLABSIs, 425 (13%) were identified in non-ICU or nonselect ward locations. Staphylococcus aureus was the most common pathogen in both the single and repeat CLABSI groups (14.2% and 12%, respectively) (Fig. 1). Compared to all other pathogens, CLABSIs reported with Candida spp were less likely in a repeat CLABSI event than in a single CLABSI event (P < .0001). Insertion-related organisms were more likely to be associated with single CLABSIs than repeat CLABSIs (P < .0001) (Fig. 2). Conversely, Enterococcus spp, Klebsiella pneumoniae, and K. oxytoca were more likely to be associated with repeat CLABSIs than single CLABSIs (P < .0001). Conclusions: This analysis highlights differences in aggregate pathogen distributions between single and repeat CLABSIs. Assessing the pathogens associated with repeat CLABSIs may offer another way to assess the success of CLABSI prevention efforts (eg, clean insertion practices). Pathogens such as Enterococcus spp and Klebsiella spp demonstrate a greater association with repeat CLABSIs, so prevention efforts focused on these organisms may warrant greater attention and could reduce the likelihood of repeat CLABSIs. Additional analysis of patient-specific pathogens identified in the repeat CLABSI group may yield further clarification.
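The likelihood ratio χ² comparison of pathogen proportions can be reproduced with SciPy's G-test option; the 2×2 cell counts below are hypothetical stand-ins, not figures from the abstract.

```python
# Sketch: likelihood-ratio (G-test) chi-square on a 2x2 table of
# pathogen counts by CLABSI group. Cell counts are hypothetical.
from scipy.stats import chi2_contingency

table = [
    [9_600, 450],      # Candida spp: single vs repeat group (made up)
    [54_600, 5_500],   # all other pathogens: single vs repeat (made up)
]
g, p, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(g, p)
```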
Background: The NHSN has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted, standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Despite the inclusion of test type in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017, through June 30, 2018. Methods: Calendar quarters for which CDI test type was reported as NAAT (including NAAT, glutamate dehydrogenase [GDH]+NAAT, and GDH+EIA followed by NAAT if discrepant) or EIA (including EIA and GDH+EIA) were selected. HO-CDI SIRs were calculated for facility-wide inpatient locations. We conducted 2 analyses: (1) Among hospitals that did not switch test type, we compared the distributions of HO incidence rates and SIRs between those reporting NAAT and those reporting EIA. (2) Among hospitals that switched test type, we selected quarters with a stable switch pattern of 2 consecutive quarters each of EIA and NAAT (categorized as EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern. Results: Most hospitals did not switch test types (3,242, 89%), and 2,872 (89%) reported sufficient data to calculate SIRs, with 2,444 (85%) using NAAT. The crude pooled HO-CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram versus rates for NAAT (Fig. 1). The SIR distributions of NAAT and EIA overlapped substantially and covered a similar range of values (Fig. 1). Among hospitals with a switch pattern, hospitals were equally likely to have an increase or a decrease in their SIR (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124). Conclusions: The pattern of SIR distributions for both NAAT and EIA substantiates the soundness of the NHSN risk adjustment for CDI test types. Switching test type did not produce a consistent, statistically significant directional pattern in the SIR.
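The paired comparison of semiannual SIRs before and after a test-type switch reduces to a paired t test on within-hospital differences; the SIR pairs in the sketch below are fabricated for illustration.

```python
# Sketch: paired t test on pooled semiannual SIRs before/after a
# test-type switch. SIR pairs are fabricated, not NHSN data.
import numpy as np
from scipy.stats import ttest_rel

sir_eia  = np.array([0.80, 1.10, 0.95, 1.30, 0.70])  # EIA semesters
sir_naat = np.array([0.90, 1.05, 1.10, 1.20, 0.85])  # NAAT semesters
stat, p = ttest_rel(sir_naat, sir_eia)
print(np.mean(sir_naat - sir_eia), p)  # mean difference and P value
```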
Background: The CDC NHSN surveillance coverage includes central-line–associated bloodstream infections (CLABSIs) in acute-care hospital intensive care units (ICUs) and select patient-care wards across all 50 states. This surveillance enables the use of CLABSI data to measure time between events (TBE) as a potential metric to complement traditional incidence measures, such as the standardized infection ratio, and to track prevention progress. Methods: TBEs were calculated using 37,705 CLABSI events reported to the NHSN during 2015–2018 from medical, medical-surgical, and surgical ICUs as well as patient-care wards. The CLABSI TBE data were combined into 2 separate pairs of consecutive years for comparison: 2015–2016 (period 1) and 2017–2018 (period 2). To reduce length bias, CLABSI TBEs for period 2 were truncated at the period 1 maximum, excluding 1,292 CLABSI events. The medians of the CLABSI TBE distributions were compared across the 2 periods for each patient-care location. Quantile regression models stratified by location were used to account for factors independently associated with CLABSI TBE, such as hospital bed size and average length of stay, and to measure the adjusted shift in median CLABSI TBE. Results: The unadjusted median CLABSI TBE shifted significantly from period 1 to period 2 for the patient-care locations studied. The shift ranged from 20 to 75.5 days, with 95% CIs ranging from 10.2 to 32.8 and P < .0001 (Fig. 1). After accounting for the independent associations of CLABSI TBE with hospital bed size and average length of stay, the adjusted shift in median CLABSI TBE remained significant for each patient-care location, although it was attenuated by approximately 15% (Table 1). Conclusions: Differences in the unadjusted median CLABSI TBE between period 1 and period 2 for all patient-care locations demonstrate the feasibility of using TBE for setting benchmarks and tracking prevention progress. Furthermore, after adjustment for hospital bed size and average length of stay, a significant shift in median CLABSI TBE persisted among all patient-care locations, indicating that differences in patient populations alone likely do not account for differences in TBE. These findings warrant further exploration of potential shifts at additional quantiles, which would provide additional evidence that TBE can be used for setting benchmarks and can serve as a signal of CLABSI prevention progress.
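A median regression of TBE on facility covariates, as described, could be fitted with statsmodels' quantile regression; the data frame, column names, and simulated values below are stand-ins rather than NHSN variables.

```python
# Sketch: median (q = 0.5) regression of time between events (TBE) on
# facility covariates. Column names and data are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "bed_size": rng.integers(50, 800, 500),
    "avg_los": rng.uniform(3, 9, 500),
})
# Simulated TBE with a skewed error, loosely mimicking surveillance data
df["tbe"] = 40 + 0.02 * df.bed_size - 2.0 * df.avg_los + rng.exponential(25, 500)

fit = smf.quantreg("tbe ~ bed_size + avg_los", df).fit(q=0.5)
print(fit.params)  # adjusted median-TBE associations
```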
To describe pathogen distribution and rates for central-line–associated bloodstream infections (CLABSIs) from different acute-care locations during 2011–2017 to inform prevention efforts.
CLABSI data from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) were analyzed. Percentages and pooled mean incidence density rates were calculated for a variety of pathogens and stratified by acute-care location groups (adult intensive care units [ICUs], pediatric ICUs [PICUs], adult wards, pediatric wards, and oncology wards).
From 2011 to 2017, 136,264 CLABSIs were reported to the NHSN by adult and pediatric acute-care locations; adult ICUs and wards reported the most CLABSIs: 59,461 (44%) and 40,763 (30%), respectively. In 2017, the most common pathogens were Candida spp/yeast in adult ICUs (27%) and Enterobacteriaceae in adult wards, pediatric wards, oncology wards, and PICUs (23%–31%). Most pathogen-specific CLABSI rates decreased over time; exceptions were Candida spp/yeast rates in adult ICUs and Enterobacteriaceae rates in oncology wards, which increased, and Staphylococcus aureus rates in pediatric locations, which did not change.
The pathogens associated with CLABSIs differ across acute-care location groups. Pathogen-targeted prevention efforts, such as strategies aimed at preventing Candida spp/yeast and Enterobacteriaceae CLABSIs, might augment current prevention strategies and further reduce national rates.
In 2017, the Public Health England (PHE) South East Health Protection Team (HPT) was involved in the management of an outbreak of Mycobacterium bovis (the causative agent of bovine tuberculosis) in a pack of working foxhounds. This paper summarises the actions taken by the team in managing the public health aspects of the outbreak and the lessons learned for improving the management of future potential outbreaks. A literature search was conducted to identify relevant publications on M. bovis. Clinical notes from the PHE health protection database were reviewed and key points extracted. Animal and public health stakeholders involved in the management of the situation provided further evidence through unstructured interviews and personal communications. The PHE South East team initially provided ‘inform and advise’ letters to human contacts whilst awaiting laboratory confirmation of the infectious agent. Once M. bovis had been confirmed in the hounds, an in-depth risk assessment was conducted and contacts were stratified into risk pools. Eleven of the 20 exposed persons with the greatest risk of exposure were recommended for TB screening; one tested positive but had no evidence of active TB infection. The number of human contacts working with foxhound packs can be large and varied. HPTs should undertake a comprehensive risk assessment of all potential routes of exposure, involve all other relevant stakeholders from an early stage, and undertake regular risk assessments. Current guidance should be revised to account for the unique risks to human health posed by exposure to infected working dogs.
OBJECTIVES/SPECIFIC AIMS: New Beginnings is a 12-week community-based behavioral intervention for improving health, strength, and wellness through a holistic approach to coaching that supports lifestyle change. The program serves predominantly low-income, minority women. Given the substantial focus on exercise, including resistance training, we aimed to test whether pain at baseline is associated with program completion in a prospective cohort. METHODS/STUDY POPULATION: At entry to the New Beginnings program, women completed a survey that included a body map of sites at which they experienced pain on most days in the prior week. Using logistic regression, we independently tested the associations of the presence of pain, the total number of pain sites, and the grouped location of pain with program completion, assessing the following a priori candidate confounders: age, race/ethnicity, body mass index, and income. We also tested for an interaction of pain and age in influencing completion. RESULTS/ANTICIPATED RESULTS: Seventy-five percent of participants (185 of 247) completed the program. They had an average age of 44.2±11.7 years, weight of 244.5±115.4 pounds, and BMI of 41.3±18.2. Fifty-seven percent were African American and 3% were Hispanic. The majority reported preexisting pain (83%), with an average of 3.4±2.7 pain sites. Completers and noncompleters did not differ by total number of pain sites (p=0.2). Neither the presence of preexisting pain (odds ratio (OR)=1.3; 95% confidence interval (CI): 0.5–3.4) nor the number of pain sites (OR=1.0; 95% CI: 0.9–1.1) influenced program completion after adjusting for age, the sole confounder retained. Likewise, we observed no association of limb/joint pain (OR=1.1; 95% CI: 0.6–2.1) or back pain (OR=0.9; 95% CI: 0.5–1.6) with program completion. The association of pain with completion was not modified by age. DISCUSSION/SIGNIFICANCE OF IMPACT: While pain is believed to be a barrier to improving fitness, preexisting pain may not be a strong predictor of completing a holistic lifestyle intervention with a substantial exercise component. Rather, women’s commitment to making a healthy lifestyle change may drive program completion irrespective of preexisting pain. Addressing and accommodating pain-related modifications within exercise interventions promises to be more effective than excluding those with pain from participation.
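The adjusted odds ratios reported above come from a logistic model of completion on pain measures and age; a minimal sketch of that workflow on fabricated data follows.

```python
# Sketch: logistic regression of program completion on preexisting pain,
# adjusted for age. Data are fabricated; only the workflow is illustrated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 247
df = pd.DataFrame({
    "pain": rng.binomial(1, 0.83, n),            # ~83% with preexisting pain
    "age": rng.normal(44.2, 11.7, n),
})
# Simulated outcome with weak pain and age effects
logit_p = -0.5 + 0.25 * df.pain + 0.02 * (df.age - 44.2)
df["completed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("completed ~ pain + age", df).fit(disp=False)
print(np.exp(fit.params))       # odds ratios
print(np.exp(fit.conf_int()))   # 95% CIs
```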
Training for the clinical research workforce does not sufficiently prepare workers for today’s scientific complexity; these deficiencies may be ameliorated with improved, competency-based training. The Enhancing Clinical Research Professionals’ Training and Qualifications (ECRPTQ) project developed competency standards for principal investigators and clinical research coordinators.
Clinical and Translational Science Awards representatives refined competency statements. Working groups developed assessments, identified training, and highlighted gaps.
Forty-eight competency statements in 8 domains were developed.
Training is primarily investigator focused, with few programs for clinical research coordinators. Training gaps are most apparent in new technologies and data management. There are no standardized assessments of competence.
The translation of discoveries to drugs, devices, and behavioral interventions requires well-prepared study teams. Execution of clinical trials remains suboptimal due to varied quality in design, execution, analysis, and reporting. A critical impediment is inconsistent, or even absent, competency-based training for clinical trial personnel.
In 2014, the National Center for Advancing Translational Sciences (NCATS) funded the project Enhancing Clinical Research Professionals’ Training and Qualifications (ECRPTQ) to address this deficit. The goal was to ensure that all personnel are competent to execute clinical trials. A phased structure was used.
This paper focuses on training recommendations in Good Clinical Practice (GCP). Leveraging input from all Clinical and Translational Science Award hubs, the following was recommended to NCATS: all investigators and study coordinators executing a clinical trial should understand GCP principles and undergo training every 3 years, with the training method meeting the minimum criteria identified by the International Conference on Harmonisation GCP.
We anticipate that industry sponsors will recognize such training, eliminating redundant training requests. We proposed metrics to be tracked that require further study; a separate task force was convened to define recommendations for metrics to be reported to NCATS.
Catheter-associated urinary tract infections (CAUTIs) are among the most common hospital-acquired infections (HAIs). Reducing CAUTI rates has become a major focus of attention due to increasing public health concerns and reimbursement implications.
To implement and describe a multifaceted intervention to decrease CAUTIs in our ICUs with an emphasis on indications for obtaining a urine culture.
A project team composed of all critical care disciplines was assembled to address an institutional goal of decreasing CAUTIs. Interventions implemented between year 1 and year 2 included protocols recommended by the Centers for Disease Control and Prevention for placement, maintenance, and removal of catheters. Leaders from all critical care disciplines agreed to align routine culturing practice with American College of Critical Care Medicine (ACCCM) and Infectious Disease Society of America (IDSA) guidelines for evaluating a fever in a critically ill patient. Surveillance data for CAUTI and hospital-acquired bloodstream infection (HABSI) were recorded prospectively according to National Healthcare Safety Network (NHSN) protocols. Device utilization ratios (DURs), rates of CAUTI, HABSI, and urine cultures were calculated and compared.
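For reference, the two NHSN device metrics reported below reduce to simple ratios; the annual counts in this sketch are hypothetical but chosen to be consistent with the reported 2013 rates.

```python
# Sketch: NHSN device metrics. Counts are hypothetical.
def cauti_rate(cautis: int, catheter_days: int) -> float:
    return 1000 * cautis / catheter_days          # per 1,000 catheter days

def device_utilization_ratio(catheter_days: int, patient_days: int) -> float:
    return catheter_days / patient_days

print(cauti_rate(30, 10_000))                     # 3.0, as reported for 2013
print(device_utilization_ratio(10_000, 14_286))   # ~0.70
```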
The CAUTI rate decreased from 3.0 per 1,000 catheter days in 2013 to 1.9 in 2014. The DUR was 0.7 in 2013 and 0.68 in 2014. The HABSI rates per 1,000 patient days decreased from 2.8 in 2013 to 2.4 in 2014.
Effectively reducing ICU CAUTI rates requires a multifaceted and collaborative approach; stewardship of culturing was a key and safe component of our successful reduction efforts.