Background: The National Healthcare Safety Network (NHSN) is the nation’s largest surveillance system for healthcare-associated infections. Since 2011, acute-care hospitals (ACHs) have been required to report intensive care unit (ICU) central-line–associated bloodstream infections (CLABSIs) to the NHSN pursuant to CMS requirements. In 2015, this requirement was expanded to include general medical, surgical, and medical-surgical wards. Also in 2015, the NHSN implemented a repeat infection timeframe (RIT) requiring that repeat CLABSIs in the same patient and admission be excluded if onset occurred within 14 days. This analysis is the first at the national level to describe repeat CLABSIs. Methods: Index CLABSIs reported in ACH ICUs and select wards during 2015–2018 were included, in addition to repeat CLABSIs occurring at any location during the same period. CLABSIs were stratified into 2 groups: single and repeat CLABSIs. The repeat CLABSI group included the index CLABSI and subsequent CLABSI(s) reported for the same patient. Up to 5 CLABSIs were included for a single patient. Pathogen analyses were limited to the first pathogen reported for each CLABSI, which is considered the most important cause of the event. Likelihood ratio χ2 tests were used to determine differences in proportions. Results: Of the 70,214 CLABSIs reported, 5,983 (8.5%) were repeat CLABSIs. Of 3,264 nonindex CLABSIs, 425 (13%) were identified in non-ICU or non-select ward locations. Staphylococcus aureus was the most common pathogen in both the single and repeat CLABSI groups (14.2% and 12%, respectively) (Fig. 1). Compared to all other pathogens, CLABSIs reported with Candida spp were less likely in a repeat CLABSI event than in a single CLABSI event (P < .0001). Insertion-related organisms were more likely to be associated with single CLABSIs than with repeat CLABSIs (P < .0001) (Fig. 2). Conversely, Enterococcus spp, Klebsiella pneumoniae, and K. oxytoca were more likely to be associated with repeat CLABSIs than with single CLABSIs (P < .0001).
Conclusions: This analysis highlights differences in the aggregate pathogen distributions comparing single versus repeat CLABSIs. Assessing the pathogens associated with repeat CLABSIs may offer another way to assess the success of CLABSI prevention efforts (eg, clean insertion practices). Pathogens such as Enterococcus spp and Klebsiella spp demonstrate a greater association with repeat CLABSIs. Thus, instituting prevention efforts focused on these organisms may warrant greater attention and could impact the likelihood of repeat CLABSIs. Additional analysis of patient-specific pathogens identified in the repeat CLABSI group may yield further clarification.
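The likelihood ratio χ2 comparison of pathogen proportions described in the Methods can be sketched in a few lines. The counts below are invented for illustration (not the study’s data), and `lr_chi2_test` is a hypothetical helper name:

```python
import numpy as np
from scipy.stats import chi2

def lr_chi2_test(table):
    """Likelihood-ratio chi-square (G) test of independence for a 2x2 table.

    Rows: pathogen present / absent; columns: single / repeat CLABSI group.
    """
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    # Cells with a zero observed count contribute 0 to the statistic.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(table > 0, table * np.log(table / expected), 0.0)
    g = 2.0 * terms.sum()
    p = chi2.sf(g, df=1)
    return g, p

# Hypothetical counts: a pathogen seen in 50 of 500 single CLABSIs
# vs 10 of 500 repeat CLABSIs.
g, p = lr_chi2_test([[50, 10], [450, 490]])
```

A table whose rows are proportional to the margins yields G = 0 and P = 1, as expected for no association.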
Background: The National Healthcare Safety Network (NHSN) has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Despite the inclusion of test type in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017, through June 30, 2018. Methods: Calendar quarters for which CDI test type was reported as NAAT (including NAAT, glutamate dehydrogenase (GDH)+NAAT, and GDH+EIA followed by NAAT if discrepant) or EIA (including EIA and GDH+EIA) were selected. HO CDI SIRs were calculated for facility-wide inpatient locations. We conducted 2 analyses: (1) Among hospitals that did not switch their test type, we compared the distributions of HO incidence rates and SIRs between those reporting NAAT and those reporting EIA. (2) Among hospitals that switched their test type, we selected quarters with a stable switch pattern of 2 consecutive quarters each of EIA and NAAT (categorized as EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern. Results: Most hospitals did not switch test types (3,242, 89%); of these, 2,872 (89%) reported sufficient data to calculate SIRs, with 2,444 (85%) using NAAT. The crude pooled HO CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram relative to rates for NAAT (Fig. 1). The SIR distributions of NAAT and EIA overlapped substantially and covered a similar range of values (Fig. 1).
Among hospitals with a switch pattern, hospitals were equally likely to have an increase or a decrease in their SIR (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124). Conclusions: The pattern of SIR distributions for both NAAT and EIA substantiates the soundness of the NHSN risk adjustment for CDI test types. Switching test type did not produce a consistent, statistically significant directional pattern in the SIR.
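The paired t test used for the switch-pattern analysis can be reproduced on hypothetical pooled semiannual SIRs; the values below are invented for illustration and do not come from the NHSN data:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical pooled semiannual SIRs for six hospitals that switched
# from EIA to NAAT (one paired observation per hospital).
sir_eia  = np.array([0.80, 1.10, 0.95, 1.30, 0.70, 1.05])
sir_naat = np.array([0.90, 1.00, 1.10, 1.25, 0.85, 1.00])

mean_diff = (sir_naat - sir_eia).mean()
t_stat, p_value = ttest_rel(sir_naat, sir_eia)
```

A non-significant `p_value` here would mirror the abstract’s finding that switching test type produced no consistent directional change in the SIR.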
Background: Methicillin-resistant Staphylococcus aureus (MRSA) nasal colonization is a well-established risk factor for developing MRSA pneumonia. In previous studies, the MRSA nasal screening test has shown an excellent negative predictive value (NPV) for MRSA pneumonia in patients without exclusion criteria such as mechanical ventilation, hemodynamic instability, cavitary lesions, and underlying pulmonary disease. MRSA nasal screening can therefore be used as a stewardship tool to de-escalate broad antibiotic coverage, such as vancomycin. Objective: The purpose of this study was to determine whether implementation of an MRSA nasal screening questionnaire improves de-escalation of vancomycin for patients with pneumonia. Methods: A retrospective review was performed on 250 patients from October 2018 to January 2019 who received MRSA nasal screening because their prescriber chose only “respiratory” on the vancomycin dosing consult form. Data obtained included demographics and clinical outcomes. Statistical analyses were performed, and P < .05 was considered significant. Results: Of the 250 patients screened, only 19 (8%) were positive for MRSA, and 40% of patients met exclusion criteria. In the 149 patients without exclusion criteria, the MRSA nasal swab had a 98% NPV. Although not statistically significant, vancomycin days of therapy (DOT) were 1 day shorter in patients with negative swabs (3.49 days negative vs 4.58 days positive; P = .22). Vancomycin DOT was significantly reduced in pneumonia patients without exclusion criteria (3.17 days “no” vs 4.17 days “yes”; P = .037). Conclusions: The implementation of an electronic MRSA nasal screening questionnaire reduced vancomycin DOT in pneumonia patients at UAB Hospital. The MRSA nasal swab is an effective screening tool for antibiotic de-escalation, based on its 98% NPV for MRSA pneumonia, when used in the appropriate patient population.
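The 98% NPV reported above is simply the ratio of true negatives to all negative screens. A one-line sketch, using hypothetical counts consistent with (but not taken from) the study:

```python
def npv(true_negatives, false_negatives):
    """Negative predictive value of the nasal swab:
    P(no MRSA pneumonia | negative screen)."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts chosen to match a 98% NPV: 147 true negatives
# and 3 false negatives among patients with negative swabs.
value = npv(147, 3)
```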
Disclosures: Rachael Anne Lee reports a speaker honoraria from Prime Education, LLC.
Background: The CDC NHSN surveillance coverage includes central-line–associated bloodstream infections (CLABSIs) in acute-care hospital intensive care units (ICUs) and select patient-care wards across all 50 states. This surveillance enables the use of CLABSI data to measure time between events (TBE) as a potential metric to complement traditional incidence measures, such as the standardized infection ratio, and to track prevention progress. Methods: TBEs were calculated using 37,705 CLABSI events reported to the NHSN during 2015–2018 from medical, medical-surgical, and surgical ICUs as well as patient-care wards. The CLABSI TBE data were combined into 2 separate pairs of consecutive years of data for comparison, namely, 2015–2016 (period 1) and 2017–2018 (period 2). To reduce length bias, CLABSI TBEs for period 2 were truncated at the maximum TBE for period 1; 1,292 CLABSI events were thereby excluded. The medians of the CLABSI TBE distributions were compared over the 2 periods for each patient care location. Quantile regression models stratified by location were used to account for factors independently associated with CLABSI TBE, such as hospital bed size and average length of stay, and to measure the adjusted shift in median CLABSI TBE. Results: The unadjusted median CLABSI TBE shifted significantly from period 1 to period 2 for the patient care locations studied. The shift ranged from 20 to 75.5 days, all with 95% CIs ranging from 10.2 to 32.8, respectively, and P < .0001 (Fig. 1). After accounting for the independent associations of CLABSI TBE with hospital bed size and average length of stay, the adjusted shift in median CLABSI TBE remained significant for each patient care location, although it was reduced by ∼15% (Table 1). Conclusions: Differences in the unadjusted median CLABSI TBE between period 1 and period 2 for all patient care locations demonstrate the feasibility of using TBE for setting benchmarks and tracking prevention progress.
Furthermore, after adjusting for hospital bed size and average length of stay, a significant shift in the median CLABSI TBE persisted among all patient care locations, indicating that differences in patient populations alone likely do not account for differences in TBE. These findings regarding CLABSI TBEs warrant further exploration of potential shifts at additional quantiles, which would provide additional evidence that TBE is a metric that can be used for setting benchmarks and can serve as a signal of CLABSI prevention progress.
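The unadjusted shift in median TBE between two periods, and an interval estimate for it, can be sketched with simulated event data. The TBE samples below are synthetic, and the study itself used quantile regression rather than this simple percentile bootstrap:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical TBE samples (days between consecutive CLABSIs) for two
# reporting periods; longer TBEs indicate fewer, more widely spaced events.
tbe_p1 = rng.exponential(scale=40.0, size=500)
tbe_p2 = rng.exponential(scale=60.0, size=500)

shift = np.median(tbe_p2) - np.median(tbe_p1)

# Percentile-bootstrap 95% CI for the shift in median TBE.
boot = np.array([
    np.median(rng.choice(tbe_p2, tbe_p2.size))
    - np.median(rng.choice(tbe_p1, tbe_p1.size))
    for _ in range(2000)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A CI excluding zero would correspond to the significant median shifts the abstract reports.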
In seeking to frame reading as a multimedia event, this chapter looks back to a period in the late nineteenth century when book-makers sought in various ways to refashion their products as audiobooks, and so to undo some of the principles that characterise silent reading. In this way, the chapter elaborates on the familiar history of gramophonic storage media by uncovering a pre-history that stretches right back to the technology of Thomas Edison in the 1870s. This, then, is an experiment in media archaeology, which is alive both to forgotten and aborted attempts to make books talk, and to books that like to imagine in more vicarious ways a reading culture unencumbered by the false principles of an ‘audiovisual litany’, as Jonathan Sterne once put it. The chapter touches on a variety of material – by Edward Bellamy and Bram Stoker, and by the French science fiction writers Albert Robida and Jules Verne – and it does so with a view to showing how imaginative writers anticipated the future of sound media.
Surgical site infections (SSIs) are among the most common healthcare-associated infections in low- and middle-income countries. To encourage the establishment of actionable and standardized SSI surveillance in these countries, we propose simplified surveillance case definitions. Here, we use NHSN reports to explore the concordance of these simplified definitions with the NHSN definitions as the ‘reference standard.’
To describe pathogen distribution and rates for central-line–associated bloodstream infections (CLABSIs) from different acute-care locations during 2011–2017 to inform prevention efforts.
CLABSI data from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) were analyzed. Percentages and pooled mean incidence density rates were calculated for a variety of pathogens and stratified by acute-care location groups (adult intensive care units [ICUs], pediatric ICUs [PICUs], adult wards, pediatric wards, and oncology wards).
From 2011 to 2017, 136,264 CLABSIs were reported to the NHSN by adult and pediatric acute-care locations; adult ICUs and wards reported the most CLABSIs: 59,461 (44%) and 40,763 (30%), respectively. In 2017, the most common pathogens were Candida spp/yeast in adult ICUs (27%) and Enterobacteriaceae in adult wards, pediatric wards, oncology wards, and PICUs (23%–31%). Most pathogen-specific CLABSI rates decreased over time, excepting Candida spp/yeast in adult ICUs and Enterobacteriaceae in oncology wards, which increased, and Staphylococcus aureus rates in pediatric locations, which did not change.
The pathogens associated with CLABSIs differ across acute-care location groups. Learning how pathogen-targeted prevention efforts could augment current prevention strategies, such as strategies aimed at preventing Candida spp/yeast and Enterobacteriaceae CLABSIs, might further reduce national rates.
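The pooled mean incidence density rate used in this analysis is total CLABSIs divided by total central-line days across a stratum, expressed per 1,000 line days. A minimal sketch with illustrative numbers (not the NHSN data):

```python
def pooled_clabsi_rate(events, central_line_days):
    """Pooled mean incidence density: CLABSIs per 1,000 central-line days,
    summing events and line days over all locations in a stratum
    before dividing."""
    return 1000.0 * sum(events) / sum(central_line_days)

# Hypothetical stratum of three adult ICUs.
rate = pooled_clabsi_rate([4, 2, 6], [5000, 3000, 4000])
```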
We evaluated the effectiveness and cost-effectiveness of the Incredible Years® Teacher Classroom Management (TCM) programme as a universal intervention, given schools’ important influence on child mental health.
A two-arm, pragmatic, parallel group, superiority, cluster randomised controlled trial recruited three cohorts of schools (clusters) between 2012 and 2014, randomising them to TCM (intervention) or Teaching As Usual (TAU-control). TCM was delivered to teachers in six whole-day sessions, spread over 6 months. Schools and teachers were not masked to allocation. The primary outcome was teacher-reported Strengths and Difficulties Questionnaire (SDQ) Total Difficulties score. Random effects linear regression and marginal logistic regression models using Generalised Estimating Equations were used to analyse the outcomes. Trial registration: ISRCTN84130388.
Eighty schools (2075 children) were enrolled: 40 (1037 children) to TCM and 40 (1038 children) to TAU. Outcome data were collected at 9, 18, and 30 months for 96%, 89%, and 85% of children, respectively. The intervention reduced the SDQ Total Difficulties score at 9 months (mean (s.d.): 5.5 (5.4) in TCM v. 6.2 (6.2) in TAU; adjusted mean difference = −1.0; 95% CI −1.9 to −0.1; p = 0.03), but this effect did not persist at 18 or 30 months. Cost-effectiveness analysis suggested that TCM may be cost-effective compared with TAU at 30 months, but this result was associated with uncertainty, so no firm conclusions can be drawn. A priori subgroup analyses suggested that TCM is more effective for children with poor mental health.
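The trial’s primary analyses used random-effects regression and GEE. As a rough sketch of the same comparison that still respects the unit of randomisation, one can compare arms on school-level mean SDQ scores; all data below are synthetic, and `cluster_mean_comparison` is a hypothetical helper:

```python
import numpy as np
from scipy.stats import ttest_ind

def cluster_mean_comparison(scores, school_ids, arms):
    """Compare trial arms on cluster (school) mean SDQ scores — a simple
    alternative to mixed models for cluster-randomised data."""
    scores = np.asarray(scores, dtype=float)
    school_ids = np.asarray(school_ids)
    arms = np.asarray(arms)
    means = {0: [], 1: []}
    for school in np.unique(school_ids):
        mask = school_ids == school
        arm = int(arms[mask][0])  # arm is constant within a school
        means[arm].append(scores[mask].mean())
    diff = np.mean(means[1]) - np.mean(means[0])  # arm 1 minus arm 0
    t, p = ttest_ind(means[1], means[0])
    return diff, p

# Illustrative synthetic data: 4 schools, 3 children each; arm 1 = TCM.
scores  = [6, 7, 5, 6, 8, 7, 5, 6, 4, 5, 5, 6]
schools = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
arms    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
diff, p = cluster_mean_comparison(scores, schools, arms)
```

A negative `diff` corresponds to lower (better) Total Difficulties scores in the intervention arm.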
TCM provided a small, short-term improvement to children's mental health, particularly for children who are already struggling.
When claimants press their claims without counsel, they fail at virtually every stage of civil litigation and overwhelmingly fail to obtain meaningful access to justice. This research program harnesses psychological science to experimentally test a novel hypothesis: mainly, a claimant's pro se status itself sends a signal that biases decision making about the claimant and her claim. We conducted social psychological experiments with the public (N = 157), law students (N = 198), and employment discrimination lawyers (N = 39), holding the quality and merit of a Title VII sex discrimination case constant. In so doing, we examined whether a claimant's pro se status itself shapes stereotypes held about the claimant and biases decision making about settlement awards. These experiments reveal that pro se status influences stereotypes of claimants and settlement awards received. Moreover, the signaling effect of pro se status is exacerbated by socialization in the legal profession. Among law-trained individuals (i.e., law students and lawyers), a claimant's pro se status generates negative stereotypes about the claimant and these negative stereotypes explain the adverse effect of pro se status on decision making about settlement awards.
Corals of the Hawaiian Archipelago are well situated in the North Pacific Gyre (NPG) to record how bomb-produced radiocarbon has been sequestered and transported by the sea. While this signal can be traced accurately through time in reef-building corals and used to infer oceanographic processes and determine the ages of marine organisms, a comprehensive and validated record has been lacking for the Hawaiian Archipelago. In this study, a coral core from Kure Atoll in the northwestern Hawaiian Islands was used to create a high-resolution bomb 14C record for the years 1939–2002, and was then used with other 14C measurements in fish otoliths and seawater to explore differences and similarities in the bomb 14C signal throughout the Hawaiian Archipelago. The Kure Atoll sample series produced a well-defined bomb 14C curve that, with some exceptions, was similar to other coral 14C records from the Hawaiian Archipelago. Subtle differences in the coral 14C records across the region may be explained by the large-scale ocean circulation patterns and decadal cycles of the NPG. The most rapid increase of 14C, in the 1950s and 1960s, showed similar timing across the Hawaiian Archipelago and provides a robust basis for use of bomb 14C dating to obtain high-precision age determinations of marine organisms. Reference otoliths of juvenile fish demonstrated the use of the post-peak 14C decline period as a viable reference in the age validation of younger and more recently collected fishes, and effectively extended the utility of bomb 14C dating to the latest 30 yr.
Our knowledge of the universe comes from recording the photon and particle fluxes incident on the Earth from space. We thus require sensitive measurement across the entire energy spectrum, using large telescopes with efficient instrumentation located on superb sites. Technological advances and engineering constraints are nearing the point where we are recording as many photons arriving at a site as is possible. Major advances in the future will come from improving the quality of the site. The ultimate site is, of course, beyond the Earth’s atmosphere, such as on the Moon, but economic limitations prevent our exploiting this avenue to the degree that the scientific community desires. Here we describe an alternative, which offers many of the advantages of space for a fraction of the cost: the Antarctic Plateau.
Due to the wide bandgap and other key materials properties of 4H-SiC, SiC MOSFETs offer performance advantages over competing Si-based power devices. For example, SiC can more easily be used to fabricate MOSFETs with very high voltage ratings and with lower switching losses. Silicon carbide power MOSFET development has progressed rapidly since the market release of Cree’s 1200V 4H-SiC power MOSFET in 2011, owing to continued advancements in SiC substrate quality, epitaxial growth capabilities, and device processing. For example, high-quality epitaxial growth of thick, low-doped SiC has enabled the fabrication of SiC MOSFETs capable of blocking extremely high voltages (up to 15 kV), while dopant control for thin, highly doped epitaxial layers has helped enable low on-resistance 900 V SiC MOSFET production. Device design and processing improvements have resulted in lower MOSFET specific on-resistance for each successive device generation. SiC MOSFETs have been shown to have a long device lifetime, based on the results of accelerated lifetime testing such as high-temperature reverse-bias (HTRB) stress and time-dependent dielectric breakdown (TDDB) testing.
This paper describes the system architecture of a newly constructed radio telescope: the Boolardy Engineering Test Array, a prototype of the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. Phased array feed technology is used to form multiple simultaneous beams per antenna, providing astronomers with unprecedented survey speed. The test array described here is a six-antenna interferometer fitted with prototype signal-processing hardware capable of forming at least nine dual-polarisation beams simultaneously, allowing several square degrees to be imaged in a single pointed observation. The main purpose of the test array is to develop beamforming and wide-field calibration methods for use with the full telescope, but it will also be capable of limited early science demonstrations.
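The simultaneous beams of a phased array feed are formed by applying complex weights to the element signals and summing. A minimal narrowband delay-and-sum sketch for a hypothetical linear array; the geometry and wavelength below are illustrative, not the test array's actual configuration:

```python
import numpy as np

def steering_vector(positions_m, theta_rad, wavelength_m):
    """Narrowband response of a linear array to a plane wave from angle theta."""
    k = 2.0 * np.pi / wavelength_m
    return np.exp(1j * k * positions_m * np.sin(theta_rad))

wavelength = 0.21                           # ~1.4 GHz, a typical L-band value
positions = np.arange(9) * wavelength / 2   # 9 elements at half-wavelength spacing
theta0 = np.deg2rad(20.0)                   # desired beam direction

# Delay-and-sum ("conjugate") weights steer the beam toward theta0.
w = steering_vector(positions, theta0, wavelength) / positions.size

# Evaluate the normalised beam pattern over angle.
angles = np.deg2rad(np.linspace(-90, 90, 721))
pattern = np.abs([np.vdot(w, steering_vector(positions, a, wavelength))
                  for a in angles])
peak_deg = np.rad2deg(angles[np.argmax(pattern)])
```

Different weight vectors applied to the same element signals produce independent beams, which is how one feed yields several simultaneous fields of view.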
We define the notion of smooth supercritical compositional structures. Two well-known examples are compositions and graphs of given genus. The ‘parts’ of a graph are the subgraphs that are maximal trees. We show that large part sizes have asymptotically geometric distributions. This leads to asymptotically independent Poisson variables for numbers of various large parts. In many cases this leads to asymptotic formulas for the probability of being gap-free and for the expected values of the largest part sizes, number of distinct parts and number of parts of multiplicity k.
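For ordinary compositions, the geometric law of part sizes is easy to check empirically: a uniformly random composition of n corresponds to cutting each of the n − 1 internal gaps independently with probability 1/2, so a given part has size k with probability close to 2^(−k). A short simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_composition(n, rng):
    """Uniformly random composition of n: cut each of the n-1 gaps
    between unit cells independently with probability 1/2."""
    parts, current = [], 1
    for cut in rng.random(n - 1) < 0.5:
        if cut:
            parts.append(current)
            current = 1
        else:
            current += 1
    parts.append(current)
    return parts

# Empirical part-size distribution vs the geometric law 2**(-k).
sizes = np.concatenate([random_composition(200, rng) for _ in range(2000)])
frac_size_1 = np.mean(sizes == 1)   # should be close to 1/2
frac_size_2 = np.mean(sizes == 2)   # should be close to 1/4
```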