One of the most important novels of the eighteenth century, Sir Charles Grandison shaped the English courtship novel and was loved and admired by both Jane Austen and George Eliot. The book follows the life of Sir Charles, a man equal in virtue to Richardson's female paragons Clarissa and Pamela, and a response to the fallible protagonist Tom Jones in Fielding's popular satire of moralising novels. Forming part of the first full scholarly edition of Richardson's complete works, the comprehensive general and textual introductions significantly revise and advance understanding of the composition and printing history of Richardson's final novel, and reveal the central place of Sir Charles in the literature of the period. Including Richardson's Historical Index for the first time in any edition, the extensive annotations and expansive notes also give readers crucial context and provide scholars with paths to follow for future research.
Understanding the cognitive determinants of healthcare worker (HCW) behavior is important for improving the use of infection prevention and control (IPC) practices. Given a patient requiring only standard precautions, we examined the dimensions along which different populations of HCWs cognitively organize patient care tasks (ie, their mental models).
HCWs read a description of a patient and then rated the similarities of 25 patient care tasks from an infection prevention perspective. Using multidimensional scaling, we identified the dimensions (ie, characteristics of tasks) underlying these ratings and the salience of each dimension to HCWs.
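The multidimensional scaling step can be illustrated with a small numpy sketch. This is not the study's analysis: the task names, the ratings, and the use of classical (Torgerson) scaling are illustrative assumptions; the study used 25 tasks and HCW-provided similarity ratings.

```python
import numpy as np

# Hypothetical dissimilarity ratings (0 = identical, higher = more different)
# among 5 illustrative patient care tasks; the study used 25 tasks.
D = np.array([
    [0, 4, 7, 8, 3],
    [4, 0, 5, 7, 4],
    [7, 5, 0, 3, 6],
    [8, 7, 3, 0, 7],
    [3, 4, 6, 7, 0],
], dtype=float)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
B = -0.5 * J @ (D ** 2) @ J           # inner-product matrix from squared dissimilarities

# Coordinates along the top 3 dimensions (classical MDS / Torgerson scaling)
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:3]
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
```

The relative magnitude of the retained eigenvalues plays the role of dimension salience: larger eigenvalues correspond to dimensions that account for more of the rated dissimilarity.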
Adult inpatient hospitals across an academic hospital network.
In total, 40 HCWs, comprising infection preventionists and nurses from intensive care units, emergency departments, and medical-surgical floors, rated the similarity of tasks. To identify the meaning of each dimension, an additional 6 nurses rated each task in terms of specific task characteristics.
Each HCW population perceived patient care tasks to vary along 3 common dimensions; most salient was the perceived magnitude of infection risk to the patient in a task, followed by the perceived dirtiness and risk of HCW exposure to body fluids, and lastly, the relative importance of a task for preventing versus controlling an infection in a patient.
For a patient requiring only standard precautions, different populations of HCWs have similar mental models of how various patient care tasks relate to IPC. Techniques for eliciting mental models open new avenues for understanding and ultimately modifying the cognitive determinants of IPC behaviors.
Genome-wide association studies (GWAS) have successfully revealed genetic risk variants for schizophrenia (SCZ). However, the vast majority of GWAS largely comprise European samples. As a result, the derived polygenic risk scores (PRS) show decreased predictive power when applied to non-European populations.
A long-term scientific cooperation between the Charité Universitätsmedizin Berlin and the Hanoi Medical University aims to address this limitation by recruiting a large genetic cohort of comprehensively phenotyped schizophrenia patients and controls in Vietnam.
A pilot study was conducted at the Department of Psychiatry of the Hanoi Medical University in 2017. Data collection encompassed i) genome-wide SNP genotyping of 200 schizophrenia patients and 200 control subjects, ii) structured interviews to assess symptom severity (PANSS), and iii) clinical parameters (e.g., duration of illness, medication) and demography.
SCZ-PRS for the pilot sample (N = 400) were generated using different training data sets: i) European, ii) East Asian, and iii) mixed GWAS summary statistics from the Psychiatric Genomics Consortium’s latest discovery sample. The most variance explained was observed using the mixed discovery sample (R²liability = 0.053; p = 3.11 × 10⁻⁸; Pd < 0.5), followed by PRS based on the East Asian summary statistics (R²liability = 0.0503; p = 6.78 × 10⁻⁸; Pd < 1) and the European sample (R²liability = 0.0363; p = 4.26 × 10⁻⁶; Pd < 0.01).
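The core of PRS generation can be sketched in a few lines. The effect sizes and allele dosages below are made-up illustrative numbers, not values from the PGC summary statistics; in practice SNPs are first clumped and filtered at a p-value threshold (the Pd values above) before scoring.

```python
import numpy as np

# Hypothetical GWAS effect sizes (log odds ratios) for 5 SNPs that
# survived clumping and a p-value threshold.
effect_sizes = np.array([0.05, -0.02, 0.08, 0.01, -0.04])

# Hypothetical allele dosages (0, 1, or 2 copies of the effect allele)
# for 3 target-sample individuals.
dosages = np.array([
    [0, 1, 2, 1, 0],
    [2, 0, 1, 1, 1],
    [1, 1, 0, 2, 2],
])

# A polygenic risk score is the dosage-weighted sum of effect sizes.
prs = dosages @ effect_sizes
```

The scores are then tested against case-control status, and the variance explained on the liability scale (R²liability) is computed from that regression.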
With this pilot project we established an efficient recruitment, genotyping and data analysis pipeline. Our results corroborate previous findings indicating that transferability of PRS across populations depends on the ancestral composition of the initial discovery dataset. We therefore aim to expand data collection efforts in the future in order to improve risk prediction across diverse populations.
Group Name: The Emory COVID-19 Quality and Clinical Research Collaborative
Background: Patients hospitalized with COVID-19 are at risk of secondary infections—10%–33% develop bacterial pneumonia and 2%–6% develop bloodstream infection (BSI). We conducted a retrospective cohort study to identify the prevalence, microbiology, and outcomes of secondary pneumonias and BSIs in patients hospitalized with COVID-19. Methods: Patients aged ≥18 years with a positive SARS-CoV-2 real-time polymerase chain reaction assay admitted to 4 academic hospitals in Atlanta, Georgia, between February 15 and May 16, 2020, were included. We extracted electronic medical record data through June 16, 2020. Microbiology tests were performed according to standard protocols. Possible ventilator-associated pneumonia (PVAP) was defined according to Centers for Disease Control and Prevention (CDC) criteria. We assessed in-hospital mortality, comparing patients with and without infections using the χ2 test. SAS University Edition software was used for data analyses. Results: In total, 774 patients were included (median age, 62 years; 49.7% female; 66.6% black). In total, 335 patients (43.3%) required intensive care unit (ICU) admission, 238 (30.7%) required mechanical ventilation, and 120 (15.5%) died. Among 238 intubated patients, 65 (27.3%) had a positive respiratory culture, including 15 with multiple potential pathogens, for a total of 84 potential pathogens. The most common organisms were Staphylococcus aureus (29 of 84; 34.5%), Pseudomonas aeruginosa (16 of 84; 19.0%), and Klebsiella spp (14 of 84; 16.7%). Mortality did not differ between intubated patients with and without a positive respiratory culture (41.5% vs 35.3%; P = .37). Also, 5 patients (2.1%) had a CDC-defined PVAP (1.7 PVAPs per 1,000 ventilator days); none of them died. Among 536 (69.3%) nonintubated patients, 2 (0.4%) had a positive Legionella urine antigen and 1 had a positive respiratory culture (for S. aureus). 
Of 774 patients, 36 (4.7%) had BSI, including 5 with polymicrobial BSI (42 isolates total). Most BSIs (24 of 36; 66.7%) had ICU onset. The most common organisms were S. aureus (7 of 42; 16.7%), Candida spp (7 of 42; 16.7%), and coagulase-negative staphylococci (5 of 42; 11.9%); 12 (28.6%) were gram-negative. The most common source was central-line–associated BSI (17 of 36; 47.2%), followed by skin (6 of 36; 16.7%), lungs (5 of 36; 13.9%), and urine (4 of 36; 11.1%). Mortality was 50% in patients with BSI versus 13.8% without (P < .0001). Conclusions: In a large cohort of patients hospitalized with COVID-19, secondary infections were rare: 2% bacterial pneumonia and 5% BSI. The risk factors for these infections (intubation and central lines, respectively) and causative pathogens reflect healthcare delivery and not a COVID-19–specific effect. Clinicians should adhere to standard best practices for preventing and empirically treating secondary infections in patients hospitalized with COVID-19.
Background: Antibiotics targeted against Clostridioides difficile bacteria are necessary, but insufficient, to achieve a durable clinical response because they have no effect on C. difficile spores that germinate within a disrupted microbiome. ECOSPOR-III evaluated SER-109, an investigational, biologically derived microbiome therapeutic of purified Firmicute spores for treatment of rCDI. Herein, we present the interim analysis in the intent-to-treat (ITT) population at 8 and 12 weeks. Methods: Adults ≥18 years with rCDI (≥3 episodes in 12 months) were screened at 75 US and Canadian sites. CDI was defined as ≥3 unformed stools per day for <48 hours with a positive C. difficile assay. After completion of 10–21 days of vancomycin or fidaxomicin, adults with symptom resolution were randomized 1:1 to SER-109 (4 capsules × 3 days) or matching placebo and stratified by age (≥ or <65 years) and antibiotic received. Primary objectives were safety and efficacy at 8 weeks. The primary efficacy endpoint was rCDI (recurrent toxin-positive diarrhea requiring treatment); secondary endpoints included efficacy at 12 weeks after dosing. Results: Overall, 287 participants were screened and 182 were randomized (59.9% female; mean age, 65.5 years). The most common reason for screen failure was a negative C. difficile toxin assay. A significantly lower proportion of SER-109 participants had rCDI after dosing compared to placebo at week 8 (11.1% vs 41.3%, respectively; relative risk [RR], 0.27; 95% confidence interval [CI], 0.15–0.51; P < .001). Efficacy rates were significantly higher with SER-109 vs placebo in both stratified age groups (Figure 1). SER-109 was well tolerated, with a safety profile similar to placebo. The most common treatment-emergent adverse events (TEAEs) were gastrointestinal and were mainly mild to moderate. No serious TEAEs, infections, deaths, or drug discontinuations were deemed related to study drug.
Conclusions: SER-109, an oral live microbiome therapeutic, achieved high rates of sustained clinical response with a favorable safety profile. By enriching for Firmicute spores, SER-109 achieves high efficacy while mitigating risk of transmitting infectious agents, beyond donor screening alone. SER-109 represents a major paradigm shift in the clinical management of patients with recurrent CDI. Clinicaltrials.gov Identifier NCT03183128. These data were previously presented as a late breaker at American College of Gastroenterology 2020.
Background: At our institution, concern for false-negative nasopharyngeal testing for SARS-CoV-2 at the onset of illness led to a general policy of retesting inpatients at 48 hours. For such patients, 2 negative SARS-CoV-2 PCR test results were required prior to discontinuation of COVID-19 control precautions. To assess the utility of routine repeat testing, we analyzed patients presenting to our hospital who initially tested negative for SARS-CoV-2 but were found to be positive on repeated testing. Methods: All inpatients with symptoms concerning for COVID-19 were tested via nasopharyngeal sample for SARS-CoV-2 by PCR on admission. Patients with continued symptoms and no alternative diagnosis were retested 48 hours later. Testing was performed using either the Roche cobas SARS-CoV-2 RT-PCR assay or the Cepheid Xpert Xpress SARS-CoV-2 test. We retrospectively analyzed data from patients tested between March 17, 2020, and May 10, 2020, whose false-negative SARS-CoV-2 PCR test results were subsequently confirmed positive 48 hours later. We evaluated demographic information, days since symptom onset, symptomatology, chest imaging, vital sign trends, and the overall clinical course of each patient. Results: During the study period, 14,683 tests were performed, almost half of which (n = 7,124) were performed through the ED and in the inpatient setting. Of 2,283 patients who tested positive for SARS-CoV-2, only 19 (0.8%) initially tested negative. Patients with initial false-negative test results presented with symptoms that ranged from fever and dyspnea to fatigue and vomiting. Notably, few patients presented “early” in their disease (median, 6 days; range, 0–10 days). However, patients with initial false-negative PCR test results did seem to have consistent imaging findings, specifically bilateral bibasilar ground-glass opacities on chest radiograph or computed tomography scan.
Conclusions: Among inpatients with COVID-19, we found a very low rate of initial false-negative SARS-CoV-2 PCR test results, which were not consistently related to premature testing. We also identified common radiographic findings among patients with initially false-negative test results, which could be useful in triaging patients who may merit retesting. Based on these data, we revised our existing clearance criteria to allow for single-test removal of COVID-19 precautions. Evaluating the subsequent reduction in unnecessary testing is difficult given changing community prevalence, increased census, and the resumption of elective procedures. However, given the significant percentage of ED and inpatient testing, removal of repeated testing has likely resulted in a reduction of several thousand unnecessary COVID-19 tests monthly.
Grant and Lebo (2016) and Keele et al. (2016) clarify the conditions under which the popular general error correction model (GECM) can be used and interpreted easily: In a bivariate GECM the data must be integrated in order to rely on the error correction coefficient, $\alpha _1^\ast$, to test cointegration and measure the rate of error correction between a single exogenous x and a dependent variable, y. Here we demonstrate that even if the data are all integrated, the test on $\alpha _1^\ast$ is misunderstood when there is more than a single independent variable. The null hypothesis is that there is no cointegration between y and any x but the correct alternative hypothesis is that y is cointegrated with at least one—but not necessarily more than one—of the x's. A significant $\alpha _1^\ast$ can occur when some I(1) regressors are not cointegrated and the equation is not balanced. Thus, the correct limiting distributions of the right-hand-side long-run coefficients may be unknown. We use simulations to demonstrate the problem and then discuss implications for applied examples.
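The bivariate setting the abstract describes can be simulated in a few lines. This is an illustrative sketch, not the authors' simulation code: it generates an I(1) regressor, a y cointegrated with it, and estimates the GECM by OLS, where the coefficient on lagged y is the error correction coefficient $\alpha _1^\ast$.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500

# x is a pure random walk, i.e. I(1)
x = np.cumsum(rng.normal(size=T))
# y is cointegrated with x: y_t = 0.5 * x_t + stationary error
y = 0.5 * x + rng.normal(scale=0.5, size=T)

# GECM: dy_t = a0 + a1*y_{t-1} + b0*dx_t + b1*x_{t-1} + e_t
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(T - 1), y[:-1], dx, x[:-1]])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)

alpha1 = beta[1]  # error correction coefficient; negative under cointegration
```

With more than one I(1) regressor, the point of the abstract is that a significant $\alpha _1^\ast$ only licenses the conclusion that y is cointegrated with at least one of them, not with all.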
In total, 13 facilities changed C. difficile testing to reflexive testing by enzyme immunoassay (EIA) only after a positive nucleic acid amplification test (NAAT); the standardized infection ratio (SIR) decreased by 46% (range, −12% to −71% per hospital). Changing testing practice greatly influenced a performance metric without changing C. difficile infection prevention practice.
To determine clinical characteristics associated with false-negative severe acute respiratory coronavirus virus 2 (SARS-CoV-2) test results to help inform coronavirus disease 2019 (COVID-19) testing practices in the inpatient setting.
A retrospective observational cohort study.
All patients 2 years of age and older tested for SARS-CoV-2 between March 14, 2020, and April 30, 2020, who had at least 2 SARS-CoV-2 reverse-transcriptase polymerase chain reaction tests within 7 days.
The primary outcome measure was a false-negative testing episode, which we defined as an initial negative test followed by a positive test within the subsequent 7 days. Data collected included symptoms, demographics, comorbidities, vital signs, labs, and imaging studies. Logistic regression was used to model associations between clinical variables and false-negative SARS-CoV-2 test results.
Of the 1,009 SARS-CoV-2 test results included in the analysis, 4.0% were false-negative results. In multivariable regression analysis, compared with true-negative test results, false-negative test results were associated with anosmia or ageusia (adjusted odds ratio [aOR], 8.4; 95% confidence interval [CI], 1.4–50.5; P = .02), having had a COVID-19–positive contact (aOR, 10.5; 95% CI, 4.3–25.4; P < .0001), and having an elevated lactate dehydrogenase level (aOR, 3.3; 95% CI, 1.2–9.3; P = .03). Demographics, symptom duration, other laboratory values, and abnormal chest imaging were not significantly associated with false-negative test results in our multivariable analysis.
Clinical features can help predict which patients are more likely to have false-negative SARS-CoV-2 test results.
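The adjusted odds ratios and confidence intervals reported above come from exponentiating logistic-regression coefficients; a minimal sketch, using made-up coefficient and standard-error values rather than the study's fitted model:

```python
import math

# Hypothetical fitted log-odds coefficient and its standard error
# (illustrative values, not the study's estimates).
beta, se = 2.13, 0.915

aor = math.exp(beta)                  # adjusted odds ratio
ci_low = math.exp(beta - 1.96 * se)   # lower bound of 95% CI
ci_high = math.exp(beta + 1.96 * se)  # upper bound of 95% CI
```

Because the CI is constructed on the log-odds scale and then exponentiated, it is asymmetric around the aOR, which is why intervals like 1.4–50.5 around 8.4 are plausible when the predictor is rare.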
Providing high-quality electron images and hyperspectral X-ray maps is a focus of many modern electron microscopy laboratories. Nevertheless, further image processing and annotation are often needed to prepare them for publications and reports. For multi-user facilities, access to processing software can be a limitation, whether through license costs or the availability of processing stations. Open-source software running on multiple platforms allows for post-acquisition data processing in-lab or on user-owned devices. We developed Probelab ReImager to supersede the export functions of our vendor-supplied acquisition software with an efficient and highly customizable alternative. This article describes its main features and capabilities.
This paper introduces a dynamic knowledge-graph approach for digital twins and illustrates how this approach is by design naturally suited to realizing the vision of a Universal Digital Twin. The dynamic knowledge graph is implemented using technologies from the Semantic Web. It is composed of concepts and instances that are defined using ontologies, and of computational agents that operate on both the concepts and instances to update the dynamic knowledge graph. By construction, it is distributed, supports cross-domain interoperability, and ensures that data are connected, portable, discoverable, and queryable via a uniform interface. The knowledge graph includes the notions of a “base world” that describes the real world and that is maintained by agents that incorporate real-time data, and of “parallel worlds” that support the intelligent exploration of alternative designs without affecting the base world. Use cases are presented that demonstrate the ability of the dynamic knowledge graph to host geospatial and chemical data, control chemistry experiments, perform cross-domain simulations, and perform scenario analysis. The questions of how to make intelligent suggestions for alternative scenarios and how to ensure alignment between the scenarios considered by the knowledge graph and the goals of society are considered. Work to extend the dynamic knowledge graph to develop a digital twin of the UK to support the decarbonization of the energy system is discussed. Important directions for future research are highlighted.
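The "base world" / "parallel world" distinction can be sketched minimally as a set of triples plus agents that update them. This toy example (the names and values are invented) only illustrates that scenario exploration copies state instead of mutating the base world; the actual system uses Semantic Web ontologies and distributed computational agents.

```python
from copy import deepcopy

# Base world: a tiny knowledge graph as (subject, predicate, object) triples.
base_world = {
    ("PowerPlant1", "hasOutputMW", 400),
    ("PowerPlant1", "usesFuel", "coal"),
}

def agent_update(graph, subject, predicate, new_object):
    """An agent replaces the object of (subject, predicate) in a graph."""
    kept = {t for t in graph if not (t[0] == subject and t[1] == predicate)}
    kept.add((subject, predicate, new_object))
    return kept

# A parallel world explores an alternative design; the base world is untouched.
parallel_world = agent_update(deepcopy(base_world),
                              "PowerPlant1", "usesFuel", "hydrogen")
```

Keeping scenario state separate from the base world is what lets agents evaluate alternatives (here, a fuel switch) without corrupting the description of the real system.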
In this paper, we address how the COVID-19 pandemic has impacted informed consent for clinical research by examining experiences within Clinical and Translational Science Award (CTSA) institutions. We begin with a brief overview of informed consent and the challenges that existed prior to COVID-19. Then, we discuss how informed consent processes were modified or changed to address the pandemic, consider what lessons were learned, and present research and policy steps to prepare for future research and public health crises. The experiences and challenges of CTSA institutions offer an important perspective for examining what we have learned about informed consent and determining the next steps for improving the consent process.
Background: Hospital-acquired Clostridioides difficile infection (HA-CDI) rates are highly variable over time, posing problems for research assessing interventions that might improve rates. By understanding seasonality in HA-CDI rates and the impacts that other factors such as influenza admissions might have on these rates, we can account for them when establishing the relationship between interventions and infection rates. We assessed whether there were seasonal trends in HA-CDI and whether they could be accounted for by influenza rates. Methods: We assessed HA-CDI rates per 10,000 patient days and the rate of hospitalized patients with influenza per 1,000 admissions in 4 acute-care facilities (n = 2,490 beds) in Calgary, Alberta, from January 2016 to December 2018. We used 4 statistical approaches in R software (version 3.5.1): (1) autoregressive integrated moving average (ARIMA) models to assess dependencies and trends in each of the monthly HA-CDI and influenza series; (2) cross-correlation to assess dependencies between the HA-CDI and influenza series lagged over time; (3) Poisson harmonic regression models (with sine and cosine components) to assess the seasonality of the rates; and (4) Poisson regression to determine whether influenza rates accounted for seasonality in the HA-CDI rates. Results: Conventional ARIMA approaches did not detect seasonality in the HA-CDI rates, but we found strong seasonality in the influenza rates. A cross-correlation analysis revealed evidence of correlation between the series at a lag of zero (R = 0.41; 95% CI, 0.10–0.65) and provided an indication of a seasonal relationship between the series (Fig. 1). Poisson regression suggested that influenza rates predicted CDI rates (P < .01). Using harmonic regression, there was evidence of seasonality in both HA-CDI rates (χ² [2 df] = 6.62; P < .05) and influenza rates (χ² [2 df] = 1,796.6; P < .001).
In a Poisson model of HA-CDI rates with both the harmonic components and influenza admission rates, the harmonic components were no longer predictive of HA-CDI rates. Conclusions: Harmonic regression provided a sensitive means of identifying seasonality in HA-CDI rates, but the seasonality effect was accounted for by influenza admission rates. The relationship between HA-CDI and influenza rates is likely mediated by antibiotic prescriptions, which needs to be assessed. To improve precision and reduce bias, research on interventions to reduce HA-CDI rates should assess historic seasonality in HA-CDI rates and should account for influenza admissions.
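The harmonic (sine/cosine) Poisson regression of approach (3) can be sketched with simulated monthly counts. The counts, amplitudes, and the plain IRLS fitting loop below are illustrative assumptions; the study used R on observed HA-CDI and influenza rates.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(36)  # 3 years of monthly counts

# Simulated seasonal log-rate with an annual period
eta_true = (1.0 + 0.4 * np.sin(2 * np.pi * months / 12)
                + 0.2 * np.cos(2 * np.pi * months / 12))
counts = rng.poisson(np.exp(eta_true))

# Design matrix: intercept plus annual sine and cosine harmonics
X = np.column_stack([
    np.ones(len(months)),
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
])

# Poisson GLM with log link, fitted by iteratively reweighted least squares
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)
    z = X @ beta + (counts - mu) / mu   # working response
    W = mu                              # working weights
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```

Jointly testing the sine and cosine coefficients against zero (a 2-df test, as in the abstract's χ² statistics) is what establishes seasonality.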
Background: High-level personal protective equipment (PPE) protects healthcare workers (HCWs) during the care of patients with serious communicable diseases. Doffing body fluid–contaminated PPE presents a risk of self-contamination. A study assessing HCW failure modes and self-contamination with viruses during PPE doffing found that, of all PPE items, the highest number of doffing failure modes and highest self-contamination risk occurred during removal of the 1-layer powered air-purifying respirator (PAPR) hood. Hood type may affect contamination risk; however, no experimental evidence exists comparing hood types. Objective: We quantified and compared the risk of self-contamination with viruses during doffing of a 1-layer versus a 2-layer PAPR hood. Methods: In this study, 8 HCWs with experience using high-level PPE donned PPE contaminated on 4 prespecified areas with 2 surrogate human viruses, bacteriophage MS2 (a nonenveloped virus) and Φ6 (an enveloped virus). They completed a clinical task, then doffed PPE according to a standard protocol. Following doffing, inner gloves, hands, face, and scrubs were sampled for viral contamination using infectivity assays. HCWs performed the entire sequence twice, first with a 1-layer hood with 1 shroud, then with a 2-layer hood with 2 shrouds. The Wilcoxon rank-sum test was used to compare viral contamination between the 2 hood types. HCWs were video-recorded, and a failure modes and effects analysis was used to identify ways that individual doffing actions deviated from optimal behavior. Results: Φ6 transfer to hands, inner gloves, and scrubs was observed for 1 HCW using the 1-layer hood versus scrubs only for 1 HCW using the 2-layer hood. MS2 transfer to hands was observed for 2 HCWs using the 1-layer hood versus none using the 2-layer hood. Inner-glove contamination was observed for 6 of 8 HCWs using the 1-layer hood versus 2 of 8 using the 2-layer hood.
Conclusions: Significantly more MS2 virus was recovered from the inner gloves of HCWs using the 1-layer versus the 2-layer hood (median difference, 2.27 × 10⁴; P = .03). In addition, 31 failure modes were identified during removal of the 2-layer hood versus 13 failure modes for the 1-layer hood. The magnitude of self-contamination depends on the type of PAPR hood used. The 2-layer hood resulted in significantly less inner-glove contamination than the 1-layer hood. However, more failure modes were identified during the doffing process for the 2-layer hood. In conclusion, the failure modes identified during use of the 2-layer hood were less likely to result in self-contamination than those identified during use of the 1-layer hood.
Background: Clostridioides difficile infection (CDI) is the most common cause of infectious diarrhea in hospitalized patients. Probiotics have been studied as a measure to prevent CDI. Timely probiotic administration to at-risk patients receiving systemic antimicrobials presents significant challenges. We sought to determine optimal implementation methods to administer probiotics to all adult inpatients aged ≥55 years receiving a course of systemic antimicrobials across an entire health region. Methods: Using a randomized stepped-wedge design across 4 acute-care hospitals (n = 2,490 beds), the probiotic Bio-K+ was prescribed daily to patients receiving systemic antimicrobials and was continued for 5 days after antimicrobial discontinuation. Focus groups and interviews were conducted to identify barriers, and the implementation strategy was adapted to address the key identified barriers. The implementation strategy included clinical decision support (a linked flag on antibiotic ordering and 1-click order entry within the electronic medical record [EMR]), provider and patient education (written materials, videos, and in-person sessions), and local site champions. Protocol adherence was measured by tracking the number of patients on therapeutic antimicrobials who received Bio-K+, based on the bedside nursing EMR medication administration records. Adherence rates were sorted by hospital and unit at 48- and 72-hour intervals, and the percentile distribution of time (days) to receipt of the first antimicrobial was recorded. Results: In total, 340 education sessions with >1,800 key stakeholders occurred before and during implementation across the 4 involved hospitals. The overall adherence of probiotic ordering for wards with antimicrobial orders was 78% and 80% at 48 and 72 hours, respectively, over 72 patient months. Individual hospital adherence rates varied between 77% and 80% at 48 hours and between 79% and 83% at 72 hours.
Of 246,144 scheduled probiotic orders, 94% were administered at the bedside within a median of 0.61 days (75th percentile, 0.88), 0.47 days (75th percentile, 0.86), 0.71 days (75th percentile, 0.92) and 0.67 days (75th percentile, 0.93), respectively, at the 4 sites after receipt of first antimicrobial. The key themes from the focus groups emphasized the usefulness of the linked flag alert for probiotics on antibiotic ordering, the ease of the EMR 1-click order entry, and the importance of the education sessions. Conclusions: Electronic clinical decision support, education, and local champion support achieved a high implementation rate consistent across all sites. Use of a 1-click order entry in the EMR was considered a key component of the success of the implementation and should be considered for any implementation strategy for a stewardship initiative. Achieving high prescribing adherence allows more precision in evaluating the effectiveness of the probiotic strategy.
Funding: Partnerships for Research and Innovation in the Health System Award, Alberta Innovates/Health Solutions
Background: US hospitals are required to report C. difficile infections (CDIs) to the NHSN as a performance measure tied to payment penalties for poor scores. Currently, only the charted CDI test results performed last in reflex testing scenarios are reported to the NHSN (CDI events). We describe the reduction in NHSN CDI events from the addition of a reflex toxin enzyme immunoassay (EIA) after a positive nucleic acid amplification test (NAAT) in teaching and nonteaching hospitals, and we estimate the impact on standardized infection ratios (SIRs). Methods: All CDI test results were reported, by test method, during April 2018–July 2019 to the Georgia Emerging Infections Program (funded by the Centers for Disease Control and Prevention), which conducts active population-based surveillance in an 8-county Atlanta area (population, 4 million). Among facilities starting reflex EIA testing, results were aggregated by test method during months of reflex testing to calculate the facility-specific reduction in NHSN CDI events (% reduction = 1 − [no. EIA+ / no. NAAT+]). Differences in percent reduction between facilities by characteristic were compared using the Kruskal-Wallis test. We simulated expected changes in the SIR for a range of reductions, assuming an equal effect on both community-onset (CO) and hospital-onset (HO) tests. Each facility’s historical NHSN CDI events prior to reflex testing were used to estimate changes to facility-specific SIRs by reducing values by the corresponding facility’s percent reduction. Results: Overall, 13 acute-care hospitals (bed size, 52–633; ICU bed size, 6–105) started reflex testing during the study period (mean, 7 months; 15,800 admissions; 66,400 patient days), resulting in 550 NAAT-positive tests reflexing to 180 EIA-positive tests (pooled mean, 58% reduction).
Percent reduction varied (mean, 67%; range, 42%–81%) but did not differ between larger (≥217 beds) and smaller hospitals (61% vs 50% reduction; P > .05) or by outsourced versus in-house testing (65% vs 54% reduction; P > .05). Simulations identified a threshold reduction at which the effect on HO events counteracts the effect on CO events enough to reduce the SIR; thresholds for nonteaching and teaching hospitals were 26% and 32% reduction, respectively (Fig. 1). The estimated reductions in facility-specific SIRs using measured percent reductions on historic NHSN CDI events closely paralleled the simulation, and the mean estimated change in SIR was −46% (range, −12% to −71%) (Fig. 1). Conclusions: Although the magnitude of the effect varied, all 13 facilities experienced dramatic reductions in CDI events reportable to NHSN due to reflex testing; applying these reductions to historical NHSN data illustrates the anticipated reductions in facility-specific SIRs due to this testing change.
Disclosures: Scott Fridkin, consulting fee, vaccine industry (various) (spouse)