One of the most important novels of the eighteenth century, Sir Charles Grandison shaped the English courtship novel and was loved and admired by both Jane Austen and George Eliot. The book follows the life of Sir Charles, a male counterpart in virtue to Richardson's female paragons Clarissa and Pamela, and a response to the fallible protagonist Tom Jones in Fielding's popular satire of moralising novels. Forming part of the first full scholarly edition of Richardson's complete works, the comprehensive general and textual introductions significantly revise and advance understanding of the composition and printing history of Richardson's final novel, and reveal the central place of Sir Charles in the literature of the period. The edition includes Richardson's Historical Index for the first time, and its extensive annotations and expansive notes give readers crucial context and provide scholars with paths to follow for future research.
Background: Hospital-acquired Clostridioides difficile infection (HA-CDI) rates are highly variable over time, posing problems for research assessing interventions that might improve rates. By understanding seasonality in HA-CDI rates and the impact that other factors such as influenza admissions might have on these rates, we can account for them when establishing the relationship between interventions and infection rates. We assessed whether there were seasonal trends in HA-CDI and whether they could be accounted for by influenza rates. Methods: We assessed HA-CDI rates per 10,000 patient days and the rate of hospitalized patients with influenza per 1,000 admissions in 4 acute-care facilities (n = 2,490 beds) in Calgary, Alberta, from January 2016 to December 2018. We used 4 statistical approaches in R (version 3.5.1): (1) autoregressive integrated moving average (ARIMA) models to assess dependencies and trends in each of the monthly HA-CDI and influenza series; (2) cross-correlation to assess dependencies between the HA-CDI and influenza series lagged over time; (3) Poisson harmonic regression models (with sine and cosine components) to assess the seasonality of the rates; and (4) Poisson regression to determine whether influenza rates accounted for seasonality in the HA-CDI rates. Results: Conventional ARIMA approaches did not detect seasonality in the HA-CDI rates, but we found strong seasonality in the influenza rates. A cross-correlation analysis revealed evidence of correlation between the series at a lag of zero (R = 0.41; 95% CI, 0.10–0.65) and provided an indication of a seasonal relationship between the series (Fig. 1). Poisson regression suggested that influenza rates predicted CDI rates (P < .01). Using harmonic regression, there was evidence of seasonality in HA-CDI rates (χ2 [2 df] = 6.62; P < .05) and influenza rates (χ2 [2 df] = 1,796.6; P < .001). In a Poisson model of HA-CDI rates with both the harmonic components and influenza admission rates, the harmonic components were no longer predictive of HA-CDI rates. Conclusions: Harmonic regression provided a sensitive means of identifying seasonality in HA-CDI rates, but the seasonality effect was accounted for by influenza admission rates. The relationship between HA-CDI and influenza rates is likely mediated by antibiotic prescriptions, which still needs to be assessed. To improve precision and reduce bias, research on interventions to reduce HA-CDI rates should assess historic seasonality in HA-CDI rates and should account for influenza admissions.
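As a hedged illustration of the harmonic regression approach (approach 3), the sketch below fits a Poisson model with sine and cosine terms of 12-month period and compares it against an intercept-only model with a 2-df likelihood-ratio test, analogous to the χ2 statistics reported above. The abstract's analysis was done in R; this sketch uses Python's statsmodels, and all data below are synthetic placeholders.

```python
# Illustrative sketch (not the authors' code): harmonic Poisson regression
# for monthly HA-CDI counts with a 12-month period, using statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(36)                                 # Jan 2016 - Dec 2018
patient_days = rng.integers(60_000, 70_000, size=36)   # hypothetical exposure

# Hypothetical monthly HA-CDI counts with a mild winter peak
lam = np.exp(-7.5 + 0.15 * np.cos(2 * np.pi * months / 12)) * patient_days
cdi_counts = rng.poisson(lam)

# Harmonic terms capture a smooth annual cycle (2 df: sine + cosine)
X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
]))

model = sm.GLM(cdi_counts, X,
               family=sm.families.Poisson(),
               offset=np.log(patient_days)).fit()

# Likelihood-ratio chi-square (2 df) against an intercept-only model,
# analogous to the seasonality tests reported above
null = sm.GLM(cdi_counts, np.ones((36, 1)),
              family=sm.families.Poisson(),
              offset=np.log(patient_days)).fit()
lr_chi2 = 2 * (model.llf - null.llf)
print(f"LR chi2 (2 df) = {lr_chi2:.2f}")
```

The offset term (log patient days) converts the modeled counts into rates per patient day, matching the per-10,000-patient-days framing of the abstract.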
Background: High-level personal protective equipment (PPE) protects healthcare workers (HCWs) during the care of patients with serious communicable diseases. Doffing body fluid–contaminated PPE presents a risk of self-contamination. A study assessing HCW failure modes and self-contamination with viruses during PPE doffing found that, of all PPE items, the highest number of doffing failure modes and the highest self-contamination risk occurred during removal of the 1-layer powered air-purifying respirator (PAPR) hood. Hood type may affect contamination risk; however, no experimental evidence exists comparing hood types. Objective: We quantified and compared the risk of self-contamination with viruses during doffing of a 1-layer versus a 2-layer PAPR hood. Methods: In this study, 8 HCWs with experience using high-level PPE donned PPE contaminated on 4 prespecified areas with 2 surrogate human viruses, bacteriophage MS2 (a nonenveloped virus) and Φ6 (an enveloped virus). They completed a clinical task, then doffed PPE according to a standard protocol. Following doffing, inner gloves, hands, face, and scrubs were sampled for viral contamination using infectivity assays. HCWs performed the entire sequence twice, first with a 1-layer hood with 1 shroud, then with a 2-layer hood with 2 shrouds. The Wilcoxon rank-sum test was used to compare viral contamination between the 2 hood types. HCWs were video-recorded, and a failure modes and effects analysis was applied to the recordings to identify ways in which individual doffing actions deviated from optimal behavior. Results: Φ6 transfer to hands, inner gloves, and scrubs was observed for 1 HCW using the 1-layer hood versus transfer to scrubs only for 1 HCW using the 2-layer hood. MS2 transfer to hands was observed for 2 HCWs using the 1-layer hood versus none using the 2-layer hood. Inner glove contamination was observed for 6 of 8 HCWs using the 1-layer hood versus 2 of 8 using the 2-layer hood. Conclusions: Significantly more MS2 virus was recovered from the inner gloves of HCWs using the 1-layer hood versus the 2-layer hood (median difference, 2.27×10⁴; P = .03). In addition, 31 failure modes were identified during removal of the 2-layer hood versus 13 failure modes for the 1-layer hood. The magnitude of self-contamination depends on the type of PAPR hood used. The 2-layer hood resulted in significantly less inner glove contamination than the 1-layer hood, although more failure modes were identified during its doffing process. In conclusion, the failure modes identified during use of the 2-layer hood were less likely to result in self-contamination than those identified during use of the 1-layer hood.
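As a minimal sketch of the statistical comparison (not the study's code or data), scipy's ranksums implements the Wilcoxon rank-sum test used here to compare viral recovery between the two hood types; the plaque-count values below are hypothetical.

```python
# Hedged sketch: Wilcoxon rank-sum comparison of inner-glove contamination
# between hood types, on hypothetical MS2 recovery values (PFU) per HCW.
from scipy.stats import ranksums

one_layer = [2.3e4, 1.8e4, 4.1e4, 9.0e3, 3.2e4, 2.7e4, 1.1e4, 2.0e4]
two_layer = [0.0, 4.0e2, 0.0, 1.5e3, 0.0, 0.0, 2.1e2, 0.0]

stat, p = ranksums(one_layer, two_layer)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
```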
Background: Clostridioides difficile infection (CDI) is the most common cause of infectious diarrhea in hospitalized patients. Probiotics have been studied as a measure to prevent CDI. Timely probiotic administration to at-risk patients receiving systemic antimicrobials presents significant challenges. We sought to determine optimal implementation methods to administer probiotics to all adult inpatients aged ≥55 years receiving a course of systemic antimicrobials across an entire health region. Methods: Using a randomized stepped-wedge design across 4 acute-care hospitals (n = 2,490 beds), the probiotic Bio-K+ was prescribed daily to patients receiving systemic antimicrobials and was continued for 5 days after antimicrobial discontinuation. Focus groups and interviews were conducted to identify barriers, and the implementation strategy was adapted to address the key identified barriers. The implementation strategy included clinical decision support involving a linked flag on antibiotic ordering and a 1-click order entry within the electronic medical record (EMR), provider and patient education (written/videos/in-person), and local site champions. Protocol adherence was measured by tracking the number of patients on therapeutic antimicrobials who received Bio-K+, based on the bedside nursing EMR medication administration records. Adherence rates were sorted by hospital and unit in 48- and 72-hour intervals, with recording of the percentile distribution of time (days) to receipt of the first probiotic dose after the first antimicrobial. Results: In total, 340 education sessions with >1,800 key stakeholders occurred before and during implementation across the 4 involved hospitals. The overall adherence of probiotic ordering for wards with antimicrobial orders was 78% and 80% at 48 and 72 hours, respectively, over 72 patient-months. Individual hospital adherence rates varied between 77% and 80% at 48 hours and between 79% and 83% at 72 hours. Of 246,144 scheduled probiotic orders, 94% were administered at the bedside within a median of 0.61 days (75th percentile, 0.88), 0.47 days (75th percentile, 0.86), 0.71 days (75th percentile, 0.92), and 0.67 days (75th percentile, 0.93), respectively, at the 4 sites after receipt of the first antimicrobial. The key themes from the focus groups emphasized the usefulness of the linked flag alert for probiotics on antibiotic ordering, the ease of the EMR 1-click order entry, and the importance of the education sessions. Conclusions: Electronic clinical decision support, education, and local champion support achieved a high implementation rate that was consistent across all sites. The 1-click order entry in the EMR was considered a key component of the success of the implementation and should be considered for any implementation strategy for a stewardship initiative. Achieving high prescribing adherence allows more precision in evaluating the effectiveness of the probiotic strategy.
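A hedged sketch of how the adherence metrics described above could be computed, using pandas on a hypothetical table with one row per eligible antimicrobial course; the column names and values are illustrative, not the study's data.

```python
# Illustrative adherence computation: fraction of antimicrobial courses
# with a probiotic given within 48 h and 72 h, by hospital, plus the
# 75th percentile of time (days) to the first probiotic dose.
import pandas as pd

df = pd.DataFrame({
    "hospital":           ["A", "A", "B", "B", "C"],
    "hours_to_probiotic": [10.5, 60.0, 30.2, None, 14.9],  # None = never given
})

for window in (48, 72):
    adherence = (df["hours_to_probiotic"] <= window).groupby(df["hospital"]).mean()
    print(f"adherence within {window} h:\n{adherence}\n")

days = df["hours_to_probiotic"].dropna() / 24
print("75th percentile (days):", days.quantile(0.75))
```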
Funding: Partnerships for Research and Innovation in the Health System award, Alberta Innovates/Health Solutions
Background: US hospitals are required to report C. difficile infections (CDIs) to the NHSN as a performance measure tied to payment penalties for poor scores. Currently, only the charted CDI test results performed last in reflex testing scenarios are reported to the NHSN (CDI events). We describe the reduction in NHSN CDI events from the addition of a reflex toxin enzyme immunoassay (EIA) after a positive nucleic acid amplification test (NAAT) in teaching and nonteaching hospitals, and we estimate the impact on standardized infection ratios (SIRs). Methods: Reporting of all CDI test results, by test method, occurred during April 2018–July 2019 to the Georgia Emerging Infections Program (funded by the Centers for Disease Control and Prevention), which conducts active population-based surveillance in an 8-county Atlanta area (population, 4 million). Among facilities starting reflex EIA testing, results were aggregated by test method during months of reflex testing to calculate the facility-specific reduction in NHSN CDI events (percent reduction = 1 − [no. EIA+ / no. NAAT+]). Differences in percent reduction between facilities by characteristic were compared using the Kruskal-Wallis test. We simulated expected changes in the SIR for a range of reductions, assuming an equal effect on both community-onset (CO) and hospital-onset (HO) tests. Each facility's historical NHSN CDI events prior to reflex testing were used to estimate changes to facility-specific SIRs by reducing values by the corresponding facility's percent reduction. Results: Overall, 13 acute-care hospitals (bed size, 52–633; ICU bed size, 6–105) started reflex testing during the study period (mean, 7 months, 15,800 admissions, 66,400 patient days), resulting in 550 NAAT+ tests reflexing to 180 EIA+ tests (pooled mean, 58% reduction). Percent reduction varied (mean, 67%; range, 42%–81%) but did not differ between larger (≥217 beds) and smaller hospitals (61% vs 50% reduction; P > .05) or by outsourced versus in-house testing (65% vs 54% reduction; P > .05). Simulations identified a threshold reduction at which the effect on HO events counteracts the effect on CO events enough to reduce the SIR; thresholds for nonteaching and teaching hospitals were 26% and 32% reductions, respectively (Fig. 1). The estimated reductions in facility-specific SIRs using measured percent reductions on historic NHSN CDI events closely paralleled the simulation, and the mean estimated change in SIR was −46% (range, −12% to −71%) (Fig. 1). Conclusions: Although the magnitude of the effect varied, all 13 facilities experienced dramatic reductions in CDI events reportable to NHSN due to reflex testing; applying these reductions to historical NHSN data illustrates the anticipated reductions in their facility-specific SIRs due to this testing change.
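The sketch below is a toy illustration of the percent-reduction formula and the direction of its effect on an SIR; it is not NHSN's actual risk model. The counts, the facility values, and especially the coefficient linking CO prevalence to the predicted count are hypothetical stand-ins for NHSN's risk adjustment.

```python
# Toy illustration of percent reduction and its effect on a facility SIR.
# All numbers below are hypothetical, not from the surveillance data.
naat_pos, eia_pos = 100, 40
pct_reduction = 1 - eia_pos / naat_pos      # = 1 - [no. EIA+ / no. NAAT+]
print(f"percent reduction = {pct_reduction:.0%}")

def sir(ho_events, predicted):
    # SIR = observed hospital-onset (HO) events / risk-adjusted prediction
    return ho_events / predicted

ho, predicted = 40, 35                      # hypothetical facility values
co_elasticity = 0.5                         # assumed sensitivity of the
                                            # prediction to CO prevalence
r = pct_reduction
new_predicted = predicted * (1 - co_elasticity * r)  # fewer CO+ lowers prediction
print("SIR before:", round(sir(ho, predicted), 2))
print("SIR after: ", round(sir(ho * (1 - r), new_predicted), 2))
```

The threshold behavior reported above arises because reducing HO events lowers the SIR numerator, while reducing CO events lowers the risk-adjusted denominator; only past a certain reduction does the numerator effect dominate.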
Disclosures: Scott Fridkin, consulting fee, vaccine industry (various) (spouse)
This chapter presents a survey of medieval Greek apocalyptic literature and consists of three parts, which outline typical characteristics of this literary genre, introduce a number of prominent Byzantine apocalyptic narratives, and sketch the manuscript transmission of each.
Healthcare personnel (HCP) were recruited to provide serum samples, which were tested for antibodies against Ebola or Lassa virus to evaluate for asymptomatic seroconversion.
From 2014 to 2016, 4 patients with Ebola virus disease (EVD) and 1 patient with Lassa fever (LF) were treated in the Serious Communicable Diseases Unit (SCDU) at Emory University Hospital. Strict infection control and clinical biosafety practices were implemented to prevent nosocomial transmission of EVD or LF to HCP.
All personnel who entered the SCDU and were required to measure their temperatures and complete a symptom questionnaire twice daily were eligible.
No employee developed symptomatic EVD or LF. EVD and LF antibody studies were performed on serum samples from 42 HCP. The 6 participants who had received investigational vaccination with a chimpanzee adenovirus type 3-vectored Ebola glycoprotein vaccine had high antibody titers to Ebola glycoprotein, but none had a response to Ebola nucleoprotein or VP40, or a response to LF antigens.
Patients infected with filoviruses and arenaviruses can be managed successfully without causing occupation-related symptomatic or asymptomatic infections. Meticulous attention to infection control and clinical biosafety practices by highly motivated, trained staff is critical to the safe care of patients with an infection from a special pathogen.
This paper presents the Parallel World Framework, a solution for simulating complex systems within a time-varying knowledge graph, and its application to the electric grid of Jurong Island in Singapore. The underlying modeling system is based on the Semantic Web Stack. Its linked data layer is described by means of ontologies that span multiple domains. The framework is designed to allow what-if scenarios to be simulated generically, even for complex, interlinked, cross-domain applications, and to support multi-scale optimizations of complex superstructures within the system. Parallel world containers, introduced by the framework, ensure data separation and versioning of structures crossing various domain boundaries. Separation of operations belonging to a particular version of the world is handled by a scenario agent, which encapsulates the functionality of operations on data and acts as a parallel world proxy to all other agents operating on the knowledge graph. Electric network optimization under a carbon tax is demonstrated as a use case. The framework makes it possible to model and evaluate electrical networks corresponding to given carbon tax values by retrofitting different types of power generators and optimizing the grid accordingly. The use case shows that this solution can serve as a tool for CO2 reduction modeling and planning at scale, thanks to its distributed architecture.
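The parallel world containers are the paper's own construct; as a loose illustration of the underlying idea, separating scenario-specific triples from the base world so that agents see only their own version, the sketch below uses RDF named graphs via the rdflib Python library. All IRIs and properties are hypothetical and are not taken from the paper's ontologies.

```python
# Hedged sketch: approximating "parallel world" data separation with
# RDF named graphs. Hypothetical IRIs; rdflib stands in for the
# framework's linked data layer.
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/grid#")
ds = Dataset()

base  = ds.graph(URIRef("http://example.org/world/base"))
world = ds.graph(URIRef("http://example.org/world/carbon-tax-50"))

# Shared base-world structure
base.add((EX.gen1, EX.hasType, Literal("gas-turbine")))

# Scenario-specific overrides live in their own named graph, so agents
# operating on the base world never see them
world.add((EX.gen1, EX.hasType, Literal("biomass")))
world.add((EX.gen1, EX.carbonTaxSGD, Literal(50)))

# A scenario agent would resolve queries against the requested world only
for s, p, o in world:
    print(s, p, o)
```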
Postoperative cognitive impairment is among the most common medical complications associated with surgical interventions, particularly in elderly patients. In our aging society, there is an urgent medical need for preoperative individual risk prediction to allow more accurate cost–benefit decisions prior to elective surgery. So far, risk prediction has been based mainly on clinical parameters, which give only a rough estimate of the individual risk. At present, no molecular or neuroimaging biomarkers are available to improve risk prediction, and little is known about the etiology and pathophysiology of this clinical condition. In this short review, we summarize the current state of knowledge and briefly present the recently started BioCog project (Biomarker Development for Postoperative Cognitive Impairment in the Elderly), which is funded by the European Union. The goal of this research and development (R&D) project, which involves academic and industry partners throughout Europe, is to deliver a multivariate algorithm based on clinical assessments as well as molecular and neuroimaging biomarkers to overcome the currently unsatisfactory situation.
We apply deep kernel learning (DKL), which can be viewed as a combination of a Gaussian process (GP) and a deep neural network (DNN), to compression ignition engine emissions and compare its performance to a selection of other surrogate models on the same dataset. Surrogate models are a class of computationally cheaper alternatives to physics-based models. High-dimensional model representation (HDMR) is also briefly discussed and acts as a benchmark model for comparison. We apply the considered methods to a dataset, which was obtained from a compression ignition engine and includes as outputs soot and NOx emissions as functions of 14 engine operating condition variables. We combine a quasi-random global search with a conventional grid-optimization method in order to identify suitable values for several DKL hyperparameters, which include network architecture, kernel, and learning parameters. The performance of DKL, HDMR, plain GPs, and plain DNNs is compared in terms of the root mean squared error (RMSE) of the predictions as well as computational expense of training and evaluation. It is shown that DKL performs best in terms of RMSE in the predictions whilst maintaining the computational cost at a reasonable level, and DKL predictions are in good agreement with the experimental emissions data.
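As a hedged illustration of the DKL setup described above (not the authors' implementation), the sketch below uses the GPyTorch library: a small feed-forward network maps the 14 operating-condition variables into a low-dimensional feature space, an RBF-kernel GP models the target on those features, and both are trained jointly by maximizing the exact marginal log likelihood. The network architecture and the data are synthetic placeholders.

```python
# Minimal deep kernel learning sketch with GPyTorch: the GP kernel is
# evaluated on learned features, k(g(x), g(x')), and the feature
# extractor's weights are optimized alongside the GP hyperparameters.
import torch
import gpytorch

train_x = torch.randn(200, 14)               # 14 engine operating variables
train_y = torch.sin(train_x.sum(dim=1))      # placeholder emissions target

feature_extractor = torch.nn.Sequential(
    torch.nn.Linear(14, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 2),                  # 2-d learned feature space
)

class DKLModel(gpytorch.models.ExactGP):
    def __init__(self, x, y, likelihood):
        super().__init__(x, y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = feature_extractor(x)             # deep kernel: GP acts on g(x)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLModel(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

model.train(); likelihood.train()
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(feature_extractor.parameters()), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)     # negative marginal log likelihood
    loss.backward()
    optimizer.step()
```

In the paper's setting, choices such as the network depth and width, the kernel, and the learning rate are themselves hyperparameters, which motivates the quasi-random global search combined with grid optimization described above.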
This paper is programmatic: it defines the concept of "phantom borders" and describes its heuristic potential. The proposed approach positions itself between structuralist methodologies, which postulate stable social and cultural regional structures, and deconstructive viewpoints, which reject such structures and focus on the discursive dimension of regions. The paper takes this tension as its point of departure. Viewed from a situational perspective, phantom borders are to be understood neither as immutable structures nor as purely discursive constructions, but rather as an outcome of the interaction among three interwoven levels: they are simultaneously 1) imagined in mental maps and discourses, 2) experienced and perceived by the respective actors, and 3) shaped by everyday practices and continuously updated and implemented. Phantom borders are context sensitive. We argue that the topic of phantom borders is relevant not only for research on eastern Europe but also for "new area studies" in general.
Grape marc (GPM) is a viticulture by-product that is rich in secondary compounds, including condensed tannins (CT), and is used as a supplement in livestock feeding practices. The aim of this study was to determine whether feeding GPM to lactating dairy cows would alter the milk proteome through changes in nitrogen (N) partitioning. Ten lactating Holstein cows were fed a total mixed ration (TMR) top-dressed with either 1.5 kg dry matter (DM)/cow/day GPM (GPM group; n = 5) or 2.0 kg DM/cow/day of a 50:50 beet pulp:soy hull mix (control group; n = 5). Characterization and calculation of N partitioning were completed through analysis of plasma urea-N, urine, feces, and milk urea-N. Milk samples were collected for general composition analysis, HPLC quantification of the high-abundance milk proteins (including casein isoforms, α-lactalbumin, and β-lactoglobulin), and liquid chromatography tandem mass spectrometry (LC-MS/MS) analysis of the low-abundance protein-enriched milk fraction. No differences in dry matter intake (DMI), N parameters, or calculated N partitioning were observed across treatments. Dietary treatment did not affect milk yield, milk protein or fat content or yield, or the concentrations of high-abundance milk proteins quantified by HPLC analysis. Of the 127 milk proteins that were identified by LC-MS/MS analysis, 16 were affected by treatment, including plasma proteins and proteins associated with the blood-milk barrier, suggesting changes in mammary passage. Immunomodulatory proteins, including butyrophilin subfamily 1 member 1A and serum amyloid A protein, were higher in milk from GPM-fed cows. Heightened abundance of bioactive proteins in milk caused by dietary-induced shifts in mammary passage could be a feasible method to enhance the healthfulness of milk for both the milk-fed calf and the human consumer. Additionally, the proteome shifts observed in this trial could provide a starting point for the identification of biomarkers suitable for use as indicators of mammary function.
Four experiments examine how lack of awareness of inequality affects behaviour towards the rich and poor. In Experiment 1, participants who became aware that wealthy individuals donated a smaller percentage of their income switched from rewarding the wealthy to rewarding the poor. In Experiments 2 and 3, participants who played a public goods game – and were assigned incomes reflective of the US income distribution either at random or on merit – punished the poor (for small absolute contributions) and rewarded the rich (for large absolute contributions) when incomes were unknown; when incomes were revealed, participants punished the rich (for their low percentage of income contributed) and rewarded the poor (for their high percentage of income contributed). In Experiment 4, participants provided with public education contributions for five New York school districts levied additional taxes on mostly poorer school districts when incomes were unknown, but targeted wealthier districts when incomes were revealed. These results shed light on how income transparency shapes preferences for equity and redistribution. We discuss implications for policy-makers.
Though used frequently in machine learning, boosted decision trees are largely unused in political science, despite their many useful properties. We explain how to use one variant of boosted decision trees, AdaBoosted decision trees (ADTs), for social science prediction. We illustrate their use by examining a well-known political prediction problem: predicting U.S. Supreme Court rulings. We find that our ADT approach outperforms existing predictive models. We also provide two additional examples of the approach, one predicting the onset of civil wars and the other predicting county-level vote shares in U.S. presidential elections.
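As a hedged illustration of the ADT technique on synthetic data (the article's own applications use Supreme Court, civil war, and election datasets), scikit-learn's AdaBoostClassifier boosts shallow decision trees; its default base learner is a depth-1 tree (a "stump").

```python
# Minimal AdaBoosted decision tree sketch on synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each boosting round reweights training cases the previous trees got wrong
clf = AdaBoostClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

For prediction problems like those above, held-out (out-of-sample) evaluation of this kind is the relevant benchmark, since boosting can fit the training data almost perfectly.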