This book is an interdisciplinary guide to the religion clauses of the First Amendment, with a focus on their philosophical foundations, historical development, and legal and political implications. The volume begins with fundamental questions about God, the nature of belief and worship, conscience, freedom, and their intersections with law. It then traces the history of religious liberty and church–state relations in America through a diverse set of religious and non-religious voices, from the seventeenth century to the most recent Supreme Court decisions. The Companion concludes by addressing legal and political questions concerning the First Amendment and the court cases and controversies surrounding religious liberty today, including the separation of church and state, corporate religious liberty, and constitutional interpretation. This scholarly yet accessible volume introduces students and scholars alike to the main issues concerning the First Amendment and religious liberty, while offering incisive new insights into one of the most important topics in American culture.
Optimism and pessimism are distinct constructs that have demonstrated independent relationships with aspects of health and well-being. The purpose of this study was to investigate whether optimism or pessimism is more closely linked with physical and mental health among older adults.
Community-dwelling older adults (N = 272) ages 59–95 in the southern United States.
The Life Orientation Test-Revised and the Short Form 8.
At the bivariate level, optimism was associated with better physical and mental health, whereas pessimism was associated with poorer physical and mental health. Multiple regression analyses, as well as comparison of correlation coefficients, found that pessimism was more closely associated with physical and mental health than optimism.
These results add to the literature suggesting that, in terms of older adults’ health and well-being, avoiding pessimism may be more important than being optimistic.
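The comparison of dependent correlation coefficients described in the abstract above can be illustrated with the Meng, Rosenthal, and Rubin (1992) z-test for two correlations that share a variable (here, each orientation's correlation with the same health score). This is an illustrative sketch, not the study's code; the correlation values, their sign conventions, and the reverse-scoring of pessimism below are hypothetical.

```python
import math

def compare_dependent_correlations(r1, r2, r12, n):
    """Meng-Rosenthal-Rubin z-test for two dependent correlations that
    share one variable (e.g., optimism-health vs pessimism-health).

    r1, r2 : correlations of each predictor with the shared outcome
    r12    : correlation between the two predictors
    n      : sample size
    Returns (z, two_tailed_p).
    """
    rbar_sq = (r1 ** 2 + r2 ** 2) / 2.0
    # Weighting factor f is capped at 1 per Meng et al. (1992).
    f = min((1.0 - r12) / (2.0 * (1.0 - rbar_sq)), 1.0)
    h = (1.0 - f * rbar_sq) / (1.0 - rbar_sq)
    # Fisher z-transform each correlation, then scale the difference.
    z = (math.atanh(r1) - math.atanh(r2)) * math.sqrt(
        (n - 3) / (2.0 * (1.0 - r12) * h)
    )
    # Two-tailed p from the standard normal CDF (via the error function).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Hypothetical values: reverse-scored pessimism correlates .50 with the
# health score, optimism .30, the predictors correlate -.45, N = 272
# as in the study. These numbers are for illustration only.
z, p = compare_dependent_correlations(0.50, 0.30, -0.45, 272)
```

A larger z here would indicate that the pessimism-health correlation is reliably stronger than the optimism-health correlation in the same sample.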
OBJECTIVES/SPECIFIC AIMS: Among the six Centers for Medicare and Medicaid Services (CMS)-monitored diagnoses targeted for readmissions reductions, reasons for readmissions within academic hospitals are poorly understood and reflect complex interactions among patient-, provider-, and organizational-level responses to initial hospitalization. Learning health systems (the organizational and orchestrated integration of research into evidence-based practice) can address the complexities of readmissions through an innovative approach to knowledge translation and patient-centered outcomes research. The objective of this review is to define and optimize the architecture of learning health systems to produce a dynamic pre-implementation framework of knowledge translation and patient-centered outcomes research, leveraging two engines (research and learning) within the academic and clinical settings for reducing readmissions. METHODS/STUDY POPULATION: Three databases (PubMed, Academic Search Premier, and Scopus) were utilized for this scoping review, focusing on (1) learning health systems and the methods of defining and building these systems within an academic hospital setting and (2) the use of learning health systems in reducing readmissions within academic hospitals. Empirical articles and reviews pertaining to the architecture, development, conceptualization, definition, and translation of learning health systems were identified and compiled into a scoping review and proposed framework. RESULTS/ANTICIPATED RESULTS: The scoping review yielded 139 articles, of which 28 were retained. No articles were found utilizing learning health systems to address readmissions. Thus, a new architectural framework was developed incorporating common architectural themes from the literature, with adaptations to fit the interests of patients, providers, and researchers in reducing readmissions within academic hospitals (Figure 1).
DISCUSSION/SIGNIFICANCE OF IMPACT: Given the dearth of information applying learning health systems to readmissions, the proposed architecture for an integrative learning health system can be utilized as a dynamic foundation for adoption and pre-implementation planning for reducing readmissions within academic hospital settings. Additionally, the authors expect this model to be tested and continually refined to address historical and emerging issues for clinically-relevant and clinically-effective approaches to patient-centered practice and research.
We evaluated whether a diagnostic stewardship initiative consisting of antimicrobial stewardship program (ASP) preauthorization paired with education could reduce false-positive hospital-onset (HO) Clostridioides difficile infection (CDI).
Single center, quasi-experimental study.
Tertiary academic medical center in Chicago, Illinois.
Adult inpatients were included in the intervention if they were admitted between October 1, 2016, and April 30, 2018, and were eligible for C. difficile preauthorization review. Patients admitted to the stem cell transplant (SCT) unit were not included in the intervention and were therefore considered a contemporaneous noninterventional control group.
The intervention consisted of requiring prescriber attestation that diarrhea had met CDI clinical criteria, ASP preauthorization, and verbal clinician feedback. Data were compared 33 months before and 19 months after implementation. Facility-wide HO-CDI incidence rates (IR) per 10,000 patient days (PD) and standardized infection ratios (SIR) were extracted from hospital infection prevention reports.
During the entire 52-month period, the mean facility-wide HO-CDI-IR was 7.8 per 10,000 PD and the SIR was 0.9 overall. The mean ± SD HO-CDI-IR (8.5 ± 2.0 vs 6.5 ± 2.3; P < .001) and SIR (0.97 ± 0.23 vs 0.78 ± 0.26; P = .015) decreased from baseline during the intervention. Segmented regression models identified significant decreases in HO-CDI-IR (Pstep = .06; Ptrend = .008) and SIR (Pstep = .1; Ptrend = .017) trends concurrent with decreases in oral vancomycin use (Pstep < .001; Ptrend < .001). HO-CDI-IR within the noninterventional control unit did not change (Pstep = .125; Ptrend = .115).
A multidisciplinary, multifaceted intervention leveraging clinician education and feedback reduced the HO-CDI-IR and the SIR in select populations. Institutions may consider interventions like ours to reduce false-positive C. difficile NAAT tests.
Optimising short- and long-term outcomes for children and patients with CHD depends on continued scientific discovery and translation to clinical improvements in a coordinated effort by multiple stakeholders. Several challenges remain for clinicians, researchers, administrators, patients, and families seeking continuous scientific and clinical advancements in the field. We describe a new integrated research and improvement network – Cardiac Networks United – that seeks to build upon the experience and success achieved to-date to create a new infrastructure for research and quality improvement that will serve the needs of the paediatric and congenital heart community in the future. Existing gaps in data integration and barriers to improvement are described, along with the mission and vision, organisational structure, and early objectives of Cardiac Networks United. Finally, representatives of key stakeholder groups – heart centre executives, research leaders, learning health system experts, and parent advocates – offer their perspectives on the need for this new collaborative effort.
Psychopathy is a personality disorder associated with severe emotional and interpersonal consequences and persistent antisocial behavior. Neurobiological models of psychopathy emphasize impairments in emotional processing, attention, and integration of information across large-scale neural networks in the brain. One of the largest integrative hubs in the brain is the corpus callosum (CC) – a large white matter structure that connects the two cerebral hemispheres.
The current study examines CC volume, measured via Freesurfer parcellation, in a large sample (n = 495) of incarcerated men who were assessed for psychopathic traits using the Hare Psychopathy Checklist-Revised (PCL-R).
Psychopathy was associated with reduced volume across all five sub-regions of the CC. These relationships were primarily driven by the affective/interpersonal elements of psychopathy (PCL-R Factor 1), as no significant associations were found between the CC and the lifestyle/antisocial traits of psychopathy. The observed effects were not attributable to differences in substance use severity, age, IQ, or total brain volume.
These findings align with suggestions that core psychopathic traits may be fostered by reduced integrative capacity across large-scale networks in the brain.
The value of the nosological distinction between non-affective and affective psychosis has frequently been challenged. We aimed to investigate the transdiagnostic dimensional structure and associated characteristics of psychopathology at First Episode Psychosis (FEP). Regardless of diagnostic categories, we expected that positive symptoms would occur more frequently in ethnic minority groups and in more densely populated environments, and that negative symptoms would be associated with indices of neurodevelopmental impairment.
This study included 2182 FEP individuals recruited across six countries, as part of the EUropean network of national schizophrenia networks studying Gene–Environment Interactions (EU-GEI) study. Symptom ratings were analysed using multidimensional item response modelling in Mplus to estimate five theory-based models of psychosis. We used multiple regression models to examine demographic and context factors associated with symptom dimensions.
A bifactor model, composed of one general factor and five specific dimensions of positive, negative, disorganization, manic and depressive symptoms, best represented associations among ratings of psychotic symptoms. Positive symptoms were more common in ethnic minority groups. Urbanicity was associated with a higher score on the general factor. Men presented with more negative and less depressive symptoms than women. Early age at first contact with psychiatric services was associated with higher scores on the negative, disorganized, and manic symptom dimensions.
Our results suggest that the bifactor model of psychopathology holds across diagnostic categories of non-affective and affective psychosis at FEP, and demographic and context determinants map onto general and specific symptom dimensions. These findings have implications for tailoring symptom-specific treatments and inform research into the mood-psychosis spectrum.
Modern high-throughput molecular and analytical tools offer exciting opportunities to gain a mechanistic understanding of unique traits of weeds. During the past decade, tremendous progress has been made within the weed science discipline using genomic techniques to gain deeper insights into weedy traits such as invasiveness, hybridization, and herbicide resistance. Though the adoption of newer “omics” techniques such as proteomics, metabolomics, and physionomics has been slow, applications of these omics platforms to study plants, especially agriculturally important crops and weeds, have been increasing over the years. In weed science, these platforms are now used more frequently to understand mechanisms of herbicide resistance, weed resistance evolution, and crop–weed interactions. Use of these techniques could help weed scientists further reduce the knowledge gaps in understanding weedy traits. Although these techniques can provide robust insights about the molecular functioning of plants, employing a single omics platform can rarely elucidate the gene-level regulation and the associated real-time expression of weedy traits, owing to the complex and overlapping nature of biological interactions. It is therefore desirable to integrate the different omics technologies to gain a better understanding of the molecular functioning of biological systems. This multidimensional integrated approach can offer new avenues for better understanding questions of interest to weed scientists. This review offers a retrospective and prospective examination of the omics platforms employed to investigate weed physiology, along with novel approaches and new technologies that can provide holistic, knowledge-based weed management strategies for the future.
The ideal sampling method and the relative benefit of qualitative versus quantitative culture for carbapenem-resistant Enterobacteriaceae (CRE) recovery in hospitalized patients' rooms and bathrooms are unknown. Although the use of nylon-flocked swabs improved overall gram-negative organism recovery compared with cellulose sponges, the two methods were similar for CRE recovery. Quantitative culture was inferior and unrevealing beyond the qualitative results.
The conservation of threatened species requires information on how management activities influence habitat quality. The Critically Endangered black rhinoceros Diceros bicornis is restricted to savannahs representing c. 5% of its historical range. Fire is used extensively in savannahs but little is known about how rhinos respond to burning. Our aim was to understand rhino responses to fire by studying habitat selection and foraging at multiple scales. We used resource selection functions and locations of 31 rhinos during 2014–2016 to study rhino habitat use in Serengeti National Park, Tanzania. Rhino selectivity was quantified by comparing forage consumption to plant species availability in randomly sampled vegetation plots; rhino diets were subsequently verified through DNA metabarcoding analysis of faecal samples. Rhino habitat use was a unimodal function of fire history, with highly occupied sites having fire frequencies of < 0.6 fires/year and maximum occupancy occurring at a fire frequency of 0.1 fires/year. Foraging stations had characteristic plant communities, with 17 species associated with rhino foraging. Rhinos were associated with, and disproportionately consumed, woody plants, forbs and legumes, all of which decreased in abundance with increasing fire frequency. In contrast to common management practices, multiple lines of evidence suggest that the current fire regime in the Serengeti negatively influences rhino habitat use and foraging and that frequent fire limits access of rhinos to preferred forage. We outline a conceptual model to guide managers and conservationists in the use of fire under variable habitat conditions.
We compared sepsis “time zero” and Centers for Medicare and Medicaid Services (CMS) SEP-1 pass rates among 3 abstractors in 3 hospitals. Abstractors agreed on time zero in 29 of 80 (36%) cases. Perceived pass rates ranged from 9 of 80 cases (11%) to 19 of 80 cases (23%). Variability in time zero and perceived pass rates limits the utility of SEP-1 for measuring quality.
Outbreaks of Old World cutaneous leishmaniasis (CL) have significantly increased due to the conflicts in the Middle East, with most of the cases occurring in resource-limited areas such as refugee settlements. The standard methods of diagnosis include microscopy and parasite culture, which have several limitations. To address the growing need for a CL diagnostic that can be field applicable, we have identified five candidate neoglycoproteins (NGPs): Galα (NGP3B), Galα(1,3)Galα (NGP17B), Galα(1,3)Galβ (NGP9B), Galα(1,6)[Galα(1,2)]Galβ (NGP11B), and Galα(1,3)Galβ(1,4)Glcβ (NGP1B) that are differentially recognized in sera from individuals with Leishmania major infection as compared with sera from heterologous controls. These candidates contain terminal, non-reducing α-galactopyranosyl (α-Gal) residues, which are known potent immunogens to humans. Logistic regression models found that NGP3B retained the best diagnostic potential (area under the curve from receiver-operating characteristic curve = 0.8). Our data add to the growing body of work demonstrating the exploitability of the human anti-α-Gal response in CL diagnosis.
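The area under the receiver-operating characteristic curve (AUC) reported above for NGP3B can be computed without any curve fitting, using its rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen infected serum scores higher than a randomly chosen control. The sketch below is illustrative only; the reactivity values are hypothetical, not the study's data.

```python
def roc_auc(case_scores, control_scores):
    """AUC as the probability that a randomly chosen case scores higher
    than a randomly chosen control (Mann-Whitney interpretation),
    counting exact ties as one half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical anti-alpha-Gal reactivity values (arbitrary units) for
# L. major-infected sera vs heterologous controls; illustrative only.
cases = [2.1, 1.8, 2.6, 1.2, 2.4]
controls = [0.9, 1.5, 1.1, 2.0, 0.8]
auc = roc_auc(cases, controls)  # 22 of 25 case-control pairs ordered correctly
```

An AUC of 0.5 corresponds to chance discrimination and 1.0 to perfect separation, so the 0.8 reported for NGP3B indicates good, though imperfect, diagnostic discrimination.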
OBJECTIVES/SPECIFIC AIMS: The purpose of the present secondary data analysis was to examine the effect of moderate-severe disturbed sleep before the start of radiation therapy (RT) on subsequent RT-induced pain. METHODS/STUDY POPULATION: Analyses were performed on 676 RT-naïve breast cancer patients (mean age 58, 100% female) scheduled to receive RT from a previously completed nationwide, multicenter, phase II randomized controlled trial examining the efficacy of oral curcumin for radiation dermatitis severity. The trial was conducted at 21 community oncology practices throughout the US affiliated with the University of Rochester Cancer Center NCI Community Oncology Research Program (URCC NCORP) Research Base. Sleep disturbance was assessed using a single-item question from the modified MD Anderson Symptom Inventory (SI) on a 0–10 scale, with higher scores indicating greater sleep disturbance. Total subjective pain, as well as the subdomains of pain (sensory, affective, and perceived), was assessed by the short-form McGill Pain Questionnaire. Pain at the treatment site (pain-Tx) was also assessed using a single-item question from the SI. These assessments were included for pre-RT (baseline) and post-RT. For the present analyses, patients were dichotomized into 2 groups: those who had moderate-severe disturbed sleep at baseline (score ≥4 on the SI; n=101) versus those who had mild or no disturbed sleep (control group; score 0–3 on the SI; n=575). RESULTS/ANTICIPATED RESULTS: Prior to the start of RT, breast cancer patients with moderate-severe disturbed sleep at baseline were younger; less likely to have had lumpectomy or partial mastectomy but more likely to have had total mastectomy and chemotherapy; more likely to be on sleep, anti-anxiety/depression, and prescription pain medications; and more likely to suffer from depression or anxiety disorder than the control group (all p's ≤0.02).
Spearman rank correlations showed that changes in sleep disturbance from baseline to post-RT were significantly correlated with concurrent changes in total pain (r=0.38; p<0.001), sensory pain (r=0.35; p<0.001), affective pain (r=0.21; p<0.001), perceived pain intensity (r=0.37; p<0.001), and pain-Tx (r=0.35; p<0.001). In total, 92% of patients with moderate-severe disturbed sleep at baseline reported post-RT total pain compared with 79% of patients in the control group (p=0.006). Generalized estimating equation models, after controlling for baseline pain and other covariates (baseline fatigue and distress, age, sleep medications, anti-anxiety/depression medications, prescription pain medications, and depression or anxiety disorder), showed that patients with moderate-severe disturbed sleep at baseline had significantly higher mean values of post-RT total pain (by 39%; p=0.033), post-RT sensory pain (by 41%; p=0.046), and post-RT affective pain (by 55%; p=0.035) than the control group. Perceived pain intensity (p=0.066) and pain-Tx (p=0.086) at post-RT did not differ significantly between the 2 groups. DISCUSSION/SIGNIFICANCE OF IMPACT: These findings suggest that moderate-severe disturbed sleep prior to RT is an important predictor of worsening pain post-RT in breast cancer patients. There could be several plausible reasons for this. Sleep disturbance, such as sleep loss and sleep continuity disturbance, could impair sleep-related recovery and repair of tissue damage associated with cancer and its treatment, thus amplifying pain. Sleep disturbance may also reduce the pain tolerance threshold through increased sensitization of the central nervous system. In addition, pain and sleep disturbance may share common neuroimmunological pathways: sleep disturbance may modulate inflammation, which in turn may contribute to increased pain.
Further research is needed to confirm these findings and to determine whether interventions targeting sleep disturbance at an early phase could be a potential alternate approach to reducing pain after RT.
A recent paper, “Parkinson's disease mild cognitive impairment classifications and neurobehavioral symptoms” (McDermott et al., 2017), provides an interesting comparison of the influence of different criteria for Parkinson's disease with mild cognitive impairment (PD-MCI) on progression to dementia (PDD). Unfortunately, McDermott et al. (2017) incorrectly stated that “only 21% of PD-MCI participants (identified with a 1.5 SD cut-off) converted to PDD within four years” (p.6) in our study (Wood et al., 2016). However, the important point made by Wood et al. (2016) was that the proportion of conversions to PDD was 51% when the PD-MCI diagnosis required a minimum of two 1.5 SD impairments within any single cognitive domain, whereas additional PD-MCI patients classified with one impairment at 1.5 SD in each of the two domains (but never two impairments in the same domain) had a non-significant risk of dementia relative to non-MCI patients (11% vs. 6% converted, respectively). Our PDD conversion rate was 38% when combining both 1.5 SD criteria (21/56 PD-MCI patients vs. 4/65 non-MCI patients converted); McDermott et al. (2017) found a 42% conversion rate over three years for similarly described PD-MCI patients (10/24 PD-MCI patients vs. 0/27 non-MCI patients converted). Our study was also part of a multinational study (n = 467) showing that PD-MCI has predictive validity beyond known demographic and PD-specific factors of influence (Hoogland et al., 2017). All three studies found that multiple cognitive domain impairments are common in PD-MCI. Nonetheless, the research community needs to clarify the association between PD-MCI subtypes and, especially, the optimal cognitive markers for dementia risk in PD patients.
Leafy spurge (Euphorbia esula L.) is an invasive perennial weed infesting range and recreational lands of North America. Previous research and omics projects with E. esula have helped develop it as a model for studying many aspects of perennial plant development and response to abiotic stress. However, the lack of an assembled genome for E. esula has limited the power of previous transcriptomics studies to identify functional promoter elements and transcription factor binding sites. An assembled genome for E. esula would enhance our understanding of signaling processes controlling plant development and responses to environmental stress and provide a better understanding of genetic factors impacting weediness traits, evolution, and herbicide resistance. A comprehensive transcriptome database would also assist in analyzing future RNA-seq studies and is needed to annotate and assess genomic sequence assemblies. Here, we assembled and annotated 56,234 unigenes from an assembly of 589,235 RNA-seq-derived contigs and a previously published Sanger-sequenced expressed sequence tag collection. The resulting data indicate that we now have sequence for >90% of the expressed E. esula protein-coding genes. We also assembled the gene space of E. esula by using a limited coverage (18X) genomic sequence database. In this study, the programs Velvet and Trinity produced the best gene-space assemblies based on representation of expressed and conserved eukaryotic genes. The results indicate that E. esula contains as much as 23% repetitive sequences, of which 11% are unique. Our sequence data were also sufficient for assembling a full chloroplast and partial mitochondrial genome. Further, marker analysis identified more than 150,000 high-quality variants in our E. esula L-RNA–scaffolded, whole-genome, Trinity-assembled genome. Based on these results, E. esula appears to have limited heterozygosity. 
This study provides a blueprint for low-cost genomic assemblies in weed species and new resources for identifying conserved and novel promoter regions among coordinately expressed genes of E. esula.
To summarize and discuss the logistic and administrative challenges we encountered during the Benefits of Enhanced Terminal Room (BETR) Disinfection Study and the lessons learned that are pertinent to future utilization of ultraviolet (UV) disinfection devices in other hospitals.
Multicenter cluster randomized trial
Nine hospitals in the southeastern United States
All participating hospitals developed systems to implement 4 different strategies for terminal room disinfection. We measured compliance with disinfection strategy, barriers to implementation, and perceptions from nurse managers and environmental services (EVS) supervisors throughout the 28-month trial.
Implementation of enhanced terminal disinfection with UV disinfection devices provides unique challenges, including time pressures from bed control personnel, efficient room identification, negative perceptions from nurse managers, and discharge volume. In the course of the BETR Disinfection Study, we utilized several strategies to overcome these barriers: (1) establishing safety as the priority; (2) improving communication between EVS, bed control, and hospital administration; (3) ensuring availability of necessary resources; and (4) tracking and providing feedback on compliance. Using these strategies, we deployed ultraviolet (UV) disinfection devices in 16,220 (88%) of 18,411 eligible rooms during our trial (median per hospital, 89%; IQR, 86%–92%).
Implementation of enhanced terminal room disinfection strategies using UV devices requires recognition and mitigation of 2 key barriers: (1) timely and accurate identification of rooms that would benefit from enhanced terminal disinfection and (2) overcoming time constraints to allow EVS cleaning staff sufficient time to properly employ enhanced terminal disinfection methods.
The Neotoma Paleoecology Database is a community-curated data resource that supports interdisciplinary global change research by enabling broad-scale studies of taxon and community diversity, distributions, and dynamics during the large environmental changes of the past. By consolidating many kinds of data into a common repository, Neotoma lowers costs of paleodata management, makes paleoecological data openly available, and offers a high-quality, curated resource. Neotoma’s distributed scientific governance model is flexible and scalable, with many open pathways for participation by new members, data contributors, stewards, and research communities. The Neotoma data model supports, or can be extended to support, any kind of paleoecological or paleoenvironmental data from sedimentary archives. Data additions to Neotoma are growing and now include >3.8 million observations, >17,000 datasets, and >9200 sites. Dataset types currently include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Multiple avenues exist to obtain Neotoma data, including the Explorer map-based interface, an application programming interface, the neotoma R package, and digital object identifiers. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.