The Subglacial Antarctic Lakes Scientific Access (SALSA) Project accessed Mercer Subglacial Lake using environmentally clean hot-water drilling to examine interactions among ice, water, sediment, rock, microbes and carbon reservoirs within the lake water column and underlying sediments. A ~0.4 m diameter borehole was melted through 1087 m of ice and maintained over ~10 days, allowing observation of ice properties and collection of water and sediment with various tools. Over this period, SALSA collected: 60 L of lake water and 10 L of deep borehole water; microbes >0.2 μm in diameter from in situ filtration of ~100 L of lake water; 10 multicores 0.32–0.49 m long; 1.0 and 1.76 m long gravity cores; three conductivity–temperature–depth profiles of borehole and lake water; five discrete depth current meter measurements in the lake and images of ice, the lake water–ice interface and lake sediments. Temperature and conductivity data showed the hydrodynamic character of water mixing between the borehole and lake after entry. Models simulating melting of the ~6 m thick basal accreted ice layer imply that debris fall-out through the ~15 m water column to the lake sediments from borehole melting had little effect on the stratigraphy of surficial sediment cores.
Parent–Child Interaction Therapy (PCIT) has been shown to improve positive, responsive parenting and lower risk for child maltreatment (CM), including among families who are already involved in the child welfare system. However, higher-risk families show higher rates of treatment attrition, limiting effectiveness. In N = 120 child welfare families randomized to PCIT, we tested behavioral and physiological markers of parent self-regulation and socio-cognitive processes assessed at pre-intervention as predictors of retention in PCIT. Results of multinomial logistic regressions indicate that parents who declined treatment displayed more negative parenting, greater perceptions of child responsibility and control in adult–child transactions, respiratory sinus arrhythmia (RSA) increases to a positive dyadic interaction task, and RSA withdrawal to a challenging dyadic toy clean-up task. Increased odds of dropout during PCIT's child-directed interaction phase were associated with greater parent attentional bias to angry facial cues on an emotional go/no-go task. Hostile attributions about one's child predicted risk for dropout during the parent-directed interaction phase, and readiness-for-change scores predicted higher odds of treatment completion. Implications for intervening with child welfare-involved families are discussed along with study limitations.
To assess the relationship between food insecurity, sleep quality, and days with mental and physical health issues among college students.
An online survey was administered. Food insecurity was assessed using the ten-item Adult Food Security Survey Module. Sleep was measured using the nineteen-item Pittsburgh Sleep Quality Index (PSQI). Mental health and physical health were measured using three items from the Healthy Days Core Module. Multivariate logistic regression was conducted to assess the relationship between food insecurity, sleep quality, and days with poor mental and physical health.
Twenty-two higher education institutions.
College students (n 17 686) enrolled at one of twenty-two participating universities.
Compared with food-secure students, those classified as food insecure (43·4 %) had higher PSQI scores indicating poorer sleep quality (P < 0·0001) and reported more days with poor mental (P < 0·0001) and physical (P < 0·0001) health as well as days when mental and physical health prevented them from completing daily activities (P < 0·0001). Food-insecure students had higher adjusted odds of having poor sleep quality (adjusted OR (AOR): 1·13; 95 % CI 1·12, 1·14), days with poor physical health (AOR: 1·01; 95 % CI 1·01, 1·02), days with poor mental health (AOR: 1·03; 95 % CI 1·02, 1·03) and days when poor mental or physical health prevented them from completing daily activities (AOR: 1·03; 95 % CI 1·02, 1·04).
College students report high levels of food insecurity, which is associated with poor sleep quality and poor mental and physical health. Multi-level policy changes and campus wellness programmes are needed to prevent food insecurity and improve student health-related outcomes.
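The adjusted odds ratios reported above come from logistic regression; the unadjusted version of such an estimate can be computed directly from a 2×2 table with a Wald confidence interval. A minimal sketch using hypothetical cell counts (the abstract does not report the raw counts):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: food-insecure vs. food-secure by poor sleep quality
or_, lo, hi = odds_ratio_ci(3200, 2500, 5000, 7000)
```

The adjusted estimates in the study would additionally condition on covariates within the regression model rather than using this raw cross-tabulation.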
We introduce a class of non-uniform random recursive trees grown with an attachment preference for young age. Via the Chen–Stein method of Poisson approximation, we find that the outdegree of a node is characterized in the limit by ‘perturbed’ Poisson laws, and the perturbation diminishes as the node index increases. As the perturbation is attenuated, a pure Poisson limit ultimately emerges in later phases. Moreover, we derive asymptotics for the proportion of leaves and show that the limiting fraction is less than one half. Finally, we study the insertion depth in a random tree in this class. For the insertion depth, we find the exact probability distribution, involving Stirling numbers, and consequently we find the exact and asymptotic mean and variance. Under appropriate normalization, we derive a concentration law and a limiting normal distribution. Some of these results contrast with their counterparts in the uniform attachment model, and some are similar.
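One way to make the young-age attachment preference concrete is to simulate such a tree and inspect node outdegrees and the leaf fraction. The linear recency weighting below is an illustrative choice, not necessarily the exact attachment law studied in the paper:

```python
import random

def young_tree(n, seed=0):
    """Random recursive tree in which node t attaches to an existing node
    with probability proportional to that node's recency (young-age
    preference; the linear weighting is illustrative only)."""
    rng = random.Random(seed)
    parent = [None]  # node 0 is the root
    for t in range(1, n):
        # weight node j by j + 1, so recently added nodes are favored
        weights = [j + 1 for j in range(t)]
        parent.append(rng.choices(range(t), weights=weights)[0])
    return parent

parents = young_tree(1000)
outdeg = [0] * len(parents)
for p in parents[1:]:
    outdeg[p] += 1
leaf_frac = sum(d == 0 for d in outdeg) / len(parents)
```

Under the paper's model, the empirical outdegree distribution of a fixed early node should be close to a perturbed Poisson law, and the leaf fraction should settle below one half.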
Patients with psychiatric illness are at increased risk of developing non-psychiatric medical illnesses. There have been positive reports regarding the integration of primary care services into mental health facilities. Here, we evaluate the appropriateness of psychiatry non-consultant hospital doctor (NCHD) transfers to the local emergency department (ED) in the context of an in-house primary care service.
We reviewed the inpatient transfers from St Patrick’s University Hospital (SPUH) to the local ED at St James’ Hospital (SJH) from 1 January 2016 to 31 December 2017. We used inpatient admission to SJH as our primary marker of an appropriate transfer.
A total of 246 inpatients were transferred from SPUH to the SJH ED for medical review in 2016 and 2017. Twenty-seven (11%) of these were referred to the ED by the primary care service. Of those referred, 51% were admitted, with similar admission rates for general practitioner-initiated (n = 27, 54% admitted) and NCHD-initiated (n = 219, 51% admitted) referrals. Acute neurological illness, concern regarding a cardiac illness, and deliberate self-harm were the most common reasons for referral.
Our primary finding is that, of those transferred to ED by either primary care or a psychiatry NCHD, a similar proportion was judged to be in need of inpatient admission. This indicates that as a group, psychiatry NCHD assessment of acuity and need for transfer was similar to that of their colleagues in primary care.
We examined associations between preschool children's cumulative risk exposure, dyadic interaction patterns, and self-control abilities in 238 mother–child dyads. Positive interactive synchrony, relationship ruptures, and latency to repair were micro-coded during a 3–5 minute joint challenge task. Children's self-control was assessed via two laboratory tasks and by parent report. Structural equation modeling and mediation analyses were utilized to examine the direct and indirect effects of cumulative risk on children's observed and parent-reported self-control abilities. Parent–child interactive processes of dyadic synchrony and latency to repair ruptures in synchrony were examined as mediators. Dyadic synchrony and latency to repair ruptures were found to mediate associations between cumulative risk exposure and children's behavioral and parent-reported self-control. Children exposed to more cumulative risk engaged in less dyadic synchrony and experienced longer latencies to repair ruptures with their caregiver, which in turn was associated with lower child self-control. Though cross-sectional, findings suggest dyadic synchrony and repair processes may represent viable mechanistic pathways linking cumulative risk exposure and deficits in child self-control. However, independent replications using longitudinal and experimental intervention designs are needed to determine causal pathways and inform new approaches for targeting the effects of early risk exposure through a focus on two-generational interventions.
Secular noblemen and noblewomen’s relationships with monastic communities during the high Middle Ages follow many of the patterns already established in the preceding centuries. Across Latin Christendom, nobles continued to make donations to religious houses for the sake of their own souls and those of their ancestors and other relatives. Similarly, those who had sufficient resources to found new monasteries continued to do so, establishing and endowing religious communities dedicated in perpetuity to their spiritual well-being and the preservation of their memories. As in earlier centuries, nobles’ motives for patronizing monastic houses were not confined solely to the religious sphere. Anthropological models of “gift-giving” (as discussed by Isabelle Rosé in her article in volume 1) can be applied to the property agreements between nobles and monasteries of the high Middle Ages as well. Conflicts over lands and rights also continued to unsettle local societies, because the shifting nature of patronage and kinship networks over time repeatedly opened new questions about which nobles had claims to a piece of property even after it had been donated to a religious community.
Over half of the Irish population is overweight or obese. The Obesity Policy and Action Plan 2016–2025 will set reformulation targets for fat, saturated fat and sugar in Ireland and review progress. In 2016, the Food Safety Authority of Ireland undertook a cross-sectional market scan of yoghurts to evaluate the energy, fat, saturated fat and sugar content based solely on declared nutrition labels. The aims of this 2018 study were to verify the accuracy of declared nutrition information on yoghurts and to confirm the suitability of declared nutrition labels for energy, fat, saturated fat and sugar reformulation monitoring.
Yoghurts identified in the 2016 market scan (n 578) were weighted based on categorisation of manufacturer type (branded, own brand), product category (natural, flavoured and luxury) and declared nutrition content. Samples (n 200) were randomly selected from these weighted groups and tested by a laboratory accredited for energy, fat, saturated fat and sugar analysis. Data were analysed using IBM SPSS (version 25). As data were not normally distributed, median values were compared for declared and tested energy, fat, saturated fat and sugar content using the Wilcoxon signed-rank test and Spearman rank-order correlation.
Of the tested yoghurts, 3% (n 6), 5% (n 9) and 19% (n 31) were outside the recommended European Commission (EC) labelling tolerance for fat, saturated fat and sugar, respectively. Tested nutrient content was consistently lower than declared. There was a statistically significant difference in declared vs. tested energy (87 kcal vs. 84 kcal, p = 0.03), fat (2.7 g vs. 2.5 g, p < 0.001) and sugar (9.9 g vs. 8.7 g, p < 0.001) content per 100 g yoghurt. The difference between declared and tested sugar content per 100 g yoghurt was statistically significant across all yoghurt types, including natural (4.8 g vs. 3.4 g, p < 0.001), flavoured (9.7 g vs. 8.6 g, p < 0.001) and luxury (15 g vs. 13.6 g, p = 0.002). There was a statistically significant difference between declared vs. tested fat (2.8 g vs. 2.5 g, p < 0.001) and saturated fat (1.9 g vs. 1.6 g, p = 0.017) content of own-brand yoghurts per 100 g. There was a positive correlation between energy content and portion size (r = 0.2, p < 0.01).
There was a high level of agreement between declared vs. tested fat and saturated fat content of yoghurts, but a lower level of agreement between declared vs. tested sugar content of yoghurts. This indicates that declared nutrition labels are suitable for reformulation monitoring of fat and saturated fat, but may not be suitable for sugar. This finding will be further investigated and tested in future work planned for nutrition label verification of other food categories.
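The EC labelling tolerance check used to flag products can be sketched as a banded comparison of declared against tested values. The bands below follow the 2012 EU guidance document for sugars (<10 g/100 g: ±2 g; 10–40 g: ±20%; >40 g: ±8 g) and should be verified against the current guidance before reuse:

```python
def sugar_within_ec_tolerance(declared, tested):
    """Illustrative check of a tested sugar value (g per 100 g) against
    the declared value, using the 2012 EU guidance tolerance bands.
    Confirm the bands against the current guidance before relying on this."""
    diff = abs(declared - tested)
    if declared < 10:
        return diff <= 2.0
    if declared <= 40:
        return diff <= 0.20 * declared
    return diff <= 8.0

# median declared vs. tested sugar values reported in the study
overall_ok = sugar_within_ec_tolerance(9.9, 8.7)
luxury_ok = sugar_within_ec_tolerance(15.0, 13.6)
```

Note that a product can pass on median values while individual samples still fall outside tolerance, which is consistent with 19% of tested yoghurts being flagged for sugar.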
The Vietnam Era Twin Study of Aging (VETSA) is a longitudinal behavioral genetic study with a primary focus on cognitive and brain aging in men, particularly early identification of risk for mild cognitive impairment (MCI) and Alzheimer’s disease (AD). It comprises a subset of over 1600 twins from the Vietnam Era Twin Registry. Twins live all over the USA. Assessments began when participants were in their 50s. Follow-ups were conducted every 5–6 years, and wave 3 has been completed as of this writing. The age range of participants is narrow (about 10 years). An extensive neurocognitive test battery has added precision in assessing differences in middle-aged adults and in predicting progression to MCI. Young adult cognitive test data (at an average age of 20 years) provide a means of disentangling aging effects from longstanding differences. Genome-wide genotyping and plasma assays of AD biomarkers from waves 1 and 3 were conducted in wave 3. These features make the VETSA ideal for studying the heterogeneity of within-individual trajectories from midlife to old age, and for early detection of risk factors for cognitive decline.
Herbicides have been a primary means of managing undesirable brush on grazing lands across the southwestern United States for decades. Continued encroachment of honey mesquite and huisache on grazing lands warrants evaluation of treatment life and economics of current and experimental treatments. Treatment life is defined as the time between treatment application and when canopy cover of undesirable brush returns to a competitive level with native forage grasses (i.e., 25% canopy cover for mesquite and 30% canopy cover for huisache). Treatment life of industry-standard herbicides was compared with that of aminocyclopyrachlor plus triclopyr amine (ACP+T) from 10 broadcast-applied honey mesquite and five broadcast-applied huisache trials established from 2007 through 2013 across Texas. On average, the treatment life of industry standard treatments (IST) for huisache was 3 yr. In comparison, huisache canopy cover was only 2.5% in plots treated with ACP+T 3 yr after treatment. The average treatment life of IST for honey mesquite was 8.6 yr, whereas plots treated with ACP+T had just 2% mesquite canopy cover at that time. Improved treatment life of ACP+T compared with IST life was due to higher mortality resulting in more consistent brush canopy reduction. The net present values (NPVs) of ACP+T and IST for both huisache and mesquite were similar until the treatment life of the IST application was reached (3 yr for huisache and 8.6 yr for honey mesquite). At that point, NPVs of the programs diverged as a result of brush competition with desirable forage grasses and additional input costs associated with theoretical follow-up IST necessary to maintain optimum livestock forage production. The ACP+T treatments did not warrant a sequential application over the 12-yr analysis for huisache or 20-yr analysis for honey mesquite that this research covered. These results indicate ACP+T provides cost-effective, long-term control of honey mesquite and huisache.
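The net-present-value comparison described above can be sketched with a standard discounting function. The per-acre costs and the 5% discount rate below are hypothetical placeholders for illustration, not the study's figures:

```python
def npv(cashflows, rate=0.05):
    """Net present value of yearly cashflows (year 0 first).
    The 5% discount rate is an assumed illustration value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 12  # the study's huisache analysis horizon
# hypothetical per-acre costs: a single ACP+T application vs. an IST
# program re-applied every 3 years (its estimated treatment life)
acpt = [-40.0] + [0.0] * (years - 1)
ist = [-25.0 if t % 3 == 0 else 0.0 for t in range(years)]
npv_acpt = npv(acpt)
npv_ist = npv(ist)
```

Under these assumptions the repeated IST applications accumulate a larger discounted cost than the single ACP+T treatment, mirroring the divergence in NPV the study reports once IST treatment life is exceeded (the study additionally accounts for forage losses from brush competition).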
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m2) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
Innovation Concept: Residents bear an enormous burden of responsibility for patient care, which can lead to stress and mental exhaustion, especially in the fast-paced and acute environment of emergency medicine (EM). In addition to the numerous demands faced by EM residents, being a member of a geographically distributed residency program presents many unique challenges from a support and wellness perspective. To address these issues, we sought to implement a video-conferenced peer support network in the hope of fostering wellness in the NOSM Family Medicine/EM program, where learners are commonly separated for training. Methods: Participants completed a pre-pilot questionnaire that showed strong interest in this type of novel network. Furthermore, residents conveyed that they are reluctant to access formal services and commonly rely on co-residents for support. This pilot program intends to decrease the barriers to seeking support that geography and stigma create throughout medical training. Keeping the network small and limited to co-residents maintains a collegial and confidential environment that enables colleagues to provide relevant help to one another. Offering this outlet allows the opportunity to debrief and share unique experiences, which can lead to improved knowledge and wellbeing. Curriculum, Tool or Material: Informal, co-resident-run and easy-to-access sessions are held twice monthly and average one hour in length. Discussion topics commonly include residency issues, difficult patient encounters and challenging situations. These sessions are conducted via video conferencing, making them easily accessible from a distance and from a comfortable, convenient environment of the participant's choosing. Residents have commented that this is a helpful platform to discuss important issues while providing a safe and confidential resource to help cope with residency challenges.
Conclusion: Further data analysis is underway as we are in the initial stages of implementing the program. In the final stages (April 2018), a post-pilot questionnaire will be analysed to explore barriers and limitations and to determine the role of the network going forward. If found to be effective, the network can be implemented and adapted for future residents. Other programs can use this feasible model to increase wellness and foster the same supportive environment among residents, especially those separated geographically from peers, who may benefit most.
Background: There is growing concern about emergency physicians' overuse of computed tomography (CT). In an attempt to ensure appropriate ordering, many hospitals implement strict protocols for ordering CT scans in the emergency department (ED) that include approval of all scans by a board-certified radiologist and reduced access to CT overnight. Aim Statement: The aim of this study is to review the impact of RAD ED: direct access to CT ordering by ED physicians, a 24-hr CT technologist and third-party reporting on CT scans overnight. Our objectives were to assess the effect on: 1) ED length of stay, 2) number of CT scans ordered and 3) admission rates. Measures & Design: We conducted a prospective pilot before-and-after study at a single tertiary-care emergency department between February 1st, 2018 and July 31st, 2018. Inclusion criteria were adult patients presenting to the emergency department and undergoing CT for any of the following: face, neck, spine, upper and lower extremities, chest, abdomen and pelvis. Exclusion criteria were those undergoing CT head for stroke or trauma. Evaluation/Results: A total of 924 patients met our criteria, 352 before and 568 after implementation. Comparison of the patient populations demonstrates very similar characteristics in both groups (49% male; average age 56 years; CTAS 2, 40%; CTAS 3, 47%). An additional 216 scans were performed in the post-implementation group, an increase of 61%. ED length of stay averaged 5.6 hours pre-implementation and 4.7 hours post-implementation, a significant reduction of approximately 0.9 hours (p < 0.01). Data collection is currently ongoing for factors, including admission rates, that we will adjust for in a multivariate analysis. Discussion/Impact: RAD ED led to a significant increase in CT ordering and a decrease in ED length of stay.
We believe that this project provides important information to clinicians and patients with regard to overall CT utilization, ED wait times, follow-up visits for CT scanning and admission rates. It is also important for administrators in deciding whether these new rules are leading to improved efficiency, and in estimating their financial impact.
In the USA, western Washington (WWA) and the Alaska (AK) Interior are two regions where maritime and continental climates, high latitude and cropping systems necessitate early maturing spring wheat (Triticum aestivum L.). Both regions aim to increase the production of hard spring bread wheat for human consumption to support regional agriculture and food systems. The Nordic region of Europe has a history of breeding for early maturing spring wheat and also experiences long daylengths with mixed maritime and continental climates. Nordic wheat also carries wildtype (wt) NAM-B1, an allele associated with accelerated senescence and increased grain protein and micronutrient content, at a higher frequency than global germplasm. Time to senescence, yield, protein and mineral content were evaluated on 42 accessions of Nordic hard red spring wheat containing wt NAM-B1 over 2 years on experimental stations in WWA and the AK Interior. Significant variation was found by location and accession for time to senescence, suggesting potential parental lines for breeding programmes targeting early maturity. Additionally, multiple regression analysis showed that decreased time to senescence correlated negatively with grain yield and positively with grain protein, iron and zinc content. Breeding for early maturity in these regions will need to account for this potential trade-off in yield. Nordic wt NAM-B1 accessions with early senescence yet with yields similar to regional checks are reported. Collaboration among alternative wheat regions can aid in germplasm exchange and varietal development as shown here for the early maturing trait.
In 1994, the National Jointed Goatgrass Research Program was initiated with funding from a special USDA grant. The 15-yr program provided $4.1 million to support jointed goatgrass (Aegilops cylindrica Host.) research and technology transfer projects in 10 western states. These projects resulted in approximately 80 refereed manuscripts, including journal articles and extension publications. The research covered various topics related to the biology and ecology of jointed goatgrass as well as its management and control in wheat (Triticum aestivum L.) production systems. This review summarizes the research on jointed goatgrass published after Donald and Ogg’s 1991 review, most of which was conducted as part of the USDA-funded National Jointed Goatgrass Research Program. Specific topics that were studied and reviewed here include A. cylindrica genetics, especially as it relates to gene flow and hybridization rates with wheat and fertility of the resulting hybrids; vernalization requirements; seed dormancy, longevity, and germination requirements; competitiveness with wheat; and herbicide resistance acquired through evolution or gene flow from wheat. With respect to management, a wide variety of practices were evaluated, including various tillage types and frequencies; crop rotations, especially diversified wheat production systems that include spring-seeded annual crops; competitive wheat cultivars, seeding dates, seeding density, and row spacing; fertility management, including nitrogen application timing and placement; and field burning. Finally, many studies evaluated the use of herbicides, especially the introduction of imazamox in imidazolinone-resistant wheat cultivars, as well as comparison of adjuvant systems and application timings. In addition to the many management practices that were studied individually, several integrated management systems were evaluated that combined crop rotations, tillage, and herbicide programs. 
Between 1993 and 2013, weed scientists in 14 western states estimated that jointed goatgrass infestations decreased by 45% to 55% and attributed the reduction to the implementation of more diverse crop rotations, improved cultural practices, and use of imazamox-resistant wheat technology. This is evidence that the practical implications of the National Jointed Goatgrass Research Program have been successfully implemented by growers throughout the western United States.
The Atypical Maternal Behavior Instrument for Assessment and Classification (AMBIANCE; Bronfman, Madigan, & Lyons-Ruth, 2009–2014; Bronfman, Parsons, & Lyons-Ruth, 1992–2004) is a widely used and well-validated measure for assessing disrupted forms of caregiver responsiveness within parent–child interactions. However, it requires evaluating approximately 150 behavioral items from videotape and extensive training to code, thus making its use impractical in most clinical contexts. Accordingly, the primary aim of the current study was to identify a reduced set of behavioral indicators most central to the AMBIANCE coding system using latent-trait item response theory (IRT) models. Observed mother–infant interaction data previously coded with the AMBIANCE was pooled from laboratories in both North America and Europe (N = 343). Using 2-parameter logistic IRT models, a reduced set of 45 AMBIANCE items was identified. Preliminary convergent and discriminant validity was evaluated in relation to classifications of maternal disrupted communication assigned using the full set of AMBIANCE indicators, to infant attachment disorganization, and to maternal sensitivity. The results supported the construct validity of the refined item set, opening the way for development of a brief screening measure for disrupted maternal communication. IRT models in clinical scale refinement and their potential for bridging clinical and research objectives in developmental psychopathology are discussed.
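The 2-parameter logistic (2PL) model underlying the item analysis has a simple closed form: the probability that a given AMBIANCE indicator is observed depends on the caregiver's latent trait level θ, an item discrimination a, and an item difficulty b. A minimal sketch with hypothetical parameter values:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: probability an item is endorsed
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# item-refinement intuition: high-discrimination items separate trait
# levels sharply; low-discrimination items barely do, so a reduced item
# set keeps the indicators with the most discriminating curves
p_hi = p_2pl(1.0, a=2.5, b=0.0)  # strongly discriminating item
p_lo = p_2pl(1.0, a=0.3, b=0.0)  # weakly discriminating item
```

Fitting a and b for each of the ~150 indicators, then retaining the most informative items, is the general strategy a reduction to 45 items would follow; the actual estimation in the study was done with full IRT software, not this sketch.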
We present an indentation-scope that interfaces with confocal microscopy, enabling direct observation of the three-dimensional (3D) microstructural response of coatings on substrates. Using this method, we compared microns-thick polymer coatings on glass with and without silica nanoparticle filler. Bulk force data confirmed the >30% modulus difference, while microstructural data further revealed slip at the glass-coating interface. Filled coatings slipped more and about two times faster, as reflected in 3D displacement and von Mises strain fields. Overall, these data indicate that silica-doping of coatings can dramatically alter adhesion. Moreover, this method complements existing theoretical and modeling approaches for studying indentation in layered systems.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
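With a single binary predictor, the zygosity contrast in such a linear regression reduces exactly to a difference in group means: the OLS slope on an MZ/DZ dummy equals mean(MZ) − mean(DZ). A sketch with hypothetical education-year values (not the study's data):

```python
from statistics import mean

def ols_slope(x, y):
    """OLS slope of y on x via the covariance/variance formula."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# hypothetical education years for a handful of twins
mz = [14.2, 13.8, 15.0]
dz = [13.9, 13.6, 14.8]
x = [1] * len(mz) + [0] * len(dz)  # zygosity dummy: 1 = MZ
y = mz + dz
slope = ols_slope(x, y)  # equals mean(mz) - mean(dz)
```

The study's models additionally adjust for covariates such as sex and birth cohort, which is where full regression, rather than a raw mean difference, is needed.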
For fifty years astronomers have been searching for pulsar signals in observational data. Throughout this time the process of choosing detections worthy of investigation, so called ‘candidate selection’, has been effective, yielding thousands of pulsar discoveries. Yet in recent years technological advances have permitted the proliferation of pulsar-like candidates, straining our candidate selection capabilities, and ultimately reducing selection accuracy. To overcome such problems, we now apply ‘intelligent’ machine learning tools. Whilst these have achieved success, candidate volumes continue to increase, and our methods have to evolve to keep pace with the change. This talk considers how to meet this challenge as a community.