In recent years, soybean acreage has increased significantly in western Canada. One of the challenges associated with growing soybean in western Canada is the control of volunteer glyphosate-resistant (GR) canola, because the majority of soybean cultivars are also glyphosate resistant. The objective of this research was to determine the impact of soybean seeding rate and planting date on competition with volunteer canola. We also sought to determine how high the seeding rate could be raised while remaining economically feasible for producers. Soybean was seeded at five seeding rates (targeted 10, 20, 40, 80 and 160 plants m⁻²) and three planting dates (targeted mid-May, late May, and early June) at four sites across western Canada in 2014 and 2015. Soybean yield consistently increased with higher seeding rates, while volunteer canola biomass decreased. The effect of planting date varied across site-years. An economic analysis determined that the optimal seeding rate was 40 to 60 plants m⁻², depending on market price, while the optimal planting window ran from May 20 to June 1.
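One way to see how such an economic optimum arises is to combine a diminishing-returns yield response with a linear seed cost. The sketch below is purely illustrative: the response curve, market price, and seed cost are hypothetical assumptions, not values from the study.

import numpy as np

rates = np.linspace(10, 160, 1000)       # seeding rate, plants m^-2
# hypothetical asymptotic yield response (t/ha); ymax and k are assumptions
ymax, k = 3.0, 25.0
grain_yield = ymax * rates / (rates + k)
price = 450.0                            # $/t, hypothetical market price
seed_cost = 6.0 * rates                  # $/ha, hypothetical cost per plant m^-2
net_return = price * grain_yield - seed_cost
print(f"optimal rate: {rates[np.argmax(net_return)]:.0f} plants m^-2")  # ~50 here

With these made-up inputs the optimum falls at about 50 plants m⁻², inside the 40 to 60 range reported above; a higher market price shifts the optimum upward because the marginal yield gain pays for more seed.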
Evidence suggests that low birth weight and fetal exposure to extreme maternal undernutrition are associated with cardiovascular disease in adulthood. Hyperemesis gravidarum, a clinical entity characterized by severe nausea and excessive vomiting leading to suboptimal maternal nutritional status during early pregnancy, is associated with an increased risk of adverse pregnancy outcomes. Several studies have also shown that measures related to hyperemesis gravidarum, such as maternal daily vomiting or severe weight loss, are associated with increased risks of adverse fetal outcomes. Little is known about the long-term consequences for offspring of maternal hyperemesis gravidarum and related measures during pregnancy. We examined the associations of maternal daily vomiting during early pregnancy, as a measure related to hyperemesis gravidarum, with childhood cardiovascular risk factors.
In a population-based prospective cohort study from early pregnancy onwards among 4,769 mothers and their children in Rotterdam, the Netherlands, we measured childhood body mass index, total fat mass percentage, android/gynoid fat mass ratio, preperitoneal fat mass area, blood pressure, lipids, and insulin levels. We used multiple regression analyses to assess the associations of maternal vomiting during early pregnancy with childhood cardiovascular outcomes.
Compared with the children of mothers without daily vomiting during early pregnancy, the children of mothers with daily vomiting during early pregnancy had a higher childhood total body fat mass (difference 0.12 standard deviation score [SDS]; 95% confidence interval [CI] 0.03–0.20), android/gynoid fat mass ratio (difference 0.13 SDS; 95% CI 0.04–0.23), and preperitoneal fat mass area (difference 0.10 SDS; 95% CI 0–0.20). These associations were not explained by birth characteristics but partly explained by higher infant growth. Maternal daily vomiting during early pregnancy was not associated with childhood blood pressure, lipids, and insulin levels.
Maternal daily vomiting during early pregnancy is associated with higher childhood total body fat mass and abdominal fat mass levels, but not with other cardiovascular risk factors. Further studies are needed to replicate these findings, to explore the underlying mechanisms and to assess the long-term consequences.
Review findings on the role of dietary patterns in preventing depression are inconsistent, possibly due to variation in assessment of dietary exposure and depression. We studied the association between dietary patterns and depressive symptoms in six population-based cohorts and meta-analysed the findings using a standardised approach that defined dietary exposure, depression assessment and covariates.
Included were cross-sectional data from 23 026 participants in six cohorts: InCHIANTI (Italy), LASA, NESDA, HELIUS (the Netherlands), ALSWH (Australia) and Whitehall II (UK). Analysis of incidence was based on three cohorts with repeated measures of depressive symptoms at 5–6 years of follow-up in 10 721 participants: Whitehall II, InCHIANTI and ALSWH. Three a priori dietary patterns, the Mediterranean diet score (MDS), the Alternative Healthy Eating Index (AHEI-2010) and the Dietary Approaches to Stop Hypertension (DASH) diet, were investigated in relation to depressive symptoms. Cohort-level analyses were adjusted for a fixed set of confounders; meta-analysis used a random-effects model.
Cross-sectional and prospective analyses showed statistically significant inverse associations of all three dietary patterns with depressive symptoms (both continuous and dichotomous). In cross-sectional analysis, using a cut-off to define depressive symptoms yielded an adjusted OR of 0.87 (95% confidence interval 0.84–0.91) for MDS, 0.93 (0.88–0.98) for AHEI-2010, and 0.94 (0.87–1.01) for DASH. Similar associations were observed prospectively: 0.88 (0.80–0.96) for MDS; 0.95 (0.84–1.06) for AHEI-2010; 0.90 (0.84–0.97) for DASH.
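A minimal sketch of the random-effects pooling step, assuming DerSimonian-Laird estimation of the between-cohort variance; the cohort-level odds ratios below are hypothetical placeholders, not the study's data.

import math

def pool_random_effects(ors, lcls, ucls):
    """DerSimonian-Laird random-effects pooling of odds ratios."""
    y = [math.log(o) for o in ors]                          # log-ORs
    se = [(math.log(u) - math.log(l)) / (2 * 1.96)          # SE recovered from 95% CI
          for l, u in zip(lcls, ucls)]
    w = [1 / s ** 2 for s in se]                            # inverse-variance weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)    # fixed-effect estimate
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                 # between-cohort variance
    wr = [1 / (s ** 2 + tau2) for s in se]
    y_re = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se_re = math.sqrt(1 / sum(wr))
    return (math.exp(y_re),
            math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))

# hypothetical cohort-specific ORs and 95% CIs for one dietary pattern
print(pool_random_effects([0.85, 0.90, 0.95], [0.78, 0.82, 0.85], [0.93, 0.99, 1.06]))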
Population-scale observational evidence indicates that adults following a healthy dietary pattern have fewer depressive symptoms and lower risk of developing depressive symptoms.
Studies on neighbourhood characteristics and depression show equivocal results.
This large-scale pooled analysis examines whether urbanisation, socioeconomic, physical and social neighbourhood characteristics are associated with the prevalence and severity of depression.
This cross-sectional analysis included data from eight Dutch cohort studies (n = 32 487). Prevalence of depression, defined as either a DSM-IV diagnosis of depressive disorder or a symptom-scale score indicating moderately severe depression, and continuous depression severity scores were analysed. Neighbourhood characteristics were linked using postal codes and included (a) urbanisation grade; (b) socioeconomic characteristics: socioeconomic status, home value, social security beneficiaries and non-Dutch ancestry; (c) physical characteristics: air pollution, traffic noise and availability of green space and water; and (d) social characteristics: social cohesion and safety. Multilevel regression analyses were adjusted for the individual's age, gender, educational level and income. Cohort-specific estimates were pooled using random-effects analysis.
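As a sketch of the multilevel setup, one can fit a mixed model with a random intercept per neighbourhood while adjusting for the individual-level covariates named above. Everything below, including the column names and the synthetic data, is an illustrative assumption rather than the cohorts' actual analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "neighbourhood": rng.integers(0, 100, n),     # postal-code area id
    "urbanisation": rng.integers(1, 6, n),        # grade 1 (rural) to 5 (urban)
    "green_space": rng.normal(30.0, 10.0, n),     # % green space, synthetic
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "education": rng.integers(1, 4, n),
    "income": rng.normal(30.0, 10.0, n),
})
df["severity"] = (10 + 0.3 * df["urbanisation"] - 0.05 * df["green_space"]
                  + rng.normal(0, 3, n))          # continuous depression score

# random intercept per neighbourhood captures within-neighbourhood clustering
model = smf.mixedlm("severity ~ urbanisation + green_space + age + female "
                    "+ education + income", df, groups=df["neighbourhood"])
print(model.fit().summary())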
The pooled analysis showed that higher urbanisation grade (odds ratio (OR) = 1.05, 95% CI 1.01–1.10), lower socioeconomic status (OR = 0.90, 95% CI 0.87–0.95), higher number of social security beneficiaries (OR = 1.12, 95% CI 1.06–1.19), higher percentage of non-Dutch residents (OR = 1.08, 95% CI 1.02–1.14), higher levels of air pollution (OR = 1.07, 95% CI 1.01–1.12), less green space (OR = 0.94, 95% CI 0.88–0.99) and less social safety (OR = 0.92, 95% CI 0.88–0.97) were associated with higher prevalence of depression. All four socioeconomic neighbourhood characteristics and social safety were also consistently associated with continuous depression severity scores.
This large-scale pooled analysis across eight Dutch cohort studies shows that urbanisation and various socioeconomic, physical and social neighbourhood characteristics are associated with depression, indicating that a wide range of environmental aspects may relate to poor mental health.
Flax yield can be severely reduced by weeds. The combination of limited herbicide options and the spread of herbicide-resistant weeds across the prairies has created a need for more weed control options for flax producers. The objective of this research was to evaluate the tolerance of flax to topramezone, pyroxasulfone, flumioxazin, and fluthiacet-methyl applied alone as well as mixed with currently registered herbicides. These herbicides were applied alone and in mixtures at the 1X and 2X rates and compared with three industry standards and a nontreated control. The experiment was conducted at Carman, MB, and Saskatoon, SK, in a randomized complete block design with four replications. Data were collected for crop population, crop height, yield, and thousand-seed weight. Ratings for crop damage (phytotoxicity) were also taken at three time intervals: 7 to 14, 21 to 28, and 56+ d after treatment. Crop tolerance to these herbicides varied between site-years, largely because of differences in spring moisture conditions and in soil characteristics between sites. Herbicide injury was transient, and no herbicide or herbicide combination consistently reduced crop yield. Flumioxazin was the least promising herbicide evaluated, as it caused severe crop damage (>90%) when conditions were conducive to injury. Overall, flax had excellent tolerance to fluthiacet-methyl, pyroxasulfone, and topramezone, and excellent crop safety with the pyroxasulfone + sulfentrazone mixture. However, mixing fluthiacet-methyl and topramezone with MCPA and bromoxynil, respectively, increased crop damage and would not be recommended.
We analyzed the intestinal contents of two late-glacial mastodons preserved in lake sediments in Ohio (Burning Tree mastodon) and Michigan (Heisler mastodon). A multi-proxy suite of macrofossils and microfossils provided unique insights into what these individuals had eaten just before they died and added significantly to knowledge of mastodon diets. We reconstructed the mastodons' habitats with similar multi-proxy analyses of the embedding lake sediments. Non-pollen palynomorphs, especially spores of coprophilous fungi, differentiated intestinal from environmental samples. The Burning Tree mastodon gut sample originates from the small intestine. The Heisler mastodon sample is part of the large intestine, to which humans had added clastic material to anchor parts of the carcass under water and cache the meat. Both carcasses had been dismembered, suggesting that the mastodons had been hunted or scavenged, in line with other contemporaneous mastodon finds and the timing of early human incursion into the Midwest. Both mastodons lived in mixed coniferous-deciduous late-glacial forests. They browsed tree leaves and twigs, especially Picea. They also ate sedge-swamp plants and drank the lake water. Our multi-proxy estimates of a spring/summer season of death contrast with autumn estimates derived from prior tusk analyses. We document the recovered fossil remains with photographs.
Mood disorders and adiposity are major public health challenges. Few studies have investigated the bidirectional association of weight and waist circumference (WC) change with psychological distress in middle age, while taking into account the potential U-shape of the association. The aim of this study was to examine the bidirectional association between psychological distress and categorical change in objectively measured weight and WC.
We analysed repeated measures (up to 17 522 person-observations in adjusted analyses) of psychological distress, weight and WC from the Whitehall II cohort. Participants were recruited at age 35–55, and 67% were male. Psychological distress was assessed using the General Health Questionnaire. We used random-effects regressions to model the association between weight and WC changes and psychological distress, with and without a 5-year lag period.
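The key device here is the time lag: exposure at one wave is related to change over a later interval. A minimal sketch of how such a lag can be constructed in long-format repeated-measures data, with synthetic values and hypothetical column names:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_id, n_wave = 500, 4                                  # 4 waves, ~5 years apart
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_id), n_wave),
    "wave": np.tile(np.arange(n_wave), n_id),
    "distress": rng.integers(0, 2, n_id * n_wave),     # GHQ caseness, synthetic
    "weight_change_pct": rng.normal(0, 3, n_id * n_wave),
})
df = df.sort_values(["id", "wave"])
# 5-year lag: distress measured one wave before the weight-change interval
df["distress_lag"] = df.groupby("id")["distress"].shift(1)
m = smf.mixedlm("weight_change_pct ~ distress_lag",
                df.dropna(subset=["distress_lag"]), groups="id")
print(m.fit().summary())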
Psychological distress was associated with weight and WC gain over the subsequent 5 years but not over the second 5-year period. Weight gain and loss were associated with increased odds of incident psychological distress in models with and without a time-lag [odds ratio (OR) for incident psychological distress after a 5-year time-lag: loss 1.20, 95% confidence interval (CI) 1.00–1.43; gain >5% 1.20, 95% CI 1.02–1.40]. WC changes were associated with psychological distress only in models without a time-lag (OR for incident psychological distress: loss 1.29, 95% CI 1.02–1.64; gain >5% 1.33, 95% CI 1.11–1.58).
Weight gain and loss increase the odds of psychological distress, compared with stable weight, over the subsequent 10 years. In contrast, the association between psychological distress and subsequent weight and WC change was limited to the first 5 years of follow-up.
Despite children’s unique vulnerability, clinical guidance and resources are lacking around the use of radiation medical countermeasures (MCMs) available commercially and in the Strategic National Stockpile to support immediate dispensing to pediatric populations. To better understand the current capabilities and shortfalls, a literature review and gap analysis were performed.
A comprehensive review of the medical literature, Food and Drug Administration (FDA)-approved labeling, FDA summary reviews, medical references, and educational resources related to pediatric radiation MCMs was performed from May 2016 to February 2017.
Fifteen gaps related to the use of radiation MCMs in children were identified. The need to address these gaps was prioritized based upon the potential to decrease morbidity and mortality, improve clinical management, strengthen caregiver education, and increase the relevant evidence base.
Key gaps exist in information to support the safe and successful use of MCMs in children during radiation emergencies; failure to address these gaps could have negative consequences for families and communities. There is a clear need for pediatric-specific guidance to ensure clinicians can appropriately identify, triage, and treat children who have been exposed to radiation, and for resources to ensure accurate communication about the safety and utility of radiation MCMs for children. (Disaster Med Public Health Preparedness. 2019;13:639-646)
We present a multi-frequency study of the intermediate spiral SAB(r)bc-type galaxy NGC 6744, using available data from the Chandra X-ray telescope, radio continuum data from the Australia Telescope Compact Array and the Murchison Widefield Array, and Wide-field Infrared Survey Explorer infrared observations. We identify 117 X-ray sources and 280 radio sources. Of these, we find nine sources in common between the X-ray and radio catalogues, one of which is a faint central black hole with a bolometric radio luminosity similar to that of the Milky Way's central black hole. We classify 5 objects as supernova remnant (SNR) candidates, 2 objects as likely SNRs, 17 as H II regions and 1 source as an AGN; the remaining 255 radio sources are categorised as background objects, and one X-ray source is classified as a foreground star. We find the star-formation rate (SFR) of NGC 6744 to be in the range 2.8–4.7 M⊙ yr⁻¹, signifying that the galaxy is still actively forming stars. The specific SFR of NGC 6744 is greater than that of late-type spirals such as the Milky Way, but considerably less than that of a typical starburst galaxy.
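For context, a common way to turn a radio continuum measurement into an SFR is a luminosity calibration such as the Murphy et al. (2011) relation at 1.4 GHz. The flux density and distance below are illustrative assumptions, not values measured in the paper:

import math

def sfr_from_radio(flux_jy, dist_mpc):
    """SFR from a 1.4 GHz flux density via the Murphy et al. (2011) calibration."""
    d_cm = dist_mpc * 3.086e24                        # Mpc -> cm
    lum = 4 * math.pi * d_cm ** 2 * flux_jy * 1e-23   # erg s^-1 Hz^-1
    return 6.35e-29 * lum                             # M_sun yr^-1

# hypothetical inputs: ~0.4 Jy total flux at an assumed distance of ~10 Mpc
print(f"SFR ~ {sfr_from_radio(0.4, 10.0):.1f} M_sun/yr")

For these made-up inputs the result is about 3 M⊙ yr⁻¹, which happens to fall inside the 2.8–4.7 M⊙ yr⁻¹ range quoted above.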
Preparing and responding to the needs of children during public health emergencies continues to be challenging. The purpose of this study was to assess the usefulness of a tabletop exercise in initiating pediatric preparedness strategies and assessing the impact of the exercise on participants’ understanding of and confidence in their roles during pediatric public health emergencies.
A tabletop exercise was developed to simulate a public health emergency scenario involving smallpox in a child, with subsequent spread to multiple states. During the exercise, participants discussed and developed communication, collaboration, and medical countermeasure strategies to enhance pediatric public health preparedness. Exercise evaluation was designed to assess participants’ knowledge gained and level of confidence surrounding pediatric public health emergencies.
In total, 22 participants identified over 100 communication and collaboration strategies to promote pediatric public health preparedness, and they rated the partnership between pediatricians and public health officials as the most beneficial aspect of the exercise. Participants' knowledge and level of confidence surrounding a pediatric public health emergency increased after the exercise.
The tabletop exercise was effective in identifying strategies to improve pediatric public health preparedness as well as enhancing participants’ knowledge and confidence surrounding a potential pediatric public health emergency. (Disaster Med Public Health Preparedness. 2018;12:582–586)
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed into years of education and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years more education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Paternal education was somewhat higher for fathers of DZ twins than for fathers of MZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and in 2000 or later (0.11 years, 95% CI [0.00, 0.22]). The results show that years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socioeconomic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
A 141 m ice core was recovered from Combatant Col (51.385° N, 125.258° W; 3000 m a.s.l.), Mount Waddington, Coast Mountains, British Columbia, Canada. Records of black carbon, dust, lead and water stable isotopes demonstrate that unambiguous seasonality is preserved throughout the core, despite summer surface snowmelt and temperate ice. High accumulation rates at the site (>4 m ice eq. a⁻¹) limit modification of annual stratigraphy by percolation of surface meltwater. The ice-core record spans the period 1973–2010. An annually averaged time series of lead concentrations from the core correlates well with historical records of lead emission from North America, and with ice-core records of lead from the Greenland ice sheet. The depth-age scale for the ice core provides sufficient constraint on the vertical strain to allow estimation of the age of the ice at bedrock. Total ice thickness at Combatant Col is ~250 m; an ice core to bedrock would likely contain ice in excess of 200 years in age. Accumulation at Combatant Col is significantly correlated with both regional precipitation and large-scale geopotential height anomalies.
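The basal-age estimate can be reproduced to first order with the classic Nye approximation, in which the age of ice at height h above the bed is (H/a) ln(H/h) for ice thickness H and accumulation rate a. A quick check using the figures quoted above (H ≈ 250 m, a ≈ 4 m ice eq. a⁻¹); the 10 m reference height is an illustrative choice:

import math

H = 250.0   # total ice thickness at Combatant Col (m)
a = 4.0     # accumulation rate (m ice eq. per year)

def nye_age(h_above_bed):
    # Nye model: uniform vertical strain, age = (H/a) * ln(H / h)
    return (H / a) * math.log(H / h_above_bed)

print(f"{nye_age(10.0):.0f} years")  # ~201 years for ice 10 m above the bed

This simple model is consistent with the statement that a core to bedrock would likely contain ice in excess of 200 years in age; the exact figure depends on the vertical strain profile that the depth-age scale constrains.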
On 1 December 2011 the West Antarctic Ice Sheet (WAIS) Divide ice-core project reached its final depth of 3405 m. The WAIS Divide ice core is not only the longest US ice core to date but also the highest-quality deep ice core, including ice from the brittle ice zone, that the US has ever recovered. The methods used at WAIS Divide to handle and log the drilled ice, the procedures used to safely retrograde the ice back to the US National Ice Core Laboratory (NICL), and the methods used to process and sample the ice at the NICL are described and discussed.
Turfgrass managers currently have few readily available means of evaluating herbicide resistance in annual bluegrass during the growing season. Research was conducted to determine whether agar-based diagnostic tests developed for agronomic weeds could be used to reliably confirm herbicide resistance in annual bluegrass harvested from golf course turf. Annual bluegrass phenotypes with target-site resistance to acetolactate synthase (ALS; R3, R7)-, enolpyruvylshikimate-3-phosphate synthase (EPSPS; R5)-, and photosystem II (PSII; R3, R4)-inhibiting herbicides were included in experiments along with a herbicide-susceptible phenotype (S). Single-tiller plants were washed free of soil and transplanted into autoclavable polycarbonate plant culture boxes filled with plant tissue culture agar amended with Murashige and Skoog medium and trifloxysulfuron (6.25, 12.5, 25, 50, 75, 100, or 150 μM), glyphosate (0, 6, 12, 25, 50, 100, 200, or 400 μM), or simazine (0, 6, 12, 25, 50, 100, 200, or 400 μM). Mortality in agar was assessed 7 to 10 days after treatment (depending on herbicide) and compared with responses observed after treating individual plants of each phenotype with trifloxysulfuron (28 g ai ha⁻¹), glyphosate (1120 g ae ha⁻¹), or simazine (1120 g ai ha⁻¹) in an enclosed spray chamber. Fisher's exact test (α = 0.05) determined that mortality in agar with 12.5 μM trifloxysulfuron and 100 μM glyphosate was not significantly different from that observed after treating whole plants via traditional spray application. Mortality at all concentrations of simazine in agar was significantly different from that observed after treating resistant and susceptible phenotypes via traditional spray application. Our findings indicate that an agar-based diagnostic assay can be used to detect annual bluegrass resistance to ALS- or EPSPS-inhibiting herbicides in less than 10 days; however, additional research is needed to refine this assay for use with PSII-inhibiting herbicides.
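The agreement test here amounts to a 2x2 Fisher's exact test comparing mortality counts between the agar assay and the spray chamber. A minimal sketch with hypothetical counts, chosen only to show the mechanics:

from scipy.stats import fisher_exact

# rows: assay (agar at 12.5 uM trifloxysulfuron, spray at 28 g ai/ha)
# columns: plants dead, plants alive -- counts below are hypothetical
table = [[2, 8],
         [1, 9]]
odds_ratio, p_value = fisher_exact(table)
print(p_value, p_value > 0.05)  # p > 0.05 -> assays not significantly different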
To assess the burden of bloodstream infections (BSIs) among pediatric hematology-oncology (PHO) inpatients, to propose a comprehensive, all-BSI tracking approach, and to discuss how such an approach helps better inform within-center and across-center differences in CLABSI rates.
Prospective cohort study
US multicenter, quality-improvement, BSI prevention network
PHO centers across the United States that agreed to follow a standardized central-line maintenance care bundle and to track all BSI events and central-line days every month.
Infections were categorized as CLABSI (stratified as mucosal barrier injury–related, laboratory-confirmed BSI [MBI-LCBI] versus non-MBI-LCBI) or secondary BSI, using National Healthcare Safety Network (NHSN) definitions. Single positive blood cultures (SPBCs) with NHSN-defined common commensals were also tracked.
Between 2013 and 2015, 34 PHO centers reported 1,110 BSIs. Among them, 708 (63.8%) were CLABSIs, 170 (15.3%) were secondary BSIs, and 232 (20.9%) were SPBCs. Most SPBCs (75%) occurred in patients with profound neutropenia; 22% of SPBCs were viridans group streptococci. Among the CLABSIs, 51% were MBI-LCBI. Excluding SPBCs, CLABSI rates were higher (88% vs 77%) and secondary BSI rates were lower (12% vs 23%) after the NHSN updated the definition of secondary BSI (P<.001). Preliminary analyses showed across-center differences in CLABSI versus secondary BSI and between SPBC and CLABSI versus non-CLABSI rates.
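The category percentages follow directly from the reported counts, and the standard NHSN surveillance metric normalizes CLABSI counts by central-line days. A quick check; the line-day denominator below is a hypothetical placeholder, since the network's actual denominator is not given here:

total = 1110
clabsi, secondary, spbc = 708, 170, 232
for name, n in [("CLABSI", clabsi), ("secondary BSI", secondary), ("SPBC", spbc)]:
    print(f"{name}: {100 * n / total:.1f}%")      # 63.8%, 15.3%, 20.9%

line_days = 250_000                               # hypothetical central-line days
print(f"CLABSI rate: {1000 * clabsi / line_days:.2f} per 1,000 line-days")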
Tracking all BSIs, not just CLABSIs in PHO patients, is a patient-centered, clinically relevant approach that could help better assess across-center and within-center differences in infection rates, including CLABSI. This approach enables informed decision making by healthcare providers, payors, and the public.