Previous research using the critical period for weed control (CPWC) has shown that high-yielding cotton crops are very sensitive to competition from grasses and large broadleaf weeds, but the CPWC has not been defined for smaller broadleaf weeds in Australian cotton. Field studies were conducted over five seasons from 2003 to 2015 to determine the CPWC for smaller broadleaf weeds, using mungbean as a mimic weed. Mungbean was planted at densities of 1, 3, 6, 15, 30, and 60 plants m−2 with or after cotton emergence, and added and removed at approximately 0, 150, 300, 450, 600, 750, and 900 degree days of crop growth (GDD). Mungbean competed strongly with cotton, with season-long interference with 60 mungbean plants m−2 resulting in an 84% reduction in cotton yield. A dynamic CPWC function was developed for densities of one to 60 mungbean plants m−2 using extended Gompertz and exponential curves including weed density as a covariate. Using a 1% yield-loss threshold, the CPWC defined by these curves extended for the full growing season of the crop at all weed densities. The minimum yield loss from a single weed control input was 35% at the highest weed density of 60 mungbean plants m−2. The relationship for the critical time of weed removal was further improved by substituting weed biomass for weed density in the relationship.
Field studies were conducted over five seasons from 2004 to 2015 to determine the critical period for weed control (CPWC) in high-yielding, irrigated cotton using a competitive mimic grass weed, Japanese millet. Japanese millet was planted with or after cotton emergence at densities of 10, 20, 50, 100, and 200 plants m−2. Japanese millet was added and removed at approximately 0, 150, 300, 450, 600, 750, and 900 degree days of crop growth (GDD). Data were combined over years. Japanese millet competed strongly with cotton, with season-long interference resulting in an 84% reduction in cotton yield with 200 Japanese millet plants m−2. The data were fit to extended Gompertz and logistic curves including weed density as a covariate, allowing a dynamic CPWC to be estimated for densities of 10 to 200 Japanese millet plants m−2. Using a 1% yield-loss threshold, the CPWC commenced at 65 GDD, corresponding to 0 to 7 d after crop emergence (DAE), and ended at 803 GDD, 76 to 98 DAE with 10 Japanese millet plants m−2, and 975 GDD, 90 to 115 DAE with 200 Japanese millet plants m−2. These results highlight the high level of weed control required throughout the cropping season in high-yielding cotton to ensure crop losses do not exceed the cost of weed control.
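The CPWC framework used in the two abstracts above pairs one curve for yield after an early weed-free period (Gompertz) with one for yield under early-season interference (logistic or exponential), then solves each for the time where loss equals a threshold. The sketch below illustrates that calculation; the curve forms are standard, but every parameter value is hypothetical, not a fitted value from these studies.

```python
import numpy as np
from scipy.optimize import brentq

def gompertz_weed_free(t, a=100.0, b=1.2, k=0.004):
    """Relative yield (%) vs length of the initial weed-free period (GDD)."""
    return a * np.exp(-b * np.exp(-k * t))

def logistic_interference(t, a=100.0, c=0.006, t50=500.0):
    """Relative yield (%) vs length of early-season weed interference (GDD)."""
    return a / (1.0 + np.exp(c * (t - t50)))

def critical_period(threshold=5.0, lo=0.0, hi=1500.0):
    """Solve both curves for the GDD where yield loss equals `threshold` %.
    The CPWC opens where interference first causes that loss and closes
    where a weed-free period still leaves that loss."""
    target = 100.0 - threshold
    start = brentq(lambda t: logistic_interference(t) - target, lo, hi)
    end = brentq(lambda t: gompertz_weed_free(t) - target, lo, hi)
    return start, end

start, end = critical_period(threshold=5.0)
print(f"CPWC: {start:.0f} to {end:.0f} GDD")
```

Weed density enters the published models as a covariate on these curve parameters, which is what makes the estimated CPWC dynamic across densities.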
Why do hosts vary so much in parasite burden, how does this variation translate to variation in host demographic rates and parasite transmission, and how does varied transmission intensity impact selection upon immune defence of individuals? The theoretical foundations of disease ecology provide predictions for the answers to these questions, yet testing such predictions with empirical data poses many challenges. We show how the long-term ecological and genetic study of the unmanaged Soay sheep of St Kilda has addressed fundamental questions in disease ecology, with longitudinal data on parasite burden, immune defence, condition, survival, and fecundity of >10,000 individuals. The rich individual-scale data are complemented by >30 years of data on sheep population dynamics and genetic diversity as well as parasite dynamics and diversity. Population-scale work has documented the range of parasite species present and the contribution of the most prevalent and virulent parasites to regulating sheep dynamics. Individual-scale work has identified drivers of variation in parasite burden and tested hypotheses about costs and benefits of defence in a quest to determine how natural selection has shaped immune function of the sheep.
Iraq and Afghanistan Veterans with posttraumatic stress disorder (PTSD) and traumatic brain injury (TBI) history have high rates of performance validity test (PVT) failure. The study aimed to determine whether those with scores in the invalid versus valid range on PVTs show similar benefit from psychotherapy and if psychotherapy improves PVT performance.
Veterans (N = 100) with PTSD, mild-to-moderate TBI history, and cognitive complaints underwent neuropsychological testing at baseline, post-treatment, and 3-month post-treatment. Veterans were randomly assigned to cognitive processing therapy (CPT) or a novel hybrid intervention integrating CPT with TBI psychoeducation and cognitive rehabilitation strategies from Cognitive Symptom Management and Rehabilitation Therapy (CogSMART). Performance below standard cutoffs on any PVT trial across three different PVT measures was considered invalid (PVT-Fail), whereas performance above cutoffs on all measures was considered valid (PVT-Pass).
Although both PVT groups exhibited clinically significant improvement in PTSD symptoms, the PVT-Pass group demonstrated greater symptom reduction than the PVT-Fail group. Measures of post-concussive and depressive symptoms improved to a similar degree across groups. Treatment condition did not moderate these results. Rate of valid test performance increased from baseline to follow-up across conditions, with a stronger effect in the SMART-CPT compared to CPT condition.
Both PVT groups experienced improved psychological symptoms following treatment. Veterans who failed PVTs at baseline demonstrated better test engagement following treatment, resulting in higher rates of valid PVTs at follow-up. Veterans with invalid PVTs should be enrolled in trauma-focused treatment and may benefit from neuropsychological assessment after, rather than before, treatment.
Field studies were conducted over six seasons to determine the critical period for weed control (CPWC) in high-yielding cotton, using common sunflower as a mimic weed. Common sunflower was planted with or after cotton emergence at densities of 1, 2, 5, 10, 20, and 50 plants m−2. Common sunflower was added and removed at approximately 0, 150, 300, 450, 600, 750, and 900 growing degree days (GDD) after planting. Season-long interference resulted in no harvestable cotton at densities of five or more common sunflower plants m−2. High levels of intraspecific and interspecific competition occurred at the highest weed densities, with increases in weed biomass and reductions in crop yield not proportional to the changes in weed density. Using a 5% yield-loss threshold, the CPWC extended from 43 to 615 GDD, and 20 to 1,512 GDD for one and 50 common sunflower plants m−2, respectively. These results highlight the high level of weed control required in high-yielding cotton to ensure crop losses do not exceed the cost of control.
Using existing data from clinical registries to support clinical trials and other prospective studies has the potential to improve research efficiency. However, little has been reported about staff experiences and lessons learned from implementation of this method in pediatric cardiology.
We describe the process of using existing registry data in the Pediatric Heart Network Residual Lesion Score Study, report stakeholders’ perspectives, and provide recommendations to guide future studies using this methodology.
The Residual Lesion Score Study, a 17-site prospective, observational study, piloted the use of existing local surgical registry data (collected for submission to the Society of Thoracic Surgeons-Congenital Heart Surgery Database) to supplement manual data collection. A survey regarding processes and perceptions was administered to study site and data coordinating center staff.
Survey response rate was 98% (54/55). Overall, 57% perceived that using registry data saved research staff time in the current study, and 74% perceived that it would save time in future studies; 55% noted significant upfront time in developing a methodology for extracting registry data. Survey recommendations included simplifying data extraction processes and tailoring to the needs of the study, understanding registry characteristics to maximise data quality and security, and involving all stakeholders in design and implementation processes.
Use of existing registry data was perceived to save time and promote efficiency. Consideration must be given to the upfront investment of time and resources needed. Ongoing efforts focussed on automating and centralising data management may aid in further optimising this methodology for future studies.
Crop plants have been used as mimic weeds to substitute for real weeds in competition studies. These mimic weeds have the advantages of availability of seed, uniform germination and growth, and the potential to confer better experimental controllability and repeatability. However, the underlying assumption that the competitive effects of mimic weeds are similar to real weeds has not been tested. We compared a range of morphological traits (plant height, node and leaf number, leaf area, leaf size, and dry weight) between the mimic weeds and real weeds: Japanese millet vs. junglerice, mungbean vs. bladder ketmia, and common sunflower vs. fierce thornapple. The impact of these mimic and real weeds on cotton was also assessed. There were similarities and differences between the mimic and real weeds, but impact on cotton lint yield was most closely associated with weed height and dry weight at mid-season. Mimic weeds may be satisfactorily substituted for real weeds in competition experiments where seasonal and environmental conditions are not limiting, such as with fully irrigated cotton, provided the plants have similar dry weight and height at mid-season. Alternatively, one can account for the differences in dry weight and height. We define here a generalized relationship estimating the yield loss of high-yielding, irrigated cotton from weed competition over a range of weed dry weights and heights, allowing extrapolation from the results with mimic weeds to the competitive effects of a range of weeds.
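A relationship of the kind described, predicting cotton yield loss from mid-season weed dry weight and height, can be sketched as an ordinary least-squares fit. The data points and resulting coefficients below are hypothetical illustrations only; the study's fitted generalized relationship is not reproduced here.

```python
import numpy as np

# Hypothetical mid-season weed measurements: columns are
# dry weight (g m-2) and height (cm); response is yield loss (%).
X_raw = np.array([[50, 40], [120, 70], [300, 110], [600, 150], [900, 180]])
y = np.array([5.0, 14.0, 33.0, 58.0, 80.0])

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(X_raw)), X_raw])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_loss(dry_weight, height):
    """Predicted yield loss (%) for a weed of given mid-season size."""
    return coef[0] + coef[1] * dry_weight + coef[2] * height

print(f"predicted loss: {predict_loss(400.0, 120.0):.1f}%")
```

Such a fit is what allows extrapolation from mimic weeds to real weeds of comparable mid-season size, as the abstract proposes.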
Introduction: Intravenous insertion (IVI) is identified by children as extremely painful and the resultant distress can have lasting negative consequences. There is an urgent need to effectively manage such procedures. Our primary objective was to compare the pain and distress of IVI with the addition of humanoid robot-based distraction to standard care, versus standard care alone. Methods: This two-armed randomized controlled trial (RCT) was conducted from April 2017 to May 2018 at the Stollery Children's Hospital emergency department (ED). Children aged 6 to 11 years who required IVI were included. Exclusion criteria included hearing or visual impairments, neurocognitive delays, sensory impairment to pain, previous enrolment, and discretion of the ED clinical staff. Primary outcomes were measured using the Observational Scale of Behavioural Distress-Revised (OSBD-R) (distress) and the Faces Pain Scale-Revised (FPS-R) (pain). A total of 426 pediatric patients were screened and 340 were excluded. Results: We recruited 86 children, of which 55% (47/86) were male; 9% (7/82) were premature at birth; 82% (67/82) had a previous ED visit; 30% (25/82) required previous hospitalization; 78% (64/82) had previous IV placement and 96% (78/81) received topical anesthesia. The mean total OSBD-R score was 1.49 ± 2.36 (standard care) compared to 0.78 ± 1.32 (robot group) (p = 0.047). The median FPS-R during the IV procedure was 4 (IQR 2,6) in the standard care group alone, compared to 2 (IQR 0,4) with the addition of humanoid robot-based distraction (p = 0.10). Change in parental state anxiety pre-procedure versus post-procedure was not significantly different between groups (p = 0.49). Parental satisfaction with the IV start was 93% (39/42) in the robot arm compared to 74% (29/39) in the standard care arm (p = 0.03). 
Parents were also more satisfied with management of their child's pain in the robot group (95% very satisfied) compared with standard care (72% very satisfied) (p = 0.002). Conclusion: A statistically significant reduction in distress was observed with the addition of robot-based distraction to standard care. Humanoid robot-based distraction therapy reduces distress and, to a lesser extent, pain in children undergoing IVI in the ED. Further trials are required to confirm utility in other age groups and settings.
This paper describes a model of electron energization and cyclotron-maser emission applicable to astrophysical magnetized collisionless shocks. It is motivated by the work of Begelman, Ergun and Rees [Astrophys. J. 625, 51 (2005)] who argued that the cyclotron-maser instability occurs in localized magnetized collisionless shocks such as those expected in blazar jets. We report on recent research carried out to investigate electron acceleration at collisionless shocks and maser radiation associated with the accelerated electrons. We describe how electrons accelerated by lower-hybrid waves at collisionless shocks generate cyclotron-maser radiation when the accelerated electrons move into regions of stronger magnetic fields. The electrons are accelerated along the magnetic field and magnetically compressed leading to the formation of an electron velocity distribution having a horseshoe shape due to conservation of the electron magnetic moment. Under certain conditions the horseshoe electron velocity distribution function is unstable to the cyclotron-maser instability [Bingham and Cairns, Phys. Plasmas 7, 3089 (2000); Melrose, Rev. Mod. Plasma Phys. 1, 5 (2017)].
A new GIS-based screening tool to assess threats to shallow groundwater quality has been trialled in Glasgow, UK. The GRoundwater And Soil Pollutants (GRASP) tool is based on a British Standard method for assessing the threat from potential leaching of metal pollutants in unsaturated soil/superficial materials to shallow groundwater, using data on soil and Quaternary deposit properties, climate and depth to groundwater. GRASP breaks new ground by also incorporating a new Glasgow-wide soil chemistry dataset. GRASP considers eight metals, including chromium, lead and nickel at 1622 soil sample locations. The final output is a map to aid urban management, which highlights areas where shallow groundwater quality may be at risk from current and future surface pollutants. The tool indicated that 13% of soil sample sites in Glasgow present a very high potential threat to groundwater quality, due largely to shallow groundwater depths and high soil metal concentrations. Initial attempts to validate GRASP revealed partial spatial coincidence between the GRASP threat ranks (low, moderate, high and very high) and groundwater chemistry, with statistical correlation between areas of high soil and groundwater metal concentrations for both Cr and Cu (r2>0.152; P<0.05). Validation was hampered by a lack of, and inconsistency in, existing groundwater chemistry data. To address this, standardised subsurface data collection networks have been trialled recently in Glasgow. It is recommended that, once available, new groundwater depth and chemistry information from these networks is used to validate the GRASP model further.
The role that vitamin D plays in pulmonary function remains uncertain. Epidemiological studies reported mixed findings for serum 25-hydroxyvitamin D (25(OH)D)–pulmonary function association. We conducted the largest cross-sectional meta-analysis of the 25(OH)D–pulmonary function association to date, based on nine European ancestry (EA) cohorts (n 22 838) and five African ancestry (AA) cohorts (n 4290) in the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium. Data were analysed using linear models by cohort and ancestry. Effect modification by smoking status (current/former/never) was tested. Results were combined using fixed-effects meta-analysis. Mean serum 25(OH)D was 68 (sd 29) nmol/l for EA and 49 (sd 21) nmol/l for AA. For each 1 nmol/l higher 25(OH)D, forced expiratory volume in the 1st second (FEV1) was higher by 1·1 ml in EA (95 % CI 0·9, 1·3; P<0·0001) and 1·8 ml (95 % CI 1·1, 2·5; P<0·0001) in AA (P for race difference=0·06), and forced vital capacity (FVC) was higher by 1·3 ml in EA (95 % CI 1·0, 1·6; P<0·0001) and 1·5 ml (95 % CI 0·8, 2·3; P=0·0001) in AA (P for race difference=0·56). Among EA, the 25(OH)D–FVC association was stronger in smokers: per 1 nmol/l higher 25(OH)D, FVC was higher by 1·7 ml (95 % CI 1·1, 2·3) for current smokers and 1·7 ml (95 % CI 1·2, 2·1) for former smokers, compared with 0·8 ml (95 % CI 0·4, 1·2) for never smokers. In summary, the 25(OH)D associations with FEV1 and FVC were positive in both ancestries. In EA, a stronger association was observed for smokers compared with never smokers, which supports the importance of vitamin D in vulnerable populations.
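The combination step described above is a standard inverse-variance-weighted fixed-effects meta-analysis of per-cohort regression slopes. A minimal sketch, using hypothetical per-cohort estimates rather than the consortium data:

```python
import numpy as np

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects pooling.
    Returns the pooled estimate and its standard error."""
    betas = np.asarray(betas, dtype=float)
    weights = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(weights * betas) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical per-cohort slopes (ml FEV1 per nmol/l 25(OH)D) and SEs:
betas = [1.0, 1.2, 0.9, 1.3]
ses = [0.2, 0.3, 0.25, 0.4]
est, se = fixed_effects_meta(betas, ses)
print(f"pooled = {est:.2f} ml, 95% CI ({est - 1.96 * se:.2f}, {est + 1.96 * se:.2f})")
```

Precision-weighting means cohorts with smaller standard errors dominate the pooled slope, and the pooled standard error is always smaller than any single cohort's.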
Little is known about what motivates people to enroll in research registries. The purpose of this study is to identify facilitators of registry enrollment among diverse older adults.
Participants completed an 18-item Research Interest Assessment Tool. We used logistic regression analyses to examine responses across participants and by race and gender.
Participants (N=374) were 58% black, 76% women, with a mean age of 68.2 years. All participants were motivated to maintain their memory while aging. Facilitators of registry enrollment varied by both race and gender. Notably, blacks (estimate=0.71, p<0.0001) and women (estimate=0.32, p=0.03) were more willing to enroll in the registry due to home visits compared with whites and men, respectively.
Researchers must consider participant desire for maintaining memory while aging and home visits when designing culturally tailored registries.
Arachidonic acid (ARA) and DHA, supplied primarily from the mother, are required for early development of the central nervous system. Thus, variations in maternal ARA or DHA status may modify neurocognitive development. We investigated the relationship between maternal ARA and DHA status in early (11·7 weeks) or late (34·5 weeks) pregnancy on neurocognitive function at the age of 4 years or 6–7 years in 724 mother–child pairs from the Southampton Women’s Survey cohort. Plasma phosphatidylcholine fatty acid composition was measured in early and late pregnancy. ARA concentration in early pregnancy predicted 13 % of the variation in ARA concentration in late pregnancy (β=0·36, P<0·001). DHA concentration in early pregnancy predicted 21 % of the variation in DHA concentration in late pregnancy (β=0·46, P<0·001). Children’s cognitive function at the age of 4 years was assessed by the Wechsler Preschool and Primary Scale of Intelligence and at the age of 6–7 years by the Wechsler Abbreviated Scale of Intelligence. Executive function at the age of 6–7 years was assessed using elements of the Cambridge Neuropsychological Test Automated Battery. Neither DHA nor ARA concentrations in early or late pregnancy were associated significantly with neurocognitive function in children at the age of 4 years or the age of 6–7 years. These findings suggest that ARA and DHA status during pregnancy in the range found in this cohort are unlikely to have major influences on neurocognitive function in healthy children.
Introduction: Sex-specific diagnostic cutoffs may improve the test characteristics of high-sensitivity troponin assays for the diagnosis of myocardial infarction (MI). Sex-specific cutoffs for ruling in MI improve the sensitivity of the assay for MI among women, and improve the specificity of diagnosis among men. We hypothesized that the use of sex-specific high-sensitivity troponin T (hsTnT) cutoffs for ruling out MI at the time of ED arrival would improve the classification efficiency of the assay by enabling more patients to have MI ruled out at the time of ED arrival while maintaining diagnostic sensitivity. The objective of this study was to quantify the test characteristics of sex-specific cutoffs of an hsTnT assay for acute myocardial infarction (AMI) when performed at ED arrival in patients with chest pain. Methods: This retrospective study included consecutive ED patients with suspected cardiac chest pain evaluated in four urban EDs, excluding those with ST-elevation AMI, cardiac arrest or abnormal kidney function. The primary outcome was AMI at 7 days. Secondary outcomes included major adverse cardiac events (MACE: all-cause mortality, AMI and revascularization) and the individual MACE components. We quantified test characteristics (sensitivity, negative predictive value, likelihood ratios and proportion of patients ruled out) for multiple combinations of sex-specific rule-out cutoffs. We calculated net reclassification improvement compared to universal rule-out cutoffs of 5 ng/L (the assay's limit of detection) and 6 ng/L (the FDA-approved limit of quantitation for US laboratories). Results: 7130 patients, including 3931 men and 3199 women, were included. The 7-day incidence of AMI was 7.38% among men and 3.78% among women. Universal cutoffs of 5 and 6 ng/L ruled out AMI with 99.7% sensitivity in 33.6% and 42.2% of patients, respectively.
The best-performing combination of sex-specific cutoffs (8 ng/L for men and 6 ng/L for women) ruled out AMI with 98.7% sensitivity in 51.9% of patients. Conclusion: Sex-specific hsTnT cutoffs for ruling out AMI at ED arrival may achieve substantial improvement in classification performance, enabling more patients to be ruled out at ED arrival, while maintaining acceptable diagnostic sensitivity for AMI. Universal and sex-specific rule-out cutoffs differ by only small changes in hsTnT concentration. Therefore, these findings should be confirmed in other datasets.
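The rule-out test characteristics quantified in these troponin abstracts (sensitivity, negative predictive value, proportion of patients ruled out) all follow from a simple 2×2 tabulation of AMI status against the cutoff. A sketch with hypothetical counts, chosen only to resemble rather than reproduce the reported figures:

```python
def rule_out_characteristics(tp, fn, tn, fp):
    """Rule-out metrics from a 2x2 table. Conventions:
    tp = AMI, at/above cutoff; fn = AMI, below cutoff (missed);
    tn = no AMI, below cutoff (ruled out); fp = no AMI, at/above cutoff."""
    sensitivity = tp / (tp + fn)
    npv = tn / (tn + fn)              # negative predictive value
    ruled_out = (tn + fn) / (tp + fn + tn + fp)
    return sensitivity, npv, ruled_out

# Hypothetical counts for a low rule-out cutoff at ED arrival:
sens, npv, frac = rule_out_characteristics(tp=248, fn=2, tn=2400, fp=4480)
print(f"sensitivity={sens:.1%}, NPV={npv:.2%}, ruled out={frac:.1%}")
```

Lowering the cutoff trades the ruled-out fraction for sensitivity, which is the classification-efficiency trade-off the sex-specific and eGFR-adjusted cutoffs aim to improve.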
Introduction: Patients with chronic kidney disease (CKD) are at high risk of cardiovascular events, and have worse outcomes following acute myocardial infarction (AMI). Cardiac troponin is often elevated in CKD, making the diagnosis of AMI challenging in this population. We sought to quantify test characteristics for AMI of a high-sensitivity troponin T (hsTnT) assay performed at emergency department (ED) arrival in CKD patients with chest pain, and to derive rule-out cutoffs specific to patient subgroups stratified by estimated glomerular filtration rate (eGFR). We also quantified the sensitivity and classification performance of the assay's limit of detection (5 ng/L) and the FDA-approved limit of quantitation (6 ng/L) for ruling out AMI at ED arrival. Methods: Consecutive patients in four urban EDs from the 2013 calendar year with suspected cardiac chest pain who had a Roche Elecsys hsTnT assay performed on arrival were included. This analysis was restricted to patients with an eGFR <60 ml/min/1.73m2. The primary outcome was 7-day AMI. Secondary outcomes included major adverse cardiac events (death, AMI and revascularization). Test characteristics were calculated and ROC curves were generated for eGFR subgroups. Results: 1416 patients were included. 7-day AMI incidence was 10.1%. 73% of patients had an initial hsTnT concentration greater than the assay's 99th percentile (14 ng/L). Currently accepted cutoffs to rule out MI at ED arrival (5 ng/L and 6 ng/L) had 100% sensitivity for AMI, but no patients with an eGFR less than 30 ml/min/1.73m2 had hsTnT concentrations below these thresholds. We derived eGFR-adjusted cutoffs to rule out MI with sensitivity >98% at ED arrival, which were able to rule out 6-42% of patients, depending on eGFR category. The proportion of patients able to be accurately ruled in with a single hsTnT assay was substantially lower among patients with an eGFR <30 ml/min/1.73m2 (6-20% vs 25-43%).
We also derived eGFR-adjusted cutoffs to rule-in AMI with specificity >90%, which accurately ruled-in up to 18% of patients. Conclusion: Cutoffs achieving acceptable diagnostic performance for AMI using single hsTnT sampling on ED arrival may have limited clinical utility, particularly among patients with very low eGFR. The ideal diagnostic strategy for AMI in patients with CKD likely involves serial high-sensitivity troponin testing with diagnostic thresholds customized to different eGFR categories.
The Pediatric Heart Network designed a career development award to train the next generation of clinician scientists in paediatric-cardiology-related research, a historically underfunded area. We sought to identify the strengths/weaknesses of the programme and describe the scholars’ academic achievements and the network’s return on investment.
Survey questions designed to evaluate the programme were sent to applicants – 13 funded and 19 unfunded applicants – and 20 mentors and/or principal investigators. Response distributions were calculated. χ2 tests of association assessed differences in ratings of the application/selection processes among funded scholars, unfunded applicants, and mentors/principal investigators. Scholars reported post-funding academic achievements.
Survey response rates were 88% for applicants and 100% for mentor/principal investigators. Clarity and fairness of the review were rated as “clear/fair” or “very clear/very fair” by 98% of respondents, but the responses varied among funded scholars, unfunded applicants, and mentors/principal investigators (clarity χ2=10.85, p=0.03; fairness χ2=16.97, p=0.002). Nearly half of the unfunded applicants rated feedback as “not useful” (47%). “Expanding their collaborative network” and “increasing publication potential” were the highest-rated benefits for scholars. Mentors/principal investigators found the programme “very” valuable for the scholars (100%) and the network (75%). The 13 scholars were first/senior authors for 97 abstracts and 109 manuscripts, served on 22 Pediatric Heart Network committees, and were awarded $9,673,660 in subsequent extramural funding for a return of ~$10 for every scholar dollar spent.
Overall, patient satisfaction with the Scholar Award was high and scholars met many academic markers of success. Despite this, programme challenges were identified and improvement strategies were developed.
To test the hypothesis that more frequent consumption of sugar-sweetened soft drinks would be associated with increased risk of obesity-related cancers. Associations for artificially sweetened soft drinks were assessed for comparison.
Prospective cohort study with cancers identified by linkage to cancer registries. At baseline, participants completed a 121-item FFQ including separate questions about the number of times in the past year they had consumed sugar-sweetened or artificially sweetened soft drinks. Anthropometric measurements, including waist circumference, were taken and questions about smoking, leisure-time physical activity and intake of alcoholic beverages were completed.
The Melbourne Collaborative Cohort Study (MCCS) is a prospective cohort study which recruited 41 514 men and women aged 40–69 years between 1990 and 1994. A second wave of data collection occurred in 2003–2007.
Data for 35 593 participants who developed 3283 incident obesity-related cancers were included in the main analysis.
Increasing frequency of consumption of both sugar-sweetened and artificially sweetened soft drinks was associated with greater waist circumference at baseline. For sugar-sweetened soft drinks, the hazard ratio (HR) for obesity-related cancers increased as frequency of consumption increased (HR for consumption >1/d v. <1/month=1·18; 95 % CI 0·97, 1·45; P-trend=0·007). For artificially sweetened soft drinks, the HR for obesity-related cancers was not associated with consumption (HR for consumption >1/d v. <1/month=1·00; 95 % CI 0·79, 1·27; P-trend=0·61).
Our results add to the justification to minimise intake of sugar-sweetened soft drinks.
The main purpose for holding a Workshop about the Large Synoptic Survey Telescope (LSST) was to move all participants further towards answering the question, “How will I do my science with LSST data?” Presentations included (i) the planned pipelines and products of the data management team, and (ii) the existing channels for communication within the science community and between the community and the LSST Data Management team. In between the formal presentations, small groups discussed matters such as how to select the data products or communications resources that were best suited to individual science goals. The latter discussions were designed both to facilitate engagement with the material and to foster collaboration. Participants should thus have become better equipped to continue on their respective individual paths towards science with LSST.