Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to four weeks after maturity at multiple sites across eleven states in the southern, northern, and mid-Atlantic United States. From soybean maturity to four weeks after maturity, cumulative percent seed shatter was lowest in the southern U.S. and increased at sites farther north. At soybean maturity, percent seed shatter ranged from 1% to 70%; by 25 days after soybean maturity, that range had shifted to 5% to 100% (mean: 42%). There were considerable differences in seed shatter onset and rate of progression between sites and years in some species that could affect their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Maltreatment adversely impacts the development of children across a host of domains. One way in which maltreatment may exert its deleterious effects is by becoming embedded in the activity of neurophysiological systems that regulate metabolic function. This paper reviews the literature regarding the association between childhood maltreatment and the activity of three systems: the parasympathetic nervous system, the sympathetic nervous system, and the hypothalamic–pituitary–adrenal axis. A particular emphasis is placed on the extent to which the literature supports a common account of activity across these systems under conditions of homeostasis and stress. The paper concludes with an outline of directions for future research and the implications of the literature for policy and practice.
In clinical and translational research, data science is often, and fortuitously, integrated with data collection. This contrasts with the typical position of data scientists in other settings, where they are isolated from data collectors. Because of this, effective use of data science techniques to resolve translational questions requires innovation in the organization and management of these data.
We propose an operational framework that respects this important difference in how research teams are organized. To maximize the accuracy and speed of the clinical and translational data science enterprise under this framework, we define a set of eight best practices for data management.
In our own work at the University of Rochester, we have strived to utilize these practices in a customized version of the open source LabKey platform for integrated data management and collaboration. We have applied this platform to cohorts that longitudinally track multidomain data from over 3000 subjects.
We argue that this has made analytical datasets more readily available and lowered the barrier to interdisciplinary collaboration, enabling a team-based data science that is unique to the clinical and translational setting.
Implementation of genome-scale sequencing in clinical care has significant challenges: the technology is highly dimensional with many kinds of potential results, results interpretation and delivery require expertise and coordination across multiple medical specialties, clinical utility may be uncertain, and there may be broader familial or societal implications beyond the individual participant. Transdisciplinary consortia and collaborative team science are well poised to address these challenges. However, understanding the complex web of organizational, institutional, physical, environmental, technologic, and other political and societal factors that influence the effectiveness of consortia is understudied. We describe our experience working in the Clinical Sequencing Evidence-Generating Research (CSER) consortium, a multi-institutional translational genomics consortium.
A key aspect of the CSER consortium was the juxtaposition of site-specific measures with the need to identify consensus measures related to clinical utility and to create a core set of harmonized measures. During this harmonization process, we sought to minimize participant burden, accommodate project-specific choices, and use validated measures that allow data sharing.
Identifying platforms to ensure swift communication between teams and to manage materials and data was essential to our harmonization efforts. Funding agencies can help consortia by clarifying key study design elements across projects during the proposal preparation phase and by providing a framework for sharing data across participating projects.
In summary, time and resources must be devoted to developing and implementing collaborative practices as preparatory work at the beginning of project timelines to improve the effectiveness of research consortia.
The volume of evidence from scientific research and wider observation is greater than ever before, but much is inconsistent and scattered in fragments over increasingly diverse sources, making it hard for decision-makers to find, access and interpret all the relevant information on a particular topic, resolve seemingly contradictory results or simply identify where there is a lack of evidence. Evidence synthesis is the process of searching for and summarising a body of research on a specific topic in order to inform decisions, but is often poorly conducted and susceptible to bias. In response to these problems, more rigorous methodologies have been developed and subsequently made available to the conservation and environmental management community by the Collaboration for Environmental Evidence. We explain when and why these methods are appropriate, and how evidence can be synthesised, shared, used as a public good and benefit wider society. We discuss new developments with potential to address barriers to evidence synthesis and communication and how these practices might be mainstreamed in the process of decision-making in conservation.
We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5,000 deg$^2$; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20,000 deg$^2$ from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg$^2$ from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$. These surveys will also provide highly complementary clustering and weak lensing measurements with systematic uncertainties independent of those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST, leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys, and will be regularly updated by the Cosmology Science Working Group in the run-up to the start of operations and the Key Science Programme of SKA1.
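For orientation (our addition, using standard conventions from the dark-energy forecasting literature rather than a statement of the Red Book's specific modelling choices), constraints of this kind are usually quoted in terms of the CPL equation-of-state parametrization and the growth rate of structure:

$$w(z) = w_0 + w_a\,\frac{z}{1+z}, \qquad f(z) \equiv \frac{\mathrm{d}\ln D}{\mathrm{d}\ln a} \approx \Omega_{\mathrm{m}}(z)^{\gamma},$$

where $D(a)$ is the linear growth factor and $\gamma \simeq 0.55$ in $\Lambda$CDM; measuring $w_0$, $w_a$, $f(z)$, and $\gamma$ is how the dark-energy and modified-gravity goals above are typically made quantitative.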
The mechanism through which in utero exposure to maternal overweight/obesity programs offspring overweight/obesity is unknown but may operate through biologic pathways involving offspring anthropometry at birth. Thus, we sought to examine to what extent the association between in utero exposure to maternal overweight/obesity and childhood overweight/obesity is mediated by birth anthropometry. Analyses were conducted on a retrospective cohort with data obtained from one hospital system. A natural effects model framework was used to estimate the natural direct effect and natural indirect effect of birth anthropometry (weight, length, head circumference, ponderal index, and small-for-gestational-age [SGA] or large-for-gestational-age [LGA] status) for the association between pre-pregnancy maternal body mass index (BMI) category (overweight/obese vs. normal weight) and offspring overweight/obesity in childhood. Models were adjusted for maternal and child socio-demographics. Three thousand nine hundred and fifty mother–child dyads were included in the analyses (1467 [57.8%] of mothers and 913 [34.4%] of children were overweight/obese). Results suggest that a small percentage of the effect of maternal pre-pregnancy overweight/obesity on offspring overweight/obesity operated through offspring anthropometry at birth (weight: 15.5%, length: 5.2%, head circumference: 8.5%, ponderal index: 2.2%, SGA: 2.9%, and LGA: 4.2%). There was a small increase in the percentage mediated when gestational diabetes or hypertensive disorders were added to the models. Our study suggests that some measures of birth anthropometry mediate the association between maternal pre-pregnancy overweight/obesity and offspring overweight/obesity in childhood, and that the size of this mediated effect is small.
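A minimal sketch of this kind of mediation decomposition is given below. It uses the counterfactual (Imai-style) mediation analysis available in statsmodels, which estimates direct and indirect effects closely related to the natural effects described above, rather than the authors' exact natural effects model; the variable names and simulated data are hypothetical.

```python
# Hypothetical sketch of a counterfactual mediation analysis; the exposure,
# mediator, and outcome names below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"mat_bmi_ow": rng.integers(0, 2, n)})  # exposure: maternal overweight/obesity (0/1)
df["birth_weight"] = 3.2 + 0.15 * df["mat_bmi_ow"] + rng.normal(0, 0.4, n)  # mediator: birth weight (kg)
p = 1 / (1 + np.exp(-(-3 + 0.9 * df["mat_bmi_ow"] + 0.5 * df["birth_weight"])))
df["child_ow"] = rng.binomial(1, p)  # outcome: child overweight/obesity (0/1)

# Outcome model includes exposure and mediator; mediator model includes exposure.
outcome_model = sm.GLM.from_formula(
    "child_ow ~ mat_bmi_ow + birth_weight", df, family=sm.families.Binomial()
)
mediator_model = sm.OLS.from_formula("birth_weight ~ mat_bmi_ow", df)

med = Mediation(outcome_model, mediator_model, "mat_bmi_ow", mediator="birth_weight")
res = med.fit(method="parametric", n_rep=500)
print(res.summary())  # direct effect, indirect (mediated) effect, proportion mediated
```

In the paper's terms, the indirect effect corresponds to the pathway through birth anthropometry, and the reported percentages (e.g., 15.5% for weight) are proportions mediated.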
It is unclear what session frequency is most effective in cognitive–behavioural therapy (CBT) and interpersonal psychotherapy (IPT) for depression.
To compare the effects of once weekly and twice weekly sessions of CBT and IPT for depression.
We conducted a multicentre randomised trial from November 2014 through December 2017. We recruited 200 adults with depression across nine specialised mental health centres in the Netherlands. This study used a 2 × 2 factorial design, randomising patients to once or twice weekly sessions of CBT or IPT over 16–24 weeks, up to a maximum of 20 sessions. Main outcome measures were depression severity, measured with the Beck Depression Inventory-II at baseline, before session 1, and 2 weeks, 1, 2, 3, 4, 5 and 6 months after start of the intervention. Intention-to-treat analyses were conducted.
Compared with patients who received weekly sessions, patients who received twice weekly sessions showed a statistically significant decrease in depressive symptoms (estimated mean difference between weekly and twice weekly sessions at month 6: 3.85 points; difference in effect size d = 0.55), lower attrition (n = 16 vs. n = 32), and a higher rate of response (hazard ratio 1.48, 95% CI 1.00–2.18).
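As a consistency check (our arithmetic, assuming d here is the standardized mean difference, i.e. the raw between-group difference divided by the pooled standard deviation of BDI-II scores):

$$d = \frac{\Delta}{\mathrm{SD}_{\text{pooled}}} \;\Rightarrow\; \mathrm{SD}_{\text{pooled}} \approx \frac{3.85}{0.55} = 7.0 \text{ BDI-II points.}$$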
In clinical practice settings, delivery of twice weekly sessions of CBT and IPT for depression is a way to improve depression treatment outcomes.
Treatment enactment, a final stage of treatment implementation, refers to patients’ application of skills and concepts from treatment sessions into everyday life situations. We examined treatment enactment in a two-arm, multicenter trial comparing two psychoeducational treatments for persons with chronic moderate to severe traumatic brain injury and problematic anger.
Seventy-one of 90 participants from the parent trial underwent a telephone enactment interview at least 2 months (median 97 days, range 64–586 days) after cessation of treatment. Enactment, quantified as average frequency of use across seven core treatment components, was compared across treatment arms: anger self-management training (ASMT) and personal readjustment and education (PRE), a structurally equivalent control. Components were also rated for helpfulness when used. Predictors of, and barriers to, enactment were explored.
More than 80% of participants reported remembering all seven treatment components when queried using a recognition format. Enactment was equivalent across treatments. The most used and most helpful components concerned normalizing anger and general anger management strategies (ASMT), and normalizing traumatic brain injury-related changes while providing hope for improvement (PRE). Higher baseline executive function and IQ were predictive of better enactment, as was better episodic memory (at a trend level). Poor memory was cited by many participants as a barrier to enactment, as was the reaction of other people to attempted use of strategies.
Treatment enactment is a neglected component of implementation in neuropsychological clinical trials, but is important both to measure and to help participants achieve sustained carryover of core treatment ingredients and learned material to everyday life.
The science of studying diamond inclusions to understand Earth history has developed significantly over the past decades, with new instrumentation and techniques applied to diamond sample archives revealing the stories contained within diamond inclusions. This chapter reviews what diamonds can tell us about the deep carbon cycle over the course of Earth's history: how the geochemistry of diamonds and their inclusions informs us about the deep carbon cycle, the origin of diamonds in Earth's mantle, and the evolution of diamonds through time.
A new fossil site in a previously unexplored part of western Madagascar (the Beanka Protected Area) has yielded remains of many recently extinct vertebrates, including giant lemurs (Babakotia radofilai, Palaeopropithecus kelyus, Pachylemur sp., and Archaeolemur edwardsi), carnivores (Cryptoprocta spelea), the aardvark-like Plesiorycteropus sp., and giant ground cuckoos (Coua). Many of these represent considerable range extensions. Extant species that were extirpated from the region (e.g., Prolemur simus) are also present. Calibrated radiocarbon ages for 10 bones from extinct primates span the last three millennia. The largely undisturbed taphonomy of bone deposits supports the interpretation that many specimens fell in from a rock ledge above the entrance. Some primates and other mammals may have been prey items of avian predators, but human predation is also evident. Strontium isotope ratios ($^{87}$Sr/$^{86}$Sr) suggest that fossils were local to the area. Pottery sherds and bones of extinct and extant vertebrates with cut and chop marks indicate human activity in previous centuries. Scarcity of charcoal and human artifacts suggests only occasional visitation to the site by humans. The fossil assemblage from this site is unusual in that, while it contains many sloth lemurs, it lacks the ratites, hippopotami, and crocodiles typical of nearly all other Holocene subfossil sites on Madagascar.
Dissemination and implementation (D&I) science is not a formal element of the Clinical Translational Science Award (CTSA) Program, and D&I science activities across the CTSA Consortium are largely unknown.
The CTSA Dissemination, Implementation, and Knowledge Translation Working Group surveyed CTSA leaders to explore D&I science-related activities, barriers, and needed supports, then conducted univariate and qualitative analyses of the data.
Out of 67 CTSA leaders, 55.2% responded. CTSAs reported directly funding D&I programs (54.1%), training (51.4%), and projects (59.5%). Indirect support (e.g., activities promoted by the CTSA without direct funding) was higher: programs (70.3%), training (64.9%), and projects (54.1%). Top barriers included funding (39.4%), limited D&I science faculty (30.3%), and lack of understanding of D&I science (27.3%). Respondents (63.4%) noted the importance of D&I training and recommended coordination of D&I activities across CTSA hubs (33.3%).
These findings should guide CTSA leadership in efforts to raise awareness and advance the role of D&I science in improving population health.
Identifying risk factors for psychosis in individuals at clinical high risk (CHR) is vital to prevention and early intervention efforts. Among prodromal abnormalities, cognitive functioning has shown intermediate levels of impairment in CHR relative to first-episode psychosis and healthy controls, highlighting a potential role as a risk factor for transition to psychosis and other negative clinical outcomes. The current study used the AX-CPT, a brief 15-minute computerized task, to determine whether cognitive control impairments in CHR at baseline could predict clinical status at 12-month follow-up.
Baseline AX-CPT data were obtained from 117 CHR individuals participating in two studies, the Early Detection, Intervention, and Prevention of Psychosis Program (EDIPPP) and the Understanding Early Psychosis Programs (EP), and were used to predict clinical status at 12-month follow-up. At 12 months, 19 individuals had converted to a first episode of psychosis (CHR-C), 52 had remitted (CHR-R), and 46 had persistent sub-threshold symptoms (CHR-P). Binary logistic regression and multinomial logistic regression were used to test the prediction models.
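A minimal sketch of the kind of analysis described, with simulated data (our illustration, not the study's code; the hit/false-alarm conventions and values are assumptions):

```python
# Hypothetical sketch: d-prime context from AX-CPT performance, then
# logistic-regression prediction of 12-month clinical status (simulated data).
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def d_prime(hit_rate, fa_rate):
    """d' = Z(hit rate) - Z(false-alarm rate), with rates clipped away from 0/1."""
    hit = np.clip(hit_rate, 0.01, 0.99)
    fa = np.clip(fa_rate, 0.01, 0.99)
    return norm.ppf(hit) - norm.ppf(fa)

rng = np.random.default_rng(1)
n = 117  # sample size from the study
dprime = d_prime(rng.uniform(0.7, 0.99, n), rng.uniform(0.05, 0.4, n))
X = dprime.reshape(-1, 1)

y3 = rng.integers(0, 3, n)         # 0 = CHR-R, 1 = CHR-P, 2 = CHR-C (simulated)
y_convert = (y3 == 2).astype(int)  # converters vs. non-converters

# Binary model: the study reports 0.723 discrimination; random data gives ~0.5.
bin_model = LogisticRegression().fit(X, y_convert)
print("binary AUC:", roc_auc_score(y_convert, bin_model.predict_proba(X)[:, 1]))

# Multinomial model across the three CHR subgroups (multiclass by default).
multi_model = LogisticRegression().fit(X, y3)
print("class probabilities for first 3 subjects:", multi_model.predict_proba(X[:3]))
```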
Baseline AX-CPT performance (d-prime context) was less impaired in the CHR-R group than in the CHR-P and CHR-C groups. AX-CPT predictive validity was robust (0.723) for discriminating converters vs. non-converters, and greater still (0.771) when predicting the three CHR subgroups.
These longitudinal outcome data indicate that cognitive control deficits, as measured by AX-CPT d-prime context, are a strong predictor of clinical outcome in CHR individuals. The AX-CPT is a brief, easily implemented, and cost-effective measure that may be valuable for large-scale prediction efforts.
The efficient and effective movement of research into practice is acknowledged as crucial to improving population health and assuring return on investment in healthcare research. The National Center for Advancing Translational Sciences, which sponsors the Clinical and Translational Science Awards (CTSA), recognizes that dissemination and implementation (D&I) sciences have matured over the last 15 years and are central to its goal of shifting academic health institutions to better align with this reality. In 2016, the CTSA Collaboration and Engagement Domain Task Force chartered a D&I Science Workgroup to explore the role of D&I sciences across the translational research spectrum. This special communication discusses the conceptual distinctions and purposes of dissemination, implementation, and translational sciences. We propose an integrated framework, with real-world examples, for articulating the role of D&I sciences within and across all stages of the translational research spectrum. The framework's major proposition is that it situates D&I sciences as targeted "sub-sciences" of translational science to be used by CTSAs, and others, to identify and investigate coherent strategies for more routinely and proactively accelerating research translation. The framework highlights the importance of D&I thought leaders in extending D&I principles to all research stages.
Clinical Enterobacteriaceae isolates with a colistin minimum inhibitory concentration (MIC) ≥4 mg/L from a United States hospital were screened for the mcr-1 gene using real-time polymerase chain reaction (RT-PCR) and confirmed by whole-genome sequencing. Four colistin-resistant Escherichia coli isolates contained mcr-1. Two isolates belonged to the same sequence type (ST-632). All subjects had prior international travel and antimicrobial exposure.
Cardiopulmonary exercise testing has been used to measure functional capacity in children who have undergone heart transplantation. Cardiopulmonary exercise testing results have not been compared between children transplanted for a primary diagnosis of CHD and those with a primary diagnosis of cardiomyopathy, despite differences in outcomes. This study aimed to compare cardiopulmonary exercise testing performance between these two groups.
Patients who underwent heart transplant with subsequent cardiopulmonary exercise testing at least 6 months after transplant at our institution were identified. They were then divided into two groups based on primary cardiac diagnosis: CHD or cardiomyopathy. Patient characteristics, echocardiograms, cardiac catheterisations, outcomes, and cardiopulmonary exercise test results were compared between the two groups.
Of the 35 patients, 15 (43%) had CHD and 20 (57%) had cardiomyopathy. Age at transplant, kidney disease, lung disease, previous rejection, coronary vasculopathy, catheterisation, and echocardiographic data were similar between the groups. Mean time from transplant to cardiopulmonary exercise testing, exercise duration, and maximum oxygen consumption were similar in both groups. Heart rate response differed: 63 beats per minute in the CHD group versus 78 in the cardiomyopathy group (p = 0.028). Patients with CHD had more chronotropic incompetence than those with cardiomyopathy (p = 0.036).
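For context (our illustration; the abstract does not give the study's exact definitions), heart rate response is typically peak minus resting heart rate, and chronotropic incompetence is often judged from the chronotropic index, the fraction of heart-rate reserve achieved:

```python
# Hypothetical sketch using common conventions (age-predicted max HR = 220 - age);
# the example values are invented, chosen to match the 63 bpm CHD group mean above.
def heart_rate_response(peak_hr, rest_hr):
    """Peak minus resting heart rate, in beats per minute."""
    return peak_hr - rest_hr

def chronotropic_index(peak_hr, rest_hr, age):
    """Fraction of heart-rate reserve achieved; values below ~0.8 are a
    common threshold for chronotropic incompetence."""
    predicted_max = 220 - age
    return (peak_hr - rest_hr) / (predicted_max - rest_hr)

print(heart_rate_response(peak_hr=153, rest_hr=90))   # 63 bpm
print(round(chronotropic_index(153, 90, age=15), 2))  # ~0.55
```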
Primary diagnosis of CHD is associated with abnormal heart rate response and more chronotropic incompetence compared to those transplanted for cardiomyopathy.