In this paper, the generation of relativistic electron mirrors (REMs) and the reflection of an ultra-short laser pulse off these mirrors are studied using two-dimensional particle-in-cell simulations. REMs with ultra-high acceleration and expansion velocity can be produced from a solid nanofoil illuminated normally by an ultra-intense femtosecond laser pulse with a sharp rising edge. A chirped attosecond pulse can be produced through the reflection of a counter-propagating probe laser off the accelerating REM. In the electron moving frame, the plasma frequency of the REM keeps decreasing due to its rapid expansion, while the laser frequency keeps increasing due to the acceleration of the REM and the relativistic Doppler shift from the lab frame to the electron moving frame. Within an ultra-short time interval, the two frequencies become equal in the electron moving frame, leading to resonance between the laser and the REM. The reflected radiation near this interval, and the corresponding spectra, are amplified by the resonance. By adjusting the arrival time of the probe laser, a selected part of the reflected field can be amplified or suppressed, allowing selective adjustment of the corresponding spectra.
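The frequency upshift underlying this scheme follows the standard relativistic-mirror Doppler formulas; a minimal numerical sketch (function names are ours and illustrative, not from the paper):

```python
import math

def mirror_frame_frequency(omega0, gamma):
    """Frequency of a counter-propagating probe as seen in the frame of a
    mirror moving with Lorentz factor gamma (standard relativistic
    Doppler shift)."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * (1.0 + beta) * omega0

def reflected_lab_frequency(omega0, gamma):
    """Frequency of the reflected pulse back in the lab frame: the probe
    is Doppler-shifted twice, approaching a 4*gamma^2 upshift for
    gamma >> 1."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma**2 * (1.0 + beta)**2 * omega0

# Example: a mirror at gamma = 10 upshifts the reflected light by a
# factor of ~398 (close to the ultra-relativistic limit 4*gamma^2 = 400).
upshift = reflected_lab_frequency(1.0, 10.0)
```

Because the REM accelerates during the reflection, gamma grows over the interaction, which is what chirps the reflected attosecond pulse.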
We aimed to investigate the heterogeneity of seasonal suicide patterns among multiple geographically, demographically and socioeconomically diverse populations.
Weekly time-series data of suicide counts for 354 communities in 12 countries during 1986–2016 were analysed. A two-stage analysis was performed. In the first stage, a generalised linear model including cyclic splines was used to estimate the seasonal pattern of suicide for each community. In the second stage, the community-specific seasonal patterns were combined for each country using meta-regression. In addition, the community-specific seasonal patterns were regressed onto community-level socioeconomic, demographic and environmental indicators using meta-regression.
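The first-stage model can be sketched as follows, substituting a simple cyclic Fourier (harmonic) term for the cyclic splines actually used in the study; the function name and data are illustrative only:

```python
import numpy as np

def seasonal_pattern(week_of_year, log_rate, n_harmonics=2):
    """First-stage sketch: regress weekly log suicide rates on cyclic
    Fourier terms (a simple stand-in for the cyclic splines used in the
    study's generalised linear model). Returns the fitted coefficients
    and the peak/trough relative risk implied by the seasonal curve."""
    t = 2.0 * np.pi * np.asarray(week_of_year, dtype=float) / 52.0
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * t), np.cos(k * t)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(log_rate, dtype=float),
                               rcond=None)
    fitted = X @ beta
    rr = np.exp(fitted.max() - fitted.min())  # peak/trough relative risk
    return beta, rr
```

The second-stage meta-regression would then pool the community-level coefficients within each country, weighting by their estimation variance.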
We observed seasonal patterns in suicide, with counts peaking in spring and declining to a trough in winter in most countries. However, the shape of the seasonal pattern varied among countries from bimodal to unimodal seasonality. The amplitude of the seasonal pattern (i.e. the peak/trough relative risk) also varied among the 12 countries, from 1.47 (95% confidence interval [CI]: 1.33–1.62) to 1.05 (95% CI: 1.01–1.1). Subgroup differences in the seasonal pattern also varied across countries: in some countries, the amplitude was larger for females and for the elderly population (≥65 years of age) than for males and younger people, respectively. The subperiod difference likewise varied: some countries showed increasing seasonality over time, while others showed a decrease or little change. Finally, the amplitude was larger in communities with colder climates, higher proportions of elderly people and lower unemployment rates (p-values < 0.05).
Despite the common features of a spring peak and a winter trough, seasonal suicide patterns were largely heterogeneous in shape, amplitude, subgroup differences and temporal changes among different populations, as influenced by climate, demographic and socioeconomic conditions. Our findings may help elucidate the underlying mechanisms of seasonal suicide patterns and aid in improving the design of population-specific suicide prevention programmes based on these patterns.
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m2) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
The Pain Catastrophizing Scale (PCS) measures three aspects of catastrophic cognitions about pain—rumination, magnification, and helplessness. To facilitate assessment and clinical application, we aimed to (a) develop a short version on the basis of its factorial structure and the items’ correlations with key pain-related outcomes, and (b) identify the threshold on the short form indicative of risk for depression.
Participants were 664 Chinese older adults with chronic pain, recruited from social centers for older people. Besides the PCS, pain intensity, pain disability, and depressive symptoms were assessed.
For the full scale, confirmatory factor analysis showed that the hypothesized 3-factor model fit the data moderately well. On the basis of the factor loadings, two items were selected from each of the three dimensions. An additional item significantly associated with pain disability and depressive symptoms, over and above these six items, was identified through regression analyses. A short-PCS composed of seven items was formed, which correlated at r=0.97 with the full scale. Subsequently, receiver operating characteristic (ROC) curves were plotted against clinically significant depressive symptoms, defined as a score of ≥12 on a 10-item version of the Center for Epidemiologic Studies-Depression Scale. This analysis showed a score of ≥7 to be the optimal cutoff for the short-PCS, with sensitivity = 81.6% and specificity = 78.3% when predicting clinically significant depressive symptoms.
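The ROC-based cutoff selection can be sketched as follows, using Youden's J statistic (sensitivity + specificity − 1) as the optimality criterion; this is a common choice for defining an "optimal" cutoff, though the abstract does not state the exact criterion used:

```python
def optimal_cutoff(scores, labels):
    """Scan candidate thresholds on a screening score and return the one
    maximizing Youden's J against a binary outcome (1 = clinically
    significant depressive symptoms, 0 = not). Illustrative sketch."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        sens = tp / pos
        spec = tn / neg
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```

Applied to the short-PCS against CES-D-defined depression, this kind of scan is what yields a single cutoff (here, ≥7) with its associated sensitivity and specificity.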
The short-PCS may be used in lieu of the full scale and as a brief screen to identify individuals with serious catastrophizing.
Recent studies indicate that the early postnatal period is a critical window for gut microbiota manipulation to optimise immunity and body growth. This study investigated the effects of maternal faecal microbiota orally administered to neonatal piglets after birth on growth performance, selected microbial populations, intestinal permeability and the development of the intestinal mucosal immune system. In total, 12 litters of crossbred newborn piglets were selected, and litter size was standardised to 10 piglets. On day 1, the 10 piglets in each litter were randomly allotted to the faecal microbiota transplantation (FMT) or control group. Piglets in the FMT group were orally administered 2 ml of a faecal suspension from their nursing sow per day from 1 to 3 days of age; piglets in the control group received the same dose of a placebo inoculant (0.1 M potassium phosphate buffer containing 10% glycerol (vol/vol)). The experiment lasted 21 days. On days 7, 14 and 21, plasma and faecal samples were collected for the analysis of growth-related hormones and cytokines in plasma, and of lipocalin-2, secretory immunoglobulin A (sIgA), selected microbiota and short-chain fatty acids (SCFAs) in faeces. Faecal microbiota transplantation increased the average daily gain of piglets during week 3 and over the whole experimental period. Compared with the control group, the FMT group had higher concentrations of plasma growth hormone and IGF-1 on days 14 and 21. Faecal microbiota transplantation also reduced the incidence of diarrhoea during weeks 1 and 3, and reduced plasma concentrations of zonulin and endotoxin and diamine oxidase activities on days 7 and 14. The populations of Lactobacillus spp. and Faecalibacterium prausnitzii and the concentrations of faecal and plasma acetate, butyrate and total SCFAs were higher in the FMT group than in the control group on day 21. Moreover, FMT piglets had higher concentrations of plasma transforming growth factor-β and immunoglobulin G, and of faecal sIgA, than control piglets on day 21. These findings indicate that early intervention with maternal faecal microbiota improves growth performance, decreases intestinal permeability, stimulates sIgA secretion, and modulates gut microbiota composition and metabolism in suckling piglets.
To validate a system to detect ventilator associated events (VAEs) autonomously and in real time.
Retrospective review of ventilated patients using a secure informatics platform to identify VAEs (ie, automated surveillance) compared to surveillance by infection control (IC) staff (ie, manual surveillance), including development and validation cohorts.
The Massachusetts General Hospital, a tertiary-care academic health center, during January–March 2015 (development cohort) and January–March 2016 (validation cohort).
Ventilated patients in 4 intensive care units.
The automated process included (1) analysis of physiologic data to detect increases in positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2); (2) querying the electronic health record (EHR) for leukopenia or leukocytosis and antibiotic initiation data; and (3) retrieval and interpretation of microbiology reports. The cohorts were evaluated as follows: (1) manual surveillance by IC staff with independent chart review; (2) automated surveillance detection of ventilator-associated condition (VAC), infection-related ventilator-associated complication (IVAC), and possible VAP (PVAP); (3) senior IC staff adjudicated manual surveillance–automated surveillance discordance. Outcomes included sensitivity, specificity, positive predictive value (PPV), and manual surveillance detection errors. Errors detected during the development cohort resulted in algorithm updates applied to the validation cohort.
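The PEEP/FiO2 analysis in step (1) screens for the ventilator-associated condition (VAC) tier of the CDC/NHSN VAE definition: after a baseline period of stable or decreasing daily minimum settings, a rise in daily minimum PEEP of ≥3 cmH2O or FiO2 of ≥0.20 sustained for ≥2 days. A minimal sketch under that definition (the hospital's actual algorithm is not published in this abstract, and this simplification uses the worst of the prior two days as the baseline):

```python
def detect_vac(daily_min_peep, daily_min_fio2):
    """Flag candidate VAC onset days from daily minimum PEEP (cmH2O)
    and FiO2 (fraction). Illustrative sketch of the CDC/NHSN rule:
    a >=2-day sustained rise of PEEP >= 3 or FiO2 >= 0.20 over a
    2-day baseline."""
    events = []
    n = len(daily_min_peep)
    for d in range(2, n - 1):
        base_peep = max(daily_min_peep[d - 2], daily_min_peep[d - 1])
        base_fio2 = max(daily_min_fio2[d - 2], daily_min_fio2[d - 1])
        peep_rise = (daily_min_peep[d] - base_peep >= 3 and
                     daily_min_peep[d + 1] - base_peep >= 3)
        fio2_rise = (daily_min_fio2[d] - base_fio2 >= 0.20 and
                     daily_min_fio2[d + 1] - base_fio2 >= 0.20)
        if peep_rise or fio2_rise:
            events.append(d)  # ventilator day index of VAC onset
    return events
```

Steps (2) and (3) then escalate a detected VAC to IVAC or PVAP by joining in the white-cell counts, antibiotic starts, and microbiology results from the EHR.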
In the development cohort, there were 1,325 admissions, 479 ventilated patients, 2,539 ventilator days, and 47 VAEs. In the validation cohort, there were 1,234 admissions, 431 ventilated patients, 2,604 ventilator days, and 56 VAEs. With manual surveillance, in the development cohort, sensitivity was 40%, specificity was 98%, and PPV was 70%. In the validation cohort, sensitivity was 71%, specificity was 98%, and PPV was 87%. With automated surveillance, in the development cohort, sensitivity was 100%, specificity was 100%, and PPV was 100%. In the validation cohort, sensitivity was 85%, specificity was 99%, and PPV was 100%. Manual surveillance detection errors included missed detections, misclassifications, and false detections.
Manual surveillance is vulnerable to human error. Automated surveillance is more accurate and more efficient for VAE surveillance.
Patients with cardiovascular diseases are common in the emergency department (ED), and continuity of care following that visit is needed to ensure that they receive evidence-based diagnostic tests and therapy. We examined the frequency of follow-up care after discharge from an ED with a new diagnosis of one of three cardiovascular diseases.
We performed a retrospective cohort study of patients with a new diagnosis of heart failure, atrial fibrillation, or hypertension, who were discharged from 157 non-pediatric EDs in Ontario, Canada, between April 2007 and March 2014. We determined the frequency of follow-up care with a family physician, cardiologist, or internist within seven and 30 days, and assessed the association of patient, emergency physician, and family physician characteristics with obtaining follow-up care using cause-specific hazard modeling.
There were 41,485 qualifying ED visits. Just under half (47.0%) of patients had follow-up care within seven days, and 78.7% were seen by 30 days. Patients with serious comorbidities (renal failure, dementia, COPD, stroke, coronary artery disease, and cancer) had a lower adjusted hazard of obtaining 7-day follow-up care (HRs 0.77–0.95) and 30-day follow-up care (HRs 0.76–0.95). The only emergency physician characteristic associated with follow-up care was 5-year emergency medicine specialty training (HR 1.11). Compared to patients whose family physician was remunerated via a primarily fee-for-service model, patients were less likely to obtain 7-day follow-up care if their family physician was remunerated via one of three capitation models (HRs 0.72, 0.81, and 0.85) or via traditional fee-for-service (HR 0.91). Findings were similar for 30-day follow-up care.
Only half of patients discharged from an ED with a new diagnosis of atrial fibrillation, heart failure, or hypertension were seen within a week of discharge. Patients with significant comorbidities were less likely to obtain follow-up care, as were those whose family physician was remunerated via primarily capitation methods.
The unique phenotypic and genetic aspects of obsessive-compulsive disorder (OCD) and attention-deficit/hyperactivity disorder (ADHD) among individuals with Tourette syndrome (TS) are not well characterized. Here, we examine symptom patterns and the heritability of OCD and ADHD in TS families.
OCD and ADHD symptom patterns were examined in TS patients and their family members (N = 3494) using exploratory factor analyses (EFA) for OCD and ADHD symptoms separately, followed by latent class analyses (LCA) of the resulting OCD and ADHD factor sum scores jointly; heritability and clinical relevance of the resulting factors and classes were assessed.
EFA yielded a 2-factor model for ADHD and an 8-factor model for OCD. Both ADHD factors (inattentive and hyperactive/impulsive symptoms) were genetically related to TS, ADHD, and OCD. The doubts, contamination, need for sameness, and superstitions factors were genetically related to OCD, but not ADHD or TS; symmetry/exactness and fear-of-harm were associated with TS and OCD while hoarding was associated with ADHD and OCD. In contrast, aggressive urges were genetically associated with TS, OCD, and ADHD. LCA revealed a three-class solution: few OCD/ADHD symptoms (LC1), OCD & ADHD symptoms (LC2), and symmetry/exactness, hoarding, and ADHD symptoms (LC3). LC2 had the highest psychiatric comorbidity rates (⩾50% for all disorders).
Symmetry/exactness, aggressive urges, fear-of-harm, and hoarding show complex genetic relationships with TS, OCD, and ADHD, and, rather than being specific subtypes of OCD, transcend traditional diagnostic boundaries, perhaps representing an underlying vulnerability (e.g. failure of top-down cognitive control) common to all three disorders.
The Universe is permeated by hot, turbulent, magnetized plasmas. Turbulent plasma is a major constituent of active galactic nuclei, supernova remnants, the intergalactic and interstellar medium, the solar corona, the solar wind and the Earth’s magnetosphere, to mention just a few examples. Energy dissipation of turbulent fluctuations plays a key role in plasma heating and energization, yet we still do not understand the underlying physical mechanisms involved. THOR is a mission designed to answer the questions of how turbulent plasma is heated and particles accelerated, how the dissipated energy is partitioned and how dissipation operates in different regimes of turbulence. THOR is a single-spacecraft mission with an orbit tuned to maximize data return from regions in near-Earth space – magnetosheath, shock, foreshock and pristine solar wind – featuring different kinds of turbulence. Here we summarize the THOR proposal submitted on 15 January 2015 to the ‘Call for a Medium-size mission opportunity in ESA’s Science Programme for a launch in 2025 (M4)’. THOR has since been selected by the European Space Agency (ESA) for the study phase.
Pathogens utilize type III secretion systems to deliver effector proteins, which facilitate bacterial infections. The Escherichia coli type III secretion system 2 (ETT2), which plays a crucial role in bacterial virulence, is present in the majority of E. coli strains, although it has undergone widespread mutational attrition. We investigated the distribution and characteristics of ETT2 in avian pathogenic E. coli (APEC) isolates and identified five different ETT2 isoforms, including intact ETT2, in 57·6% (141/245) of the isolates. The ETT2 locus was present in the predominant APEC serotypes O78, O2 and O1. All of the ETT2 loci in the serotype O78 isolates were degenerate, whereas an intact ETT2 locus was mostly present in O1 and O2 serotype strains, which belong to phylogenetic groups B2 and D, respectively. Interestingly, a putative second type III secretion-associated locus (eip locus) was present only in the isolates with an intact ETT2. Moreover, ETT2 was more widely distributed in APEC isolates and exhibited more isoforms than ETT2 in human extraintestinal pathogenic E. coli, suggesting that APEC might pose a potential risk to human health. However, there was no distinct correlation between ETT2 and other virulence factors in APEC.
Major depressive disorder (MDD) is moderately heritable; however, genome-wide association studies (GWAS) of MDD, as well as of related continuous outcomes, have not shown consistent results. Attempts to elucidate the genetic basis of MDD may be hindered by heterogeneity in diagnosis. The Center for Epidemiological Studies Depression (CES-D) scale provides a widely used tool for measuring depressive symptoms clustered in four different domains, which can be combined into a total score or analysed as separate symptom domains.
We performed a meta-analysis of GWAS of the CES-D symptom clusters. We recruited 12 cohorts with the 20- or 10-item CES-D scale (32 528 persons).
One single nucleotide polymorphism (SNP), rs713224, located near the brain-expressed melatonin receptor (MTNR1A) gene, was associated with the somatic complaints domain of depressive symptoms, with borderline genome-wide significance (pdiscovery = 3.82 × 10−8). The SNP was analysed in an additional five cohorts comprising the replication sample (6813 persons). However, the association was not consistent in the replication sample (pdiscovery+replication = 1.10 × 10−6), with evidence of heterogeneity.
Despite the effort to harmonize the phenotypes across cohorts and participants, our study was still underpowered to detect consistent associations for depression, even by means of symptom classification. Moreover, the SNP-based heritability and co-heritability estimates suggest that only a very minor part of the variation would be captured by GWAS, which explains the sparse findings.
The present study investigated the effects of different levels of urea nitrogen (N) fertilizer on nutrient accumulation, in vitro rumen gas production and fermentation characteristics of forage oat straw (FOS) from oats (Avena sativa L. ‘Qinghai 444’) grown in the Tibet region of China. Fertilizer, applied at seeding (day 1), stem elongation (days 52–54) and heading (days 63–67), increased plant height and prolonged the maturity stage of the plant by 4–11 days compared with the non-fertilized control. Oat plants were harvested at maturity at the node 3–4 cm above ground, and then separated into grains and FOS. Both FOS and grain yields increased quadratically with increasing N fertilization, with theoretical maximums at N fertilization rates of 439 and 385 kg/ha, respectively. Increasing N fertilization did not affect the hemicellulose content of FOS, but substantially promoted the accumulation of crude protein, cellulose and lignin, resulting in a decrease in the energy content available for metabolism. A 72-h incubation of FOS with rumen fluids from lactating cows showed that increasing N slowed the fermentation rate, decreased in vitro dry matter disappearance and lowered cumulative gas production, while leaving fermentation gas composition unchanged. Nitrogen fertilization increased the final pH of the culture fluids and decreased microbial volatile fatty acid (VFA) production. The molar proportion of acetate was not affected, but the molar proportion of propionate decreased linearly with increasing urea fertilization; consequently, the ratio of lipogenic (e.g., acetate and butyrate) to glucogenic (propionate) acids tended to increase. In brief, increasing urea N fertilization promoted the growth of forage oats and increased the biomass yield as well as the crude protein and cellulose content of FOS.
Considering the negative effect of increased lignin content on nutrient digestibility and total VFA production, the suggested range of urea N fertilization is 156–363 kg N/ha for forage oats planted in Tibet to retain the nutritive value of FOS in the rumen.
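The "theoretical maximum" yields quoted above correspond to the vertex of a fitted quadratic yield–N response curve; a minimal sketch of that calculation (the data points here are made up for illustration, not taken from the study):

```python
import numpy as np

def optimal_n_rate(n_rates, yields):
    """Fit yield = c0 + c1*N + c2*N^2 and return the N rate at the
    vertex, -c1 / (2*c2), i.e. the fertilization level of maximum
    predicted yield. Illustrative sketch only."""
    c2, c1, c0 = np.polyfit(n_rates, yields, 2)
    assert c2 < 0, "response must be concave for an interior maximum"
    return -c1 / (2.0 * c2)

# Hypothetical yield response peaking at 400 kg N/ha:
n = [0, 100, 200, 300, 400, 500]
y = [10 + 0.4 * x - 0.0005 * x * x for x in n]
```

The same vertex calculation applied to the fitted FOS and grain responses gives the 439 and 385 kg/ha optima reported above.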
Neurological soft signs (NSS) have long been considered potential endophenotypes for schizophrenia. However, few studies have investigated the heritability and familiality of NSS. The present study examined the heritability and familiality of NSS in healthy twins and patient–relative pairs.
The abridged version of the Cambridge Neurological Inventory was administered to 267 pairs of monozygotic twins, 124 pairs of dizygotic twins, and 75 pairs of patients with schizophrenia and their non-psychotic first-degree relatives.
NSS were found to have moderate but significant heritability in the healthy twin sample. Moreover, NSS scores of patients with schizophrenia correlated closely with those of their non-psychotic first-degree relatives.
Taken together, the findings provide evidence on the heritability and familiality of NSS in the Han Chinese population.
A completely randomized experiment, planting highland barley in 36 field plots at the Lhasa Agricultural Experiment Station, was conducted to investigate the effect of urea nitrogen (N) fertilization levels of 0 (control), 156, 258, 363, 465 and 570 kg/ha on nutrient accumulation, in vitro rumen gas production and fermentation characteristics of highland barley straw (HBS). Each urea application was divided into three portions of 0.4, 0.3 and 0.3, fertilized sequentially at seeding (growth stage (GS) 0), stem elongation (GS 32) and heading (GS 49), respectively. The maturity stage lasted 5–13 days longer in response to urea N fertilization compared with the control. After removing grains, HBS biomass was harvested at maturity. The biomass yields of leaf, stem, straw and grain increased quadratically with increasing urea N fertilization, and HBS and grain yields peaked at estimated urea N fertilization levels of 385 and 428 kg/ha, respectively. Increasing urea N fertilization increased the accumulation of crude protein, cellulose and lignin, and decreased the content of ash and hemicellulose in HBS, resulting in a decrease in the energy content available to be metabolized. After incubating HBS for 72 h with rumen fluids from lactating cows, urea N fertilization decreased in vitro dry matter disappearance and cumulative gas production, and slightly altered fermentation end-gas composition. Urea N fertilization decreased microbial volatile fatty acid production, but did not alter the ratio of lipogenic acetate and butyrate to glucogenic propionate. In brief, the current urea N fertilization strategy promoted the growth of highland barley and increased the biomass yield and the protein and cellulose accumulation of HBS. A urea N fertilization level ⩽385 kg/ha could be sufficient for the growth of highland barley in Tibet without a consequent reduction in nutritive value during ruminal digestion.
Studies have suggested that maternal PUFA status during pregnancy may influence early childhood allergic diseases, although findings are inconsistent. We examined the relationship between maternal PUFA status and risk of allergic diseases in early childhood in an Asian cohort. Maternal plasma samples from the Growing Up in Singapore Towards Healthy Outcomes mother–offspring cohort were assayed at 26–28 weeks of gestation for relative abundance of PUFA. Offspring (n 960) were followed up from 3 weeks to 18 months of age, and clinical outcomes of potential allergic diseases (rhinitis, eczema and wheezing) were assessed by repeated questionnaires. Skin prick testing (SPT) was also performed at the age of 18 months. Any allergic disease with positive SPT was defined as having any one of the clinical outcomes plus a positive SPT. The prevalence of a positive SPT, rhinitis, eczema, wheezing and any allergic disease with positive SPT was 14·1 % (103/728), 26·5 % (214/808), 17·6 % (147/833), 10·9 % (94/859) and 9·4 % (62/657), respectively. After adjustment for confounders, maternal total n-3, n-6 PUFA status and the n-6:n-3 PUFA ratio were not significantly associated with offspring rhinitis, eczema, wheezing, a positive SPT and having any allergic disease with positive SPT in the offspring (P>0·01 for all). A weak trend of higher maternal n-3 PUFA being associated with higher risk of allergic diseases with positive SPT in offspring was observed. These findings do not support the hypothesis that the risk of early childhood allergic diseases is modified by variation in maternal n-3 and n-6 PUFA status during pregnancy in an Asian population.
Simulation models can offer valuable insights into the effectiveness of different control strategies and act as important decision support tools when comparing and evaluating outbreak scenarios and control strategies. An international modelling study was performed to compare a range of vaccination strategies in the control of foot-and-mouth disease (FMD). Modelling groups from five countries (Australia, New Zealand, USA, UK, The Netherlands) participated in the study. Vaccination is increasingly being recognized as a potentially important tool in the control of FMD, although there is considerable uncertainty as to how and when it should be used. We sought to compare model outputs and assess the effectiveness of different vaccination strategies in the control of FMD. Using a standardized outbreak scenario based on data from an FMD exercise in the UK in 2010, the study showed general agreement between respective models in terms of the effectiveness of vaccination. Under the scenario assumptions, all models demonstrated that vaccination with ‘stamping-out’ of infected premises led to a significant reduction in predicted epidemic size and duration compared to the ‘stamping-out’ strategy alone. For all models there were advantages in vaccinating cattle-only rather than all species, using 3-km vaccination rings immediately around infected premises, and starting vaccination earlier in the control programme. This study has shown that certain vaccination strategies are robust even to substantial differences in model configurations. This result should increase end-user confidence in conclusions drawn from model outputs. These results can be used to support and develop effective policies for FMD control.