Abnormal effort-based decision-making represents a potential mechanism underlying motivational deficits (amotivation) in psychotic disorders. Previous research identified effort-allocation impairment in chronic schizophrenia but focused mostly on the physical effort modality. No study has investigated cognitive effort allocation in first-episode psychosis (FEP).
Cognitive effort allocation was examined in 40 FEP patients and 44 demographically matched healthy controls using the Cognitive Effort-Discounting (COGED) paradigm, which quantified participants’ willingness to expend cognitive effort in terms of explicit, continuous discounting of monetary rewards based on parametrically varied cognitive demands (levels N of the N-back task). The relationship between reward-discounting and amotivation was investigated. Group differences in reward-magnitude and effort-cost sensitivity, and the differential associations of these sensitivity indices with amotivation, were explored.
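The abstract does not publish the study's scoring procedure; as a rough, hypothetical illustration of how a COGED-style discounting index can be derived from per-level indifference points, consider the sketch below (all values are made up, not study data):

```python
# Hypothetical COGED-style scoring sketch. Each entry maps an N-back level to
# an indifference point: the offer for the easy (1-back) task that a
# participant judges equal to a fixed base reward for the harder level.
base_reward = 4.0  # fixed amount offered for the hard task (illustrative)
indifference = {2: 3.2, 3: 2.6, 4: 2.1, 5: 1.5}  # N-back level -> accepted offer

# Subjective value of each hard level, as a fraction of the base reward
subjective_value = {n: amt / base_reward for n, amt in indifference.items()}

# Overall discounting: 1 minus the mean subjective value across levels.
# Higher values indicate steeper devaluation of reward by cognitive effort.
discounting = 1 - sum(subjective_value.values()) / len(subjective_value)
```

In this framing, "greater reward-discounting" in patients corresponds to a larger `discounting` value, i.e. lower subjective value of rewards that require cognitive effort.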
Patients displayed significantly greater reward-discounting than controls. Discounting was most pronounced in patients with high levels of amotivation, even when N-back performance and reward base amount were taken into consideration. Moreover, patients exhibited reduced reward-benefit and effort-cost sensitivity relative to controls, and decreased sensitivity to reward-benefit, but not effort-cost, was correlated with diminished motivation. Reward-discounting and sensitivity indices were generally unrelated to other symptom dimensions, antipsychotic dose and cognitive deficits.
This study provides the first evidence of cognitive effort-based decision-making impairment in FEP, and indicates that decreased effort expenditure is associated with amotivation. Our findings further suggest that abnormal effort allocation and amotivation might primarily be related to blunted reward valuation. Prospective research is required to clarify the utility of effort-based measures in predicting amotivation and functional outcome in FEP.
There have been significant changes in the diagnostic criteria for diffuse gliomas in the 2016 WHO CNS tumor classification, with the incorporation of molecular criteria into a number of definitions. This has placed a greater emphasis on the availability of key immunohistochemical and molecular tests. In order to determine the effect that these changes have had on neuropathology practice and the access of different centres to these tests, we designed a survey that was sent to all members of the Canadian Association of Neuropathology member list in the fall of 2017. This survey asked a number of questions relating to the approach to glioma diagnosis, immunohistochemical/molecular test ordering patterns, in-house test availability, and need to send out for testing. In this presentation we will present preliminary results from this survey, with a focus on institutional testing capabilities. This provides a valuable resource that could ultimately lead to a national database of immunohistochemical and molecular test availability for each neuropathology centre.
This presentation will enable the learner to:
1. Review the key molecular markers in the diagnosis of adult gliomas and methods of testing for them
2. Discuss the effect that the 2016 WHO CNS tumor update has had on clinical practice in Canada
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m2) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
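CODATwins publications estimate heritability with full variance-component modeling; purely as a back-of-envelope illustration of the underlying logic, Falconer's classic approximation derives heritability from monozygotic (MZ) and dizygotic (DZ) twin correlations (the correlations below are invented, not CODATwins estimates):

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's approximation from MZ and DZ intra-pair correlations.

    Assumes r_MZ = h2 + c2 and r_DZ = h2/2 + c2, so:
      h2 (additive genetic)    = 2 * (r_MZ - r_DZ)
      c2 (shared environment)  = 2 * r_DZ - r_MZ
      e2 (unique environment)  = 1 - r_MZ
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Illustrative correlations only
h2, c2, e2 = falconer_ace(r_mz=0.85, r_dz=0.45)
```

The three components sum to one by construction, which makes the formula a convenient sanity check even though modern twin analyses fit ACE models by maximum likelihood instead.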
Introduction: Although acute gastroenteritis is an extremely common childhood illness, there is a paucity of literature characterizing the associated pain and its management. Our primary objective was to quantify the pain experienced by children with acute gastroenteritis in the 24 hours prior to emergency department (ED) presentation. Secondary objectives included describing maximum pain, analgesic use, discharge recommendations, and factors that influenced analgesic use in the ED. Methods: Study participants were recruited into this prospective cohort study by the Alberta Provincial Pediatric EnTeric Infection TEam between January 2014 and September 2017. The study was conducted at two Canadian pediatric EDs: the Alberta Children's Hospital (Calgary) and the Stollery Children's Hospital (Edmonton). Eligibility criteria included age < 18 years, acute gastroenteritis (≥ 3 episodes of diarrhea or vomiting in the previous 24 hours), and symptom duration ≤ 7 days. The primary study outcome, caregiver-reported maximum pain in the 24 hours prior to presentation, was assessed using the 11-point Verbal Numerical Rating Scale. Results: We recruited 2136 patients; median age was 20.8 months (IQR 10.4, 47.4) and 45.8% (979/2136) were female. In the 24 hours prior to enrolment, 28.6% (610/2136) of caregivers reported that their child experienced moderate (4-6) pain and 46.2% (986/2136) reported severe (7-10) pain. During the emergency visit, 31.1% (664/2136) described pain as moderate and 26.7% (571/2136) as severe. In the ED, analgesia was provided to 21.2% (452/2131) of children. The most commonly administered analgesics in the ED were ibuprofen (68.1%, 308/452) and acetaminophen (43.4%, 196/452); at home, acetaminophen was most commonly administered (77.7%, 700/901), followed by ibuprofen (37.5%, 338/901).
Factors associated with analgesia use in the ED were greater pain scores during the visit, having a primary-care physician, shorter illness duration, fewer diarrheal episodes, presence of fever and hospitalization. Conclusion: Although children presenting to the ED with acute gastroenteritis experience moderate to severe pain, both prior to and during their emergency visit, analgesic use is limited. Future research should focus on appropriate pain management through the development of effective and safe pain treatment plans.
Reciprocal space mapping can be efficiently carried out using a position-sensitive x-ray detector (PSD) coupled to a traditional double-axis diffractometer. The PSD offers parallel measurement of the total scattering angle of all diffracted x-rays during a single rocking-curve scan. As a result, a two-dimensional reciprocal space map can be made in a very short time similar to that of a one-dimensional rocking-curve scan. Fast, efficient reciprocal space mapping offers numerous routine advantages to the x-ray diffraction analyst. Some of these advantages are the explicit differentiation of lattice strain from crystal orientation effects in strain-relaxed heteroepitaxial layers; the nondestructive characterization of the size, shape and orientation of nanocrystalline domains in ordered-alloy epilayers; and the ability to measure the average size and shape of voids in porous epilayers. Here, the PSD-based diffractometer is described, and specific examples clearly illustrating the advantages of complete reciprocal space analysis are presented.
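At its core, building a reciprocal space map means converting each measured (ω, 2θ) angle pair into reciprocal-space coordinates. A standard conversion for the coplanar geometry (the general high-resolution XRD convention, not a formula taken from this paper) can be sketched as:

```python
import math

def omega_2theta_to_q(omega_deg, two_theta_deg, wavelength=1.5406):
    """Convert coplanar diffractometer angles to reciprocal-space coordinates.

    Uses the common convention with angles measured from the sample surface:
        Qx = K * (cos(2θ - ω) - cos ω)
        Qz = K * (sin(2θ - ω) + sin ω)
    where K = 2π/λ. The default wavelength is Cu Kα1 in Å, so Q is in 1/Å.
    """
    k = 2 * math.pi / wavelength
    w = math.radians(omega_deg)
    tt = math.radians(two_theta_deg)
    qx = k * (math.cos(tt - w) - math.cos(w))
    qz = k * (math.sin(tt - w) + math.sin(w))
    return qx, qz
```

A useful check on the convention: a symmetric reflection (ω = θ) gives Qx = 0, i.e. the scattering vector lies along the surface normal; rocking ω at fixed 2θ then sweeps the map horizontally, which is exactly what the PSD parallelizes.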
Loneliness and social networks have been extensively studied in relation to cognitive impairments, but how they interact with each other in relation to cognition is still unclear. This study aimed at exploring the interaction of loneliness and various types of social networks in relation to cognition in older adults.
A cross-sectional study.
497 older adults with normal global cognition were interviewed.
Loneliness was assessed with the Chinese 6-item De Jong Gierveld Loneliness Scale. The confiding network was defined as those with whom participants could share inner feelings, whereas the non-confiding network was computed by subtracting the confiding network from the total network size. Cognitive performance was expressed as a global composite z-score of the Cantonese version of the Mini-Mental State Examination (CMMSE), a categorical verbal fluency test (CVFT) and delayed recall. Linear regression was used to test the main effects of loneliness and the size of the various networks, and their interaction, on cognitive performance, with adjustment for sociodemographic, physical and psychological confounders.
A significant interaction was found between loneliness and non-confiding network on cognitive performance (B = .002, β = .092, t = 2.099, p = .036). Further analysis showed a significant interaction between loneliness and the number of family members in the non-confiding network on cognition (B = .021, β = .119, t = 2.775, p = .006).
Results suggest that a non-confiding relationship with family members might put lonely older adults at risk of cognitive impairment. Our study may have implications for the design of psychosocial interventions for those who are vulnerable to loneliness, as an early prevention of neurocognitive impairments.
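The moderation analysis reported here boils down to testing an interaction term in a linear model. A generic sketch on synthetic data (invented variable names and coefficients, not the study's data or its confounder set) illustrates the structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
loneliness = rng.normal(size=n)
network_size = rng.normal(size=n)

# Synthetic cognition score with a built-in interaction effect of 0.15
cognition = (0.5 - 0.30 * loneliness + 0.20 * network_size
             + 0.15 * loneliness * network_size
             + rng.normal(scale=0.1, size=n))

# Ordinary least squares with main effects plus an interaction column
# (sociodemographic and health confounders omitted in this sketch)
X = np.column_stack([np.ones(n), loneliness, network_size,
                     loneliness * network_size])
beta, *_ = np.linalg.lstsq(X, cognition, rcond=None)
# beta[3] estimates the interaction coefficient (about 0.15 here)
```

A significant `beta[3]` means the association between loneliness and cognition changes with network size, which is the pattern the abstract reports for the non-confiding family network.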
A cluster of Salmonella Paratyphi B variant L(+) tartrate(+) infections with indistinguishable pulsed-field gel electrophoresis patterns was detected in October 2015. Interviews initially identified nut butters, kale, kombucha, chia seeds and nutrition bars as common exposures. Epidemiologic, environmental and traceback investigations were conducted. Thirteen ill people infected with the outbreak strain were identified in 10 states with illness onset during 18 July–22 November 2015. Eight of 10 (80%) ill people reported eating Brand A raw sprouted nut butters. Brand A conducted a voluntary recall. Raw sprouted nut butters are a novel outbreak vehicle, though contaminated raw nuts, nut butters and sprouted seeds have all caused outbreaks previously. Firms producing raw sprouted products, including nut butters, should consider a kill step to reduce the risk of contamination. People at greater risk for foodborne illness may wish to consider avoiding raw products containing raw sprouted ingredients.
Objectives: Past research suggests that youth with sex chromosome aneuploidies (SCAs) present with verbal fluency deficits. However, most studies have focused on sex chromosome trisomies. Far less is known about sex chromosome tetrasomies and pentasomies. Thus, the current research sought to characterize verbal fluency performance among youth with sex chromosome trisomies, tetrasomies, and pentasomies by contrasting how performance varies as a function of extra X number and X versus Y status. Methods: Participants included 79 youth with SCAs and 42 typically developing controls matched on age, maternal education, and racial/ethnic background. Participants completed the phonemic and semantic conditions of a verbal fluency task and an abbreviated intelligence test. Results: Both supernumerary X and Y chromosomes were associated with verbal fluency deficits relative to controls. These impairments increased as a function of the number of extra X chromosomes, and the pattern of impairments on phonemic and semantic fluency differed for those with a supernumerary X versus Y chromosome. Whereas one supernumerary Y chromosome was associated with similar performance across fluency conditions, one supernumerary X chromosome was associated with relatively stronger semantic than phonemic fluency skills. Conclusions: Verbal fluency skills in youth with supernumerary X and Y chromosomes are impaired relative to controls. However, the degree of impairment varies across groups and task condition. Further research into the cognitive underpinnings of verbal fluency in youth with SCAs may provide insights into their verbal fluency deficits and help guide future treatments. (JINS, 2018, 24, 917–927)
In 2016, imported Zika virus (ZIKV) infections and the presence of a potentially competent mosquito vector (Aedes albopictus) implied that ZIKV transmission in New York City (NYC) was possible. The NYC Department of Health and Mental Hygiene developed contingency plans for a urosurvey to rule out ongoing local transmission as quickly as possible if a locally acquired case of confirmed ZIKV infection was suspected. We identified tools to (1) rapidly estimate the population living in any given 150-m radius (i.e. within the typical flight distance of an Aedes mosquito) and (2) calculate the sample size needed to test and rule out further local transmission. As we expected near-zero ZIKV prevalence, methods relying on the normal approximation to the binomial distribution were inappropriate. Instead, we assumed a hypergeometric distribution, 10 missed cases at maximum, a urine assay sensitivity of 92.6% and 100% specificity. Three suspected example risk areas were evaluated with estimated population sizes of 479–4,453, corresponding to a minimum of 133–1,244 urine samples. This planning exercise improved our capacity for ruling out local transmission of an emerging infection in a dense, urban environment where all residents in a suspected risk area cannot be feasibly sampled.
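Under the stated assumptions (hypergeometric sampling, at most 10 missed cases, 92.6% assay sensitivity, 100% specificity), the required sample size can be found by direct search. The sketch below is one way to code that calculation, not the authors' software; the 5% error threshold is an assumption of this sketch:

```python
from math import comb

def min_sample_size(pop, cases=10, sensitivity=0.926, alpha=0.05):
    """Smallest n such that P(zero positive tests | `cases` true cases) <= alpha.

    The number of true cases drawn into a sample of n is hypergeometric;
    each sampled case escapes detection with probability (1 - sensitivity).
    With 100% specificity, "zero positives" is the only way to miss.
    """
    for n in range(1, pop + 1):
        p_all_missed = sum(
            comb(cases, k) * comb(pop - cases, n - k) / comb(pop, n)
            * (1 - sensitivity) ** k
            for k in range(0, min(cases, n) + 1)
        )
        if p_all_missed <= alpha:
            return n
    return pop

n_required = min_sample_size(479)  # smallest example risk area from the text
```

Because the population is small relative to the sample, the hypergeometric calculation requires noticeably fewer samples than a binomial approximation would, which is why the authors rejected normal-approximation methods at near-zero prevalence.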
BACKGROUND: Meningiomas are the most common primary benign brain tumors in adults. Given the extended life expectancy of most patients with meningiomas, consideration of quality of life (QOL) is important when selecting the optimal management strategy. There is currently a dearth of meningioma-specific QOL tools in the literature. OBJECTIVE: In this systematic review, we analyze the prevailing themes and propose steps toward building a meningioma-specific QOL assessment tool. METHODS: A systematic search was conducted, and only original studies based on adult patients were considered. The QOL tools used in the various studies were analyzed to identify prevailing themes in the qualitative analysis. The quality of the studies was also assessed. RESULTS: Sixteen articles met all inclusion criteria. Fifteen different QOL assessment tools were used, assessing social and physical functioning and psychological and emotional well-being. Patient perceptions and support networks had a major impact on QOL scores. Surgery negatively affected social functioning in younger patients, while radiation therapy had a variable impact. Any intervention appeared to have a greater negative impact on physical functioning compared with observation. CONCLUSION: Younger patients with meningiomas appear to be more vulnerable within the social and physical functioning domains. All of these findings must be interpreted with great caution due to clinical heterogeneity, limited generalizability, and risk of bias. For meningioma patients, the ideal QOL questionnaire would present outcomes that can be easily measured, presented, and compared across studies. Existing scales can be the foundation upon which a comprehensive, standard, and simple meningioma-specific survey can be prospectively developed and validated.
Brain tumor behavior is driven by aberrations in the genome and epigenome. Many of these changes, such as IDH mutations in diffuse low-grade glioma (DLGG), are common amongst the same class of tumour and can be incorporated into the diagnostic criteria. However, any given tumor may have other, less common genomic aberrations that are essential for its biological behavior and may inform on underlying aberrant cellular pathways and potential therapeutic agents. Precision oncology is a genomics-based approach which profiles these alterations to better manage cancer patients; it has established itself within the practice of oncology and is slowly making its way into neuro-oncology. BC Cancer’s Personalized OncoGenomics (POG) program has profiled 16 adult tumours originating from the central nervous system using whole genome and transcriptome analysis (WGTA), for the first time within a meaningful clinical timeframe and setting. As expected, primary genomic drivers were consistent with their respective diagnoses, though secondary drivers were found to be unique to each tumour. Although these analyses did not result in altered clinical management for these patients, primarily due to the limited availability of drugs or clinical trials, they highlight the heterogeneity of secondary drivers in cancers and provide clinicians with meaningful biological information. Lastly, the data generated by POG have highlighted the frequency and complexity of novel driver fusions, which are predicted to behave similarly to canonical driver events in their respective tumours. The information available to clinicians through POG has provided valuable insight into the biology of each unique tumour.
Toca 511 (vocimagene amiretrorepvec) is an investigational retroviral replicating vector that selectively infects dividing cancer cells, integrates into the genome and replicates due to immune defects in tumors. Toca 511 spreads through tumors and stably delivers the gene encoding an optimized yeast cytosine deaminase that converts the prodrug Toca FC (investigational, extended-release 5-fluorocytosine) into 5-fluorouracil. In preclinical models, 5-fluorouracil kills infected dividing cancer cells, myeloid-derived suppressor cells and tumor-associated macrophages, enabling immune activation against the tumor. In this dose-ascending Ph1 trial (NCT01470794), Toca 511 was injected into the resection cavity wall of patients with rHGG, followed by courses of oral Toca FC. Additional cohorts included combination with bevacizumab or lomustine. Across the Ph1 program, the safety profile remains favorable. Objective responses (ORs) were assessed by IRR using MRI scans prior to Toca FC treatment as baseline. ORs occurred 6-19 months after Toca 511 administration, suggesting an immunologic mechanism. The ORs were observed in 4 patients with IDH1-wildtype and 2 patients with IDH1-mutant tumors, including 5 complete responses (CRs) with the investigational therapy alone, and 1 CR in combination with bevacizumab. The median duration of response (mDoR) was 35.1+ months. As of August 2017, all responders were CR and remain alive. In a 23-patient subgroup who received high doses of Toca 511 and met Ph3 trial criteria, mOS was 14.4 months, the 3-year survival rate was 26.1%, and mDoR was 35.7+ months with a durable response rate of 21.7%. Data suggest a positive association of durable response with OS.
Childhood obesity rates are higher among Indigenous compared with non-Indigenous Australian children. It has been hypothesized that early-life influences beginning with the intrauterine environment predict the development of obesity in the offspring. The aim of this paper was to assess, in 227 mother–child dyads from the Gomeroi gaaynggal cohort, associations between prematurity, Gestation Related-Optimal Weight (GROW) centiles, maternal adiposity (percentage body fat, visceral fat area), maternal non-fasting plasma glucose levels (measured at mean gestational age of 23.1 weeks) and offspring BMI and adiposity (abdominal circumference, subscapular skinfold thickness) in early childhood (mean age 23.4 months). Maternal non-fasting plasma glucose concentrations were positively associated with infant birth weight (P=0.005) and GROW customized birth weight centiles (P=0.008). There were significant associations between maternal percentage body fat (P=0.02) and visceral fat area (P=0.00) and infant body weight in early childhood. Body mass index (BMI) in early childhood was significantly higher in offspring born preterm compared with those born at term (P=0.03). GROW customized birth weight centiles were significantly associated with body weight (P=0.01), BMI (P=0.007) and abdominal circumference (P=0.039) in early childhood. Our findings suggest that being born preterm or large for gestational age, and exposure to an obesogenic intrauterine environment with higher maternal non-fasting plasma glucose concentrations, are associated with increased obesity risk in early childhood. Future strategies should aim to reduce the prevalence of overweight/obesity in women of child-bearing age and emphasize the importance of optimal glycemia during pregnancy, particularly in Indigenous women.
Introduction: Ideal management of alcohol withdrawal syndrome (AWS) incorporates a symptom-driven approach, whereby patients are regularly assessed using a standardized scoring system (Clinical Institute Withdrawal Assessment for Alcohol-Revised; CIWA-Ar) and treated according to severity. Accurate administration of the CIWA-Ar requires experience, yet there is no training program to teach this competency. The objective of this study was to develop and evaluate a web-based curriculum to teach clinicians how to accurately assess and treat AWS. Methods: This was a three-phase educational program consisting of a series of 3 e-learning modules of core competency material, an in-person seminar to orient learners to high-fidelity simulation, and summative evaluation in an OSCE setting using a standardized patient. To determine the ED impact of the AWS curriculum, we recorded how often the CIWA-Ar was appropriately applied in the ED pre- and post-training. ED length of stay, total dose of benzodiazepines administered in the ED, and the number of prescriptions and unit benzodiazepine doses given upon discharge were also recorded. Results: 74 nurses from an academic ED completed the AWS curriculum. There were 130 and 126 patients in the pre- and post-AWS training periods, respectively. Management of AWS was not compliant with the CIWA-Ar protocol in 78 (60.0%) and 46 (36.5%) patients pre- and post-AWS training, respectively (Δ 23.5%; 95% CI: 11.3%, 34.7%), resulting in administration of benzodiazepines when not required, or failure to give benzodiazepines with a CIWA-Ar score ≥ 10. There was an average of 4 CIWA-Ar scores per patient in both the pre- and post-implementation periods. Prior to AWS training, 144/560 (25.5%) CIWA-Ar scores resulted in a breach of protocol, compared with 64/547 (11.7%) following AWS training (Δ 13.8%; 95% CI: 9.3%, 18.3%).
Median total dose of benzodiazepines administered in the ED was lower after implementation of the AWS curriculum (40 mg vs 30 mg; Δ 10 mg; 95% CI: 0 mg, 20 mg). ED length of stay and the amount of benzodiazepines given to patients at discharge were similar between groups. Conclusion: This AWS curriculum appears to be an effective way to train ED clinicians in the proper administration of the CIWA-Ar protocol, and it resulted in improved patient care.
Introduction: The prevalence and incidence of delirium in older patients admitted to acute and long-term care facilities range between 9.6% and 89%, but little is known about incident delirium in the emergency department (ED). Literature regarding the incidence of delirium in the ED and its potential impacts on hospital length of stay (LOS), functional status and unplanned ED readmissions is scant, and its consequences have yet to be clearly identified in order to orient modern acute medical care. Methods: This study is part of the multicenter prospective cohort INDEED study. Three Canadian EDs completed the two-year prospective study (March-July 2015 and Feb-May 2016). Patients aged ≥ 65 years, initially free of delirium, with an ED stay ≥ 8 hours were followed up to 24 h after ward admission. Patients were assessed twice daily during their entire ED stay and up to 24 hours on the hospital ward by research assistants (RAs). The primary outcome of this study was incident delirium in the ED or within 24 h of ward admission. Functional and cognitive status were assessed using the validated Older Americans’ Resources and Services and Telephone Interview for Cognitive Status-modified tools. The Confusion Assessment Method (CAM) was used to detect incident delirium. ED and hospital administrative data were collected. Inter-observer agreement was assessed among RAs. Results: Incident delirium did not differ between sites, between phases, or between times from one site to another. Across all phases, 7-11% of patients experienced an ED-related incident delirious episode. Differences were seen in ED LOS between sites in non-delirious patients, but also between some sites for delirious participants (p<0.05). Only one site had a difference in ED LOS between its delirious and non-delirious patients, respectively 52.1 and 40.1 hours (p<0.05). There was also a difference between sites in the time between arrival to the ED and the incidence of delirium (p=0.003).
Kappa statistics were computed to measure inter-rater reliability of the CAM. Based on an alpha of 5%, 138 patients would allow 80% power for an estimated overall incidence proportion of 15% with 5% precision. Other predictive delirium variables, such as cognitive status, environmental factors, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between sites and phases. Conclusion: The fact that the incidence of delirium was the same for all sites, despite the differences in ED LOS and different time periods, suggests that many other modifiable and non-modifiable factors alongside LOS influenced the incidence of ED-induced delirium. Emergency physicians should concentrate on improving a senior-friendly environment in the ED.
Introduction: It is documented that physicians and nurses fail to detect delirium in more than half of cases across various clinical settings, which could have serious consequences for seniors and for our health care system. The present study aimed to describe the rate of incident delirium documented by health professionals (HPs) in 5 Canadian emergency departments (EDs). Methods: This study is part of the multicenter prospective cohort INDEED study. Patients aged ≥ 65 years, initially free of delirium, with an ED stay ≥ 8 hours were followed up to 24 h after ward admission. Delirium status was assessed twice daily using the Confusion Assessment Method (CAM) by trained research assistants (RAs). Patient charts were reviewed to assess detection of delirium by HPs, who had no specific routine for detecting delirious ED patients. Inter-observer agreement was assessed among RAs. Comparison of detection between RAs and HPs was performed with univariate analyses. Results: Among the 652 included patients, 66 developed delirium as evaluated with the CAM by the RAs. Of those 66 patients, only 10 deliriums (15.2%) were documented in the patient's medical file by an HP; 54 (81.8%) patients with a CAM positive for delirium were not recorded by HPs, and 2 had incomplete charts. The delirium index was significantly higher in the HP-reported group than in the HP-unreported group, respectively 7.1 and 4.5 (p<0.05). Other predictive delirium variables, such as cognitive status, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between groups. Conclusion: Health professionals missed 81.8% of potentially delirious ED patients in comparison with routine structured screening for delirium, and the patients they did identify had greater symptom severity. Our study points out the need to better identify elders at risk of developing delirium and the need for fast and reliable tools to improve screening for this disorder.
To identify predictors of disagreement with antimicrobial stewardship prospective audit and feedback recommendations (PAFR) at a free-standing children’s hospital.
Retrospective cohort study of audits performed during the antimicrobial stewardship program (ASP) from March 30, 2015, to April 17, 2017.
The ASP included audits of antimicrobial use and communicated PAFR to the care team, with follow-up on adherence to recommendations. The primary outcome was disagreement with PAFR. Potential predictors for disagreement, including patient-level, antimicrobial, programmatic, and provider-level factors, were assessed using bivariate and multivariate logistic regression models.
In total, 4,727 antimicrobial audits were performed during the study period; 1,323 (28%) generated PAFR, and 187 recommendations (15%) were not followed owing to disagreement. Providers were more likely to disagree with PAFR when the patient had a gastrointestinal infection (odds ratio [OR], 5.50; 95% confidence interval [CI], 1.99–15.21), febrile neutropenia (OR, 6.14; 95% CI, 2.08–18.12), skin or soft-tissue infections (OR, 6.16; 95% CI, 1.92–19.77), or had been admitted for 31–90 days at the time of the audit (OR, 2.08; 95% CI, 1.36–3.18). The longer the duration since the attending provider had been trained (ie, the more years of experience), the more likely they were to disagree with PAFR recommendations (OR, 1.02; 95% CI, 1.01–1.04).
Evaluation of our program confirmed patient-level predictors of PAFR disagreement and identified additional programmatic and provider-level factors, including years of attending experience. Stewardship interventions focused on specific diagnoses and antimicrobials are unlikely to result in programmatic success unless these factors are also addressed.
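For readers unfamiliar with how odds ratios and confidence intervals of this kind arise from a logistic regression, the conversion from a fitted coefficient and its standard error is a one-liner (the example values below are illustrative, not taken from this study's model):

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald 95% CI.

    Returns (OR, lower bound, upper bound); z = 1.96 gives a 95% interval.
    """
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# e.g. a hypothetical per-year-of-experience coefficient of 0.02 with SE 0.008
or_est, ci_lo, ci_hi = odds_ratio_with_ci(0.02, 0.008)
```

A per-unit OR near 1.02, as reported for years of experience, compounds over a career: the same coefficient applied over 20 years corresponds to exp(20 × 0.02) ≈ 1.49 under this illustrative model.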
For livestock production systems to play a positive role in global food security, the balance between their benefits and disbenefits to society must be appropriately managed. Based on the evidence provided by field-scale randomised controlled trials around the world, this debate has traditionally centred on the concept of economic-environmental trade-offs, whose existence is theoretically assured when resource allocation is perfect on the farm. Recent research conducted on commercial farms indicates, however, that the economic-environmental nexus is not nearly as straightforward in the real world, with environmental performances of enterprises often positively correlated with their economic profitability. Using high-resolution primary data from the North Wyke Farm Platform, an intensively instrumented farm-scale ruminant research facility located in southwest United Kingdom, this paper proposes a novel, information-driven approach to carry out comprehensive assessments of economic-environmental trade-offs inherent within pasture-based cattle and sheep production systems. The results of a data-mining exercise suggest that a potentially systematic interaction exists between ‘soil health’, ecological surroundings and livestock grazing, whereby a higher level of soil organic carbon (SOC) stock is associated with better animal performance and lower nutrient losses into watercourses, and a higher stocking density with greater botanical diversity and elevated SOC. We contend that a combination of farming system-wide trials and environmental instrumentation provides an ideal setting for enrolling scientifically sound and biologically informative metrics for agricultural sustainability, through which agricultural producers could obtain guidance to manage soils, water, pasture and livestock in an economically and environmentally acceptable manner. Priority areas for future farm-scale research to ensure long-term sustainability are also discussed.
Solar activity is observed to fluctuate with time, undergoing a wide range of periodicities from minutes up to thousands of years, as evinced from proxies based on cosmogenic isotopes. In this work, we apply Multichannel Singular Spectrum Analysis (MSSA), a data-adaptive, multivariate technique that simultaneously exploits the spatial and temporal correlations of the input data to extract common modes of variability, to investigate the intermediate quasi-periodicities of the green coronal emission line at 530.3 nm for the period between 1944 and 2008. A preliminary MSSA analysis confirms the presence of significant quasi-biennial oscillations in the data, with amplitude varying significantly with time and latitude. Moreover, a clear North-South asymmetry is observed in both their intensity and their period distribution.
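MSSA extends single-channel singular spectrum analysis (SSA) to several series at once. As a hedged sketch of the core machinery only (not the authors' implementation, which additionally handles multiple latitude channels and significance testing), basic SSA embeds a series in a Hankel trajectory matrix, takes an SVD, and reconstructs components by anti-diagonal averaging:

```python
import numpy as np

def ssa_components(series, window):
    """Basic single-channel SSA decomposition.

    Returns the elementary reconstructed components of the series; a clean
    oscillation typically shows up as a pair of components with near-equal
    singular values (e.g. a quasi-biennial pair in solar indices).
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix built from lagged windows of the series
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for j in range(len(s)):
        elem = s[j] * np.outer(u[:, j], vt[j])
        # Anti-diagonal averaging maps the rank-1 matrix back to a series:
        # entry (a, b) of `elem` contributes to time index a + b
        comps.append(np.array([elem[::-1].diagonal(t - window + 1).mean()
                               for t in range(n)]))
    return comps
```

Summing all components reconstructs the input exactly, so the decomposition loses nothing; the analysis step then consists of grouping component pairs by frequency and tracking their time-varying amplitude, which is how quasi-biennial oscillations would be isolated.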