Abnormal effort-based decision-making represents a potential mechanism underlying motivational deficits (amotivation) in psychotic disorders. Previous research identified effort allocation impairment in chronic schizophrenia and focused mostly on the physical effort modality. No study has investigated cognitive effort allocation in first-episode psychosis (FEP).
Cognitive effort allocation was examined in 40 FEP patients and 44 demographically matched healthy controls using the Cognitive Effort-Discounting (COGED) paradigm, which quantified participants’ willingness to expend cognitive effort in terms of explicit, continuous discounting of monetary rewards based on parametrically varied cognitive demands (levels N of the N-back task). The relationship between reward-discounting and amotivation was investigated. Group differences in reward-magnitude and effort-cost sensitivity, and the differential associations of these sensitivity indices with amotivation, were explored.
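For readers unfamiliar with the paradigm, the sketch below illustrates one common way COGED-style discounting can be scored from titrated indifference points; the offer amounts, N-back loads and the area-under-the-curve summary are illustrative assumptions, not the study's actual parameters or procedure.

```python
import numpy as np

# Hypothetical COGED data: the titrated smaller offer (for the easy
# 1-back) at which a participant is indifferent to a fixed $2 offer for
# each harder N-back level. All amounts and loads are made up.
base_offer = 2.00
levels = np.array([2.0, 3.0, 4.0, 5.0, 6.0])              # N-back loads
indifference = np.array([1.80, 1.45, 1.10, 0.80, 0.55])   # dollars

# Subjective value (SV) of each load = indifference point / base offer;
# discounting is its complement (0 = effort costs nothing, 1 = maximal).
sv = indifference / base_offer
discounting = 1.0 - sv

# Normalized area under the SV curve gives one per-participant index;
# a smaller AUC means steeper effort discounting.
auc = np.sum((sv[1:] + sv[:-1]) / 2 * np.diff(levels)) / (levels[-1] - levels[0])
print(f"discounting by load: {np.round(discounting, 2)}; AUC = {auc:.2f}")
```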
Patients displayed significantly greater reward-discounting than controls. Such discounting was most pronounced in patients with high levels of amotivation, even when N-back performance and reward base amount were taken into consideration. Moreover, patients exhibited reduced reward-benefit and effort-cost sensitivity relative to controls, and decreased sensitivity to reward-benefit, but not effort-cost, was correlated with diminished motivation. Reward-discounting and sensitivity indices were generally unrelated to other symptom dimensions, antipsychotic dose and cognitive deficits.
This study provides the first evidence of cognitive effort-based decision-making impairment in FEP, and indicates that decreased effort expenditure is associated with amotivation. Our findings further suggest that abnormal effort allocation and amotivation might primarily be related to blunted reward valuation. Prospective research is required to clarify the utility of effort-based measures in predicting amotivation and functional outcome in FEP.
Maternal systemic inflammation during pregnancy may restrict embryo−fetal growth, but the extent of this effect remains poorly established in undernourished populations. In a cohort of 653 maternal−newborn dyads participating in a multi-armed micronutrient supplementation trial in southern Nepal, we investigated associations between maternal inflammation, assessed by serum α1-acid glycoprotein and C-reactive protein in the first and third trimesters of pregnancy, and newborn weight, length and head and chest circumferences. Median (IQR) maternal concentrations of α1-acid glycoprotein in the first and third trimesters were 0.65 (0.53–0.76) and 0.40 (0.33–0.50) g/l, and of C-reactive protein 0.56 (0.25–1.54) and 1.07 (0.43–2.32) mg/l, respectively. α1-acid glycoprotein was inversely associated with birth size: weight, length, head circumference and chest circumference were lower by 116 g (P = 2.3 × 10−6), 0.45 cm (P = 3.1 × 10−5), 0.18 cm (P = 0.0191) and 0.48 cm (P = 1.7 × 10−7), respectively, per 50% increase in α1-acid glycoprotein averaged across both trimesters. Adjustment for maternal age, parity, gestational age, nutritional and socio-economic status and daily micronutrient supplementation did not alter any association. Serum C-reactive protein concentration was largely unassociated with newborn size. In rural Nepal, birth size was inversely associated with low-grade, chronic inflammation during pregnancy, as indicated by serum α1-acid glycoprotein.
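The "per 50% increase" phrasing follows naturally from regressing the outcome on log-transformed AGP, since a slope on ln(AGP) can be rescaled to any multiplicative change. A minimal sketch on simulated data (all values, including the planted effect size, are illustrative, not study data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins: maternal AGP (g/l) and newborn weight (g).
agp = rng.lognormal(mean=np.log(0.55), sigma=0.3, size=500)
weight = 2700.0 - 286.0 * np.log(agp / 0.55) + rng.normal(0.0, 300.0, 500)

# Regressing the outcome on ln(AGP) lets the slope be rescaled to any
# multiplicative change: the effect per 50% increase is slope * ln(1.5).
slope, intercept = np.polyfit(np.log(agp), weight, 1)
print(f"birth-weight change per 50% increase in AGP: {slope * np.log(1.5):.0f} g")
```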
A better understanding of the interplay among symptoms, cognition and functioning in first-episode psychosis (FEP) is crucial to promoting functional recovery. Network analysis is a promising data-driven approach to elucidating complex interactions among psychopathological variables in psychosis, but it has not been applied in FEP.
This study employed network analysis to examine inter-relationships among a wide array of variables encompassing psychopathology, premorbid and onset characteristics, cognition, subjective quality-of-life and psychosocial functioning in 323 adult FEP patients in Hong Kong. The graphical Least Absolute Shrinkage and Selection Operator (LASSO) combined with extended Bayesian information criterion (EBIC) model selection was used for network construction. The importance of individual nodes in the generated network was quantified by centrality analyses.
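As a rough illustration of this pipeline, the sketch below estimates a sparse partial-correlation network and node strength on simulated data. Note two deliberate substitutions: the variable set is a stand-in, and scikit-learn's GraphicalLassoCV selects the penalty by cross-validation rather than the EBIC used in the study.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)

# Simulated stand-in for the data matrix: 323 patients x 8 variables
# (the study's real variable set is wider). A few planted dependencies
# give the network some structure.
X = rng.normal(size=(323, 8))
X[:, 1] += 0.6 * X[:, 0]
X[:, 2] += 0.4 * X[:, 1]

# Sparse inverse-covariance (precision) estimation via graphical lasso.
precision = GraphicalLassoCV().fit(X).precision_

# Edge weights are partial correlations derived from the precision matrix.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

# Node strength (the centrality index in the abstract) = sum of absolute
# edge weights attached to each node.
strength = np.abs(partial_corr).sum(axis=0)
print("node strengths:", np.round(strength, 2))
```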
Our results showed that amotivation played the most central role and had the strongest associations with other variables in the network, as indexed by node strength. Amotivation and diminished expression displayed differential relationships with other nodes, supporting the validity of the two-factor negative symptom structure. Psychosocial functioning was most strongly connected with amotivation and was weakly linked to several other variables. Within the cognitive domain, digit span demonstrated the highest centrality and was connected with most of the other cognitive variables. Exploratory analysis revealed no significant gender differences in network structure or global strength.
Our results suggest a pivotal role for amotivation in the psychopathology network of FEP and indicate its critical association with psychosocial functioning. Further research is required to verify the clinical significance of diminished motivation for functional outcome in the early course of psychotic illness.
Several life-threatening diseases of the kidney have their origins in mutational events that occur during embryonic development. In this study, we investigate the role of the Wolffian duct (WD), the earliest embryonic epithelial progenitor of renal tubules, in the etiology of autosomal dominant polycystic kidney disease (ADPKD). ADPKD is associated with a germline mutation of one of the two Pkd1 alleles. For the disease to occur, a second event that disrupts the expression of the other inherited Pkd1 allele must occur. We postulated that this secondary event can occur in the pronephric WD. Using Cre-Lox recombination, mice with WD-specific deletion of one or both Pkd1 alleles were generated. Homozygous Pkd1-targeted deletion in WD-derived tissues resulted in mice with large cystic kidneys and serologic evidence of renal failure. In contrast, heterozygous deletion of Pkd1 in the WD led to kidneys that were phenotypically indistinguishable from control in the early postnatal period. High-throughput sequencing, however, revealed underlying gene and microRNA (miRNA) changes in these heterozygous mutant kidneys that suggest a strong predisposition toward developing ADPKD. Bioinformatic analysis of this data demonstrated an upregulation of several miRNAs that have been previously associated with PKD; pathway analysis further demonstrated that the differentially expressed genes in the heterozygous mutant kidneys were overrepresented in signaling pathways associated with maintenance and function of the renal tubular epithelium. These results suggest that the WD may be an early epithelial target for the genetic or molecular signals that can lead to cyst formation in ADPKD.
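Pathway overrepresentation of the kind reported here is typically scored with a hypergeometric (one-sided Fisher) test; the sketch below shows the basic computation with entirely made-up gene counts.

```python
from scipy.stats import hypergeom

# Overrepresentation test: given N annotated genes, K in a pathway,
# n differentially expressed (DE) genes, and k DE genes in the pathway,
# the enrichment P-value is the hypergeometric tail probability P(X >= k).
N, K, n, k = 20000, 150, 400, 12          # all counts are illustrative
p_value = hypergeom.sf(k - 1, N, K, n)    # survival function at k-1
print(f"enrichment P = {p_value:.2e}")
```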
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify the genetic and environmental variation of height, body mass index (BMI, kg/m²) and size at birth, and additionally to address other research questions such as the long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world with height and weight measures on twins and information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there are a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we examined associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration in addressing gene-by-exposure interactions that require large sample sizes, and in addressing the effects of different exposures across time, geographical regions and socioeconomic status.
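As background on how twin data yield heritability estimates, the sketch below applies Falconer's classic formula; CODATwins itself fits formal variance-component (ACE-type) models, so this is a simplified stand-in, and the correlations shown are invented.

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's heritability estimate from twin-pair intraclass
    correlations: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Illustrative correlations for height at two ages (not CODATwins values).
print(falconer_h2(r_mz=0.90, r_dz=0.55))  # 0.70
print(falconer_h2(r_mz=0.85, r_dz=0.60))  # 0.50
```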
Introduction: According to the WHO, one third of patients aged ≥65 fall every year, and those falls account for 25% of all geriatric emergency department (ED) visits. Fear of falling (FOF) is common in older patients who have sustained a fall and is associated with declines in mobility and health. We hypothesized that there is an association between FOF and return to the ED (RTED) and future falls. Objective: To assess the relation between FOF and both RTED and subsequent falls in older ED patients. Methods: This research was conducted as part of the Canadian Emergency Team Initiative in elderly (CETIe) multicenter prospective cohort study from 2011 to 2016. Participants: Patients aged 65 years or older who were assessed in and discharged from the ED following minor trauma. They had to be independent in all basic activities of daily living and able to communicate in English or French. Measures: The primary outcome was RTED and the secondary outcome was subsequent falls; both were self-reported at 3 and 6 months. Patients were stratified according to their Short Falls Efficacy Scale International (SFES-I) score, which assesses FOF in different situations; the total score classifies FOF as mild, moderate or severe. Previous falls and the TUG were used to evaluate patients’ mobility; the OARS, ISAR and SOF were used to evaluate patient frailty. Descriptive statistics were computed and multiple regression analyses were performed to assess the association between SFES-I score and the outcomes. Results: FOF was measured in 2899 participants, of whom 2214 completed the 3-month follow-up and 2009 completed the 6-month follow-up. The odds ratio (OR) of return to the ED at 3 months was 1.10 for moderate FOF and 1.52 for severe FOF (Type 3 test p = 0.11). At 6 months, the OR was 1.03 for moderate FOF and 1.25 for severe FOF (Type 3 test p = 0.63). The OR of a subsequent fall at 3 months was 1.80 for moderate FOF and 2.18 for severe FOF (Type 3 test p < 0.001). At 6 months, the OR of a subsequent fall was 1.63 for moderate FOF and 2.37 for severe FOF (Type 3 test p < 0.001). Conclusion: This multicenter cohort study showed that severe fear of falling is strongly associated with subsequent falls over the 6 months following ED discharge, but not significantly associated with return-to-ED episodes. Further research should analyze the association between severe FOF and RTED.
We assessed self-reported drives for alcohol use and their impact on clinical features of alcohol use disorder (AUD) patients. Our prediction was that, in contrast to “affectively” (reward or fear) driven drinking, “habitual” drinking would be associated with worse clinical features in relation to alcohol use and higher occurrence of associated psychiatric symptoms.
Fifty-eight patients with alcohol abuse, as defined by the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), were assessed with a comprehensive battery of reward- and fear-based behavioral tendencies. An 18-item self-report instrument (the Habit, Reward and Fear Scale; HRFS) was employed to quantify affective (fear or reward) and non-affective (habitual) motivations for alcohol use. To characterize the clinical and demographic measures associated with habit, reward and fear, we conducted a partial least squares analysis.
Habitual alcohol use was significantly associated with the severity of alcohol dependence, reflected across a range of domains, and with a lower number of detoxifications across multiple settings. In contrast, reward-driven alcohol use was associated with a single domain of alcohol dependence, with reward-related behavioral tendencies, and with a lower number of detoxifications.
These results seem consistent with a shift from goal-directed to habit-driven alcohol use with the severity and progression of addiction, complementing preclinical work and informing biological models of addiction. Both reward-related and habit-driven alcohol use were associated with a lower number of detoxifications, perhaps stemming from a more benign course in the reward-related group and a lack of treatment engagement in the habit-related alcohol abuse group. Future work should further explore the role of habit in this and other addictive disorders, as well as in obsessive-compulsive related disorders.
Production electrolytic chromium coatings suffer from cracks, which result from the high internal stresses generated during the electrolytic crystallization process. This paper characterizes the anisotropy, texture and residual stress state of high contraction chromium coatings on steel. Strong <111> fiber texture, almost perfect in-plane azimuthal symmetry and high surface residual stresses were observed. The anisotropy factor and aggregate elastic moduli were calculated from single-crystal elastic constants. A matrix inversion method was developed to solve for the biaxial residual stress and unstrained lattice parameter in textured chromium coatings. Assuming an isotropic elastic Hill-Neerfeld model, residual stress was also evaluated using the sin²ψ method adapted to multiple families of reflections.
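A minimal sketch of the sin²ψ idea for an equibiaxial stress state follows, assuming nominal isotropic elastic constants for chromium and synthetic lattice spacings; this is not the paper's multi-reflection, texture-corrected procedure.

```python
import numpy as np

# sin^2(psi) method: lattice spacing d measured at several tilt angles
# psi is linear in sin^2(psi) for an equibiaxial stress sigma; the slope
# yields the stress. E and nu are nominal isotropic values for chromium.
E, nu = 279e9, 0.21                       # Pa, dimensionless (assumed)
psi = np.deg2rad([0, 18, 27, 33, 39, 45])
d0 = 2.0805e-10                           # unstrained spacing, m (assumed)
sigma_true = 600e6                        # Pa, synthetic stress

# Synthetic measurements: eps = (1+nu)/E * sigma * sin^2(psi) - 2*nu*sigma/E
d = d0 * (1 + sigma_true * ((1 + nu) / E * np.sin(psi) ** 2 - 2 * nu / E))

# Fit strain vs sin^2(psi); slope = sigma * (1 + nu) / E.
strain = (d - d0) / d0
slope, _ = np.polyfit(np.sin(psi) ** 2, strain, 1)
sigma = slope * E / (1 + nu)
print(f"recovered stress: {sigma / 1e6:.0f} MPa")
```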
Reciprocal space mapping can be efficiently carried out using a position-sensitive x-ray detector (PSD) coupled to a traditional double-axis diffractometer. The PSD offers parallel measurement of the total scattering angle of all diffracted x-rays during a single rocking-curve scan. As a result, a two-dimensional reciprocal space map can be made in a very short time, similar to that of a one-dimensional rocking-curve scan. Fast, efficient reciprocal space mapping offers numerous routine advantages to the x-ray diffraction analyst. Some of these advantages are the explicit differentiation of lattice strain from crystal orientation effects in strain-relaxed heteroepitaxial layers; the nondestructive characterization of the size, shape and orientation of nanocrystalline domains in ordered-alloy epilayers; and the ability to measure the average size and shape of voids in porous epilayers. Here, the PSD-based diffractometer is described, and specific examples clearly illustrating the advantages of complete reciprocal space analysis are presented.
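The mapping from diffractometer angles to reciprocal-space coordinates underlying such measurements is standard for coplanar geometry; a sketch with illustrative angle ranges:

```python
import numpy as np

WAVELENGTH = 1.5406e-10  # Cu K-alpha1, m (assumed source)

def angles_to_q(omega_deg, twotheta_deg, wavelength=WAVELENGTH):
    """Map coplanar diffractometer angles to reciprocal-space
    coordinates: omega is the incidence angle of the rocking scan and
    2-theta the total scattering angle resolved by the PSD channel."""
    k = 2 * np.pi / wavelength
    omega = np.deg2rad(np.asarray(omega_deg))
    twotheta = np.deg2rad(np.asarray(twotheta_deg))
    qx = k * (np.cos(omega) - np.cos(twotheta - omega))
    qz = k * (np.sin(omega) + np.sin(twotheta - omega))
    return qx, qz

# One rocking scan: omega varies while the PSD captures a range of
# 2-theta in parallel, filling a 2-D grid of (qx, qz) points.
omega = np.linspace(33.5, 35.5, 201)         # degrees (example range)
twotheta = np.linspace(68.0, 70.0, 128)      # PSD channels (example range)
qx, qz = angles_to_q(omega[:, None], twotheta[None, :])
print(qx.shape, qz.shape)                    # (201, 128) each
```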
To assess variability in antimicrobial use and associations with infection testing in pediatric ventilator-associated events (VAEs).
Descriptive retrospective cohort with nested case-control study.
Pediatric intensive care units (PICUs), cardiac intensive care units (CICUs), and neonatal intensive care units (NICUs) in 6 US hospitals.
Children ≤18 years of age ventilated for ≥1 calendar day.
We identified patients with pediatric ventilator-associated conditions (VACs), pediatric VACs with antimicrobial use for ≥4 days (AVACs), and possible ventilator-associated pneumonia (PVAP, defined as pediatric AVAC with a positive respiratory diagnostic test) according to previously proposed criteria.
Among 9,025 ventilated children, we identified 192 VAC cases: 43 in CICUs, 70 in PICUs, and 79 in NICUs. AVAC criteria were met in 79 VAC cases (41%; 58% in CICUs, 51% in PICUs, and 23% in NICUs), and this proportion varied by hospital (CICU, 20–67%; PICU, 0–70%; NICU, 0–43%). The type and duration of AVAC antimicrobials varied by ICU type: AVAC cases in CICUs and PICUs received broad-spectrum antimicrobials more often than those in NICUs. Among AVAC cases, 39% had respiratory infection diagnostic testing performed, and PVAP was identified in 15 VAC cases; 73% of AVAC cases had no associated positive respiratory or nonrespiratory diagnostic test.
Antimicrobial use is common in pediatric VAC, with variability in spectrum and duration of antimicrobials within hospitals and across ICU types, while PVAP is uncommon. Prolonged antimicrobial use despite low rates of PVAP or positive laboratory testing for infection suggests that AVAC may provide a lever for antimicrobial stewardship programs to improve utilization.
Among 39 pediatric hospitals, we observed that pediatric S. aureus hospitalizations decreased 36%, from 26.3 to 16.8 infections per 1,000 admissions, from 2009 to 2016, with methicillin-resistant S. aureus (MRSA) infections decreasing by 52% and methicillin-susceptible S. aureus infections by 17%. Similar decreases were observed in days of therapy of anti-MRSA antibiotics.
A cluster of Salmonella Paratyphi B variant L(+) tartrate(+) infections with indistinguishable pulsed-field gel electrophoresis patterns was detected in October 2015. Interviews initially identified nut butters, kale, kombucha, chia seeds and nutrition bars as common exposures. Epidemiologic, environmental and traceback investigations were conducted. Thirteen ill people infected with the outbreak strain were identified in 10 states with illness onset during 18 July–22 November 2015. Eight of 10 (80%) ill people reported eating Brand A raw sprouted nut butters. Brand A conducted a voluntary recall. Raw sprouted nut butters are a novel outbreak vehicle, though contaminated raw nuts, nut butters and sprouted seeds have all caused outbreaks previously. Firms producing raw sprouted products, including nut butters, should consider a kill step to reduce the risk of contamination. People at greater risk for foodborne illness may wish to consider avoiding raw products containing raw sprouted ingredients.
Introduction: Gastroenteritis accounts for 1.7 million emergency department visits by children annually in the United States. We conducted a double-blind trial to determine whether twice-daily probiotic administration for 5 days improves outcomes. Methods: 886 children aged 3–48 months with gastroenteritis were enrolled in six Canadian pediatric emergency departments. Participants were randomly assigned to twice-daily Lactobacillus rhamnosus R0011 and Lactobacillus helveticus R0052 in a 95:5 ratio, 4.0 × 10⁹ CFU, or placebo. The primary outcome was development of moderate-severe disease within 14 days of randomization, defined by a Modified Vesikari Scale score ≥9. Secondary outcomes included duration of diarrhea and vomiting, subsequent physician visits and adverse events. Results: Moderate-severe disease occurred in 108 (26.1%) participants administered probiotics and 102 (24.7%) participants allocated to placebo (OR 1.06; 95% CI 0.77–1.46; P=0.72). After adjustment for site, age, and frequency of vomiting and diarrhea, treatment assignment did not predict moderate-severe disease (OR 1.11; 95% CI 0.80–1.56; P=0.53). In the probiotic versus placebo groups, there were no differences in the median duration of diarrhea [52.5 (18.3, 95.8) vs. 55.5 (20.2, 102.3) hours; P=0.31], vomiting [17.7 (0, 58.6) vs. 18.7 (0, 51.6) hours; P=0.18], physician visits (30.2% vs. 26.6%; OR 1.19; 95% CI 0.87–1.62; P=0.27), or adverse events (32.9% vs. 36.8%; OR 0.83; 95% CI 0.62–1.11; P=0.21). Conclusion: In children presenting to an emergency department with gastroenteritis, twice-daily administration of 4.0 × 10⁹ CFU of a Lactobacillus rhamnosus/helveticus probiotic did not prevent development of moderate-severe disease or improve the other outcomes measured.
Introduction: The purpose of this study was to determine whether the introduction of a pre-arrival and pre-departure Trauma Checklist as a cognitive aid, coupled with an educational session, would improve clinical performance in a simulated environment. The Trauma Checklist was developed in response to a quality assurance review of high-acuity trauma activations. It focuses on pre-arrival preparation and a pre-departure review prior to patient transfer to diagnostic imaging or the operating room. We conducted a pilot randomized controlled trial assessing the impact of the Trauma Checklist on time to critical interventions performed on a simulated pediatric patient by multidisciplinary teams. Methods: Emergency department teams composed of 2 physicians, 2 nurses and 2 confederate actors were enrolled in our study. In the intervention arm, participants watched a 10-minute educational video modelling the use of the Trauma Checklist prior to their simulation scenario and were provided a copy of the checklist. Teams participated in a standardized simulation scenario caring for a severely injured adolescent patient with hemorrhagic shock, respiratory failure and increased intracranial pressure. Our primary outcome of interest was time to initiation of key clinical interventions, including intubation, first blood product administration, massive transfusion protocol activation, initiation of hyperosmolar therapy and others. Secondary outcome measures included a Trauma Task Performance score and checklist completion scores. Results: We enrolled 14 multidisciplinary teams (n=56 participants) into our study. There was a statistically significant decrease in median time to initiation of hyperosmolar therapy by teams in the intervention arm compared to the control arm (581 seconds [509–680] vs. 884 seconds [588–1144], p=0.03). Differences in time to initiation of the other clinical interventions were not statistically significant. There was a trend toward higher Trauma Task Performance scores in the intervention group; however, this did not reach statistical significance (p=0.09). Pre-arrival and pre-departure checklist scores were higher in the intervention group (9.0 [9.0–10.0] vs. 7.0 [6.0–8.0], p=0.17; and 12.0 [11.5–12.0] vs. 7.5 [6.0–8.5], p=0.01). Conclusion: Teams using the Trauma Checklist did not show decreased time to initiation of key clinical interventions, except for initiation of hyperosmolar therapy. Teams in the intervention arm had higher pre-arrival and pre-departure checklist scores, with the pre-departure difference reaching statistical significance, and a trend toward higher Trauma Task Performance scores. Our study was a pilot, and recruitment did not achieve the anticipated sample size, so it was underpowered. The impact of this checklist should be studied outside tertiary trauma centres, particularly among trainees and community emergency providers, to assess its benefit and further generalizability.
Introduction: The prevalence and incidence of delirium in older patients admitted to acute and long-term care facilities range between 9.6% and 89%, but little is known about incident delirium in the emergency department (ED). Literature regarding the incidence of delirium in the ED and its potential impacts on hospital length of stay (LOS), functional status and unplanned ED readmissions is scant; its consequences have yet to be clearly identified in order to orient modern acute medical care. Methods: This study is part of the multicenter prospective cohort INDEED study. Three Canadian EDs completed the two-year prospective study (March–July 2015 and February–May 2016). Patients aged ≥65 years, initially free of delirium and with an ED stay ≥8 hours, were followed up to 24 hours after ward admission. Patients were assessed twice daily by research assistants (RAs) during their entire ED stay and for up to 24 hours on the hospital ward. The primary outcome of this study was incident delirium in the ED or within 24 hours of ward admission. Functional and cognitive status were assessed using the validated Older Americans’ Resources and Services questionnaire and the modified Telephone Interview for Cognitive Status. The Confusion Assessment Method (CAM) was used to detect incident delirium. ED and hospital administrative data were collected. Inter-observer agreement was assessed among RAs. Results: The incidence of delirium did not differ between sites, between phases, or between time periods from one site to another. Across all phases, 7–11% of patients experienced an ED-related incident delirious episode. Differences in ED LOS were seen between sites in non-delirious patients, and also between some sites for delirious participants (p<0.05). Only one site showed a difference in ED LOS between its delirious and non-delirious patients (52.1 vs. 40.1 hours, p<0.05). There was also a difference between sites in the time between ED arrival and the incidence of delirium (p=0.003). Kappa statistics were computed to measure inter-rater reliability of the CAM. Based on an alpha of 5%, 138 patients would provide 80% power for an estimated overall incidence proportion of 15% with 5% precision. Other predictive delirium variables, such as cognitive status, environmental factors, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between sites and phases. Conclusion: The finding that the incidence of delirium was the same across sites, despite differences in ED LOS and time periods, suggests that many other modifiable and non-modifiable factors beyond LOS influence the incidence of ED-induced delirium. Emergency physicians should concentrate on making the ED environment more senior-friendly.
Introduction: It is documented that physicians and nurses fail to detect delirium in more than half of cases across various clinical settings, which can have serious consequences for seniors and for our health care system. The present study aimed to describe the rate at which health professionals (HP) documented incident delirium in 5 Canadian emergency departments (EDs). Methods: This study is part of the multicenter prospective cohort INDEED study. Patients aged ≥65 years, initially free of delirium and with an ED stay ≥8 hours, were followed up to 24 hours after ward admission. Delirium status was assessed twice daily by trained research assistants (RAs) using the Confusion Assessment Method (CAM). Patient charts were reviewed to assess detection of delirium by HP, who had no specific routine for detecting delirious ED patients. Inter-observer agreement was assessed among RAs. Detection by RAs and HP was compared using univariate analyses. Results: Among the 652 included patients, 66 developed delirium as evaluated by the RAs with the CAM. Of those 66 patients, only 10 cases of delirium (15.2%) were documented in the patient’s medical file by the HP; 54 patients (81.8%) with a positive CAM were not recorded by the HP, and 2 had incomplete charts. The delirium index was significantly higher in the HP-reported group than in the non-reported group (7.1 vs. 4.5, p<0.05). Other predictive delirium variables, such as cognitive status, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between groups. Conclusion: Health professionals missed 81.8% of potentially delirious ED patients in comparison with routine structured screening for delirium. HP tended to identify patients with greater symptom severity. Our study points to the need to better identify elders at risk of developing delirium and to develop fast, reliable tools to improve screening for this disorder.
To identify predictors of disagreement with antimicrobial stewardship prospective audit and feedback recommendations (PAFR) at a free-standing children’s hospital.
Retrospective cohort study of audits performed during the antimicrobial stewardship program (ASP) from March 30, 2015, to April 17, 2017.
The ASP included audits of antimicrobial use and communicated PAFR to the care team, with follow-up on adherence to recommendations. The primary outcome was disagreement with PAFR. Potential predictors for disagreement, including patient-level, antimicrobial, programmatic, and provider-level factors, were assessed using bivariate and multivariate logistic regression models.
In total, 4,727 antimicrobial audits were performed during the study period; 1,323 (28%) resulted in PAFR, and 187 of those recommendations (15%) were not followed due to disagreement. Providers were more likely to disagree with PAFR when the patient had a gastrointestinal infection (odds ratio [OR], 5.50; 95% confidence interval [CI], 1.99–15.21), febrile neutropenia (OR, 6.14; 95% CI, 2.08–18.12), or a skin or soft-tissue infection (OR, 6.16; 95% CI, 1.92–19.77), or had been admitted for 31–90 days at the time of the audit (OR, 2.08; 95% CI, 1.36–3.18). The longer the duration since the attending provider’s training (ie, the more years of experience), the more likely they were to disagree with PAFR recommendations (OR, 1.02; 95% CI, 1.01–1.04).
Evaluation of our program confirmed patient-level predictors of PAFR disagreement and identified additional programmatic and provider-level factors, including years of attending experience. Stewardship interventions focused on specific diagnoses and antimicrobials are unlikely to result in programmatic success unless these factors are also addressed.
For livestock production systems to play a positive role in global food security, the balance between their benefits and disbenefits to society must be appropriately managed. Based on the evidence provided by field-scale randomised controlled trials around the world, this debate has traditionally centred on the concept of economic-environmental trade-offs, whose existence is theoretically assured when resource allocation on the farm is perfect. Recent research conducted on commercial farms indicates, however, that the economic-environmental nexus is not nearly as straightforward in the real world, with the environmental performance of enterprises often positively correlated with their economic profitability. Using high-resolution primary data from the North Wyke Farm Platform, an intensively instrumented farm-scale ruminant research facility in the southwest of the United Kingdom, this paper proposes a novel, information-driven approach to carrying out comprehensive assessments of the economic-environmental trade-offs inherent in pasture-based cattle and sheep production systems. The results of a data-mining exercise suggest a potentially systematic interaction between ‘soil health’, ecological surroundings and livestock grazing, whereby a higher soil organic carbon (SOC) stock is associated with better animal performance and lower nutrient losses into watercourses, and a higher stocking density with greater botanical diversity and elevated SOC. We contend that a combination of farming system-wide trials and environmental instrumentation provides an ideal setting for developing scientifically sound and biologically informative metrics of agricultural sustainability, through which agricultural producers could obtain guidance to manage soils, water, pasture and livestock in an economically and environmentally acceptable manner. Priority areas for future farm-scale research to ensure long-term sustainability are also discussed.
Environmental enteric dysfunction (EED) and systemic inflammation (SI) are common in developing countries and may cause stunting. In Bangladesh, >40 % of preschool children are stunted, but the contributions of EED and SI are unknown. We aimed to determine the impact of EED and SI (assessed with multiple indicators) on growth in children (n 539) enrolled in a community-based randomised food supplementation trial in rural Bangladesh. EED was defined with faecal myeloperoxidase, α-1 antitrypsin and neopterin, serum endotoxin core antibody and glucagon-like peptide-2 (consolidated into gut inflammation (GI) and permeability (GP) scores) and the urinary lactulose:mannitol ratio; α-1 acid glycoprotein (AGP) characterised SI. Biomarker associations with anthropometry (15-, 18- and 24-month length-for-age (LAZ), weight-for-length (WLZ) and weight-for-age (WAZ) z scores) were examined in pairwise correlations and adjusted mixed-effects regressions. Stunting, wasting and underweight prevalence at 18 months were 45, 15 and 37 %, respectively, with elevated EED and SI markers common. EED and SI were not associated with the 15–24-month length trajectory. Elevated (worse) GI and GP scores predicted reduced 18–24-month WLZ change (β −0·01 (se 0·00) z score/month for both). Elevated GP was also associated with reduced 15–18-month WLZ change (β −0·03 (se 0·01) z score/month) and greater 15-month WLZ (β 0·16 (se 0·05)). Higher AGP was associated with reduced prior and increased subsequent WLZ change (β −0·04 (se 0·01) and β 0·02 (se 0·00) z score/month for 15–18 and 18–24 months, respectively). The hypothesised link from EED to stunting was not observed in this sample of Bangladeshi 18-month-olds, but the effects of EED on constrained weight gain may have consequences for later linear growth or for other health and development outcomes.
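An adjusted mixed-effects regression of the kind described could look like the following sketch, which uses simulated data, a random intercept per child and a biomarker-by-age interaction; the study's actual covariate adjustments and scoring are omitted, and all names and values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated stand-in for the cohort: 3 visits (15, 18, 24 months) per
# child, a WLZ outcome and a gut-permeability (GP) score.
n = 200
df = pd.DataFrame({
    "child": np.repeat(np.arange(n), 3),
    "age":   np.tile([15, 18, 24], n),
    "gp":    np.repeat(rng.normal(size=n), 3),
})
df["wlz"] = (-0.5 - 0.02 * df["age"] - 0.01 * df["gp"] * df["age"]
             + rng.normal(0, 0.3, len(df)))

# Random intercept per child; the gp:age interaction plays the role of
# the biomarker-by-time effect on the growth trajectory.
model = smf.mixedlm("wlz ~ age * gp", df, groups=df["child"]).fit()
print(model.summary())
```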