The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised controlled trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
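As an illustration only, the sketch below shows how the validity and calibration statistics described above (Pearson correlation, weighted κ across intake quartiles, exact quartile agreement and a calibration regression adjusted for sex and treatment group) could be computed for a single nutrient; the file name and column names (ffq_iron, recall_iron, sex, group) are assumptions, not GUMLi trial variables.

```python
# Hypothetical sketch of the validity and calibration analyses; names are assumed.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score
import statsmodels.formula.api as smf

df = pd.read_csv("paired_ffq_recall.csv")  # hypothetical file of paired intake estimates

# Pearson correlation between FFQ and 24-h recall estimates of iron intake
r, p = pearsonr(df["ffq_iron"], df["recall_iron"])

# Weighted kappa and exact agreement across intake quartiles
ffq_q = pd.qcut(df["ffq_iron"], 4, labels=False)
rec_q = pd.qcut(df["recall_iron"], 4, labels=False)
kappa = cohen_kappa_score(ffq_q, rec_q, weights="linear")
exact = np.mean(ffq_q == rec_q)

# Calibration: regress the 24-h recall on the FFQ, adjusting for sex and group;
# the slope acts as a calibration factor and R^2 as variance explained.
model = smf.ols("recall_iron ~ ffq_iron + C(sex) + C(group)", data=df).fit()
print(r, kappa, exact, model.params["ffq_iron"], model.rsquared)
```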
Emergency Medical Services (EMS) systems have developed protocols for prehospital activation of the cardiac catheterization laboratory for patients with suspected ST-elevation myocardial infarction (STEMI) to decrease first-medical-contact-to-balloon time (FMC2B). The rate of “false positive” prehospital activations is high. In order to decrease this rate and expedite care for patients with true STEMI, the American Heart Association (AHA; Dallas, Texas USA) developed the Mission Lifeline PreAct STEMI algorithm, which was implemented in Los Angeles County (LAC; California USA) in 2015. The hypothesis of this study was that implementation of the PreAct algorithm would increase the positive predictive value (PPV) of prehospital activation.
This is an observational pre-/post-study of the effect of the implementation of the PreAct algorithm for patients with suspected STEMI transported to one of five STEMI Receiving Centers (SRCs) within the LAC Regional System. The primary outcome was the PPV of cardiac catheterization laboratory activation for percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG). The secondary outcome was FMC2B.
A total of 1,877 patients were analyzed for the primary outcome in the pre-intervention period and 405 patients in the post-intervention period. There was an overall decrease in cardiac catheterization laboratory activations, from 67% in the pre-intervention period to 49% in the post-intervention period (95% CI for the difference, -14% to -22%). The overall rate of cardiac catheterization declined in the post-intervention period compared with the pre-intervention period, from 34% to 30% (95% CI for the difference, -7.6% to 0.4%), but actually increased for subjects who had activation (48% versus 58%; 95% CI, 4.6%-15.0%). Implementation of the PreAct algorithm was associated with an increase in the PPV of activation for PCI or CABG from 37.9% to 48.6%. The overall odds ratio (OR) associated with the intervention was 1.4 (95% CI, 1.1-1.8). The effect of the intervention was to decrease variability between medical centers. There was no associated change in average FMC2B.
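For readers unfamiliar with the metrics, the following hedged worked example shows how the positive predictive value of an activation and an odds ratio are computed from a 2×2 table; the counts are invented for illustration and are not the study's raw data.

```python
# Worked example with invented counts; not the study's raw activation data.
def ppv(true_positives, false_positives):
    """Positive predictive value: activations confirmed by PCI/CABG over all activations."""
    return true_positives / (true_positives + false_positives)

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] = (a/b) / (c/d)."""
    return (a * d) / (b * c)

# e.g. 97 of 200 post-intervention activations lead to PCI/CABG -> PPV ~ 0.485
print(round(ppv(97, 103), 3))

# hypothetical pre/post cell counts for the outcome, giving an OR of similar magnitude
print(round(odds_ratio(97, 103, 380, 622), 2))
```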
The implementation of the PreAct algorithm in the LAC EMS system was associated with an overall increase in the PPV of cardiac catheterization laboratory activation.
While echocardiographic parameters are used to quantify ventricular function in infants with single ventricle physiology, there are few data comparing these to invasive measurements. This study correlates echocardiographic measures of diastolic function with ventricular end-diastolic pressure in infants with single ventricle physiology prior to superior cavopulmonary anastomosis.
Data from 173 patients enrolled in the Pediatric Heart Network Infant Single Ventricle enalapril trial were analysed. Those with mixed ventricular types (n = 17) and one outlier (end-diastolic pressure = 32 mmHg) were excluded from the analysis, leaving a total sample size of 155 patients. Echocardiographic measurements were correlated to end-diastolic pressure using Spearman’s test.
Median age at echocardiogram was 4.6 (range 2.5–7.4) months. Median ventricular end-diastolic pressure was 7 (range 3–19) mmHg. Median time difference between the echocardiogram and catheterisation was 0 days (range −35 to 59 days). Examining the entire cohort of 155 patients, no echocardiographic diastolic function variable correlated with ventricular end-diastolic pressure. When the analysis was limited to the 86 patients who had similar sedation for both studies, the systolic:diastolic duration ratio had a significant but weak negative correlation with end-diastolic pressure (r = −0.3, p = 0.004). The remaining echocardiographic variables did not correlate with ventricular end-diastolic pressure.
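A minimal sketch, assuming a merged echocardiography/catheterisation dataset with hypothetical column names, of the Spearman correlation reported above (systolic:diastolic duration ratio against end-diastolic pressure in the similarly sedated subgroup):

```python
# Minimal sketch, not the trial's analysis code; file and column names are assumed.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("echo_cath_merged.csv")        # one row per patient
subset = df[df["same_sedation"] == 1]           # patients sedated similarly for both studies

rho, p = spearmanr(subset["sd_duration_ratio"], subset["edp_mmhg"])
print(f"systolic:diastolic duration ratio vs end-diastolic pressure: rho={rho:.2f}, p={p:.3f}")
```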
In this cohort of infants with single ventricle physiology prior to superior cavopulmonary anastomosis, most conventional echocardiographic measures of diastolic function did not correlate with ventricular end-diastolic pressure at cardiac catheterisation. These limitations should be factored into the interpretation of quantitative echo data in this patient population.
The second year of life is a period of nutritional vulnerability. We aimed to investigate the dietary patterns and nutrient intakes from 1 to 2 years of age during the 12-month follow-up period of the Growing Up Milk – Lite (GUMLi) trial. The GUMLi trial was a multi-centre, double-blinded, randomised controlled trial of 160 healthy 1-year-old children in Auckland, New Zealand and Brisbane, Australia. Dietary intakes were collected at baseline, 3, 6, 9 and 12 months post-randomisation, using a validated FFQ. Dietary patterns were identified using principal component analysis of the frequency of food item consumption per d. The effects of the intervention on dietary patterns and the intake of eleven nutrients over the duration of the trial were investigated using random effects mixed models. A total of three dietary patterns were identified at baseline: ‘junk/snack foods’, ‘healthy/guideline foods’ and ‘breast milk/formula’. A significant group difference was observed in ‘breast milk/formula’ dietary pattern z scores at 12 months post-randomisation, where those in the GUMLi group loaded more positively on this pattern, suggesting more frequent consumption of breast milk. No difference was seen in the other two dietary patterns. Significant intervention effects were seen on nutrient intake between the GUMLi (intervention) and cows’ milk (control) groups, with lower protein and vitamin B12, and higher Fe, vitamin D, vitamin C and Zn intake in the GUMLi (intervention) group. The consumption of GUMLi did not affect dietary patterns; however, GUMLi participants had lower protein intake and higher Fe, vitamins D and C and Zn intake at 2 years of age.
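The dietary-pattern step can be sketched as follows; the food-frequency file, column layout and pattern labels are hypothetical. This is not the trial's analysis code, only a minimal example of principal component analysis applied to food consumption frequencies.

```python
# Hypothetical sketch of principal component analysis on food consumption frequencies;
# the file, columns and pattern labels are illustrative, not GUMLi variables.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

freq = pd.read_csv("ffq_frequencies.csv")            # rows: children, columns: foods per day
X = StandardScaler().fit_transform(freq)

pca = PCA(n_components=3)                            # three patterns, as identified at baseline
scores = pca.fit_transform(X)                        # per-child pattern scores
loadings = pd.DataFrame(pca.components_.T, index=freq.columns,
                        columns=["pattern_1", "pattern_2", "pattern_3"])
print(loadings.sort_values("pattern_1", ascending=False).head(10))
```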
Research into the gut microbiota of human infants is necessary in order to better understand how inter-species interactions and ecological succession shape the diversity of the gut microbiota, and in turn, how the specific composition of the gut microbiota impacts on host health both during infancy and in later years. Blastocystis is a ubiquitous intestinal protist that has been linked to a number of intestinal and extra-intestinal diseases. However, emerging data show that asymptomatic carriage is common and that Blastocystis is prevalent in the healthy adult gut microbiota. Nonetheless, little is known about the prevalence and diversity of this microorganism in the healthy infant gut, including when and how individuals become colonized by Blastocystis. Here, we surveyed the prevalence and diversity of Blastocystis in an infant population (n = 59) from an industrialized country (Ireland) using Blastocystis-specific primers at three or more time-points up to 24 months old. Only three infants were positive for Blastocystis (prevalence = 5%) and this was only noted for samples collected at month 24. This rate is comparatively low relative to previously reported prevalence rates in the contemporaneous adult population. These data suggest that infants in Westernized countries that are successfully colonized by Blastocystis most likely acquire this microorganism via horizontal transfer.
The goal of the present study was to use a methodology that accurately and reliably describes the availability, price and quality of healthy foods at both the store and community levels, using the Nutrition Environment Measures Survey in Stores (NEMS-S), and to propose a spatial methodology for integrating these store and community data into measures of objective food access.
Two hundred and sixty-five retail food stores in and within 2 miles (3·2 km) of Flint, Michigan, USA, were mapped using ArcGIS mapping software.
A survey based on the validated NEMS-S was conducted at each retail food store. Scores were assigned to each store based on a modified version of the NEMS-S scoring system and linked to the mapped locations of stores. Neighbourhood characteristics (race and socio-economic distress) were appended to each store. Finally, spatial and kernel density analyses were run on the mapped store scores to obtain healthy food density metrics.
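The kernel-density step can be illustrated with a small sketch. The study used ArcGIS; the Python version below is only a stand-in to show the idea of converting mapped, score-weighted store locations into a continuous healthy-food-density surface, with invented coordinates, scores and grid points.

```python
# Illustrative Python stand-in for the ArcGIS kernel-density step; coordinates,
# scores and the evaluation grid are invented.
import numpy as np
from sklearn.neighbors import KernelDensity

store_coords = np.array([[-83.69, 43.01],
                         [-83.70, 43.02],
                         [-83.65, 43.00]])              # store locations (lon, lat)
nems_scores = np.array([12.0, 3.0, 27.0])               # modified NEMS-S score per store

kde = KernelDensity(bandwidth=0.01).fit(store_coords, sample_weight=nems_scores)

grid = np.array([[-83.68, 43.01],
                 [-83.60, 42.98]])                      # e.g. neighbourhood centroids
healthy_food_density = np.exp(kde.score_samples(grid))  # higher = better healthy-food access
print(healthy_food_density)
```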
Regression analyses revealed that neighbourhoods with higher socio-economic distress had significantly lower dairy sub-scores compared with their lower-distress counterparts (β coefficient=−1·3; P=0·04). Additionally, supermarkets were present only in neighbourhoods with <60 % African-American population and low socio-economic distress. Two areas in Flint had an overall NEMS-S score of 0.
By identifying areas with poor access to healthy foods via a validated metric, this research can be used to help local government and organizations target interventions to high-need areas. Furthermore, the methodology used for the survey and the mapping exercise can be replicated in other cities to provide comparable results.
A number of socio-economic, biological and lifestyle characteristics change with advancing age and place very old adults at increased risk of micronutrient deficiencies. The aim of this study was to assess vitamin and mineral intakes and respective food sources in 793 85-year-olds (302 men and 491 women) in the North-East of England, participating in the Newcastle 85+ Study. Micronutrient intakes were estimated using a multiple-pass recall tool (2×24 h recalls). Determinants of micronutrient intake were assessed with multinomial logistic regression. Median vitamin D, Ca and Mg intakes were 2·0 (interquartile range (IQR) 1·2–6·5) µg/d, 731 (IQR 554–916) mg/d and 215 (IQR 166–266) mg/d, respectively. Fe intake was 8·7 (IQR 6·7–11·6) mg/d, and Se intake was 39·0 (IQR 27·3–55·5) µg/d. Cereals and cereal products were the top contributors to intakes of folate (31·5 %), Fe (49·2 %) and Se (46·7 %) and the second highest contributors to intakes of vitamin D (23·8 %), Ca (27·5 %) and K (15·8 %). More than 95 % (n 756) of the participants had vitamin D intakes below the UK’s Reference Nutrient Intake (10 µg/d). In all, >20 % of the participants were below the Lower Reference Nutrient Intake for Mg (n 175), K (n 238) and Se (n 418) (comparisons with dietary reference values (DRV) do not include supplements). As most DRV are not age specific and have been extrapolated from younger populations, results should be interpreted with caution. Participants with higher education, from a higher social class and who were more physically active had more nutrient-dense diets. More studies are needed to inform the development of age-specific DRV for micronutrients for the very old.
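A hedged sketch of the kind of comparison against dietary reference values reported above; the data file and column names are assumed, the 10 µg/d vitamin D threshold is the RNI quoted in the text, and the Se threshold is an approximate UK adult LRNI used only for illustration.

```python
# Assumed file and column names; thresholds are UK reference values for illustration.
import pandas as pd

intakes = pd.read_csv("micronutrient_intakes.csv")      # one row per participant

pct_below_vitd_rni = (intakes["vitamin_d_ug"] < 10).mean() * 100
print(f"{pct_below_vitd_rni:.0f}% of participants below the vitamin D RNI (10 ug/d)")

SE_LRNI_UG = 40                                          # illustrative adult LRNI for selenium
n_below_se = int((intakes["selenium_ug"] < SE_LRNI_UG).sum())
print(f"{n_below_se} participants below the Se LRNI")
```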
Very old people (those aged 85 years and over) are the fastest growing age segment of many Western societies, owing to the steady rise in life expectancy and the decrease in later-life mortality. In the UK, there are now more than 1·5 million very old people (2·5 % of the total population) and the number is projected to rise to 3·3 million, or 5 %, over the next 20 years. Reduced mobility and independence, financial constraints, higher rates of hospitalisation, chronic diseases and disabilities, and changes in body composition, taste perception, digestion and absorption of food all potentially influence either nutrient intake or needs at this stage of life. The nutritional needs of the very old have been identified as a research priority by the British Nutrition Foundation's Task Force report, Healthy Ageing: The Role of Nutrition and Lifestyle. However, very little is known about the dietary habits and nutritional status of the very old. The Newcastle 85+ Study, a cohort of more than 1000 85-year-olds from the North East of England, and the Life and Living in Advanced Age study (New Zealand), a bicultural cohort study of advanced ageing of more than 900 participants from the Bay of Plenty and Rotorua regions of New Zealand, are two unique cohort studies of ageing that aim to assess the spectrum of health in the very old as well as examine the associations of health trajectories and outcomes with biological, clinical and social factors as each cohort ages. The nutrition domain included in both studies will help to fill the evidence gap by identifying eating patterns and measures of nutritional status associated with better, or worse, health and wellbeing. This review will explore some of this ongoing work.
Food and nutrient intake data are scarce in very old adults (85 years and older) – one of the fastest growing age segments of Western societies, including the UK. Our primary objective was to assess energy and macronutrient intakes and respective food sources in 793 85-year-olds (302 men and 491 women) living in North-East England and participating in the Newcastle 85+ Study. Dietary information was collected using a repeated multiple-pass recall (2×24 h recalls). Energy, macronutrient and NSP intakes were estimated, and the contribution (%) of food groups to nutrient intake was calculated. The median energy intake was 6·65 (interquartile range (IQR) 5·49–8·16) MJ/d – 46·8 % was from carbohydrates, 36·8 % from fats and 15·7 % from proteins. NSP intake was 10·2 g/d (IQR 7·3–13·7) and was higher in 85-year-olds who were non-institutionalised, more educated, from a higher social class and more physically active. Cereals and cereal products were the top contributors to intakes of energy and most macronutrients (carbohydrates, non-milk extrinsic sugars, NSP and fat), followed by meat and meat products. The median intakes of energy and NSP were much lower than the estimated average requirement for energy (9·6 MJ/d for men and 7·7 MJ/d for women) and the dietary reference value (DRV) for NSP (≥18 g/d). The median SFA intake was higher than the DRV (≤11 % of dietary energy). This study highlights the paucity of data on dietary intake and the uncertainties about DRV for this age group.
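The percentage-of-energy figures above follow from standard energy conversion factors. Below is a worked example with assumed daily intakes (not participant data), using approximate factors of 16 kJ/g for carbohydrate, 37 kJ/g for fat and 17 kJ/g for protein.

```python
# Assumed daily intakes (not participant data); approximate UK energy conversion factors.
CARB_KJ_PER_G, FAT_KJ_PER_G, PROTEIN_KJ_PER_G = 16, 37, 17

carbs_g, fat_g, protein_g = 185.0, 65.0, 62.0
energy_kj = (carbs_g * CARB_KJ_PER_G
             + fat_g * FAT_KJ_PER_G
             + protein_g * PROTEIN_KJ_PER_G)

print(f"energy: {energy_kj / 1000:.2f} MJ/d")
print(f"carbohydrate: {carbs_g * CARB_KJ_PER_G / energy_kj:.1%} of energy")
print(f"fat: {fat_g * FAT_KJ_PER_G / energy_kj:.1%} of energy")
print(f"protein: {protein_g * PROTEIN_KJ_PER_G / energy_kj:.1%} of energy")
```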
Integrated weed management (IWM) for agronomic and vegetable production systems utilizes all available options to effectively manage weeds. Late-season weed control measures are often needed to improve crop harvest and stop additions to the weed seed bank. Eliminating the production of viable weed seeds is one of the key IWM practices. The objective of this research was to determine how termination method and timing influence viable weed seed production of late-season weed infestations. Research was conducted in Delaware, Michigan, and New York over a 2-yr period. The weeds studied included common lambsquarters, common ragweed, giant foxtail, jimsonweed, and velvetleaf. Three termination methods were imposed: cutting at the plant base (simulating hand hoeing), chopping (simulating mowing), and applying glyphosate. The three termination timings were flowering, immature seeds present, and mature seeds present. Following termination, plants were stored in the field in mesh bags until mid-fall, when seeds were counted and tested for viability. Termination timing influenced viable seed development; however, termination method did not. Common ragweed and giant foxtail produced viable seeds when terminated at the time of flowering. All species produced some viable seed when immature seeds were present at the time of termination. The time of viable seed formation varied based on species and site-year, ranging from plants terminated the day of flowering to 1,337 growing degree d after flowering (base 10, 0 to 57 calendar d). Viable seed production was reduced by 64 to 100% when common lambsquarters, giant foxtail, jimsonweed, and velvetleaf were terminated with immature seeds present, compared to when plants were terminated with some mature seeds present. Our results suggest that terminating common lambsquarters, common ragweed, and giant foxtail prior to flowering, and velvetleaf and jimsonweed less than 2 and 3 wk after flowering, respectively, greatly reduces weed seed bank inputs.
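Since the timing results are expressed in growing degree days, a minimal sketch of base-10 growing-degree-day accumulation is given below; the daily temperatures are invented and the simple max/min averaging method is one common convention, not necessarily the one used in this research.

```python
# Invented daily temperatures; the averaging method is a common convention only.
def daily_gdd(t_max_c, t_min_c, base_c=10.0):
    """Growing degree days accumulated in one day (base 10 C), truncated at zero."""
    return max((t_max_c + t_min_c) / 2.0 - base_c, 0.0)

daily_temps = [(28, 16), (30, 18), (25, 12), (22, 10)]   # (max, min) in C since flowering
cumulative_gdd = sum(daily_gdd(t_max, t_min) for t_max, t_min in daily_temps)
print(f"{cumulative_gdd:.0f} growing degree days accumulated since flowering")
```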
The first observations by a worldwide network of advanced interferometric gravitational wave detectors offer a unique opportunity for the astronomical community. At design sensitivity, these facilities will be able to detect coalescing binary neutron stars to distances approaching 400 Mpc, and neutron star–black hole systems to 1 Gpc. Both of these sources are associated with gamma-ray bursts which are known to emit across the entire electromagnetic spectrum. Gravitational wave detections provide the opportunity for ‘multi-messenger’ observations, combining gravitational wave with electromagnetic, cosmic ray, or neutrino observations. This review provides an overview of how Australian astronomical facilities and collaborations with the gravitational wave community can contribute to this new era of discovery, via contemporaneous follow-up observations from the radio to the optical and high energy. We discuss some of the frontier discoveries that will be made possible when this new window to the Universe is opened.
Considerable research has documented that exposure to traumatic events has negative effects on physical and mental health. Much less research has examined the predictors of traumatic event exposure. Increased understanding of risk factors for exposure to traumatic events could be of considerable value in targeting preventive interventions and anticipating service needs.
General population surveys in 24 countries with a combined sample of 68 894 adult respondents across six continents assessed exposure to 29 traumatic event types. Differences in prevalence were examined with cross-tabulations. Exploratory factor analysis was conducted to determine whether traumatic event types clustered into interpretable factors. Survival analysis was carried out to examine associations of sociodemographic characteristics and prior traumatic events with subsequent exposure.
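A hedged sketch of the two analytic steps described (exploratory factor analysis of trauma-type indicators and a survival model for subsequent exposure); the files, column names and number of factors are assumptions, not World Mental Health survey variables, and the lifelines package stands in for whatever survival software was actually used.

```python
# Hypothetical sketch: factor analysis of trauma-type indicators, then a survival
# model for subsequent traumatic exposure. Names and files are assumed.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from lifelines import CoxPHFitter

events = pd.read_csv("trauma_type_indicators.csv")      # 0/1 exposure per event type
fa = FactorAnalysis(n_components=5).fit(events)
loadings = pd.DataFrame(fa.components_.T, index=events.columns)
print(loadings.round(2))                                 # inspect how event types cluster

surv = pd.read_csv("trauma_survival.csv")                # person-level follow-up data
cph = CoxPHFitter()
cph.fit(surv[["years_at_risk", "exposed", "married", "female", "prior_violence"]],
        duration_col="years_at_risk", event_col="exposed")
cph.print_summary()                                      # e.g. protective association of marriage
```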
Over 70% of respondents reported a traumatic event; 30.5% were exposed to four or more. Five types – witnessing death or serious injury, the unexpected death of a loved one, being mugged, being in a life-threatening automobile accident, and experiencing a life-threatening illness or injury – accounted for over half of all exposures. Exposure varied by country, sociodemographics and history of prior traumatic events. Being married was the most consistent protective factor. Exposure to interpersonal violence had the strongest associations with subsequent traumatic events.
Given the near ubiquity of exposure, limited resources may best be dedicated to those who are more likely to be further exposed, such as victims of interpersonal violence. Identifying mechanisms that account for the associations of prior interpersonal violence with subsequent trauma is critical to developing interventions to prevent revictimization.
Although interventions exist to reduce violent crime, optimal implementation requires accurate targeting. We report the results of an attempt to develop an actuarial model using machine learning methods to predict future violent crimes among US Army soldiers.
A consolidated administrative database for all 975 057 soldiers in the US Army in 2004–2009 was created in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Of these soldiers, 5771 committed a first founded major physical violent crime (murder-manslaughter, kidnapping, aggravated arson, aggravated assault, robbery) over that time period. Temporally prior administrative records measuring socio-demographic, Army career, criminal justice, medical/pharmacy, and contextual variables were used to build an actuarial model for these crimes separately among men and women using machine learning methods (cross-validated stepwise regression, random forests, penalized regressions). The model was then validated in an independent 2011–2013 sample.
Key predictors were indicators of disadvantaged social/socioeconomic status, early career stage, prior crime, and mental disorder treatment. Area under the receiver-operating characteristic curve was 0.80–0.82 in 2004–2009 and 0.77 in the 2011–2013 validation sample. Of all administratively recorded crimes, 36.2% (among men) and 33.1% (among women) were committed by the 5% of soldiers with the highest predicted risk in 2004–2009, and an even higher proportion (50.5%) in the 2011–2013 validation sample.
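To make the modelling and evaluation concrete, here is a minimal sketch using one of the named methods (a random forest) with an out-of-sample AUC and the share of offences among the top 5% predicted risk; the feature names and data files are assumptions, not Army STARRS variables.

```python
# Hedged sketch of the actuarial-model evaluation; names and files are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

train = pd.read_csv("development_sample.csv")    # hypothetical 2004-2009 analogue
test = pd.read_csv("validation_sample.csv")      # hypothetical 2011-2013 analogue
features = ["age_years", "years_of_service", "prior_crime", "mh_treatment"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(train[features], train["violent_crime"])

risk = clf.predict_proba(test[features])[:, 1]
print("AUC:", roc_auc_score(test["violent_crime"], risk))

# concentration of offences among the 5% of soldiers with the highest predicted risk
cutoff = pd.Series(risk).quantile(0.95)
share = test.loc[risk >= cutoff, "violent_crime"].sum() / test["violent_crime"].sum()
print(f"{share:.1%} of recorded crimes occur in the top 5% of predicted risk")
```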
Although these results suggest that the models could be used to target soldiers at high risk of violent crime perpetration for preventive interventions, final implementation decisions would require further validation and weighing of predicted effectiveness against intervention costs and competing risks.
In the United States alone, ∼14,000 children are hospitalised annually with acute heart failure. The science and art of caring for these patients continues to evolve. The International Pediatric Heart Failure Summit of Johns Hopkins All Children’s Heart Institute was held on February 4 and 5, 2015. The summit was funded through the Andrews/Daicoff Cardiovascular Program Endowment, a philanthropic collaboration between All Children’s Hospital and the Morsani College of Medicine at the University of South Florida (USF). Sponsored by the All Children’s Hospital Andrews/Daicoff Cardiovascular Program, the summit assembled leaders in clinical and scientific disciplines related to paediatric heart failure and created a multi-disciplinary “think-tank”. The purpose of this manuscript is to summarise the lessons of the 2015 summit, to describe the “state of the art” of the treatment of paediatric cardiac failure, and to discuss future directions for research in this domain.
Amorphous solid water (ASW) is of great importance in astrochemistry as it has been detected in star forming regions, comets, and cold solar-system objects. A key property of ASW is its porous nature (with the extent of porosity reflecting the formation and growth conditions) and the subsequent pore collapse when the ice is heated. If interstellar ices are porous, there are huge implications for both the process of planet formation and the budgets of molecular gas in the solid and gas phases. It is therefore vital to understand ASW porosity over astronomically relevant conditions in order to effectively model its potential effects on these processes.
Background: It has been hypothesized that [18F]-sodium fluoride (NaF), imaged with positron emission tomography (PET), binds to hydroxyapatite molecules expressed in regions of active calcification. We therefore aimed to validate NaF as a marker of hydroxyapatite expression in high-risk carotid plaque. Methods: Eleven patients (69 ± 5 years, 3 female) scheduled for carotid endarterectomy were prospectively recruited for NaF PET/CT. One patient received a second contralateral endarterectomy; two patients were excluded (intolerance to contrast media and PET/CT misalignment). The bifurcation of the common carotid artery was used as the reference point; NaF uptake (tissue-to-blood ratio, TBR) was measured at every PET slice extending 2 cm above and below the bifurcation. Excised plaque was stained with Goldner’s trichrome and whole-slide digitized images were used to quantify hydroxyapatite expression. Pathology was co-registered with PET. Results: NaF uptake was related to the extent of hydroxyapatite expression (r=0.45, p<0.001). When bilateral plaque was classified by symptomatology, symptomatic plaque associated with cerebrovascular events had greater NaF uptake (3.75±1.1 TBR, n=9) than clinically silent asymptomatic plaque (2.79±0.6 TBR, n=11) (p=0.04). Conclusion: NaF uptake is related to hydroxyapatite expression and is increased in plaque associated with cerebrovascular events. NaF may serve as a novel biomarker of active calcification and plaque vulnerability.
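An illustrative sketch (not the study's imaging pipeline) of the tissue-to-blood ratio calculation and its correlation with the histological hydroxyapatite measure; all uptake and staining values are invented.

```python
# Invented per-slice uptake and staining values; not the study pipeline.
import numpy as np
from scipy.stats import pearsonr

plaque_uptake = np.array([2.1, 2.9, 3.4, 1.8, 2.6])        # per-slice plaque NaF uptake
blood_pool_uptake = 0.9                                     # reference blood-pool uptake
tbr = plaque_uptake / blood_pool_uptake                     # tissue-to-blood ratio per slice

hydroxyapatite_pct = np.array([4.0, 9.5, 14.2, 2.1, 7.8])   # % stained area per co-registered slice
r, p = pearsonr(tbr, hydroxyapatite_pct)
print(f"TBR vs hydroxyapatite expression: r={r:.2f}, p={p:.3f}")
```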
The incidence of recreational water-associated outbreaks in the United States has significantly increased, driven, at least in part, by outbreaks both caused by Cryptosporidium and associated with treated recreational water venues. Because of the parasite's extreme chlorine tolerance, transmission can occur even in well-maintained treated recreational water venues (e.g. pools), and a focal cryptosporidiosis outbreak can evolve into a community-wide outbreak associated with multiple recreational water venues and settings (e.g. childcare facilities). In August 2004 in Auglaize County, Ohio, multiple cryptosporidiosis cases were identified and anecdotally linked to pool A. Within 5 days of the first case being reported, pool A was hyperchlorinated to achieve 99·9% Cryptosporidium inactivation. Eleven days later, a case-control study was launched to epidemiologically ascertain the outbreak source. A total of 150 confirmed and probable cases were identified; the temporal distribution of illness onset was peaked, indicating a point-source exposure. Cryptosporidiosis was significantly associated with swimming in pool A (matched odds ratio 121·7, 95% confidence interval 27·4–∞) but not with any other venue or setting. The findings of this investigation suggest that proactive implementation of control measures, when increased Cryptosporidium transmission is detected but before an outbreak source is epidemiologically ascertained, might prevent a focal cryptosporidiosis outbreak from evolving into a community-wide outbreak.
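As a rough illustration of the matched-pair analysis, the sketch below computes a matched odds ratio from discordant pairs; the counts are hypothetical, not the Auglaize County data, and serve only to show why very few discordant pairs in one direction yield an unbounded upper confidence limit.

```python
# Hypothetical discordant-pair counts for a 1:1 matched case-control design;
# not the Auglaize County data.
pairs_case_exposed_only = 45     # case swam in pool A, matched control did not
pairs_control_exposed_only = 1   # control swam in pool A, case did not

matched_or = pairs_case_exposed_only / pairs_control_exposed_only
print(f"matched odds ratio ~ {matched_or:.0f}")
# As the number of pairs with only the control exposed approaches zero, the upper
# confidence limit for the matched odds ratio becomes unbounded, as reported above.
```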
The correspondences between the names in the Scylding genealogy at the beginning of Beowulf and three names in the upper reaches of the genealogy of Æthelwulf in the Anglo-Saxon Chronicle, Beaw, Sceldwa and Sceaf, frequently appear in arguments for a late dating of Beowulf. But these arguments overlook many aspects of Æthelwulf's genealogy that disrupt their case for a late dating. As H. Munro Chadwick pointed out over a century ago, the forms Sceldwa and Beaw found in the Chronicle for Scyld and Beow are not West Saxon spellings, and the -wa suffix of Sceldwa and Tætwa suggests that these forms may be archaic. Thus spelling alone indicates that these names were probably copied from an older, non-West Saxon text. Furthermore, the very presence of these names in the royal pedigree is puzzling. On one level the presence of Scyld is easy to explain: Scyld and the Scyldings were famous in heroic legend, and his inclusion in Æthelwulf's pedigree provides reflected glory for the West Saxon dynasty and implies genealogical, political and cultural connections between the West Saxons and the Danes that could be useful for Alfred and his heirs to foster. But on another level his inclusion is rather surprising: according to genealogical conventions, the presence of Scyld implies that the West Saxon royal family is a cadet branch of the Scylding dynasty, and is thus potentially subordinate to Scandinavian rulers in England claiming direct descent from Scyld.