A study of turbulent impurity transport by means of quasilinear and nonlinear gyrokinetic simulations is presented for Wendelstein 7-X (W7-X). The calculations have been carried out with the recently developed gyrokinetic code stella. Different impurity species are considered in the presence of various types of background instabilities: ion temperature gradient (ITG), trapped electron mode (TEM) and electron temperature gradient (ETG) modes for the quasilinear part of the work; ITG and TEM for the nonlinear results. While the quasilinear approach allows one to draw qualitative conclusions about the sign or relative importance of the various contributions to the flux, the nonlinear simulations quantitatively determine the size of the turbulent flux and check the extent to which the quasilinear conclusions hold. Although the bulk of the nonlinear simulations are performed at trace impurity concentration, nonlinear simulations are also carried out at realistic effective charge values, in order to assess to what degree the conclusions based on trace-impurity simulations can be extrapolated to realistic impurity concentrations. The results show that turbulent radial impurity transport in W7-X is dominated by ordinary diffusion, at a level close to that measured during the recent W7-X experimental campaigns. It is also confirmed that thermodiffusion adds a weak inward flux contribution and that, in the absence of impurity temperature and density gradients, ITG- and TEM-driven turbulence push the impurities inwards and outwards, respectively.
To address appropriateness of antibiotic use, we implemented an electronic framework to evaluate antibiotic “never events” (NEs) at 2 medical centers. Patient-level vancomycin administration records were classified as NEs or non-NEs. The objective framework allowed capture of true-positive vancomycin NEs in one-third of patients identified by the electronic strategy.
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Background: Antibiotics are among the most commonly prescribed drugs in nursing homes; urinary tract infections (UTIs) are a frequent indication. Although there is no gold standard for the diagnosis of UTIs, various criteria have been developed to inform and standardize nursing home prescribing decisions, with the goal of reducing unnecessary antibiotic prescribing. Using different published criteria designed to guide decisions on initiating treatment of UTIs (ie, symptomatic, catheter-associated, and uncomplicated cystitis), our objective was to assess the appropriateness of antibiotic prescribing among nursing home residents. Methods: In 2017, the CDC Emerging Infections Program (EIP) performed a prevalence survey of healthcare-associated infections and antibiotic use in 161 nursing homes from 10 states: California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, and Tennessee. EIP staff reviewed resident medical records to collect demographic and clinical information, infection signs, symptoms, and diagnostic testing documented on the day an antibiotic was initiated and 6 days prior. We applied 4 criteria to determine whether initiation of treatment for UTI was supported: (1) the Loeb minimum clinical criteria (Loeb); (2) the Suspected UTI Situation, Background, Assessment, and Recommendation tool (UTI SBAR tool); (3) adaptation of Infectious Diseases Society of America UTI treatment guidelines for nursing home residents (Crnich & Drinka); and (4) diagnostic criteria for uncomplicated cystitis (cystitis consensus) (Fig. 1). We calculated the percentage of residents for whom initiating UTI treatment was appropriate by these criteria. Results: Of the 248 residents for whom UTI treatment was initiated in the nursing home, the median age was 79 years [IQR, 19], 63% were female, and 35% were admitted for postacute care.
There was substantial variability in the percentage of residents with antibiotic initiation classified as appropriate by each of the criteria, ranging from 8% for the cystitis consensus, to 27% for Loeb, to 33% for the UTI SBAR tool, to 51% for Crnich and Drinka (Fig. 2). Conclusions: Appropriate initiation of UTI treatment among nursing home residents was low regardless of the criteria used: at best, only half of antibiotic treatments met published prescribing criteria. Although insufficient documentation of infection signs, symptoms, and testing may have contributed to the low percentages observed, adequate documentation in the medical record to support prescribing should be standard practice, as outlined in the CDC Core Elements of Antibiotic Stewardship for nursing homes. Standardized UTI prescribing criteria should be incorporated into nursing home stewardship activities to improve the assessment and documentation of symptomatic UTI and to reduce inappropriate antibiotic use.
The Eating Assessment in Toddlers FFQ (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised control trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
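The agreement statistics reported above (Pearson correlation, weighted κ on quartiles, exact quartile agreement) can be sketched in a few lines. The data below are synthetic stand-ins; only the sample size of 97 is taken from the abstract, and the energy-adjustment, de-attenuation, and calibration-regression steps are not reproduced.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical nutrient intakes for n = 97 children: an FFQ estimate and a
# noisier 24-h recall correlated with it (stand-ins for real measurements).
ffq = rng.gamma(shape=5.0, scale=2.0, size=97)
recall = ffq + rng.normal(0.0, 2.0, size=97)

def quartile(x):
    """Label each value with its within-sample quartile (0-3)."""
    return np.searchsorted(np.quantile(x, [0.25, 0.5, 0.75]), x)

q_ffq, q_rec = quartile(ffq), quartile(recall)

r, _ = pearsonr(ffq, recall)                                # crude correlation
kappa = cohen_kappa_score(q_ffq, q_rec, weights="linear")   # weighted kappa
exact_agreement = np.mean(q_ffq == q_rec)                   # exact quartile agreement
```

The same three quantities, computed per nutrient and per time point on the real intake data, would populate the ranges quoted in the abstract.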
A thorough understanding of commonly used herbicide application practices and technologies is needed to provide recommendations and determine necessary application education efforts. A survey to assess ground and aerial herbicide application practices in Arkansas was made available online in spring 2019. The survey was direct-emailed to 272 agricultural aviators and 831 certified commercial pesticide applicators, as well as made publicly available online through multiple media sources. A total of 124 responses were received, of which 75 responses were specific to herbicide applications in Arkansas agronomic crops, accounting for approximately 49% of Arkansas’ planted agronomic crop hectares in 2019. Ground and aerial application equipment were used for 49% and 51% of the herbicide applications on reported hectares, respectively. Rate controllers were commonly used application technologies for both ground and aerial application equipment. In contrast, global positioning system-driven automatic nozzle and boom shut-offs were much more common on ground spray equipment than aerial equipment. Applicator knowledge of nozzles and their usage was limited for both ground and aerial applicators, as only 28% of respondents provided a specific nozzle type used, indicating a need for educational efforts on nozzles and their importance in herbicide applications. Of the reported nozzle types, venturi nozzles and straight-stream nozzles were the most commonly used for ground and aerial spray equipment, respectively. Mean reported spray carrier volumes for systemic and contact herbicides were 96.3 and 118.8 L ha−1, respectively, for ground spray equipment, and 49.6 and 59.9 L ha−1 for aerial application equipment.
Respondents indicated that application optimization was a major benefit of utilizing newer application technologies and that herbicide drift was a primary challenge; research needs expressed by respondents included adjuvants, spray volume efficacy, and herbicide drift. Findings from this survey provide insight into the current practices, technologies, and needs of Arkansas herbicide applicators. Research and education efforts can now be implemented to address the aforementioned needs while providing applied, research-based information to applicators based on current practices.
Preferential removal of W relative to other trace elements from zoned, W–Sn–U–Pb-bearing hematite coupled with disturbance of U–Pb isotope systematics is attributed to pseudomorphic replacement via coupled dissolution reprecipitation reaction (CDRR). This hematite has been studied down to the nanoscale to understand the mechanisms leading to compositional and U/Pb isotope heterogeneity at the grain scale. High-Angle Annular Dark Field Scanning Transmission Electron Microscopy (HAADF STEM) imaging of foils extracted in situ from three locations across the W-rich to W-depleted domains show lattice-scale defects and crystal structure modifications adjacent to twin planes. Secondary sets of twins and associated splays are common, but wider (up to ~100 nm) inclusion trails occur only at the boundary between the W-rich and W-depleted domains. STEM energy-dispersive X-ray mapping reveals W- and Pb-enrichment along 2–3 nm-wide features defining the twin planes; W-bearing nanoparticles occur along the splays. Tungsten and Pb are both present, albeit at low concentrations, within Na–K–Cl-bearing inclusions along the trails. HAADF STEM imaging of hematite reveals modifications relative to ideal crystal structure. A two-fold hematite superstructure (a = b = c = 10.85 Å; α = β = γ = 55.28°) involving oxygen vacancies was constructed and assessed by STEM simulations with a good match to data. This model can account for significant W release during interaction with fluids percolating through twin planes and secondary structures as CDRR progresses from the zoned domain, otherwise apparently undisturbed at the micrometre scale. Lead remobilisation is confirmed here at the nanoscale and is responsible for a disturbance of U/Pb ratios in hematite affected by CDRR. Twin planes can provide pathways for fluid percolation and metal entrapment during post-crystallisation overprinting. 
The presence of complex twinning can therefore be used to anticipate potential disturbance of isotope systems in hematite that would affect its performance as a robust geochronometer.
Life course research embraces the complexity of health and disease development, tackling the extensive interactions between genetics and environment. This interdisciplinary blueprint, or theoretical framework, offers a structure for research ideas and specifies relationships between related factors. Traditionally, methodological approaches attempt to reduce the complexity of these dynamic interactions and decompose health into component parts, ignoring the complex reciprocal interaction of factors that shape health over time. New methods that match the epistemological foundation of the life course framework are needed to fully explore adaptive, multilevel, and reciprocal interactions between individuals and their environment. The focus of this article is to (1) delineate the differences between lifespan and life course research, (2) articulate the importance of complex systems science as a methodological framework in the life course research toolbox to guide our research questions, (3) raise key questions that can be asked within the clinical and translational science domain utilizing this framework, and (4) provide recommendations for life course research implementation, charting the way forward. Recent advances in computational analytics, computer science, and data collection could be used to approximate, measure, and analyze the intertwining and dynamic nature of genetic and environmental factors involved in health development.
Acute change in mental status (ACMS), defined by the Confusion Assessment Method, is used to identify infections in nursing home residents. A medical record review revealed that none of 15,276 residents had an ACMS documented. Using the revised McGeer criteria with a possible ACMS definition, we identified 296 residents and 21 additional infections. The use of a possible ACMS definition should be considered for retrospective nursing home infection surveillance.
We present a detailed analysis of the radio galaxy PKS $2250{-}351$, a giant of 1.2 Mpc projected size, its host galaxy, and its environment. We use radio data from the Murchison Widefield Array, the upgraded Giant Metre-wavelength Radio Telescope, the Australian Square Kilometre Array Pathfinder, and the Australia Telescope Compact Array to model the jet power and age. Optical and IR data come from the Galaxy And Mass Assembly (GAMA) survey and provide information on the host galaxy and environment. GAMA spectroscopy confirms that PKS $2250{-}351$ lies at $z=0.2115$ in the irregular, and likely unrelaxed, cluster Abell 3936. We find its host is a massive, ‘red and dead’ elliptical galaxy with negligible star formation but with a highly obscured active galactic nucleus dominating the mid-IR emission. Assuming it lies on the local M–$\sigma$ relation, it has an Eddington accretion rate of $\lambda_{\rm EDD}\sim 0.014$. We find that the lobe-derived jet power (a time-averaged measure) is an order of magnitude greater than the hotspot-derived jet power (an instantaneous measure). We propose that over the lifetime of the observed radio emission (${\sim} 300\,$Myr), the accretion has switched from an inefficient advection-dominated mode to a thin disc efficient mode, consistent with the decrease in jet power. We also suggest that the asymmetric radio morphology is due to its environment, with the host of PKS $2250{-}351$ lying to the west of the densest concentration of galaxies in Abell 3936.
Psychiatric disorders, particularly mood disorders, have a profound effect on the use of and adherence to highly active antiretroviral therapy (HAART) among patients with human immunodeficiency virus (HIV) infection.
HIV infection and mood disorders have features in common, and each is a significant risk factor for the other.
Objective
The objective is to highlight to clinicians the importance of screening for and treating affective disorders among patients with HIV infection.
Methods
Two cases of HIV-infected patients with comorbid mood disorders and unfavourable clinical courses due to poor adherence to treatment are reported.
A brief review of the literature on this subject is also presented.
Results
Major depression has been shown to alter the function of killer lymphocytes in HIV-infected patients and may be associated with the progression of HIV disease.
HIV-positive patients with mental disorders are less likely to receive, and adhere to, antiretroviral therapy.
First case report: a 52-year-old man, HIV-positive since 1985, with comorbid bipolar disorder and recurrent depressive episodes, whose poor adherence to both treatments led to a rapid exitus letalis.
Second case report: a 45-year-old man, HIV-positive since 1992, with a comorbid depressive disorder, non-adherent to both therapies, who developed HIV-associated dementia.
Conclusions
Depressive disorders are common in HIV infection. Antiretroviral regimens for HIV-infected patients require strict adherence. Untreated depression has been associated with medication nonadherence. Understanding the contribution of depression, and of its treatment, to antiretroviral therapy adherence might direct clinicians toward earlier identification and more aggressive treatment of affective disorders in this population.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5 000 deg$^2$; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20 000 deg$^2$ from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg$^2$ from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$. These surveys will also provide highly complementary clustering and weak lensing measurements that have independent systematic uncertainties to those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST, leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys, and will be regularly updated by the Cosmology Science Working Group in the run-up to the start of operations and the Key Science Programme of SKA1.
The European conquest and colonization of the Caribbean precipitated massive changes in indigenous cultures and societies of the region. One of the earliest changes was the introduction of new plant and animal foods and culinary traditions. This study presents the first archaeological reconstruction of indigenous diets and foodways in the Caribbean spanning the historical divide of 1492. We use multiple isotope datasets to reconstruct these diets and investigate the potential relationships between dietary and mobility patterns at multiple scales. Dietary patterns are assessed by isotope analyses of different skeletal elements from the archaeological skeletal population of El Chorro de Maíta, Cuba. This approach integrates carbon and nitrogen isotope analyses of bone and dentine collagen with carbon and oxygen isotope analyses of bone and enamel apatite. The isotope results document extreme intrapopulation dietary heterogeneity but few systematic differences in diet between demographic/social groups. Comparisons with published isotope data from other precolonial and colonial period populations in the Caribbean indicate distinct dietary and subsistence practices at El Chorro de Maíta. The majority of the local population consumed more animal protein resources than other indigenous populations in the Caribbean, and their overall dietary patterns are more similar to colonial period enslaved populations than to indigenous ones.
Nearly half of care home residents with advanced dementia have clinically significant agitation. Little is known about costs associated with these symptoms toward the end of life. We calculated monetary costs associated with agitation from UK National Health Service, personal social services, and societal perspectives.
Design:
Prospective cohort study.
Setting:
Thirteen nursing homes in London and the southeast of England.
Participants:
Seventy-nine people with advanced dementia (Functional Assessment Staging Tool grade 6e and above) residing in nursing homes, and thirty-five of their informal carers.
Measurements:
Data were collected at study entry and monthly for up to 9 months, and extrapolated to give annual figures. Agitation was assessed using the Cohen-Mansfield Agitation Inventory (CMAI). Health and social care costs of residing in care homes, and costs of contacts with health and social care services, were calculated from national unit costs; for a societal perspective, costs of providing informal care were estimated using the Resource Utilization in Dementia (RUD)-Lite scale.
Results:
After adjustment, health and social care costs, and costs of providing informal care varied significantly by level of agitation as death approached, from £23,000 over a 1-year period with no agitation symptoms (CMAI agitation score 0–10) to £45,000 at the most severe level (CMAI agitation score >100). On average, agitation accounted for 30% of health and social care costs. Informal care costs were substantial, constituting 29% of total costs.
Conclusions:
With the increasing prevalence of dementia, costs of care will impact on healthcare and social services systems, as well as informal carers. Agitation is a key driver of these costs in people with advanced dementia presenting complex challenges for symptom management, service planners, and providers.
Nuclear star clusters hosted by dwarf galaxies exhibit characteristics similar to those of high-mass, metal-complex globular clusters. Such globular clusters could therefore be the former nuclei of accreted galaxies. M54 resides at the photometric center of the Sagittarius dwarf galaxy, at a distance where resolving individual stars is possible. M54 thus offers the opportunity to study a nucleus before the stripping of its host by the tidal field of the Milky Way. We use a MUSE data set to perform a detailed analysis of over 6600 stars. We characterize the stars by metallicity, age, and kinematics, identifying the presence of three stellar populations: a young metal-rich (YMR), an intermediate-age metal-rich (IMR), and an old metal-poor (OMP) population. The evidence suggests that the OMP population is the result of accretion of globular clusters in the center of the host, while the YMR population was born in situ in the center of the OMP population.
The majority of paediatric Clostridioides difficile infections (CDI) are community-associated (CA), but few data exist regarding associated risk factors. We conducted a case–control study to evaluate CA-CDI risk factors in young children. Participants were enrolled from eight US sites during October 2014–February 2016. Case-patients were defined as children aged 1–5 years with a positive C. difficile specimen collected as an outpatient or ⩽3 days of hospital admission, who had no healthcare facility admission in the prior 12 weeks and no history of CDI. Each case-patient was matched to one control. Caregivers were interviewed regarding relevant exposures. Multivariable conditional logistic regression was performed. Of 68 pairs, 44.1% were female. More case-patients than controls had a comorbidity (33.3% vs. 12.1%; P = 0.01); recent higher-risk outpatient exposures (34.9% vs. 17.7%; P = 0.03); recent antibiotic use (54.4% vs. 19.4%; P < 0.0001); or recent exposure to a household member with diarrhoea (41.3% vs. 21.5%; P = 0.04). In multivariable analysis, antibiotic exposure in the preceding 12 weeks was significantly associated with CA-CDI (adjusted matched odds ratio, 6.25; 95% CI 2.18–17.96). Improved antibiotic prescribing might reduce CA-CDI in this population. Further evaluation of the potential role of outpatient healthcare and household exposures in C. difficile transmission is needed.
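The adjusted matched odds ratio above comes from multivariable conditional logistic regression; for intuition, in a 1:1 matched design the crude conditional estimate for a single binary exposure reduces to the ratio of the two kinds of discordant pairs (only pairs in which exactly one member was exposed carry information). A minimal sketch with hypothetical pair counts, not the study's data:

```python
def matched_odds_ratio(case_exposed_only, control_exposed_only):
    """Crude conditional ML odds ratio for 1:1 matched pairs: the ratio of
    (case exposed, control unexposed) pairs to (case unexposed, control
    exposed) pairs. Concordant pairs drop out of the estimate."""
    if control_exposed_only == 0:
        raise ValueError("undefined: no pairs with only the control exposed")
    return case_exposed_only / control_exposed_only

# Hypothetical counts: 30 pairs where only the case had recent antibiotic
# exposure, 6 pairs where only the control did.
or_hat = matched_odds_ratio(30, 6)  # 5.0
```

The regression-based estimate in the study additionally adjusts for the other exposures, which is why it differs from what a crude discordant-pair ratio would give.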
The parent-child relationship undergoes substantial reorganization over the transition to adolescence. Navigating this change is a challenge for parents because teens desire more behavioral autonomy as well as input in decision-making processes. Although it has been demonstrated that changes in parental socialization approaches facilitate adolescent adjustment, very little work has been devoted to understanding the underlying mechanisms supporting parents’ abilities to adjust caregiving during this period. Guided by self-regulation models of parenting, the present study examined how parental physiological and cognitive regulatory capacities were associated with hostile and insensitive parent conflict behavior over time. From a process-oriented perspective, we tested the explanatory role of parents’ dysfunctional child-oriented attributions in this association. A sample of 193 fathers, mothers, and their early adolescents (ages 12–14) participated in laboratory-based research assessments spaced approximately 1 year apart. Parental physiological regulation was measured using the root mean square of successive differences (RMSSD) during a conflict task; cognitive regulation was indicated by set-shifting capacity. Results showed that parental difficulties in vagal regulation during parent-adolescent conflict were associated with increased hostile conflict behavior over time; however, greater set-shifting capacity moderated this association for fathers only. In turn, fathers’ dysfunctional attributions regarding adolescent behavior mediated this moderating effect. The results highlight how models of self-regulation and social cognition may explain the determinants of hostile parenting, with differential implications for fathers during adolescence.
The second year of life is a period of nutritional vulnerability. We aimed to investigate the dietary patterns and nutrient intakes from 1 to 2 years of age during the 12-month follow-up period of the Growing Up Milk – Lite (GUMLi) trial. The GUMLi trial was a multi-centre, double-blinded, randomised controlled trial of 160 healthy 1-year-old children in Auckland, New Zealand and Brisbane, Australia. Dietary intakes were collected at baseline, 3, 6, 9 and 12 months post-randomisation, using a validated FFQ. Dietary patterns were identified using principal component analysis of the frequency of food item consumption per d. The effect of the intervention on dietary patterns and on the intake of eleven nutrients over the duration of the trial was investigated using random effects mixed models. A total of three dietary patterns were identified at baseline: ‘junk/snack foods’, ‘healthy/guideline foods’ and ‘breast milk/formula’. A significant group difference was observed in ‘breast milk/formula’ dietary pattern z scores at 12 months post-randomisation, where those in the GUMLi group loaded more positively on this pattern, suggesting more frequent consumption of breast milk. No difference was seen in the other two dietary patterns. Significant intervention effects were seen on nutrient intake between the GUMLi (intervention) and cows’ milk (control) groups, with lower protein and vitamin B12 intake, and higher Fe, vitamin D, vitamin C and Zn intake, in the GUMLi group. The consumption of GUMLi did not affect dietary patterns; however, GUMLi participants had lower protein intake and higher Fe, vitamin D, vitamin C and Zn intake at 2 years of age.