Flumioxazin and S-metolachlor are widely used in conventional sweetpotato production in North Carolina and other states; however, some growers have recently expressed concerns about potential effects of these herbicides on sweetpotato yield and quality. Previous research indicates that activated charcoal has the potential to reduce herbicide injury. Field studies were conducted in 2021 and 2022 to determine whether flumioxazin applied preplant and S-metolachlor applied before and after transplanting negatively affect sweetpotato yield and quality when activated charcoal is applied with transplant water. The studies evaluated five herbicide treatments and two activated charcoal treatments. Herbicide treatments included two flumioxazin rates, one S-metolachlor rate applied immediately before and immediately after transplanting, and no herbicide. Charcoal treatments consisted of activated charcoal applied at 9 kg ha−1, and no charcoal. No visual injury from herbicides or charcoal was observed. Likewise, no effect of herbicide or charcoal treatment on no. 1, marketable (sum of no. 1 and jumbo grades), or total yield (sum of canner, no. 1, and jumbo grades) was observed. Additionally, shape analysis conducted on calculated length-to-width ratio (LWR) for no. 1 sweetpotato roots found no effect from flumioxazin at either rate on sweetpotato root shape. However, both S-metolachlor treatments resulted in lower LWR of no. 1 sweetpotato roots in 2021. Results are consistent with prior research and indicate that flumioxazin and S-metolachlor are safe for continued use on sweetpotato at registered rates.
Identifying children and adolescents at highest risk of developing chronic depression is of utmost importance, so that more effective and targeted interventions can be developed to attenuate the risk trajectory of depression. To address this, the objective of this study was to identify young people with persistent depressive symptoms across adolescence and young adulthood and to examine prospective associations between biological, psychological and social factors and persistent depressive symptoms in young people.
Methods
We used data from 6711 participants in the Avon Longitudinal Study of Parents and Children. Depressive symptoms were assessed at 12.5, 13.5, 16, 17.5, 21 and 22 years with the Short Mood and Feelings Questionnaire, and we further examined the influence of multiple biological, psychological and social factors in explaining chronic depressive symptoms.
Results
Using latent class growth analysis, we identified four trajectories of depressive symptoms: persistent high, persistent low, persistent moderate and increasing high. Using a series of logistic regression models, we found that loneliness and feeling less connected at school were the factors most strongly associated with a chronic course of depressive symptoms.
Conclusions
Our findings contribute to the identification of children at highest risk of developing chronic depressive symptoms.
Background: Minimally invasive endoscopic techniques via the transorbital approach (ETOA) have emerged as a promising alternative for addressing skull base tumours. This study aims to present our institution’s extensive experience with ETOA, detailing the surgical technique employed and reporting comprehensive patient outcomes. Methods: A retrospective analysis was conducted on data from patients who underwent ETOA within the past five years. Results: Over the study period, 24 ETOA procedures were performed on 21 patients (mean age 48.9 years; 13 women). The superior orbital corridor was utilized in 95.83% of cases, and in 79.17% ETOA was complemented by a transnasal approach. Spheno-orbital meningioma was the most common surgical indication (33.33%, n=8), with vision improvement in all such cases, followed by lateral frontal sinus mucocele (25%, n=6). The median length of stay was one day, and ETOA achieved the procedural goal in 19 patients. Transient V1 numbness was the most common complication (29.17%, n=7), and 5 patients (20.83%) required another surgery. Notably, no mortality was associated with this procedure. Conclusions: Our institution’s experience underscores the notable safety and efficacy potential of ETOA, with 19 of 21 patients exhibiting positive outcomes, obviating the need for revision surgery in most cases.
Greenhouse trials were conducted to determine the response of stevia to reduced-risk synthetic and nonsynthetic herbicides applied over-the-top post-transplant. In addition, field trials were conducted with stevia grown in a polyethylene mulch production system to determine crop response and weed control in planting holes with reduced-risk synthetic and nonsynthetic herbicides applied post-transplant directed. Treatments included caprylic acid plus capric acid, clove oil plus cinnamon oil, d-limonene, acetic acid (200 grain), citric acid, pelargonic acid, eugenol, ammonium nonanoate, and ammoniated soap of fatty acids. Stevia yield (dry aboveground biomass) in the greenhouse was reduced by all herbicide treatments. Citric acid and clove oil plus cinnamon oil were the least injurious, reducing yield by 16% and 20%, respectively. In field studies, d-limonene, pelargonic acid, ammonium nonanoate, and ammoniated soap of fatty acids controlled Palmer amaranth (>90%) 1 wk after treatment (WAT). In field studies, caprylic acid plus capric acid, pelargonic acid, and ammonium nonanoate caused >30% injury to stevia plants at 2 WAT, and d-limonene, citric acid, acetic acid, and ammoniated soap of fatty acids caused 18% to 25% injury at 2 WAT. Clove oil plus cinnamon oil and eugenol caused <10% injury. Despite being injurious, herbicides applied in the field did not reduce yield compared to the nontreated check. Based upon yield data, these herbicides have potential for use in stevia; however, these products could delay harvest if applied to established stevia. In particular, clove oil plus cinnamon oil has potential for early-season weed management in organic production systems. Applied over-the-top, clove oil plus cinnamon oil resulted in <10% injury 28 d after treatment (DAT) in the greenhouse, and applied postemergence-directed in the field it caused 3% injury at 6 WAT. In addition, this treatment provided 95% control of Palmer amaranth at 4 WAT.
This article examines the development, early operation and subsequent failure of the Tot-Kolowa Red Cross irrigation scheme in Kenya’s Kerio Valley. Initially conceived as a technical solution to address regional food insecurity, the scheme aimed to scale up food production through the implementation of a fixed pipe irrigation system and the provision of agricultural inputs for cash cropping. A series of unfolding circumstances, however, necessitated numerous modifications to the original design as the project became increasingly entangled with deep and complex histories of land use patterns, resource allocation and conflict. Failure to understand the complexity of these dynamics ultimately led to the project’s collapse as the region spiralled into a period of significant unrest. In tracing these events, we aim to foreground the lived realities of imposed development, including both positive and negative responses to the scheme’s participatory obligations and its wider impact on community resilience.
The 001 spacing of Na-smectite was found to vary from 9.6 Å at 0% relative humidity (RH) to 12.4 Å at 60-65% RH. The 9.6-Å spacing corresponds to dehydrated Na-smectite, and the 12.4-Å spacing corresponds to Na-smectite with one water layer. A regular series of intermediate values resulted from ordered interstratification of the 9.6- and 12.4-Å units. Ordered interstratification was confirmed by the presence of a 001 spacing of 9.6 + 12.4 Å = 22 Å. This peak appeared under experimental conditions at about 35% RH and in calculated simulations of ordered stacking of 50/50 mixtures (±10%) of 9.6- and 12.4-Å units. The 004 peak of this 22-Å spacing interacted with the 002 of the 9.6-Å spacing in ordered mixtures of more than 50% 9.6-Å units, and with the 002 of the 12.4-Å spacing in ordered mixtures of more than 50% 12.4-Å units. The result of this interaction was a complex peak whose position was a function of the ratio of 9.6- to 12.4-Å units. This complex peak was noted under both experimental and calculated conditions. Calculated tracings assuming ordered stacking matched the experimental tracings closely, whereas those assuming random stacking did not.
Ordering was apparently due to the interaction of the positive charge of the interlayer cation repelling the positive charge of the hydrogens of the hydroxyl ions, one above and one below, closest to the interlayer space. The collapse of a single interlayer space (dehydration) brought the interlayer cation closer to the hydrogens of the hydroxyls causing the hydroxyls to rotate such that the hydrogens shifted toward the adjacent interlayer spaces. Collapse of these two interlayer spaces was therefore more difficult. This same mechanism helps explain ordering in illite/smectite. The difference is that hydration/dehydration is quick and reversible, whereas the change from smectite to illite is slow and irreversible.
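The spacing arithmetic above is easy to check directly. A minimal sketch (Python; it uses only the layer spacings quoted in the abstract, and the d(00l) = d(001)/l relation for higher-order reflections) computes the superstructure spacing and the reflections whose overlap produces the complex peak:

```python
# Illustrative sketch of the 00l spacing arithmetic for ordered
# interstratification of dehydrated (9.6-A) and one-water-layer
# (12.4-A) Na-smectite units.

D_DRY = 9.6   # Angstroms, dehydrated unit
D_1W = 12.4   # Angstroms, one-water-layer unit

# Ordered 50/50 alternation gives a superstructure whose 001 spacing
# is the sum of the two unit spacings:
d_super = D_DRY + D_1W          # 22.0 A

# Higher orders follow d(00l) = d(001) / l
d_004_super = d_super / 4       # 004 of the 22-A superstructure
d_002_dry = D_DRY / 2           # 002 of the 9.6-A end member
d_002_1w = D_1W / 2             # 002 of the 12.4-A end member

# The 004 of the superstructure (5.5 A) lies between the 002
# reflections of the end members (4.8 and 6.2 A), so the composite
# peak migrates toward whichever component dominates the mixture.
print(d_super, d_004_super, d_002_dry, d_002_1w)
```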
Converging evidence across languages suggests that the word length effect (WLE; the decline in recall as the number of syllables, phonemes, or pronunciation time per word increases) significantly contributes to estimates of verbal working memory (WM) capacity limits in the storage phase, but not in the manipulation phase (i.e., word length effect decay), of WM. Direct examination of the WLE on verbal WM performance among monolingual Spanish speakers has not been reported. We investigated the psychophysical mechanisms of capacity consumption in Spanish speakers across three syllabic word lengths to clarify the relative contributions of the WLE to the storage (digit span forward) versus manipulation (digit span backward) phases of memory within one language of monolingual speakers.
Participants and Methods:
Monolingual Spanish-speaking adults (N = 84) born in Latin American countries and aged 18-65 completed testing over Zoom. Inclusion criteria required proficiency in Spanish; exclusion criteria were bilingualism, multilingualism, TONI-4 IQ < 85, or a history of head injury or loss of consciousness (LOC). A within-group design measured the WLE across three cognitive load conditions in the forward and backward directions of the digit span test varying in Spanish syllabic word length: the Mexican WAIS-IV Digit Span Test (“Standard Load”), and two modified measures with either a ∼20% decrease (“Low Load”) or ∼20% increase (“High Load”) in total syllables per digit relative to the Standard Load.
Results:
A reverse WLE was observed on syllable accuracy percentage task performance (p < 0.01), such that longer word length led to higher capacity limits during storage WM. A WLE, not decay, was found on both raw score (p < .001) and syllable accuracy percentage (p < 0.01) task performances during manipulation WM, where longer word length led to lower capacity limits.
Conclusions:
The reverse WLE was attributed to higher-order, executive-function cognitive strategies (such as chunking) that superseded negative word length effects. A larger syllabic discrepancy during manipulation WM could have superseded executive-function strategies, rendering a traditional WLE. Our study contributed more precise capacity estimates and clearer understanding of successful WM performance within monolingual, Latin American-born Spanish-speakers, helping to reduce cultural disparities in neurocognitive and neuropsychological research. Future studies may extend these findings to examine how WM capacity resources can be harnessed to improve memory strategies in clinically-applied settings with Spanish-speaking populations.
Older people with HIV (PWH) are at risk for Alzheimer’s disease (AD) and its precursor, amnestic mild cognitive impairment (aMCI). Identifying aMCI among PWH is challenging because memory impairment is also common in HIV-associated neurocognitive disorders (HAND). The neuropathological hallmarks of aMCI/AD are amyloid-β42 (Aβ42) plaque and phosphorylated tau (p-tau) accumulation. Neurofilament light chain protein (NfL) is a marker of neuronal injury in AD and other neurodegenerative diseases. In this study, we assessed the prognostic value of the CSF AD pathology markers (lower Aβ42; higher p-tau, p-tau/Aβ42 ratio, and NfL levels) for identifying an aMCI-like profile among older PWH and differentiating it from HAND. We assessed the relationship between aMCI and HAND diagnoses and AD biomarker levels.
Participants and Methods:
Participants included 74 PWH (mean age=48 [SD=8.5]; 87.4% male, 56.5% White) from the National NeuroAIDS Tissue Consortium (NNTC). CSF Aβ42, Aβ40, p-tau and NfL were measured by commercial immunoassay. Participants completed a neurocognitive evaluation assessing the domains of learning, recall, executive function, speed of information processing, working memory, verbal fluency, and motor skills. Memory domains were assessed with the Hopkins Verbal Learning Test-Revised (HVLT-R) and the Brief Visuospatial Memory Test-Revised (BVMT-R), and aMCI was defined as impairment (<1.0 SD below the normative mean) on two or more memory outcomes among HVLT-R and BVMT-R learning, delayed recall and recognition, with at least one recognition impairment required. HAND was defined as impairment (<1.0 SD below the normative mean) in 2 or more cognitive domains. A series of separate linear regression models was used to examine how levels of CSF p-tau, Aβ42, p-tau/Aβ42 ratio, and NfL relate to aMCI and HAND status while controlling for demographic variables (age, gender, race and education). Covariates were excluded from the model if they did not reach statistical significance.
Results:
Fifty-eight percent of participants were diagnosed with HAND and 50.5% with aMCI. PWH with aMCI had higher CSF p-tau/Aβ42 ratios than PWH without aMCI (β=.222, SE=.001, p=.043) while controlling for age (β=.363, p=.001). No other AD biomarker differed significantly by aMCI or HAND status.
Conclusions:
Our results indicate that the CSF p-tau/Aβ42 ratio relates specifically to an aMCI-like profile among PWH with high rates of cognitive impairment across multiple domains in this advanced HIV disease cohort. Thus, the p-tau/Aβ42 ratio may have utility in disentangling aMCI from HAND and informing the need for further diagnostic procedures and intervention. Further research is needed to identify, among a broader group of PWH, who is at greatest risk for aMCI/AD and whether risk for aMCI/AD is increased among PWH compared to those without HIV.
To explore the relationship between a motor programming and sequencing procedure and informant ratings of patients’ functional abilities, especially driving. The Fist-Edge-Palm (FEP; Luria, 1970; 1980) task has previously demonstrated merit in distinguishing between healthy controls and those with neurodegenerative processes (Weiner et al., 2011). However, associations between FEP performance and informant-rated functional status, particularly driving ability, have been minimally reported. This exploratory review examined the relationships between FEP performance, informant-rated driving ability, overall functional impairment, and neurocognitive diagnostic severity.
Participants and Methods:
Forty-one Veterans seen in a South-Central VA Memory Clinic between 08/2020 and 07/2022 served as participants. Neuropsychological assessment included gathering demographic information, a chairside neurobehavioral examination (including FEP), cognitive testing, and a collateral informant-completed Functional Activities Questionnaire (FAQ). Diagnostic severity [no diagnosis, mild cognitive impairment (MCI), dementia (MNCD)] was determined based on the patient's cognitive and functional deficits as measured by neuropsychological testing and informant-rated functional deficits. Correlational analyses were conducted to examine the strength of relationships between FEP performance, diagnostic severity, and informant-rated functional status, including driving impairment. Linear regression analyses determined the extent to which diagnostic severity and FEP performance predict informant-reported driving and ADL impairments.
Results:
Participants were 97.5% male, 78% White, and 22% Black. Diagnostically, 3 patients received no diagnosis, 14 were diagnosed with MCI, and 24 with MNCD. Spearman rank correlations were computed; FEP performance was moderately negatively correlated with diagnostic severity [rho = -.35; p < .05] and driving impairment [rho = -.31; p < .05]. Diagnostic severity was moderately positively correlated with driving [rho = .44; p < .05] and total functional [rho = .65; p < .05] impairment. Total functional impairment was positively correlated with reported driving impairment [rho = .58; p < .05]. Simple linear regressions tested whether FEP performance and diagnostic severity independently predicted informant-reported driving and functional impairment. FEP performance predicted diagnostic severity (R2 = .12, p < .05) and reported driving impairment severity (R2 = .10, p < .05) but did not predict total functional impairment severity (R2 = .06, p = .14). Diagnostic severity predicted both informant-reported driving impairment severity (R2 = .16, p < .05) and functional severity (R2 = .30, p < .05). Multiple regression tested whether diagnostic severity and FEP performance together were more predictive of driving and functional impairment than either alone; the overall model predicted driving (R2 = .19, p < .05) and total functional (R2 = .30, p < .05) impairment, but only diagnostic severity significantly predicted reported driving (B = .63, p < .05) and functional (B = 6.25, p < .05) impairments.
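For readers less familiar with the statistic used above: Spearman's rho is a Pearson correlation computed on ranks (with ties assigned average ranks). A minimal, self-contained sketch (Python; the data values below are invented toy numbers, not the study's data):

```python
# Spearman rank correlation from scratch, with average ranks for ties.

def rank(xs):
    """Return 1-based average ranks of xs, ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy example: hypothetical FEP scores (higher = better performance)
# against diagnostic severity (0 = none, 1 = MCI, 2 = dementia).
fep = [3, 2, 3, 1, 0, 2, 1]
severity = [0, 1, 0, 2, 2, 1, 2]
print(round(spearman_rho(fep, severity), 2))  # strongly negative
```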
Conclusions:
FEP performance was associated with diagnosis and with collateral informant concerns about patient driving ability, but was not statistically related to overall functional impairment or nondriving ADLs. FEP demonstrates utility in identifying patients with concerning driving fitness per collateral informants and in gauging diagnostic severity, given its rapid administration, the ease of instructing providers, and its applicability in a wide variety of clinical settings when a caregiver or informant may not be available. Future directions include further examining the relationship between FEP and driving ability and exploring associations between FEP and other neuropsychological instruments.
Children born to mothers infected with human immunodeficiency virus (HIV) during pregnancy experience increased risk of neurocognitive impairment. In Botswana, HIV infection is common, but standardized cognitive testing is limited. The Penn Computerized Neurocognitive Battery (PennCNB) is a widely used cognitive test battery that streamlines evaluation of neurocognitive functioning. Our group translated and culturally adapted the PennCNB for use among children and adolescents in this high-burden, low-resource setting. The current study examined the construct validity and sensitivity to HIV infection and exposure of the culturally adapted PennCNB among a cohort of HIV-affected children and adolescents in Gaborone, Botswana.
Participants and Methods:
628 school-aged children aged 7-17 years (n=223 children living with HIV [HIV+]; n=204 HIV-exposed, uninfected [HEU]; and n=201 HIV-unexposed, uninfected [HUU]) completed the PennCNB. Participants were recruited from a clinic specializing in the care and treatment of HIV+ children and adolescents in Gaborone, Botswana, as well as from local schools. Confirmatory factor analyses were performed on efficiency measures for 13 PennCNB tests. Multiple regressions examined associations between HIV status and neurocognitive functioning while controlling for age and sex. Multivariate normative comparisons examined rates of overall cognitive impairment by comparing individual profiles of test scores to the multivariate distribution of test scores, using age-normed data from the HUU group.
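The multivariate normative comparison mentioned above is typically operationalized as a Mahalanobis distance between an individual's score profile and the normative mean, tested against a chi-square criterion. A toy two-test sketch (Python; the means, covariance, critical value choice, and profile below are hypothetical illustrations, not the study's data or implementation):

```python
# Flag a two-test score profile as globally impaired when its squared
# Mahalanobis distance from the normative mean exceeds the chi-square
# critical value (df = number of tests).

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance for a 2-dimensional profile,
    using the analytic 2x2 matrix inverse."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    diff = [x[0] - mean[0], x[1] - mean[1]]
    tmp = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
           inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    return diff[0] * tmp[0] + diff[1] * tmp[1]

norm_mean = [50.0, 50.0]        # hypothetical normative T-score means
norm_cov = [[100.0, 40.0],      # SD = 10 per test, inter-test r = 0.4
            [40.0, 100.0]]
CHI2_CRIT_2DF_05 = 5.991        # chi-square critical value, df=2, alpha=.05

profile = [28.0, 26.0]          # a uniformly low toy profile
d2 = mahalanobis_sq(profile, norm_mean, norm_cov)
impaired = d2 > CHI2_CRIT_2DF_05
print(round(d2, 2), impaired)
```

The correlated covariance matters: a profile that is low on every test in a correlated battery is less extreme, multivariately, than the same scores would be on independent tests.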
Results:
Confirmatory factor analysis supported four hypothesized neurocognitive domains: executive functioning, episodic memory, complex cognition, and sensorimotor/processing speed. As expected, there were main effects of age on cognitive performance across all domains (ps < .001), and there were small sex differences, with females performing better in executive functioning and males performing better on visuospatial processing. Children and adolescents living with HIV performed significantly worse than HUU across all domains (ps < .001), with the largest effect sizes on measures of abstraction, working memory, and processing speed. HEU also performed worse than HUU across several domains, with smaller effect sizes. Multivariate normative comparisons indicated that 27% of the HIV+ group evidenced global neurocognitive impairment.
Conclusions:
Overall, results support the validity of a neurocognitive battery adapted for use in Botswana, a non-Western, resource-limited setting. Results indicated that the adapted battery applied to children and adolescents with limited computer familiarity had a similar factor structure as in Western settings, indicating that the PennCNB appeared to assess the hypothesized neurocognitive domains. Hypothesized associations with age and sex supported the battery’s construct validity. Moreover, the battery appears to be sensitive to cognitive impairments associated with perinatally-acquired HIV and in utero HIV-related exposures, as it discriminated between the HUU, HIV+, and HEU groups. Differences were found in specific domains and in detection of overall impairment, including approximately one quarter of children and adolescents living with HIV in this cohort evidencing global neurocognitive impairment. Together, these results provide evidence that the PennCNB could serve as a useful tool for the assessment of neurocognitive functioning in school-aged children and adolescents from Botswana and, potentially, other resource-limited settings.
The Arcanum mission is a proposed L-class mother-daughter spacecraft configuration for the Neptunian system, the mass and volume of which have been maximised to highlight the wide-ranging science the next generation of launch vehicles will enable. The spacecraft is designed to address a long-neglected but high-value region of the outer Solar System, showing that current advances make such a mission more feasible than ever before. This paper adds to a series on Arcanum and specifically provides progress on the study of areas identified as critical weaknesses by the 2013–2022 decadal survey and areas relevant to the recently published Voyage 2050 recommendations to the European Space Agency (ESA).
Background: LivaNova 3T heating and cooling devices (HCDs) have been associated with infections by Mycobacterium chimaera, a Mycobacterium avium-intracellulare complex (MAIC) species, after cardiothoracic surgery. We describe our outbreak, which persisted despite escalating infection control measures. Methods: We identified patients with a positive MAIC culture following cardiothoracic surgery from January 2015 to the present at our institution. We classified these as “definite,” “possible,” or “operating room contamination” cases based on positive cultures from sterile sites, the airway, or surgical specimens without evidence of infection, respectively. To identify patient or surgery characteristics associated with risk for MAIC infection, we conducted a case–control study comparing definite cases to randomly selected unmatched controls of patients over the same period without a positive MAIC culture after cardiothoracic surgery. Results: We identified 26 patients with a positive MAIC culture after cardiothoracic surgery: 13 definite, 9 possible, and 4 contamination cases. Among definite cases, the most common surgeries were valve replacements and left ventricular assist devices (5 cases each). The mean time from cardiothoracic surgery to diagnosis was 525 days. Overall, 10 (77%) definite cases occurred after exposure to our oldest HCDs (manufactured in 2013 or earlier). To date, 16 (62%) patients have undergone or are undergoing treatment for MAIC infection, and 4 (15%) have died of nontuberculous mycobacterial (NTM) infection or its complications. Compared to 47 controls, definite cases were associated with chronic kidney disease, implants, procedure type, use of cardiopulmonary bypass, and HCD age. Cases were not associated with time on bypass, time in the operating room, or other comorbid conditions (Table). All cases occurred despite enhanced disinfection and reorienting the HCDs within the operating room according to manufacturer recommendations.
Moreover, 18 cases, including 7 definite cases, occurred after most HCDs were either deep cleaned or upgraded by the manufacturer. Also, 5 cases, including 3 possible cases and 2 contamination cases, occurred after physical separation of the HCD from the operating room. In August 2022, we purchased a fleet of glycol-cooled HCDs, and we have not identified additional MAIC cases since their deployment (Fig.). Conclusions: MAIC infections after cardiothoracic surgery were associated with procedure type, especially implants, use of cardiopulmonary bypass, and HCD age. Contrary to prior reports, neither operative nor CPB time was associated with MAIC infection after cardiothoracic surgery. The outbreak persisted despite disinfection and/or deep cleaning and reorienting HCDs within the operating room; some possible and contamination cases occurred even after moving HCDs outside the operating room. Thus, HCD water contamination events in the operating room (eg, spills from HCD tubing) may be a route of exposure, and different infection prevention measures are needed.
To assess the relationship between programme attendance in a produce prescription (PRx) programme and changes in cardiovascular risk factors.
Design:
The Georgia Food for Health (GF4H) programme provided six monthly nutrition education sessions, six weekly cooking classes and weekly produce vouchers. Participants became programme graduates by attending at least four of the six weekly cooking classes and at least four of the six monthly education sessions. We used a longitudinal, single-arm approach to estimate the association between the number of monthly programme visits attended and changes in health indicators.
Setting:
GF4H was implemented in partnership with a large safety-net health system in Atlanta, GA.
Participants:
Three hundred thirty-one participants living with or at risk of chronic disease and food insecurity were recruited from primary care clinics. Over three years, 282 participants graduated from the programme.
Results:
After adjusting for programme site, year, participant sex, age, race and ethnicity, Supplemental Nutrition Assistance Program participation and household size, we estimated that each additional programme visit attended beyond four visits was associated with a 0·06 kg/m2 reduction in BMI (95 % CI –0·12, –0·01; P = 0·02), a 0·37 inch reduction in waist circumference (95 % CI –0·48, –0·27; P < 0·001), a 1·01 mmHg reduction in systolic blood pressure (95 % CI –1·45, –0·57; P < 0·001) and a 0·43 mmHg reduction in diastolic blood pressure (95 % CI –0·69, –0·17; P = 0·001).
Conclusions:
Each additional cooking and nutrition education visit attended beyond the graduation threshold was associated with modest but significant improvements in CVD risk factors, suggesting that increased engagement in educational components of a PRx programme improves health outcomes.
The NIH National Center for Advancing Translational Science (NCATS) was established to support translational research spanning the entire translational science (TS) continuum, with the goal of bridging the gap between preclinical biomedical research and real-world applications to advance treatments to patients more quickly. In 2018, the Translational Science Training (TST) TL1 Program at the University of Texas Health Science Center at San Antonio implemented new strategies to better include and encourage research across the TS continuum, including the addition of postdoctoral scientists and a clinically trained Program Co-Director, expansion of team science and community engagement programming, and targeted trainee recruitment from schools of nursing, dentistry, and allied health, in addition to medicine. The objective of this bibliometric analysis was to determine whether the program exhibited a more diverse mix of T-types after the adjustments made in 2018. The TST/TL1 Program experienced a shift in T-type, from mostly T0 (preclinical) to more T3/T4 (clinical implementation/public health) research, after the new strategies were implemented. This supports the conclusion that strategic programmatic adjustments by an NCATS-funded predoctoral training program resulted in outcomes that better align with NCATS priorities to develop trainees who contribute across the entire TS continuum.