Migraine and post-traumatic stress disorder (PTSD) are both twice as common in women as men. Cross-sectional studies have shown associations between migraine and several psychiatric conditions, including PTSD. PTSD is disproportionally common among patients in headache clinics, and individuals with migraine and PTSD report greater disability from migraines and more frequent medication use. To further clarify the nature of the relationship between PTSD and migraine, we conducted bidirectional analyses of the association between (1) migraine and incident PTSD and (2) PTSD and incident migraine.
Methods
We used longitudinal data from 1989–2020 among the 33,327 Nurses’ Health Study II respondents to the 2018 stress questionnaire. We used log-binomial models to estimate the relative risk of developing PTSD among women with migraine and the relative risk of developing migraine among individuals with PTSD, trauma-exposed individuals without PTSD, and individuals unexposed to trauma, adjusting for race, education, marital status, high blood pressure, high cholesterol, alcohol intake, smoking, and body mass index.
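A minimal sketch of the log-binomial approach described above (not the authors' code; the simulated data and variable names are assumptions, and only a subset of the adjustment covariates is shown) is given below: a GLM with a binomial family and log link, so that exponentiated coefficients are relative risks rather than odds ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analytic dataset: one row per participant, with the outcome
# (incident PTSD), the exposure (history of migraine), and example covariates.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "incident_ptsd": rng.binomial(1, 0.09, n),
    "migraine":      rng.binomial(1, 0.48, n),
    "smoker":        rng.binomial(1, 0.10, n),
    "married":       rng.binomial(1, 0.70, n),
})

# Log-binomial model: binomial family with a log link, so exp(coefficient)
# is a relative risk. Remaining covariates would be added to the formula similarly.
model = smf.glm(
    "incident_ptsd ~ migraine + smoker + married",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
)
result = model.fit()

rr = np.exp(result.params["migraine"])
ci = np.exp(result.conf_int().loc["migraine"])
print(f"adjusted RR for migraine = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```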
Results
Overall, 48% of respondents reported ever experiencing migraine, 82% reported experiencing trauma and 9% met the Diagnostic and Statistical Manual of Mental Disorders-5 criteria for PTSD. Of those reporting migraine and trauma, 67% reported trauma before migraine onset, 2% reported trauma and migraine onset in the same year and 31% reported trauma after migraine onset. We found that migraine was associated with incident PTSD (adjusted relative risk [RR]: 1.26, 95% confidence interval [CI]: 1.14–1.39). PTSD, but not trauma without PTSD, was associated with incident migraine (adjusted RR: 1.20, 95% CI: 1.14–1.27). Findings were consistently stronger in both directions among those experiencing migraine with aura.
Conclusions
Our study provides further evidence that migraine and PTSD are strongly comorbid and found associations of similar magnitude between migraine and incident PTSD and PTSD and incident migraine.
Computerized clinical decision support (CDS) assists healthcare professionals in making decisions to improve patient care. In the realms of antimicrobial stewardship (ASP) and infection prevention (IP) programs, CDS interventions can play a crucial role in optimizing antibiotic prescribing practices, reducing healthcare-associated infections, and promoting diagnostic stewardship when optimally designed. This primer article aims to provide ASP and IP professionals with a practical framework for the development, design, and evaluation of CDS interventions.
Setting:
Large academic medical center
Design:
Established frameworks of CDS evaluation, the “Five Rights” of CDS and the “Ten Commandments of Effective Clinical Decision Support,” were applied to two real-world examples of CDS tools, a Vancomycin Best Practice Advisory and a Clostridioides difficile order panel, to demonstrate a structured approach to developing and enhancing the functionality of ASP/IP CDS interventions to promote efficacy and reduce unintended consequences of CDS.
Conclusions:
By outlining a structured approach for the development and evaluation of CDS interventions, with a focus on end-user engagement, efficiency, and feasibility, ASP and IP professionals can leverage CDS to enhance IP/ASP quality improvement initiatives aimed at improving antibiotic utilization, diagnostic stewardship, and adherence to IP protocols.
We present the Pilot Survey Phase 2 data release for the Wide-field ASKAP L-band Legacy All-sky Blind surveY (WALLABY), carried out using the Australian SKA Pathfinder (ASKAP). We present 1760 H i detections (with a default spatial resolution of 30′′) from three pilot fields including the NGC 5044 and NGC 4808 groups as well as the Vela field, covering a total of $\sim 180$ deg$^2$ of the sky and spanning redshifts up to $z \simeq 0.09$. This release also includes kinematic models for over 126 spatially resolved galaxies. The observed median rms noise in the image cubes is 1.7 mJy per 30′′ beam and 18.5 kHz channel. This corresponds to a 5$\sigma$ H i column density sensitivity of $\sim 9.1\times10^{19}(1 + z)^4$ cm$^{-2}$ per 30′′ beam and $\sim 20$ km s$^{-1}$ channel and a 5$\sigma$ H i mass sensitivity of $\sim 5.5\times10^8 (D/100$ Mpc)$^{2}$ M$_{\odot}$ for point sources. Furthermore, we present for the first time 12′′ high-resolution images (“cut-outs”) and catalogues for a sub-sample of 80 sources from the Pilot Survey Phase 2 fields. While we are able to recover sources with lower signal-to-noise ratios than in Public Data Release 1, we note that some data quality issues persist, notably flux discrepancies linked to the sidelobes of the dirty beams resulting from inadequate deconvolution. However, in spite of these limitations, the WALLABY Pilot Survey Phase 2 has already produced roughly a third of the number of HIPASS sources, making this the largest spatially resolved H i sample from a single survey to date.
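As a rough illustration of the H i mass scaling quoted in the preceding abstract, the sketch below applies the standard optically thin relation $M_{\rm HI}\,[{\rm M}_\odot] = 2.356\times10^5\,(D/{\rm Mpc})^2\,S_{\rm int}\,[{\rm Jy\,km\,s^{-1}}]$ to the quoted per-channel rms for a few assumed linewidths; the survey's exact sensitivity convention may differ, so the assumed widths are illustrative only.

```python
import numpy as np

def hi_mass(d_mpc: float, s_int_jy_kms: float) -> float:
    """H i mass in solar masses for an optically thin source (standard relation)."""
    return 2.356e5 * d_mpc**2 * s_int_jy_kms

def mass_limit_5sigma(rms_jy: float, chan_kms: float, width_kms: float, d_mpc: float) -> float:
    """5-sigma limit on M_HI assuming the line spans width_kms (assumed convention)."""
    n_chan = width_kms / chan_kms
    sigma_flux = rms_jy * chan_kms * np.sqrt(n_chan)   # uncertainty on integrated flux, Jy km/s
    return hi_mass(d_mpc, 5.0 * sigma_flux)

rms, chan, dist = 1.7e-3, 20.0, 100.0                  # values quoted in the abstract
for width in (40.0, 100.0, 200.0):                     # assumed linewidths (illustrative)
    limit = mass_limit_5sigma(rms, chan, width, dist)
    print(f"W = {width:5.0f} km/s -> 5-sigma M_HI limit ~ {limit:.2e} Msun")
```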
Herbicide drift to sensitive crops can result in significant injury, yield loss, and even crop destruction. When pesticide drift is reported to the Georgia Department of Agriculture (GDA), tissue samples are collected and analyzed for residues. Seven field studies were conducted in 2020 and 2021 in cooperation with the GDA to evaluate the effects of (1) the time interval between a simulated drift event and sampling, (2) low-dose herbicide rates, and (3) the sample collection method on the detection of herbicide residues in cotton (Gossypium hirsutum L.) foliage. Simulated drift rates of 2,4-D, dicamba, and imazapyr were applied to non-tolerant cotton at the 8- to 9-leaf stage, with plant samples collected at 7 or 21 d after treatment (DAT). Plant sampling consisted of removing entire plants or removing only the new growth occurring after the 7-leaf stage. Visual cotton injury from 2,4-D reached 43% and 75% at 0.001 and 0.004 kg ae ha−1, respectively; for dicamba, it was 9% and 41% at 0.003 and 0.014 kg ae ha−1, respectively; and for imazapyr, it was 1% and 74% at 0.004 and 0.03 kg ae ha−1, respectively. Yield loss was observed with both rates of 2,4-D (11% to 51%) and with the high rate of imazapyr (52%); dicamba did not influence yield. Herbicide residues were detected in 88%, 88%, and 69% of samples collected from plants treated with 2,4-D, dicamba, and imazapyr, respectively, at 7 DAT, compared with 25%, 16%, and 22% when samples were collected at 21 DAT, highlighting the importance of sampling quickly after a drift event. Although the interval between drift event and sampling, drift rate, and sampling method can all influence residue detection for 2,4-D, dicamba, and imazapyr, the factor with the greatest influence is the amount of time between drift and sample collection.
Most Australian school students take a packed lunch to school(1). However, parents have reported many barriers to packing a healthy lunch(2). Consequently, foods eaten during school hours are not consistent with the Australian Dietary Guidelines, with discretionary foods providing about 44% of energy consumed during this time(3). In addition, some children go to school without any food for lunch or money to buy lunch. The Tasmanian School Lunch Project provides free nutritious cooked lunches for Kinder to Year 10 students attending 30 government schools (15 commenced 2022, 15 commenced 2023) in areas of high socioeconomic disadvantage. The lunches were provided 1–3 days/week. The menu and recipes were designed by dietitians. This analysis aimed to describe parents’ perceptions of the School Lunch Project during the first year. Six of the 15 schools that commenced in term 2 2022 were invited, and agreed, to participate in the evaluation. During term 3 or 4 2022, parents completed online or written surveys (n = 159) and/or participated in discussion groups (n = 26) to share their thoughts on the menu, their concerns, likes, and willingness to pay. Survey data were analysed descriptively, and open-ended survey responses and discussion group data were analysed thematically. During 2022, 78,832 nutritious cooked lunches were provided to 1,678 students. Most parents felt there was enough variety on the menu (66%) and the right amount of food was served (69%). Most students (79%) ate the lunches every day they were provided, yet 52% of parents continued to provide a packed lunch. Parents enjoyed that their child was having a healthy lunch (66%) and trying new foods (74%). Some parents in the discussion groups indicated positive flow-on effects at home, with students trying new foods and sitting down together as a family to eat the evening meal. Half the parents (50%) had no concerns about the school providing lunches. The most commonly reported concerns were that their child might not like the food (36%) or that their child does not try new foods (8.6%). These concerns were also raised in the discussion groups. Most parents (93%) were prepared to pay for the lunches in future (median $3, range $1–$12) and 85% thought there should be a family discount. Parents acknowledged some payment was necessary for the sustainability of the program, but some expressed concern for those who may struggle to pay. More direct communication with families about the meals offered, the availability of bread (from term 4 2022) for students who choose not to eat the cooked lunch or want more to eat, and allowing families time to adjust to the new lunch system may address some of the concerns raised. Further data on parents’ perceptions of the school lunches will be collected during term 3 2023.
The brain can be represented as a network, with nodes as brain regions and edges as region-to-region connections. Nodes with the most connections (hubs) are central to efficient brain function. Current findings on structural differences in Major Depressive Disorder (MDD) identified using network approaches remain inconsistent, potentially due to small sample sizes. It is still uncertain at what level of the connectome hierarchy differences may exist, and whether they are concentrated in hubs, disrupting fundamental brain connectivity.
Methods
We utilized two large cohorts, UK Biobank (UKB, N = 5104) and Generation Scotland (GS, N = 725), to investigate MDD case–control differences in brain network properties. Network analysis was done across four hierarchical levels: (1) global, (2) tier (nodes grouped into four tiers based on degree) and rich club (between-hub connections), (3) nodal, and (4) connection.
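For readers unfamiliar with these graph metrics, the sketch below (not the authors' pipeline, which used weighted connectomes and dedicated toolboxes) illustrates on a toy binary connectome how global efficiency, degree-based tiers, and a rich-club coefficient of the kind described above can be computed; the parcellation size and connection density are arbitrary.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes = 84                                   # hypothetical parcellation size
adj = rng.random((n_nodes, n_nodes)) < 0.15    # toy binary connectivity matrix
adj = np.triu(adj, 1)
adj = adj + adj.T                              # symmetrise, no self-connections
G = nx.from_numpy_array(adj.astype(int))

# (1) global level: network efficiency over the whole connectome
print("global efficiency:", nx.global_efficiency(G))

# (2) tier level: group nodes into four tiers by degree quartile
degrees = np.array([d for _, d in G.degree()])
tiers = np.digitize(degrees, np.quantile(degrees, [0.25, 0.5, 0.75]))
for t in range(4):
    nodes = np.where(tiers == t)[0]
    sub_eff = nx.global_efficiency(G.subgraph(nodes))
    print(f"tier {t + 1}: {len(nodes)} nodes, within-tier efficiency {sub_eff:.3f}")

# rich club: density of connections among the highest-degree nodes (hubs)
rc = nx.rich_club_coefficient(G, normalized=False)
k_max = max(rc)
print(f"rich-club coefficient at k = {k_max}: {rc[k_max]:.3f}")
```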
Results
In UKB, reductions in network efficiency were observed in MDD cases globally (d = −0.076, pFDR = 0.033), across all tiers (d = −0.069 to −0.079, pFDR = 0.020), and in hubs (d = −0.080 to −0.113, pFDR = 0.013–0.035). No differences in rich club organization and region-to-region connections were identified. The effect sizes and direction for these associations were generally consistent in GS, albeit not significant in our lower-N replication sample.
Conclusion
Our results suggest that the brain's fundamental rich club structure is similar in MDD cases and controls, but subtle topological differences exist across the brain. Consistent with recent large-scale neuroimaging findings, our findings offer a connectomic perspective on a similar scale and support the idea that minimal differences exist between MDD cases and controls.
Interactions between the endocannabinoid system (ECS) and neurotransmitter systems might mediate the risk of developing a schizophrenia spectrum disorder (SSD). Consequently, we investigated in patients with SSD and healthy controls (HC) the relations between (1) plasma concentrations of two prototypical endocannabinoids (N-arachidonoylethanolamine [anandamide] and 2-arachidonoylglycerol [2-AG]) and (2) striatal dopamine synthesis capacity (DSC), and glutamate and γ-aminobutyric acid (GABA) levels in the anterior cingulate cortex (ACC). As anandamide and 2-AG might reduce the activity of these neurotransmitters, we hypothesized negative correlations between their plasma levels and the abovementioned neurotransmitters in both groups.
Methods
Blood samples were obtained from 18 patients and 16 HC to measure anandamide and 2-AG plasma concentrations. For all subjects, we acquired proton magnetic resonance spectroscopy scans to assess Glx (i.e. glutamate plus glutamine) and GABA+ (i.e. GABA plus macromolecules) concentrations in the ACC. Ten patients and 14 HC also underwent [18F]F-DOPA positron emission tomography for assessment of striatal DSC. Multiple linear regression analyses were used to investigate the relations between the outcome measures.
Results
A negative association between 2-AG plasma concentration and ACC Glx concentration was found in patients (p = 0.008). We found no evidence of other significant relationships between 2-AG or anandamide plasma concentrations and dopaminergic, glutamatergic, or GABAergic measures in either group.
Conclusions
Our preliminary results suggest an association between peripheral 2-AG and ACC Glx levels in patients.
To investigate the symptoms of SARS-CoV-2 infection, their dynamics, and their discriminatory power for the disease using longitudinally, prospectively collected information reported at the time of their occurrence, we analysed data from a large phase 3 UK COVID-19 vaccine clinical trial. The alpha variant was the predominant strain. Participants were assessed for SARS-CoV-2 infection via nasal/throat PCR at recruitment, at vaccination appointments, and when symptomatic. Statistical techniques were implemented to infer estimates representative of the UK population, accounting for multiple symptomatic episodes associated with one individual. An optimal diagnostic model for SARS-CoV-2 infection was derived. The 4-month prevalence of SARS-CoV-2 was 2.1%, increasing to 19.4% (16.0%–22.7%) in participants reporting loss of appetite and 31.9% (27.1%–36.8%) in those with anosmia/ageusia. The model identified anosmia and/or ageusia, fever, congestion, and cough as significantly associated with SARS-CoV-2 infection. Symptom dynamics differed markedly between the two groups: symptoms started slowly, peaked later, and lasted longer in PCR-positive participants, whereas they declined consistently in PCR-negative participants, who reported fewer than 3 days of symptoms on average. Anosmia/ageusia peaked late in confirmed SARS-CoV-2 infection (day 12), indicating low discriminatory power for early disease diagnosis.
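As a hedged illustration of the kind of symptom-based diagnostic model described above (the published analysis additionally accounts for survey weighting and repeated symptomatic episodes per participant, which this toy example ignores), a simple logistic model could look like the following; the variable names and simulated data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
# Toy per-episode data: PCR result plus binary symptom indicators
df = pd.DataFrame({
    "pcr_positive":    rng.binomial(1, 0.05, n),
    "anosmia_ageusia": rng.binomial(1, 0.03, n),
    "fever":           rng.binomial(1, 0.10, n),
    "congestion":      rng.binomial(1, 0.25, n),
    "cough":           rng.binomial(1, 0.20, n),
})

# Logistic regression of PCR positivity on the symptoms retained in the model
fit = smf.logit("pcr_positive ~ anosmia_ageusia + fever + congestion + cough",
                data=df).fit(disp=0)
print(np.exp(fit.params))   # odds ratios for each symptom
```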
Red clay is considered to be of significant economic value in Morocco, particularly in the Safi region, because of its abundance. This raw material has long been known for its quality in the manufacture of clay materials, but its use has been limited to traditional ceramics. The red clay raw material was the subject of the current study with the objective of opening new industrial applications that will add value to the Safi red clay. The physicochemical, mineralogical, and thermal properties of the Moroccan red clay were determined by X-ray fluorescence (XRF), inductively coupled plasma-atomic emission spectroscopy (ICP-AES), X-ray diffraction (XRD), oriented-aggregate and particle-size analyses, powder density by helium pycnometry, carbonate content using the Bernard method, thermogravimetric and differential thermal analysis (TG–DTA), and BET surface area measurements. The compacted dry powder particles were calcined at three sintering temperatures (900, 1000, and 1100°C) for 2 h. The effect of sintering temperature on ceramic properties, such as apparent porosity, water adsorption, bulk density, and mechanical strength, was examined. Dense ceramics with lower porosity and markedly greater mechanical strength (an increase of ~300%) were produced by increasing the sintering temperature from 900 to 1100°C. The conclusion was that the evolution of physicochemical and thermal properties is related to mineralogical changes, which show that anorthite is the major phase at higher temperatures.
The early and sensitive detection of microbial contamination of kaolinite slurries is needed for timely treatment to prevent spoilage. The sensitivity, reproducibility, and time required by current methods, such as the dip-slide method, do not meet this challenge. A more sensitive, reproducible, and efficient method is required. The objective of the present study was to develop and validate such a method. The new method is based on the measured growth kinetics of indigenous kaolinite-slurry microorganisms. The microorganisms from kaolinite slurries with different contamination levels were eluted and quantified as colony-forming units (CFUs). Known quantities of E. coli (ATCC 11775) were inoculated into sterilized kaolinite slurries to relate kaolinite-slurry CFUs to true microbial concentrations. The inoculated slurries were subsequently incubated, re-extracted, and microbial concentrations quantified. The ratio of the known inoculated E. coli concentration to the measured concentration was expressed as the recovery efficiency coefficient. Indigenous microbial communities were serially diluted, incubated, and the growth kinetics measured and related to CFUs. Using the new method, greater optical densities (OD) and visible microbial growth were measured for greater dilutions of kaolinite slurries with large microbial-cell concentrations. Growth conditions were optimized to maximize the correlation between contamination level, microbial growth kinetics, and OD value. A Standard Bacterial Unit (SBU) scale with five levels of microbial contamination was designed for kaolinite slurries using the experimental results. The SBU scale was validated using a blind test of 50 unknown slurry samples with various contamination levels provided by the Imerys Company. The validation tests revealed that the new method using the SBU scale was more time efficient, sensitive, and reproducible than the dip-slide method.
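A minimal worked example of the recovery efficiency coefficient described above is given below; the numbers are purely illustrative, not values from the study.

```python
# Recovery efficiency coefficient: ratio of the known inoculated E. coli
# concentration to the concentration recovered from the slurry (illustrative
# numbers only). The coefficient can then correct later field measurements
# for incomplete recovery.
inoculated_cfu_per_ml = 1.0e6          # known E. coli spike (hypothetical)
recovered_cfu_per_ml  = 2.5e5          # CFUs measured after elution (hypothetical)

recovery_efficiency = inoculated_cfu_per_ml / recovered_cfu_per_ml
print(f"recovery efficiency coefficient = {recovery_efficiency:.1f}")

measured = 4.0e4                       # CFU/mL measured in an unknown slurry (hypothetical)
estimated_true = measured * recovery_efficiency
print(f"estimated true concentration ~ {estimated_true:.2e} CFU/mL")
```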
Helium or neopentane can be used as surrogate gas fill for deuterium (D2) or deuterium-tritium (DT) in laser-plasma interaction studies. Surrogates are convenient to avoid flammability hazards or the integration of cryogenics in an experiment. To test the degree of equivalency between deuterium and helium, experiments were conducted in the Pecos target chamber at Sandia National Laboratories. Observables such as laser propagation and signatures of laser-plasma instabilities (LPI) were recorded for multiple laser and target configurations. It was found that some observables can differ significantly despite the apparent similarity of the gases with respect to molecular charge and weight. While a qualitative behaviour of the interaction may very well be studied by finding a suitable compromise of laser absorption, electron density, and LPI cross sections, a quantitative investigation of expected values for deuterium fills at high laser intensities is not likely to succeed with surrogate gases.
Data compilations expand the scope of research; however, data citation practice lags behind advances in data use. It remains uncommon for data users to credit data producers in professionally meaningful ways. In paleontology, databases like the Paleobiology Database (PBDB) enable assessment of patterns and processes spanning millions of years, up to global scale. The status quo for data citation creates an imbalance wherein publications drawing data from the PBDB receive significantly more citations (median: 4.3 ± 3.5 citations/year) than the publications producing the data (1.4 ± 1.3 citations/year). By accounting for data reuse where citations were neglected, the projected citation rate for data-provisioning publications approached parity (4.2 ± 2.2 citations/year) and the impact factor of paleontological journals (n = 55) increased by an average of 13.4% (maximum increase = 57.8%) in 2019. Without rebalancing the distribution of scientific credit, emerging “big data” research in paleontology—and science in general—is at risk of undercutting itself through a systematic devaluation of the work that is foundational to the discipline.
Former professional American football players have a high relative risk for neurodegenerative diseases such as chronic traumatic encephalopathy (CTE). Interpreting low cognitive test scores in this population is occasionally complicated by performance on validity testing. Neuroimaging biomarkers may help inform whether a neurodegenerative disease is present in these situations. We report three cases of retired professional American football players who completed comprehensive neuropsychological testing but “failed” performance validity tests, and who underwent multimodal neuroimaging (structural MRI, Aβ-PET, and tau-PET).
Participants and Methods:
Three cases were identified from the Focused Neuroimaging for the Neurodegenerative Disease Chronic Traumatic Encephalopathy (FIND-CTE) study, an ongoing multimodal imaging study of retired National Football League players with complaints of progressive cognitive decline conducted at Boston University and the UCSF Memory and Aging Center. Participants were relatively young (age range 55-65), had 16 or more years of education, and two identified as Black/African American. Raw neuropsychological test scores were converted to demographically adjusted z-scores. Testing included standalone (Test of Memory Malingering; TOMM) and embedded (reliable digit span; RDS) performance validity measures. Validity cutoffs were TOMM Trial 2 < 45 and RDS < 7. Structural MRIs were interpreted by trained neurologists. Aβ-PET with florbetapir was used to quantify cortical Aβ deposition in global Centiloids (0 = mean cortical signal for a young, cognitively normal, Aβ-negative individual in their 20s; 100 = mean cortical signal for a patient with mild-to-moderate Alzheimer’s disease dementia). Tau-PET was performed with MK-6240 and first quantified as a standardized uptake value ratio (SUVR) map. The SUVR map was then converted to a w-score map representing signal intensity relative to a sample of demographically matched healthy controls.
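To make the two quantifications above concrete, the sketch below shows a simplified w-score calculation and the Centiloid anchor points; real w-score maps regress out covariates voxel-wise in control data, which is reduced here to a simple mean/SD standardisation of a single regional value, and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical medial temporal lobe SUVR values from demographically matched
# healthy controls (illustrative numbers only).
control_suvr = np.array([1.02, 0.98, 1.05, 1.10, 0.95, 1.00, 1.07, 0.99])

def w_score(patient_suvr: float, controls: np.ndarray) -> float:
    """Patient signal relative to controls, in control-SD units."""
    return (patient_suvr - controls.mean()) / controls.std(ddof=1)

w = w_score(1.08, control_suvr)                  # hypothetical patient value
print(f"w-score = {w:.2f}")

# Under a normal assumption, a w-score maps onto a percentile; the w-scores
# reported in the cases (0.59 and 1.90) correspond to roughly the 72nd and
# 97th percentiles.
for reported_w in (0.59, 1.90):
    print(f"w = {reported_w:.2f} -> ~{stats.norm.cdf(reported_w) * 100:.0f}th percentile")

# Centiloid anchors, by definition of the scale: 0 = mean signal of young,
# cognitively normal, amyloid-negative adults; 100 = mean signal of typical
# mild-to-moderate Alzheimer's disease dementia. Values below 0 therefore
# argue against meaningful cortical amyloid deposition, as in these cases.
```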
Results:
All cases performed in the average range on a word-reading-based estimate of premorbid intellect. A contribution of Alzheimer’s disease pathology was ruled out in each case based on Centiloid quantifications < 0. All cases scored below the cutoff on TOMM Trial 2 (Case #1 = 43, Case #2 = 42, Case #3 = 19), and Case #3 also scored well below the RDS cutoff (2). Each case had multiple cognitive scores below expectations (z < -2.0), most consistently in the memory, executive function, and processing speed domains. For Case #1, MRI revealed mild atrophy in dorsal fronto-parietal and medial temporal lobe (MTL) regions and mild periventricular white matter disease. Tau-PET showed MTL tau burden modestly elevated relative to controls (regional w-score = 0.59, 72nd percentile). For Case #2, MRI revealed cortical atrophy, mild hippocampal atrophy, and a microhemorrhage, with no evidence of meaningful tau-PET signal. For Case #3, MRI showed cortical atrophy and severe white matter disease, and tau-PET revealed significantly elevated MTL tau burden relative to controls (w-score = 1.90, 97th percentile) as well as focal high signal in the dorsal frontal lobe (overall frontal region w-score = 0.64, 74th percentile).
Conclusions:
Low scores on performance validity tests complicate the interpretation of the severity of cognitive deficits, but do not negate the presence of true cognitive impairment or an underlying neurodegenerative disease. In the rapidly developing era of biomarkers, neuroimaging tools can supplement neuropsychological testing to help inform whether cognitive or behavioral changes are related to a neurodegenerative disease.
A commonly used confrontation naming task in the United States is the Boston Naming Test (BNT). Performance differences have been found between Caucasian and ethnic minority groups on the BNT. The Cordoba Naming Test (CNT) is a 30-item confrontation naming task developed in Argentina. Past research has shown that acculturation levels can influence cognitive performance. Furthermore, one study evaluated gender differences in geriatric CNT performance in Spanish, reporting that older male participants outperformed female participants on the CNT. To our knowledge, researchers have not evaluated ethnic differences on the CNT using a geriatric sample. The purpose of the present study was to examine CNT performance and acculturation in a Latinx and Caucasian geriatric sample. It was predicted that the Caucasian group would outperform the Latinx group on the CNT and would report higher acculturation levels on the Abbreviated Multidimensional Acculturation Scale (AMAS) than the Latinx group.
Participants and Methods:
The sample consisted of 9 Latinx and 11 Caucasian participants with a mean age of 66.80 (SD = 6.10) and an average of 14.30 (SD = 2.00) years of education. All participants were neurologically and psychologically healthy and completed the CNT and the AMAS in English. Acculturation was measured via the AMAS English subscales (i.e., English Language, United States Identity, United States Competency). A series of ANCOVAs, controlling for years of education completed and gender, was used to evaluate CNT performance and acculturation.
Results:
The ethnic groups were not well matched demographically (i.e., on years of education and gender). We found that the Caucasian group outperformed the Latinx group on the CNT, p = .012, ηp² = .34. Furthermore, the Caucasian group reported higher acculturation levels (i.e., English Language, United States Identity, United States Competency) than the Latinx group, ps < .05, ηp² = .42–.64.
Conclusions:
To our knowledge, this is the first study to evaluate CNT performance between ethnic groups in a geriatric sample. As expected, the Caucasian group outperformed the Latinx group on the CNT. Also as expected, the Caucasian group reported higher English acculturation levels than the Latinx group. Our findings are consistent with past studies showing ethnic differences in confrontation naming performance (i.e., on the Boston Naming Test) favoring Caucasians. A possible explanation for the group differences is linguistic factors (e.g., speaking multiple languages) in our Latinx group. Since our Latinx group reported lower levels of English language use, United States identity, and United States competency, their degree of assimilation towards United States culture might have influenced their CNT performance. Future studies with different ethnic groups (e.g., African Americans) and larger sample sizes should examine whether these ethnic differences cross-validate in geriatric samples.
The Cordoba Naming Test (CNT) is a 30-item confrontation naming test developed in Argentina for Spanish speakers. The Boston Naming Test is an established confrontation naming task in the United States, and researchers have used it to identify individuals with different clinical pathologies (e.g., Alzheimer’s disease). The current literature on how Spanish speakers across various countries perform on confrontation naming tasks is limited. To our knowledge, one study has investigated CNT performance across three Spanish-speaking countries (i.e., Argentina, Mexico, and Guatemala); investigators found that the Guatemalan group underperformed on the CNT compared to the Argentine and Mexican groups. The purpose of this study was to extend the current literature and investigate CNT performance across five Spanish-speaking country groups (i.e., Argentina, Mexico, Guatemala, Colombia, and the United States). We predicted that the Argentine group would outperform the other Spanish-speaking groups.
Participants and Methods:
The present study sample consisted of 502 neurologically and psychologically healthy participants with a mean age of 29.06 (SD = 13.41) and a mean of 14.75 years of education completed (SD = 3.01). Participants were divided into five groups based on their country of birth and country of current residency (i.e., United States, Mexico, Guatemala, Argentina, and Colombia). All participants consented to voluntary participation and completed the CNT and a comprehensive background questionnaire in Spanish. The CNT consists of 30 black-and-white line drawings, ranging from easy to hard in difficulty. An ANCOVA, controlling for gender, education, and age, was used to evaluate CNT performance between the five Spanish-speaking country groups, and Bonferroni post-hoc tests were used to evaluate differences between specific groups. We used a threshold of p < .05 for statistical significance.
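A minimal sketch of this ANCOVA (not the authors' code; the variable names and simulated data are assumptions) might look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries = ["Argentina", "Mexico", "Guatemala", "Colombia", "United States"]
# Toy dataset mimicking the sample structure described above
df = pd.DataFrame({
    "cnt_total":  rng.normal(20, 4, 500).round(),
    "country":    rng.choice(countries, 500),
    "gender":     rng.choice(["F", "M"], 500),
    "education":  rng.normal(14.7, 3.0, 500),
    "age":        rng.normal(29.1, 13.4, 500),
})

# ANCOVA: CNT total score by country group, controlling for gender,
# education, and age (fit as an OLS model, then summarised as an ANOVA table)
model = smf.ols("cnt_total ~ C(country) + C(gender) + education + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected pairwise contrasts on the country factor would then
# identify which specific groups differ.
```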
Results:
Results revealed significant group differences between the five Spanish-speaking groups on the CNT, p < .001, ηp² = .48. Bonferroni post-hoc tests revealed that the United States group significantly underperformed on the CNT compared to all the other Spanish-speaking groups. Next, we found that the Guatemalan group underperformed on the CNT compared to the Argentinian, Mexican, and Colombian groups. Additionally, we found that the Argentinian group outperformed the Mexican, Guatemalan, and United States groups on the CNT. No significant differences were found between the Argentinian and Colombian groups or between the Mexican and Colombian groups on the CNT.
Conclusions:
As predicted, the Argentinian group outperformed all the other Spanish-speaking groups on the CNT except the Colombian group. Additionally, we found that the United States group underperformed on the CNT compared to all the other Spanish-speaking groups. A possible explanation is that Spanish is not the official language in the United States, unlike in the other Spanish-speaking countries represented. Meanwhile, a possible reason the Argentinian and Colombian groups demonstrated better CNT performance is that the test may be less culturally sensitive for these groups than for the United States, Mexican, and Guatemalan groups. Further analysis with larger sample sizes across other Spanish-speaking countries (e.g., Costa Rica, Chile) is needed to evaluate what variables, if any, influence CNT performance.
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that within-network DMN functional connectivity would be associated with better delayed verbal and visuospatial memory recall. This study helps to clarify whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods:
330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual’s scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. CONN Toolbox was used to extract average within-network connectivity values for CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
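A hedged sketch of one such hierarchical regression is shown below: a covariate-only block followed by a block adding within-network connectivity, with the change in R² indexing the connectivity contribution. Column names, the simulated data, and the reduced covariate set are assumptions for illustration; the full analysis included race, ethnicity, and number of invalid scans as additional covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 330
# Toy dataset: one memory outcome, one within-network connectivity value,
# and a subset of the covariates described above
df = pd.DataFrame({
    "hvlt_total": rng.normal(26, 5, n),
    "con_conn":   rng.normal(0.3, 0.1, n),       # within-CON connectivity
    "education":  rng.normal(16, 2, n),
    "sex":        rng.choice(["F", "M"], n),
    "site":       rng.choice(["UF", "UA"], n),
})

# Block 1: covariates only; Block 2: covariates plus connectivity
block1 = smf.ols("hvlt_total ~ education + C(sex) + C(site)", data=df).fit()
block2 = smf.ols("hvlt_total ~ education + C(sex) + C(site) + con_conn", data=df).fit()

delta_r2 = block2.rsquared - block1.rsquared
print(f"connectivity beta = {block2.params['con_conn']:.3f}, "
      f"p = {block2.pvalues['con_conn']:.3f}, delta R^2 = {delta_r2:.4f}")
```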
Results:
Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions:
Connectivity within the CON demonstrated a robust relationship with multiple components of memory function across both the verbal and visuospatial domains. In contrast, FPCN connectivity was related only to visuospatial learning, and DMN connectivity was not significantly associated with any memory measure. These data suggest that the CON may be a valuable target in longitudinal studies of age-related memory changes, and possibly in future non-invasive interventions to attenuate memory decline in older adults.
We present detailed characterization of laser-driven fusion and neutron production ($\sim {10}^5$/second) using 8 mJ, 40 fs laser pulses on a thin (<1 μm) D${}_2$O liquid sheet. At relativistic intensity ($\sim 5\times {10}^{18}$ W/cm${}^2$) and high repetition rate (1 kHz), the system produces deuterium–deuterium (D-D) fusion, allowing for consistent neutron generation. Evidence of D-D fusion neutron production is verified by a measurement suite with three independent detection systems: an EJ-309 organic scintillator with pulse-shape discrimination, a ${}^3\mathrm{He}$ proportional counter and a set of 36 bubble detectors. Time-of-flight analysis of the scintillator data shows the energy of the produced neutrons to be consistent with 2.45 MeV. Particle-in-cell simulations using the WarpX code support significant neutron production from D-D fusion events in the laser–target interaction region. This high-repetition-rate laser-driven neutron source could provide a low-cost, on-demand test bed for radiation hardening and imaging applications.
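As a back-of-the-envelope illustration of the time-of-flight reasoning behind the 2.45 MeV assignment in the preceding abstract, the sketch below converts between neutron kinetic energy and flight time over an assumed 1 m path; the actual detector geometry is not specified here.

```python
import numpy as np

M_N_MEV = 939.565          # neutron rest mass energy [MeV]
C = 2.998e8                # speed of light [m/s]

def tof_ns(energy_mev: float, path_m: float) -> float:
    """Non-relativistic time of flight for a neutron of given kinetic energy."""
    v = C * np.sqrt(2.0 * energy_mev / M_N_MEV)
    return path_m / v * 1e9

def energy_mev(tof_ns_val: float, path_m: float) -> float:
    """Invert the relation: kinetic energy from a measured time of flight."""
    v = path_m / (tof_ns_val * 1e-9)
    return 0.5 * M_N_MEV * (v / C) ** 2

print(f"2.45 MeV neutron over 1 m: {tof_ns(2.45, 1.0):.1f} ns")
print(f"46 ns over 1 m corresponds to: {energy_mev(46.0, 1.0):.2f} MeV")
```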
SNP addresses are a pathogen typing method based on whole-genome sequences (WGSs), assigning groups at seven different levels of genetic similarity. Public health surveillance uses them for several gastrointestinal infections; this work trialled their use in veterinary surveillance for Salmonella outbreak detection. Comparisons were made between temporal and spatio-temporal cluster detection models that defined cases either by their SNP address or by phage type, using historical data sets. Clusters of incidents defined by SNP address were effectively detected by both methods, but spatio-temporal models consistently detected these clusters earlier than the corresponding temporal models. Unlike phage types, SNP addresses appeared spatially and temporally limited, which facilitated the differentiation of novel, stable, or expanding clusters in spatio-temporal models. Furthermore, these models flagged spatio-temporal clusters containing only two to three cases at first detection, compared with a median of seven cases in phage-type models. The large number of SNP addresses will require automated methods to implement these detection models routinely. Further work is required to explore how temporal changes and different host species may affect the sensitivity and specificity of cluster detection. In conclusion, given validation with more sequencing data, SNP addresses are likely to be a valuable addition to early warning systems in veterinary surveillance.