Anxiety disorders and treatment-resistant major depressive disorder (TRD) are often comorbid. Studies suggest ketamine has anxiolytic and antidepressant properties.
Aims
To investigate if subcutaneous racemic ketamine, delivered twice weekly for 4 weeks, reduces anxiety in people with TRD.
Method
The Ketamine for Adult Depression Study was a multisite, 4-week, randomised, double-blind, active (midazolam)-controlled trial. The study initially used fixed low-dose ketamine (0.5 mg/kg, cohort 1) before a protocol revision to flexible, response-guided dosing (0.5–0.9 mg/kg, cohort 2). This secondary analysis assessed anxiety using the Hamilton Anxiety Rating Scale (HAM-A; primary measure) and the ‘inner tension’ item (item 3) of the Montgomery–Åsberg Depression Rating Scale (MADRS) at baseline, 4 weeks (end of treatment) and 4 weeks after treatment end. Analyses of change in anxiety between the ketamine and midazolam groups included all participants who received at least one treatment (n = 174), with a mixed-effects repeated-measures model used for the primary anxiety measure. The trial was registered at www.anzctr.org.au (ACTRN12616001096448).
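As a rough illustration of the primary analysis, the sketch below fits a mixed-effects repeated-measures model of HAM-A scores by treatment group and visit. The long-format file and column names are hypothetical; this is not the trial's analysis code.

```python
# Minimal sketch of a mixed-effects repeated-measures analysis of HAM-A,
# assuming hypothetical columns: subject, treatment (ketamine/midazolam),
# visit (baseline/week4/followup4) and hama. Illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hama_long.csv")  # hypothetical long-format data

# Random intercept per participant; the treatment-by-visit interaction
# captures differential change in anxiety between groups over time.
model = smf.mixedlm("hama ~ treatment * visit", data=df, groups=df["subject"])
result = model.fit(reml=True)
print(result.summary())
```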
Results
In cohort 1 (n = 68) the reduction in HAM-A score was not statistically significant: −1.4 (95% CI [−8.6, 3.2], P = 0.37), whereas a significant reduction was seen for cohort 2 (n = 106) of −4.0 (95% CI [−10.6, −1.9], P = 0.0058), favouring ketamine over midazolam. These effects were mediated by total MADRS and were not maintained at 4 weeks after treatment end. MADRS item 3 was also significantly reduced in cohort 2 (P = 0.026) but not cohort 1 (P = 0.96).
Conclusion
Ketamine reduces anxiety in people with TRD when administered subcutaneously in adequate doses.
We present a re-discovery of G278.94+1.35a as possibly one of the largest known Galactic supernova remnants (SNRs), which we name Diprotodon. While previously established as a Galactic SNR, Diprotodon is visible in our new Evolutionary Map of the Universe (EMU) and GaLactic and Extragalactic All-sky MWA (GLEAM) radio continuum images at an angular size of 3.33° × 3.23°, much larger than previously measured. At the previously suggested distance of 2.7 kpc, this implies a diameter of 157 × 152 pc. This size would qualify Diprotodon as the largest known SNR and pushes our estimates of SNR sizes to the upper limits. We investigate the environment in which the SNR is located and examine various scenarios that might explain such a large and relatively bright SNR appearance. We find that Diprotodon is most likely at a much closer distance of ~1 kpc, implying a diameter of 58 × 56 pc and placing it in the radiative evolutionary phase. We also present a new Fermi-LAT data analysis that confirms the angular extent of the SNR in gamma rays. The origin of the high-energy emission remains somewhat puzzling, and the scenarios we explore reveal new puzzles, given this unexpected and unique observation of a seemingly evolved SNR having a hard GeV spectrum with no breaks. We explore both leptonic and hadronic scenarios, as well as the possibility that the high-energy emission arises from the leftover particle population of a historic pulsar wind nebula.
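The quoted physical sizes follow from the small-angle relation between angular size and distance; a quick check of the numbers in the abstract (values taken from the text, purely illustrative):

```python
# Back-of-the-envelope conversion from angular size to physical diameter
# using the small-angle approximation; values are those quoted above.
import numpy as np

def linear_size_pc(angular_deg: float, distance_kpc: float) -> float:
    """Physical size in parsec for an angular size (degrees) at a distance (kpc)."""
    return np.deg2rad(angular_deg) * distance_kpc * 1000.0  # kpc -> pc

for d_kpc in (2.7, 1.0):
    major = linear_size_pc(3.33, d_kpc)
    minor = linear_size_pc(3.23, d_kpc)
    print(f"D = {d_kpc} kpc -> {major:.0f} x {minor:.0f} pc")
# D = 2.7 kpc -> 157 x 152 pc; D = 1.0 kpc -> 58 x 56 pc
```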
Objectives: Despite the increasing number of people with dementia (PWD), detection remains low worldwide. In Brazil, the number of PWD is expected to triple by 2050, and diagnosis can be challenging, contributing to high and growing rates of underdiagnosis. There is currently no national estimate of underdetection or of how it is distributed by gender, age and region. We aimed to estimate the proportion of undiagnosed PWD relative to the estimated total number of PWD.
Methods: The number of diagnosed individuals was estimated from national records of anticholinesterase drug (AChE) prescriptions in 2022 for the treatment of mild and moderate Alzheimer’s disease (AD), held by the Unified Health System (SUS). Data were obtained from ftp://ftp.datasus.gov.br, and drugs were dispensed according to the national clinical protocol. Studies from the national literature were consulted to estimate: (i) the number of people currently diagnosed with mild and moderate AD; (ii) the proportion who obtain AChE from SUS; (iii) the proportion who do not take AChE; and (iv) the proportion of AD relative to other dementias. We assumed that the underdetection rate for AD would be similar to that of other dementias and that 70% of diagnosed AD individuals obtain AChE from SUS.
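As a hedged sketch of the estimation logic implied by these Methods (one plausible reading; the function, parameter names and example numbers below are hypothetical, not the study's inputs):

```python
# Illustrative sketch of the underdetection calculation, under the stated
# assumptions (e.g., 70% of diagnosed AD individuals obtain AChE from SUS and
# AD underdetection mirrors other dementias). All numbers are hypothetical.
def underdetection_rate(n_dispensed_aches: float,
                        prop_obtaining_via_sus: float,
                        prop_diagnosed_not_taking: float,
                        prop_ad_among_dementias: float,
                        estimated_total_pwd: float) -> float:
    """Estimated share of PWD without a diagnosis."""
    # Scale dispensing counts up to all diagnosed mild/moderate AD cases.
    diagnosed_ad = (n_dispensed_aches / prop_obtaining_via_sus
                    / (1.0 - prop_diagnosed_not_taking))
    # Assume the same detection rate for non-AD dementias.
    diagnosed_all = diagnosed_ad / prop_ad_among_dementias
    return 1.0 - diagnosed_all / estimated_total_pwd

print(f"{underdetection_rate(100_000, 0.70, 0.20, 0.60, 2_000_000):.1%}")
```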
Results: More than 80% of PWD aged 60+ are undetected (88.7%, 95% CI = 88.6–88.7). The poorest regions had higher rates (94.6%, 95% CI = 94.5–94.6) than the richest (84.8%, 95% CI = 84.7–84.8). Men had higher rates (89.8%, 95% CI = 89.7–89.9) than women (87.4%, 95% CI = 87.4–87.5). The youngest age group (60–64) had the highest rate (94.6%, 95% CI = 94.5–94.7), which decreased until 85–89 (84.3%, 95% CI = 84.2–84.4) before increasing again to 91.1% (95% CI = 91.0–91.2) among those aged 90+.
Conclusions: Dementia underdetection in Brazil is among the highest in the world. Rapid population ageing and the highest rates being found among the youngest individuals are of concern, as they may reflect late diagnosis. Gender and regional disparities also need to be considered when developing health policies.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Clostridioides difficile infection (CDI) may be misdiagnosed if testing is performed in the absence of signs or symptoms of disease. This study sought to support appropriate testing by estimating the impact of signs, symptoms, and healthcare exposures on pre-test likelihood of CDI.
Methods:
A panel of fifteen experts in infectious diseases participated in a modified UCLA/RAND Delphi study to estimate likelihood of CDI. Consensus, defined as agreement by >70% of panelists, was assessed via a REDCap survey. Items without consensus were discussed in a virtual meeting followed by a second survey.
Results:
All fifteen panelists completed both surveys (100% response rate). In the initial survey, consensus was present on 6 of 15 (40%) items related to risk of CDI. After panel discussion and clarification of questions, consensus (>70% agreement) was reached on all remaining items in the second survey. Antibiotics were identified as the primary risk factor for CDI and grouped into three categories: high-risk (likelihood ratio [LR] 7, 93% agreement among panelists in first survey), low-risk (LR 3, 87% agreement in first survey), and minimal-risk (LR 1, 71% agreement in first survey). Other major factors included new or unexplained severe diarrhea (e.g., ≥ 10 liquid bowel movements per day; LR 5, 100% agreement in second survey) and severe immunosuppression (LR 5, 87% agreement in second survey).
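To illustrate how such likelihood ratios could be applied, the sketch below updates a pre-test probability using the odds form of Bayes' theorem. The LR values come from the consensus above; the 5% baseline pre-test probability and the assumption that the LRs can be multiplied independently are ours, not the panel's.

```python
# Worked illustration of combining likelihood ratios with a pre-test
# probability via the odds form of Bayes' theorem. Multiplying LRs assumes
# the factors are independent, which is a simplification.
def posttest_probability(pretest_prob: float, *likelihood_ratios: float) -> float:
    odds = pretest_prob / (1 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical patient: high-risk antibiotics (LR 7) and new, unexplained
# severe diarrhea (LR 5), starting from a 5% pre-test probability.
print(f"{posttest_probability(0.05, 7, 5):.0%}")  # ~65%
```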
Conclusion:
Infectious disease experts concurred on the importance of signs, symptoms, and healthcare exposures for diagnosing CDI. The resulting risk estimates can be used by clinicians to optimize CDI testing and treatment.
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data were sampled and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
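The skip structure referred to here means that, in the community cohorts, the remaining symptom items are only administered when a gating symptom is endorsed. A small sketch of that measurement pattern, with hypothetical item names, is shown below.

```python
# Sketch of a gating/skip structure: when neither gating symptom (depressed
# mood, anhedonia) is endorsed, the downstream items are never asked and are
# therefore structurally missing. Item names are hypothetical.
import numpy as np
import pandas as pd

def apply_skip_structure(df: pd.DataFrame,
                         gating=("depressed_mood", "anhedonia"),
                         downstream=("appetite_weight", "sleep", "fatigue")) -> pd.DataFrame:
    out = df.copy().astype(float)
    not_gated_in = out[list(gating)].sum(axis=1) == 0  # neither gate endorsed
    out.loc[not_gated_in, list(downstream)] = np.nan   # items skipped
    return out

symptoms = pd.DataFrame({
    "depressed_mood":  [1, 0, 0],
    "anhedonia":       [0, 0, 1],
    "appetite_weight": [1, 1, 1],
    "sleep":           [0, 1, 1],
    "fatigue":         [1, 1, 0],
})
print(apply_skip_structure(symptoms))  # row without a gate loses downstream items
```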
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
Background: The Uganda Ministry of Health (MoH) and implementing partners instituted an infection prevention and control (IPC) response strategy during the 2022 Sudan virus disease (SVD) outbreak in Uganda that involved rapid enhancement of screening capacity at healthcare facilities (HCFs). Rapid scale-up of screening for infectious diseases such as SVD is critical for early identification and triage of suspected or confirmed cases in HCFs. We describe the rapid deployment of a multimodal IPC strategy implemented in response to the SVD outbreak and its impact on screening measures at HCFs.

Methods: We implemented a multimodal IPC strategy in HCFs in five high-risk districts to improve screening practices from November 2022 to January 2023. The strategy included training healthcare workers (HCWs) identified as IPC mentors, establishing screening areas, and providing screening supplies and communication materials. The three-day training used an MoH standardized training package with didactic and practice sessions. The mentors then cascaded screening information and skills to other HCWs through onsite trainings and mentorships and established screening areas. Baseline and endline (3 months after baseline) cross-sectional assessments were conducted using the MoH IPC Assessment Tool, adapted from the WHO Ebola IPC Scorecard. The five main screening parameters assessed were: a distance of ≥1 meter between the screener and the person screened; availability of a functional handwashing facility; availability of an infrared thermometer; correct recording of each person’s temperature; and an appropriate referral process to holding areas for those suspected of having SVD. IPC capacity was measured by summing these parameter results and calculating an overall percentage. IBM SPSS Statistics 20 was used for data analysis, and a paired t-test was used to compare mean scores (percentages) at baseline and endline.

Results: A total of 296 IPC mentors were trained, screening information was cascaded to 3,899 HCWs, and screening areas were established in 1,135 HCFs. Based on the screening results from the MoH IPC assessment tool, capacity improved from 44% (SD = 37) at baseline to 67% (SD = 34) at endline. Screening capacity improved from baseline to endline among level II HCFs from 33% (SD = 35) to 60% (SD = 35) (p < 0.05) and among public HCFs from 54% (SD = 38) to 76% (SD = 31) (p < 0.05).

Conclusion: Rapid implementation of a multimodal IPC strategy enhanced screening capacity across Uganda’s HCFs during the SVD response, which is critical for early identification of infected patients to interrupt transmission. This multimodal approach should be recommended for future response actions.
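A minimal sketch of the baseline-versus-endline comparison described in the Methods above, assuming paired facility-level screening scores; the arrays are placeholders, not the assessment data.

```python
# Paired t-test comparing facility screening scores (%) at baseline and
# endline. Scores below are hypothetical placeholders.
import numpy as np
from scipy import stats

baseline = np.array([40.0, 20.0, 60.0, 35.0, 55.0])   # hypothetical facility scores (%)
endline  = np.array([65.0, 50.0, 80.0, 60.0, 70.0])

t_stat, p_value = stats.ttest_rel(endline, baseline)   # paired t-test
print(f"mean change = {np.mean(endline - baseline):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```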
Diagnosis of acute ischemia typically relies on evidence of ischemic lesions on magnetic resonance imaging (MRI), a limited diagnostic resource. We aimed to determine associations of clinical variables with acute infarcts on MRI in patients with suspected low-risk transient ischemic attack (TIA) and minor stroke and to assess their predictive ability.
Methods:
We conducted a post-hoc analysis of the Diagnosis of Uncertain-Origin Benign Transient Neurological Symptoms (DOUBT) study, a prospective, multicenter cohort study investigating the frequency of acute infarcts in patients with low-risk neurological symptoms. The primary outcome was defined as diffusion-weighted imaging (DWI)-positive lesions on MRI. Logistic regression analysis was performed to evaluate associations of clinical characteristics with MRI-DWI positivity. Model performance was evaluated by Harrell’s c-statistic.
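The analysis described here amounts to a logistic regression of DWI positivity on clinical predictors, with discrimination summarized by the c-statistic (equal to the ROC AUC for a binary outcome). A sketch with hypothetical column names follows; it is not the DOUBT analysis code.

```python
# Sketch of a logistic regression of DWI positivity on clinical predictors,
# with the c-statistic computed as the ROC AUC of the fitted probabilities.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("doubt_subset.csv")  # hypothetical analysis file

fit = smf.logit(
    "dwi_positive ~ age + female + motor_symptoms + speech_symptoms"
    " + previous_identical_event + resolved_symptoms",
    data=df,
).fit()

print(np.exp(fit.params))        # odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
c_statistic = roc_auc_score(df["dwi_positive"], fit.predict(df))
print(f"c-statistic = {c_statistic:.2f}")
```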
Results:
In 1028 patients, age (Odds Ratio (OR) 1.03, 95% Confidence Interval (CI) 1.01–1.05), motor (OR 2.18, 95%CI 1.27–3.65) or speech symptoms (OR 2.53, 95%CI 1.28–4.80), and no previous identical event (OR 1.75, 95%CI 1.07–2.99) were positively associated with MRI-DWI-positivity. Female sex (OR 0.47, 95%CI 0.32–0.68), dizziness and gait instability (OR 0.34, 95%CI 0.14–0.69), normal exam (OR 0.55, 95%CI 0.35–0.85) and resolved symptoms (OR 0.49, 95%CI 0.30–0.78) were negatively associated. Symptom duration and any additional symptoms/symptom combinations were not associated. Predictive ability of the model was moderate (c-statistic 0.72, 95%CI 0.69–0.77).
Conclusion:
Detailed clinical information is helpful in assessing the risk of ischemia in patients with low-risk neurological events, but a predictive model had only moderate discriminative ability. Patients with clinically suspected low-risk TIA or minor stroke require MRI to confirm the diagnosis of cerebral ischemia.
A growing theoretical literature identifies how the process of constitutional review shapes judicial decision-making, legislative behavior, and even the constitutionality of legislation and executive actions. However, the empirical interrogation of these theoretical arguments is limited by the absence of a common protocol for coding constitutional review decisions across courts and time. We introduce such a coding protocol and database (CompLaw) of rulings by 42 constitutional courts. To illustrate the value of CompLaw, we examine a heretofore untested empirical implication about how review timing relates to rulings of unconstitutionality (Ward and Gabel 2019). First, we conduct a nuanced analysis of rulings by the French Constitutional Council over a 13-year period. We then examine the relationship between review timing and strike rates with a set of national constitutional courts in one year. Our data analysis highlights the benefits and flexibility of the CompLaw coding protocol for scholars of judicial review.
Medical resuscitations in rugged prehospital settings require emergency personnel to perform high-risk procedures in low-resource conditions. Just-in-Time Guidance (JITG) delivered through augmented reality (AR) may be a solution. There is little literature on the utility of AR-mediated JITG tools for facilitating the performance of emergent field care.
Study Objective:
The objective of this study was to investigate the feasibility and efficacy of a novel AR-mediated JITG tool for emergency field procedures.
Methods:
Emergency medical technician-basic (EMT-B) and paramedic cohorts were randomized to either video training (control) or JITG-AR guidance (intervention) groups for performing bag-valve-mask (BVM) ventilation, intraosseous (IO) line placement, and needle decompression (Needle-d) in a medium-fidelity simulation environment. In the intervention condition, subjects used an AR technology platform to perform the tasks. The primary outcome was participant task performance; the secondary outcome was participant-reported acceptability. Participant task score, task time, and acceptability ratings were reported descriptively and compared between the control and intervention groups using chi-square analysis for binary variables and unpaired t-tests for continuous variables.
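For concreteness, the sketch below shows the two comparison types named above on placeholder data: a chi-square test on a 2×2 table for a binary outcome and an unpaired t-test for a continuous outcome.

```python
# Between-group comparisons: chi-square for a binary outcome (e.g., task
# pass/fail) and an unpaired t-test for a continuous outcome (e.g., task
# time). All counts and times below are placeholders.
import numpy as np
from scipy import stats

# 2x2 table: rows = control / JITG, columns = passed / failed (hypothetical)
table = np.array([[12, 3],
                  [ 9, 6]])
chi2, p_binary, dof, expected = stats.chi2_contingency(table)

control_times = np.array([95, 110, 102, 88, 120], dtype=float)    # seconds
jitg_times    = np.array([130, 145, 118, 152, 140], dtype=float)  # seconds
t_stat, p_cont = stats.ttest_ind(control_times, jitg_times)

print(f"chi-square p = {p_binary:.3f}; t-test p = {p_cont:.3f}")
```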
Results:
Sixty participants were enrolled (mean age 34.8 years; 72% male). In the EMT-B cohort, there was no difference in average task performance score between the control and JITG groups for the BVM and IO tasks; however, the control group had higher performance scores for the Needle-d task (mean score difference 22%; P = .01). In the paramedic cohort, there was no difference in performance scores between the control and JITG groups for the BVM and Needle-d tasks, but the control group had higher task scores for the IO task (mean score difference 23%; P = .01). For all tasks and participant types, the control group performed tasks more quickly than the JITG group. There was no difference in participant usability or usefulness ratings between the JITG and control conditions for any of the tasks, although paramedics reported they were less likely to use the JITG equipment again (mean difference 1.96 rating points; P = .02).
Conclusions:
This study provides preliminary evidence that AR-mediated guidance for emergency medical procedures is feasible and acceptable. These observations, coupled with AR’s promise for real-time interaction and ongoing technological advancements, suggest potential for this modality in training and practice and justify future investigation.
In September 2023, the UK Health Security Agency identified cases of Salmonella Saintpaul distributed across England, Scotland, and Wales, all with very low genetic diversity. Additional cases were identified in Portugal following an alert raised by the United Kingdom. Ninety-eight cases with a similar genetic sequence were identified, 93 in the United Kingdom and 5 in Portugal, of which 46% were aged under 10 years. Cases formed a phylogenetic cluster with a maximum distance of six single nucleotide polymorphisms (SNPs) and an average of less than one SNP between isolates. An outbreak investigation was undertaken, including a case–control study. Among the 25 UK cases included in this study, 13 reported blood in stool and 5 were hospitalized. One hundred controls were recruited via a market research panel using frequency matching for age. Multivariable logistic regression analysis of food exposures in cases and controls identified a strong association with cantaloupe consumption (adjusted odds ratio: 14.22; 95% confidence interval: 2.83–71.43; p-value: 0.001). This outbreak, together with other recent national and international incidents, points to an increase in the identification of large Salmonella outbreaks linked to melon consumption. We recommend detailed questioning and triangulation of information sources to delineate consumption of specific fruit varieties during Salmonella outbreaks.
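A sketch of the kind of case–control model described above: multivariable logistic regression of case status on food exposures, adjusting for the age-group matching factor, with adjusted odds ratios and 95% CIs taken from the exponentiated coefficients. The file and column names (including the second exposure) are hypothetical.

```python
# Sketch of a multivariable logistic regression for a frequency-matched
# case-control study. Columns (is_case, ate_cantaloupe, ate_watermelon,
# age_group) are hypothetical; this is not the outbreak analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("case_control.csv")  # hypothetical line list plus controls

fit = smf.logit("is_case ~ ate_cantaloupe + ate_watermelon + C(age_group)",
                data=df).fit()

ci = fit.conf_int()
summary = pd.DataFrame({
    "aOR": np.exp(fit.params),
    "2.5%": np.exp(ci[0]),
    "97.5%": np.exp(ci[1]),
    "p": fit.pvalues,
})
print(summary.loc[["ate_cantaloupe", "ate_watermelon"]])
```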
The COVID-19 pandemic created many challenges for in-patient care including patient isolation and limitations on hospital visitation. Although communication technology, such as video calling or texting, can reduce social isolation, there are challenges for implementation, particularly for older adults.
Objective/Methods
This study used a mixed-methods approach, combining surveys and focus groups, to understand the challenges faced by in-patients and to explore the perspectives of patients, family members, and health care providers (HCPs) regarding the use of communication technology.
Findings
Patients who had access to communication technology perceived the COVID-19 pandemic as having a greater adverse impact on their well-being, but less impact on hospitalization outcomes, compared with those without access. Most HCPs perceived that technology could improve the programs offered, patients’ connectedness to others, and access to transition-of-care supports. Focus groups highlighted challenges with technology infrastructure in hospitals.
Discussion
Our study findings may assist efforts in appropriately adopting communication technology to improve the quality of in-patient and transition care.
Control of massive hemorrhage (MH) is a life-saving intervention. The use of tourniquets has been studied in prehospital and battlefield settings but not in aquatic environments.
Objective:
The aim of this research is to assess the control of MH in an aquatic environment by analyzing the usability of two tourniquet models with different adjustment mechanisms: windlass rod versus ratchet.
Methodology:
A pilot simulation study was conducted using a randomized crossover design to assess the control of MH resulting from an upper extremity arterial perforation in an aquatic setting. A sample of 24 trained lifeguards performed two randomized tests: one using a windlass-based Combat Application Tourniquet 7 Gen (T-CAT) and the other using a ratchet-based OMNA Marine Tourniquet (T-OMNA) specifically designed for aquatic use on a training arm for hemorrhage control. The tests were conducted after swimming an approximate distance of 100 meters and the tourniquets were applied while in the water. The following parameters were recorded: time of rescue (rescue phases and tourniquet application), perceived fatigue, and technical actions related to tourniquet skills.
Results:
With the T-OMNA, 46% of the lifeguards successfully stopped the MH compared to 21% with the T-CAT (P = .015). The approach swim time was 135 seconds with the T-OMNA and 131 seconds with the T-CAT (P = .42). The total time (swim time plus tourniquet placement) was 174 seconds with the T-OMNA and 177 seconds with the T-CAT (P = .55). The adjustment time (from securing the Velcro to completing the manipulation of the windlass or ratchet) for the T-OMNA was faster than with the T-CAT (six seconds versus 19 seconds; P < .001; effect size [ES] = 0.83). The perceived fatigue was high, with a score of seven out of ten in both tests (P = .46).
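The abstract reports an effect size of 0.83 for the adjustment-time difference without naming the formula; the sketch below shows one plausible choice, a Cohen's d computed on paired (crossover) differences, using placeholder times rather than the study data.

```python
# Paired comparison of adjustment times in a crossover design, with a
# Cohen's d computed on the within-lifeguard differences. The effect-size
# formula is our assumption and the times are placeholders.
import numpy as np
from scipy import stats

t_omna = np.array([5, 7, 6, 6, 8, 5], dtype=float)        # seconds, ratchet tourniquet
t_cat  = np.array([18, 21, 17, 20, 19, 22], dtype=float)  # seconds, windlass tourniquet

diff = t_cat - t_omna
t_stat, p_value = stats.ttest_rel(t_cat, t_omna)
d_paired = diff.mean() / diff.std(ddof=1)

print(f"mean difference = {diff.mean():.1f} s, p = {p_value:.4f}, d = {d_paired:.2f}")
```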
Conclusions:
Lifeguards in this study demonstrated the ability to use both tourniquets during aquatic rescues under conditions of fatigue. The ratchet-based tourniquet controlled hemorrhage in less time than the windlass-based tourniquet, although the overall success rate for achieving complete bleeding control was low.
On one level, this chapter invites readers on an engaging cultural and geographic journey across the Americas, but more deeply it challenges our ways of facilitating early childhood teacher education for sustainability. There is no one right way to motivate teachers – both pre-service and in-service – to take on the challenges of sustainability; however, this chapter offers a range of possibilities. Harwood examines a Canadian pre-service ECEfS course of study with Indigenous and colonial perspectives interwoven; Carr focuses on the in-service experiences of educators participating in a centre transformation towards ECEfS at Arlitt Child Development Center; while Bascopé documents an in-service exploration of the role of Indigenous Chilean culture in ECEfS with teachers. Diversity and richness characterise this chapter.
This paper describes an improved, simple sample-mounting method for random powder X-ray diffraction (XRD), the razor tamped surface (RTS) method, which prepares a powder mount by tamping the loose powder with the sharp edge of a razor blade. Four kaolinites and a quartz powder were used to evaluate the RTS method by quantifying the degree of orientation in the sample mounts using orientation indices. Comparisons between the RTS method and other published simple methods (front loading, back loading and side loading) indicate that the RTS method produces the lowest packing density and the least preferred orientation in the powder mounts of all five samples. The quartz powder used in this study does exhibit a tendency to preferred orientation. The mechanism by which the RTS method reduces preferred orientation is examined by comparing the width of the sharp blade edge with the size of the clay particles. The advantages and disadvantages of the RTS method are also discussed.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for detecting tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a marker of reactive astrocytosis, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. The diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses, using predicted probabilities from binary logistic regression, examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had accuracy similar to p-tau181 and NfL in detecting cognitive impairment; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed National Alzheimer’s Coordinating Center procedures and diagnostic criteria, and the NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association of GFAP with autopsy-confirmed AD status and with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristic (ROC) analyses using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
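A sketch of the ordinal logistic regression step, relating log-transformed plasma GFAP to Braak stage with the stated covariates; the file and column names are hypothetical and this is not the study's code.

```python
# Ordinal (proportional odds) logistic regression of Braak stage on
# log-transformed plasma GFAP plus covariates. Column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("gfap_autopsy.csv")  # hypothetical analysis file

braak = df["braak_stage"].astype(pd.CategoricalDtype(ordered=True))
exog = df[["log_gfap", "sex_male", "age_at_death",
           "years_blood_to_death", "apoe_e4"]]

fit = OrderedModel(braak, exog, distr="logit").fit(method="bfgs", disp=False)
print(fit.summary())
print("OR per unit log-GFAP:", np.exp(fit.params["log_gfap"]))
```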
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71-4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53,69.15], p=0.017), but this was not observed with any other regions.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
To examine differences in noticing and use of nutrition information between jurisdictions with and without mandatory menu labelling policies, and to examine differences among sociodemographic groups.
Design:
Cross-sectional data from the International Food Policy Study (IFPS) online survey.
Setting:
IFPS participants from Australia, Canada, Mexico, United Kingdom and USA in 2019.
Participants:
Adults aged 18–99; n 19 393.
Results:
Participants in jurisdictions with mandatory policies were significantly more likely to notice and use nutrition information, order something different, eat less of their order and change restaurants compared to jurisdictions without policies. For noticed nutrition information, the differences between policy groups were greatest comparing older to younger age groups and comparing high education (difference of 10·7 %, 95 % CI 8·9, 12·6) to low education (difference of 4·1 %, 95 % CI 1·8, 6·3). For used nutrition information, differences were greatest comparing high education (difference of 4·9 %, 95 % CI 3·5, 6·4) to low education (difference of 1·8 %, 95 % CI 0·2, 3·5). Mandatory labelling was associated with an increase in ordering something different among the majority ethnicity group and a decrease among the minority ethnicity group. For changed restaurant visited, differences were greater for medium and high education compared to low education, and differences were greater for higher compared to lower income adequacy.
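The contrasts reported here are differences in proportions between policy groups. As a simple illustration of such a contrast with a Wald 95% confidence interval (the counts are placeholders, and the study's own estimates come from its survey analyses rather than this formula):

```python
# Difference between two proportions with a Wald 95% confidence interval,
# e.g., % noticing nutrition information in mandatory-policy vs. no-policy
# jurisdictions. Counts are placeholders, not the IFPS data.
import math

def prop_diff_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = prop_diff_ci(x1=2400, n1=6000, x2=1800, n2=6000)
print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```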
Conclusions:
Participants living in jurisdictions with mandatory nutrition information in restaurants were more likely to report noticing and using nutrition information, as well as greater efforts to modify their consumption. However, the magnitudes of these differences were relatively small.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.