Asymptomatic bacteriuria (ASB) treatment is a common form of antibiotic overuse and diagnostic error. Antibiotic stewardship using the inappropriate diagnosis of urinary tract infection (ID-UTI) measure has reduced ASB treatment in diverse hospitals. However, critical access hospitals (CAHs) have differing resources that could impede stewardship. We aimed to determine if stewardship including the ID-UTI measure could reduce ASB treatment in CAHs.
Methods:
From October 2022 to July 2023, ten CAHs participated in an Intensive Quality Improvement Cohort (IQIC) program including 3 interventions to reduce ASB treatment: 1) learning labs (ie, didactics with shared learning), 2) mentoring, and 3) data-driven performance reports including hospital peer comparison based on the ID-UTI measure. To assess effectiveness of the IQIC program, change in the ID-UTI measure (ie, percentage of patients treated for a UTI who had ASB) was compared to two non-equivalent control outcomes (antibiotic duration and unjustified fluoroquinolone use).
Results:
Ten CAHs abstracted a total of 608 positive urine culture cases. Over the cohort period, the percentage of patients treated for a UTI who had ASB declined (aOR per month = 0.935, 95% CI: 0.873, 1.001, P = 0.055) from 28.4% (range across hospitals, 0%-63%) in the first to 18.6% (range, 0%-33%) in the final month. In contrast, antibiotic duration and unjustified fluoroquinolone use were unchanged (P = 0.768 and 0.567, respectively).
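As an illustrative arithmetic sketch (not from the study itself), the reported per-month adjusted odds ratio can be compounded to check consistency with the first- and final-month percentages; the nine-month span between first and final months is an assumption:

```python
# Illustrative check: compound the reported per-month adjusted odds
# ratio (aOR = 0.935) over an assumed nine-month span, starting from
# the observed first-month ASB treatment proportion of 28.4%.

def proportion_after_months(p0, aor_per_month, months):
    """Convert a proportion to odds, apply the monthly odds ratio
    repeatedly, and convert the result back to a proportion."""
    odds = (p0 / (1 - p0)) * aor_per_month ** months
    return odds / (1 + odds)

p_final = proportion_after_months(0.284, 0.935, 9)
print(round(p_final, 3))  # → 0.178, close to the observed 18.6%
```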
Conclusions:
The IQIC intervention, including learning labs, mentoring, and performance reports using the ID-UTI measure, was associated with a non-significant decrease in treatment of ASB, while control outcomes (duration and unjustified fluoroquinolone use) did not change.
Observations of glacier melt and runoff are of fundamental interest in the study of glaciers and their interactions with their environment. Considerable recent interest has developed around distributed acoustic sensing (DAS), a sensing technique which utilizes Rayleigh backscatter in fiber optic cables to measure the seismo-acoustic wavefield in high spatial and temporal resolution. Here, we present data from a month-long, 9 km DAS deployment extending through the ablation and accumulation zones on Rhonegletscher, Switzerland, during the 2020 melt season. While testing several types of machine learning (ML) models, we establish a regression problem, using the DAS data as the independent variable, to infer the glacier discharge observed at a proglacial stream gauge. We also compare two predictive models that only depend on meteorological station data. We find that the seismo-acoustic wavefield recorded by DAS can be utilized to infer proglacial discharge. Models using DAS data outperform the two models trained on meteorological data, with mean absolute errors of 0.64, 2.25 and 2.72 m³ s⁻¹, respectively. This study demonstrates the ability of in situ glacier DAS to be used for quantifying proglacial discharge and points the way to a new approach to measuring glacier runoff.
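The model comparison above rests on mean absolute error (MAE); a minimal sketch of the metric, using hypothetical placeholder values rather than the study's discharge series:

```python
# Mean absolute error (MAE): the average absolute difference between
# paired observed and predicted values. The data below are
# hypothetical placeholders, not the Rhonegletscher series.

def mean_absolute_error(observed, predicted):
    """Average of |observed - predicted| over all pairs."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

gauge = [10.0, 12.5, 11.0, 9.5]   # hypothetical discharge, m^3 s^-1
model = [10.4, 12.0, 11.6, 9.0]   # hypothetical model predictions
print(mean_absolute_error(gauge, model))  # → 0.5
```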
Obstructive sleep apnea (OSA) is associated with worse outcomes in stroke, Alzheimer’s disease (AD) and Parkinson’s disease (PD), but diagnosis is challenging in these groups. We aimed to compare the prevalence of high risk of OSA based on commonly used questionnaires and self-reported OSA diagnosis: 1. within groups with stroke, AD, PD and the general population (GP); 2. between neurological groups and the GP.
Methods:
Individuals with stroke, PD and AD were identified in the Canadian Longitudinal Study of Aging (CLSA) by survey. STOP, STOP-BAG, STOP-B28 and GOAL screening tools and OSA self-report were compared by the Chi-squared test. Logistic regression was used to compare high risk/self-report of OSA, in neurological conditions vs. GP, adjusted for confounders.
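The questionnaire comparisons above use the chi-squared test; a minimal sketch for a 2x2 prevalence table, with made-up counts (not CLSA data):

```python
# Pearson chi-squared statistic (no continuity correction) for a 2x2
# table [[a, b], [c, d]]; the counts below are hypothetical.

def chi_squared_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 contingency table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: stroke group vs. general population
# cols: high risk of OSA vs. not high risk
stat = chi_squared_2x2(60, 40, 400, 600)  # ≈ 14.95, well above the
# 3.84 critical value for 1 df at alpha = 0.05
```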
Results:
We studied 30,097 participants with mean age of 62.3 years (SD 10.3) (stroke n = 1791; PD n = 175; AD n = 125). In all groups, a positive GOAL was the most prevalent, while positive STOP was least prevalent among questionnaires. Significant variations in high-risk OSA were observed between different questionnaires across all groups. Under 1.5% of individuals self-reported OSA. While all questionnaires suggested a higher prevalence of OSA in stroke than the GP, for PD and AD, there was heterogeneity depending on questionnaire.
Conclusions:
The wide range of prevalences of high risk of OSA resulting from commonly used screening tools underscores the importance of validating them in older adults with neurological disorders. OSA was self-reported in disproportionately small numbers across groups, suggesting that OSA is underdiagnosed in older adults or underreported by patients, which is concerning given its increasingly recognized impact on brain health.
As part of the Research Domain Criteria (RDoC) initiative, the NIMH seeks to improve experimental measures of cognitive and positive valence systems for use in intervention research. However, many RDoC tasks have not been psychometrically evaluated as a battery of measures. Our aim was to examine the factor structure of 7 such tasks chosen for their relevance to schizophrenia and other forms of serious mental illness. These include the n-back, Sternberg, and self-ordered pointing tasks (measures of the RDoC cognitive systems working memory construct); flanker and continuous performance tasks (measures of the RDoC cognitive systems cognitive control construct); and probabilistic learning and effort expenditure for reward tasks (measures of reward learning and reward valuation constructs).
Participants and Methods:
The sample comprised 286 cognitively healthy participants who completed novel versions of all 7 tasks via an online recruitment platform, Prolific, in the summer of 2022. The mean age of participants was 38.6 years (SD = 14.5, range 18-74), 52% identified as female, and stratified recruitment ensured an ethnoracially diverse sample. Excluding time for instructions and practice, each task lasted approximately 6 minutes. Task order was randomized. We estimated optimal scores from each task including signal detection d-prime measures for the n-back, Sternberg, and continuous performance task, mean accuracy for the flanker task, win-stay to win-shift ratio for the probabilistic learning task, and trials completed for the effort expenditure for reward task. We used parallel analysis and a scree plot to determine the number of latent factors measured by the 7 task scores. Exploratory factor analysis with oblimin (oblique) rotation was used to examine the factor loading matrix.
Results:
The scree plot and parallel analyses of the 7 task scores suggested three primary factors. The flanker and continuous performance task both strongly loaded onto the first factor, suggesting that these measures are strong indicators of cognitive control. The n-back, Sternberg, and self-ordered pointing tasks strongly loaded onto the second factor, suggesting that these measures are strong indicators of working memory. The probabilistic learning task solely loaded onto the third factor, suggesting that it is an independent indicator of reinforcement learning. Finally, the effort expenditure for reward task modestly loaded onto the second but not the first and third factors, suggesting that effort is most strongly related to working memory.
Conclusions:
Our aim was to examine the factor structure of 7 RDoC tasks. Results support the RDoC formulation of independent cognitive control, working memory, and reinforcement learning constructs. However, effort is a factorially complex construct that is not uniquely or even most strongly related to positive valence. Thus, there is reason to believe that at least 6 of these tasks are appropriate measures of constructs such as working memory, reinforcement learning, and cognitive control.
Eliciting perceived cognitive complaints is a routine part of a clinical neuropsychological evaluation, presumably because complaints are informative of underlying pathology. However, there is no strong empirical support that subjective cognitive impairment (SCI) is actually related to objective cognitive impairment as measured by neurocognitive tests. Instead, internalizing psychopathology is thought to predominantly influence the endorsement of SCI. Specifically, individuals with greater symptoms of depression and anxiety, when accounting for comorbidities, are more disposed to overestimate their degree of cognitive impairment relative to objective testing. Yet, there are few existing studies that have determined which factors influence both SCI and the discrepancy between subjective and objective cognitive impairment in general outpatient populations. The current study examined the relationship between subjective and objective cognitive impairment in a clinically diverse sample of outpatients. We additionally explored the associations between SCI and relevant intrapersonal factors including internalizing psychopathology, number of medical comorbidities, and demographics. Finally, we quantified the degree of discrepancy between subjective and objective impairment and examined this discrepancy in relation to the intrapersonal factors.
Participants and Methods:
The sample comprised 142 adult women and men (age range 18–79 years) seen in an outpatient neuropsychology clinic for a diverse range of referral questions. Scores on the cognition portion of the WHO Disability Assessment Schedule (WHODAS 2.0) were used to index SCI. A composite score from 14 measures across various domains of cognitive functioning served as an objective measure of cognitive functioning. Internalizing psychopathology was measured via a standardized composite of scores from screening measures of anxiety and depression. Medical comorbidities were indexed by the number of different ICD diagnostic categories documented in patients' medical records. Demographics included age, sex, race, and years of formal education. Objective-subjective discrepancy scores were computed by saving standardized residuals from a linear regression of neurocognitive test performance on the WHODAS 2.0 scores.
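The discrepancy-score construction (standardized residuals from a simple linear regression) can be sketched in pure Python; the five paired scores below are hypothetical, not patient data:

```python
# Standardized residuals of y ~ x: residuals of an ordinary
# least-squares fit, rescaled to mean 0 and SD 1 (population SD).
# All values below are illustrative placeholders.

def standardized_residuals(x, y):
    """Standardized residuals from a simple OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    mr = sum(resid) / n
    sd = (sum((r - mr) ** 2 for r in resid) / n) ** 0.5
    return [(r - mr) / sd for r in resid]

whodas = [1.0, 2.0, 3.0, 4.0, 5.0]      # hypothetical SCI ratings
cognition = [2.0, 1.0, 4.0, 3.0, 6.0]   # hypothetical test composite
resids = standardized_residuals(whodas, cognition)
```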
Results:
A hierarchical linear regression revealed that objective cognitive impairment was not significantly related to SCI (p > .05), explaining less than 2% of the variance in SCI ratings. Likewise, participants' demographics (age, sex, education, race) and number of comorbidities were not significantly related to their SCI ratings, explaining about 6% of the variance. However, participants' level of internalizing psychopathology was significantly associated with SCI (F[10, 131] = 4.99, p < .001), and explained approximately 20% of the variance in SCI ratings. Similarly, the degree of discrepancy between subjective and objective cognitive impairment was primarily influenced by internalizing psychopathology (F[9, 132] = 5.20, p < .001, R2 = 21%) and largely unrelated to demographics and number of comorbidities, which explained about 6% of the variance.
Conclusions:
These findings are consistent with prior research suggesting that SCI may be more indicative of the extent of internalizing psychopathology rather than actual cognitive impairment. Taken together, these results illuminate potential treatment and diagnostic implications associated with assessing perceived cognitive complaints during a neuropsychological evaluation.
Agricultural workers are immersed in environments associated with increased risk for adverse psychiatric and neurological outcomes. Agricultural work-related risks to brain health include exposure to pesticides, heavy metals, and organic dust. Despite this, there is a gap in our understanding of the underlying brain systems impacted by these risks. This study explores clinical and cognitive domains, and functional brain activity in agricultural workers. We hypothesized that a history of agricultural work-related risks would be associated with poorer clinical and cognitive outcomes as well as changes in functional brain activity within cortico-striatal regions.
Participants and Methods:
The sample comprised 17 agricultural workers and a comparison group of 45 non-agricultural workers recruited in the Northern Colorado area. All participants identified as White and non-Hispanic. The mean age of participants was 51.7 years (SD = 21.4, range 18-77), 60% identified as female, and 37% identified as male. Participants completed the National Institutes of Health Toolbox (NIH Toolbox) and Montreal Cognitive Assessment (MoCA) on their first visit. During the second visit, they completed NIH Patient-Reported Outcomes Measurement Information System (PROMIS) measures and underwent functional magnetic resonance imaging (fMRI; N = 15 agriculture and N = 35 non-agriculture) while completing a working memory task (Sternberg). Blood oxygen-level dependent (BOLD) response was compared between participants. Given the small sample size, the whole brain voxel-wise group comparison threshold was set at alpha = .05, but not otherwise corrected for multiple comparisons. Cohen’s d effect sizes were estimated for all voxels.
Results:
Analyses of cognitive scores showed significant deficits in episodic memory for the agricultural work group. Additionally, the agricultural work group scored higher on measures of self-reported anger, cognitive concerns, and social participation. Analyses of fMRI data showed increased BOLD activity around the orbitofrontal cortex (medium to large effects) and bilaterally in the entorhinal cortex (large effects) for the agricultural work group. The agricultural work group also showed decreased BOLD activity in the cerebellum and basal ganglia (medium to large effects).
Conclusions:
To our knowledge, this study provides the first-ever evidence showing differences in brain activity associated with a history of working in agriculture. These findings of poorer memory, concerns about cognitive functioning, and increased anger suggest clinical relevance. Social participation associated with agricultural work should be explored as a potential protective factor for cognition and brain health. Brain imaging data analyses showed increased activation in areas associated with motor functioning, cognitive control, and emotion. These findings are limited by small sample size, lack of diversity in our sample, and coarsely defined risk. Despite these limitations, the results are consistent with an overall concern that risks associated with agricultural work can lead to cognitive and psychiatric harm via changes in brain health. Replications and future studies with larger sample sizes, more diverse participants, and more accurately defined risks (e.g., pesticide exposure) are needed.
Deficits in cognitive ability are common among patients with schizophrenia. The MATRICS Consensus Cognitive Battery (MCCB) was designed to assess cognitive ability in studies of patients diagnosed with schizophrenia and has demonstrated high test-retest reliability with minimal practice effects, even in multi-site trials. However, given the motivational challenges associated with schizophrenia, it is unknown whether performance on MCCB tasks affects performance at later stages of testing. The goal of this study was to determine whether there are differences between people with and without schizophrenia in how their performance on individual MCCB tasks influences their performance throughout the battery.
Participants and Methods:
The sample comprised 92 total participants including 49 cognitively healthy comparison participants and 43 outpatients diagnosed with schizophrenia. The mean age of participants was 44.2 years (SD = 12.0, range 21–69) and 61% identified as male. The Trail Making Test, Brief Assessment of Cognition in Schizophrenia, Hopkins Verbal Learning Test – Revised, Letter-Number Span, and Category Fluency from the MCCB were administered in the same order at 2 different sites and studies from 2016–2022. The autocorrelation of task t-scores within each participant was computed and then compared between control and outpatient participants to determine whether the groups differed. Group mean t-scores for each task were also compared between groups.
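The within-participant autocorrelation described above can be sketched as a lag-1 autocorrelation over tasks in administration order; the five t-scores below are hypothetical, not MCCB data:

```python
# Lag-1 autocorrelation of a participant's ordered task t-scores:
# correlation between each score and the next one in the battery.
# The scores below are hypothetical placeholders.

def lag1_autocorrelation(scores):
    """Lag-1 autocorrelation of an ordered sequence of scores."""
    n = len(scores)
    m = sum(scores) / n
    num = sum((scores[i] - m) * (scores[i + 1] - m) for i in range(n - 1))
    den = sum((s - m) ** 2 for s in scores)
    return num / den

# hypothetical t-scores for five tasks, in administration order
r = lag1_autocorrelation([45.0, 47.0, 44.0, 46.0, 43.0])  # → -0.5
```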
Results:
We found no significant difference in autocorrelations across MCCB tasks between healthy comparison participants and outpatients. However, mean performance in all tasks was lower for the outpatient group than for the healthy comparison group. None of the tasks used stood out as having significantly lower mean scores than other tasks for either group.
Conclusions:
Our findings suggest that performance on individual MCCB tasks does not affect performance throughout the battery differently between the healthy comparison group and outpatients. This suggests that participants with schizophrenia are not particularly reactive to past performance on MCCB tasks. Additionally, this finding further supports use of the MCCB in this population. Further research is needed to determine whether subgroups of patients and/or different batteries of measures show different patterns of reactivity.
Hookworm infection affects millions globally, leading to chronic conditions like malnutrition and anaemia. Among the hookworm species, Ancylostoma ceylanicum stands out as a generalist, capable of infecting various hosts, including humans, cats, dogs and hamsters. Surprisingly, it cannot establish in mice, despite their close phylogenetic relationship to hamsters. The present study investigated the development of A. ceylanicum in immunodeficient NSG mice to determine the contribution of the immune system to host restriction. The infections became patent on day 19 post-infection (PI) and exhibited elevated egg production which lasted for at least 160 days PI. Infective A. ceylanicum larvae reared from eggs released by infected NSG mice were infectious to hamsters and capable of reproduction, indicating that the adults in the NSG mice were producing viable offspring. In contrast, A. ceylanicum showed limited development in outbred Swiss Webster mice. Furthermore, the closely related canine hookworm Ancylostoma caninum was unable to infect and develop in NSG mice, indicating that different mechanisms may determine host specificity even in closely related species. This is the first report of any hookworm species completing its life cycle in a mouse and implicates the immune system in determining host specificity in A. ceylanicum.
Identifying youths most at risk of COVID-19-related mental illness is essential for the development of effective targeted interventions.
Aims
To compare trajectories of mental health throughout the pandemic in youth with and without prior mental illness and identify those most at risk of COVID-19-related mental illness.
Method
Data were collected from individuals aged 18–26 years (N = 669) from two existing cohorts: IMAGEN, a population-based cohort; and ESTRA/STRATIFY, clinical cohorts of individuals with pre-existing diagnoses of mental disorders. Repeated COVID-19 surveys and standardised mental health assessments were used to compare trajectories of mental health symptoms from before the pandemic through to the second lockdown.
Results
Mental health trajectories differed significantly between cohorts. In the population cohort, depression and eating disorder symptoms increased by 33.9% (95% CI 31.78–36.57) and 15.6% (95% CI 15.39–15.68) during the pandemic, respectively. By contrast, these remained high over time in the clinical cohort. Conversely, trajectories of alcohol misuse were similar in both cohorts, decreasing continuously (a 15.2% decrease) during the pandemic. Pre-pandemic symptom severity predicted the observed mental health trajectories in the population cohort. Surprisingly, being relatively healthy predicted increases in depression and eating disorder symptoms and in body mass index. By contrast, those initially at higher risk for depression or eating disorders reported a lasting decrease.
Conclusions
Healthier young people may be at greater risk of developing depressive or eating disorder symptoms during the COVID-19 pandemic. Targeted mental health interventions considering prior diagnostic risk may be warranted to help young people cope with the challenges of psychosocial stress and reduce the associated healthcare burden.
This study utilizes speleothem trace elements as climate proxies to reconstruct hydroclimate variability over approximately 350 years in the Southern Cook Islands. Stalagmites Pu17 and Pu4 from Pouatea cave were analyzed using high-resolution LA-ICP-MS for trace elements (Mg, Na, Sr, P, U, Y). By monitoring cave dripwater and conducting regression analysis, we found that Mg and Na in Pouatea dripwater mostly originated from marine aerosols, while Sr and Ba were primarily from bedrock, with additional Ba coming from marine aerosols and leaching of weathered oceanic basalt. Mg was identified as the most reliable element for hydroclimate reconstruction due to its predominantly marine aerosol origin. Infiltration, via dilution of marine aerosols and bedrock inputs, was identified as the main driver of trace element variations in Pouatea at a seasonal scale. Transfer functions were established between each trace element and effective infiltration, and effective infiltration was reconstructed from them, with Mg showing the strongest correlation. The reconstructed infiltration data were compared with climate indices, showing an overarching role of the SPCZ and ENSO in controlling rainfall in the South Pacific. This research demonstrates the potential of speleothem trace elements for paleohydroclimate reconstructions, improving understanding of rainfall variability in the climatically vulnerable South Pacific Islands over past millennia.
Rapid antigen detection tests (Ag-RDT) for SARS-CoV-2 with emergency use authorization generally include a condition of authorization to evaluate the test’s performance in asymptomatic individuals when used serially. We aim to describe a novel study design that was used to generate regulatory-quality data to evaluate the serial use of Ag-RDT in detecting SARS-CoV-2 virus among asymptomatic individuals.
Methods:
This prospective cohort study used a siteless, digital approach to assess longitudinal performance of Ag-RDT. Individuals over 2 years old from across the USA with no reported COVID-19 symptoms in the 14 days prior to study enrollment were eligible to enroll in this study. Participants throughout the mainland USA were enrolled through a digital platform between October 18, 2021 and February 15, 2022. Participants were asked to test using Ag-RDT and molecular comparators every 48 hours for 15 days. Enrollment demographics, geographic distribution, and SARS-CoV-2 infection rates are reported.
Key Results:
A total of 7361 participants enrolled in the study, and 492 participants tested positive for SARS-CoV-2, including 154 who were asymptomatic and tested negative to start the study. This exceeded the initial enrollment goals of 60 positive participants. We enrolled participants from 44 US states, and geographic distribution of participants shifted in accordance with the changing COVID-19 prevalence nationwide.
Conclusions:
The digital siteless approach employed in the “Test Us At Home” study enabled rapid, efficient, and rigorous evaluation of rapid diagnostics for COVID-19 and can be adapted across research disciplines to optimize study enrollment and accessibility.
Functional recovery is a treatment goal that goes beyond symptomatic remission and encompasses multiple aspects of schizophrenia patients’ lives, including quality of life and physical and mental functioning. There is evidence that long-acting injectable (LAI) treatments promote adherence and reduce rehospitalisation and functional decline, which could facilitate patients’ ability to reach functional recovery. Despite this, LAIs are underused in the first-episode (FEP) and early-phase (EP) patient population, due to physician hesitancy and concerns around stigma. A Delphi panel was held to gain expert consensus on an approach to the domains and assessment of functional recovery elements in FEP and EP schizophrenia patients.
A literature review and input from a steering committee of 5 experts in psychiatry informed statement development for a three-round modified Delphi process. Round one was conducted via one-to-one video conference interviews, and the successive rounds were conducted via electronic surveys, which enabled international collaboration. Statements on the different domains and assessment for functional recovery were presented to 17 psychiatrists, practicing in 7 countries (France, Italy, US, Germany, Spain, Denmark, and UK), experienced in the treatment of schizophrenia with LAIs. Several analysis rules determined whether a statement could progress to the next round and specified the level of agreement required to achieve consensus. Measures of central tendency (mode, mean) and variability (interquartile range) were reported back to help panellists look at their previous responses in the context of the overall group.
A consensus was reached (defined a priori as ≥80% agreement) on all 27 statements covering the dimensions, assessment, and level of achieved functional recovery for FEP and EP patients. The following domains are important to consider when assessing functional recovery: depression, aggressive behaviour, social interaction, family functioning, education/employment, sexual functioning, and leisure activities. Additionally, panellists reached consensus that these dimensions, if present, should be only minimally impairing (excluding sexual functioning) and should be asked about at every encounter with the patient (excluding sexual functioning and leisure activities). In summary, this Delphi panel yielded agreement that functional recovery is multidimensional and should be assessed regularly as part of usual care on an individual patient level in FEP and EP schizophrenia patients.
Schizophrenia is among the top ten causes of years lost due to disability. Goals of treatment are evolving beyond remission of psychotic symptoms to include physical and mental functioning, quality of life, and long-term functional recovery. Evidence has shown long-acting injectables (LAIs) are beneficial for schizophrenia patients by increasing treatment adherence and decreasing relapse and rehospitalisation. This potentially reduces disease progression and facilitates functional recovery. However, LAIs are underused and often seen as a last resort for first-episode (FEP) and early-phase (EP) patients, due to physicians’ lack of familiarity and stigma.
A three-round modified Delphi panel was held to gain expert consensus on an approach to functional recovery in FEP and EP patients with LAIs. A literature review and input from a steering committee of 5 experts in psychiatry informed the development of statements. Round one was carried out via one-to-one video conference interviews, and the subsequent rounds were conducted via electronic surveys, which enabled international collaboration. Delphi panellists were 17 psychiatrists with schizophrenia treatment experience, practicing in 7 countries (France, Italy, US, Germany, Spain, Denmark, and UK). Several analysis rules determined whether a statement could progress to the next round and specified the level of agreement required to achieve consensus. Measures of central tendency (mode, mean) and variability (interquartile range) of aggregated responses from the previous round were reported back to panellists so they could consider their responses in relation to the group.
There was consensus (defined a priori as ≥80% agreement) on the 8 statements relating to long-term treatment goals and LAI links to functional recovery. LAI treatment in FEP and EP patients increases adherence and reduces treatment burden and functional decline compared with the same or other oral medications. Additionally, there was consensus that LAIs lead to better treatment outcomes and functional recovery. Other important factors to achieving functional recovery include patient attitude towards treatment and psychoeducation. Furthermore, consensus was reached that functional recovery and quality of life are linked. In summary, this Delphi panel yielded agreement that functional recovery is a reachable goal for FEP and EP patients and can be enhanced using LAIs.
Soil-transmitted nematodes (STNs) place a tremendous burden on health and economics worldwide, with an estimated 1.5 billion people, or 24% of the global population, infected with at least one STN. Children and pregnant women carry the heaviest pathological burden, and disease caused by these blood-feeding intestinal worms can result in anaemia and delays in physical and intellectual development. These parasites are capable of infecting and reproducing in various host species, but what determines host specificity remains unanswered. Identifying the molecular determinants of host specificity would provide a crucial breakthrough towards understanding the biology of parasitism and could provide attractive targets for intervention. To investigate specificity mechanisms, members of the hookworm genus Ancylostoma provide a powerful system as they range from strict specialists to generalists. Using transcriptomics, differentially expressed genes (DEGs) in permissive (hamster) and non-permissive (mouse) hosts at different early time points during infection with A. ceylanicum were examined. Analysis of the data has identified unique immune responses in mice, as well as potential permissive signals in hamsters. Specifically, immune pathways associated with resistance to infection are upregulated in the non-permissive host, providing a possible protection mechanism that is absent in the permissive host. Furthermore, unique signatures of host specificity that may inform the parasite that it has invaded a permissive host were identified. These data provide novel insight into the tissue-specific gene expression differences between permissive and non-permissive hosts in response to hookworm infection.
Reported childhood adversity (CA) is associated with development of depression in adulthood and predicts a more severe course of illness. Although elevated serotonin 1A receptor (5-HT1AR) binding potential, especially in the raphe nuclei, has been shown to be a trait associated with major depression, we did not replicate this finding in an independent sample using the partial agonist positron emission tomography tracer [11C]CUMI-101. Evidence suggests that CA can induce long-lasting changes in expression of 5-HT1AR, and thus, a history of CA may explain the disparate findings.
Methods
Following up on our initial report, 28 unmedicated participants in a current depressive episode (bipolar n = 16, unipolar n = 12) and 19 non-depressed healthy volunteers (HVs) underwent [11C]CUMI-101 imaging to quantify 5-HT1AR binding potential. Participants in a depressive episode were stratified into mild/moderate and severe CA groups via the Childhood Trauma Questionnaire. We hypothesized higher hippocampal and raphe nuclei 5-HT1AR with severe CA compared with mild/moderate CA and HVs.
Results
There was a group-by-region effect (p = 0.011) when considering HV, depressive episode mild/moderate CA, and depressive episode severe CA groups, driven by significantly higher hippocampal 5-HT1AR binding potential in participants in a depressive episode with severe CA relative to HVs (p = 0.019). Contrary to our hypothesis, no significant binding potential differences were detected in the raphe nuclei (p-values > 0.05).
Conclusions
Pending replication in larger samples, elevated hippocampal 5-HT1AR binding potential may serve as a promising biomarker through which to investigate the neurobiological link between CA and depression.
SARS-CoV-2 has severely affected capacity in the National Health Service (NHS), and waiting lists are markedly increasing due to downtime of up to 50 min between patient consultations/procedures, imposed to allow infectious aerosols to clear and thereby reduce the risk of infection. Ventilation accelerates this aerosol clearance, but retroactively installing built-in mechanical ventilation is often cost-prohibitive. We investigated the effect of using portable air cleaners (PAC), a low-energy and low-cost alternative, to reduce the concentration of aerosols in typical patient consultation/procedure environments. The experimental setup consisted of an aerosol generator, which mimicked a subject infected with SARS-CoV-2, and an aerosol detector, representing a subject who could potentially contract the virus. Experiments on aerosol dispersion and clearing were undertaken in situ in a variety of rooms with two different types of PAC in various combinations and positions. Correct use of PAC can reduce the clearance half-life of aerosols by 82% compared to the same indoor environment without any ventilation, and at a rate broadly equivalent to built-in mechanical ventilation. In addition, the highest level of aerosol concentration measured when using PAC remains at least 46% lower than that when no mitigation is used, even if the PAC's operation is impeded by placement under a table. The use of PAC leads to significant reductions in the aerosol concentrations associated with transmission of droplet-based airborne diseases, which could enable NHS departments to reduce the downtime between consultations/procedures.
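To illustrate how a reduction in clearance half-life translates into room downtime, the sketch below assumes first-order (exponential) aerosol decay and an illustrative baseline half-life of 10 min — the abstract reports only the 82% relative reduction, not absolute half-lives, so these numbers are hypothetical.

```python
import math

def time_to_clear(half_life_min, fraction_remaining=0.01):
    """Minutes for aerosol concentration to decay to a given fraction
    of its initial value, assuming first-order exponential decay."""
    return half_life_min * math.log(1 / fraction_remaining) / math.log(2)

baseline_half_life = 10.0                        # illustrative, not from the study
pac_half_life = baseline_half_life * (1 - 0.82)  # 82% reduction reported with PAC

print(round(time_to_clear(baseline_half_life), 1))  # time to 1% without ventilation (~66.4 min)
print(round(time_to_clear(pac_half_life), 1))       # time to 1% with PAC (~12.0 min)
```

Because clearance time scales linearly with half-life, whatever the true baseline, an 82% shorter half-life cuts the waiting time to reach any target concentration by the same 82%.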
‘E-learning’ can be defined broadly as the use of internet technologies to deliver teaching and to enhance knowledge and performance. It is also referred to as web-based, online, distributed or internet-based learning (Ruiz et al. 2006). Many sites use ‘blended learning’, where e-learning is combined with in-person or virtual face-to-face instructor-led training.
The increase in portability, power and connectivity of devices means that most smartphones can easily access information in real time (Marzano et al. 2017) and, of internet users worldwide, 93% access the internet via mobile devices (Johnson 2021). This means that access to the internet to gather information about mental health is immediate, but the vast number of information sites can easily become overwhelming for both patients and clinicians. A simple search for a single mental health topic generates a huge number and range of results. These vary from reviews of the evidence and primary research articles, to news articles and advertisements for treatment centres. The internet user is swamped with an array of sites of variable (and often unknown) quality, which are neither necessarily relevant to the original question nor ranked in order of reliability.
The magnitude and azimuth of horizontal ice flow at Camp Century, Greenland have been measured several times since 1963. Here, we provide a further two independent measurements over the 2017–21 period. Our consensus estimate of horizontal ice flow from four independent satellite-positioning solutions is 3.65 ± 0.13 m a⁻¹ at an azimuth of 236 ± 2°. A portion of the small, but significant, differences in ice velocity and azimuth reported between studies likely results from spatial gradients in ice flow. This highlights the importance of restricting inter-study comparisons of ice flow estimates to measurements surveyed within a horizontal distance of one ice thickness from each other. We suggest that ice flow at Camp Century is stable on seasonal to multi-decadal timescales. The airborne and satellite laser altimetry record indicates an ice thickening trend of 1.1 ± 0.3 cm a⁻¹ since 1994. This thickening trend is qualitatively consistent with previously inferred ongoing millennial-scale ice thickening at Camp Century. The ice flow divide immediately north of Camp Century may now be migrating southward, although the reasons for this divide migration are poorly understood. The Camp Century flowlines presently terminate in the vicinity of Innaqqissorsuup Oqquani Sermeq (Gade Gletsjer) on the Melville Bay coast.
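The speed-and-azimuth convention above can be resolved into map components for comparison between surveys; the sketch below assumes azimuth is measured clockwise from true north, the usual geodetic convention.

```python
import math

def flow_components(speed_m_per_a, azimuth_deg):
    """Resolve a horizontal flow vector (azimuth clockwise from true north)
    into (easting, northing) components in m/a."""
    az = math.radians(azimuth_deg)
    return speed_m_per_a * math.sin(az), speed_m_per_a * math.cos(az)

# Consensus estimate from the study: 3.65 m/a at 236 degrees.
east, north = flow_components(3.65, 236.0)
# An azimuth of 236 degrees points west-southwest, so both components are negative.
```

Comparing components rather than raw azimuths avoids wrap-around issues near 0°/360° when averaging or differencing estimates from independent positioning solutions.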
Artificial illumination is a fundamental human need. Burning wood and other materials, usually in hearths and fireplaces, extended daylight hours, whilst the use of flammable substances in torches offered light on the move. It is increasingly understood that pottery played a role in light production. In this study, we focus on ceramic oval bowls, made and used primarily by hunter-gatherer-fishers of the circum-Baltic over a c. 2000-year period beginning in the mid-6th millennium cal BC. Oval bowls commonly occur alongside larger (cooking) vessels. Their function as ‘oil lamps’ for illumination has been proposed on many occasions, but only limited direct evidence has been secured to test this functional association. This study presents the results of molecular and isotopic analysis of preserved organic residues obtained from 115 oval bowls from 25 archaeological sites representing a wide range of environmental settings. Our findings confirm that the oval bowls of the circum-Baltic were used primarily for burning fats and oils, predominantly for the purposes of illumination. The fats derive from the tissues of marine, freshwater, and terrestrial organisms. Bulk isotope data of charred surface deposits show a consistently different pattern of use when oval bowls are compared to other pottery vessels within the same assemblage. It is suggested that hunter-gatherer-fishers around the 55th parallel commonly deployed material culture for artificial light production, but the evidence is restricted to times and places where more durable technologies were employed, including the circum-Baltic.