Objective:
To examine the association between leisure activity (LA) frequency and cognitive trajectories over 5 years across adulthood, and whether gender and age moderate these associations.
Method:
A total of 234 cognitively healthy adults (21–80 years) completed an LA questionnaire at baseline and neuropsychological measures at baseline and after 5 years. Latent change score analysis was applied to generate latent variables estimating changes in different cognitive domains. For a secondary analysis, LA component scores were calculated, reflecting cognitive-intellectual, social, and physical activities. Regression analysis examined the association between baseline LA and cognitive change, and potential moderation by gender and age. In addition, we tested the influence of cortical gray matter thickness on the results.
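As an illustrative sketch of the moderation step only (not the study's actual latent change score pipeline), a simplified difference-score regression with gender × LA and age × LA interaction terms can be written as follows; all variable names and data are hypothetical:

```python
import numpy as np

def moderation_fit(la, gender, age, change):
    """OLS of cognitive change on baseline LA with gender and age
    moderation (interaction) terms. A simplified stand-in for the
    latent change score analysis; variables are illustrative."""
    la_c = la - la.mean()          # centre predictors so main effects
    age_c = age - age.mean()       # are interpretable at the sample mean
    X = np.column_stack([
        np.ones_like(la),          # intercept
        la_c, gender, age_c,       # main effects
        la_c * gender,             # gender x LA moderation
        la_c * age_c,              # age x LA moderation
    ])
    beta, *_ = np.linalg.lstsq(X, change, rcond=None)
    return beta

# Synthetic illustration: change depends on LA plus a gender x LA interaction
rng = np.random.default_rng(0)
n = 234
la = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n).astype(float)
age = rng.uniform(21, 80, n)
change = 0.3 * la + 0.2 * la * gender + rng.normal(0, 0.5, n)
beta = moderation_fit(la, gender, age, change)
```

A non-zero interaction coefficient (here the fifth element of `beta`) is what "gender moderates the association" means in regression terms.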
Results:
We found that higher LA engagement was associated with slower cognitive decline for reasoning, speed, and memory, as well as with better vocabulary across the two time points. Regarding LA components, higher Social-LA and Intellectual-LA predicted slower rates of cognitive decline across different domains, while Physical-LA was not associated with cognitive change. Gender, but not age, moderated some of the observed associations. Our results remained unchanged after controlling for cortical gray matter thickness.
Conclusions:
We demonstrated a protective effect of LA engagement on cognitive trajectories over 5 years, independent of demographics and a measure of brain health. The effects were in part moderated by gender, but not age. Results should be replicated in larger and more diverse samples. Our findings support the cognitive reserve hypothesis and have implications for future reserve-enhancing interventions.
As TeV gamma-ray astronomy progresses into the era of the Cherenkov Telescope Array (CTA), there is a desire for the capacity to instantaneously follow up on transient phenomena and continuously monitor gamma-ray flux at energies above $10^{12}\,\mathrm{eV}$. To this end, a worldwide network of Imaging Air Cherenkov Telescopes (IACTs) is required to provide triggers for CTA observations and complementary continuous monitoring. An IACT array sited in Australia would contribute significant coverage of the Southern Hemisphere sky. Here, we investigate the suitability of a small IACT array and how different design factors influence its performance. Monte Carlo simulations were produced based on the Small-Sized Telescope (SST) and Medium-Sized Telescope (MST) designs from CTA. Angular resolution improved with larger baseline distances up to 277 m between telescopes, and energy thresholds were lower at 1 000 m altitude than at 0 m. The ${\sim}300\,\mathrm{GeV}$ energy threshold of MSTs proved more suitable for observing transients than the ${\sim}1.2\,\mathrm{TeV}$ threshold of SSTs. An array of four MSTs at 1 000 m was estimated to give a 5.7$\sigma$ detection of an RS Ophiuchi-like nova eruption from a 4-h observation. We conclude that an array of four MST-class IACTs at an Australian site would ideally complement the capabilities of CTA.
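Detection significances of this kind are conventionally computed from on-source and off-source event counts; the Li & Ma (1983, Eq. 17) formula is the standard choice for IACT counting measurements. The sketch below is an assumption about the method (the abstract does not state the exact procedure), and the counts are illustrative:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, Eq. 17) detection significance for an on/off
    counting measurement. n_on and n_off are the event counts in the
    on-source and off-source regions; alpha is the ratio of on-source
    to off-source exposure. Both counts must be positive."""
    term_on = n_on * math.log((1 + alpha) / alpha * (n_on / (n_on + n_off)))
    term_off = n_off * math.log((1 + alpha) * (n_off / (n_on + n_off)))
    return math.sqrt(2.0 * (term_on + term_off))
```

For equal exposures (`alpha = 1`) and equal counts the significance is zero, as expected for a pure-background measurement; an excess of 100 events over a background of 100 yields roughly 5.8 sigma.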
Young people are the most vulnerable to suicidal behaviours yet the least likely to seek help. A more elaborate study of the intrinsic and extrinsic correlates of suicidal ideation and behaviours in representative youth populations, particularly amid ongoing population-level stressors, together with the identification of less stigmatising markers, is essential.
Methods
Participants (n = 2540, aged 15–25) were consecutively recruited from an ongoing large-scale household-based epidemiological youth mental health study in Hong Kong between September 2019 and 2021. Lifetime and 12-month prevalence of suicidal ideation, plan, and attempt were assessed, alongside suicide-related rumination, hopelessness and neuroticism, personal and population-level stressors, family functioning, cognitive ability, lifetime non-suicidal self-harm, 12-month major depressive disorder (MDD), and alcohol use.
Results
The 12-month prevalence of suicidal ideation, ideation-only (no plan or attempt), plan, and attempt was 20.0%, 15.4%, 4.6%, and 1.3%, respectively. Importantly, multivariable logistic regression findings revealed that suicide-related rumination was the only factor associated with all four suicidal outcomes (all p < 0.01). Among those with suicidal ideation (two-stage approach), intrinsic factors, including suicide-related rumination, poorer cognitive ability, and 12-month MDD, were specifically associated with suicide plan, while extrinsic factors, including coronavirus disease 2019 (COVID-19) stressors, poorer family functioning, and personal life stressors, as well as non-suicidal self-harm, were specifically associated with suicide attempt.
Conclusions
Suicide-related rumination, population-level COVID-19 stressors, and poorer family functioning may be important less-stigmatising markers for youth suicidal risks. The respective roles played by not only intrinsic but also extrinsic factors in suicide plan and attempt using a two-stage approach should be considered in future preventative intervention work.
Contrasting the well-described effects of early intervention (EI) services for youth-onset psychosis, the potential benefits of the intervention for adult-onset psychosis are uncertain. This paper aims to examine the effectiveness of EI on functioning and symptomatic improvement in adult-onset psychosis, and the optimal duration of the intervention.
Methods
360 psychosis patients aged 26–55 years were randomized to receive either standard care (SC, n = 120), or case management for 2 years (2-year EI, n = 120) or 4 years (4-year EI, n = 120), in a 4-year rater-masked, parallel-group, superiority, randomized controlled trial of treatment effectiveness (Clinicaltrials.gov: NCT00919620). Primary (i.e. social and occupational functioning) and secondary outcomes (i.e. positive and negative symptoms, and quality of life) were assessed at baseline, at 6 months, and yearly for 4 years.
Results
Compared with SC, patients with 4-year EI had better Role Functioning Scale (RFS) immediate [interaction estimate = 0.008, 95% confidence interval (CI) = 0.001–0.014, p = 0.02] and extended social network (interaction estimate = 0.011, 95% CI = 0.004–0.018, p = 0.003) scores. Specifically, these improvements were observed in the first 2 years. Compared with the 2-year EI group, the 4-year EI group had better RFS total (p = 0.01), immediate (p = 0.01), and extended social network (p = 0.05) scores at the fourth year. Meanwhile, the 4-year EI (p = 0.02) and 2-year EI (p = 0.004) groups had less severe symptoms than the SC group at the first year.
Conclusions
Specialized EI treatment for psychosis patients aged 26–55 should be provided for at least the initial 2 years of illness. Further treatment up to 4 years conferred little additional benefit in this age range over the course of the study.
We construct a double common factor model for projecting the mortality of a population, using as a reference the minimum death rate at each age among a large number of countries. In particular, the female and male minimum death rates, described as best-performance or best-practice rates, are first modelled by a common factor model structure with both common and sex-specific parameters. The differences between the death rates of the population under study and the best-performance rates are then modelled by another common factor model structure. An important result of using our proposed model is that the projected death rates of the population under consideration are coherent in the long term with the projected best-performance rates, which serve as a useful reference grounded in the collective experience of multiple countries. Our out-of-sample analysis shows that the new model has the potential to outperform some conventional approaches in mortality projection.
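As a rough sketch of the two-stage idea, each stage can be approximated by a one-factor, Lee-Carter-style fit: first to the log best-performance rates, then to the gap between a population's log rates and that frontier. The simplification to a single sex-free factor per stage, and the synthetic data, are illustrative assumptions, not the paper's full specification:

```python
import numpy as np

def common_factor_fit(log_rates):
    """One-factor (Lee-Carter-style) fit  log m(x,t) ~ a_x + b_x * k_t
    via SVD of the age-by-year matrix of centred log rates."""
    a = log_rates.mean(axis=1, keepdims=True)         # age pattern a_x
    U, s, Vt = np.linalg.svd(log_rates - a, full_matrices=False)
    b = U[:, 0] * s[0]                                # age loadings b_x
    k = Vt[0]                                         # period index k_t
    return a.ravel(), b, k

# Illustrative double structure: fit the best-performance (minimum) rates,
# then fit the gap between a population's rates and that frontier.
rng = np.random.default_rng(1)
ages, years = 10, 30
best = -5 + 0.05 * np.arange(ages)[:, None] - 0.02 * np.arange(years)[None, :]
pop = best + 0.3 + rng.normal(0, 0.01, (ages, years))
a1, b1, k1 = common_factor_fit(best)         # stage 1: frontier model
a2, b2, k2 = common_factor_fit(pop - best)   # stage 2: gap model
```

Projecting `k_t` of both stages forward (e.g. with a random walk with drift) and recombining gives population projections that cannot drift away from the projected frontier, which is the coherence property described above.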
A machine learning approach to zero-inflated Poisson (ZIP) regression is introduced to address the common difficulty arising from imbalanced financial data. The suggested ZIP model can be interpreted as an adaptive weight adjustment procedure that removes the need for post-modeling re-calibration and results in a substantial enhancement of predictive accuracy. Notwithstanding the increased complexity due to the expanded parameter set, we utilize cyclic coordinate descent optimization to implement the ZIP regression, with adjustments made to address saddle points. We also study how various approaches alleviate the potential drawbacks of incomplete exposures in insurance applications. The procedure is tested on real-life data, and we demonstrate a significant improvement in performance relative to other popular alternatives, which justifies our modeling techniques.
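A minimal sketch of ZIP estimation by cyclic coordinate descent follows, with a finite-difference Newton step per coordinate and a fallback step where the local curvature is non-positive (a crude stand-in for the saddle-point adjustments mentioned above; the data and parameterisation are illustrative, not the authors' implementation):

```python
import numpy as np

def zip_nll(params, X, y):
    """Negative log-likelihood of a zero-inflated Poisson model
    (constant log y! term dropped):
      P(y=0) = pi + (1 - pi) * exp(-mu),  P(y=k) = (1 - pi) * Poisson(k; mu)
    with log mu = X @ beta and logit pi = X @ gamma."""
    p = X.shape[1]
    beta, gamma = params[:p], params[p:]
    mu = np.exp(X @ beta)
    pi = 1.0 / (1.0 + np.exp(-(X @ gamma)))
    ll = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-mu)),
        np.log1p(-pi) + y * np.log(mu) - mu,
    )
    return -ll.sum()

def fit_zip_cd(X, y, sweeps=150, h=1e-4):
    """Cyclic coordinate descent: one finite-difference Newton step per
    coordinate per sweep, falling back to a small signed step when the
    local curvature is non-positive (saddle/flat regions)."""
    params = np.zeros(2 * X.shape[1])
    for _ in range(sweeps):
        for j in range(params.size):
            f0 = zip_nll(params, X, y)
            params[j] += h
            f_plus = zip_nll(params, X, y)
            params[j] -= 2 * h
            f_minus = zip_nll(params, X, y)
            params[j] += h                       # restore coordinate
            grad = (f_plus - f_minus) / (2 * h)
            curv = (f_plus - 2 * f0 + f_minus) / h ** 2
            if curv > 0:
                params[j] -= np.clip(grad / curv, -1.0, 1.0)
            else:
                params[j] -= 0.1 * np.sign(grad)
    return params

# Synthetic imbalanced data: ~27% structural zeros on top of Poisson counts
rng = np.random.default_rng(7)
n = 1000
x = rng.uniform(-1.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
mu = np.exp(X @ np.array([1.0, 0.5]))
pi = 1.0 / (1.0 + np.exp(-(X @ np.array([-1.0, 0.0]))))
y = np.where(rng.uniform(size=n) < pi, 0, rng.poisson(mu))
params = fit_zip_cd(X, y)   # first two entries estimate beta = (1.0, 0.5)
```

Dropping the `log y!` constant leaves the optimum unchanged; the clip on the Newton step keeps early iterations stable when the gradient is large.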
Numerous research studies have demonstrated an association of higher symptom severity and cognitive impairment with poorer social functioning in first-episode psychosis (FEP). By contrast, the influence of subjective experiences, such as social relatedness and self-beliefs, has received less attention. Consequently, a cohesive understanding of how these variables interact to influence social functioning is lacking.
Method
We used structural equation modeling to examine the direct and indirect relationships among neurocognition (processing speed), social cognition, symptoms, social relatedness (perceived social support and loneliness), and self-beliefs (self-efficacy and self-esteem) in 170 individuals with FEP.
Results
The final model yielded an acceptable fit (χ2 = 45.48, comparative fit index = 0.96; goodness of fit index = 0.94; Tucker–Lewis index = 0.94; root mean square error of approximation = 0.06) and explained 45% of the variance in social functioning. Negative symptoms, social relatedness, and self-beliefs exerted a direct effect on social functioning. Social relatedness partially mediated the impact of social cognition and negative symptoms on social functioning. Self-beliefs also mediated the relationship between social relatedness and social functioning.
Conclusions
The observed associations highlight the potential value of targeting social relatedness and self-beliefs to improve functional outcomes in FEP. Explanatory models of social functioning in FEP not accounting for social relatedness and self-beliefs might be overestimating the effect of the illness-related factors.
Background: In spring of 2019, 2 positive sputum cases of Pseudomonas aeruginosa in the cardiac critical care unit (CCU) were reported to the UFHJ infection prevention (IP) department. The initial 2 cases, detected within 3 days of each other, were followed shortly by a third case. Epidemiological evidence was initially consistent with a hospital-acquired infection (HAI): 2 of the 3 patients roomed next to each other, and all 3 patients were ventilated, 2 of whom shared the same respiratory therapist. However, no other changes in routine or equipment were noted.

The samples were cultured and processed using Illumina NGS technology, generating 1–2 million short (ie, 250-bp) reads across the P. aeruginosa genome. As an additional positive control, 8 P. aeruginosa NGS data sets, previously shown to be from a single outbreak in a UK facility, were included. Reads were mapped back to a reference sequence, and single-nucleotide polymorphisms (SNPs) between each sample and the reference were extracted. Genetic distances (ie, the number of unshared SNPs) between all UFHJ and UK samples were calculated. Genetic linkage was determined using hierarchical clustering, based on a commonly used threshold of 40 SNPs.

All UFHJ patient samples were separated by >18,000 SNPs, indicating genetically distinct samples from separate sources. In contrast, UK samples were separated from each other by <16 SNPs, consistent with genetic linkage and a single outbreak. Furthermore, the UFHJ samples were separated from the UK samples by >17,000 SNPs, indicating a lack of geographical distinction of the UFHJ samples (Fig. 1). These results demonstrated that while the initial epidemiological evidence pointed towards a single HAI, the high-precision and relatively inexpensive (<US$1500) NGS analysis conclusively demonstrated that all 3 CCU P. aeruginosa cases derived from separate origins.
The hospital avoided costly and invasive infection prevention interventions in an attempt to track down a single nonexistent source on the CCU, and no further cases were found. This finding supports the conclusion reached from the NGS that this represented a pseudo-outbreak. Furthermore, these genomes serve as an ongoing record of P. aeruginosa infection, providing even higher resolution for future cases. Our study supports the use of NGS technology to develop rational and data-driven strategies. Furthermore, the ability of NGS to discriminate between single-source and multiple-source outbreaks can prevent inaccurate classification and reporting of HAIs, avoiding unnecessary costs and damage to hospital reputations.
Funding: None
Disclosures: Susanna L. Lamers reports salary from BioInfoExperts and contract research for the NIH, the University of California - San Francisco, and UMASS Medical School.
Treatments for adults with post-traumatic stress disorder from childhood experiences (Ch-PTSD) that are both effective and well tolerated by patients need to be identified to improve outcomes for this population.
Aims
The purpose of this study was to compare the effectiveness of two trauma-focused treatments, imagery rescripting (ImRs) and eye movement desensitisation and reprocessing (EMDR), for treating Ch-PTSD.
Method
We conducted an international, multicentre, randomised clinical trial, recruiting adults with Ch-PTSD from childhood trauma that occurred before 16 years of age. Participants were randomised to treatment condition and assessed by blind raters at multiple time points. Participants received up to twelve 90-min sessions of either ImRs or EMDR, biweekly.
Results
A total of 155 participants were included in the final intent-to-treat analysis. Drop-out rates were low, at 7.7%. A generalised linear mixed model of repeated measures showed that observer-rated post-traumatic stress disorder (PTSD) symptoms significantly decreased for both ImRs (d = 1.72) and EMDR (d = 1.73) at the 8-week post-treatment assessment. Similar results were seen with secondary outcome measures and self-reported PTSD symptoms. There were no significant differences between the two treatments on any standardised measure at post-treatment and follow-up.
Conclusions
ImRs and EMDR treatments were found to be effective in treating PTSD symptoms arising from childhood trauma, and in reducing other symptoms such as depression, dissociation and trauma-related cognitions. The low drop-out rates suggest that the treatments were well tolerated by participants. The results from this study provide evidence for the use of trauma-focused treatments for Ch-PTSD.
Implementation of genome-scale sequencing in clinical care faces significant challenges: the technology is highly dimensional, with many kinds of potential results; results interpretation and delivery require expertise and coordination across multiple medical specialties; clinical utility may be uncertain; and there may be broader familial or societal implications beyond the individual participant. Transdisciplinary consortia and collaborative team science are well poised to address these challenges. However, the complex web of organizational, institutional, physical, environmental, technologic, and other political and societal factors that influence the effectiveness of consortia remains understudied. We describe our experience working in the Clinical Sequencing Evidence-Generating Research (CSER) consortium, a multi-institutional translational genomics consortium.
Methods:
A key aspect of the CSER consortium was the juxtaposition of site-specific measures with the need to identify consensus measures related to clinical utility and to create a core set of harmonized measures. During this harmonization process, we sought to minimize participant burden, accommodate project-specific choices, and use validated measures that allow data sharing.
Results:
Identifying platforms to ensure swift communication between teams and to manage materials and data was essential to our harmonization efforts. Funding agencies can help consortia by clarifying key study design elements across projects during the proposal preparation phase and by providing a framework for sharing data across participating projects.
Conclusions:
In summary, time and resources must be devoted to developing and implementing collaborative practices as preparatory work at the beginning of project timelines to improve the effectiveness of research consortia.
Induced abortion is an indicator of access to, and quality of, reproductive healthcare, but rates are largely unknown in women with schizophrenia.
Aims
We examined whether women with schizophrenia experience increased induced abortion compared with those without schizophrenia, and identified factors associated with induced abortion risk.
Method
In a population-based, repeated cross-sectional study (2011–2013), we compared women with and without schizophrenia in Ontario, Canada on rates of induced abortions per 1000 women and per 1000 live births. We then followed a longitudinal cohort of women with schizophrenia aged 15–44 years (n = 11 149) from 2011, using modified Poisson regression to identify risk factors for induced abortion.
Results
Women with schizophrenia had higher abortion rates than those without schizophrenia in all years (15.5–17.5 v. 12.8–13.6 per 1000 women; largest rate ratio, 1.33; 95% CI 1.16–1.54). They also had higher abortion ratios (592–736 v. 321–341 per 1000 live births; largest rate ratio, 2.25; 95% CI 1.96–2.59). Younger age (<25 years; adjusted relative risk (aRR), 1.84; 95% CI 1.39–2.44), multiparity (aRR 2.17, 95% CI 1.66–2.83), comorbid non-psychotic mental illness (aRR 2.15, 95% CI 1.34–3.46) and substance misuse disorders (aRR 1.85, 95% CI 1.47–2.34) were associated with increased abortion risk.
Conclusions
These results demonstrate vulnerability related to reproductive healthcare for women with schizophrenia. Evidence-based interventions to support optimal sexual health, particularly in young women, those with psychiatric and addiction comorbidity, and women who have already had a child, are warranted.
OBJECTIVES/SPECIFIC AIMS: The population of cancer survivors is rapidly growing in the United States. Long-term and late effects of cancer, combined with ongoing management of other chronic conditions, make cancer survivors particularly vulnerable to polypharmacy and its adverse effects. We examined patterns of prescription medication use and polypharmacy in a population-based sample of cancer survivors. METHODS/STUDY POPULATION: Using data from the Medical Expenditure Panel Survey (MEPS), we matched cancer survivors (n=5216) to noncancer controls (n=19,588) by age, sex, and survey year. We defined polypharmacy as using 5 or more unique medications. We also estimated the proportion of respondents prescribed specific medications within therapeutic classes and total prescription expenditures. RESULTS/ANTICIPATED RESULTS: A higher proportion of cancer survivors were prescribed 5 or more unique medications (64.0%, 95% CI 62.3%–65.8%) compared with noncancer controls (51.5%, 95% CI 50.4%–52.6%), including drugs with abuse potential. Across all therapeutic classes, a higher proportion of newly (≤1 year since diagnosis) and previously (>1 year since diagnosis) diagnosed survivors were prescribed medications compared with controls, with large differences in central nervous system agents (65.8% vs. 57.4% vs. 46.2%), psychotherapeutic agents (25.4% vs. 26.8% vs. 18.3%), and gastrointestinal agents (31.9% vs. 29.6% vs. 22.0%). Specifically, nearly 10% of cancer survivors were prescribed benzodiazepines and/or opioids compared with about 5% of controls. Survivors had more than double the prescription expenditures (median $1633 vs. $784 among noncancer controls). Findings persisted similarly across categories of age and comorbidity.
DISCUSSION/SIGNIFICANCE OF IMPACT: Cancer survivors were frequently prescribed a higher number of unique medications, including inappropriate medications and drugs with abuse potential, increasing the risk of adverse drug events, financial toxicity, poor adherence, and drug-drug interactions. Adolescent and young adult survivors appear to be at increased risk of polypharmacy.
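The polypharmacy definition used above (5 or more unique medications per respondent) reduces to counting distinct prescribed drugs per person; a minimal standard-library sketch with hypothetical fill records:

```python
from collections import defaultdict

POLYPHARMACY_THRESHOLD = 5  # 5+ unique medications, per the study definition

def polypharmacy_flags(prescriptions):
    """prescriptions: iterable of (person_id, drug_name) fill records.
    Returns {person_id: bool} for meeting the polypharmacy definition.
    Repeat fills of the same drug count once."""
    drugs = defaultdict(set)
    for person, drug in prescriptions:
        drugs[person].add(drug.strip().lower())  # normalise drug names
    return {p: len(d) >= POLYPHARMACY_THRESHOLD for p, d in drugs.items()}

# Hypothetical fill records: person 1 has 5 unique drugs, person 2 has 2
records = [
    (1, "Metformin"), (1, "metformin "), (1, "Lisinopril"),
    (1, "Atorvastatin"), (1, "Omeprazole"), (1, "Sertraline"),
    (2, "Lisinopril"), (2, "Lisinopril"), (2, "Aspirin"),
]
flags = polypharmacy_flags(records)
```

Counting unique agents rather than fills matters here: a survivor refilling one drug monthly is not polypharmacy, while five distinct agents once each is.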