Functional impairment is a major concern among those presenting to youth mental health services and can have a profound impact on long-term outcomes. Early recognition and prevention for those at risk of functional impairment are essential to guide effective youth mental health care. Yet identifying those at risk is challenging, which complicates the appropriate allocation of indicated prevention and early intervention strategies.
Methods
We developed a prognostic model to predict a young person’s social and occupational functional impairment trajectory over 3 months. The sample included 718 young people (12–25 years) engaged in youth mental health care. A Bayesian random effects model was designed using demographic and clinical factors, and model performance was evaluated on held-out test data via 5-fold cross-validation.
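As a minimal illustration of this evaluation scheme, the sketch below runs 5-fold cross-validation and scores each held-out fold by AUC; a logistic model and synthetic data stand in for the study's Bayesian random effects model and its actual clinical predictors.

```python
# Sketch of 5-fold cross-validated AUC evaluation. The data, labels, and
# classifier are illustrative stand-ins, not the study's model or variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(718, 8))        # 718 participants, 8 predictors
y = rng.integers(0, 2, size=718)     # synthetic impairment labels at 3 months

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                           random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))  # AUC on held-out fold

print(f"mean AUC: {np.mean(aucs):.2f} "
      f"(fold range {min(aucs):.2f}-{max(aucs):.2f})")
```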
Results
Eight factors were identified as the optimal set for prediction: employment, education, or training status; self-harm; psychotic-like experiences; physical health comorbidity; childhood-onset syndrome; illness type; clinical stage; and circadian disturbances. The model had an acceptable area under the curve (AUC) of 0.70 (95% CI, 0.56–0.81) overall, indicating its utility for predicting functional impairment over 3 months. For those with good baseline functioning, it showed excellent performance (AUC = 0.80, 0.67–0.79) for identifying individuals at risk of deterioration.
Conclusions
We developed and validated a prognostic model for youth mental health services to predict functional impairment trajectories over a 3-month period. This model serves as a foundation for further tool development and demonstrates its potential to guide indicated prevention and early intervention for enhancing functional outcomes or preventing functional decline.
Scholarly and practitioner interest in authentic leadership has grown at an accelerating rate over the last decade, resulting in a proliferation of publications across diverse social science disciplines. Accompanying this interest has been criticism of authentic leadership theory and the methods used to explore it. We conducted a systematic review of 303 scholarly articles published from 2010 to 2023 to critically assess the conceptual and empirical strengths and limitations of this literature and map the nomological network of the authentic leadership construct. Results indicate that much of the extant research does not follow best practices in research design and analysis. Based on these findings, we propose an agenda for advancing authentic leadership theory and research that embraces a signaling theory perspective.
In response to the COVID-19 pandemic, we implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain restraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
Palmer amaranth (Amaranthus palmeri S. Watson, AMAPA) is one of the most troublesome weeds in North America due to its rapid growth rate, substantial seed production, competitiveness and the evolution of herbicide-resistant populations. Though frequently encountered in the South, Midwest, and Mid-Atlantic regions of the United States, A. palmeri was recently identified in soybean [Glycine max (L.) Merr.] fields in Genesee, Orange, and Steuben counties, NY, where glyphosate was the primary herbicide for in-crop weed control. This research, conducted in 2023, aimed to (1) describe the dose response of three putative resistant NY A. palmeri populations to glyphosate, (2) determine their mechanisms of resistance, and (3) assess their sensitivity to other postemergence herbicides commonly used in NY crop production systems. Based on the effective dose necessary to reduce aboveground biomass by 50% (ED50), the NY populations were 42 to 67 times more resistant to glyphosate compared with a glyphosate-susceptible population. Additionally, the NY populations had elevated EPSPS gene copy numbers ranging from 25 to 135 located within extrachromosomal circular DNA (eccDNA). Label rate applications of Weed Science Society of America (WSSA) Group 2 herbicides killed up to 42% of the NY populations of A. palmeri. Some variability was observed among populations in response to WSSA Group 5 and 27 herbicides. All populations were effectively controlled by labeled rates of herbicides belonging to WSSA Groups 4, 10, 14, and 22. Additional research is warranted to confirm whether NY populations have evolved multiple resistance to herbicides within other WSSA groups and to develop effective A. palmeri management strategies suitable for NY crop production.
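For readers unfamiliar with ED50 estimation, the sketch below fits a log-logistic dose-response curve to illustrative biomass data and reads off the dose that halves aboveground biomass; the functional form, doses, and responses are assumptions for illustration, not the study's fitted model or data.

```python
# Hedged sketch of ED50 estimation from a dose-response experiment.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, upper, ed50, slope):
    """Biomass vs. herbicide dose (lower asymptote fixed at zero)."""
    return upper / (1.0 + (dose / ed50) ** slope)

dose = np.array([0.1, 0.25, 0.5, 1, 2, 4, 8])    # x label rate (illustrative)
biomass = np.array([98, 95, 80, 55, 30, 12, 5])  # % of untreated check

params, _ = curve_fit(log_logistic, dose, biomass, p0=[100, 1.0, 1.0])
upper, ed50, slope = params
print(f"ED50 = {ed50:.2f} x label rate")
# Resistance ratio = ED50(putative resistant) / ED50(susceptible population)
```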
Accelerating COVID-19 Treatment Interventions and Vaccines (ACTIV) was initiated by the US government to rapidly develop and test vaccines and therapeutics against COVID-19 in 2020. The ACTIV Therapeutics-Clinical Working Group selected ACTIV trial teams and clinical networks to expeditiously develop and launch master protocols based on therapeutic targets and patient populations. The suite of clinical trials was designed to collectively inform therapeutic care for COVID-19 outpatient, inpatient, and intensive care populations globally. In this report, we highlight challenges, strategies, and solutions around clinical protocol development and regulatory approval to document our experience and propose plans for future similar healthcare emergencies.
The aim of this study was to explore the associations between diet quality, socio-demographic measures, smoking, and weight status in a large, cross-sectional cohort of adults living in Yorkshire and Humber, UK. Data from 43,023 participants aged over 16 years in the Yorkshire Health Survey, 2nd wave (2013–2015) were collected on diet quality, socio-demographic measures, smoking, and weight status. Diet quality was assessed using a brief, validated tool. Associations between these variables were assessed using multiple regression methods. Split-sample cross-validation was utilised to establish model portability. Observed patterns in the sample showed that the greatest substantive differences in diet quality were between females and males (3.94 points; P < 0.001) and non-smokers versus smokers (4.24 points; P < 0.001), with higher diet quality scores observed in females and non-smokers. Deprivation, employment status, age, and weight status categories were also associated with diet quality. Greater diet quality scores were observed in those with lower levels of deprivation, those engaged in sedentary occupations, older people, and those in a healthy weight category. Cross-validation procedures revealed that the model exhibited good transferability properties. Inequalities in patterns of diet quality in the cohort were consistent with those indicated by the findings of other observational studies. The findings indicate population subgroups that are at higher risk of dietary-related ill health due to poor quality diet and provide evidence for the design of targeted national policy and interventions to prevent dietary-related ill health in these groups. The findings support further research exploring inequalities in diet quality in the population.
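The split-sample cross-validation used to establish model portability can be sketched as follows; the variable names, effect sizes, and data below are illustrative, not the survey's actual fields or estimates.

```python
# Sketch of split-sample cross-validation for a multiple regression of
# diet quality on covariates: fit in one random half, score the other.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 43023
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "non_smoker": rng.integers(0, 2, n),
    "age": rng.integers(16, 90, n),
    "deprivation": rng.normal(size=n),
})
df["diet_quality"] = (3.94 * df["female"] + 4.24 * df["non_smoker"]
                      + 0.05 * df["age"] - df["deprivation"]
                      + rng.normal(scale=5, size=n))   # synthetic outcome

train, test = train_test_split(df, test_size=0.5, random_state=1)
X_cols = ["female", "non_smoker", "age", "deprivation"]
model = LinearRegression().fit(train[X_cols], train["diet_quality"])
print("held-out R^2:", round(r2_score(test["diet_quality"],
                                      model.predict(test[X_cols])), 3))
```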
Background: External comparisons of antimicrobial use (AU) may be more informative if adjusted for encounter characteristics. Optimal methods to define input variables for encounter-level risk-adjustment models of AU are not established.
Methods: This retrospective analysis of electronic health record data included 50 US hospitals in 2020–2021. We used NHSN definitions for all antibacterial days of therapy (DOT), including adult and pediatric encounters with at least 1 day present in inpatient locations. We assessed 4 methods to define input variables: 1) diagnosis-related group (DRG) categories by Yu et al.; 2) adjudicated Elixhauser comorbidity categories by Goodman et al.; 3) all Clinical Classification Software Refined (CCSR) diagnosis and procedure categories; and 4) adjudicated CCSR categories in which codes not appropriate for AU risk-adjustment were excluded by expert consensus, requiring review of 867 codes over 4 months to attain consensus. Data were split randomly, stratified by bed size, as follows: 1) a training dataset including two-thirds of encounters among two-thirds of hospitals; 2) an internal testing set including the remaining one-third of encounters within training hospitals; and 3) an external testing set including the remaining one-third of hospitals. We used a gradient-boosted machine (GBM) tree-based model and a two-stage approach to first identify encounters with zero DOT, then estimate DOT among those with >0.5 probability of receiving antibiotics. Accuracy was assessed using mean absolute error (MAE) in the testing datasets. Correlation plots compared model estimates with observed DOT in the testing datasets. The top 20 most influential variables were defined using modeled variable importance.
Results: Our datasets included 629,445 training, 314,971 internal testing, and 419,109 external testing encounters. Demographic data included 41% male, 59% non-Hispanic White, 25% non-Hispanic Black, 9% Hispanic, and 5% pediatric encounters. DRG was missing in 29% of encounters. MAE was lower in pediatric than in adult encounters and lowest for models incorporating CCSR inputs (Figure 1). Performance in internal and external testing was similar, though the Goodman/Elixhauser variable strategies were less accurate in external testing and underestimated long-DOT outliers (Figure 2). Agnostic and adjudicated CCSR model estimates were highly correlated, and their influential-variable lists were similar (Figure 3).
Conclusion: Larger numbers of CCSR diagnosis and procedure inputs improved risk-adjustment model accuracy compared with prior strategies. Variable importance and accuracy were similar for the agnostic and adjudicated approaches. However, maintaining adjudications by experts would require significant time and could introduce personal bias. If these findings are confirmed, the need for expert adjudication of input variables should be reconsidered.
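A minimal sketch of the two-stage ("hurdle") approach described in the Methods, assuming scikit-learn gradient boosting as a stand-in for the GBM used in the study; the features and DOT outcomes below are synthetic, not the study's encounter data.

```python
# Stage 1 classifies encounters with zero antibiotic days of therapy (DOT);
# stage 2 regresses DOT only where predicted probability of use exceeds 0.5.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))               # encoded CCSR/DRG-style inputs
dot = np.where(rng.random(5000) < 0.4, 0,     # ~40% of encounters get no DOT
               rng.poisson(5, 5000))

clf = GradientBoostingClassifier().fit(X, dot > 0)        # stage 1: any DOT?
reg = GradientBoostingRegressor().fit(X[dot > 0], dot[dot > 0])  # stage 2

p_use = clf.predict_proba(X)[:, 1]
est = np.where(p_use > 0.5, reg.predict(X), 0.0)  # zero DOT below threshold
print("MAE:", round(mean_absolute_error(dot, est), 2))
```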
Disclosure: Elizabeth Dodds Ashley: Advisor- HealthTrackRx. David J Weber: Consultant on vaccines: Pfizer; DSMB chair: GSK; Consultant on disinfection: BD, GAMA, PDI, Germitec
Children hospitalised with severe malnutrition have high mortality and readmission rates post-discharge. Current milk-based formulations target restoring ponderal growth but not the modification of gut barrier integrity or microbiome, which increases the risk of gram-negative sepsis and poor outcomes. We propose that legume-based feeds rich in fermentable carbohydrates will promote better gut health and improve overall outcomes. We conducted an open-label phase II trial at Mbale and Soroti Regional Referral Hospitals, Uganda, involving 160 children aged 6 months to 5 years with severe malnutrition (mid-upper arm circumference (MUAC) < 11·5 cm and/or nutritional oedema). Children were randomised to a lactose-free, chickpea-enriched legume paste feed (LF) (n 80) v. WHO standard F75/F100 feeds (n 80). Co-primary outcomes were change in MUAC and mortality to day 90. Secondary outcomes included weight gain (> 5 g/kg/d), de novo development of diarrhoea, and time to diarrhoea and oedema resolution. Day 90 MUAC increase was marginally lower in the LF v. WHO arm (1·1 cm (interquartile range (IQR) 1·1) v. 1·4 cm (IQR 1·40), P = 0·09); day 90 mortality was similar (11/80 (13·8 %) v. 12/80 (15 %), respectively; OR 0·91 (95 % CI 0·40, 2·07), P = 0·83). There were no differences in any of the other secondary outcomes. Owing to initial poor palatability of the LF, ten children switched to WHO feeds. Per-protocol analysis indicated a trend to lower day 90 mortality and readmission rates with the LF (6/60 (10 %) and 2/60 (3 %)) v. WHO feeds (12/71 (17·5 %) and 4/71 (6 %)). Further refinement of the LF and clinical trials are warranted, given the poor outcomes in children with severe malnutrition.
The Minnesota Longitudinal Study of Risk and Adaptation (MLSRA) is a landmark prospective, longitudinal study of human development focused on a sample of mothers experiencing poverty and their firstborn children. Although the MLSRA pioneered a number of important topics in the area of social and emotional development, it began with the more specific goal of examining the antecedents of child maltreatment. From that foundation and for more than 40 years, the study has produced a significant body of research on the origins, sequelae, and measurement of childhood abuse and neglect. The principal objectives of this report are to document the early history of the MLSRA and its contributions to the study of child maltreatment and to review and summarize results from the recently updated childhood abuse and neglect coding of the cohort, with particular emphasis on findings related to adult adjustment. While doing so, we highlight key themes and contributions from Dr Dante Cicchetti’s body of research and developmental psychopathology perspective to the MLSRA, a project launched during his tenure as a graduate student at the University of Minnesota.
We demonstrate that a ~20 km long valley glacier in the St. Elias Mountains, Yukon, can experience both partial and full surges, likely controlled by the presence of a topographic constriction and the formation and drainage of supraglacial lakes. Based on analysis of air photos, satellite images and field observations since the 1940s, we identify a full surge of ‘Little Kluane Glacier’ from 2013 to 2018, and a partial surge of just the upper north arm between 1963 and 1972. Repeat digital elevation models and velocity profiles indicate that the recent surge initiated from the upper north arm in 2013, which developed into a full surge of the main trunk from 2017 to 2018 with peak velocities of ~3600 m a⁻¹ and frontal advance of ~1.7 km from May to September 2018. In 2016, a mass movement from the north arm to the main trunk generated a surface depression in a region immediately downstream of a topographic constriction, which promoted the formation and rapid drainage of supraglacial lakes to the glacier bed, and likely established the conditions to propel the initial partial surge into a full surge. Our results underscore the complex interplay between glacier geometry, surface hydrology and topography required to drive full surges of this glacier.
Aviation passenger screening has been used worldwide to mitigate the translocation risk of SARS-CoV-2. We present a model that evaluates factors in screening strategies used in air travel and assess their relative sensitivity and importance in identifying infectious passengers. We use adapted Monte Carlo simulations to produce hypothetical disease timelines for the Omicron variant of SARS-CoV-2 for travelling passengers. Screening strategy factors assessed include having one or two RT-PCR and/or antigen tests prior to departure and/or post-arrival, and quarantine length and compliance upon arrival. One or more post-arrival tests and high quarantine compliance were the most important factors in reducing pathogen translocation. Screening that combines quarantine and post-arrival testing can shorten the length of quarantine for travellers; as the time between the first and second post-arrival tests increases, the variability of post-arrival RT-PCR and antigen test sensitivity decreases while mean sensitivity increases. This study provides insight into the role various screening strategy factors have in preventing the translocation of infectious diseases and a flexible framework adaptable to other existing or emerging diseases. Such findings may help in public health policy and decision-making in present and future evidence-based practices for passenger screening and pandemic preparedness.
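The adapted Monte Carlo approach can be sketched as follows: draw a hypothetical infection timeline per traveller, then check whether a given screening strategy catches them while detectable. All distributions, timings, and test sensitivities below are illustrative assumptions, not the study's parameters.

```python
# Hedged Monte Carlo sketch of passenger screening for an airborne pathogen.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
infection_to_detectable = rng.lognormal(mean=1.0, sigma=0.4, size=N)  # days
detectable_duration = rng.uniform(5, 10, size=N)                      # days
time_of_flight = rng.uniform(0, 12, size=N)      # days after infection

def caught(test_day, sensitivity):
    """True if the test falls in the detectable window and the test fires."""
    in_window = ((test_day >= infection_to_detectable) &
                 (test_day <= infection_to_detectable + detectable_duration))
    return in_window & (rng.random(N) < sensitivity)

pre_departure = caught(time_of_flight - 1, sensitivity=0.70)  # antigen test
post_arrival = caught(time_of_flight + 1, sensitivity=0.95)   # RT-PCR test
missed = ~(pre_departure | post_arrival)
print("share of simulated travellers missed:", round(missed.mean(), 3))
```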
Mild traumatic brain injury (mTBI), depression, and posttraumatic stress disorder (PTSD) are a notable triad in Operation Enduring Freedom, Operation Iraqi Freedom, and Operation New Dawn (OEF/OIF/OND) Veterans. With the comorbidity of depression and PTSD in Veterans with mTBI histories, and their role in exacerbating cognitive and emotional dysfunction, interventions addressing cognitive and psychiatric functioning are critical. Compensatory Cognitive Training (CCT) is associated with improvements in areas such as prospective memory, attention, and executive functioning and has also yielded small-to-medium treatment effects on PTSD and depressive symptom severity. Identifying predictors of psychiatric symptom change following CCT would further inform the interventional approach. We sought to examine neuropsychological predictors of PTSD and depressive symptom improvement in Veterans with a history of mTBI who received CCT.
Participants and Methods:
Thirty-seven OEF/OIF/OND Veterans with mTBI history and cognitive complaints received 10 weekly 120-minute CCT group sessions as part of a clinical trial. Participants completed a baseline neuropsychological assessment including tests of premorbid functioning, attention/working memory, processing speed, verbal learning/memory, and executive functioning, and completed psychiatric symptom measures (PTSD Checklist-Military Version; Beck Depression Inventory-II) at baseline, post-treatment, and 5-week follow-up. Paired samples t-tests were used to examine statistically significant change in PTSD (total and symptom cluster scores) and depressive symptom scores over time. Pearson correlations were calculated between neuropsychological scores and PTSD and depressive symptom change scores at post-treatment and follow-up. Neuropsychological measures identified as significantly correlated with psychiatric symptom change scores (p < .05) were entered as independent variables in separate multiple linear regression analyses to predict symptom change at post-treatment and follow-up.
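Under assumed variable names, this analysis pipeline reads roughly as follows; the data and measures are synthetic stand-ins for the study's actual scores.

```python
# Sketch: paired t-test for symptom change, Pearson correlation between a
# baseline neuropsychological score and change, then regression on the
# significant correlates (p < .05). Illustrative data only.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 37
baseline_pcl = rng.normal(50, 10, n)            # PTSD Checklist at baseline
post_pcl = baseline_pcl - rng.normal(5, 8, n)   # post-treatment scores
category_fluency = rng.normal(10, 3, n)         # D-KEFS scaled score

t, p = stats.ttest_rel(baseline_pcl, post_pcl)  # change over time
change = baseline_pcl - post_pcl
r, p_r = stats.pearsonr(category_fluency, change)

if p_r < .05:  # enter significant correlates as regression predictors
    ols = sm.OLS(change, sm.add_constant(category_fluency)).fit()
    print(ols.params)
print(f"paired t = {t:.2f}, p = {p:.3f}; r = {r:.2f}")
```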
Results:
Over 50% of CCT participants had clinically meaningful improvement in depressive symptoms (>17.5% score reduction) and over 20% had clinically meaningful improvement in PTSD symptoms (>10-point improvement) at post-treatment and follow-up. Examination of PTSD symptom cluster scores (re-experiencing, avoidance/numbing, and arousal) revealed a statistically significant improvement in avoidance/numbing at follow-up. Bivariate correlations indicated that worse baseline performance on D-KEFS Category Fluency was moderately associated with PTSD symptom improvement at post-treatment. Worse performance on both D-KEFS Category Fluency and Category Switching Accuracy was associated with improvement in depressive symptoms at post-treatment and follow-up. Worse performance on D-KEFS Trail Making Test Switching was associated with improvement in depressive symptoms at follow-up. Subsequent regression analyses revealed that worse processing speed and worse aspects of executive functioning at baseline significantly predicted depressive symptom improvement at post-treatment and follow-up.
Conclusions:
Worse baseline performances on tests of processing speed and aspects of executive functioning were significantly associated with improvements in PTSD and depressive symptoms during the trial. Our results suggest that cognitive training may bolster skills that are helpful for PTSD and depressive symptom reduction and that those with worse baseline functioning may benefit more from treatment because they have more room to improve. Although CCT is not a primary treatment for PTSD or depressive symptoms, our results support consideration of including CCT in hybrid treatment approaches. Further research should examine these relationships in larger samples.
I-InTERACT-North is a stepped-care telepsychological parenting intervention designed to promote positive parenting skills and improve child behaviour. Initially developed for children with traumatic brain injury, the program showed efficacy in our pilot study for increasing positive parenting skills and reducing problem behaviours in children with early brain injury (e.g., stroke, encephalopathy). Recently, the program has expanded to include children with neurodevelopmental disorders, including Autism Spectrum Disorder. Although positive parenting programs (e.g., Parent-Child Interaction Therapy) can be effective for autistic children, it is unknown whether the goals most important to these families can be addressed with the I-InTERACT-North program. An examination of suitability and preliminary efficacy was conducted.
Participants and Methods:
Parent participants of autistic children between 3 and 9 years (n = 20) were recruited from the neonatal, neurology, psychiatry, or cardiology clinics at The Hospital for Sick Children and the Province of Ontario Neurodevelopmental Disorders (POND) Network. Top problems, as reported by parents at baseline, were analyzed qualitatively through a cross-case analysis procedure to identify common themes and facilitate generalizations surrounding concerning behaviours. Parent-reported intensity of their children’s top problem behaviours, rated on a scale from 1 (“not a problem”) to 8 (“huge problem”), was quantified. To explore preliminary program efficacy, t-tests were used to compare pre- and post-intervention problems and intensity on the Eyberg Child Behavior Inventory (ECBI) (n = 16).
Results:
A total of 56 top problem data units were examined, with convergent thematic coding on 53 of 56 (94.6% inter-coder reliability). Four prevalent, high-agreement themes were retained: emotion dysregulation (19; 33.9%), non-compliance (12; 21.4%), sibling conflict (7; 12.5%), and inattention and hyperactivity (7; 12.5%). Average problem intensity for these themes ranged from 5.85 to 6.53 (where 8 is greatest impairment), with emotion dysregulation rated most intense (6.53). Scores on the ECBI were lower post-intervention (Intensity scale: M = 59.06, SD = 8.1; Problem scale: M = 60.69, SD = 11.5) compared with pre-intervention (Intensity scale: M = 61.19, SD = 10.4; Problem scale: M = 64.31, SD = 11.7), but the small sample size precluded detecting statistical significance (p = .16 and .07, respectively).
Conclusions:
Thematic analysis of top problems identified by parents of autistic children suggested that concerns were transdiagnostic in nature and represented common treatment targets of the I-InTERACT-North program. Though challenging behaviours related to restricted interests or repetitive behaviours may exist in our sample, parental behavioural goals appeared to align with the types of concerns traditionally raised by participants of the program, supporting a transdiagnostic approach. Preliminary data point to positive treatment outcomes in these families.
The Consensus Reporting Items for Studies in Primary care (CRISP) provides a new research reporting guideline to meet the needs of the producers and users of primary care (PC) research. Developed through an iterative program of research, including investigators, practicing clinicians, patients, community representatives, and educators, the CRISP Checklist guides PC researchers across the spectrum of research methods, study designs, and topics. This pilot test included a variety of team members using the CRISP Checklist for writing, revising, and reviewing PC research reports. All or most of the 15 participants reported that the checklist was easy to use, improved research reports, and should be recommended by PC research journals. The checklist is adaptable to different study types; not all items apply to all reports. The CRISP Checklist can help meet the needs of PC research when used in parallel with existing guidelines that focus on specific methods and limited topics.
State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing in order to protect the public. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project that identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.
Prior trials suggest that intravenous racemic ketamine is highly effective for treatment-resistant depression (TRD), but phase 3 trials of racemic ketamine are needed.
Aims
To assess the acute efficacy and safety of a 4-week course of subcutaneous racemic ketamine in participants with TRD. Trial registration: ACTRN12616001096448 at www.anzctr.org.au.
Method
This phase 3, double-blind, randomised, active-controlled multicentre trial was conducted at seven mood disorders centres in Australia and New Zealand. Participants received twice-weekly subcutaneous racemic ketamine or midazolam for 4 weeks. Initially, the trial tested fixed-dose ketamine 0.5 mg/kg versus midazolam 0.025 mg/kg (cohort 1). Dosing was revised, after a Data Safety Monitoring Board recommendation, to flexible-dose ketamine 0.5–0.9 mg/kg or midazolam 0.025–0.045 mg/kg, with response-guided dosing increments (cohort 2). The primary outcome was remission (Montgomery-Åsberg Depression Rating Scale score ≤10) at the end of week 4.
Results
The final analysis (those who received at least one treatment) comprised 68 participants in cohort 1 (fixed dose) and 106 in cohort 2 (flexible dose). Ketamine was more efficacious than midazolam in cohort 2 (remission rate 19.6% v. 2.0%; OR = 12.1, 95% CI 2.1–69.2, P = 0.005), but not in cohort 1 (remission rate 6.3% v. 8.8%; OR = 1.3, 95% CI 0.2–8.2, P = 0.76). Ketamine was well tolerated. Acute adverse effects (psychotomimetic effects, blood pressure increases) resolved within 2 h.
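As a rough arithmetic check of the cohort 2 result, the odds ratio implied by the quoted remission rates can be recomputed directly; the small discrepancy from the reported OR = 12.1 reflects rounding of the underlying counts to one-decimal percentages.

```python
# Odds ratio from the quoted remission rates: OR = odds(ketamine)/odds(midazolam).
p_ket, p_mid = 0.196, 0.020
odds_ratio = (p_ket / (1 - p_ket)) / (p_mid / (1 - p_mid))
print(round(odds_ratio, 1))   # -> 11.9, consistent with the reported 12.1
```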
Conclusions
Adequately dosed subcutaneous racemic ketamine was efficacious and safe in treating TRD over a 4-week treatment period. The subcutaneous route is practical and feasible.
Rapid antigen detection tests (Ag-RDT) for SARS-CoV-2 with emergency use authorization generally include a condition of authorization to evaluate the test’s performance in asymptomatic individuals when used serially. We aim to describe a novel study design that was used to generate regulatory-quality data to evaluate the serial use of Ag-RDT in detecting SARS-CoV-2 among asymptomatic individuals.
Methods:
This prospective cohort study used a siteless, digital approach to assess longitudinal performance of Ag-RDT. Individuals over 2 years old from across the USA with no reported COVID-19 symptoms in the 14 days prior to study enrollment were eligible to enroll in this study. Participants throughout the mainland USA were enrolled through a digital platform between October 18, 2021 and February 15, 2022. Participants were asked to test using Ag-RDT and molecular comparators every 48 hours for 15 days. Enrollment demographics, geographic distribution, and SARS-CoV-2 infection rates are reported.
Key Results:
A total of 7361 participants enrolled in the study, and 492 participants tested positive for SARS-CoV-2, including 154 who were asymptomatic and tested negative to start the study. This exceeded the initial enrollment goal of 60 positive participants. We enrolled participants from 44 US states, and the geographic distribution of participants shifted in accordance with changing COVID-19 prevalence nationwide.
Conclusions:
The digital, siteless approach employed in the “Test Us At Home” study enabled rapid, efficient, and rigorous evaluation of rapid diagnostics for COVID-19 and can be adapted across research disciplines to optimize study enrollment and accessibility.