Despite considerable preclinical evidence, clinical trials assessing the effects of probiotics on individuals with major depressive disorder (MDD) are scarce. This study aimed to provide further evidence of the acceptability, tolerability and putative efficacy of probiotics in this patient group and to improve our understanding of the underlying mechanisms of action.
This double-blind randomised placebo-controlled pilot and mechanistic trial investigated the effects of an 8-week adjunctive multi-strain probiotic intervention in adults with MDD taking antidepressants. Psychiatric data and stool and blood samples were collected at baseline, week 4 and week 8. A computer-based emotion recognition task was also administered. Stool samples from 25 matched healthy controls were also obtained.
Forty-nine participants, randomised to probiotic (n = 24) or placebo (n = 25), were included in intent-to-treat analyses. Standardised effect sizes (SES) from linear mixed models demonstrated that the probiotic group attained greater improvements in depressive (HAMD week 4: SES [95% CI] = 0.70 [0.01, 0.98]; IDS week 8: SES [95% CI] = 0.64 [0.03, 0.87]) and anxiety symptoms (HAMA week 4: SES [95% CI] = 0.67 [0.00, 0.95]; week 8: SES [95% CI] = 0.79 [0.06, 1.05]), compared with the placebo group. Attrition was 8% (n = 3 placebo, n = 1 probiotic), adherence was 97.2% and there were no serious adverse reactions. The probiotic modified the composition of the faecal microbiota by normalising richness and diversity towards healthy control levels. The probiotic also increased levels of specific taxa, including Bacillaceae (FDR p < 0.05), which correlated with reductions in anxiety scores (FDR p < 0.05). There was no impact of treatment on levels of inflammatory cytokines (CRP, TNFα, IL-1β, IL-6, IL-17) or BDNF. However, probiotics showed a tendency to increase positive affective bias and improved the accuracy of recognition of all emotions, except sadness.
Compared to placebo, the probiotic group had greater improvement in depressive and anxiety scores, from as early as 4 weeks. The acceptability, tolerability and estimated effect sizes on key clinical outcomes are promising and encourage further investigation of this probiotic as add-on treatment in MDD. The beneficial effects of probiotics in this patient group may be partially mediated by modification of the composition of the gut microbiota and improvement of affective biases, inherent to depressive disorders.
Climate services (CS) and agricultural advisory services (AAS) have the potential to play synergistic roles in helping farmers manage climate-related risk, provided they are integrated. For information and communication technology (ICT)-enabled, climate-informed AAS to contribute towards transformation, the focus must shift from scaling access to scaling impact. With expanding rural ICT capacity and mobile phone penetration, digital innovation brings significant opportunities to improve access to services. Achieving impact requires the following actions: building farmers’ capacity and voice; employing a diverse delivery strategy for CS that exploits digital innovation; bundling CS, agri-advisories, and other services; investing in institutional capacity; and embedding services in a sustainable and enabling environment in terms of policy, governance, and resourcing. Recent experiences in several countries demonstrate how well-targeted investments can alleviate constraints and enhance the impact of climate-informed AAS.
Adolescent major depressive disorder (MDD) is associated with disrupted processing of emotional stimuli and difficulties in cognitive reappraisal. Little is known, however, about how current pharmacotherapies act to modulate the neural mechanisms underlying these key processes. The current study therefore investigated the neural effects of fluoxetine on emotional reactivity and cognitive reappraisal in adolescent depression.
Thirty-one adolescents with MDD were randomised to acute fluoxetine (10 mg) or placebo. Seventeen healthy adolescents were also recruited but did not receive any treatment for ethical reasons. During functional magnetic resonance imaging (fMRI), participants viewed aversive images and were asked to either experience naturally the emotional state elicited (‘Maintain’) or to reinterpret the content of the pictures to reduce negative affect (‘Reappraise’). Significant activations were identified using whole-brain analysis.
No significant group differences were seen when comparing Reappraise and Maintain conditions. However, when compared to healthy controls, depressed adolescents on placebo showed reduced visual activation to aversive pictures irrespective of the condition. The depressed adolescent group on fluoxetine showed the opposite pattern, i.e. increased visuo-cerebellar activity in response to aversive pictures, when compared to depressed adolescents on placebo.
These data suggest that depression in adolescence may be associated with reduced visual processing of aversive imagery and that fluoxetine may act to reduce avoidance of such cues. This could reflect a key mechanism whereby depressed adolescents engage with negative cues previously avoided. Future research combining fMRI with eye-tracking is nonetheless needed to further clarify these effects.
While adolescent-onset schizophrenia (ADO-SCZ) and adolescent-onset bipolar disorder with psychosis (psychotic ADO-BPD) present a more severe clinical course than their adult forms, their pathophysiology is poorly understood. Here, we study potentially state- and trait-related white matter diffusion-weighted magnetic resonance imaging (dMRI) abnormalities along the adolescent-onset psychosis continuum to address this need.
Forty-eight individuals with ADO-SCZ (20 female/28 male), 15 individuals with psychotic ADO-BPD (7 female/8 male), and 35 healthy controls (HCs, 18 female/17 male) underwent dMRI and clinical assessments. Maps of extracellular free-water (FW) and fractional anisotropy of cellular tissue (FAT) were compared between individuals with psychosis and HCs using tract-based spatial statistics and FSL's Randomise. FAT and FW values were extracted, averaged across all voxels that demonstrated group differences, and then utilized to test for the influence of age, medication, age of onset, duration of illness, symptom severity, and intelligence.
Individuals with adolescent-onset psychosis exhibited pronounced FW and FAT abnormalities compared to HCs. FAT reductions were spatially more widespread in ADO-SCZ. FW increases, however, were only present in psychotic ADO-BPD. In HCs, but not in individuals with adolescent-onset psychosis, FAT was positively related to age.
We observe evidence for cellular (FAT) and extracellular (FW) white matter abnormalities in adolescent-onset psychosis. Although cellular white matter abnormalities were more prominent in ADO-SCZ, such alterations may reflect a shared trait, i.e. neurodevelopmental pathology, present across the psychosis spectrum. Extracellular abnormalities were evident in psychotic ADO-BPD, potentially indicating a more dynamic, state-dependent brain reaction to psychosis.
As part of a quality improvement project beginning in October 2011, our centre introduced changes to reduce radiation exposure during paediatric cardiac catheterisations. This led to significant initial decreases in radiation to patients. Starting in April 2016, we sought to determine whether these initial reductions were sustained.
After a 30-day trial period, we implemented (1) weight-based reductions in preset frame rates for fluoroscopy and angiography, (2) increased use of collimators and safety shields, (3) utilisation of stored fluoroscopy and virtual magnification, and (4) hiring of a dedicated radiation technician. We collected patient weight (kg), total fluoroscopy time (min), and procedure radiation dosage (cGy-cm2) for cardiac catheterisations between October 2011 and September 2019.
A total of 1889 procedures were evaluated (196 pre-intervention, 303 in the post-intervention time period, and 1400 in the long-term group). Fluoroscopy times (18.3 ± 13.6 pre; 19.8 ± 14.1 post; 17.1 ± 15.1 long-term, p = 0.782) were not significantly different between the three groups. Patient mean radiation dose per kilogram decreased significantly after the initial quality improvement intervention (39.7% reduction, p = 0.039) and was sustained over the long term (p = 0.043). Provider radiation exposure was also significantly decreased from the onset of this project through the long-term period (overall decrease of 73%, p < 0.01) despite several changes in the interventional cardiologists who made up the team over this time period.
Introduction of technical and clinical practice changes can result in a significant reduction in radiation exposure for patients and providers in a paediatric cardiac catheterisation laboratory. These reductions can be maintained over the long term.
Medically unexplained symptoms, otherwise referred to as persistent physical symptoms (PPS), are debilitating to patients. As many specific PPS syndromes share common behavioural, cognitive, and affective influences, transdiagnostic treatments might be effective for this patient group. We evaluated the clinical efficacy and cost-effectiveness of a therapist-delivered, transdiagnostic cognitive behavioural intervention (TDT-CBT) plus (+) standard medical care (SMC) v. SMC alone for the treatment of patients with PPS in secondary medical care.
A two-arm randomised controlled trial, with measurements taken at baseline and at 9, 20, 40 and 52 weeks post-randomisation. The primary outcome measure was the Work and Social Adjustment Scale (WSAS) at 52 weeks. Secondary outcomes included mood (PHQ-9 and GAD-7), symptom severity (PHQ-15), a global measure of change (CGI), and the Persistent Physical Symptoms Questionnaire (PPSQ).
We randomised 324 patients, and 74% were followed up at 52 weeks. The difference between groups was not statistically significant for the primary outcome (WSAS at 52 weeks: estimated difference −1.48 points, 95% confidence interval from −3.44 to 0.48, p = 0.139). However, the results indicated a treatment effect in favour of TDT-CBT + SMC on some secondary outcomes, with three showing a statistically significant difference between groups: WSAS at 20 weeks (the end of treatment; p = 0.016), and the PHQ-15 (p = 0.013) and CGI (p = 0.011) at 52 weeks.
We have preliminary evidence that TDT-CBT + SMC may be helpful for people with a range of PPS. However, further study is required to maximise or maintain effects seen at end of treatment.
Pharmacogenomic testing has emerged to aid medication selection for patients with major depressive disorder (MDD) by identifying potential gene-drug interactions (GDI). Many pharmacogenomic tests are available with varying levels of supporting evidence, including direct-to-consumer and physician-ordered tests. We retrospectively evaluated the safety of using a physician-ordered combinatorial pharmacogenomic test (GeneSight) to guide medication selection for patients with MDD in a large, randomized, controlled trial (GUIDED).
Materials and Methods
Patients diagnosed with MDD who had an inadequate response to ≥1 psychotropic medication were randomized to treatment as usual (TAU) or combinatorial pharmacogenomic test-guided care (guided-care). All received combinatorial pharmacogenomic testing and medications were categorized by predicted GDI (no, moderate, or significant GDI). Patients and raters were blinded to study arm, and physicians were blinded to test results for patients in TAU, through week 8. Measures included adverse events (AEs, present/absent), worsening suicidal ideation (increase of ≥1 on the corresponding HAM-D17 question), or symptom worsening (HAM-D17 increase of ≥1). These measures were evaluated based on medication changes [add only, drop only, switch (add and drop), any, and none] and study arm, as well as baseline medication GDI.
Most patients had a medication change between baseline and week 8 (938/1,166; 80.5%), including 269 (23.1%) who added only, 80 (6.9%) who dropped only, and 589 (50.5%) who switched medications. In the full cohort, changing medications resulted in an increased relative risk (RR) of experiencing AEs at both week 4 and 8 [RR 2.00 (95% CI 1.41–2.83) and RR 2.25 (95% CI 1.39–3.65), respectively]. This was true regardless of arm, with no significant difference observed between guided-care and TAU, though the RRs for guided-care were lower than for TAU. Medication change was not associated with increased suicidal ideation or symptom worsening, regardless of study arm or type of medication change. Special attention was focused on patients who entered the study taking medications identified by pharmacogenomic testing as likely having significant GDI; those who were only taking medications subject to no or moderate GDI at week 8 were significantly less likely to experience AEs than those who were still taking at least one medication subject to significant GDI (RR 0.39, 95% CI 0.15–0.99, p=0.048). No other significant differences in risk were observed at week 8.
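The relative risks above compare AE rates between patients who did and did not change medications; the computation itself is the standard ratio of proportions with a Wald confidence interval on the log scale. A minimal sketch follows; the counts are hypothetical (chosen only to illustrate an RR of 2.0), since the abstract does not report the underlying event counts.

```python
# Relative risk (RR) with a 95% CI, computed on the log scale.
# Counts below are hypothetical illustrations, not trial data.
from math import exp, log, sqrt

def relative_risk(events_exp, n_exp, events_ctl, n_ctl, z=1.96):
    """RR of the exposed vs control group, with a Wald CI on log(RR)."""
    rr = (events_exp / n_exp) / (events_ctl / n_ctl)
    se = sqrt(1 / events_exp - 1 / n_exp + 1 / events_ctl - 1 / n_ctl)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# e.g. 60/300 patients with AEs after a medication change vs 20/200 without
rr, lo, hi = relative_risk(60, 300, 20, 200)
```

An RR whose CI excludes 1 (as in the reported 2.00 [1.41, 2.83]) indicates a statistically elevated risk in the medication-change group.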
These data indicate that patient safety in the combinatorial pharmacogenomic test-guided care arm was no worse than TAU in the GUIDED trial. Moreover, combinatorial pharmacogenomic-guided medication selection may reduce some safety concerns. Collectively, these data demonstrate that combinatorial pharmacogenomic testing can be adopted safely into clinical practice without risking symptom degradation among patients.
Linoleic acid (LA), an essential n-6 fatty acid (FA), is critical for fetal development. We investigated the effects of maternal high LA (HLA) diet on offspring cardiac development and its relationship to circulating FA and cardiovascular function in adolescent offspring, and the ability of the postnatal diet to reverse any adverse effects. Female Wistar Kyoto rats were fed low LA (LLA; 1·44 % energy from LA) or high LA (HLA; 6·21 % energy from LA) diets for 10 weeks before pregnancy and during gestation/lactation. Offspring, weaned at postnatal day 25, were fed LLA or HLA diets and euthanised at postnatal day 40 (n 6–8). Maternal HLA diet decreased circulating total cholesterol and HDL-cholesterol in females and decreased total plasma n-3 FA in males, while maternal and postnatal HLA diets decreased total plasma n-3 FA in females. α-Linolenic acid (ALA) and EPA were decreased by postnatal but not maternal HLA diets in both sexes. Maternal and postnatal HLA diets increased total plasma n-6 and LA, and a maternal HLA diet increased circulating leptin, in both male and female offspring. Maternal HLA decreased slopes of systolic and diastolic pressure–volume relationship (PVR), and increased cardiac Col1a1, Col3a1, Atp2a1 and Notch1 in males. Maternal and postnatal HLA diets left-shifted the diastolic PVR in female offspring. Coronary reactivity was altered in females, with differential effects on flow repayment after occlusion. Thus, maternal HLA diets impact lipids, FA and cardiac function in offspring, with postnatal diet modifying FA and cardiac function in the female offspring.
Advance Clinical and Translational Research (Advance-CTR) serves as a central hub to support and educate clinical and translational researchers in Rhode Island. Understanding barriers to clinical research in the state is the key to setting project aims and priorities.
We implemented a Group Concept Mapping exercise to characterize the views of researchers and administrators regarding how to increase the quality and quantity of clinical and translational research in their settings. Participants generated ideas in response to this prompt and rated each unique idea in terms of how important it was and how feasible it seemed to them.
Participants generated 78 unique ideas, from which 9 key themes emerged (e.g., Building connections between researchers). Items rated highest in perceived importance and feasibility included providing seed grants for pilot projects, connecting researchers with common interests and networking opportunities. Implications of results are discussed.
The Group Concept Mapping exercise enabled our project leadership to better understand stakeholder-perceived priorities and to act on ideas and aims most relevant to researchers in the state. This method is well suited to translational research enterprises beyond Rhode Island when a participatory evaluation stance is desired.
Demographic trends and the globalization of neuropsychology have led to a push toward inclusivity and diversity in neuropsychological research in order to maintain relevance in the healthcare marketplace. However, in a review of neuropsychological journals, O’Bryant et al. found systematic under-reporting of sample characteristics vital for understanding the generalizability of research findings. We sought to update and expand the findings reported by O’Bryant et al.
We evaluated 1648 journal articles published between 2016 and 2019 from 7 neuropsychological journals. Of these, 1277 were original research or secondary analyses and were examined further. Articles were coded for reporting of age, sex/gender, years of education, ethnicity/race, socioeconomic status (SES), language, and acculturation. Additionally, we recorded information related to sample size, country, and whether the article focused on a pediatric or adult sample.
Key variables such as age and sex/gender (both over 95%) as well as education (71%) were frequently reported. Language (20%) and race/ethnicity (36%) were modestly reported, while SES (13%) and acculturation (<1%) were rarely reported. SES was more commonly reported in pediatric than adult samples, and the opposite was true for education. There were differences between the present results and those of O’Bryant et al., though the same general trends remained.
Reporting of demographic data in neuropsychological research appears to be slowly changing toward greater comprehensiveness, though clearly more work is needed. Greater systematic reporting of such data is likely to be beneficial for the generalizability and contextualization of neurocognitive function.
Background: In recent years, the historic declines in the incidence of methicillin-resistant Staphylococcus aureus (MRSA) bloodstream infections (BSIs) in the United States have slowed. We examined trends in the incidence of community-onset (CO) MRSA BSIs among hospitalized persons with and without substance-use diagnoses. Methods: Using data from >200 US hospitals reporting to the Premier Healthcare Database (PHD) during 2012–2017, we conducted a retrospective study among hospitalized persons aged ≥18 years. MRSA BSIs with substance use were defined as hospitalizations having both a blood culture positive for MRSA and at least 1 International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) or ICD-10-CM diagnostic code for substance use including opioids, cocaine, amphetamines, or other substances (excluding cannabis, alcohol, and nicotine). MRSA BSIs were considered community onset when a positive blood culture was collected within 3 days of admission. We assessed annual trends and described characteristics of CO MRSA BSI hospitalizations, stratified by substance use. Results: Of 20,049 MRSA BSIs from 2012 to 2017, 17,634 (88%) were CO. Overall, MRSA BSI incidence decreased 7%, from 178.5 to 166.2 per 100,000 hospitalizations during the study period; however, CO MRSA BSI rates remained stable (152.7 to 149.9 per 100,000 hospitalizations). Among CO MRSA BSIs, 1,838 (10%) were BSIs with substance-use diagnoses; the incidence of CO MRSA BSIs with substance use increased 236% (from 8.2 to 27.6 per 100,000 hospitalizations) and represented a greater proportion of the CO MRSA rate over the study period (Fig. 1). The incidence of CO MRSA BSIs without substance use decreased 15% (from 144.5 to 122.4 per 100,000 hospitalizations). Patients with CO MRSA BSIs with substance use were younger (median, 40 vs 65 years) and more likely to be female (50% vs 40%), white (79% vs 69%), and to leave against medical advice (15% vs 1%).
Among patients not leaving against medical advice, CO BSI patients with substance-use diagnoses had longer lengths of stay (median, 11 vs 9 days), lower in-hospital mortality (9% vs 14%), and higher hospitalization costs (median, $22,912 vs $17,468) compared to patients without substance-use diagnoses. Conclusions: Although the overall CO MRSA BSI rate remained unchanged from 2012 to 2017, infections with substance-use diagnoses increased >3-fold, and infections without substance-use diagnoses decreased. These data suggest that the emergence of MRSA associated with substance-use diagnoses threatens potential progress in reducing the incidence of CO MRSA infections. Additional strategies may be needed to prevent MRSA BSI in patients with substance-use diagnoses, and to maintain national progress in the reduction of MRSA infections overall.
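The incidence trends in this abstract are percent changes on rates per 100,000 hospitalizations, and they can be recomputed directly from the rounded rates reported above as a quick sanity check. Note that the rounded inputs give approximately 237% for the substance-use increase, consistent with the reported 236% presumably derived from unrounded rates.

```python
# Percent change of incidence rates (per 100,000 hospitalizations),
# recomputed from the rounded rates quoted in the abstract.
def pct_change(old, new):
    """Percent change from old to new (negative = decrease)."""
    return (new - old) / old * 100

overall   = pct_change(178.5, 166.2)  # overall MRSA BSI incidence (~-7%)
substance = pct_change(8.2, 27.6)     # CO MRSA BSI with substance use (~+237%)
without   = pct_change(144.5, 122.4)  # CO MRSA BSI without substance use (~-15%)
```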
To assess the quantity and focus of recent empirical research regarding the effect of micronutrient supplementation on live birth outcomes in low-risk pregnancies from high-income countries.
A systematic quantitative literature review.
Low-risk pregnancies in World Bank-classified high-income countries, 2019.
Using carefully selected search criteria, a total of 2475 publications were identified, of which seventeen papers met the inclusion criteria for this review. Data contributing to nine of the studies were sourced from four cohorts; the research originated from ten countries. These cohorts exhibited large numbers of participants, stable data and a low probability of bias. The most recent empirical data offered by these studies were from 2011; the earliest were from 1980. In total, fifty-five categorical outcome/supplement combinations were examined; 67·3 % reported no evidence of micronutrient supplementation influencing selected outcomes.
A coordinated, cohesive and uniform empirical approach to future studies is required to determine what constitutes appropriate, effective and safe micronutrient supplementation in contemporary cohorts from high-income countries, and how this might influence pregnancy outcomes.
This study examined the relationship between patient performance on multiple memory measures and regional brain volumes using an FDA-cleared quantitative volumetric analysis program – Neuroreader™.
Ninety-two patients diagnosed with mild cognitive impairment (MCI) by a clinical neuropsychologist completed cognitive evaluations and underwent MR Neuroreader™ within 1 year of testing. Select brain regions were correlated with three widely used memory tests. Regression analyses were conducted to determine whether using more than one memory measure would better predict hippocampal z-scores and to explore the added value of recognition memory to prediction models.
Memory performances were more strongly correlated with hippocampal volumes than with other brain regions. After controlling for encoding/Immediate Recall standard scores, statistically significant correlations emerged between Delayed Recall and hippocampal volumes (rs ranging from .348 to .490). Regression analysis revealed that memory performance across multiple memory measures is a better predictor of hippocampal volume than individual memory performances. Recognition memory did not add further predictive utility to the regression analyses.
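The single- versus multi-measure regression comparison described above can be sketched with ordinary least squares on synthetic data. Everything below is simulated (variable names and effect sizes are illustrative, not study data); the point is only the structure of the comparison, and that a nested OLS model cannot lose in-sample R² when predictors are added.

```python
# Sketch: does a model with several memory scores fit a volumetric
# z-score better than a model with one score? Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 92  # sample size matching the study's n
# Three synthetic memory z-scores, each loading on a shared latent factor
latent = rng.normal(size=n)
mem = np.column_stack([0.6 * latent + 0.8 * rng.normal(size=n) for _ in range(3)])
hippo_z = 0.5 * latent + rng.normal(scale=0.8, size=n)  # synthetic hippocampal z-score

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_single = r_squared(mem[:, :1], hippo_z)  # one memory measure
r2_all = r_squared(mem, hippo_z)            # all three measures
```

Because the single-predictor model is nested in the three-predictor model, `r2_all` is guaranteed to be at least `r2_single` in-sample; the study's claim concerns whether the multi-measure gain is meaningful, which would be judged with adjusted R² or cross-validation.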
This study provides support for use of MR Neuroreader™ hippocampal volumes as a clinically informative biomarker associated with memory performance, which is a critical diagnostic feature of MCI phenotype.
The Genomics Used to Improve DEpression Decisions (GUIDED) trial assessed outcomes associated with combinatorial pharmacogenomic (PGx) testing in patients with major depressive disorder (MDD). Analyses used the 17-item Hamilton Depression (HAM-D17) rating scale; however, studies demonstrate that the abbreviated, core depression symptom-focused HAM-D6 rating scale may have greater sensitivity toward detecting differences between treatment and placebo. The sensitivity of the HAM-D6 has not, however, been tested when comparing two active treatment arms. Here, we evaluated the sensitivity of the HAM-D6 scale, relative to the HAM-D17 scale, when assessing outcomes for actively treated patients in the GUIDED trial.
Outpatients (N=1,298) diagnosed with MDD and an inadequate treatment response to ≥1 psychotropic medication were randomized into treatment as usual (TAU) or combinatorial PGx-guided (guided-care) arms. Combinatorial PGx testing was performed on all patients, though test reports were only available to the guided-care arm. All patients and raters were blinded to study arm until after week 8. Medications on the combinatorial PGx test report were categorized based on the level of predicted gene-drug interactions: ‘use as directed’, ‘moderate gene-drug interactions’, or ‘significant gene-drug interactions.’ Patient outcomes were assessed by arm at week 8 using the HAM-D6 and HAM-D17 rating scales, including symptom improvement (percent change in scale), response (≥50% decrease in scale), and remission (HAM-D6 ≤4 and HAM-D17 ≤7).
At week 8, the guided-care arm demonstrated statistically significant symptom improvement over TAU using the HAM-D6 scale (Δ=4.4%, p=0.023), but not using the HAM-D17 scale (Δ=3.2%, p=0.069). The response rate increased significantly for guided-care compared with TAU using both HAM-D6 (Δ=7.0%, p=0.004) and HAM-D17 (Δ=6.3%, p=0.007). Remission rates were also significantly greater for guided-care versus TAU using both scales (HAM-D6 Δ=4.6%, p=0.031; HAM-D17 Δ=5.5%, p=0.005). Patients taking medication(s) predicted to have gene-drug interactions at baseline showed further increased benefit over TAU at week 8 using HAM-D6 for symptom improvement (Δ=7.3%, p=0.004), response (Δ=10.0%, p=0.001) and remission (Δ=7.9%, p=0.005). Comparatively, the magnitude of the differences in outcomes between arms at week 8 was lower using HAM-D17 (symptom improvement Δ=5.0%, p=0.029; response Δ=8.0%, p=0.008; remission Δ=7.5%, p=0.003).
Combinatorial PGx-guided care achieved significantly better patient outcomes compared with TAU when assessed using the HAM-D6 scale. These findings suggest that the HAM-D6 scale is better suited than the HAM-D17 for evaluating change in randomized, controlled trials comparing active treatment arms.
This paper synthesises the data and results of the Konya Regional Archaeological Survey Project (2016–2020) in order to address the earliest evidence for cities and states on the Konya and Karaman plains, central Turkey. A nested and integrative approach is developed that draws on a wide range of spatially extensive datasets to outline meaningful trends in settlement, water management and regional defensive systems during the Bronze and Iron Ages. The significance of the regional centre of Türkmen-Karahöyük for a reconstruction of early state polities between the 13th and eighth centuries BCE is addressed. In light of this regional analysis, it is tentatively suggested that, during the Late Bronze Age, Türkmen-Karahöyük was the location of the city of Tarḫuntašša, briefly the Hittite capital during the reign of Muwatalli II. More assuredly, based on the analysis of the newly discovered Middle Iron Age TÜRKMEN-KARAHÖYÜK 1 inscription, it is proposed that Türkmen-Karahöyük was the seat of a kingdom during the eighth century BCE that likely encompassed the Konya and Karaman plains.
Objectives: To describe multivariate base rates (MBRs) of low scores and reliable change (decline) scores on Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) in college athletes at baseline, as well as to assess MBR differences among demographic and medical history subpopulations. Methods: Data were reported on 15,909 participants (46.5% female) from the NCAA/DoD CARE Consortium. MBRs of ImPACT composite scores were derived using published CARE normative data and reliability metrics. MBRs of sex-corrected low scores were reported at <25th percentile (Low Average), <10th percentile (Borderline), and ≤2nd percentile (Impaired). MBRs of reliable decline scores were reported at the 75%, 90%, 95%, and 99% confidence intervals. We analyzed subgroups by sex, race, attention-deficit/hyperactivity disorder and/or learning disability (ADHD/LD), anxiety/depression, and concussion history using chi-square analyses. Results: Base rates of low scores and reliable decline scores on individual composites approximated the normative distribution. Athletes obtained ≥1 low score with frequencies of 63.4% (Low Average), 32.0% (Borderline), and 9.1% (Impaired). Athletes obtained ≥1 reliable decline score with frequencies of 66.8%, 32.2%, 18%, and 3.8%, respectively. Comparatively few athletes had low scores or reliable decline on ≥2 composite scores. Black/African American athletes and athletes with ADHD/LD had higher rates of low scores, while greater concussion history was associated with lower MBRs (p < .01). MBRs of reliable decline were not associated with demographic or medical factors. Conclusions: Clinical interpretation of low scores and reliable decline on ImPACT depends on the strictness of the low score cutoff, the reliable change criterion, and the number of scores exceeding these cutoffs. Race and ADHD influence the frequency of low scores at all cutoffs cross-sectionally.
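The multivariate base rates above reflect a general point: when several scores are considered at once, obtaining at least one low score is common even in healthy samples. Under the simplifying (and here inexact) assumption that ImPACT's four composite scores are independent, the expected rate of at least one score below a cutoff with tail probability p is 1 − (1 − p)^k. The observed rates (63.4%, 32.0%, 9.1%) sit near or below these independence-based values because the composites are correlated.

```python
# Expected multivariate base rate for k independent scores, each with
# tail probability p of falling below the cutoff. k=4 assumes the four
# ImPACT composite scores; real composites are correlated, so observed
# rates fall somewhat below these values.
def p_at_least_one_low(p, k=4):
    """P(at least one of k independent scores is 'low') = 1 - (1-p)^k."""
    return 1 - (1 - p) ** k

low_avg    = p_at_least_one_low(0.25)  # <25th percentile (Low Average), ~68%
borderline = p_at_least_one_low(0.10)  # <10th percentile (Borderline), ~34%
impaired   = p_at_least_one_low(0.02)  # <=2nd percentile (Impaired), ~8%
```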
OBJECTIVES/SPECIFIC AIMS: The objective of this research was to assess the clinical impact of simulation-based team leadership training on team leadership effectiveness and patient care during actual trauma resuscitations. This translational work addresses an important gap in simulation research and medical education research. METHODS/STUDY POPULATION: Eligible trauma team leaders were randomized to the intervention (4-hour simulation-based leadership training) or control (standard training) condition. Subject-led actual trauma patient resuscitations were video recorded and coded for leadership behaviors (primary outcome) and patient care (secondary outcome) using novel leadership and trauma patient care metrics. Patient outcomes for trauma resuscitations were obtained through the Harborview Medical Center Trauma Registry and analyzed descriptively. A one-way ANCOVA was conducted to test the effectiveness of our training intervention versus a control group for each outcome (leadership effectiveness and patient care) while accounting for pre-training performance, injury severity score, postgraduate training year, and days since training occurred. The association between leadership effectiveness and patient care was evaluated using random coefficient modeling. RESULTS/ANTICIPATED RESULTS: Sixty team leaders, 30 in each condition, completed the study. There was a significant difference in post-training leadership effectiveness [F(1,54)=30.19, p<.001, η²=.36] between the experimental and control conditions. There was no direct impact of training on patient care [F(1,54)=1.0, p=0.33, η²=.02]; however, leadership effectiveness mediated an indirect effect of training on patient care. Across all trauma resuscitations, team leader effectiveness correlated with patient care (p<0.05), as predicted by team leadership conceptual models. DISCUSSION/SIGNIFICANCE OF IMPACT: This work represents a critical step in advancing translational simulation-based research (TSR).
While there are several examples of high-quality translational research programs, they primarily focus on procedural tasks and do not evaluate highly complex skills such as leadership. Complex skills present significant measurement challenges because individuals and processes are interrelated, and because tasks and related behaviors have multiple components and an emergent nature. We provide evidence that simulation-based training of a complex skill (team leadership behavior) transfers to a complex clinical setting (emergency department) with highly variable clinical tasks (trauma resuscitations). Our novel team leadership training significantly improved overall leadership performance, and leadership effectiveness partially mediated the positive effect of training on patient care. This represents the first rigorous, randomized, controlled trial of a leadership- or teamwork-focused training that systematically evaluates the impact on process (leadership) and performance (patient care).