Retrospective self-report is typically used for diagnosing previous pediatric traumatic brain injury (TBI). A new semi-structured interview instrument (New Mexico Assessment of Pediatric TBI; NewMAP TBI) was used to investigate test–retest reliability of TBI characteristics, both for the TBI that qualified participants for study inclusion and for lifetime history of TBI.
One hundred and eighty-four children with mild TBI (mTBI; aged 8–18), 156 matched healthy controls (HC), and their parents completed the NewMAP TBI within 11 days (subacute; SA) and 4 months (early chronic; EC) of injury, with a subset returning at 1 year (late chronic; LC).
The test–retest reliability of common TBI characteristics [loss of consciousness (LOC), post-traumatic amnesia (PTA), retrograde amnesia, confusion/disorientation] and of post-concussion symptoms (PCS) was examined across study visits. Aside from PTA, binary reporting (present/absent) for all TBI characteristics exhibited acceptable (≥0.60) test–retest reliability for both Qualifying and Remote TBIs across all three visits. In contrast, reliability for continuous data (exact duration) was generally unacceptable, with LOC and PCS meeting acceptable criteria at only half of the assessments. Transforming continuous self-report ratings into discrete categories based on injury severity resulted in acceptable reliability. Reliability was not strongly affected by the parent completing the NewMAP TBI.
Categorical reporting of TBI characteristics in children and adolescents can aid clinicians in retrospectively obtaining reliable estimates of TBI severity up to a year post-injury. However, test–retest reliability is strongly impacted by the initial data distribution, selected statistical methods, and potentially by patient difficulty in distinguishing among conceptually similar medical concepts (i.e., PTA vs. confusion).
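The binary (present/absent) test–retest agreement described above is typically summarized with a chance-corrected index such as Cohen's kappa; the abstract does not name the exact statistic, so the following is only an illustrative sketch with invented reports, not the study's code.

```python
# Illustrative sketch: agreement between a binary TBI characteristic
# (e.g., LOC present/absent) reported at two visits, summarized with
# Cohen's kappa. All data below are made up.

def cohens_kappa(visit1, visit2):
    """Cohen's kappa for two binary (0/1) rating sequences."""
    assert len(visit1) == len(visit2)
    n = len(visit1)
    observed = sum(a == b for a, b in zip(visit1, visit2)) / n
    p1 = sum(visit1) / n  # proportion "present" at visit 1
    p2 = sum(visit2) / n  # proportion "present" at visit 2
    expected = p1 * p2 + (1 - p1) * (1 - p2)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical reports of loss of consciousness at two visits
subacute      = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
early_chronic = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
kappa = cohens_kappa(subacute, early_chronic)
print(round(kappa, 3))  # 0.583 -- just below a 0.60 "acceptable" cutoff
```

In this toy example, 8 of 10 reports agree, but chance agreement (0.52) pulls kappa below the 0.60 threshold the abstract uses for acceptability.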
Recent cannabis exposure has been associated with lower rates of neurocognitive impairment in people with HIV (PWH). Cannabis’s anti-inflammatory properties may underlie this relationship by reducing chronic neuroinflammation in PWH. This study examined relations between cannabis use and inflammatory biomarkers in cerebrospinal fluid (CSF) and plasma, and cognitive correlates of these biomarkers within a community-based sample of PWH.
263 individuals were categorized into four groups: HIV− non-cannabis users (n = 65), HIV+ non-cannabis users (n = 105), HIV+ moderate cannabis users (n = 62), and HIV+ daily cannabis users (n = 31). Differences in pro-inflammatory biomarkers (IL-6, MCP-1/CCL2, IP-10/CXCL10, sCD14, sTNFR-II, TNF-α) by study group were determined by Kruskal–Wallis tests. Multivariable linear regressions examined relationships between biomarkers and seven cognitive domains, adjusting for age, sex/gender, race, education, and current CD4 count.
HIV+ daily cannabis users showed lower MCP-1 and IP-10 levels in CSF compared to HIV+ non-cannabis users (p = .015; p = .039) and were similar to HIV− non-cannabis users. Plasma biomarkers showed no differences by cannabis use. Among PWH, lower CSF MCP-1 and lower CSF IP-10 were associated with better learning performance (all ps < .05).
Current daily cannabis use was associated with lower levels of pro-inflammatory chemokines implicated in HIV pathogenesis and these chemokines were linked to the cognitive domain of learning which is commonly impaired in PWH. Cannabinoid-related reductions of MCP-1 and IP-10, if confirmed, suggest a role for medicinal cannabis in the mitigation of persistent inflammation and cognitive impacts of HIV.
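The group comparison named above (Kruskal–Wallis tests of biomarker levels across study groups) can be sketched in a few lines. This toy version uses three small invented groups, omits the tie correction a real analysis would apply, and is not the study's code.

```python
# Minimal sketch of the Kruskal-Wallis H statistic used to compare a
# biomarker (e.g., CSF MCP-1) across study groups. Assumes no tied
# values; all numbers are invented for illustration.

def kruskal_wallis_h(*groups):
    """H statistic for k independent samples (no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # ranks 1..N
    n_total = len(pooled)
    h = 0.0
    for g in groups:
        mean_rank = sum(rank[x] for x in g) / len(g)
        h += len(g) * (mean_rank - (n_total + 1) / 2) ** 2
    return 12 / (n_total * (n_total + 1)) * h

# Hypothetical biomarker levels in three small groups
h = kruskal_wallis_h([1.2, 1.9, 2.4], [4.1, 5.0, 6.3], [7.7, 8.2, 9.9])
print(round(h, 1))  # 7.2 -- referred to a chi-square with k-1 df
```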
Definition of disorder subtypes may facilitate precision treatment for posttraumatic stress disorder (PTSD). We aimed to identify PTSD subtypes and evaluate their associations with genetic risk factors, types of stress exposures, comorbidity, and course of PTSD.
Data came from a prospective study of three U.S. Army Brigade Combat Teams that deployed to Afghanistan in 2012. Soldiers with probable PTSD (PTSD Checklist for Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition ≥31) at three months postdeployment comprised the sample (N = 423) for latent profile analysis using Gaussian mixture modeling and PTSD symptom ratings as indicators. PTSD profiles were compared on polygenic risk scores (derived from external genomewide association study summary statistics), experiences during deployment, comorbidity at three months postdeployment, and persistence of PTSD at nine months postdeployment.
Latent profile analysis revealed profiles characterized by prominent intrusions, avoidance, and hyperarousal (threat-reactivity profile; n = 129), anhedonia and negative affect (dysphoric profile; n = 195), and high levels of all PTSD symptoms (high-symptom profile; n = 99). The threat-reactivity profile had the most combat exposure and the least comorbidity. The dysphoric profile had the highest polygenic risk for major depression, and more personal life stress and co-occurring major depression than the threat-reactivity profile. The high-symptom profile had the highest rates of concurrent mental disorders and persistence of PTSD.
Genetic and trauma-related factors likely contribute to PTSD heterogeneity, which can be parsed into subtypes that differ in symptom expression, comorbidity, and course. Future studies should evaluate whether PTSD typology modifies treatment response and should clarify distinctions between the dysphoric profile and depressive disorders.
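Latent profile analysis with Gaussian mixture modeling, as used above, amounts to fitting a mixture of Gaussians by expectation-maximization and assigning each case to its most probable profile. The toy sketch below is one-dimensional with two components and invented scores; the study fit multivariate mixtures over PTSD symptom ratings.

```python
# Sketch of the EM idea behind latent profile analysis: fit a
# two-component 1-D Gaussian mixture, then assign each case to its
# most probable "profile". Data are invented.
import math

def em_two_gaussians(x, mu, iters=50, var_floor=1e-3):
    """EM for a two-component 1-D Gaussian mixture.
    Returns (means, per-point responsibilities)."""
    var = [1.0, 1.0]
    weight = [0.5, 0.5]
    resp = []
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for xi in x:
            dens = [weight[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: update weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(var_floor,
                         sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk)
    return mu, resp

scores = [-0.5, 0.0, 0.5, 9.5, 10.0, 10.5]  # two well-separated "profiles"
means, resp = em_two_gaussians(scores, mu=[1.0, 9.0])
labels = [max(range(2), key=lambda k: r[k]) for r in resp]
print([round(m, 2) for m in sorted(means)])  # [0.0, 10.0]
print(labels)                                # [0, 0, 0, 1, 1, 1]
```

In practice the number of profiles (three, in this study) is chosen by comparing fit indices such as BIC across candidate models, and the indicators are multivariate symptom ratings rather than a single score.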
To prioritise and refine a set of evidence-informed statements into advice messages to promote vegetable liking in early childhood, and to determine applicability for dissemination of advice to relevant audiences.
A nominal group technique (NGT) workshop and a Delphi survey were conducted to prioritise and achieve consensus (≥70% agreement) on 30 evidence-informed maternal (perinatal and lactation stage), infant (complementary feeding stage) and early years (family diet stage) vegetable-related advice messages. Messages were validated via triangulation analysis against the strength of evidence from an Umbrella review of strategies to increase children’s vegetable liking, and gaps in advice from a Desktop review of vegetable feeding advice.
A purposeful sample of key stakeholders (NGT workshop, n=8 experts; Delphi survey, n=23 end-users).
Participant consensus identified the most highly ranked priority messages associated with the strategies of: ‘in-utero exposure’ (perinatal and lactation, n=56 points); and ‘vegetable variety’ (complementary feeding, n=97 points; family diet, n=139 points). Triangulation revealed two strategies (‘repeated exposure’ and ‘variety’) and their associated advice messages suitable for policy and practice, 12 for research and four for food industry.
Supported by national and state feeding guideline documents and resources, the advice messages relating to ‘repeated exposure’ and ‘variety’ to increase vegetable liking can be communicated to families and caregivers by healthcare practitioners. The food industry provides a vehicle for advice promotion and product development. Further research, where stronger evidence is needed, could further inform strategies for policy and practice, and food industry application.
Our objective was to compare patterns of dental antibiotic prescribing in Australia, England, and North America (United States and British Columbia, Canada).
Population-level analysis of antibiotic prescription.
Outpatient prescribing by dentists in 2017.
Patients receiving an antibiotic dispensed by an outpatient pharmacy.
Prescription-based rates adjusted by population were compared overall and by antibiotic class. Contingency tables assessed differences in the proportion of antibiotic class by country.
In 2017, dentists in the United States had the highest antibiotic prescribing rate per 1,000 population and Australia had the lowest rate. The penicillin class, particularly amoxicillin, was the most frequently prescribed in all countries. The second most common agents prescribed were clindamycin in the United States and British Columbia (Canada) and metronidazole in Australia and England. Prescribing of the broad-spectrum agents amoxicillin-clavulanic acid and azithromycin was highest in Australia and the United States, respectively.
Extreme differences exist in antibiotics prescribed by dentists in Australia, England, the United States, and British Columbia. The United States had twice the antibiotic prescription rate of Australia, and the second most frequently prescribed antibiotic in the US was clindamycin. Significant opportunities exist for the global dental community to update their prescribing behavior relating to second-line agents for penicillin-allergic patients and to contribute to international efforts addressing antibiotic resistance. Patient safety improvements will result from optimizing dental antibiotic prescribing, especially for antibiotics associated with resistance (broad-spectrum agents) or C. difficile (clindamycin). Dental antibiotic stewardship programs are urgently needed worldwide.
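The contingency-table comparison mentioned in the methods (differences in the proportion of antibiotic class by country) rests on the Pearson chi-square statistic, which can be sketched as follows; the 2x2 counts below are invented, not the study's data.

```python
# Sketch of a Pearson chi-square statistic for a contingency table of
# antibiotic-class counts by country. Counts are hypothetical.

def chi_square(table):
    """Pearson chi-square statistic for a 2-D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: two countries; columns: prescriptions of two antibiotic classes
table = [[10, 20],
         [20, 10]]
print(round(chi_square(table), 4))  # 6.6667
```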
Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as, how much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? Or how are human activities modifying these flows? Modeling, when coupled with field and laboratory studies, represents the essence of the SEP in that they embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were primarily used to improve our understanding about how biophysical aspects of ecosystems operate. However, current ecosystem models are widely used to make accurate predictions about how large-scale phenomena such as climate change and management practices impact ecosystem dynamics and assess potential effects of these changes on economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism to integrate diverse types of knowledge regarding how the earth system functions and to make quantitative predictions that can be confronted with observations of reality. Modeling efforts discussed are the Century ecosystem model, DayCent ecosystem model, Grassland Ecosystem Model ELM, food web models, Savanna model, agent-based and coupled systems modeling, and Bayesian modeling.
Milrinone is a phosphodiesterase type 3 inhibitor that results in a positive inotropic effect in the heart through an increase in cyclic adenosine monophosphate. The purpose of this study was to evaluate circulating cyclic adenosine monophosphate and milrinone concentrations in milrinone-treated paediatric patients undergoing congenital heart surgery.
Single-centre prospective observational pilot study from January 2015 to December 2017 including children aged birth to 18 years. Milrinone and circulating cyclic adenosine monophosphate concentrations were measured at four time points through the first post-operative day and compared between patients with and without low cardiac output syndrome, defined using clinical and laboratory criteria.
Fifty patients were included. Nine (18%) developed low cardiac output syndrome. Overall, 22% of patients had single-ventricle heart disease. The density and distribution of cyclic adenosine monophosphate concentrations varied between those with and without low cardiac output syndrome but were not significantly different. Milrinone concentrations increased in all patients. Paired t-tests demonstrated an increase in circulating cyclic adenosine monophosphate concentrations during the post-operative period among patients without low cardiac output syndrome.
In this prospective observational study, circulating cyclic adenosine monophosphate concentrations increased in those without low cardiac output syndrome during the first 24 post-operative hours and milrinone concentrations increased in all patients. Further study of the utility of cyclic adenosine monophosphate concentrations in milrinone-treated patients is necessary.
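The within-patient comparison above uses paired t-tests, which reduce to a one-sample t statistic on the per-patient differences. A minimal sketch with invented concentrations:

```python
# Sketch of a paired t-test as used to compare pre/post cyclic AMP
# concentrations within patients. Values are invented.
import math

def paired_t(pre, post):
    """Paired t statistic (df = n - 1) for two matched samples."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

pre  = [1.0, 2.0, 3.0]
post = [2.0, 4.0, 6.0]
print(round(paired_t(pre, post), 4))  # 3.4641
```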
To analyse nutritional and packaging characteristics of toddler-specific foods and milks in the Australian retail food environment to identify how such products fit within the Australian Dietary Guidelines (ADG) and the NOVA classification.
Cross-sectional retail audit of toddler foods and milks. On-pack product attributes were recorded. Products were categorised as (1) food or milk; (2) snack food or meal; and (3) snacks sub-categorised depending on main ingredients. Products were classified as a discretionary or core food as per the ADG and level of processing according to NOVA classification.
Supermarkets and pharmacies in Australia.
A total of 154 foods and thirty-two milks were identified. Eighty percent of foods were snacks, and 60 % of foods were classified as core foods, while 85 % were ultraprocessed (UP). Per 100 g, discretionary foods provided significantly more energy, protein, total and saturated fat, carbohydrate, total sugar and Na (P < 0·001) than core foods. Total sugars were significantly higher (P < 0·001) and Na significantly lower (P < 0·001) in minimally processed foods than in UP foods. All toddler milks (n 32) were found to have higher energy, carbohydrate and total sugar levels than full-fat cow’s milk per 100 ml. Claims and messages were present on 99 % of foods and all milks.
The majority of toddler foods available in Australia are UP snack foods and do not align with the ADG. Toddler milks, despite being UP, do align with the ADG. A strengthened regulatory approach may address this issue.
Cognitive behavior therapy (CBT) is effective for most patients with social anxiety disorder (SAD), but a substantial proportion fails to remit. Experimental and clinical research suggests that enhancing CBT using imagery-based techniques could improve outcomes. It was hypothesized that imagery-enhanced CBT (IE-CBT) would be superior to verbally-based CBT (VB-CBT) on pre-registered outcomes.
A randomized controlled trial of IE-CBT v. VB-CBT for social anxiety was completed in a community mental health clinic setting. Participants were randomized to IE (n = 53) or VB (n = 54) CBT, with 1-month (primary end point) and 6-month follow-up assessments. Participants completed 12, 2-hour, weekly sessions of IE-CBT or VB-CBT plus 1-month follow-up.
Intention to treat analyses showed very large within-treatment effect sizes on social interaction anxiety at all time points (ds = 2.09–2.62), with no between-treatment differences on this outcome or clinician-rated severity [1-month OR = 1.45 (0.45, 4.62), p = 0.53; 6-month OR = 1.31 (0.42, 4.08), p = 0.65], SAD remission (1-month: IE = 61.04%, VB = 55.09%, p = 0.59; 6-month: IE = 58.73%, VB = 61.89%, p = 0.77), or secondary outcomes. Three adverse events were noted (substance abuse, n = 1 in IE-CBT; temporary increase in suicide risk, n = 1 in each condition, with one participant withdrawn at 1-month follow-up).
Group IE-CBT and VB-CBT were safe and there were no significant differences in outcomes. Both treatments were associated with very large within-group effect sizes and the majority of patients remitted following treatment.
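The between-treatment comparisons above are reported as odds ratios with 95% confidence intervals; for a 2x2 outcome table these follow from the cross-product ratio and a Wald interval on the log scale. The counts below are invented, not the trial's data.

```python
# Sketch: odds ratio and Wald 95% CI from a 2x2 remission table.
# Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for the table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# a/b: remitted vs not in one arm; c/d: remitted vs not in the other
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 4.0 1.37 11.7
```

Note that the interval spans a wide range even for a sizable OR, which is why the trial's ORs near 1.3–1.5 with CIs crossing 1 indicate no detectable between-treatment difference.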
In response to advancing clinical practice guidelines regarding concussion management, service members, like athletes, complete a baseline assessment prior to participating in high-risk activities. While several studies have established test stability in athletes, no investigation to date has examined the stability of baseline assessment scores in military cadets. The objective of this study was to assess the test–retest reliability of a baseline concussion test battery in cadets at U.S. Service Academies.
All cadets participating in the Concussion Assessment, Research, and Education (CARE) Consortium investigation completed a standard baseline battery that included memory, balance, symptom, and neurocognitive assessments. Annual baseline testing was completed during the first 3 years of the study. A two-way mixed-model analysis of variance (intraclass correlation coefficient (ICC)3,1) and Kappa statistics were used to assess the stability of the metrics at 1-year and 2-year time intervals.
ICC values for the 1-year test interval ranged from 0.28 to 0.67 and from 0.15 to 0.57 for the 2-year interval. Kappa values ranged from 0.16 to 0.21 for the 1-year interval and from 0.29 to 0.31 for the 2-year test interval. Across all measures, the observed effects were small, ranging from 0.01 to 0.44.
This investigation noted less than optimal reliability for the most common concussion baseline assessments. While none of the assessments met or exceeded the accepted clinical threshold, the effect sizes were relatively small suggesting an overlap in performance from year-to-year. As such, baseline assessments beyond the initial evaluation in cadets are not essential but could aid concussion diagnosis.
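The ICC(3,1) used above comes from a two-way mixed-model ANOVA over a subjects-by-sessions score table: (MS_subjects − MS_error) / (MS_subjects + (k−1)·MS_error). A minimal sketch with invented scores:

```python
# Sketch of ICC(3,1) for test-retest stability, computed from a
# subjects-by-sessions ANOVA decomposition. Scores below are invented.

def icc_3_1(scores):
    """ICC(3,1) for an n-subjects x k-sessions table of scores."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(col) / n for col in zip(*scores)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Four cadets tested in two sessions; a constant practice effect of +1
# does not reduce consistency, so ICC(3,1) = 1.0
baseline = [[1, 2], [3, 4], [5, 6], [7, 8]]
print(icc_3_1(baseline))  # 1.0
```

Because ICC(3,1) models session as a fixed effect, a uniform practice effect (every cadet improving by the same amount) does not lower the coefficient; the low observed values (0.15–0.67) therefore reflect genuine rank-order instability.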
Unit cohesion may protect service member mental health by mitigating effects of combat exposure; however, questions remain about the origins of potential stress-buffering effects. We examined buffering effects associated with two forms of unit cohesion (peer-oriented horizontal cohesion and subordinate-leader vertical cohesion) defined as either individual-level or aggregated unit-level variables.
Longitudinal survey data from US Army soldiers who deployed to Afghanistan in 2012 were analyzed using mixed-effects regression. Models evaluated individual- and unit-level interaction effects of combat exposure and cohesion during deployment on symptoms of post-traumatic stress disorder (PTSD), depression, and suicidal ideation reported at 3 months post-deployment (model n's = 6684 to 6826). Given the small effective sample size (k = 89), the significance of unit-level interactions was evaluated at a 90% confidence level.
At the individual-level, buffering effects of horizontal cohesion were found for PTSD symptoms [B = −0.11, 95% CI (−0.18 to −0.04), p < 0.01] and depressive symptoms [B = −0.06, 95% CI (−0.10 to −0.01), p < 0.05]; while a buffering effect of vertical cohesion was observed for PTSD symptoms only [B = −0.03, 95% CI (−0.06 to −0.0001), p < 0.05]. At the unit-level, buffering effects of horizontal (but not vertical) cohesion were observed for PTSD symptoms [B = −0.91, 90% CI (−1.70 to −0.11), p = 0.06], depressive symptoms [B = −0.83, 90% CI (−1.24 to −0.41), p < 0.01], and suicidal ideation [B = −0.32, 90% CI (−0.62 to −0.01), p = 0.08].
Policies and interventions that enhance horizontal cohesion may protect combat-exposed units against post-deployment mental health problems. Efforts to support individual soldiers who report low levels of horizontal or vertical cohesion may also yield mental health benefits.
Introduction: The Cunningham reduction method for anterior shoulder dislocation offers an atraumatic alternative to traditional reduction techniques without the inconvenience and risk of procedural sedation and analgesia (PSA). Unfortunately, success rates as low as 27% have limited widespread use of this method. Inhaled methoxyflurane (I-MEOF) offers a rapidly administered, minimally invasive option for short-term analgesia. We conducted a pilot study to evaluate the feasibility of studying whether I-MEOF increased success rates for atraumatic reduction of anterior shoulder dislocation. Methods: A convenience sample of 20 patients with uncomplicated anterior shoulder dislocations were offered the Cunningham reduction method supported by methoxyflurane analgesia under the guidance of an advanced care paramedic. Operators were instructed to limit their attempt to the Cunningham method. Outcomes included success rate without the requirement for PSA, time to discharge, and operator and patient satisfaction with the procedure. Results: 20 patients received I-MEOF and an attempt at Cunningham reduction. 80% of patients were male, median age was 38.6 (range 18-71), and 55% were first dislocations of that joint. 35% (8/20 patients) had reduction successfully achieved by the Cunningham method under I-MEOF analgesia. The remainder proceeded to closed reduction under PSA. All patients had eventual successful reduction in the ED. 60% of operators reported good to excellent satisfaction with the process, with inadequate muscle relaxation being identified as the primary cause of failed initial attempts. 80% of patients reported good to excellent satisfaction. Conclusion: Success with the Cunningham technique was marginally increased with the use of I-MEOF, although 65% of patients still required PSA to facilitate reduction. The process was generally met with satisfaction by both providers and patients, suggesting that early administration of analgesia is appreciated. 
Moreover, one-third of patients had reduction achieved atraumatically without need for further intervention. A larger, randomized study may identify patient characteristics which make this reduction method more likely to be successful.
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that Frascati criteria are too liberal, resulting in a high false positive rate. Meyer et al. recommended more conservative revisions to HAND criteria, including exploring other commonly used methodologies for neurocognitive impairment (NCI) in HIV including the global deficit score (GDS). This study compares NCI classifications by Frascati, Meyer, and GDS methods, in relation to neuroimaging markers of brain integrity in HIV.
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing and brain structural magnetic resonance imaging and magnetic resonance spectroscopy. Participants were classified using Frascati criteria versus Meyer criteria: concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)] which were impaired via Frascati criteria but unimpaired via Meyer criteria. To investigate the GDS versus Meyer criteria, the same groupings were utilized using GDS criteria instead of Frascati criteria.
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
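The global deficit score (GDS) method referenced above converts each demographically corrected T-score to a deficit score, averages across measures, and classifies impairment at a conventional GDS ≥ 0.5 cutoff. The cutpoints below follow the commonly used GDS convention, and the T-scores are invented for illustration.

```python
# Sketch of the global deficit score (GDS) approach to classifying
# neurocognitive impairment. T-scores below are invented.

def deficit_score(t):
    """Map a T-score to a 0-5 deficit score (higher = more impaired)."""
    if t >= 40: return 0   # within normal limits
    if t >= 35: return 1   # mild
    if t >= 30: return 2   # mild-to-moderate
    if t >= 25: return 3   # moderate
    if t >= 20: return 4   # moderate-to-severe
    return 5               # severe

def global_deficit_score(t_scores):
    return sum(deficit_score(t) for t in t_scores) / len(t_scores)

t_scores = [55, 38, 29, 45]            # one T-score per cognitive measure
gds = global_deficit_score(t_scores)
print(gds, gds >= 0.5)  # 1.0 True -> classified impaired
```

Because strong performances contribute 0 rather than offsetting deficits, the GDS emphasizes the breadth and depth of impairment, which is why its classifications can diverge from both Frascati and Meyer criteria.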
Background: Spinal muscular atrophy (SMA) is characterized by reduced levels of survival of motor neuron (SMN) protein resulting from deletions and/or mutations of the SMN1 gene. While SMN1 produces full-length SMN protein, a second gene, SMN2, produces low levels of functional SMN protein. Risdiplam (RG7916/RO7034067) is an investigational, orally administered, centrally and peripherally distributed small molecule that modulates pre-mRNA splicing of SMN2 to increase SMN protein levels. Methods: SUNFISH (NCT02908685) is an ongoing multicenter, double-blind, placebo-controlled, operationally seamless study (randomized 2:1, risdiplam:placebo) in patients aged 2–25 years, with Type 2/3 SMA. Part 1 (n=51) assesses safety, tolerability, pharmacokinetics and pharmacodynamics of different risdiplam dose levels. Pivotal Part 2 (n=180) assesses safety and efficacy of the risdiplam dose level selected based on Part 1 results. Results: Part 1 results showed a sustained, >2-fold increase in median SMN protein versus baseline following 1 year of treatment. Adverse events were mostly mild, resolved despite ongoing treatment and reflected underlying disease. No drug-related safety findings have led to withdrawal (data-cut 06/17/18). SUNFISH Part 1 exploratory endpoint results and Part 2 study design will also be presented. Conclusions: To date, no drug-related safety findings have led to withdrawal. Risdiplam led to sustained increases in SMN protein levels.
Introduction: Acute migraine headaches are common causes of presentation to the emergency department (ED). There is great variability in the efficacy of the available parenteral agents to manage pain, though triptans are among the recommended treatments. The objective of this systematic review was to update a previous review examining the effectiveness of parenteral agents for the treatment of acute migraine in the ED or equivalent acute care setting; our review examined pain management in emergency settings and assessed the effectiveness of triptan agents. Methods: A comprehensive search of 10 electronic databases and grey literature was conducted to supplement the previous systematic review. Two independent reviewers completed study selection, quality assessment, and data extraction. Any discrepancies were resolved by third-party adjudication. Pain scale scores were analyzed using standardized mean difference (SMD) with 95% confidence intervals (CIs) calculated using a random effects model; heterogeneity (I2) was reported. Results: Titles and abstracts of 5039 unique studies were reviewed, of which 51 were included. Sixty-four studies from the original review were included, resulting in a total of 115 included studies. Pain was measured within the ED or equivalent acute care setting using a variety of pain scales, most commonly the 0-10 cm or 100 mm visual analog scale. Four studies compared pain scores between patients receiving sumatriptan and other agents; across these, patients receiving sumatriptan reported higher pain scale scores (SMD = 0.53; 95% CI: 0.04, 1.02; I2 = 80%). In particular, patients receiving sumatriptan reported higher pain scale scores than patients receiving metoclopramide (SMD = 0.68; 95% CI: 0.31, 1.04; n = 1) or ketorolac (SMD = 1.39; 95% CI: 0.56, 2.21; n = 1).
Overall, studies comparing anti-inflammatory agents (i.e., ketorolac or dexketoprofen) to other agents reported improved pain scale scores among patients receiving anti-inflammatory agents (SMD = -0.38; 95% CI: -0.73, -0.03; I2 = 66%; n = 5). Conclusion: Limited evidence suggests that patients treated with metoclopramide or anti-inflammatory agents experience greater pain reduction compared to patients treated with sumatriptan. A planned network meta-analysis will examine the comparative effectiveness of these parenteral agents for managing pain among patients with acute migraine. Further analysis will also consider the balance between efficacy and adverse events.
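The random-effects pooling with I² reported above is conventionally done with the DerSimonian–Laird estimator; the sketch below shows that mechanism with invented effect sizes and variances, not the review's data.

```python
# Sketch of DerSimonian-Laird random-effects pooling of standardized
# mean differences, with an I^2 heterogeneity estimate. Inputs are
# invented for illustration.
import math

def pool_random_effects(effects, variances, z=1.96):
    """Pooled effect, 95% CI, and I^2 via DerSimonian-Laird."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - z * se, pooled + z * se, i2

# Two hypothetical studies with identical SMDs -> no heterogeneity
pooled, lo, hi, i2 = pool_random_effects([0.5, 0.5], [0.1, 0.1])
print(round(pooled, 2), round(i2, 1))  # 0.5 0.0
```

With heterogeneous inputs, tau² inflates each study's variance, widening the pooled CI; this is why the review's high I² values (e.g., 80%) yield wide intervals such as 0.04 to 1.02.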
Introduction: Although a variety of parenteral agents exist for the treatment of acute migraine, relapse after an emergency department (ED) visit is still a common occurrence. The objective of this systematic review was to update a previous review examining the effectiveness of parenteral agents for the treatment of acute migraine in the ED or equivalent acute care setting; our review focused on studies aiming to reduce relapse after an ED visit. Methods: A comprehensive search of 10 electronic databases and grey literature was conducted to identify comparative studies to supplement the previous systematic review. Two independent reviewers completed study selection, quality assessment, and data extraction. Any discrepancies were resolved by third-party adjudication. Relative risks (RR) with 95% confidence intervals (CIs) were calculated using a random effects model and heterogeneity (I2) was reported. Results: Titles and abstracts of 5039 unique studies were reviewed, of which 51 were included. Sixty-four studies from the original review were included, resulting in a total of 115 included studies. Relapse was reported in 44 (38%) included studies and occurred commonly in patients receiving placebo or no interventions (median = 39%; IQR: 14%, 47%). Overall, no differences in headache relapse were found between patients receiving sumatriptan or placebo (RR = 1.09; 95% CI: 0.55, 2.17; I2 = 93%; n = 8). Conversely, patients receiving neuroleptic agents experienced fewer relapses compared to placebo (RR = 0.27; 95% CI: 0.12, 0.58; I2 = 0%; n = 3); however, patients receiving neuroleptics reported an increase in adverse events (RR = 1.87; 95% CI: 1.17, 3.00; I2 = 0%; n = 3). Compared to placebo, patients receiving dexamethasone were less likely to experience a headache recurrence (RR = 0.71; 95% CI: 0.53, 0.95; I2 = 60%, n = 9); however, no differences were found in reported adverse events (RR = 1.09; 95% CI: 0.81, 1.47; I2 = 0%; n = 3).
Conclusion: Relapse is a common occurrence for patients with migraine headaches. This review found that patients receiving neuroleptics or dexamethasone experienced fewer headache recurrences. Conversely, triptan agents appear to have minimal effect on reducing the risk of headache recurrence following discharge from an acute care setting. The limited available data on adverse events are an important limitation for informing decision-making. Guidelines should be revised to reflect these results.
Whereas genetic susceptibility increases the risk for major depressive disorder (MDD), non-genetic protective factors may mitigate this risk. In a large-scale prospective study of US Army soldiers, we examined whether trait resilience and/or unit cohesion could protect against the onset of MDD following combat deployment, even in soldiers at high polygenic risk.
Data were analyzed from 3079 soldiers of European ancestry assessed before and after their deployment to Afghanistan. Incident MDD was defined as no MDD episode at pre-deployment, followed by a MDD episode following deployment. Polygenic risk scores were constructed from a large-scale genome-wide association study of major depression. We first examined the main effects of the MDD PRS and each protective factor on incident MDD. We then tested the effects of each protective factor on incident MDD across strata of polygenic risk.
Polygenic risk showed a dose–response relationship to depression, such that soldiers at high polygenic risk had greatest odds for incident MDD. Both unit cohesion and trait resilience were prospectively associated with reduced risk for incident MDD. Notably, the protective effect of unit cohesion persisted even in soldiers at highest polygenic risk.
Polygenic risk was associated with new-onset MDD in deployed soldiers. However, unit cohesion – an index of perceived support and morale – was protective against incident MDD even among those at highest genetic risk, and may represent a potent target for promoting resilience in vulnerable soldiers. Findings illustrate the value of combining genomic and environmental data in a prospective design to identify robust protective factors for mental health.
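Polygenic risk scores like those used above are additive: each risk-allele dosage (0/1/2) is weighted by its GWAS effect size and the products are summed, with the resulting scores then binned into risk strata. The sketch below is illustrative only; the SNP weights, dosages, and tertile stratification are hypothetical, not the study's pipeline.

```python
# Sketch of polygenic risk scoring: weight each risk-allele dosage by
# its GWAS effect size, sum, then assign strata by rank tertile.
# All weights and dosages are invented.

def polygenic_risk_score(dosages, weights):
    """PRS = sum of (0/1/2 risk-allele dosage) x (GWAS effect size)."""
    return sum(d * w for d, w in zip(dosages, weights))

def tertile_strata(scores):
    """Label each score low/middle/high by rank tertile."""
    order = sorted(scores)
    cut1 = order[len(scores) // 3]
    cut2 = order[2 * len(scores) // 3]
    return ["low" if s < cut1 else "high" if s >= cut2 else "middle"
            for s in scores]

weights = [0.10, 0.20, 0.30]          # per-SNP log-odds from a GWAS
soldiers = [[0, 1, 0], [2, 1, 1], [1, 0, 0], [2, 2, 2], [0, 0, 1],
            [1, 1, 1]]                # risk-allele dosages per soldier
scores = [polygenic_risk_score(d, weights) for d in soldiers]
print(tertile_strata(scores))
```

Incident-MDD rates can then be compared across strata, and a protective factor such as unit cohesion tested within each stratum, as the study did.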
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased likelihood of SA. SA participants reported greater independence in everyday functioning, higher rates of employment, and better health-related quality of life than non-SA participants. Conclusions: Despite combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH.
SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
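The SA definition above amounts to scoring each older participant against young-adult norms rather than own-age norms. A minimal sketch of that classification logic, assuming hypothetical normative means and SDs and a z ≥ −1 "normal range" cutoff (the study's exact operationalization may differ):

```python
# Sketch of SuperAging classification: compare a global score against
# young-adult (age 25) norms; fall back to own-age norms for CN vs CI.
# Norms, scores, and the z >= -1 cutoff are all hypothetical.

def classify(global_score, young_mean, young_sd, own_age_mean, own_age_sd):
    """Return 'SA', 'CN', or 'CI' from young- and own-age norms."""
    z_young = (global_score - young_mean) / young_sd
    if z_young >= -1.0:
        return "SA"   # within normal range for 25-year-olds
    z_own = (global_score - own_age_mean) / own_age_sd
    return "CN" if z_own >= -1.0 else "CI"

# Young-adult norm: mean 50, SD 10; own-age (50-64) norm: mean 42, SD 10
print(classify(48, 50, 10, 42, 10))  # SA
print(classify(36, 50, 10, 42, 10))  # CN
print(classify(25, 50, 10, 42, 10))  # CI
```

The demographic correction (sex, race/ethnicity, education) would be applied to the score before this comparison, so only the age-referencing step is shown here.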