Research on the course of post-traumatic stress disorder (PTSD) finds that a substantial proportion of cases remit within 6 months, a majority remit within 2 years, and a substantial minority persist for many years. Results concerning pre-trauma predictors are inconsistent.
The WHO World Mental Health surveys assessed the lifetime presence and course of DSM-IV PTSD after one randomly selected trauma, allowing retrospective estimates of PTSD duration. Prior traumas, childhood adversities (CAs), and other lifetime DSM-IV mental disorders were examined as predictors of recovery using discrete-time person-month survival analysis among the 1575 respondents with lifetime PTSD.
Of cases, 20%, 27%, and 50% recovered within 3, 6, and 24 months, respectively, and 77% within 10 years (the longest duration allowing stable estimates). Time-related recall bias was found largely for recoveries after 24 months. Recovery was weakly related to trauma type, with exceptions: odds of early recovery (within 24 months) were very low [odds ratio (OR) 0.2–0.3] after purposefully injuring/torturing/killing or witnessing atrocities, and odds of later recovery (25+ months) were very low after being kidnapped. The significant ORs for prior traumas, CAs, and mental disorders were generally inconsistent between the early- and later-recovery models. Cross-validated versions of the final models nonetheless discriminated significantly between the 50% of respondents with the highest and lowest predicted probabilities of both early recovery (66–55% v. 43%) and later recovery (75–68% v. 39%).
We found PTSD recovery trajectories similar to those in previous studies. The weak associations of pre-trauma factors with recovery, also consistent with previous studies, presumably are due to stronger influences of post-trauma factors.
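The discrete-time person-month survival approach described above can be illustrated with a minimal life-table sketch. This is not the study's code, and all data below are invented for illustration:

```python
# Discrete-time (life-table) estimate of cumulative recovery probability,
# the quantity underlying person-month survival analyses of PTSD duration.
# The toy cohort below is invented for illustration.

def cumulative_recovery(durations, censored, horizon):
    """durations[i]: month of recovery, or month of last observation if
    censored[i] is True. Returns the cumulative recovery probability by
    each month 1..horizon."""
    surviving = 1.0  # probability of still meeting PTSD criteria
    curve = []
    for t in range(1, horizon + 1):
        at_risk = sum(1 for d in durations if d >= t)
        events = sum(1 for d, c in zip(durations, censored)
                     if d == t and not c)
        hazard = events / at_risk if at_risk else 0.0
        surviving *= 1.0 - hazard
        curve.append(1.0 - surviving)
    return curve

# Six toy respondents: months to recovery (True = censored, still ill).
curve = cumulative_recovery([2, 3, 5, 8, 10, 12],
                            [False, False, False, True, False, True], 12)
print(round(curve[2], 3))  # cumulative recovery by month 3: 0.333
```

In the study itself, the per-month hazard was modeled as a function of predictors (prior traumas, CAs, mental disorders) rather than estimated marginally as in this sketch.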
The stress sensitization theory hypothesizes that individuals exposed to childhood adversity will be more vulnerable to mental disorders from proximal stressors. We aimed to test this theory with respect to risk of 30-day major depressive episode (MDE) and generalized anxiety disorder (GAD) among new US Army soldiers.
The sample consisted of 30 436 new soldier recruits in the Army Study to Assess Risk and Resilience (Army STARRS). Generalized linear models were constructed, and additive interactions between childhood maltreatment profiles and level of 12-month stressful experiences on the risk of 30-day MDE and GAD were analyzed.
Stress sensitization was observed in models of past 30-day MDE (χ²₈ = 17.6, p = 0.025) and GAD (χ²₈ = 26.8, p = 0.001). This sensitization only occurred at high (3+) levels of reported 12-month stressful experiences. In pairwise comparisons for the risk of 30-day MDE, the risk difference between 3+ stressful experiences and no stressful experiences was significantly greater for all maltreatment profiles relative to No Maltreatment. Similar results were found with the risk for 30-day GAD with the exception of the risk difference for Episodic Emotional and Sexual Abuse, which did not differ statistically from No Maltreatment.
New soldiers exposed to childhood maltreatment are at increased risk of 30-day MDE or GAD following recent stressful experiences. Particularly in the military, with its abundance of unique stressors, efforts to identify this population and improve stress management may be useful in reducing the risk of mental disorders.
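The stress-sensitization test described above operates on the additive (risk-difference) scale. A minimal sketch of the interaction contrast it targets, using invented risks rather than Army STARRS estimates:

```python
# Additive interaction contrast: how much the joint risk exceeds what the
# two exposures would contribute on the absolute-risk scale. The risks
# below are invented 30-day MDE probabilities, not figures from the study.

def additive_interaction(r00, r01, r10, r11):
    """r00: neither exposure; r01: high stress only; r10: maltreatment
    only; r11: both. A positive value indicates stress sensitization."""
    return r11 - r10 - r01 + r00

# Toy risks: maltreated soldiers show a disproportionate jump in risk
# under high (3+) 12-month stress, i.e. a positive interaction contrast.
ic = additive_interaction(r00=0.02, r01=0.05, r10=0.04, r11=0.12)
print(round(ic, 3))  # 0.05
```

If the two exposures combined purely additively, the contrast would be zero; the study's generalized linear models test this contrast with covariate adjustment rather than from raw risks as here.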
The U.S. Army uses universal preventive interventions for several negative outcomes (e.g. suicide, violence, sexual assault) with especially high risks in the early years of service. More intensive interventions exist, but would be cost-effective only if targeted at high-risk soldiers. We report results of efforts to develop models for such targeting from self-report surveys administered at the beginning of Army service.
21 832 new soldiers completed a self-administered questionnaire (SAQ) in 2011–2012 and consented to link administrative data to SAQ responses. Penalized regression models were developed for 12 administratively-recorded outcomes occurring by December 2013: suicide attempt, mental hospitalization, positive drug test, traumatic brain injury (TBI), other severe injury, several types of violence perpetration and victimization, demotion, and attrition.
The best-performing models were for TBI (AUC = 0.80), major physical violence perpetration (AUC = 0.78), sexual assault perpetration (AUC = 0.78), and suicide attempt (AUC = 0.74). Although predicted risk scores were significantly correlated across outcomes, prediction was not improved by including risk scores for other outcomes in models. Of particular note: 40.5% of suicide attempts occurred among the 10% of new soldiers with highest predicted risk, 57.2% of male sexual assault perpetrations among the 15% with highest predicted risk, and 35.5% of female sexual assault victimizations among the 10% with highest predicted risk.
Data collected at the beginning of service in self-report surveys could be used to develop risk models that define small proportions of new soldiers accounting for high proportions of negative outcomes over the first few years of service.
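Statistics such as "40.5% of suicide attempts occurred among the 10% of new soldiers with highest predicted risk" are concentration-of-risk summaries. A sketch of the computation with simulated scores and outcomes (not study data):

```python
# Concentration of risk: the share of all observed events that occur among
# the fraction of people with the highest predicted risk. Scores and
# outcomes below are simulated for illustration.

def share_of_events_in_top(scores, events, top_frac):
    """scores: predicted risk per person; events: 1 if the outcome
    occurred, else 0. Returns the proportion of all events falling in
    the top `top_frac` of predicted risk."""
    ranked = sorted(zip(scores, events), key=lambda p: p[0], reverse=True)
    k = max(1, int(round(top_frac * len(ranked))))
    top_events = sum(e for _, e in ranked[:k])
    total = sum(events)
    return top_events / total if total else 0.0

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05, 0.01]
events = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(share_of_events_in_top(scores, events, 0.2), 2))  # 0.67
```

A model with no discrimination would place roughly `top_frac` of events in the top `top_frac` of scores, so values well above the chosen fraction (as in the abstract) indicate useful targeting.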
Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful.
We review evidence suggesting that prediction equations based on symptoms and other easily-assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalised) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments.
Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention v. control) or differential treatment outcomes (i.e., intervention A v. intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalised treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials.
Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists.
Although interventions exist to reduce violent crime, optimal implementation requires accurate targeting. We report the results of an attempt to develop an actuarial model using machine learning methods to predict future violent crimes among US Army soldiers.
A consolidated administrative database for all 975 057 soldiers in the US Army in 2004–2009 was created in the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Of these soldiers, 5771 committed a first founded major physical violent crime (murder-manslaughter, kidnapping, aggravated arson, aggravated assault, robbery) over that time period. Temporally prior administrative records measuring socio-demographic, Army career, criminal justice, medical/pharmacy, and contextual variables were used to build an actuarial model for these crimes separately among men and women using machine learning methods (cross-validated stepwise regression, random forests, penalized regressions). The model was then validated in an independent 2011–2013 sample.
Key predictors were indicators of disadvantaged social/socioeconomic status, early career stage, prior crime, and mental disorder treatment. Area under the receiver-operating characteristic curve (AUC) was 0.80–0.82 in 2004–2009 and 0.77 in the 2011–2013 validation sample. Of all administratively recorded crimes, 36.2% (men) and 33.1% (women) were committed by the 5% of soldiers with the highest predicted risk in 2004–2009, and an even higher proportion (50.5%) in the 2011–2013 validation sample.
Although these results suggest that the models could be used to target soldiers at high risk of violent crime perpetration for preventive interventions, final implementation decisions would require further validation and weighing of predicted effectiveness against intervention costs and competing risks.
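The AUC values reported above measure how well a model ranks eventual offenders above non-offenders. A minimal rank-based (Mann–Whitney) sketch with toy scores, not the study's model or data:

```python
# AUC by the rank (Mann-Whitney) formula: the probability that a randomly
# chosen positive case outranks a randomly chosen negative case, with ties
# counting one half. Toy values below are invented for illustration.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # 0.889
```

An AUC of 0.5 corresponds to chance-level ranking, so the 0.77–0.82 range reported above indicates substantial, though imperfect, discrimination.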
Civilian suicide rates vary by occupation in ways related to occupational stress exposure. Comparable military research finds suicide rates elevated in combat arms occupations. However, no research has evaluated variation in this pattern by deployment history, the indicator of occupational stress widely considered responsible for the recent rise in the military suicide rate.
The joint associations of Army occupation and deployment history in predicting suicides were analysed in an administrative dataset for the 729 337 male enlisted Regular Army soldiers in the US Army between 2004 and 2009.
There were 496 suicides over the study period (22.4/100 000 person-years). Only two occupational categories, both in combat arms, had significantly elevated suicide rates: infantrymen (37.2/100 000 person-years) and combat engineers (38.2/100 000 person-years). However, the suicide rates in these two categories were significantly lower when currently deployed (30.6/100 000 person-years) than when never deployed or previously deployed (41.2–39.1/100 000 person-years), whereas the suicide rate of other soldiers was significantly higher when currently or previously deployed (20.2–22.4/100 000 person-years) than when never deployed (14.5/100 000 person-years). As a result, the adjusted suicide rate of infantrymen and combat engineers was most elevated when never deployed [odds ratio (OR) 2.9, 95% confidence interval (CI) 2.1–4.1], less elevated when previously deployed (OR 1.6, 95% CI 1.1–2.1), and not elevated when currently deployed (OR 1.2, 95% CI 0.8–1.8). Adjustment for a differential ‘healthy warrior effect’ cannot explain this variation in the relative suicide rates of infantrymen and combat engineers by deployment status.
Efforts are needed to elucidate the causal mechanisms underlying this interaction to guide preventive interventions for soldiers at high suicide risk.
The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) has found that the proportional elevation in the US Army enlisted soldier suicide rate during deployment (compared with the never-deployed or previously deployed) is significantly higher among women than men, raising the possibility of gender differences in the adverse psychological effects of deployment.
Person-month survival models based on a consolidated administrative database for active duty enlisted Regular Army soldiers in 2004–2009 (n = 975 057) were used to characterize the gender × deployment interaction predicting suicide. Four explanatory hypotheses were explored involving the proportion of females in each soldier's occupation, the proportion of same-gender soldiers in each soldier's unit, whether the soldier reported sexual assault victimization in the previous 12 months, and the soldier's pre-deployment history of treated mental/behavioral disorders.
The suicide rate of currently deployed women (14.0/100 000 person-years) was 3.1–3.5 times the rates of other (i.e. never-deployed/previously deployed) women. The suicide rate of currently deployed men (22.6/100 000 person-years) was 0.9–1.2 times the rates of other men. The adjusted (for time trends, sociodemographics, and Army career variables) female:male odds ratio comparing the suicide rates of currently deployed v. other soldiers was 2.8 (95% confidence interval 1.1–6.8); it became 2.4 after excluding soldiers with Direct Combat Arms occupations, and remained elevated (in the range 1.9–2.8) after adjusting for the hypothesized explanatory variables.
These results are valuable in excluding otherwise plausible hypotheses for the elevated suicide rate of deployed women and point to the importance of expanding future research on the psychological challenges of deployment for women.