Several life-threatening diseases of the kidney have their origins in mutational events that occur during embryonic development. In this study, we investigate the role of the Wolffian duct (WD), the earliest embryonic epithelial progenitor of renal tubules, in the etiology of autosomal dominant polycystic kidney disease (ADPKD). ADPKD is associated with a germline mutation of one of the two Pkd1 alleles. For the disease to occur, a second event that disrupts the expression of the other inherited Pkd1 allele must occur. We postulated that this secondary event can occur in the pronephric WD. Using Cre-Lox recombination, mice with WD-specific deletion of one or both Pkd1 alleles were generated. Homozygous Pkd1-targeted deletion in WD-derived tissues resulted in mice with large cystic kidneys and serologic evidence of renal failure. In contrast, heterozygous deletion of Pkd1 in the WD led to kidneys that were phenotypically indistinguishable from control in the early postnatal period. High-throughput sequencing, however, revealed underlying gene and microRNA (miRNA) changes in these heterozygous mutant kidneys that suggest a strong predisposition toward developing ADPKD. Bioinformatic analysis of this data demonstrated an upregulation of several miRNAs that have been previously associated with PKD; pathway analysis further demonstrated that the differentially expressed genes in the heterozygous mutant kidneys were overrepresented in signaling pathways associated with maintenance and function of the renal tubular epithelium. These results suggest that the WD may be an early epithelial target for the genetic or molecular signals that can lead to cyst formation in ADPKD.
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m2) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
Introduction: It is recommended that seniors presenting to the Emergency Department (ED) undergo comprehensive geriatric screening, which is difficult for most EDs. Patient self-assessment using an electronic tablet could be an interesting solution to this issue. However, the acceptability of self-assessment by older ED patients remains unknown, and assessing acceptability is a fundamental step in evaluating new interventions. The main objective of this project was to compare the acceptability of older-patient self-assessment in the ED to that of a standard assessment made by a professional, according to seniors and their caregivers. Methods: Design: This randomized crossover cohort study took place between May and July 2018. Participants: 1) patients aged ≥65 years presenting to the ED; 2) their caregiver, when present. Measurements: Patients performed self-assessment of their frailty and cognitive and functional status using an electronic tablet. Acceptability was measured using the Treatment Acceptability and Preferences (TAP) questionnaires. Analyses: Descriptive analyses were performed for sociodemographic variables. Scores were adjusted for confounding variables using multivariate linear regression. Thematic content analysis of the qualitative data collected in the TAP's open-ended question was performed by two independent analysts. Results: A total of 67 patients were included in this study. Mean age was 75.5 ± 8.0 years, and 55.2% of participants were women. Adjusted mean TAP scores for RA evaluation and patient self-assessment were 2.36 and 2.20, respectively; we found no difference between the two types of evaluation (p = 0.0831). When patients were stratified by age group, those aged 85 and over (n = 11) showed a difference between TAP scores: 2.27 for RA evaluation and 1.72 for patient self-assessment (p = 0.0053). Our qualitative data suggest that this might be attributed to the use of technology rather than to the self-assessment itself.
Data from 9 caregivers showed mean TAP scores of 2.42 for RA evaluation and 2.44 for self-assessment; however, this relatively small sample size prevented us from performing statistical tests. Conclusion: Our results show that older patients find self-assessment in the ED using an electronic tablet just as acceptable as a standard evaluation by a professional.
Polyphenol oxidase (PPO) in red clover (RC) has been shown to reduce both lipolysis and proteolysis in silo and has been implicated (in vitro) in doing so in the rumen. However, all in vivo comparisons to date have compared RC with other forages, typically with lower PPO levels, which introduces confounding factors when attributing the greater protection of dietary nitrogen (N) and C18 polyunsaturated fatty acids (PUFA) in RC silage to PPO. This study compared two RC silages which, when ensiled, had contrasting PPO activities (RC+ and RC−) against a control of perennial ryegrass silage (PRG) to ascertain the effect of PPO activity on dietary N digestibility and PUFA biohydrogenation. Two studies were performed: the first investigated rumen and duodenal flow with six Hereford×Friesian steers prepared with rumen and duodenal cannulae, and the second investigated whole-tract N balance using six Holstein-Friesian non-lactating dairy cows. All diets were offered at a restricted level based on animal live weight, with each experiment consisting of two 3×3 Latin squares using big-bale silages ensiled in 2010 and 2011, respectively. In the first experiment, digesta flow at the duodenum was estimated using a dual-phase marker system with ytterbium acetate and chromium ethylenediaminetetraacetic acid as particulate and liquid phase markers, respectively. Total N intake was higher on the RC silages in both experiments and higher on RC− than RC+. Rumen ammonia-N reflected intake, with ammonia-N per unit of N intake lower on RC+ than RC−. Microbial N duodenal flow was comparable across all silage diets, with non-microbial N higher on RC than on PRG and no difference between RC+ and RC−, even when reported on an N intake basis. C18 PUFA biohydrogenation was lower on RC silage diets than on PRG, but with no difference between RC+ and RC−. The N balance trial showed greater retention of N on RC+ than RC−; however, this response is more likely related to the difference in N intake than to any PPO-driven protection.
The lack of difference between the RC silages, despite contrasting levels of PPO, may reflect the similar level of protein-bound phenol complexing determined in each RC silage. This complexing has previously been associated with PPO's protection mechanism; however, this study has shown that protection is not related to total PPO activity.
Objective: To assess variability in antimicrobial use and associations with infection testing in pediatric ventilator-associated events (VAEs).
Design: Descriptive retrospective cohort with nested case-control study.
Setting: Pediatric intensive care units (PICUs), cardiac intensive care units (CICUs), and neonatal intensive care units (NICUs) in 6 US hospitals.
Patients: Children ≤18 years ventilated for ≥1 calendar day.
Methods: We identified patients with pediatric ventilator-associated conditions (VACs), pediatric VACs with antimicrobial use for ≥4 days (AVACs), and possible ventilator-associated pneumonia (PVAP, defined as pediatric AVAC with a positive respiratory diagnostic test) according to previously proposed criteria.
Results: Among 9,025 ventilated children, we identified 192 VAC cases: 43 in CICUs, 70 in PICUs, and 79 in NICUs. AVAC criteria were met in 79 VAC cases (41%) (58% CICU; 51% PICU; and 23% NICU), and varied by hospital (CICU, 20–67%; PICU, 0–70%; and NICU, 0–43%). Type and duration of AVAC antimicrobials varied by ICU type. AVAC cases in CICUs and PICUs received broad-spectrum antimicrobials more often than those in NICUs. Among AVAC cases, 39% had respiratory infection diagnostic testing performed; PVAP was identified in 15 VAC cases. Also, among AVAC cases, 73% had no associated positive respiratory or nonrespiratory diagnostic test.
Conclusions: Antimicrobial use is common in pediatric VAC, with variability in spectrum and duration of antimicrobials within hospitals and across ICU types, while PVAP is uncommon. Prolonged antimicrobial use despite low rates of PVAP or positive laboratory testing for infection suggests that AVAC may provide a lever for antimicrobial stewardship programs to improve utilization.
We observed that pediatric S. aureus hospitalizations decreased 36%, from 26.3 to 16.8 infections per 1,000 admissions, from 2009 to 2016 among 39 pediatric hospitals, with methicillin-resistant S. aureus (MRSA) decreasing by 52% and methicillin-susceptible S. aureus by 17%. Similar decreases were observed for days of therapy of anti-MRSA antibiotics.
A cluster of Salmonella Paratyphi B variant L(+) tartrate(+) infections with indistinguishable pulsed-field gel electrophoresis patterns was detected in October 2015. Interviews initially identified nut butters, kale, kombucha, chia seeds and nutrition bars as common exposures. Epidemiologic, environmental and traceback investigations were conducted. Thirteen ill people infected with the outbreak strain were identified in 10 states with illness onset during 18 July–22 November 2015. Eight of 10 (80%) ill people reported eating Brand A raw sprouted nut butters. Brand A conducted a voluntary recall. Raw sprouted nut butters are a novel outbreak vehicle, though contaminated raw nuts, nut butters and sprouted seeds have all caused outbreaks previously. Firms producing raw sprouted products, including nut butters, should consider a kill step to reduce the risk of contamination. People at greater risk for foodborne illness may wish to consider avoiding raw products containing raw sprouted ingredients.
We consider a polling system with two queues, exhaustive service, no switchover times, and exponential service times with rate µ in each queue. The waiting cost depends on the position of the queue relative to the server: it costs a customer c per time unit to wait in the busy queue (where the server is) and d per time unit in the idle queue (where there is no server). Customers arrive according to a Poisson process with rate λ. We study the control problem of how arrivals should be routed to the two queues in order to minimize the expected waiting costs and characterize individually and socially optimal routeing policies under three scenarios of available information at decision epochs: no, partial, and complete information. In the complete information case, we develop a new iterative algorithm to determine individually optimal policies (which are symmetric Nash equilibria), and show that such policies can be described by a switching curve. We use Markov decision processes to compute the socially optimal policies. We observe numerically that the socially optimal policy is well approximated by a linear switching curve. We prove that the control policy described by this linear switching curve is indeed optimal for the fluid version of the two-queue polling system.
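The model above is simple enough to check numerically. The sketch below is a minimal discrete-event simulation of the two-queue exhaustive polling system (not the paper's iterative algorithm or MDP computation), together with a hypothetical linear switching-curve router of the kind the abstract says approximates the socially optimal policy. The parameter values and the form of `linear_switch` are illustrative assumptions, not taken from the study.

```python
import random

def simulate(lam, mu, c, d, route, events=200_000, seed=1):
    """Estimate the long-run average waiting cost per unit time of a
    two-queue polling system with exhaustive service, no switchover
    times, Poisson(lam) arrivals and exp(mu) services. Waiting costs
    c per unit time in the busy queue (where the server is) and d in
    the idle queue. `route(n_busy, n_idle)` decides where each
    arrival is sent."""
    rng = random.Random(seed)
    busy, idle = 0, 0            # queue lengths; server sits at `busy`
    t = cost = 0.0
    for _ in range(events):
        rate = lam + (mu if busy > 0 else 0.0)
        dt = rng.expovariate(rate)
        cost += dt * (c * busy + d * idle)
        t += dt
        if rng.random() < lam / rate:        # next event: an arrival
            if route(busy, idle) == "busy":
                busy += 1
            else:
                idle += 1
        else:                                # next event: a completion
            busy -= 1
        if busy == 0 and idle > 0:           # exhaustive service: the
            busy, idle = idle, 0             # server switches instantly
    return cost / t

# Hypothetical linear switching-curve router: send the arrival to the
# idle queue only when it is sufficiently shorter than the busy queue.
def linear_switch(a=0.0, b=1.0):
    return lambda nb, ni: "idle" if ni < a + b * nb else "busy"

# Illustrative stable instance (total load lam/mu = 0.8 < 1).
avg_cost = simulate(lam=0.8, mu=1.0, c=2.0, d=1.0, route=linear_switch())
```

Varying `a` and `b` and re-running gives a crude numerical feel for how the switching curve trades off the cheaper idle queue (when d < c) against delaying service.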
BACKGROUND: Meningiomas are the most common primary benign brain tumors in adults. Given the extended life expectancy of most meningioma patients, consideration of quality of life (QOL) is important when selecting the optimal management strategy. There is currently a dearth of meningioma-specific QOL tools in the literature. OBJECTIVE: In this systematic review, we analyze the prevailing themes and propose steps toward building a meningioma-specific QOL assessment tool. METHODS: A systematic search was conducted, and only original studies of adult patients were considered. QOL tools used in the various studies were analyzed to identify prevailing themes in the qualitative analysis. The quality of the studies was also assessed. RESULTS: Sixteen articles met all inclusion criteria. Fifteen different QOL assessment tools assessed social and physical functioning and psychological and emotional well-being. Patient perceptions and support networks had a major impact on QOL scores. Surgery negatively affected social functioning in younger patients, while radiation therapy had a variable impact. Any intervention appeared to have a greater negative impact on physical functioning compared with observation. CONCLUSION: Younger patients with meningiomas appear to be more vulnerable within the social and physical functioning domains. All of these findings must be interpreted with great caution due to considerable clinical heterogeneity, limited generalizability, and risk of bias. For meningioma patients, the ideal QOL questionnaire would yield outcomes that can be easily measured, presented, and compared across studies. Existing scales can be the foundation upon which a comprehensive, standard, and simple meningioma-specific survey can be prospectively developed and validated.
Background: With advancements in technology, the use of video as a pedagogical method in medical education has gained in popularity and may aid in teaching clinical skills. In the UBC MD program, videos have been used to assist in teaching the neurological exam for several decades, but the currently available videos are outdated and not of contemporary quality. Methods: Drawing upon the cognitive theory of multimedia learning of Mayer and Moreno (2003), which describes methods to maximize learning by minimizing cognitive load, we developed a tool to systematically assess pedagogical videos. We inventoried twelve existing neurology videos and analyzed their use of methods such as weeding (removing extraneous information), signalling (visually highlighting important information), and chunking (grouping similar information together). Results: Generally, older videos had poor audiovisual quality that introduced extraneous load, while more current videos had higher production value, albeit inconsistent with the depth of their content. We therefore produced a new three-part neurological exam video series: we wrote storyboards, filmed with a focus on visually depicting the exam and its findings, and edited to elucidate relevant physiological concepts. Conclusions: The end product has been adopted by the UBC MD program and can be shared with other programs that may wish to adopt it.
Introduction: The purpose of this study was to determine whether the introduction of a pre-arrival and pre-departure Trauma Checklist as a cognitive aid, coupled with an educational session, would improve clinical performance in a simulated environment. The Trauma Checklist was developed in response to a quality assurance review of high-acuity trauma activations. It focuses on pre-arrival preparation and a pre-departure review prior to patient transfer to diagnostic imaging or the operating room. We conducted a pilot randomized controlled trial assessing the impact of the Trauma Checklist on time to critical interventions performed by multidisciplinary teams on a simulated pediatric patient. Methods: Emergency department teams composed of 2 physicians, 2 nurses and 2 confederate actors were enrolled in our study. In the intervention arm, participants watched a 10-minute educational video modelling the use of the Trauma Checklist prior to their simulation scenario and were provided a copy of the checklist. Teams participated in a standardized simulation scenario caring for a severely injured adolescent patient with hemorrhagic shock, respiratory failure and increased intracranial pressure. Our primary outcome of interest was time to initiation of key clinical interventions, including intubation, first blood product administration, massive transfusion protocol activation, initiation of hyperosmolar therapy and others. Secondary outcome measures included a Trauma Task Performance score and checklist completion scores. Results: We enrolled 14 multidisciplinary teams (n=56 participants) into our study. There was a statistically significant decrease in median time to initiation of hyperosmolar therapy by teams in the intervention arm compared to the control arm (581 seconds [509-680] vs. 884 seconds [588-1144], p=0.03). Differences in time to initiation of the other clinical interventions were not statistically significant.
There was a trend toward higher Trauma Task Performance scores in the intervention group; however, this did not reach statistical significance (p=0.09). Pre-arrival and pre-departure checklist scores were higher in the intervention group (9.0 [9.0-10.0] vs. 7.0 [6.0-8.0], p=0.17 and 12.0 [11.5-12.0] vs. 7.5 [6.0-8.5], p=0.01, respectively). Conclusion: Teams using the Trauma Checklist did not show decreased time to initiation of key clinical interventions, except for hyperosmolar therapy. Teams in the intervention arm had statistically significantly higher pre-departure scores, with trends toward higher pre-arrival and Trauma Task Performance scores. As a pilot, our study did not achieve the anticipated sample size and was thus underpowered. The impact of this checklist should be studied outside tertiary trauma centres, particularly among trainees and community emergency providers, to assess benefit and generalizability.
Introduction: The prevalence and incidence of delirium in older patients admitted to acute and long-term care facilities range between 9.6% and 89%, but little is known about incident delirium in the emergency department (ED). Literature on the incidence of delirium in the ED and its potential impacts on hospital length of stay (LOS), functional status and unplanned ED readmissions is scant, and its consequences have yet to be clearly identified in order to orient modern acute medical care. Methods: This study is part of the multicenter prospective cohort INDEED study. Three Canadian EDs completed the two-year prospective study (March-July 2015 and Feb-May 2016). Patients aged ≥65 years, initially free of delirium, with an ED stay of ≥8 hours were followed up to 24 hours after ward admission. Patients were assessed twice daily by research assistants (RAs) during their entire ED stay and for up to 24 hours on the hospital ward. The primary outcome of this study was incident delirium in the ED or within 24 hours of ward admission. Functional and cognitive status were assessed using the validated Older Americans' Resources and Services and Telephone Interview for Cognitive Status-modified tools. The Confusion Assessment Method (CAM) was used to detect incident delirium. ED and hospital administrative data were collected. Inter-observer agreement was assessed among RAs. Results: Incidence of delirium did not differ between sites, between phases, or between time periods from one site to another. All phases confounded, 7% to 11% of patients experienced an ED-related incident delirious episode. Differences were seen in ED LOS between sites in non-delirious patients, but also between some sites for delirious participants (p<0.05). Only one site had a difference in ED LOS between its delirious and non-delirious patients, of 52.1 and 40.1 hours respectively (p<0.05). There was also a difference between sites in the time between ED arrival and the incidence of delirium (p=0.003).
Kappa statistics were computed to measure inter-rater reliability of the CAM. Based on an alpha of 5%, 138 patients would allow 80% power for an estimated overall incidence proportion of 15% with 5% precision. Other predictive delirium variables, such as cognitive status, environmental factors, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between sites and phases. Conclusion: The fact that the incidence of delirium was the same for all sites, despite differences in ED LOS and time periods, suggests that many other modifiable and non-modifiable factors along the LOS influenced the incidence of ED-induced delirium. Emergency physicians should concentrate on improving a senior-friendly environment in the ED.
Introduction: Emergency department (ED) stay and its associated conditions (immobility, inadequate hydration and nutrition, lack of stimulation) favor the development of delirium in vulnerable elderly patients. Poorly controlled pain, and paradoxically opioid pain treatment, has also been identified as a trigger for delirium. The aim of this study was to assess the relationship between pain, opioid treatment, and delirium in elderly ED patients. Methods: A multicenter prospective cohort study was conducted in four hospitals across the province of Québec (Canada). Patients aged ≥65 years, waiting for care-unit admission between February and May 2016, who were non-delirious upon ED arrival, independent or semi-independent in their activities of daily living, and had an ED stay of at least 8 hours were included. Delirium assessments were made twice a day for the entire ED stay and for the first 24 hours in the hospital ward using the Confusion Assessment Method (CAM). Pain intensity was evaluated using a visual analog scale (0-100) during the initial interview, and all opioid treatments were documented. Results: A total of 338 patients were included; 51% were female, and mean age was 77 years (SD: 8). Forty-one patients (12%) experienced delirium during their hospital stay, occurring within a mean delay of 47 hours (SD: 19) after ED admission. Among patients with pain intensity ≥60, 22% experienced delirium compared to 10.7% of patients with pain <60 (p<0.05). No significant association was found between opioid consumption and delirium (p=0.22). Logistic regression controlling for age, sex, ED stay duration, and opioid intake showed that patients with pain intensity ≥60 were 2.6 times (95% CI: 1.2-5.9) more likely to develop delirium than patients with pain <60. Conclusion: Severe pain, not opioids, is associated with the development of delirium during ED stay. Adequate pain control during the hospital stay may contribute to a decrease in delirium episodes.
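As an arithmetic aside (not part of the study), the unadjusted odds ratio implied by the reported incidences of delirium (22% vs. 10.7%) can be recovered directly from the proportions; it need not match the adjusted 2.6 estimate, which controls for age, sex, ED stay duration and opioid intake.

```python
# Unadjusted odds ratio reconstructed from the reported incidences:
# 22% delirium with pain intensity >=60 vs. 10.7% with pain <60.
p_high, p_low = 0.22, 0.107

def odds(p):
    """Convert a probability to odds, p / (1 - p)."""
    return p / (1 - p)

or_unadjusted = odds(p_high) / odds(p_low)   # ~2.35, close to the
                                             # adjusted estimate of 2.6
```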
Introduction: It is documented that physicians and nurses fail to detect delirium in more than half of cases across various clinical settings, which could have serious consequences for seniors and for our health care system. The present study aimed to describe the rate of documentation of incident delirium by health professionals (HP) in 5 Canadian emergency departments (ED). Methods: This study is part of the multicenter prospective cohort INDEED study. Patients aged ≥65 years, initially free of delirium, with an ED stay of ≥8 hours were followed up to 24 hours after ward admission. Delirium status was assessed twice daily by trained research assistants (RAs) using the Confusion Assessment Method (CAM). Patient charts were reviewed to assess detection of delirium by HPs, who had no specific routine for detecting delirious ED patients. Inter-observer agreement was assessed among RAs. Detection by RAs and HPs was compared using univariate analyses. Results: Among the 652 included patients, 66 developed delirium as evaluated with the CAM by the RAs. Of those 66 patients, only 10 cases of delirium (15.2%) were documented in the patients' medical files by HPs; 54 patients (81.8%) with a CAM positive for delirium were not recorded by HPs, and 2 had incomplete charts. The delirium index was significantly higher in the HP-reported group than in the HP-unreported group, 7.1 versus 4.5 respectively (p<0.05). Other predictive delirium variables, such as cognitive status, functional status, comorbidities, physiological status, and ED and hospital length of stay, were similar between groups. Conclusion: Health professionals missed 81.8% of potentially delirious ED patients in comparison with routine structured screening for delirium, although they did identify patients with greater symptom severity. Our study points out the need to better identify elders at risk of developing delirium and the need for fast and reliable tools to improve screening for this disorder.
A substantial proportion of persons with mental disorders seek treatment from complementary and alternative medicine (CAM) professionals. However, data on how CAM contacts vary across countries, mental disorders and their severity, and health care settings is largely lacking. The aim was therefore to investigate the prevalence of contacts with CAM providers in a large cross-national sample of persons with 12-month mental disorders.
In the World Mental Health Surveys, the Composite International Diagnostic Interview was administered to determine the presence of past 12 month mental disorders in 138 801 participants aged 18–100 derived from representative general population samples. Participants were recruited between 2001 and 2012. Rates of self-reported CAM contacts for each of the 28 surveys across 25 countries and 12 mental disorder groups were calculated for all persons with past 12-month mental disorders. Mental disorders were grouped into mood disorders, anxiety disorders or behavioural disorders, and further divided by severity levels. Satisfaction with conventional care was also compared with CAM contact satisfaction.
An estimated 3.6% (standard error 0.2%) of persons with a past 12-month mental disorder reported a CAM contact, which was two times higher in high-income countries (4.6%; standard error 0.3%) than in low- and middle-income countries (2.3%; standard error 0.2%). CAM contacts were largely comparable for different disorder types, but particularly high in persons receiving conventional care (8.6–17.8%). CAM contacts increased with increasing mental disorder severity. Among persons receiving specialist mental health care, CAM contacts were reported by 14.0% for severe mood disorders, 16.2% for severe anxiety disorders and 22.5% for severe behavioural disorders. Satisfaction with care was comparable with respect to CAM contacts (78.3%) and conventional care (75.6%) in persons that received both.
CAM contacts are common in persons with severe mental disorders, in high-income countries, and in persons receiving conventional care. Our findings support the notion of CAM as largely complementary, but contrast with suggestions that it concerns only persons with mild, transient complaints. There was no indication that persons were less satisfied with CAM visits than with conventional care. We encourage health care professionals in conventional settings to openly discuss the care patients are receiving, whether conventional or not, and their reasons for doing so.
Acupuncture has become increasingly popular in veterinary medicine. Within the scientific literature there is debate regarding its efficacy. Due to the complex nature of acupuncture, a scoping review was undertaken to identify and categorize the evidence related to acupuncture in companion animals (dogs, cats, and horses). Our search identified 843 relevant citations. Narrative reviews represented the largest proportion of studies (43%). We identified 179 experimental studies and 175 case reports/case series that examined the efficacy of acupuncture. Dogs were the most common subjects in the experimental trials. The most common indication for use was musculoskeletal conditions, and the most commonly evaluated outcome categories among experimental trials were pain and cardiovascular parameters. The limited number of controlled trials and the breadth of indications for use, outcome categories, and types of acupuncture evaluated present challenges for future systematic reviews or meta-analyses. There is a need for high-quality randomized controlled trials addressing the most common clinical uses of acupuncture, and using consistent and clinically relevant outcomes, to inform conclusions regarding the efficacy of acupuncture in companion animals.
The treatment gap between the number of people with mental disorders and the number treated represents a major public health challenge. We examine this gap by socio-economic status (SES; indicated by family income and respondent education) and service sector in a cross-national analysis of community epidemiological survey data.
Data come from 16 753 respondents with 12-month DSM-IV disorders from community surveys in 25 countries in the WHO World Mental Health Survey Initiative. DSM-IV anxiety, mood, or substance disorders and treatment of these disorders were assessed with the WHO Composite International Diagnostic Interview (CIDI).
Only 13.7% of 12-month DSM-IV/CIDI cases in lower-middle-income countries, 22.0% in upper-middle-income countries, and 36.8% in high-income countries received treatment. Highest-SES respondents were somewhat more likely to receive treatment, but this was true mostly for specialty mental health treatment, where the association was positive with education (highest treatment among respondents with the highest education and a weak association of education with treatment among other respondents) but non-monotonic with income (somewhat lower treatment rates among middle-income respondents and equivalent among those with high and low incomes).
The modest, but nonetheless stronger, association of education than of income with treatment raises questions about a financial-barriers interpretation of the inverse association of SES with treatment, although future within-country analyses that consider contextual factors might document other important specifications. While beyond the scope of this report, such an expanded analysis could have important implications for designing interventions aimed at increasing mental disorder treatment among socio-economically disadvantaged people.
Although childhood adversities are known to predict increased risk of post-traumatic stress disorder (PTSD) after traumatic experiences, it is unclear whether this association varies by childhood adversity or traumatic experience types or by age.
To examine variation in associations of childhood adversities with PTSD according to childhood adversity types, traumatic experience types and life-course stage.
Epidemiological data were analysed from the World Mental Health Surveys (n = 27 017).
Four childhood adversities (physical and sexual abuse, neglect, parent psychopathology) were associated with similarly increased odds of PTSD following traumatic experiences (odds ratio (OR) = 1.8), whereas the other eight childhood adversities assessed did not predict PTSD. Childhood adversity–PTSD associations did not vary across traumatic experience types, but were stronger in childhood-adolescence and early-middle adulthood than in later adulthood.
Childhood adversities are differentially associated with PTSD, with the strongest associations in childhood-adolescence and early-middle adulthood. Consistency of associations across traumatic experience types suggests that childhood adversities are associated with generalised vulnerability to PTSD following traumatic experiences.
Research on the course of post-traumatic stress disorder (PTSD) finds that a substantial proportion of cases remit within 6 months, a majority within 2 years, and a substantial minority persist for many years. Results are inconsistent regarding pre-trauma predictors.
The WHO World Mental Health surveys assessed lifetime DSM-IV PTSD presence-course after one randomly-selected trauma, allowing retrospective estimates of PTSD duration. Prior traumas, childhood adversities (CAs), and other lifetime DSM-IV mental disorders were examined as predictors using discrete-time person-month survival analysis among the 1575 respondents with lifetime PTSD.
Of cases, 20%, 27%, and 50% recovered within 3, 6, and 24 months, respectively, and 77% within 10 years (the longest duration allowing stable estimates). Time-related recall bias was found largely for recoveries after 24 months. Recovery was only weakly related to most trauma types; exceptions were very low odds [odds ratio (OR) 0.2–0.3] of early recovery (within 24 months) associated with purposefully injuring, torturing, or killing someone and with witnessing atrocities, and very low odds of later recovery (25+ months) associated with being kidnapped. The significant ORs for prior traumas, CAs, and mental disorders were generally inconsistent between the early- and later-recovery models. Cross-validated versions of the final models nonetheless discriminated significantly between the 50% of respondents with the highest and lowest predicted probabilities of both early recovery (66–55% v. 43%) and later recovery (75–68% v. 39%).
We found PTSD recovery trajectories similar to those in previous studies. The weak associations of pre-trauma factors with recovery, also consistent with previous studies, presumably are due to stronger influences of post-trauma factors.
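The discrete-time person-month survival approach used in the analysis above can be illustrated as follows. This is a minimal sketch with simulated data, not the authors' code: the function names and the toy cohort are hypothetical, and a full analysis would regress the monthly event indicator on predictors (e.g. via logistic regression) rather than simply tabulating hazards.

```python
# Minimal sketch of a discrete-time (person-month) survival setup:
# each respondent's PTSD episode is expanded into one record per month
# at risk, with an event indicator marking the month of recovery.

def expand_person_months(duration_months, recovered):
    """Return (month, event) records for one respondent."""
    rows = []
    for month in range(1, duration_months + 1):
        event = 1 if (recovered and month == duration_months) else 0
        rows.append({"month": month, "event": event})
    return rows

def discrete_hazard(people):
    """Estimate the monthly hazard of recovery:
    recoveries in month m / respondents still at risk in month m."""
    at_risk, events = {}, {}
    for duration, recovered in people:
        for row in expand_person_months(duration, recovered):
            m = row["month"]
            at_risk[m] = at_risk.get(m, 0) + 1
            events[m] = events.get(m, 0) + row["event"]
    return {m: events.get(m, 0) / n for m, n in sorted(at_risk.items())}

# Hypothetical toy cohort: (months observed, recovered by end of observation).
# The last tuple is a censored case that never recovers during follow-up.
cohort = [(3, True), (6, True), (24, True), (24, False), (12, True)]
hazard = discrete_hazard(cohort)
```

In the surveys' actual models, each person-month record would also carry the predictors of interest (prior traumas, childhood adversities, prior mental disorders), turning the analysis into a standard logistic regression on the person-period data set.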
Sexual assault is a global concern, with post-traumatic stress disorder (PTSD) one of its common sequelae. Early intervention can help prevent PTSD, making identification of those at high risk for the disorder a priority. Lack of representative sampling of both sexual assault survivors and sexual assaults in prior studies might have reduced the ability to develop accurate prediction models for early identification of high-risk sexual assault survivors.
Data come from 12 face-to-face, cross-sectional surveys of community-dwelling adults conducted in 11 countries. Analysis was based on the data from the 411 women from these surveys for whom sexual assault was the randomly selected lifetime traumatic event (TE). Seven classes of predictors were assessed: socio-demographics, characteristics of the assault, the respondent's retrospective perception that she could have prevented the assault, other prior lifetime TEs, exposure to childhood family adversities, and prior mental disorders.
Prevalence of Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) PTSD associated with randomly selected sexual assaults was 20.2%. PTSD was more common for repeated than single-occurrence victimization and positively associated with prior TEs and childhood adversities. Respondent's perception that she could have prevented the assault interacted with history of mental disorder such that it reduced odds of PTSD, but only among women without prior disorders (odds ratio 0.2, 95% confidence interval 0.1–0.9). The final model estimated that 40.3% of women with PTSD would be found among the 10% with the highest predicted risk.
Whether counterfactual preventability cognitions are adaptive may depend on mental health history. Predictive modelling may be useful in targeting high-risk women for preventive interventions.
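The headline figure above (40.3% of PTSD cases concentrated among the 10% of women with the highest predicted risk) is an instance of a risk-concentration, or sensitivity-at-top-decile, calculation. The following is a minimal sketch with hypothetical data and function names, not the authors' modelling code:

```python
def cases_in_top_decile(predicted_risk, outcomes):
    """Share of all positive outcomes (e.g. PTSD cases) found among
    the 10% of individuals with the highest predicted risk."""
    order = sorted(range(len(predicted_risk)),
                   key=lambda i: predicted_risk[i], reverse=True)
    top_n = max(1, len(order) // 10)          # size of the top decile
    top_cases = sum(outcomes[i] for i in order[:top_n])
    return top_cases / sum(outcomes)

# Toy example: 20 women, 5 PTSD cases; 2 of the cases fall in the
# top-2 (10%) predicted-risk slots, so the metric is 2/5 = 0.4.
risk = [0.9, 0.8] + [0.1] * 18
ptsd = [1, 1] + [1, 1, 1] + [0] * 15
share = cases_in_top_decile(risk, ptsd)
```

A high value of this metric is what makes targeted preventive intervention practical: screening only the top-risk decile would still reach a large fraction of eventual cases.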