The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, with its impact on our way of life, is affecting our experiences and mental health. Notably, individuals with mental disorders have been reported to have a higher risk of contracting SARS-CoV-2. Personality traits could represent an important determinant of preventative health behaviour and, therefore, the risk of contracting the virus.
We examined overlapping genetic underpinnings between major psychiatric disorders, personality traits and susceptibility to SARS-CoV-2 infection.
Linkage disequilibrium score regression was used to explore the genetic correlations of coronavirus disease 2019 (COVID-19) susceptibility with psychiatric disorders and personality traits, based on data from the largest available respective genome-wide association studies (GWAS). In two cohorts (the PsyCourse (n = 1346) and the HeiDE (n = 3266) study), polygenic risk scores were used to test whether genetic associations between psychiatric disorders, personality traits and COVID-19 susceptibility exist in individual-level data.
We observed no significant genetic correlations of COVID-19 susceptibility with psychiatric disorders. For personality traits, there was a significant genetic correlation for COVID-19 susceptibility with extraversion (P = 1.47 × 10⁻⁵; genetic correlation 0.284). Yet, this was not reflected in individual-level data from the PsyCourse and HeiDE studies.
We identified no significant correlation between genetic risk factors for severe psychiatric disorders and genetic risk for COVID-19 susceptibility. Among the personality traits, extraversion showed evidence for a positive genetic association with COVID-19 susceptibility in one setting but not in another. Overall, these findings highlight a complex contribution of genetic and non-genetic components to the interaction between COVID-19 susceptibility and personality traits or mental disorders.
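The polygenic risk score approach used in the individual-level analyses above reduces, in essence, to a weighted sum of risk-allele dosages. A minimal sketch, with hypothetical SNP names, effect sizes and dosages rather than values from the PsyCourse or HeiDE data:

```python
# Minimal polygenic risk score (PRS) sketch: the sum of an individual's
# risk-allele dosages weighted by GWAS effect sizes. The SNP names,
# effect sizes and dosages below are hypothetical placeholders.

def polygenic_risk_score(dosages, effect_sizes):
    """Weighted sum over SNPs; dosage is 0, 1 or 2 copies of the risk allele."""
    return sum(dosages[snp] * beta for snp, beta in effect_sizes.items())

# Hypothetical GWAS effect sizes (log odds ratios) for three SNPs
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# One individual's allele dosages for the same SNPs
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

score = polygenic_risk_score(dosages, effect_sizes)
print(round(score, 3))  # 2*0.12 + 1*(-0.05) + 0*0.08 = 0.19
```

In practice such scores are computed over many thousands of SNPs with external GWAS summary statistics; the logic is the same weighted sum.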
Outcome of schizophrenia in later life can be evaluated from different perspectives. The recovery concept has advanced this evaluation, discerning clinical-based and patient-based definitions. Longitudinal data on measures of recovery in older individuals with schizophrenia are scant. This study evaluated the five-year outcome of clinical recovery and subjective well-being in a sample of 73 older Dutch schizophrenia patients (mean age 65.9 years; SD 5.4), employing a catchment-area based design that included both community-living and institutionalized patients regardless of the age of onset of their disorder. At baseline (T1), 5.5% of participants qualified for clinical recovery, while at five-year follow-up (T2), this rate was 12.3% (p = 0.18; exact McNemar’s test). Subjective well-being was reported by 20.5% of participants at T1 and by 27.4% at T2 (p = 0.27; exact McNemar’s test). Concurrence of clinical recovery and subjective well-being was exceptional, being present in only one participant (1.4%) at T1 and in two participants (2.7%) at T2. Clinical recovery and subjective well-being were correlated neither at T1 (p = 0.82; phi = 0.027) nor at T2 (p = 0.71; phi = −0.044). There was no significant correlation over time between clinical recovery at T1 and subjective well-being at T2 (p = 0.30; phi = 0.122), nor between subjective well-being at T1 and clinical recovery at T2 (p = 0.45; phi = −0.088). These results indicate that while reaching clinical recovery is relatively rare in older individuals with schizophrenia, it is not a prerequisite for experiencing subjective well-being.
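The exact McNemar comparisons above depend only on the discordant pairs (participants whose status differed between T1 and T2): the test is an exact binomial test with p = 0.5 on those counts. A minimal stdlib sketch, using hypothetical discordant counts rather than the study's data:

```python
from math import comb

def exact_mcnemar_p(b, c):
    """Two-sided exact McNemar test on discordant pair counts
    b (positive at T1 only) and c (positive at T2 only):
    an exact binomial test with success probability 0.5."""
    n = b + c
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    observed = pmf[b]
    # Sum probabilities of all outcomes as likely or less likely than observed
    return min(1.0, sum(p for p in pmf if p <= observed + 1e-12))

# Hypothetical discordant counts: 3 changed one way, 8 the other
p = exact_mcnemar_p(3, 8)
print(round(p, 3))  # 0.227
```

Note the concordant pairs (unchanged participants) do not enter the calculation at all, which is why the test is well suited to before/after designs like the one above.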
Community studies have found a relatively high prevalence of hallucinations, which are associated with a range of (psychotic and non-psychotic) mental disorders, as well as with suicidal ideation and behaviour. The literature on hallucinations in the general population has largely focused on adolescents and young adults.
We aimed to explore the prevalence and psychopathologic significance of hallucinations across the adult lifespan.
Using the 1993, 2000, 2007 and 2014 cross-sectional Adult Psychiatric Morbidity Survey series (N = 33 637), we calculated the prevalence of past-year hallucinations in the general population aged 16 to ≥90 years. We used logistic regression to examine the relationship between hallucinations and a range of mental disorders, suicidal ideation and suicide attempts.
The prevalence of past-year hallucinations varied across the adult lifespan, from a high of 7% in individuals aged 16–19 years to a low of 3% in individuals aged ≥70 years. In all age groups, hallucinations were associated with increased risk for mental disorders, suicidal ideation and suicide attempts, but there was also evidence of significant age-related variation. In particular, hallucinations in older adults were less likely to be associated with a co-occurring mental disorder, suicidal ideation or suicide attempt compared with early adulthood and middle age.
Our findings highlight important life-course developmental features of hallucinations from early adulthood to old age.
Inflammatory diets are increasingly recognised as a modifiable determinant of mental illness. However, there is a dearth of studies in early life and across the full mental well-being spectrum (mental illness to positive well-being) at the population level. This is a critical gap given that inflammatory diet patterns and mental well-being trajectories are typically established by adolescence. We examined the associations of inflammatory diet scores with mental well-being in 11–12-year-olds and mid-life adults. Throughout Australia, 1759 11–12-year-olds (49 % girls) and 1812 parents (88 % mothers) contributed cross-sectional population-based data. Alternate inflammatory diet scores were calculated from a twenty-six-item FFQ, based on the prior literature and prediction of inflammatory markers. Participants reported negatively and positively framed mental well-being via psychosocial health, quality of life and life satisfaction surveys. We used causal inference modelling techniques via generalised linear regression models (mean differences and risk ratios (RR)) to examine how inflammatory diets might influence mental well-being. In children and adults, respectively, a 1 sd higher literature-derived inflammatory diet score conferred between a 44 % (RR 95 % CI 1·2, 1·8) to 57 % (RR 95 % CI 1·3, 2·0) and a 54 % (RR 95 % CI 1·2, 2·0) to 86 % (RR 95 % CI 1·4, 2·4) higher risk of being in the worst mental well-being category (i.e. <16th percentile) across outcome measures. Results for inflammation-derived scores were similar. BMI mediated effects (21–39 %) in adults. Inflammatory diet patterns were cross-sectionally associated with mental well-being at age 11–12 years, with similar effects observed in mid-adulthood. Reducing inflammatory dietary components in childhood could improve population-level mental well-being across the life course.
Impulsivity is a central symptom of borderline personality disorder (BPD) and its neural basis may be instantiated in a frontoparietal network involved in response inhibition. However, research has yet to determine whether neural activation differences in BPD associated with response inhibition are attributed to attentional saliency, which is subserved by a partially overlapping network of brain regions.
Patients with BPD (n = 45) and healthy controls (HCs; n = 29) underwent functional magnetic resonance imaging while completing a novel go/no-go task with infrequent odd-ball trials to control for attentional saliency. Contrasts reflecting a combination of response inhibition and attentional saliency (no-go > go), saliency processing alone (oddball > go), and response inhibition controlling for attentional saliency (no-go > oddball) were compared between BPD and HC.
Compared to HC, BPD showed less activation in the combined no-go > go contrast in the right posterior inferior and middle frontal gyri, and less activation for oddball > go in the left-hemispheric inferior frontal junction, frontal pole, superior parietal lobe, and supramarginal gyri. Crucially, BPD and HC showed no activation differences for the no-go > oddball contrast. In BPD, higher vlPFC activation for no-go > go was correlated with greater self-rated BPD symptoms, whereas lower vlPFC activation for oddball > go was associated with greater self-rated attentional impulsivity.
Patients with BPD show frontoparietal disruptions related to the combination of response inhibition and attentional saliency or saliency alone, but no specific response inhibition neural activation difference when attentional saliency is controlled. The findings suggest a neural dysfunction in BPD underlying attention to salient or infrequent stimuli, which is supported by a negative correlation with self-rated impulsiveness.
Humans are a remarkably social species. They form and live in groups and recurrently have to decide whether to cooperate or compete with others within and among groups. Cooperation has been essential for group survival and prosperity across human history. In hunter-gatherer societies, people needed to form hunting alliances to reduce the risks of predator attacks. Likewise, modern societies require groups of people to cooperate in large ventures. Yet, social situations often involve a conflict between one’s short-term personal interest and the long-term collective interest (i.e., social dilemmas; Dawes, 1980; Van Lange et al., 2013). In such mixed-motive situations, what is good for an individual may often harm the collective, which tempts people to free ride and harvest the benefits of others’ cooperation. Indeed, many societal problems and global issues (e.g., traffic problems, environmental pollution, and resource depletion) involve such conflicts of interest. Solving these problems often requires individuals to cooperate by paying a personal cost to benefit another person or the group.
Haematopoietic stem cell transplantation is an important and effective treatment strategy for many malignancies, marrow failure syndromes, and immunodeficiencies in children, adolescents, and young adults. Despite advances in supportive care, patients undergoing transplant are at increased risk of developing cardiovascular co-morbidities.
This study was performed as a feasibility study of a rapid cardiac MRI protocol to substitute for echocardiography in the assessment of left ventricular size and function, pericardial effusion, and right ventricular hypertension.
A total of 13 patients were enrolled for the study (age 17.5 ± 7.7 years, 77% male, 77% white). Mean study time was 13.2 ± 5.6 minutes for MRI and 18.8 ± 5.7 minutes for echocardiogram (p = 0.064). Correlation between left ventricular ejection fraction by MRI and echocardiogram was good (ICC 0.76; 95% CI 0.47, 0.92). None of the patients had documented right ventricular hypertension. Patients were given a survey regarding their experiences, with the majority both perceiving that the echocardiogram took longer (7/13) and indicating they would prefer the MRI if given a choice (10/13).
A rapid cardiac MRI protocol was shown feasible to substitute for echocardiogram in the assessment of key factors prior to or in follow-up after haematopoietic stem cell transplantation.
People living in precarious housing or homelessness have higher than expected rates of psychotic disorders, persistent psychotic symptoms, and premature mortality. Psychotic symptoms can be modeled as a complex dynamic system, allowing assessment of roles for risk factors in symptom development, persistence, and contribution to premature mortality.
The severity of delusions, conceptual disorganization, hallucinations, suspiciousness, and unusual thought content was rated monthly over 5 years in a community sample of precariously housed/homeless adults (n = 375) in Vancouver, Canada. Multilevel vector auto-regression analysis was used to construct temporal, contemporaneous, and between-person symptom networks. Network measures were compared between participants with (n = 219) or without (n = 156) history of psychotic disorder using bootstrap and permutation analyses. Relationships between network connectivity and risk factors including homelessness, trauma, and substance dependence were estimated by multiple linear regression. The contribution of network measures to premature mortality was estimated by Cox proportional hazard models.
Delusions and unusual thought content were central symptoms in the multilevel network. Each psychotic symptom was positively reinforcing over time, an effect most pronounced in participants with a history of psychotic disorder. Global connectivity was similar between those with and without such a history. Greater connectivity between symptoms was associated with methamphetamine dependence and past trauma exposure. Auto-regressive connectivity was associated with premature mortality in participants under age 55.
Past and current experiences contribute to the severity and dynamic relationships between psychotic symptoms. Interrupting the self-perpetuating severity of psychotic symptoms in a vulnerable group of people could contribute to reducing premature mortality.
This study examines the relationship of serum total tau, neurofilament light (NFL), ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1), and glial fibrillary acidic protein (GFAP) with neurocognitive performance in service members and veterans with a history of traumatic brain injury (TBI).
Service members (n = 488) with a history of uncomplicated mild (n = 172), complicated mild, moderate, severe, or penetrating TBI (sTBI; n = 126), injured controls (n = 116), and non-injured controls (n = 74) were prospectively enrolled from Military Treatment Facilities. Participants completed a blood draw and neuropsychological assessment a year or more post-injury. Six neuropsychological composite scores and the presence/absence of mild neurocognitive disorder (MNCD) were evaluated. Within each group, stepwise hierarchical regression models were conducted.
Within the sTBI group, increased serum UCH-L1 was related to worse immediate memory and delayed memory (R2Δ = .065–.084, ps < .05) performance, while increased GFAP was related to worse perceptual reasoning (R2Δ = .030, p = .036). Unexpectedly, within injured controls, UCH-L1 and GFAP were inversely related to working memory (R2Δ = .052–.071, ps < .05), and NFL was related to executive functioning (R2Δ = .039, p = .021) and MNCD (Exp(B) = 1.119, p = .029).
Results suggest GFAP and UCH-L1 could play a role in predicting poor cognitive outcome following complicated mild and more severe TBI. Further investigation of blood biomarkers and cognition is warranted.
Operators are mindful of the balloon-to-aortic annulus ratio when performing balloon aortic valvuloplasty. The method of measurement of the aortic valve annulus has not been standardised.
Methods and results:
Patients who underwent aortic valvuloplasty at two paediatric centres between 2007 and 2014 were included. The valve annulus measured by echocardiography and by angiography was used to calculate the balloon-to-aortic annulus ratio, and the measurements were compared. The primary endpoint was an increase in aortic insufficiency by ≥2 degrees. Ninety-eight patients with a median age at valvuloplasty of 2.1 months (interquartile range (IQR): 0.2–105.5) were included. The angiography-based annulus was 8.2 mm (IQR: 6.8–16.0), which was greater than the echocardiogram-based annulus of 7.5 mm (IQR: 6.1–14.8) (p < 0.001). This corresponded to a significantly lower angiographic balloon-to-aortic annulus ratio of 0.9 (IQR: 0.9–1.0), compared to an echocardiographic ratio of 1.1 (IQR: 1.0–1.1) (p < 0.001). The degree of discrepancy in measured diameter increased with smaller valve diameters (p = 0.041) and in neonates (p = 0.044). There was significant disagreement between angiographic and echocardiographic balloon-to-aortic annulus ratio measures regarding a “High” ratio of >1.2, with the angiographic ratio flagging only 2/12 (16.7%) of the patients flagged as “High” by the echocardiographic ratio (p = 0.012). Of the patients who had an increase in the degree of aortic insufficiency post valvuloplasty, only 3 (5.5%) had an angiographic ratio >1.1, while 21 (38%) had an echocardiographic ratio >1.1 (p < 0.001). Patients with resultant ≥ moderate insufficiency more often had an echocardiographic ratio of >1.1 than an angiographic ratio of >1.1. There was no association between increase in balloon-to-aortic annulus ratio and gradient reduction.
Angiographic measurement is associated with a greater measured aortic valve annulus and the development of aortic insufficiency. Operators should use caution when relying solely on angiographic measurement when performing balloon aortic valvuloplasty.
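The balloon-to-aortic annulus ratio at issue here is a simple quotient, so the same balloon can cross the "High" threshold (>1.2, per the abstract) under one measurement method but not the other. An illustrative sketch with a hypothetical balloon size against the two reported median annulus diameters:

```python
# Balloon-to-aortic annulus ratio sketch. The balloon diameter below is
# hypothetical; the two annulus diameters echo the medians reported above
# (7.5 mm by echocardiography, 8.2 mm by angiography).

def ba_ratio(balloon_mm, annulus_mm, threshold=1.2):
    """Return the balloon-to-annulus ratio and whether it exceeds
    the 'High' threshold."""
    ratio = balloon_mm / annulus_mm
    return ratio, ratio > threshold

# A hypothetical 9.5 mm balloon against the two median annuli
for method, annulus in [("echo", 7.5), ("angio", 8.2)]:
    ratio, high = ba_ratio(9.5, annulus)
    print(f"{method}: ratio {ratio:.2f}, high={high}")
# echo: ratio 1.27, high=True
# angio: ratio 1.16, high=False
```

The larger angiographic annulus measurement yields the smaller ratio, which is why angiography flagged far fewer "High" ratios in the cohort above.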
The radiocarbon (14C) calibration curve so far contains annually resolved data only for a short period of time. With accelerator mass spectrometry (AMS) matching the precision of decay counting, it is now possible to efficiently produce large datasets of annual resolution for calibration purposes using small amounts of wood. The radiocarbon intercomparison on single-year tree-ring samples presented here is the first to investigate specifically possible offsets between AMS laboratories at high precision. The results show that AMS laboratories are capable of measuring samples of Holocene age with an accuracy and precision that are comparable to, or even go beyond, what is possible with decay counting, even though they require a thousand times less wood. They also show that not all AMS laboratories always produce results that are consistent with their stated uncertainties. The long-term benefit of studies of this kind is more accurate radiocarbon measurements with, in the future, better quantified uncertainties.
Non-medical cannabis recently became legal for adults in Canada. Legalization provides an opportunity to investigate the public health effects of national cannabis legalization on presentations to emergency departments (EDs). Our study aimed to explore the association between cannabis-related ED presentations, poison control and telemedicine calls, and cannabis legalization.
Data were collected from the National Ambulatory Care Reporting System from October 1, 2013, to July 31, 2019, for 14 urban Alberta EDs, from Alberta poison control, and from HealthLink, a public telehealth service covering all of Alberta. Visitation data were obtained to compare pre- and post-legalization periods. An interrupted time-series analysis accounting for existing trends was completed, in addition to the incidence rate ratio (IRR) and relative risk calculation (to evaluate changes in co-diagnoses).
Although only 3 of every 1,000 ED visits within the time period were attributed to cannabis, the number of cannabis-related ED presentations increased post-legalization by 3.1 (range -11.5 to 12.6) visits per ED per month (IRR 1.45, 95% confidence interval [CI] 1.39, 1.51; absolute level change: 43.5 visits per month, 95% CI 26.5, 60.4). Cannabis-related calls to poison control also increased (IRR 1.87, 95% CI 1.55, 2.37; absolute level change: 4.0 calls per month, 95% CI 0.1, 7.9). Lastly, we observed increases in cannabis-related hyperemesis, unintentional ingestion, and individuals leaving the ED pre-treatment. We also observed a decrease in co-ingestant use.
Overall, Canadian cannabis legalization was associated with small increases in urban Alberta cannabis-related ED visits and calls to a poison control centre.
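The incidence rate ratios reported above are quotients of post- and pre-legalization event rates. A minimal sketch with hypothetical counts (not the Alberta data) chosen to reproduce an IRR of 1.45:

```python
# Incidence rate ratio (IRR) arithmetic: the post-period event rate
# divided by the pre-period event rate. The visit counts and follow-up
# times below are hypothetical, not the study's data.

def incidence_rate_ratio(events_post, time_post, events_pre, time_pre):
    """Rate ratio of two incidence rates (events per unit time)."""
    return (events_post / time_post) / (events_pre / time_pre)

# e.g. 203 visits over 7 months post vs. 140 visits over 7 months pre
irr = incidence_rate_ratio(203, 7, 140, 7)
print(round(irr, 2))  # 1.45
```

The study's regression-based IRRs additionally adjust for pre-existing trends via interrupted time-series analysis; the raw quotient above is only the unadjusted core of the measure.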
The diagnostic value of exploratory tympanotomy in sudden sensorineural hearing loss remains controversial. This study and review were performed to identify the incidence of perilymphatic fistula in patients with sudden sensorineural hearing loss. The effectiveness of tympanotomy for sealing of the cochlear windows in cases with perilymphatic fistula was evaluated.
A search in common databases was performed. Overall, 5034 studies were retrieved. Further, a retrospective analysis on 90 patients was performed.
Eight publications dealing with tympanotomy in patients with sudden sensorineural hearing loss were identified. Of the 90 patients diagnosed with sudden sensorineural hearing loss who underwent exploratory tympanotomy, 10 (11 per cent) were identified with a perilymphatic fistula, which corresponds to the results obtained from our review (13.6 per cent).
There was no significant improvement after exploratory tympanotomy and sealing of the membranes for patients with a definite perilymphatic fistula.
The nature of schizophrenia spectrum disorders with an onset in middle or late adulthood remains controversial. The aim of our study was to determine whether, in patients aged 60 and older, clinically relevant subtypes based on age at onset can be distinguished, using admixture analysis, a data-driven technique. We conducted a cross-sectional study in 94 patients aged 60 and older with a diagnosis of schizophrenia or schizoaffective disorder. Admixture analysis was used to determine whether the distribution of age at onset in this cohort was consistent with one or more populations of origin, and to determine cut-offs for age at onset groups if more than one population could be identified. Admixture analysis based on age at onset demonstrated only one normally distributed population. Our results suggest that in older schizophrenia patients, early- and late-onset ages form a continuum.
Introduction: There is ongoing concern about the burden placed on healthcare systems by lab tests. Although these concerns are widespread, it is difficult to quantify the extent of the problem. One approach involves use of a metric known as the Mean Abnormal Response Rate (MARR), which is the proportion of tests ordered that return an abnormal result; a higher MARR value indicates higher yield. The primary objective of this study was to calculate MARRs for tests ordered between April 2014 and March 2019 at the four adult emergency departments (EDs) covering a metropolitan population of 1.3 million. Secondary objectives included identifying tests with highest and lowest MARRs; comparison of MARRs for nurse- and physician-initiated orders; correlation of the number of tests per order requisition to MARR; and correlation of physician experience to MARR. Methods: In total, 40 laboratory tests met inclusion criteria for this study. Administrative data on these tests as ordered at the four EDs were obtained and analyzed. Multi-component test results, such as from CBC, were consolidated such that an abnormal result for any component was coded as an abnormal result for the entire test. Repeat tests ordered within a single patient visit were excluded. Physician experience was quantified for 209 ED physicians as number of years since licensure. Analyses were descriptive where appropriate for whole-population data. Risk of bias was attenuated by the focus on administrative data. Results: The population dataset comprised 33,757,004 test results on 415,665 unique patients. Of these results, 30.3% were the outcomes of nurse-initiated orders. The 5-year MARRs for the four hospitals were 38.3%, 40.0%, 40.7% and 40.9%. The highest per-test MARRs were for BNP (80.5%) and CBC (62.6%), while the lowest were for glucose (7.9%) and sodium (11.6%). MARRs were higher for nurse-initiated orders than for physician-initiated orders (44.7% vs. 
38.1%), likely due to the greater order frequency of high-yield CBC in nurse-initiated orders (38.6% vs. 18.1%). The number of tests per order requisition was inversely associated with MARR (r = -0.90, p < 0.001). Finally, the number of years since licensure was modestly but significantly associated with MARR (r = 0.28, p < 0.001). Conclusion: This is the first and largest study to apply the MARR in an ED setting. As a metric, MARR effectively identifies differences in test ordering practices on per-test and per-hospital bases, which could be useful for data-informed practice optimization.
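The MARR metric described above is a straightforward proportion: abnormal results over ordered tests, with multi-component tests consolidated so that any abnormal component marks the whole test abnormal. A minimal sketch on hypothetical orders:

```python
# Mean Abnormal Response Rate (MARR) sketch: the proportion of ordered
# tests returning an abnormal result. Multi-component tests (e.g. CBC)
# are consolidated: any abnormal component makes the whole test abnormal,
# mirroring the consolidation described above. Data are hypothetical.

def marr(results):
    """results: one inner list of booleans per ordered test,
    True meaning an abnormal component result."""
    abnormal = sum(1 for components in results if any(components))
    return abnormal / len(results)

# Five hypothetical ordered tests; two have an abnormal component
orders = [
    [False, False],  # normal CBC
    [True, False],   # CBC with one abnormal component -> abnormal
    [False],         # normal glucose
    [False],         # normal sodium
    [True],          # abnormal BNP
]
print(f"{marr(orders):.1%}")  # 40.0%
```

A higher MARR indicates higher diagnostic yield per test ordered, which is what makes it useful for comparing ordering practices across tests, ordering roles, and hospitals as in the study above.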
Introduction: Mild Traumatic Brain Injury (mTBI) is a common problem: each year in Canada, its incidence is estimated at 500-600 cases per 100 000. Between 10 and 56% of mTBI patients develop persistent post-concussion symptoms (PPCS) that can last for more than 90 days. It is therefore important for clinicians to identify patients who are at risk of developing PPCS. We hypothesized that blood biomarkers drawn upon patient arrival to the Emergency Department (ED) could help predict PPCS. The main objective of this project was to measure the association between four biomarkers and the incidence of PPCS 90 days post mTBI. Methods: Patients were recruited in seven Canadian EDs. Non-hospitalized patients aged ≥14 years with a documented mTBI that occurred ≤24 hours before ED consultation and a GCS ≥13 at arrival were included. Sociodemographic and clinical data as well as blood samples were collected in the ED. A standardized telephone questionnaire was administered 90 days post ED visit. The following biomarkers were analyzed using enzyme-linked immunosorbent assay (ELISA): S100B protein, Neuron Specific Enolase (NSE), cleaved-Tau (c-Tau) and Glial fibrillary acidic protein (GFAP). The primary outcome measure was the presence of persistent symptoms at 90 days after mTBI, as assessed using the Rivermead Post-Concussion symptoms Questionnaire (RPQ). A ROC curve was constructed for each biomarker. Results: 1276 patients were included in the study. The median age for this cohort was 39 (IQR 23-57) years, 61% were male and 15% suffered PPCS. The median values (IQR) for patients with PPCS compared to those without were: 43 pg/mL (26-67) versus 42 pg/mL (24-70) for S100B protein, 50 pg/mL (50-223) versus 50 pg/mL (50-199) for NSE, 2929 pg/mL (1733-4744) versus 3180 pg/mL (1835-4761) for c-Tau and 1644 pg/mL (650-3215) versus 1894 pg/mL (700-3498) for GFAP. The Areas Under the Curve (AUC) for these biomarkers were 0.495, 0.495, 0.51 and 0.54, respectively.
Conclusion: Among mTBI patients, S100B protein, NSE, c-Tau and GFAP levels during the first 24 hours after trauma do not seem able to predict PPCS. Future research should test other biomarkers to determine their usefulness in predicting PPCS when combined with relevant clinical data.
Introduction: Compared to other areas in Alberta Health Services (AHS), internal data show that emergency departments (EDs) and urgent care centres (UCCs) experience a high rate of workforce violence. As such, reducing violence in AHS EDs and UCCs is a key priority. This project explored staff's lived experience with patient violence with the goal of better understanding its impact, and what strategies and resources could be put in place. Methods: To obtain a representative sample, we recruited staff from EDs and a UCC (n = 6) situated in urban and rural settings across Alberta. As the interviews had the potential to be upsetting, we conducted in-person interviews in a private space. Interviews were conducted with over 60 staff members including RNs, LPNs, unit clerks, physicians, and protective services. Data collection and analysis occurred simultaneously and iteratively until saturation was reached. The analysis involved data reduction, category development, and synthesis. Key phrases and statements were first highlighted. Preliminary labels were then assigned to the data and data was then organized into meaningful clusters. Finally, we identified common themes of participants’ lived experience. Triangulation of sources, independent and team analysis, and frequent debriefing sessions were used to enhance the trustworthiness of the data. Results: Participants frequently noted the worry they carry with them when coming into work, but also said there was a high threshold of acceptance dominating ED culture. A recurring feature of this experience was the limited resources (e.g., no peace officers, scope of security staff) available to staff to respond when patients behave violently or are threatening. Education like non-violent crisis intervention training, although helpful, was insufficient to make staff feel safe. 
Participants voiced the need for more protective services, the addition of physical barriers like locking doors and glass partitions, more investment in addictions and mental health services (e.g., increased access to psychiatrists or addictions counsellors), and a greater shared understanding of AHS’ zero tolerance policy. Conclusion: ED and UCC staff describe being regularly exposed to violence from patients and visitors. Many of these incidents go unreported and unresolved, leaving the workforce feeling worried and unsupported. Beyond education, the ED and UCC workforce need additional resources to support them in feeling safe coming to work.
Introduction: Emergency Departments (EDs) are at high risk of workforce-directed violence (WDV). To address ED violence in Alberta Health Services (AHS), we conducted key informant interviews to identify successful strategies that could be adopted in AHS EDs. Methods: The project team identified potential participants through their ED network; additional contacts were identified through snowball sampling. We emailed 197 individuals from Alberta (123), Canada (46), and abroad (28). The interview guide was developed and reviewed in partnership with ED managers and Workplace Health and Safety. We conducted semi-structured phone interviews with 26 representatives from urban and rural EDs or similar settings from Canada, the United States, and Australia. This interview process received an ARECCI score of 2. Two researchers conducted a content analysis of the interview notes; rural and urban sites were analyzed separately. We extracted strategies, their impact, and implementation barriers and facilitators. Strategies identified were categorized into emergent themes. We aggregated similar strategies and highlighted key or unique findings. Results: Interview results showed that there is no single solution to address ED violence. Sites with effective violence prevention strategies used a comprehensive approach where multiple strategies were used to address the issue. For example, through a violence prevention working group, one site implemented weekly violence simulations, a peer mentorship support team, security rounding, and more. This multifaceted approach had positive results: a decrease in code whites, staff feeling more supported, and the site no longer being on union “concerned” lists. Another promising strategy included addressing the culture of violence by increasing reporting, clarifying policies (i.e., zero tolerance), and establishing flagging or alert systems for visitors with violent histories. 
Physician involvement and support was highly valued in responding to violence (e.g., support when refusing care, on the code white response team, flagging). Conclusion: Overall, one strategy is not enough to successfully address WDV in EDs. Strategies need to be comprehensive and context specific, especially when considering urban and rural sites with different resources available. We note that few strategies were formally evaluated, and recommend that future work focus on developing comprehensive metrics to evaluate the strategies and define success.
Introduction: Clinical assessment of patients with mTBI is challenging and overuse of head CT in the emergency department (ED) is a major problem. During the last decades, studies have attempted to reduce unnecessary head CTs following mTBI by identifying new tools aiming to predict intracranial bleeding. The S100B serum protein level might help reduce such imaging, since a higher level of S100B protein has been associated with intracranial hemorrhage following mTBI in the previous literature. The main objective of this study was to assess whether the S100B serum protein level is associated with clinically important brain injury and could be used to reduce the number of head CTs following mTBI. Methods: This prospective multicenter cohort study was conducted in five Canadian EDs. mTBI patients with a Glasgow Coma Scale (GCS) score of 13-15 in the ED and a blood sample drawn within 24 hours after the injury were included. S100B protein was analyzed using enzyme-linked immunosorbent assay (ELISA). All types of intracranial bleeding were reviewed by a radiologist who was blinded to the biomarker results. The main outcome was the presence of clinically important brain injury. Results: A total of 476 patients were included. Mean age was 41 ± 18 years and 150 (31.5%) were female. Twenty-four (5.0%) patients had a clinically significant intracranial hemorrhage while 37 (7.8%) had any type of intracranial bleeding. The median S100B value (Q1-Q3) was 0.043 μg/L (0.008-0.080) for patients with clinically important brain injury versus 0.039 μg/L (0.023-0.059) for patients without. Sensitivity and specificity of the S100B protein level, if used alone to detect clinically important brain injury, were 16.7% (95% CI 4.7-37.4) and 88.5% (95% CI 85.2-91.3), respectively. Conclusion: The S100B serum protein level was not associated with clinically significant intracranial hemorrhage in mTBI patients.
This protein did not appear to be useful for reducing the number of CTs prescribed in the ED and would have missed many clinically important brain injuries. Future research should focus on different ways to assess mTBI patients and ultimately reduce unnecessary head CTs.
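The sensitivity and specificity reported above follow from a 2×2 confusion table. The counts below are inferred from the stated totals and rates (24 clinically important injuries among 476 patients, sensitivity 16.7%, specificity 88.5%) and are illustrative only, not taken from the study dataset:

```python
# Sensitivity/specificity arithmetic behind the S100B result above.
# TP/FN/TN/FP counts are inferred from the reported totals and rates
# and serve only as an illustration of the calculation.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positives / all with injury
    specificity = tn / (tn + fp)   # true negatives / all without injury
    return sensitivity, specificity

# Inferred: 4 of 24 injuries flagged; 400 of 452 non-injuries correctly negative
sens, spec = sens_spec(tp=4, fn=20, tn=400, fp=52)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# sensitivity 16.7%, specificity 88.5%
```

A sensitivity this low is why the biomarker "would have missed many clinically important brain injuries" if used as a stand-alone CT gatekeeper.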