Innovation Concept: Research training programs for students, especially in emergency medicine (EM), may be difficult to initiate due to lack of protected time, resources, and mentors (Chang Y, Ramnanan CJ. Academic Medicine 2015). We developed a ten-week summer program for medical students aimed at cultivating research skills through mentorship, clinical enrichment, and immersion in EM research culture via shadowing and project support. Methods: Five second-year Ontario medical students were recruited to participate in the Summer Training and Research in Emergency Medicine (STAR-EM) program at University Health Network, Toronto, from June to August 2019. Program design followed review of existing summer research programs and literature regarding challenges to EM research (McRae, Perry, Brehaut et al. CJEM 2018). The program had broad emergency physician (EP) engagement, with five EP research project mentors and over ten EPs delivering academic sessions. Curriculum development was collaborative and iterative. All projects were approved by the hospital Research Ethics Board (REB). Curriculum, Tool or Material: Each weekly academic morning comprised small-group teaching (topics including research methodology, manuscript preparation, health equity, quality improvement, and wellness), followed by EP-led group progress review of each student's project. Each student spent one half day per week in the emergency department (ED), shadowing an EP and identifying patients for recruitment for ongoing mentor-initiated ED research projects. Remaining time was spent on independent student project work. Presentation to faculty and program evaluation occurred in week 10. Scholarly output included one abstract submitted for publication per student. Program evaluation by students reflected a uniform impression that course material and mentorship were each excellent (100%, n = 5). Interest in pursuing academic EM as a career was identified by all students.
Faculty researchers rated the program as very effective (80%, n = 4) or somewhat effective (20%, n = 1) in terms of enhancing productivity and scholarly output. Conclusion: The STAR-EM program provides a transferable model for other academic departments seeking to foster the development of future clinician investigators and enhance ED research culture. Program challenges included delays in REB approval for student projects and engaging reluctant staff in research.
Introduction: Epidemiologic and modeling studies suggest that between 45% and 70% of individuals with chronic hepatitis C virus (HCV) infection in Canada remain undiagnosed. The Canadian Association for the Study of the Liver (CASL) recommends one-time screening of baby boomers (born 1945-1975). Screening programs in the US have shown a very high prevalence of previously undiagnosed HCV among patients seen in the emergency department (ED). We sought to assess the feasibility of implementing a targeted birth-cohort HCV screening program in a Canadian ED setting. Methods: Patients born from 1945 to 1975 presenting to the ED of a downtown Toronto hospital were offered HCV testing. Patients with life-threatening conditions or intoxication, or those unable to provide verbal consent in English, were excluded. Blood samples were collected by finger prick on Dried Blood Spot (DBS) collection cards and tested for anti-HCV antibody with reflex to HCV RNA. Patients with positive HCV RNA were referred to a liver specialist. Results: During a 27-month period (July 2017 to September 2019), 8363 patients in the birth cohort presented to the ED during daytime hours. 80% (6714) met eligibility criteria, and 48.4% (3247) were offered testing. Screening was performed by non-medical staff (mean 8/day, median spots per DBS card 4). 345 (10.6%) had been previously tested, and 639 (19.7%) declined. 2136 (65.8%) patients underwent testing: median age 58.4 years (range 40-82), 1117 male (52.3%). Of these, 45 patients (2.1%; 95% CI 1.5%-2.7%) were anti-HCV positive: 32 (76.2%) were HCV RNA positive, 10 (23.8%) negative, and 3 not done due to inadequate DBS sample. 26 patients (81.3%) were linked to care and 3 (9.4%) lost to follow-up. HCV prevalence in the ED was significantly higher than in the general Canadian population (2.1% vs 0.7%; p < 0.0001) but much lower than reported rates in American EDs (2.1% vs 10.3%; p < 0.0001).
Conclusion: Acceptance of HCV screening among the ED birth cohort was high, and screening was easily performed using DBS, which ensured that the majority of positive samples were tested for HCV RNA. Challenges included implementation constraints that limited the number of people tested, and linkage to care for HCV-positive patients. HCV prevalence among this ED birth cohort was higher than in the general population but lower than seen in EDs in the US. This may in part be due to exclusion of individuals with more severe medical issues, refusal by higher-risk subgroups, or population and healthcare system differences between countries.
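The reported prevalence interval can be reproduced with a standard Wald confidence interval for a proportion. A minimal sketch, assuming a normal approximation with the 1.96 quantile (the function name is illustrative, not from the study):

```python
import math

def prevalence_ci(positives, n, z=1.96):
    """Point estimate and Wald 95% confidence interval for a prevalence."""
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

# 45 of 2136 tested patients were anti-HCV positive.
p, lo, hi = prevalence_ci(45, 2136)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # 2.1% (95% CI 1.5%-2.7%)
```

This matches the 2.1% (1.5%-2.7%) interval reported above; studies sometimes prefer Wilson or exact (Clopper-Pearson) intervals, which differ slightly at small counts.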
Introduction: We seek to characterize unhelmeted injured cyclists presenting to the emergency department (ED): demographics, cycling behaviour, and attitudes towards helmet use. Methods: This was a prospective cohort study in a downtown teaching hospital, from May 2016 to September 2019. Injured cyclists presenting to the ED were recruited if they were not wearing a helmet at time of injury and were over age 18. Exclusion criteria included intoxication, inability to consent, or admission to hospital. A standardized survey was administered by a research coordinator. Descriptive statistics were used to summarize the data, and survey responses were reported as percentages. Results: We surveyed a convenience sample of 68 unhelmeted injured cyclists (UICs) with mean age of 33.6 years (range 18 to 68, median 29.5 years). Ratio of males to females was 1:1. The majority of UICs cycled most days per week or every day in non-winter months (89.6%, n = 60). Cycling in Toronto was perceived as somewhat dangerous (45.6%, n = 31) or very dangerous (5.9%, n = 4) by most, and very safe (2.9%, n = 2) or somewhat safe (19.1%, n = 13) by few. Almost a third (29.4%, n = 20) had been in a cycling accident in the prior year, some of these (15.0%, n = 3) prompting an ED visit. All cyclists were riding their personal bike (100%, n = 68) at time of injury, and most (98.5%, n = 67) had planned to cycle when they departed home that day. Purpose of trip was primarily commuting to work (50%, n = 34), social activities (19.1%, n = 13), school (7.4%, n = 5), and recreation (7.4%, n = 5). Bicycle helmet ownership was low (41.2%, n = 28). UICs reported rarely (10.3%, n = 7) or never (64.7%, n = 44) wearing a helmet when cycling. Reported factors discouraging helmet use included inconvenience (33.8%, n = 23), lack of ownership (32.4%, n = 22), discomfort (29.4%, n = 20), and ‘messed hair’ (14.7%, n = 10). Few characterized helmets as unnecessary (10.3%, n = 7) or ineffective (1.5%, n = 1).
The majority had a college diploma or more advanced education (77.9%, n = 53), and spoke English at home (85.3%, n = 58). Conclusion: Unhelmeted injured cyclists surveyed were frequent commuter cyclists who did not regard cycling as safe, yet chose not to wear helmets for reasons largely related to convenience rather than perceptions regarding safety or necessity. Initiatives to increase helmet use in this subgroup should address the reasons given for not wearing a helmet, potentially using principles of adult education and behavioral economics.
Introduction: Helmets are effective in preventing brain injury and fatality in cyclists. Methods to promote their use include legislation and non-legislative interventions (NLIs) such as education, social interventions, and subsidies. These have been systematically reviewed and proven effective in pediatric populations. We conducted a scoping review of NLIs to promote helmet use amongst adult cyclists. Methods: We conducted a scoping review of NLIs to promote helmet use amongst cyclists age 18 or older. PRISMA guidelines were followed. Databases searched included MEDLINE, EMBASE, CINAHL, PsycINFO, and SportDiscus, in addition to grey literature. Articles were excluded if non-English, focused on age <18 or on legislative interventions, or did not report on outcomes related to helmet use or ownership. Study inclusion and data extraction were conducted in duplicate. Data were extracted regarding participant demographics, setting, and intervention details and effects, and were reported using descriptive statistics with a narrative synthesis. A limited quality assessment was conducted. Results: A total of 16 papers were included, stratified as 4 randomized controlled trials and 12 pre-post studies. Only 4 were specific to adults. Community cyclists (5/16, 31%) and community members were most commonly targeted, with most interventions taking place in the community (8/16, 50%) or in a healthcare setting (4/16, 25%). Most interventions were multi-faceted, involving components of community awareness programs, education, information distribution, helmet giveaways and monetary incentives, use of mass media, motivational interviewing, and social marketing. The studies were heterogeneous in quality. Changes in helmet use rates varied between -6% and 26%, with half the studies (8/16, 50%) noting a statistically significant increase. Duration of follow-up of helmet use rates following the intervention varied between 4.5 weeks and 11 years (median 1.38 years, mean 3.0 years).
Conclusion: NLIs to encourage bicycle helmet use were frequently multi-faceted and generally associated with an increase in use amongst adults. Studies were heterogeneous in quality, varied in their targeted audiences, and often did not focus on adults. Further evidence is needed to better characterize the efficacy of non-legislative interventions in achieving sustained helmet use in adult cyclists.
Introduction: The use of regional anesthesia (RA) by emergency physicians (EPs) is expanding in frequency and range of application as expertise in point-of-care ultrasound (POCUS) grows, but widespread use remains limited. We sought to characterize the use of RA by Canadian EPs, including practices, perspectives, and barriers to use in the ED. Methods: A cross-sectional survey consisting of sixteen multiple-choice and numerical-response questions was administered to members of the Canadian Association of Emergency Physicians (CAEP). Responses were summarized descriptively as percentages and, for quantitative variables, as the median and interquartile range (IQR). Results: The survey was completed by 149/1144 staff EPs, a response rate of 13%. EPs used RA a median of 2 (IQR 0-4) times in the past ten shifts. The most broadly used applications were soft tissue repair (84.5% of EPs, n = 126), fracture pain management (79.2%, n = 118), and orthopedic reduction (72.5%, n = 108). EPs agreed that RA is safe to use in the ED (98.7%) and were interested in using it more frequently (78.5%). Almost all (98.0%) respondents had POCUS available; however, less than half (49.0%) felt comfortable using it for RA. EPs indicated that they required more training (76.5%), a departmental protocol (47.0%), and nursing assistance (30.2%) to increase their use. Conclusion: Canadian EPs engage in limited use of RA but express an interest in expanding their use. While equipment is available, additional training, protocols, and increased support from nursing staff are modifiable factors that could facilitate uptake of RA in the ED.
Alexithymia refers to a cluster of cognitive-affective deficits in emotion processing, characterized by difficulties in experiencing and expressing emotions. Seasonal Affective Disorder (SAD) is a form of recurrent depressive or bipolar disorder marked by somatic symptoms (hyperphagia with snacking on carbohydrate-rich/high-fat food, hypersomnia). Alexithymic characteristics could explain why some patients suffering from winter depression are likely to selectively focus on somatic symptoms.
We report the first study assessing the prevalence and the sociodemographic and clinical correlates of alexithymia in patients suffering from winter Seasonal Affective Disorder (SAD).
In a sample of 59 consecutive depressed outpatients with winter seasonal features (DSM-IV criteria), alexithymia was assessed with the 20-item Toronto Alexithymia Scale (TAS-20), severity of depression was assessed with the Hamilton Depression Rating Scale (SIGH-SAD version-25), and depressive and anxious symptoms were evaluated with the depression and anxiety subscales of the Hospital Anxiety and Depression scale (HAD).
The prevalence of alexithymia was 35.6%. Total TAS-20 scores were significantly correlated with age (r = 0.27), duration of illness (r = 0.31), and the HAD depression and anxiety scores (r = 0.34 and r = 0.37, respectively). Alexithymia was not related to other sociodemographic and clinical variables (hyperphagia, carbohydrate snacking, and hypersomnia).
Alexithymia is frequent in patients suffering from winter Seasonal Affective Disorder. Nevertheless, this study does not support a relationship between alexithymia and somatic symptoms. Larger prospective studies are required to determine whether alexithymia is a stable personality trait or a state-dependent phenomenon in patients suffering from winter SAD.
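The correlations reported above (e.g., TAS-20 total vs. age, r = 0.27) are Pearson coefficients. A minimal sketch of the computation, using made-up score pairs rather than the study data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))  # unscaled covariance
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (age, TAS-20 total score) pairs, for illustration only.
ages = [25, 34, 41, 52, 60]
tas_scores = [48, 51, 55, 54, 62]
print(round(pearson_r(ages, tas_scores), 2))
```

In practice, such analyses also report a significance test for r against the null of zero correlation, which depends on the sample size (n = 59 here).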
The frequency of anxiety and depressive disorders was examined in 69 insulin-dependent diabetes mellitus (IDDM) outpatients and in two control groups. Based on self-report measures, rates of these disorders were similar in the IDDM sample and in the control groups. In diabetic outpatients, according to DSM-III-R criteria, there was a high lifetime prevalence of not otherwise specified anxiety and depressive disorders (44% and 41.5%), simple phobia (26.8%), social phobia (24.6%), and agoraphobia with and without panic disorder (14.6%). Current social phobia, dysthymia, and not otherwise specified depressive disorders were associated with impaired glycaemic control. Glycosylated haemoglobin was associated with compliance, but psychiatric disorders were not, except for social phobia, which was significantly associated with more frequent consultations and poor compliance with the dietary regimen (more snacking). Somatic complications were not associated with anxiety and depressive disorders (current or lifetime) or with compliance, and were best explained by duration of illness and impaired glycaemic control.
Single-blind placebo washout periods before randomisation enable the elimination of psychotropic agents previously received. They attenuate carryover effects, although this could be achieved without using a placebo. Washout periods also purport to identify and eliminate placebo responders. Trivedi et al. performed a meta-analysis which included 101 studies. They demonstrated that placebo washout periods do not reduce the response rate in the placebo group and do not increase the difference between the placebo and treated groups. This held true across antidepressant classes. In another study, Greenberg et al. analysed 28 controlled antidepressant trials published between 1983 and 1992 and found no difference between trials with or without a placebo washout period in terms of response rate in either the placebo or the treated group. Therefore, placebo washout periods, although appealing and widely used, may not reduce the number of patients who respond to placebo. Moreover, patients who respond during the washout period have very diverse outcomes after three months; this subgroup is likely to be heterogeneous and should be studied further. Some authors have argued that washout periods may introduce confounding effects, such as lowering the observed difference between the treated and placebo groups: their explanation is that response to placebo is not a stable characteristic, and that responding to placebo during the washout period may subsequently lower the level of placebo-induced improvement. It would also be unfortunate if washout periods obscured problems related to placebo response and blinding, which are often neglected. Finally, it appears necessary to further assess the usefulness of single-blind washout periods.
Atypical antipsychotics are currently the first-line treatment in schizophrenia. Obsessive-compulsive symptoms (OCS) are common in patients suffering from schizophrenia and seem to worsen prognosis. Whilst atypical antipsychotics can be a useful augmentation strategy in refractory Obsessive Compulsive Disorder (OCD), their efficacy against comorbid obsessive-compulsive symptoms in schizophrenia remains unclear.
The purpose of this literature review was to examine the relationships between atypical antipsychotics and obsessive-compulsive symptoms (OCS) in schizophrenia.
A systematic MEDLINE search was run using the following keywords: atypical antipsychotics, obsessive compulsive symptoms, and schizophrenia (27 articles retrieved).
Clozapine, risperidone, olanzapine, and quetiapine may induce or exacerbate OCS in patients with schizophrenia owing to their anti-serotonergic properties. There were no studies of ziprasidone, aripiprazole, or amisulpride. For schizophrenic patients with comorbid OCS, the first-line strategy appears to be combination therapy with clomipramine or a selective serotonin reuptake inhibitor (SSRI) (fluvoxamine, sertraline, fluoxetine) and an atypical antipsychotic. Moreover, in these cases, cognitive behavioural therapy should also be considered.
Obsessive-compulsive symptoms and schizophrenia remain a matter of debate in terms of comorbidity versus the constitution of a specific "schizo-obsessive" subtype. Nevertheless, given the worse prognosis associated with these symptoms, combination therapy (atypical antipsychotics and SSRIs) remains the most relevant therapeutic approach. Moreover, studies of cognitive behavioural therapy in this area are required.
We report the results from the first 12 months of a 2-year maintenance phase of a study evaluating long-term efficacy and safety of venlafaxine extended-release (XR) in preventing recurrence of depression.
Patients with recurrent unipolar depression (N=1096) were randomly assigned in a 3:1 ratio to 10-week treatment with venlafaxine XR (75 mg/d to 300 mg/d) or fluoxetine (20 mg/d to 60 mg/d). Responders (HAM-D17 total score ≤12 and ≥50% decrease from baseline) entered a 6-month, double-blind continuation phase on the same medication. Continuation phase responders enrolled into the maintenance treatment period, consisting of 2 consecutive 12-month phases. At the start of each maintenance phase, venlafaxine XR responders were randomly assigned to double-blind treatment with venlafaxine XR or placebo; fluoxetine responders continued on fluoxetine for each period. Time to recurrence (HAM-D17 total score >12 and <50% reduction from acute phase baseline at 2 consecutive visits or the last visit prior to discontinuation) was evaluated using Kaplan-Meier methods and compared between groups using log-rank tests.
At the end of the continuation phase, venlafaxine XR responders were randomly assigned to venlafaxine XR (n=164) or placebo (n=172); 129 patients in each group were evaluated for efficacy. The cumulative probability of recurrence through 12 months was 23.1% (95% CI: 15.3, 30.9) for venlafaxine XR and 42.0% (95% CI: 31.8, 52.2) for placebo (P=0.005).
Twelve months of venlafaxine XR maintenance treatment was effective in preventing recurrence in depressed patients who had been successfully treated with venlafaxine XR during acute and continuation therapy.
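The time-to-recurrence comparisons in these trials rest on the Kaplan-Meier estimator, which accounts for patients whose follow-up ends without a recurrence (censoring). A minimal sketch using hypothetical patient-level data, not data from the trial:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the recurrence-free probability.

    times  -- follow-up time for each patient (e.g., months)
    events -- 1 if a recurrence was observed at that time, 0 if censored
    Returns (time, estimated recurrence-free probability) pairs.
    """
    surv = 1.0
    curve = []
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:  # recurrence: multiply by the conditional survival (n-1)/n
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # censored patients leave the risk set without an event
    return curve

# Hypothetical cohort: recurrences at months 2 and 7, three patients censored.
print(kaplan_meier([2, 5, 7, 10, 12], [1, 0, 1, 0, 0]))
```

Censored patients shrink the risk set without moving the curve, which is what distinguishes this estimator from a naive cumulative proportion; the log-rank test mentioned above then compares two such curves across all event times.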
This study evaluated the efficacy and safety of venlafaxine extended-release (XR) in preventing recurrence of depression.
Outpatients with recurrent unipolar depression (N=1096) were randomly assigned in a 3:1 ratio to 10-week treatment with venlafaxine XR (75 mg/d to 300 mg/d) or fluoxetine (20 mg/d to 60 mg/d). Responders (HAM-D17 ≤12 and ≥50% decrease from baseline) entered a 6-month, double-blind, continuation phase on the same medication. Continuation phase responders enrolled into maintenance treatment consisting of 2 consecutive 12-month phases. At the start of each maintenance phase, venlafaxine XR responders were randomized to double-blind treatment with venlafaxine XR or placebo; fluoxetine responders continued on fluoxetine. Time to recurrence (HAM-D17 >12 and <50% reduction from acute phase baseline at 2 consecutive visits or the last valid visit prior to discontinuation) was evaluated using Kaplan-Meier methods and compared between groups using log-rank tests.
In the second maintenance phase, the cumulative probabilities of recurrence through 12 months in the venlafaxine XR (n=43) and placebo (n=40) groups were 8.0% (95% CI: 0.0, 16.8) and 44.8% (95% CI: 27.6, 62.0), respectively (P<0.001). The probabilities of recurrence over 24 months for patients assigned to venlafaxine XR (n=129) or placebo (n=129) for the first maintenance phase were 28.5% (95% CI 18.3, 37.8) and 47.3% (95% CI 36.4, 58.2), respectively (P=0.005).
An additional 12 months of venlafaxine XR maintenance therapy was effective in preventing recurrence in depressed patients who had responded to venlafaxine XR after acute, continuation, and 12 months' initial maintenance therapy.
The efficacy of venlafaxine extended-release (XR) at doses between 75 mg/d and 300 mg/d has been demonstrated in patients with recurrent major depressive disorder (MDD) over 2.5 years. This analysis evaluated the long-term efficacy of venlafaxine XR ≤225 mg/d, the approved dosage in many countries.
In the primary multicenter, double-blind trial, outpatients with recurrent MDD (N=1096) were randomized to receive 10-week acute-phase treatment with venlafaxine XR (75 mg/d to 300 mg/d) or fluoxetine (20 mg/d to 60 mg/d), followed by a 6-month continuation phase. Subsequently, at the start of 2 consecutive, double-blind, 12-month maintenance phases, venlafaxine XR responders were randomized to receive venlafaxine XR or placebo. Data from the 24 months of maintenance treatment were analyzed for the combined end point of maintenance of response (ie, no recurrence of depression and no dose increase above 225 mg/d), and each component individually. Time to each outcome was evaluated with Kaplan-Meier methods using log-rank tests for venlafaxine XR-placebo comparisons.
The analysis population included 114 patients who had received venlafaxine XR doses ≤225 mg/d prior to maintenance phase baseline (venlafaxine XR: n=55; placebo: n=59). Probability estimates for maintaining response were 70% for venlafaxine XR and 38% for placebo (P=0.007); for no dose increase, 76% and 58%, respectively (P=0.019); and for no recurrence, 87% vs 65%, respectively (P=0.099).
These data confirm that venlafaxine XR is effective in maintaining response at doses ≤225 mg/d for up to 2.5 years in patients with MDD.
Retinoblastoma is the most common primary intraocular tumor of childhood with >95% survival rates in the US. Traditional therapy for retinoblastoma often included enucleation (removal of the eye). While much is known about the visual, physical, and cognitive ramifications of enucleation, data are lacking about survivors' perception of how this treatment impacts overall quality of life.
Qualitative analysis of an open-ended survey item in which survivors described, in free-text narrative form, how much and in what ways the removal of an eye had affected their lives.
Four hundred and four retinoblastoma survivors who had undergone enucleation (bilateral disease = 214; 52% female; mean age = 44, SD = 11) completed the survey. Survivors reported physical problems (n = 205, 50.7%), intrapersonal problems (n = 77, 19.1%), social and relational problems (n = 98, 24.3%), and affective problems (n = 34, 8.4%) at a mean of 42 years after diagnosis. Three key themes emerged from survivors' responses: survivors (1) continue to report physical and intrapersonal struggles with appearance and related self-consciousness; (2) experience multiple social and relational problems, with teasing and bullying prominent among them; and (3) report using active coping strategies, including developing more acceptance and learning compensatory skills for activities of daily living.
Significance of results: This study suggests that adult retinoblastoma survivors treated with enucleation continue to struggle with a unique set of psychosocial problems. Future interventions can be designed to teach survivors more active coping skills (e.g., for appearance-related issues, vision-related issues, and teasing/bullying) to optimize survivors' long-term quality of life.
Can children tell how different a speaker's accent is from their own? In Experiment 1 (N = 84), four- and five-year-olds heard speakers with different accents and indicated where they thought each speaker lived relative to a reference point on a map that represented their current location. Five-year-olds generally placed speakers with stronger accents (as judged by adults) at more distant locations than speakers with weaker accents. In contrast, four-year-olds did not show differences in where they placed speakers with different accents. In Experiment 2 (N = 56), the same sentences were low-pass filtered so that only prosodic information remained. This time, children judged which of five possible aliens had produced each utterance, given a reference speaker. Children of both ages showed differences in which alien they chose based on accent, and generally rated speakers with foreign accents as more different from their native accent than speakers with regional accents. Together, the findings show that preschoolers perceive accent distance, that children may be sensitive to the distinction between foreign and regional accents, and that preschoolers likely use prosody to differentiate among accents.
Immune system markers may predict affective disorder treatment response, but whether an overall immune system marker predicts bipolar disorder treatment effect is unclear.
Bipolar CHOICE (N = 482) and LiTMUS (N = 283) were similar comparative effectiveness trials treating patients with bipolar disorder for 24 weeks across four treatment arms (standard-dose lithium, quetiapine, moderate-dose lithium plus optimised personalised treatment (OPT), and OPT without lithium). We performed secondary mixed-effects linear regression analyses, adjusted for age, gender, smoking, and body mass index, to investigate relationships between pre-treatment white blood cell (WBC) levels and Clinical Global Impression (CGI) scale response.
Compared to participants with WBC counts of 4.5–10 × 109/l, participants with WBC < 4.5 or WBC ≥ 10 showed similar improvement within each specific treatment arm and in gender-stratified analyses.
An overall immune system marker did not predict differential treatment response to four different treatment approaches for bipolar disorder all lasting 24 weeks.
In early October 2014, 7 months after the 2014–2015 Ebola epidemic in West Africa began, a cluster of reported deaths in Koinadugu, a remote district of Sierra Leone, was the first evidence of Ebola virus disease (Ebola) in the district. Prior to this event, geographic isolation was thought to have prevented the introduction of Ebola to this area. We describe our initial investigation of this cluster of deaths and subsequent public health actions after Ebola was confirmed, and present challenges to our investigation and methods of overcoming them. We present a transmission tree and results of whole genome sequencing of selected isolates to identify the source of infection in Koinadugu and demonstrate transmission between its villages. Koinadugu's experience highlights the danger of assuming that remote location and geographic isolation can prevent the spread of Ebola, but also demonstrates how deployment of rapid field response teams can help limit spread once Ebola is detected.
This study examined whether executive functions (EFs) might be common features of internalizing and externalizing behavior problems across development. We examined relations of three EF latent variables (a common EF factor and factors specific to updating working memory and shifting sets), constructed from nine laboratory tasks administered at age 17, to latent growth intercept (capturing stability) and slope (capturing change) factors of teacher- and parent-reported internalizing and externalizing behaviors in 885 individual twins aged 7 to 16 years. We then estimated the proportion of intercept–intercept and slope–slope correlations predicted by EFs, as well as the association between EFs and a common psychopathology factor (P factor) estimated from all 9 years of internalizing and externalizing measures. Common EF was negatively associated with the intercepts of teacher-rated internalizing and externalizing behavior in males, and explained 32% of their covariance; in the P factor model, common EF was associated with the P factor in males. Shifting-specific EF was positively associated with the externalizing slope across sex. EFs did not explain covariation between parent-rated behaviors. These results suggest that EFs are associated with stable problem behavior variation, explain small proportions of covariance, and are a risk factor that may depend on gender.
National security is one of many fields where experts make vague probability assessments when evaluating high-stakes decisions. This practice has always been controversial, and it is often justified on the grounds that making probability assessments too precise could bias analysts or decision makers. Yet these claims have rarely been submitted to rigorous testing. In this paper, we specify behavioral concerns about probabilistic precision into falsifiable hypotheses which we evaluate through survey experiments involving national security professionals. Contrary to conventional wisdom, we find that decision makers responding to quantitative probability assessments are less willing to support risky actions and more receptive to gathering additional information. Yet we also find that when respondents estimate probabilities themselves, quantification magnifies overconfidence, particularly among low-performing assessors. These results hone wide-ranging concerns about probabilistic precision into a specific and previously undocumented bias that training may be able to correct.