The chapter on the psychology of the trial consists of three sections. The first section examines the question of whether the general insights of psychology and behavioral economics can be applied to trial judges, or whether, by virtue of their special training and experience or some other reason, trial judges are different. The second section surveys the relatively scant behavioral literature on judging and places it in the larger context of non-judge decision making. It does so by considering four well-studied “heuristics,” or cognitive shortcuts that allow people to make quick, intuitive decisions with little or no deliberation, but that can sometimes also result in errors in reasoning. The four heuristics considered are: anchoring, hindsight, compromise and contrast, and representativeness. The third section finishes the chapter with a series of reflections about specific challenges the author has thought about over the twenty-nine years that he has been a trial judge, coupled with a few suggestions about how the system might better accommodate some of those challenges.
Proglacial braided river systems discharge large volumes of meltwater from ice sheets and transport coarse-grained sediments from the glaciated areas to the oceans. Here, we test the hypothesis that high-energy hydrological events can leave distinctive signatures in the sedimentary record of braided river systems. We characterize the morphology and infer a mode of formation of a 25 km long and 1–3 km wide Early Pleistocene incised valley recently imaged in 3-D seismic data in the Hoop area, SW Barents Sea. The fluvial system, named Bjørnelva River Valley, carved 20 m deep channels into Lower Cretaceous bedrock at a glacial paleo-surface and deposited 28 channel bars along a paleo-slope gradient of ~0.64 m km−1. The landform morphologies and their position relative to the paleo-surface support the interpretation that Bjørnelva River Valley formed in the proglacial domain of the Barents Sea Ice Sheet. Based on valley width and valley depth, we suggest that Bjørnelva River Valley represents a braided river system fed by violent outburst floods from a glacial lake, with estimated outburst discharges of ~160 000 m3 s−1. The morphological configuration of Bjørnelva River Valley can inform geohazard assessments in areas at risk of outburst flooding today and is an analogue for landscapes evolving in areas currently covered by the Greenland and Antarctic ice sheets.
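A flood discharge of this magnitude can be sanity-checked with a simple Manning-type uniform-flow calculation. This is a back-of-envelope sketch, not the authors' reconstruction method: the roughness coefficient, flow width, and flow depth below are illustrative assumptions chosen from the valley dimensions reported above.

```python
import math

def manning_discharge(width_m, depth_m, slope, n=0.03):
    """Steady uniform flow discharge (m^3/s) from Manning's equation:
    Q = (1/n) * A * R^(2/3) * S^(1/2), where A is the flow cross-section,
    R = A / wetted perimeter is the hydraulic radius, and S is the slope."""
    area = width_m * depth_m                     # rectangular cross-section assumed
    wetted_perimeter = width_m + 2.0 * depth_m
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative values: ~2 km flow width, 20 m deep channels,
# paleo-slope ~0.64 m/km = 0.00064; n = 0.03 is an assumed bed roughness.
q = manning_discharge(width_m=2000.0, depth_m=20.0, slope=0.00064)
print(f"{q:.0f} m^3/s")
```

With these assumed parameters the result lands within a factor of a few of the ~160 000 m3 s−1 estimate, which is about as much agreement as a uniform-flow approximation of an outburst flood can offer.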
Many older adults experience memory changes that can have a meaningful impact on their everyday lives, such as restrictions to lifestyle activities and negative emotions. Older adults also report a variety of positive coping responses that help them manage these changes. The purpose of this study was to determine how objective cognitive performance and self-reported memory are related to the everyday impact of memory change.
We examined these associations in a sample of 94 older adults (age 60–89, 52% female) along a cognitive ability continuum from normal cognition to mild cognitive impairment.
Correlational analyses revealed that greater restrictions to lifestyle activities (|rs| = .36–.66), more negative emotion associated with memory change (|rs| = .27–.76), and an overall greater burden of memory change on everyday living (|rs| = .28–.61) were associated with poorer objective memory performance and lower self-reported memory ability and satisfaction. Performance on objective measures of executive attention was unrelated to the impact of memory change. Self-reported strategy use was positively related to positive coping with memory change (|r| = .26) but was also associated with more negative emotions regarding memory change (|r| = .23).
Given the prevalence of memory complaints among older adults, it is important to understand the experience of memory change and its impact on everyday functioning in order to develop services that target the specific needs of this population.
Antibiotics are among the most common medications prescribed in nursing homes. The annual prevalence of antibiotic use in residents of nursing homes ranges from 47% to 79%, and more than half of antibiotic courses initiated in nursing-home settings are unnecessary or prescribed inappropriately (wrong drug, dose, or duration). Inappropriate antibiotic use is associated with a variety of negative consequences including Clostridioides difficile infection (CDI), adverse drug effects, drug–drug interactions, and antimicrobial resistance. In response to this problem, public health authorities have called for efforts to improve the quality of antibiotic prescribing in nursing homes.
To estimate the impact of California’s antimicrobial stewardship program (ASP) mandate on methicillin-resistant Staphylococcus aureus (MRSA) and Clostridioides difficile infection (CDI) rates in acute-care hospitals.
Centers for Medicare and Medicaid Services (CMS)–certified acute-care hospitals in the United States.
2013–2017 data from the CMS Hospital Compare, Provider of Service File and Medicare Cost Reports.
Difference-in-difference model with hospital fixed effects to compare California with all other states before and after the ASP mandate. The outcomes were standardized infection ratios (SIRs) for MRSA and CDI. We analyzed the following time-variant covariates: medical school affiliation, bed count, quality accreditation, number of changes in ownership, compliance with CMS requirements, percentage of intensive care unit beds, average length of stay, patient safety index, and 30-day readmission rate.
In 2013, California hospitals had an average MRSA SIR of 0.79 versus 0.94 in other states, and an average CDI SIR of 1.01 versus 0.77 in other states. California hospitals had increases (P < .05) of 23%, 30%, and 20% in their MRSA SIRs in 2015, 2016, and 2017, respectively. California hospitals were associated with a 20% (P < .001) decrease in the CDI SIR only in 2017.
The mandate was associated with a decrease in CDI SIR and an increase in MRSA SIR.
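The difference-in-differences logic behind these estimates can be sketched as a two-by-two comparison of group means before and after the mandate. The SIR values below are fabricated for illustration, not the study's data, and the sketch omits the hospital fixed effects and covariates used in the actual model.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: the change in the treated group minus the
    change in the control group, under a parallel-trends assumption."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean CDI SIRs (illustrative only): California vs other states.
effect = did_estimate(treated_pre=1.01, treated_post=0.85,
                      control_pre=0.77, control_post=0.80)
print(round(effect, 2))  # -0.19: a relative decrease attributed to the mandate
```

The subtraction of the control-group change is what distinguishes this design from a simple before/after comparison: any secular trend common to all states cancels out.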
Postprandial glycaemia and insulinaemia are important risk factors for type 2 diabetes. The prevalence of insulin resistance in adolescents is increasing, but it is unknown how adolescent participant characteristics such as BMI, waist circumference, fitness and maturity offset may explain responses to a standard meal. The aim of the present study was to examine how such participant characteristics affect the postprandial glycaemic and insulinaemic responses to an ecologically valid mixed meal. Data from the control trials of three separate randomised, crossover experiments were pooled, resulting in a total of 108 participants (fifty-two boys, fifty-six girls; aged 12·5 (SD 0·6) years; BMI 19·05 (SD 2·66) kg/m2). A fasting blood sample was taken for the calculation of fasting insulin resistance, using the homoeostatic model assessment of insulin resistance (HOMA-IR). Further capillary blood samples were taken before and 30, 60 and 120 min after a standardised lunch, providing 1·5 g/kg body mass of carbohydrate, for the quantification of blood glucose and plasma insulin total AUC (tAUC). Hierarchical multiple linear regression demonstrated significant predictors for plasma insulin tAUC were waist circumference, physical fitness and HOMA-IR (F(3,98) = 36·78, P < 0·001, adjusted R2 = 0·515). The variance in blood glucose tAUC was not significantly explained by the predictors used (F(7,94) = 1·44, P = 0·198). Significant predictors for HOMA-IR were BMI and maturity offset (F(2,102) = 14·06, P < 0·001, adjusted R2 = 0·021). In summary, the key findings of the study are that waist circumference, followed by physical fitness, best explained the insulinaemic response to an ecologically valid standardised meal in adolescents. This has important behavioural consequences because these variables can be modified.
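The total AUC and HOMA-IR quantities used above are standard calculations; a minimal sketch, using the study's 0, 30, 60, and 120 min sampling schedule but made-up glucose and insulin values, is:

```python
def trapezoid_tauc(times_min, values):
    """Total area under the curve by the trapezoid rule."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times_min, times_min[1:],
                                         values, values[1:]))

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_mu_l):
    """Homoeostatic model assessment of insulin resistance:
    fasting glucose (mmol/L) x fasting insulin (microU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_mu_l / 22.5

# Sampling schedule from the study; the glucose trace itself is illustrative.
glucose = [5.0, 7.8, 6.9, 5.6]                       # mmol/L
print(trapezoid_tauc([0, 30, 60, 120], glucose))     # tAUC in mmol/L x min
print(round(homa_ir(5.0, 10.0), 2))                  # illustrative fasting values
```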
Introduction: For rhythm control of acute atrial flutter (AAFL) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock), and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAFL, where onset of symptoms was <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an intention-to-treat basis. Statistical significance was assessed using chi-squared tests and multivariable logistic regression. Results: We randomized 76 patients, and none was lost to follow-up. The Drug-Shock (N = 33) and Shock Only (N = 43) groups were similar for all characteristics including mean age (66.3 vs 63.4 yrs), duration of AAFL (30.1 vs 24.5 hrs), previous AAFL (72.7% vs 69.8%), median CHADS2 score (1 vs 1), and mean initial heart rate (128.9 vs 126.0 bpm). The Drug-Shock and Shock Only groups were similar for the primary outcome of conversion (100% vs 93%; absolute difference 7.0%, 95% CI -0.6 to 14.6; P = 0.25). The multivariable analyses confirmed the similarity of the two strategies (P = 0.19). In the Drug-Shock group, 21.2% of patients converted with the infusion.
There were no statistically significant differences for time to conversion (84.2 vs 97.6 minutes), total ED length of stay (9.4 vs 7.5 hours), disposition home (100% vs 95.3%), and stroke within 14 days (0 vs 0). Premature discontinuation of infusion (usually for transient hypotension) was more common in the Drug-Shock group (9.1% vs 0.0%) but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAFL patients to go home in sinus rhythm. IV procainamide alone was effective in only one fifth of patients, much less than for acute AF.
Introduction: Cases of anaphylaxis in children are often not appropriately managed by caregivers. We aimed to develop and to test the effectiveness of an education tool to help pediatric patients and their families better understand anaphylaxis and its management and to improve adherence to current knowledge and treatment guidelines. Methods: The GEAR (Guidelines and Educational programs based on an Anaphylaxis Registry) is an initiative that recruits children with food-induced anaphylaxis who have visited the ED at the Montreal Children's Hospital and at The Children's Clinic located in Montreal, Quebec. The patients and parents, together, were asked to complete six questions related to the triggers, recognition, and management of anaphylaxis at the time of presentation to the allergy clinic. Participants were automatically shown a 5-minute animated video addressing the main knowledge gaps related to the causes and management of anaphylaxis. At the end of the video, participants were redirected to the same six questions to answer them again. To test long-term knowledge retention, the questionnaire will be presented again in one year's time. A paired t-test was used to compare the difference between the baseline score and the follow-up score based on the percentage of correct answers on the questionnaire. Results: From June to November 2019, 95 pediatric patients with diagnosed food-induced anaphylaxis were recruited. The median patient age was 4.5 years (Interquartile Range (IQR): 1.6–7.4) and half were male (51.6%). The mean questionnaire baseline score was 0.77 (77.0%, standard deviation (sd): 0.16) and the mean questionnaire follow-up score was 0.83 (83.0%, sd: 0.17). There was a significant difference between the follow-up score and baseline score (difference: 0.06, 95% CI: 0.04, 0.09). There were no associations of baseline questionnaire scores or change in scores with age or sex.
Conclusion: Our video teaching method was successful in educating patients and their families to better understand anaphylaxis. The next step is to acquire long-term follow-up scores to determine retention of knowledge.
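The paired t-test above compares each participant's follow-up score against their own baseline. A minimal sketch of the statistic, with hypothetical proportion-correct scores for five participants rather than the study's data, is:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t statistic: the mean per-participant difference divided by its
    standard error (sample SD of the differences, n - 1 denominator)."""
    diffs = [a - b for b, a in zip(before, after)]
    mean_d = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean_d / se

# Hypothetical proportion-correct scores (illustrative only, not study data).
baseline  = [0.70, 0.80, 0.75, 0.85, 0.65]
follow_up = [0.80, 0.85, 0.80, 0.90, 0.75]
print(round(paired_t(baseline, follow_up), 2))
```

Pairing removes between-participant variability in baseline knowledge, which is why the design detects a 6-point mean improvement with a fairly small sample.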
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist into large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care), and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF, where symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews to identify perceived barriers and enablers for rapid discharge of AAFF patients. The many interventions included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was length of stay in ED in minutes from time of arrival to time of disposition, and this was analyzed at the individual patient-level using linear mixed effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods respectively were 413.0 vs. 354.0 minutes (P < 0.001). Comparing control to intervention, there was an increase in: use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs. 86.7%; P < 0.001). 
There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01) and a small, statistically nonsignificant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
Electroconvulsive therapy (ECT) is recommended in treatment guidelines as an efficacious therapy for treatment-resistant depression. However, it has been associated with loss of autobiographical memory and short-term reduction in new learning.
To provide clinically useful guidelines to aid clinicians in informing patients regarding the cognitive side-effects of ECT and in monitoring these during a course of ECT, using complex data.
A committee of clinical and academic experts from Australia and New Zealand met to discuss the key issues pertaining to ECT and cognitive side-effects. Evidence regarding cognitive side-effects was reviewed, as was the limited evidence regarding how to monitor them. Both issues were supplemented by the clinical experience of the authors.
Meta-analyses suggest that new learning is impaired immediately following ECT but that group mean scores return at least to baseline by 14 days after ECT. Other cognitive functions are generally unaffected. However, a group mean score that is not reduced from baseline does not rule out impairment, particularly of new learning, in individual patients, especially those at greater risk; monitoring therefore remains important. Evidence suggests that ECT does cause deficits in autobiographical memory. The evidence for schedules of testing to monitor cognitive side-effects is currently limited. We therefore make practical recommendations based on clinical experience.
Despite modern ECT techniques, cognitive side-effects remain an important issue, although their nature and degree remain to be clarified fully. In these circumstances it is useful for clinicians to have guidance regarding what to tell patients and how to monitor these side-effects clinically.
To evaluate the cognitive status in an elderly population including both community-dwellers and institutionalised subjects.
462 subjects (mean age 85.1±6.9 years, 53.2% females) living in the Faenza district (Ravenna, Northern Italy) were interviewed and clinically evaluated. The Cambridge Mental Disorders of the Elderly Examination (CAMDEX) was administered to all participants to collect socio-demographic and clinical information. The cognitive status was evaluated using the cognitive assessment included in the CAMDEX (CAMCOG) and the Mini-Mental State Examination (MMSE) (adjusted by sex and age). Cut-offs were as follows: CAMCOG scores < 80; MMSE scores < 24.
The CAMCOG identified 245 subjects (53.0%) as cognitively impaired; 132 persons (28.6%) had a MMSE score < 24 and were impaired in the activities of daily living. Prevalence of dementia (DSM-IV criteria) was 19.1% (N=88), including 11 cases of ‘questionable’ dementia. Demented subjects were more likely to be women (65.9%), were less educated (p< 0.05) and older than non-demented (p< 0.001). Demented subjects scored significantly lower than non-demented subjects in any cognitive domain at CAMCOG (p< 0.001).
CAMCOG total score, mean (±SD), non-demented vs demented (p < 0.001):

All subjects:        78.4 (±15.9) vs 28.7 (±21.7)
Males, all:          81.1 (±13.0) vs 35.0 (±19.9)
Males, ≤85 years:    83.3 (±12.3) vs 38.0 (±20.5)
Males, >85 years:    75.7 (±13.2) vs 34.0 (±20.2)
Females, all:        75.7 (±18.0) vs 24.3 (±21.9)
Females, ≤85 years:  82.5 (±12.4) vs 58.5 (±10.8)
Females, >85 years:  67.0 (±20.2) vs 18.4 (±17.5)
Among demented subjects, only 4.5% were treated with acetylcholinesterase inhibitors (p=0.046); 10.2% used other anti-dementia medications (p=0.067).
Despite the high prevalence of dementia, only a few subjects affected by dementia were properly treated.
To evaluate the long-term safety and efficacy of adjunctive aripiprazole (ARI) to lithium (LI) or valproate (VAL) in delaying time to relapse in bipolar I disorder.
Bipolar I disorder subjects with a current manic or mixed episode received LI or VAL for at least 2 weeks; inadequate responders (YMRS score ≥ 16 and ≤35% decrease from baseline at 2 weeks) received adjunctive ARI. Subjects maintaining mood stability (YMRS and MADRS ≤ 12 for 12 consecutive weeks) were randomised 1:1 to double-blind ARI (10 to 30 mg/day) or placebo (PBO) plus LI or VAL. Relapse was monitored up to 52 weeks.
337 subjects were randomised to continuation of mood stabiliser plus adjunctive ARI or PBO; 61.3% and 52.7%, respectively, completed the study. Adjunctive ARI significantly delayed the time to any relapse, hazard ratio = 0.544 (95% CI: 0.33, 0.89, log-rank p = 0.014). Overall relapse rates at 52 weeks were 14.9% and 25.4% in ARI vs PBO subjects. A superior reduction in CGI-BP Mania Severity of Illness from baseline at 52 weeks was also observed (0.3 vs. 0.0, respectively, p = 0.01). Adverse events generally were as expected per known drug and illness profiles with no significant difference in mean change in body weight between adjunctive PBO (0.60 kg) and adjunctive ARI (1.07 kg) (p = 0.49 Week 52, LOCF).
Continuation of aripiprazole treatment increased time to relapse of any mood episode compared with placebo plus LI/VAL over 1 year, indicating a long-term benefit of continuing adjunctive aripiprazole with a mood stabiliser after sustained remission is achieved.
Major depression is a significant problem for people with a traumatic brain injury (TBI), and its treatment remains difficult. A promising approach to treating depression is mindfulness-based cognitive therapy (MBCT), a relatively new therapeutic approach rooted in mindfulness-based stress reduction (MBSR) and cognitive behavioral therapy (CBT). We conducted this study to examine the effectiveness of MBCT in reducing depression symptoms among people who have a TBI.
Twenty individuals diagnosed with major depression were recruited from a rehabilitation clinic and completed the 8-week MBCT intervention. Instruments used to measure depression symptoms included: BDI-II, PHQ-9, HADS, SF-36 (Mental Health subscale), and SCL-90 (Depression subscale). They were completed at baseline and post-intervention.
All instruments indicated a statistically significant reduction in depression symptoms post-intervention (p < .05). For example, the total mean score on the BDI-II decreased from 25.2 (9.8) at baseline to 18.2 (11.7) post-intervention (p=.001). Using a PHQ threshold of 10, the proportion of participants with a diagnosis of major depression was reduced by 59% at follow-up (p=.012).
Most participants reported reductions in depression symptoms after the intervention such that many would not meet the criteria for a diagnosis of major depression. This intervention may provide an opportunity to address a debilitating aspect of TBI and could be implemented concurrently with more traditional forms of treatment, possibly enhancing their success. The next step will involve the execution of multi-site, randomized controlled trials to fully demonstrate the value of the intervention.
Attentional bias is an important psychological mechanism that has been extensively explored within the anxiety literature and, more recently, in chronic pain. Cognitive behavioural models of chronic fatigue syndrome (CFS) and chronic pain suggest an overlap in the mechanisms of these two conditions. The current study investigated attentional bias towards health-threat stimuli in individuals with CFS compared to healthy controls. The study also examined whether individuals with CFS have impaired executive attention, and how this was related to attentional bias.
Two participant groups, CFS (n = 27) and healthy control (n = 35), completed a Visual Probe Task measuring attentional bias towards health-threat stimuli (words and pictures) presented at 500ms and 1250ms, and an Attention Network Test measuring alerting, orienting and executive attention. Participants also completed a series of standard self-report measures.
When compared to the control group, the CFS group showed greater attentional bias towards threat-words, but not pictures, regardless of stimulus duration. This was not related to anxiety or depression. The CFS group was also significantly impaired on executive attention compared to the controls. Post-hoc analyses indicated that only CFS individuals with poor executive attention showed a threat-word bias when compared to controls and CFS individuals with good executive attention.
The findings from this study suggest that CFS individuals show enhanced attentional biases for health-threat stimuli, which may contribute to the perpetuation of the condition. Moreover, the attentional biases in CFS are dependent on an individual's capacity to voluntarily control their attention.
Nearly half of care home residents with advanced dementia have clinically significant agitation. Little is known about costs associated with these symptoms toward the end of life. We calculated monetary costs associated with agitation from UK National Health Service, personal social services, and societal perspectives.
Prospective cohort study.
Thirteen nursing homes in London and the southeast of England.
Seventy-nine people with advanced dementia (Functional Assessment Staging Tool grade 6e and above) residing in nursing homes, and thirty-five of their informal carers.
Data collected at study entry and monthly for up to 9 months, extrapolated for expression per annum. Agitation was assessed using the Cohen-Mansfield Agitation Inventory (CMAI). Health and social care costs of residing in care homes, and costs of contacts with health and social care services were calculated from national unit costs; for a societal perspective, costs of providing informal care were estimated using the resource utilization in dementia (RUD)-Lite scale.
After adjustment, health and social care costs, and costs of providing informal care varied significantly by level of agitation as death approached, from £23,000 over a 1-year period with no agitation symptoms (CMAI agitation score 0–10) to £45,000 at the most severe level (CMAI agitation score >100). On average, agitation accounted for 30% of health and social care costs. Informal care costs were substantial, constituting 29% of total costs.
With the increasing prevalence of dementia, costs of care will impact on healthcare and social services systems, as well as informal carers. Agitation is a key driver of these costs in people with advanced dementia presenting complex challenges for symptom management, service planners, and providers.
To measure the association between statewide adoption of the Centers for Disease Control and Prevention’s (CDC’s) Core Elements for Hospital Antimicrobial Stewardship Programs (Core Elements) and hospital-associated methicillin-resistant Staphylococcus aureus bacteremia (MRSA) and Clostridioides difficile infection (CDI) rates in the United States. We hypothesized that states with a higher percentage of reported compliance with the Core Elements have significantly lower MRSA and CDI rates.
All US states.
Observational longitudinal study.
We used 2014–2016 data from Hospital Compare, Provider of Service files, Medicare cost reports, and the CDC’s Patient Safety Atlas website. Outcomes were MRSA standardized infection ratio (SIR) and CDI SIR. The key explanatory variable was the percentage of hospitals that meet the Core Elements in each state. We estimated state and time fixed-effects models with time-variant controls, and we weighted our analyses for the number of hospitals in the state.
The percentage of hospitals reporting compliance with the Core Elements between 2014 and 2016 increased in all states. A 1% increase in reported ASP compliance was associated with a 0.3% decrease (P < .01) in CDIs in 2016 relative to 2014. We did not find an association for MRSA infections.
Increasing documentation of the Core Elements may be associated with decreases in the CDI SIR. We did not find evidence of such an association for the MRSA SIR, probably due to the short length of the study and variety of stewardship strategies that ASPs may encompass.
Background: Observational studies have reported an association between childhood obesity and a higher risk of multiple sclerosis (MS). However, the difficulties to fully account for confounding and long recall periods make causal inference from these studies challenging. The objective of this study was to assess the contribution of childhood obesity to the development of MS through Mendelian randomization, which uses genetic associations to minimize the risk of confounding. Methods: We selected 23 independent genetic variants strongly associated with childhood body mass index (BMI) in a genome-wide association study (GWAS) which included 47,541 children. The corresponding effects of these variants on risk of MS were obtained from a GWAS of 14,802 MS cases and 26,703 controls. Standard two-sample Mendelian randomization methods were performed, with additional sensitivity analyses to assess the likelihood of bias from genetic pleiotropy. Results: The inverse-variance weighted MR analysis revealed that one standard deviation increase in childhood BMI increased odds of MS by 26% (odds ratio=1.26, 95% confidence interval 1.10-1.45, p=0.001). There was no significant heterogeneity across the individual estimates. Sensitivity analyses were consistent with the main findings and provided no evidence of pleiotropy. Conclusions: This study provides genetic support of a role for increased childhood BMI in the development of MS.
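The inverse-variance weighted (IVW) estimate reported above combines per-variant Wald ratios across the instrument SNPs. A minimal sketch of the fixed-effect IVW calculation, with fabricated per-SNP summary statistics rather than the GWAS data, is:

```python
import math

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse-variance weighted MR estimate.

    Each variant contributes a Wald ratio beta_outcome / beta_exposure,
    weighted by beta_exposure^2 / se_outcome^2; the standard error is
    sqrt(1 / sum of weights)."""
    weights = [bx * bx / (se * se) for bx, se in zip(beta_exposure, se_outcome)]
    ratios = [by / bx for bx, by in zip(beta_exposure, beta_outcome)]
    beta = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return beta, se

# Toy per-SNP effects on childhood BMI (exposure) and MS (outcome, log-odds).
beta, se = ivw_mr(beta_exposure=[0.10, 0.08, 0.12],
                  beta_outcome=[0.025, 0.018, 0.031],
                  se_outcome=[0.01, 0.01, 0.01])
print(f"OR per SD increase in exposure = {math.exp(beta):.2f}")
```

Exponentiating the pooled log-odds estimate gives the odds ratio per standard deviation of the exposure, the same scale on which the study reports its OR of 1.26.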
Introduction: Patients who achieve return of spontaneous circulation (ROSC) during the prehospital phase of resuscitation after an out-of-hospital cardiac arrest (OHCA) have a better survival rate than those who do not. The duration of resuscitation efforts before transport is initiated generally does not vary with the initial rhythm observed. This study compares, by initial rhythm, the duration of resuscitation required to generate the majority of prehospital ROSC and of prehospital ROSC leading to survival. Methods: This cohort study used databases collected by the Corporation d'Urgences-santé in the Montreal region between 2010 and 2015. Patients with an OHCA of medical origin were included. Patients whose OHCA was witnessed by paramedics were excluded, as were those whose initial rhythm was unknown. We compared between groups (shockable rhythm [SR], pulseless electrical activity [PEA], and asystole) the rates of prehospital ROSC and the time required to obtain the majority of prehospital ROSC and of prehospital ROSC leading to survival. Results: A total of 6002 patients (3851 men and 2151 women) with a mean age of 52 years (±10) were included in the study, of whom 563 (9%) survived to hospital discharge and 1310 (22%) achieved prehospital ROSC. A total of 1545 (26%) patients had an SR, 1654 (28%) PEA, and 2803 (47%) asystole. Patients with an SR achieved prehospital ROSC and prehospital ROSC leading to survival more often than patients with PEA, who in turn had a better prognosis than those with initial asystole (777 patients [55%] vs 385 [23%] vs 148 [5%], p < 0.001; 431 [28%] vs 85 [5%] vs 7 [0.2%], p < 0.001, respectively).
ROSC also occurred more quickly when the initial rhythm was an SR (13 minutes [±12] vs 18 [±13] vs 25 [±12], p < 0.001). However, a longer resuscitation period was needed to obtain 95% of the prehospital ROSC leading to survival in patients with an SR (26 minutes vs 21 minutes vs 21 minutes). Conclusion: Patients with an initial shockable rhythm after OHCA have a better prognosis. It would be reasonable to transport patients with PEA or asystole to hospital sooner than those with a shockable rhythm if termination of resuscitation is not being considered.