Introduction: Selecting appropriate patients for hospitalization following emergency department (ED) evaluation of syncope is critical for serious adverse event (SAE) identification. The primary objective of this study was to determine the association between hospitalization and SAE detection using propensity score (PS) matching. The secondary objective was to determine whether SAE identification with hospitalization varied by Canadian Syncope Risk Score (CSRS) risk category. Methods: This was a secondary analysis of two large prospective cohort studies that enrolled adults (age ≥16 years) with syncope at 11 Canadian EDs. Patients with a serious condition identified during the index ED evaluation were excluded. The outcome was a 30-day SAE, identified either in-hospital for hospitalized patients or after ED disposition for discharged patients, and included death, ventricular arrhythmia, non-lethal arrhythmia and non-arrhythmic SAE (myocardial infarction, structural heart disease, pulmonary embolism, hemorrhage). Patients were propensity matched using age, sex, blood pressure, prodrome, presumed ED diagnosis, ECG abnormalities, troponin, heart disease, hypertension, diabetes, arrival by ambulance and hospital site. Multivariable logistic regression assessed the interaction between CSRS and SAE detection, and we report odds ratios (OR). Results: Of the 8183 patients enrolled, 743 (9.0%) were hospitalized and 658 (88.6%) were PS matched. The OR for SAE detection for hospitalized patients compared with those discharged from the ED was 5.0 (95% CI 3.3-7.4); for non-lethal arrhythmia, 5.4 (95% CI 3.1-9.6); and for non-arrhythmic SAE, 6.3 (95% CI 2.9-13.5). Overall, the odds of any SAE identification, and specifically of non-lethal arrhythmia and non-arrhythmic SAE, were significantly higher in-hospital among hospitalized patients than among those discharged from the ED (p < 0.001). There were no significant differences in 30-day mortality (p = 1.00) or ventricular arrhythmia detection (p = 0.21). 
The interaction between ED disposition and CSRS was significant (p = 0.04), and the probability of 30-day SAEs identified in-hospital was greater for medium- and high-risk CSRS patients. Conclusion: In this multicenter prospective cohort, 30-day SAE detection was greater for hospitalized than for discharged patients. CSRS low-risk patients are least likely to have SAEs identified in-hospital; outpatient monitoring for medium-risk patients requires further study.
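The propensity-score matching described above can be illustrated with a minimal sketch. The abstract does not state the matching algorithm, so this assumes 1:1 greedy nearest-neighbour matching on the logit of the propensity score with a caliper of 0.2 pooled standard deviations (a common convention, not necessarily the study's); the function name and inputs are hypothetical.

```python
import math

def greedy_match(treated, controls, caliper=0.2):
    """1:1 greedy nearest-neighbour matching on the logit of the
    propensity score, with a caliper expressed in standard deviations
    of the pooled logits (a common, but not the only, convention).
    `treated` and `controls` are lists of propensity scores in (0, 1);
    returns a list of (treated_index, control_index) pairs."""
    logit = lambda p: math.log(p / (1 - p))
    t = [(i, logit(p)) for i, p in enumerate(treated)]
    c = [(j, logit(p)) for j, p in enumerate(controls)]
    pooled = [v for _, v in t + c]
    mean = sum(pooled) / len(pooled)
    sd = (sum((v - mean) ** 2 for v in pooled) / (len(pooled) - 1)) ** 0.5
    max_dist = caliper * sd
    pairs, used = [], set()
    # match treated units in order of score; each control is used at most once
    for i, ti in sorted(t, key=lambda x: x[1]):
        best = min((c_ for c_ in c if c_[0] not in used),
                   key=lambda x: abs(x[1] - ti), default=None)
        if best is not None and abs(best[1] - ti) <= max_dist:
            pairs.append((i, best[0]))
            used.add(best[0])
    return pairs

# two hospitalized patients matched against four discharged candidates
print(greedy_match([0.30, 0.62], [0.28, 0.35, 0.60, 0.90]))  # → [(0, 0), (1, 2)]
```

Unmatched treated units (no control within the caliper) are dropped, which is one reason only 658 of the 743 hospitalized patients (88.6%) were matched.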
Introduction: For rhythm control of acute atrial flutter (AAFL) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock) and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAFL where onset of symptoms was <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an intention-to-treat basis. Statistical significance was assessed using chi-squared tests and multivariable logistic regression. Results: We randomized 76 patients, and none was lost to follow-up. The Drug-Shock (N = 33) and Shock Only (N = 43) groups were similar for all characteristics including mean age (66.3 vs 63.4 yrs), duration of AAFL (30.1 vs 24.5 hrs), previous AAFL (72.7% vs 69.8%), median CHADS2 score (1 vs 1), and mean initial heart rate (128.9 vs 126.0 bpm). The Drug-Shock and Shock Only groups were similar for the primary outcome of conversion (100% vs 93%; absolute difference 7.0%, 95% CI -0.6 to 14.6; P = 0.25). The multivariable analyses confirmed the similarity of the two strategies (P = 0.19). In the Drug-Shock group, 21.2% of patients converted with the infusion. 
There were no statistically significant differences for time to conversion (84.2 vs 97.6 minutes), total ED length of stay (9.4 vs 7.5 hours), disposition home (100% vs 95.3%), and stroke within 14 days (0 vs 0). Premature discontinuation of infusion (usually for transient hypotension) was more common in the Drug-Shock group (9.1% vs 0.0%) but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAFL patients to go home in sinus rhythm. IV procainamide alone was effective in only one fifth of patients, much less than for acute AF.
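The reported absolute difference and confidence interval can be reproduced with a Wald (normal-approximation) interval on the back-calculated counts (33/33 vs 40/43 conversions); both the CI method and the counts are assumptions, as the abstract does not state them.

```python
import math

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# 100% (33/33) vs 93% (40/43): counts back-calculated from the abstract
d, lo, hi = diff_proportions_ci(33, 33, 40, 43)
print(round(100 * d, 1), round(100 * lo, 1), round(100 * hi, 1))  # → 7.0 -0.6 14.6
```

The output matches the reported 7.0% difference (95% CI -0.6 to 14.6), which supports, without proving, that a simple Wald interval was used.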
Introduction: Older (age ≥65 years) trauma patients suffer increased morbidity and mortality. This is due in part to under-triage of older trauma victims, resulting in lack of transfer to a trauma centre or failure to activate the trauma team. There are currently no Canadian guidelines for the management of older trauma patients. The objective of this study was to identify modifiers to the prehospital and emergency department (ED) phases of major trauma care for older adults based on expert consensus. Methods: We conducted a modified Delphi study to assess senior-friendly major trauma care modifiers based on national expert consensus. The panel consisted of 24 trauma care providers across Canada, including medical directors, paramedics, emergency physicians, emergency nurses, trauma surgeons and trauma administrators. Following a literature review, we developed an online Delphi survey consisting of 16 trauma care modifiers. Three online survey rounds were distributed, and panelists were asked to score items on a 9-point Likert scale. The following predetermined thresholds were used: appropriate (median score 7–9, without disagreement); inappropriate (median score 1–3, without disagreement); and uncertain (any median score with disagreement). The disagreement index (DI) is a method for measuring consensus within groups; agreement was defined a priori as a DI score <1. Results: There was a 100% response rate for all survey rounds. Three new trauma care modifiers were suggested by panelists. Of 19 trauma care modifiers, the expert panel achieved consensus agreement for 17 items. The prehospital modifier with the strongest agreement for transfer to a trauma centre was a respiratory rate <10 or >20 breaths/minute or needing ventilatory support (DI = 0.24). The ED modifier with the strongest level of agreement was obtaining a 12-lead electrocardiogram following the primary and secondary survey for all older adults (DI = 0.01). 
Two trauma care modifiers failed to reach consensus agreement: transporting older patients with ground-level falls to a trauma centre, and activating the trauma team based solely on an age ≥65 years. Conclusion: Using a modified Delphi process, an expert panel agreed upon 17 trauma care modifiers for older adults in the prehospital and ED phases of care. These modifiers may improve the delivery of senior-friendly trauma care and should be considered when developing local and national trauma guidelines.
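The predetermined appropriateness thresholds above translate directly into code. This sketch assumes the disagreement determination (the RAND/UCLA disagreement index mentioned in the abstract) is computed separately and passed in as a boolean; the function name is illustrative.

```python
from statistics import median

def classify_item(scores, disagreement):
    """Classify a Delphi item from panelists' 9-point Likert scores,
    using the study's predetermined thresholds:
      appropriate   = median 7-9 without disagreement
      inappropriate = median 1-3 without disagreement
      uncertain     = any median with disagreement, or a middling median."""
    m = median(scores)
    if disagreement:
        return "uncertain"
    if 7 <= m <= 9:
        return "appropriate"
    if 1 <= m <= 3:
        return "inappropriate"
    return "uncertain"

print(classify_item([8, 9, 7, 8], disagreement=False))  # → appropriate
```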
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist in large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care) and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF whose symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews to identify perceived barriers and enablers for rapid discharge of AAFF patients. The intervention components included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was ED length of stay in minutes, from time of arrival to time of disposition, analyzed at the individual patient level using linear mixed-effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients, with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods respectively were 413.0 vs 354.0 minutes (P < 0.001). Comparing control to intervention, there was an increase in use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs 86.7%; P < 0.001). 
There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), and a small, non-significant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
Introduction: Mild traumatic brain injury (mTBI) is a common problem: each year in Canada, its incidence is estimated at 500-600 cases per 100,000. Between 10% and 56% of mTBI patients develop persistent post-concussion symptoms (PPCS) that can last for more than 90 days. It is therefore important for clinicians to identify patients who are at risk of developing PPCS. We hypothesized that blood biomarkers drawn upon patient arrival at the Emergency Department (ED) could help predict PPCS. The main objective of this project was to measure the association between four biomarkers and the incidence of PPCS 90 days post mTBI. Methods: Patients were recruited in seven Canadian EDs. Non-hospitalized patients aged ≥14 years, with a documented mTBI that occurred within 24 hrs of ED consultation and a GCS score ≥13 on arrival, were included. Sociodemographic and clinical data as well as blood samples were collected in the ED. A standardized telephone questionnaire was administered 90 days after the ED visit. The following biomarkers were analyzed using enzyme-linked immunosorbent assay (ELISA): S100B protein, neuron-specific enolase (NSE), cleaved-Tau (c-Tau) and glial fibrillary acidic protein (GFAP). The primary outcome measure was the presence of persistent symptoms at 90 days after mTBI, as assessed using the Rivermead Post-Concussion Symptoms Questionnaire (RPQ). A ROC curve was constructed for each biomarker. Results: 1276 patients were included in the study. The median age of this cohort was 39 (IQR 23-57) years, 61% were male and 15% suffered PPCS. The median values (IQR) for patients with PPCS compared to those without were: 43 pg/mL (26-67) versus 42 pg/mL (24-70) for S100B protein, 50 pg/mL (50-223) versus 50 pg/mL (50-199) for NSE, 2929 pg/mL (1733-4744) versus 3180 pg/mL (1835-4761) for c-Tau, and 1644 pg/mL (650-3215) versus 1894 pg/mL (700-3498) for GFAP. The areas under the curve (AUC) were 0.495, 0.495, 0.51 and 0.54, respectively. 
Conclusion: Among mTBI patients, S100B protein, NSE, c-Tau and GFAP levels measured during the first 24 hours after trauma do not appear to predict PPCS. Future research should test other biomarkers to determine their usefulness in predicting PPCS when combined with relevant clinical data.
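The AUC of a single continuous biomarker is equivalent to the Mann-Whitney probability that a randomly chosen PPCS patient has a higher value than a randomly chosen non-PPCS patient. A tie-aware sketch (variable names illustrative):

```python
def auc(pos, neg):
    """AUC for a single continuous marker: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case,
    counting ties as 1/2 (the Mann-Whitney formulation)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([3, 4], [1, 2]))  # perfect separation → 1.0
```

Values near 0.5, like the 0.495-0.54 reported above, indicate essentially no discriminative ability.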
Introduction: Clinical assessment of patients with mTBI is challenging, and overuse of head CT in the emergency department (ED) is a major problem. Over the last decades, studies have attempted to reduce unnecessary head CTs following a mTBI by identifying new tools aiming to predict intracranial bleeding. S100B serum protein level might help reduce such imaging, since a higher level of S100B protein has been associated with intracranial hemorrhage following a mTBI in previous literature. The main objective of this study was to assess whether the S100B serum protein level is associated with clinically important brain injury and could be used to reduce the number of head CTs following a mTBI. Methods: This prospective multicenter cohort study was conducted in five Canadian EDs. mTBI patients with a Glasgow Coma Scale (GCS) score of 13-15 in the ED and a blood sample drawn within 24 hours of the injury were included. S100B protein was analyzed using enzyme-linked immunosorbent assay (ELISA). All types of intracranial bleeding were reviewed by a radiologist who was blinded to the biomarker results. The main outcome was the presence of clinically important brain injury. Results: A total of 476 patients were included. Mean age was 41 ± 18 years and 150 (31.5%) were female. Twenty-four (5.0%) patients had a clinically significant intracranial hemorrhage while 37 (7.8%) had any type of intracranial bleeding. The median S100B value (Q1-Q3) was 0.043 μg/L (0.008-0.080) for patients with clinically important brain injury versus 0.039 μg/L (0.023-0.059) for patients without. Sensitivity and specificity of the S100B protein level, if used alone to detect clinically important brain injury, were 16.7% (95% CI 4.7-37.4) and 88.5% (95% CI 85.2-91.3), respectively. Conclusion: S100B serum protein level was not associated with clinically significant intracranial hemorrhage in mTBI patients. 
This protein did not appear to be useful for reducing the number of head CTs ordered in the ED and would have missed many clinically important brain injuries. Future research should focus on different ways to assess mTBI patients and ultimately reduce unnecessary head CTs.
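The reported operating characteristics correspond to a 2x2 table that can be back-calculated from the abstract (4 of the 24 clinically important injuries flagged; 400 of the 452 patients without such injury below the cut-off). These counts are an inference, not stated in the abstract.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Back-calculated counts (an assumption): 24 clinically important injuries,
# of which 4 exceeded the S100B cut-off; 452 patients without such injury,
# of which 400 were below the cut-off.
sens, spec = sens_spec(tp=4, fn=20, fp=52, tn=400)
print(round(100 * sens, 1), round(100 * spec, 1))  # → 16.7 88.5
```

A sensitivity this low is what makes the marker unusable as a rule-out test: most clinically important injuries fall below the cut-off.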
Introduction: Wide variability exists in emergency department (ED) syncope management. The Canadian Syncope Risk Score (CSRS) was derived and validated to predict the probability of 30-day serious outcomes after ED disposition. The objective was to identify barriers and facilitators among physicians for CSRS use to stratify risk and guide disposition decisions. Methods: We conducted semi-structured interviews with physicians involved in ED syncope care at 8 Canadian sites. We used purposive sampling, contacting ED physicians, cardiologists, internists, and hospitalists until theme saturation was reached. Interview questions were designed to understand whether the CSRS recommendations are consistent with current practice, barriers and facilitators for application into practice, and intention for future CSRS use. Interviews were conducted via telephone or videoconference. Two independent raters coded interviews using an inductive approach to identify themes, with discrepancies resolved through consensus. Our methods were consistent with the Knowledge to Action Framework, which highlights the need to assess barriers and facilitators for knowledge use and to adapt new interventions to local contexts. Results: We interviewed 14 ED physicians, 7 cardiologists, and 10 hospitalists/internists across 8 sites. All physicians reported the use of electrocardiograms for patients with syncope, a key component of the CSRS criteria. Almost all physicians reported that the low-risk recommendation (discharge without specific follow-up) was consistent with current practice, while less consistency was seen for the moderate-risk (15 days of outpatient monitoring) and high-risk recommendations (outpatient monitoring and/or admission). Key barriers to following the CSRS included a lack of access to outpatient monitoring and uncertainty over timely follow-up care. Other barriers included patient/family concerns, social factors, and necessary bloodwork. 
Facilitators included assisting with patient education, reassurance of their clinical gestalt, and optimal patient factors (e.g. reliability to return, support at home, few comorbidities). Conclusion: Physicians are receptive to using the CSRS tool for risk stratification and decision support. Implementation should address identified barriers, and adaptation to local settings may involve modifying the recommended clinical actions based on local resources and feasibility.
Introduction: Each year, 3 in 1000 Canadians sustain a mild traumatic brain injury (mTBI). Many of these mTBIs are accompanied by various co-injuries such as dislocations, sprains, fractures or internal injuries. A number of these patients, with or without co-injuries, will suffer from persistent post-concussive symptoms (PPCS) more than 90 days post injury. However, little is known about the impact of co-injuries on mTBI outcome. This study aims to describe the impact of co-injuries on PPCS and on patient return to normal activities. Methods: This multicenter prospective cohort study took place in seven large Canadian Emergency Departments (ED). Inclusion criteria: patients aged ≥14 years with a documented mTBI that occurred within 24 hours of the ED visit and a Glasgow Coma Scale score of 13-15. Patients who were admitted following their ED visit or unable to consent were excluded. Clinical and sociodemographic information was collected during the initial ED visit. A research nurse then conducted three follow-up phone interviews at 7, 30 and 90 days post-injury, assessing symptom evolution using the validated Rivermead Post-concussion Symptoms Questionnaire (RPQ). Adjusted risk ratios (RR) were calculated to estimate the influence of co-injuries. Results: A total of 1674 patients were included, of whom 1023 (61.1%) had at least one co-injury. At 90 days, patients with co-injuries appeared to be at higher risk of having at least 3 symptoms rated ≥2 points on the RPQ (RR: 1.28, 95% CI 1.02-1.61) and of experiencing the following symptoms: dizziness (RR: 1.50, 95% CI 1.03-2.20), fatigue (RR: 1.35, 95% CI 1.05-1.74), headaches (RR: 1.53, 95% CI 1.10-2.13), taking longer to think (RR: 1.50, 95% CI 1.07-2.11) and feeling frustrated (RR: 1.45, 95% CI 1.01-2.07). We also observed that patients with co-injuries were at higher risk of non-return to their normal activities (RR: 2.31, 95% CI 1.37-3.90). 
Conclusion: Patients with co-injuries could be at higher risk of suffering from specific symptoms at 90 days post-injury and of being unable to return to normal activities 90 days post-injury. A better understanding of the impact of co-injuries on mTBI could improve patient management. However, further research is needed to determine whether the differences shown in this study are due to the impact of co-injuries on mTBI recovery or to the co-injuries themselves.
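The unadjusted form of a risk ratio with its log-scale Wald confidence interval can be sketched as follows. The study reports adjusted RRs from a regression model, which this does not reproduce, and the counts in the usage example are invented for illustration only.

```python
import math

def risk_ratio(a, n1, b, n2, z=1.96):
    """Unadjusted risk ratio with a 95% CI computed on the log scale.
    a/n1 = events/total in the exposed (e.g. co-injury) group,
    b/n2 = events/total in the unexposed group."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return (rr,
            math.exp(math.log(rr) - z * se),
            math.exp(math.log(rr) + z * se))

# Illustrative counts only (not from the study):
rr, lo, hi = risk_ratio(30, 100, 15, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # → 2.0 1.15 3.48
```

A CI whose lower bound exceeds 1 (as for all the RRs reported above) indicates a statistically significant excess risk at the 5% level.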
Introduction: Emergency department (ED) syncope management is extremely variable. We developed practice recommendations based on the validated Canadian Syncope Risk Score (CSRS) and an outpatient cardiac monitoring strategy, with physician input. Methods: We used a 2-step approach. Step 1: We pooled data from the derivation and validation prospective cohort studies (with adequate sample size) conducted at 11 Canadian sites (Sep 2010 to Apr 2018). Adults with syncope were enrolled, excluding those with a serious outcome identified during index ED evaluation. 30-day adjudicated serious outcomes were arrhythmic (arrhythmias, unknown cause of death) and non-arrhythmic (MI, structural heart disease, pulmonary embolism, hemorrhage). We compared the serious outcome proportion among risk categories using the Cochran-Armitage test. Step 2: We conducted semi-structured interviews using observed risk to develop and refine the recommendations. We used purposive sampling of physicians involved in syncope care at 8 sites from Jun-Dec 2019 until theme saturation was reached. Two independent raters coded interviews using an inductive approach to identify themes; discrepancies were resolved by consensus. Results: Of the 8176 patients (mean age 54, 55% female), 293 (3.6%; 95% CI 3.2-4.0%) experienced 30-day serious outcomes: 0.4% deaths, 2.5% arrhythmic and 1.1% non-arrhythmic outcomes. The serious outcome proportion significantly increased from low- to high-risk categories (p < 0.001; overall 0.6% to 27.7%; arrhythmic 0.2% to 17.3%; non-arrhythmic 0.4% to 5.9%, respectively). The C-statistic was 0.88 (95% CI 0.86-0.90). Non-arrhythmic risk per day for the first 2 days was 0.5% for medium-risk and 2% for high-risk patients, and very low thereafter. We recruited 31 physicians (14 ED, 7 cardiologists, 10 hospitalists/internists). 80% of physicians agreed that low-risk patients can be discharged without specific follow-up, with inconsistencies around length of ED observation. 
For cardiac monitoring of medium- and high-risk patients, 64% indicated that they did not have access; 56% currently admit high-risk patients and an additional 20% agreed with this recommendation. A deeper exploration led to the following refinements: discharge without specific follow-up for low-risk patients, a shared decision-making approach for medium-risk patients, and a short course of hospitalization for high-risk patients. Conclusion: The recommendations (with an online calculator) were developed based on in-depth feedback from key stakeholders to improve uptake during implementation.
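A minimal version of the Cochran-Armitage trend test used in Step 1 can be sketched with a normal approximation and equally spaced scores. The counts in the example are invented, since the abstract reports only proportions.

```python
import math

def cochran_armitage(events, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions across
    ordered groups (normal approximation). Returns (z, two-sided p)."""
    k = len(events)
    scores = scores or list(range(k))          # default: equally spaced
    N, R = sum(totals), sum(events)
    pbar = R / N
    # trend statistic and its variance under the null of no trend
    T = sum(s * (r - n * pbar) for s, r, n in zip(scores, events, totals))
    var = pbar * (1 - pbar) * (
        sum(s * s * n for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / N)
    z = T / math.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Invented counts with risk rising across low/medium/high categories:
z, p = cochran_armitage(events=[6, 40, 50], totals=[1000, 600, 200])
```

With serious-outcome proportions rising from 0.6% to 27.7% across categories, such a test yields the highly significant trend (p < 0.001) reported above.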
Introduction: Mild traumatic brain injury (mTBI) is a serious public health issue, and as much as one third of mTBI patients may be affected by persistent post-concussion symptoms (PPCS) three months after their injury. Even though a significant proportion of all mTBIs are sports-related (SR), little is known about the recovery process of SR mTBI patients and the potential differences between SR mTBI patients and those who suffered a non-sports-related mTBI. The objective of this study was to describe the evolution of PPCS among patients who sustained a SR mTBI compared to those who sustained a non-sports-related mTBI. Methods: This Canadian multicenter prospective cohort study included patients aged ≥14 years with a documented mTBI that occurred within 24 hours of the Emergency Department (ED) visit and a Glasgow Coma Scale score of 13-15. Patients who were hospitalized following their ED visit or unable to consent were excluded. Clinical and sociodemographic information was collected during the initial ED visit. Three follow-up phone interviews were conducted by a research nurse at 7, 30 and 90 days post-injury to assess symptom evolution using the validated Rivermead Post-concussion Symptoms Questionnaire (RPQ). Adjusted risk ratios (RR) were calculated to estimate the impact of the mechanism of injury (sports vs non-sports) on the presence and severity of PPCS. Results: A total of 1676 mTBI patients were included, 358 (21.4%) of whom sustained a SR mTBI. At 90 days post-injury, patients who suffered a SR mTBI appeared to be significantly less affected by fatigue (RR: 0.70, 95% CI 0.50-0.97) and irritability (RR: 0.60, 95% CI 0.38-0.94). However, no difference was observed between the two groups for the other symptoms evaluated in the RPQ. Moreover, the proportions of patients with three symptoms or more, with a score ≥21 on the RPQ, and who returned to their normal activities were also comparable. 
Conclusion: Although persistent post-concussion symptoms differed little overall by mechanism of trauma, our results suggest that patients who sustained a SR mTBI could be at lower risk of experiencing some types of symptoms 90 days post-injury, in particular fatigue and irritability.
We consider the unbounded settling dynamics of a circular disk of diameter d and finite thickness h, evolving with a vertical speed U in a linearly stratified fluid of kinematic viscosity ν and diffusivity κ of the stratifying agent, at moderate Reynolds numbers. The influence of the disk geometry (diameter d and aspect ratio χ = d/h) and of the stratified environment (buoyancy frequency N, viscosity and diffusivity) is experimentally and numerically investigated. Three regimes for the settling dynamics have been identified for a disk reaching its gravitational equilibrium level. The disk first falls broadside-on, experiencing an enhanced drag force that can be linked to the stratification. A second regime corresponds to a change of stability for the disk orientation, from broadside-on to edgewise settling. This occurs when the non-dimensional velocity U/(Nd) becomes smaller than some threshold value. Uncertainties in identifying the threshold value are discussed in terms of disk quality. This behaviour differs from the same problem in a homogeneous fluid, which is associated with a fixed orientation (at its initial value) in the Stokes regime and a broadside-on settling orientation at low but finite Reynolds numbers. Finally, the third regime corresponds to the disk returning to its broadside-on orientation after stopping at its neutrally buoyant level.
The expanding market for concentrated solar power plants requires high-temperature materials for solar surface receivers that would ideally heat an air coolant beyond 1300 K. This work presents an investigation of high-temperature alloys with ceramic coatings (AlN or SiC/AlN stacking) that combine the properties of the substrate (creep resistance, machinability) and of the coating (slow oxidation kinetics, high solar absorptivity). The first results showed that the high-temperature oxidation resistance and optical properties of the metallic alloys were improved by the different coatings. However, fast thermal shocks led to high stress levels that were not compatible with coating integrity, owing to the differences in thermal expansion coefficients.
Stratification due to salt or heat gradients greatly affects the distribution of inert particles and living organisms in the ocean and the lower atmosphere. Laboratory studies considering the settling of a sphere in a linearly stratified fluid confirmed that stratification may dramatically enhance the drag on the body, but failed to identify the generic physical mechanism responsible for this increase. We present a rigorous splitting scheme of the various contributions to the drag on a settling body, which allows them to be properly disentangled whatever the relative magnitude of inertial, viscous, diffusive and buoyancy effects. We apply this splitting procedure to data obtained via direct numerical simulation of the flow past a settling sphere over a range of parameters covering a variety of situations of laboratory and geophysical interest. Contrary to widespread belief, we show that, in the parameter range covered by the simulations, the drag enhancement is generally not primarily due to the extra buoyancy force resulting from the dragging of light fluid by the body, but rather to the specific structure of the vorticity field set in by buoyancy effects. Simulations also reveal how the different buoyancy-induced contributions to the drag vary with the flow parameters. To unravel the origin of these variations, we analyse the different possible leading-order balances in the governing equations. Thanks to this procedure, we identify several distinct regimes which differ by the relative magnitude of length scales associated with stratification, viscosity and diffusivity. We derive the scaling laws of the buoyancy-induced drag contributions in each of these regimes. Considering tangible examples, we show how these scaling laws combined with numerical results may be used to obtain reliable predictions beyond the range of parameters covered by the simulations.
Introduction: For rhythm control of acute atrial fibrillation (AAF) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock) and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAF where onset of symptoms was <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an a priori specified modified intention-to-treat (MITT) basis excluding patients who never received the study infusion (e.g. spontaneous conversion). Data were analyzed using chi-squared tests and logistic regression. Our target sample size was 374 evaluable patients. Results: Of 395 randomized patients, 18 were excluded from the MITT analysis; none were lost to follow-up. The Drug-Shock (N = 198) and Shock Only (N = 180) groups (total = 378) were similar for all characteristics including mean age (60.0 vs 59.5 yrs), duration of AAF (10.1 vs 10.8 hrs), previous AF (67.2% vs 68.3%), median CHADS2 score (0 vs 0), and mean initial heart rate (119.9 vs 118.0 bpm). More patients converted to normal sinus rhythm in the Drug-Shock group (97.0% vs 92.2%; absolute difference 4.8%, 95% CI 0.2-9.9; P = 0.04). The multivariable analyses confirmed the superiority of the Drug-Shock strategy (P = 0.04). 
There were no statistically significant differences for time to conversion (91.4 vs 85.4 minutes), total ED length of stay (7.1 vs 7.7 hours), disposition home (97.0% vs 96.1%), and stroke within 14 days (0 vs 0). Premature discontinuation of infusion was more common in the Drug-Shock group (8.1% vs 0.6%) but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAF patients to go home in sinus rhythm. A strategy of initial cardioversion with procainamide was superior to a strategy of immediate ECV.
Introduction: This systematic scoping review aims to synthesize the available evidence on the epidemiology, risk factors, clinical characteristics, screening tools, prevention strategies, interventions and knowledge of health care providers regarding elder abuse in the emergency department (ED). Methods: A systematic literature search was performed using three databases (Medline, Embase and the Cochrane Library). Grey literature was scrutinized. Studies were considered eligible when they were observational studies or randomized controlled trials reporting on elder abuse in the prehospital and/or ED setting. Data extraction was performed independently by two researchers, and a qualitative approach was used to synthesize the findings. Results: A total of 443 citations were retrieved, from which 58 studies published between 1988 and 2018 were finally included. Prevalence of elder abuse following an ED visit varied between 0.01% and 0.03%. Reporting of elder abuse to the proper legal authorities by ED physicians varied between 2% and 50% of suspected cases. The most commonly reported type of elder abuse detected was neglect, followed by physical abuse. Female gender was the factor most consistently associated with elder abuse. Cognitive impairment, behavioral problems and psychiatric disorders of the patient or the caregiver were also associated with physical abuse and neglect, as well as with more frequent ED consultations. Several screening tools have been proposed, but ED-based validation is lacking. Literature on prehospital- or ED-initiated prevention and interventions was scarce, without any controlled trials. Health care providers were poorly trained to detect and care for older adults suspected of being victims of elder abuse. Conclusion: Elder abuse in the ED is an understudied topic. It remains underrecognized and underreported, with ED prevalence rates lower than those in community-dwelling older adults. 
Health care providers reported lacking appropriate training and knowledge with regards to elder abuse. Dedicated ED studies are required.
Introduction: Elder abuse is infrequently detected in the emergency department (ED), and fewer than 2% of cases are reported to the proper law authorities by ED physicians. This study aims to examine the characteristics of community-dwelling older adults who screened positive for elder abuse during home care assessments and the epidemiology of ED visits by these patients relative to other home care patients. Methods: This was a population-based retrospective cohort study of home care patients in Canada between April 1, 2007 and March 31, 2015. Standardized, comprehensive home care assessments were extracted from the Home Care Reporting System. A positive screen for elder abuse was defined as meeting at least one of these criteria: fearful of a caregiver; unusually poor hygiene; unexplained injuries; or neglected, abused, or mistreated. Home care assessments were linked to the National Ambulatory Care Reporting System in the regions and time periods for which population-based estimates could be obtained, to identify all ED visits within 6 months of the home care assessment. Results: A total of 30,413 of 2,401,492 patients (1.3%) screened positive for elder abuse during a home care assessment. They were more likely to be male (40.5% versus 35.3%, p < 0.001), to have cognitive impairment (82.9% versus 65.3%, p < 0.001), to have a higher frailty index (0.27 versus 0.22, p < 0.001) and to exhibit more depressive symptoms (depression rating scale 1 or more: 68.7% versus 42.7%, p < 0.001). Patients who screened positive for elder abuse were less likely to be independent in activities of daily living (41.9% versus 52.7%, p < 0.001) and reported having fallen more frequently (44.2% versus 35.5%, p < 0.001). Caregiver distress was associated with elder abuse (35.3% versus 18.3%, p < 0.001), but a higher number of hours caring for the patient was not. Victims of elder abuse were more likely to attend the ED for low-acuity conditions (Canadian Triage and Acuity Scale (CTAS) 4 or 5).
Diagnoses at ED discharge were similar, with the exception of acute intoxication, which was more frequent in patients who were victims of abuse. Conclusion: Elder abuse is infrequently detected during home care assessments in community-dwelling older adults. A higher frailty index, cognitive impairment and depressive symptoms were associated with elder abuse during home care assessments. Patients who are victims of elder abuse attend EDs more frequently for low-acuity conditions, but ED discharge diagnoses, except for acute intoxication, are similar.
Introduction: Prompt defibrillation is critical during paediatric cardiac arrest. The main objective of this systematic review was to determine the initial defibrillation energy dose for ventricular fibrillation (VF) or pulseless ventricular tachycardia (pVT) that is associated with sustained return of spontaneous circulation (ROSC) during paediatric cardiac arrest. Associations between initial defibrillation energy dose and any ROSC, survival and defibrillation-induced complications were also assessed. Methods: A systematic review was performed using four databases (Medline, Embase, Web of Science, Cochrane Library) (PROSPERO: CRD42016036734). Human studies (cohort studies or controlled trials) and animal model studies (controlled trials) of paediatric cardiac arrest involving assessment of external defibrillation energy dosing were considered. The primary outcome was sustained ROSC. Two researchers independently reviewed all titles and abstracts of the retrieved citations, selected the studies and extracted the data using a standardized template. Risk of bias in human non-randomised studies was assessed using the ROBINS-I tool (formerly ACROBAT-NRSI) proposed by the Cochrane Collaboration group. Results: The search strategy identified 14,471 citations, of which 232 manuscripts were reviewed. Ten human and 10 animal model studies met the inclusion criteria. Human studies were prospective (n = 6) or retrospective (n = 4) cohort studies and included between 11 and 266 patients (median = 46). Sustained ROSC rates ranged from 0% to 61% (n = 7). No studies reported a statistically significant association between the initial defibrillation energy dose and the rate of sustained ROSC (n = 7) or survival (n = 6). No human studies reported defibrillation-induced complications. Meta-analysis was not considered appropriate due to clinical heterogeneity. The overall risk of bias was moderate.
All animal studies were randomized controlled trials involving between 8 and 52 piglets (median = 27). ROSC was frequently achieved (more than 85%) with energy doses ranging from 2 to 7 joules/kg (n = 7). The defibrillation threshold varied according to body weight and appeared to be higher in infant models. Conclusion: Defibrillation energy doses and thresholds varied according to body weight and trended higher for infants. No definitive association between initial defibrillation dose and the outcomes of sustained ROSC or survival could be demonstrated.
Swimming propagules (embryos and larvae) are a critical component of the life histories of benthic marine animals. Larvae that feed (planktotrophic) have been assumed to swim faster, disperse farther and have more complex behavioural patterns than non-feeding (lecithotrophic) larvae. However, a number of recent studies challenge these early assumptions, suggesting a need to revisit them more formally. The current review presents a quantitative analysis of swimming speed and body size in planktotrophic and lecithotrophic propagules across five major marine phyla (Porifera, Cnidaria, Annelida, Mollusca and Echinodermata). Results of the comparative study showed that swimming speed differences among ciliated propagules can be driven by taxonomy, adult mobility (motile vs sessile) and/or larval nutritional mode. On a phylogenetic level, distinct patterns emerge across phyla and life stages, whereby planktotrophic propagules swim faster in some and lecithotrophic propagules swim faster in others. Interestingly, adults with sessile and sedentary lifestyles produced propagules that swam faster than those produced by motile adults. Understanding similarities and differences among marine propagules associated with different reproductive strategies and adult lifestyles is significant from ecological, evolutionary and applied perspectives. Patterns of swimming can directly impact dispersal and recruitment potential, with implications for the design of larval rearing methods and marine protected areas.