Introduction: For rhythm control of acute atrial flutter (AAFL) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock), and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAFL, where onset of symptoms was <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an intention-to-treat basis. Statistical significance was assessed using chi-squared tests and multivariable logistic regression. Results: We randomized 76 patients, and none was lost to follow-up. The Drug-Shock (N = 33) and Shock Only (N = 43) groups were similar for all characteristics including mean age (66.3 vs 63.4 yrs), duration of AAFL (30.1 vs 24.5 hrs), previous AAFL (72.7% vs 69.8%), median CHADS2 score (1 vs 1), and mean initial heart rate (128.9 vs 126.0 bpm). The Drug-Shock and Shock Only groups were similar for the primary outcome of conversion (100% vs 93%; absolute difference 7.0%, 95% CI -0.6 to 14.6; P = 0.25). The multivariable analyses confirmed the similarity of the two strategies (P = 0.19). In the Drug-Shock group 21.2% of patients converted with the infusion.
There were no statistically significant differences for time to conversion (84.2 vs 97.6 minutes), total ED length of stay (9.4 vs 7.5 hours), disposition home (100% vs 95.3%), and stroke within 14 days (0 vs 0). Premature discontinuation of infusion (usually for transient hypotension) was more common in the Drug-Shock group (9.1% vs 0.0%) but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAFL patients to go home in sinus rhythm. IV procainamide alone was effective in only one fifth of patients, much less than for acute AF.
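The primary-outcome comparison in the abstract above (100% vs 93%; absolute difference 7.0%, 95% CI -0.6 to 14.6) is consistent with a simple Wald (normal-approximation) interval for the difference of two proportions. A minimal Python sketch, assuming the event counts implied by the reported percentages (33/33 Drug-Shock, 40/43 Shock Only):

```python
import math

def risk_difference_wald(x1, n1, x2, n2, z=1.96):
    """Absolute risk difference between two proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Drug-Shock: 33/33 converted; Shock Only: 40/43 converted (93%)
diff, lo, hi = risk_difference_wald(33, 33, 40, 43)
print(round(diff * 100, 1), round(lo * 100, 1), round(hi * 100, 1))
# → 7.0 -0.6 14.6
```

This sketch reproduces only the interval arithmetic; the trial's significance testing used chi-squared tests and multivariable logistic regression, which require patient-level data.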
Introduction: Patients with major bleeding (e.g. gastrointestinal bleeding, and intracranial hemorrhage [ICH]) are commonly encountered in the Emergency Department (ED). A growing number of patients are on either oral or parenteral anticoagulation (AC), but the impact of AC on outcomes of patients with major bleeding is unknown. With regards to oral anticoagulation (OAC), we particularly sought to analyze differences between patients on Warfarin or Direct Oral Anticoagulants (DOACs). Methods: We analyzed a prospectively collected registry (2011-2016) of patients who presented to the ED with major bleeding at two academic hospitals. “Major bleeding” was defined by the International Society on Thrombosis and Haemostasis criteria. The primary outcome, in-hospital mortality, was analyzed using a multivariable logistic regression model. Secondary outcomes included discharge to long-term care among survivors, total hospital length of stay (LOS) among survivors, and total hospital costs. Results: 1,477 patients with major bleeding were included. AC use was found among 215 total patients (14.6%). Among OAC patients (n = 181), 141 (77.9%) had used Warfarin, and 40 (22.1%) had used a DOAC. 484 patients (32.8%) died in-hospital. AC use was associated with higher in-hospital mortality (adjusted odds ratio [OR]: 1.50 [1.17-1.93]). Among survivors to discharge, AC use was associated with higher discharge to long-term care (adjusted OR: 1.73 [1.18-2.57]), prolonged median LOS (19 days vs. 16 days, P = 0.03), and higher mean costs ($69,273 vs. $58,156, P = 0.02). With regards to OAC, a higher proportion of ICH was seen among patients on Warfarin (39.0% vs. 32.5%), as compared to DOACs. No difference in mortality was seen between DOACs and Warfarin (adjusted OR: 0.84 [0.40-1.72]). Patients with major bleeding on Warfarin had longer median LOS (11 days vs. 6 days, P = 0.03) and higher total costs ($51,524 vs. $35,176, P < 0.01) than patients on DOACs. 
Conclusion: AC use was associated with higher mortality among ED patients with major bleeding. Among survivors, AC use was associated with increased LOS, costs, and discharge to long-term care. Among OAC patients, no difference in mortality was found. Warfarin was associated with prolonged LOS and costs, likely secondary to higher incidence of ICH, as compared to DOACs.
Introduction: For rhythm control of acute atrial fibrillation (AAF) in the emergency department (ED), choices include initial drug therapy or initial electrical cardioversion (ECV). We compared the strategies of pharmacological cardioversion followed by ECV if necessary (Drug-Shock), and ECV alone (Shock Only). Methods: We conducted a randomized, blinded, placebo-controlled trial (1:1 allocation) comparing two rhythm control strategies at 11 academic EDs. We included stable adult patients with AAF, where onset of symptoms was <48 hours. Patients underwent central web-based randomization stratified by site. The Drug-Shock group received an infusion of procainamide (15 mg/kg over 30 minutes) followed 30 minutes later, if necessary, by ECV at 200 joules x 3 shocks. The Shock Only group received an infusion of saline followed, if necessary, by ECV x 3 shocks. The primary outcome was conversion to sinus rhythm for ≥30 minutes at any time following onset of infusion. Patients were followed for 14 days. The primary outcome was evaluated on an a priori-specified modified intention-to-treat (MITT) basis excluding patients who never received the study infusion (e.g. spontaneous conversion). Data were analyzed using chi-squared tests and logistic regression. Our target sample size was 374 evaluable patients. Results: Of 395 randomized patients, 18 were excluded from the MITT analysis; none were lost to follow-up. The Drug-Shock (N = 198) and Shock Only (N = 180) groups (total = 378) were similar for all characteristics including mean age (60.0 vs 59.5 yrs), duration of AAF (10.1 vs 10.8 hrs), previous AF (67.2% vs 68.3%), median CHADS2 score (0 vs 0), and mean initial heart rate (119.9 vs 118.0 bpm). More patients converted to normal sinus rhythm in the Drug-Shock group (97.0% vs 92.2%; absolute difference 4.8%, 95% CI 0.2-9.9; P = 0.04). The multivariable analyses confirmed the Drug-Shock strategy superiority (P = 0.04).
There were no statistically significant differences for time to conversion (91.4 vs 85.4 minutes), total ED length of stay (7.1 vs 7.7 hours), disposition home (97.0% vs 96.1%), and stroke within 14 days (0 vs 0). Premature discontinuation of infusion was more common in the Drug-Shock group (8.1% vs 0.6%) but there were no serious adverse events. Conclusion: Both the Drug-Shock and Shock Only strategies were highly effective and safe in allowing AAF patients to go home in sinus rhythm. A strategy of initial cardioversion with procainamide was superior to a strategy of immediate ECV.
Cognitive–behavioural therapy (CBT) is the treatment of choice for generalised anxiety disorder (GAD), yielding significant improvements in approximately 50% of patients. There is significant room for improvement in the outcomes of treatment, especially in recovery.
We aimed to compare metacognitive therapy (MCT) with the gold standard treatment, CBT, in patients with GAD (clinicaltrials.gov identifier: NCT00426426).
A total of 246 patients with long-term GAD were assessed and 81 were randomised into three conditions: CBT (n = 28), MCT (n = 32) and a wait-list control (n = 21). Assessments were made at pre-treatment, post-treatment and at 2 year follow-up.
Both CBT and MCT were effective treatments, but MCT was more effective (mean difference 9.762, 95% CI 2.679–16.845, P = 0.004) and led to significantly higher recovery rates (65% v. 38%). These differences were maintained at 2 year follow-up.
MCT seems to produce recovery rates that exceed those of CBT. These results demonstrate that the effects of treatment cannot be attributed to non-specific therapy factors.
Declaration of interest
A.W. wrote the treatment protocol in MCT and several books on CBT and MCT, and receives royalties from these. T.D.B. wrote the protocol in CBT and has published several articles and chapters on CBT and receives royalties from these. All other authors declare no competing interests.
Introduction: Two published studies reported that natriuretic peptides can aid in risk stratification of Emergency Department (ED) syncope. We sought to assess the role of N-terminal pro-brain natriuretic peptide (NT pro-BNP) in identifying syncope patients at risk of serious adverse events (SAE) within 30 days of the ED visit, and its value above that of the Canadian Syncope Risk Score (CSRS). Methods: We conducted a multicenter prospective cohort study at 6 large Canadian EDs from Nov 2011 to Feb 2015. We enrolled adults who presented within 24 hours of syncope and excluded those with persistent altered mentation, obvious seizure, and intoxication. We collected patient characteristics, nine CSRS predictors (including troponin), ED management and NT pro-BNP levels. Adjudicated SAE included death, cardiac SAE (arrhythmias, myocardial infarction, serious structural heart disease) and non-cardiac SAE (pulmonary embolism, severe hemorrhage and procedural interventions within 30 days). We used two-tailed t-tests and logistic regression analysis. Results: Of the 1359 patients (mean age 57.2 years, 54.7% female, 13.3% hospitalized) enrolled, 148 (10.9%; 0.7% deaths, 7.9% cardiac SAE including 6.1% arrhythmia) suffered SAE within 30 days. Compared with patients with no SAE (499.8 ng/L), mean NT pro-BNP values were significantly higher among the 56 patients who suffered SAE after ED disposition (3147 ng/L, p=0.001) and among the 35 patients with cardiac SAE after ED disposition (2016.2 ng/L, p=0.02). While there was a trend toward higher levels among patients who suffered arrhythmia after the ED visit, it was not statistically significant (1776.4 ng/L, p=0.07). In a model with CSRS predictors, the adjusted odds ratio for NT pro-BNP was 8.0 (95% CI 1.8, 35.9) and for troponin was 3.8 (95% CI 1.7, 8.8).
The addition of NT pro-BNP did not significantly improve classification performance (p=0.76): the area under the curve was 0.91 (95% CI 0.88, 0.95) for the CSRS and 0.92 (95% CI 0.88, 0.95) for the CSRS with NT pro-BNP. Conclusion: In this multicenter study, mean NT pro-BNP levels were significantly higher among ED syncope patients who suffered SAE, including cardiac SAE, after ED disposition. Though NT pro-BNP was a significant independent predictor of SAE after ED disposition, it did not improve the accuracy of ED syncope risk stratification compared with the CSRS. Hence, we do not recommend NT pro-BNP measurement for ED syncope management.
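The classification comparison above rests on the area under the ROC curve, which has a rank-based (Mann–Whitney) interpretation: the probability that a randomly chosen patient with an SAE receives a higher risk score than a randomly chosen patient without one. A self-contained Python sketch of that estimate, using toy scores rather than study data:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) AUC estimate: the probability that a
    randomly chosen event patient scores higher than a non-event patient,
    counting ties as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores (illustrative only, not study data)
pos = [3, 4, 4, 5]     # patients with a serious adverse event
neg = [1, 2, 2, 3, 4]  # patients without
print(auc_from_scores(pos, neg))
# → 0.875
```

Comparing two correlated AUCs (as the p=0.76 above does) additionally requires a paired test such as DeLong's method, which needs the per-patient scores.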
We sought to evaluate the role healthcare providers play in carbapenem-resistant Enterobacteriaceae (CRE) acquisition among hospitalized patients.
A 1:4 case-control study with incidence density sampling.
Academic healthcare center with regular CRE perirectal screening in high-risk units.
We included case patients with ≥1 negative CRE test followed by positive culture with a length of stay (LOS) >9 days. For controls, we included patients with ≥2 negative CRE tests and assignment to the same unit set as case patients with a LOS >9 days.
Controls were time-matched to each case patient. Case exposure was evaluated between days 2 and 9 before positive culture and control evaluation was based on maximizing overlap with the case window. Exposure sources were all CRE-colonized or -infected patients. Nonphysician providers were compared between study patients and sources during their evaluation windows. Dichotomous and continuous exposures were developed from the number of source-shared providers and were used in univariate and multivariate regression.
In total, 121 cases and 484 controls were included. Multivariate analysis showed odds of dichotomous exposure (≥1 source-shared provider) of 2.27 (95% confidence interval [CI], 1.25–4.15; P=.006) for case patients compared to controls. Multivariate continuous exposure showed odds of 1.02 (95% CI, 1.01–1.03; P=.009) for case patients compared to controls.
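The dichotomous-exposure result above is an adjusted odds ratio from multivariate regression, which cannot be reproduced without patient-level data; the unadjusted odds ratio from a 2x2 table is nonetheless the natural starting point for such a case-control analysis. A Python sketch with illustrative exposure counts (chosen only to match the study's 121 cases and 484 controls, not its actual exposure data):

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI on the
    log scale. a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/121 cases and 80/484 controls with
# >=1 source-shared provider (illustrative, not the study's data)
or_, lo, hi = odds_ratio_wald(40, 81, 80, 404)
print(round(or_, 2))
# → 2.49
```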
Patients who acquire CRE during hospitalization are more likely to receive care from a provider caring for a patient with CRE than those patients who do not acquire CRE. These data support the importance of hand hygiene and cohorting measures for CRE patients to reduce transmission risk.
Introduction: Most ambulance communication officers receive minimal education on agonal breathing, often leading to unrecognized out-of-hospital cardiac arrest (OHCA). We sought to evaluate the impact of an educational program on cardiac arrest recognition, and on bystander CPR and survival rates. Methods: Ambulance communication officers in Ottawa, Canada received additional training on agonal breathing, while the control site (Windsor, Canada) did not. Sites were compared to their pre-study performance (before-after design), and to each other (concurrent control). Trained investigators used a piloted, standardized data collection tool when reviewing the recordings for all potential OHCA cases submitted. OHCA was confirmed using our local OHCA registry, and we requested 9-1-1 recordings for OHCA cases not initially suspected. Two independent investigators reviewed medical records for non-OHCA cases receiving telephone-assisted CPR in Ottawa. We present descriptive and chi-square statistics. Results: There were 988 confirmed and suspected OHCA in the “before” group (540 Ottawa; 448 Windsor), and 1,076 in the “after” group (689 Ottawa; 387 Windsor). Characteristics of “after” group OHCA patients were: mean age (68.1 Ottawa, 68.2 Windsor); male (68.5% Ottawa, 64.8% Windsor); witnessed (45.0% Ottawa, 41.9% Windsor); and initial rhythm VF/VT (28.9% Ottawa, 22.5% Windsor). Before-after comparisons were: for cardiac arrest recognition (from 65.4% to 71.9% in Ottawa, p=0.03; from 70.9% to 74.1% in Windsor, p=0.37); for bystander CPR rates (from 23.0% to 35.9% in Ottawa, p=0.0001; from 28.2% to 39.4% in Windsor, p=0.001); and for survival to hospital discharge (from 4.1% to 12.5% in Ottawa, p=0.001; from 3.9% to 6.9% in Windsor, p=0.03). “After” group comparisons between Ottawa and Windsor (control) were not statistically different, except survival (p=0.02).
Agonal breathing was common (25.6% Ottawa, 22.4% Windsor) and present in 18.5% of missed cases (15.8% Ottawa, 22.2% Windsor p=0.27). In Ottawa, 31 patients not in OHCA received chest compressions resulting from telephone-assisted CPR instructions. None suffered injury or adverse effects. Conclusion: While all OHCA outcomes improved over time, the educational intervention significantly improved OHCA recognition in Ottawa, and appeared to mitigate the impact of agonal breathing.
Intestinal barrier integrity is a prerequisite for homeostasis of mucosal function, which is balanced to maximise absorptive capacity, while maintaining efficient defensive reactions against chemical and microbial challenges. Evidence is mounting that disruption of epithelial barrier integrity is one of the major aetiological factors associated with several gastrointestinal diseases, including infection by pathogens, obesity and diabetes, necrotising enterocolitis, irritable bowel syndrome and inflammatory bowel disease. The notion that specific probiotic bacterial strains can affect barrier integrity fuelled research in which in vitro cell lines, animal models and clinical trials are used to assess whether probiotics can revert the diseased state back to homeostasis and health. This review catalogues and categorises the lines of evidence available in literature for the role of probiotics in epithelial integrity and, consequently, their beneficial effect for the reduction of gastrointestinal disease symptoms.
The emergence of invasive fungal wound infections (IFIs) in combat casualties led to development of a combat trauma-specific IFI case definition and classification. Prospective data were collected from 1133 US military personnel injured in Afghanistan (June 2009–August 2011). The IFI rates ranged from 0·2% to 11·7% among ward and intensive care unit admissions, respectively (6·8% overall). Seventy-seven IFI cases were classified as proven/probable (n = 54) and possible/unclassifiable (n = 23) and compared in a case-case analysis. There was no difference in clinical characteristics between the proven/probable and possible/unclassifiable cases. Possible IFI cases had shorter time to diagnosis (P = 0·02) and initiation of antifungal therapy (P = 0·05) and fewer operative visits (P = 0·002) compared to proven/probable cases, but clinical outcomes were similar between the groups. Although the trauma-related IFI classification scheme did not provide prognostic information, it is an effective tool for clinical and epidemiological surveillance and research.
To examine barriers to initiation and continuation of mental health treatment among individuals with common mental disorders.
Data were from the World Health Organization (WHO) World Mental Health (WMH) surveys. Representative household samples were interviewed face to face in 24 countries. Reasons to initiate and continue treatment were examined in a subsample (n = 63 678) and analyzed at different levels of clinical severity.
Among those with a DSM-IV disorder in the past 12 months, low perceived need was the most common reason for not initiating treatment and more common among moderate and mild than severe cases. Women and younger people with disorders were more likely to recognize a need for treatment. A desire to handle the problem on one's own was the most common barrier among respondents with a disorder who perceived a need for treatment (63.8%). Attitudinal barriers were much more important than structural barriers to both initiating and continuing treatment. However, attitudinal barriers dominated for mild-moderate cases and structural barriers for severe cases. Perceived ineffectiveness of treatment was the most commonly reported reason for treatment drop-out (39.3%), followed by negative experiences with treatment providers (26.9% of respondents with severe disorders).
Low perceived need and attitudinal barriers are the major barriers to seeking and staying in treatment among individuals with common mental disorders worldwide. Apart from targeting structural barriers, mainly in countries with poor resources, increasing population mental health literacy is an important endeavor worldwide.
Inflammation is associated with preterm premature rupture of membranes (PPROM) and adverse neonatal outcomes. Subchorionic thrombi, with or without inflammation, may also be a significant pathological finding in PPROM. Patterns of inflammation and thrombosis may give insight into mechanisms of adverse neonatal outcomes associated with PPROM. To characterize histologic findings of placentas from pregnancies complicated by PPROM at altitude, 44 placentas were evaluated for gross and histological indicators of inflammation and thrombosis. Student's t-test (or Mann–Whitney U-test), χ2 analysis (or Fisher's exact test), mean square contingency and logistic regression were used when appropriate. The prevalence of histologic acute chorioamnionitis (HCA) was 59%. Fetal-derived inflammation (funisitis and chorionic plate vasculitis) was seen at lower frequency (30% and 45%, respectively) and not always in association with HCA. There was a trend for Hispanic women to have higher odds of funisitis (OR = 5.9; P = 0.05). Subchorionic thrombi were seen in 34% of all placentas. The odds of subchorionic thrombi without HCA were 6.3 times greater than the odds of subchorionic thrombi with HCA (P = 0.02). There was no difference in gestational age or rupture-to-delivery interval by the presence or absence of inflammatory or thrombotic lesions. These findings suggest that PPROM is caused by or can result in fetal inflammation, placental malperfusion, or both, independent of gestational age or rupture-to-delivery interval; maternal ethnicity and altitude may contribute to these findings. Future studies focused on this constellation of PPROM placental findings, genetic polymorphisms and neonatal outcomes are needed.
Most previous attempts to determine the psychological cost of military deployment have been limited by reliance on convenience samples, lack of pre-deployment data or confidentiality and cross-sectional designs.
This study addressed these limitations using a population-based, prospective cohort of US military personnel deployed in support of the operations in Iraq and Afghanistan.
The sample consisted of US military service members in all branches including active duty, reserve and national guard who deployed once (n = 3393) or multiple times (n = 4394). Self-reported symptoms of post-traumatic stress were obtained prior to deployment and at two follow-ups spaced 3 years apart. Data were examined for longitudinal trajectories using latent growth mixture modelling.
Each analysis revealed remarkably similar post-traumatic stress trajectories across time. The most common pattern was low–stable post-traumatic stress, or resilience (83.1% single deployers, 84.9% multiple deployers), followed by moderate–improving (8.0%, 8.5%), worsening–chronic post-traumatic stress (6.7%, 4.5%), high–stable (2.2%, single deployers only) and high–improving (2.2%, multiple deployers only). Covariates associated with each trajectory were identified.
The final models exhibited similar types of trajectories for single and multiple deployers; most notably, the stable trajectory of low post-traumatic stress pre- to post-deployment, or resilience, was exceptionally high. Several factors predicting trajectories were identified, which we hope will assist in future research aimed at decreasing the risk of post-traumatic stress disorder among deployers.
Prior research on whether marriage is equally beneficial to the mental health of men and women is inconsistent due to methodological variation. This study addresses some prior methodological limitations and investigates gender differences in the association of first marriage and being previously married, with subsequent first onset of a range of mental disorders.
Cross-sectional household surveys in 15 countries from the WHO World Mental Health survey initiative (n=34493), with structured diagnostic assessment of mental disorders using the Composite International Diagnostic Interview 3.0. Discrete-time survival analyses assessed the interaction of gender and marital status in the association with first onset of mood, anxiety and substance use disorders.
Marriage (versus never married) was associated with reduced risk of first onset of most mental disorders in both genders; but for substance use disorders this reduced risk was stronger among women, and for depression and panic disorder it was confined to men. Being previously married (versus stably married) was associated with increased risk of all disorders in both genders; but for substance use disorders, this increased risk was stronger among women and for depression it was stronger among men.
Marriage was associated with reduced risk of the first onset of most mental disorders in both men and women but there were gender differences in the associations between marital status and onset of depressive and substance use disorders. These differences may be related to gender differences in the experience of multiple role demands within marriage, especially those concerning parenting.
Multidisciplinary antimicrobial utilization teams (AUTs) have been proposed as a mechanism for improving antimicrobial use, but data on their efficacy remain limited.
To determine the impact of an AUT on antimicrobial use at a teaching hospital.
Randomized controlled intervention trial.
A 953-bed, public, university-affiliated, urban teaching hospital.
Patients who were given selected antimicrobial agents (piperacillin-tazobactam, levofloxacin, or vancomycin) by internal medicine ward teams.
Twelve internal medicine teams were randomly assigned monthly: 6 teams to an intervention group (academic detailing by the AUT) and 6 teams to a control group that was given indication-based guidelines for prescription of broad-spectrum antimicrobials (standard of care), during a 10-month study period.
Proportion of appropriate empirical, definitive (therapeutic), and end (overall) antimicrobial usage.
A total of 784 new prescriptions of piperacillin-tazobactam, levofloxacin, and vancomycin were reviewed. The proportion of prescriptions considered appropriate was significantly higher for intervention teams than for control teams: 82% versus 73% for empirical (risk ratio [RR], 1.14; 95% confidence interval [CI], 1.04-1.24), 82% versus 43% for definitive (RR, 1.89; 95% CI, 1.53-2.33), and 94% versus 70% for end antimicrobial usage (RR, 1.34; 95% CI, 1.25-1.43). In multivariate analysis, teams that received feedback from the AUT alone (adjusted RR, 1.37; 95% CI, 1.27-1.48) or from both the AUT and the infectious diseases consultation service (adjusted RR, 2.28; 95% CI, 1.64-3.19) were significantly more likely to prescribe end antimicrobial usage appropriately, compared with control teams.
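The risk ratios above compare proportions of appropriate prescribing between groups. The abstract reports proportions but not per-group denominators, so the Python sketch below uses illustrative counts (328/400 vs 292/400, i.e. 82% vs 73%); the unadjusted RR from those counts is about 1.12, and the published 1.14 presumably reflects the true denominators and rounding:

```python
import math

def risk_ratio_wald(x1, n1, x2, n2, z=1.96):
    """Risk ratio of two proportions with a Wald 95% CI on the log scale."""
    rr = (x1 / n1) / (x2 / n2)
    se_log = math.sqrt(1/x1 - 1/n1 + 1/x2 - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Illustrative counts only: 328/400 appropriate (intervention)
# vs 292/400 (control), matching the reported 82% vs 73%
rr, lo, hi = risk_ratio_wald(328, 400, 292, 400)
print(round(rr, 2), round(lo, 2), round(hi, 2))
# → 1.12 1.04 1.21
```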
A multidisciplinary AUT that provides feedback to prescribing physicians was an effective method in improving antimicrobial use.