Through the lens of two Macanese publics, this study rethinks cosmopolitanism as a diverse identity and pursuit that can vary from one individual to another. It complicates what we know about polyglot Asian publics often profiled as ‘cosmopolitan’ for their foreign education, middle-class status, social commitment, and internationalist visions. I argue that, while these subjects shared a common background, they diverged according to shifting global contexts, generational differences, and personal experience. While imagining themselves as part of a global community, cosmopolitan publics navigated between personal worlds and communal networks, as well as between a narrower nationalist and/or urban context and a broader global framework. My first subject, Macao-born Lourenço Pereira Marques, saw Hong Kong as a liberal ground from which to disseminate Darwinism across southern China's Lusophone public sphere during the 1880s, whereas Hong Kong-born José Pedro Braga worked to preach an internationalist vision of racial equality through a wider Anglophone public sphere and an emerging transnational associational culture in the early twentieth century. This study also aims to further our understanding of Hong Kong as a vibrant port city and to explore the diversity of cosmopolitan publics in the context of transitioning internal and external worlds.
This paper describes a computational investigation of multimode instability growth and multimaterial mixing induced by multiple shock waves in a high-energy-density (HED) environment, where pressures exceed 1 Mbar. The simulations are based on a series of experiments performed at the National Ignition Facility (NIF) and designed as an HED analogue of non-HED shock-tube studies of the Richtmyer–Meshkov instability and turbulent mixing. A three-dimensional computational modelling framework is presented. It treats many complications absent from canonical non-HED shock-tube flows, including distinct ion and free-electron internal energies, non-ideal equations of state, radiation transport and plasma-state mass diffusivities, viscosities and thermal conductivities. The simulations are tuned to the available NIF data, and traditional statistical quantities of turbulence are analysed. Integrated measures of turbulent kinetic energy and enstrophy both increase by over an order of magnitude due to reshock. Large contributions to enstrophy production during reshock are seen from both the baroclinic source and enstrophy–dilatation terms, highlighting the significance of fluid compressibility in the HED regime. Dimensional analysis reveals that Reynolds numbers and diffusive Péclet numbers in the HED flow are similar to those in a canonical non-HED analogue, but conductive Péclet numbers are much smaller in the HED flow due to efficient thermal conduction by free electrons. It is shown that the mechanism of electron thermal conduction significantly softens local spanwise gradients of both temperature and density, which causes a minor but non-negligible decrease in enstrophy production and small-scale mixing relative to a flow without this mechanism.
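For reference, the baroclinic and enstrophy–dilatation terms cited above appear in the standard compressible enstrophy transport equation (a textbook form, not reproduced from the paper itself), obtained by dotting the vorticity equation with the vorticity:

```latex
\frac{D}{Dt}\!\left(\frac{\omega^{2}}{2}\right)
  = \underbrace{\omega_{i}\,\omega_{j}\,\frac{\partial u_{i}}{\partial x_{j}}}_{\text{vortex stretching/tilting}}
  \;-\; \underbrace{\omega^{2}\,(\nabla\cdot\boldsymbol{u})}_{\text{enstrophy--dilatation}}
  \;+\; \underbrace{\frac{\boldsymbol{\omega}}{\rho^{2}}\cdot\left(\nabla\rho\times\nabla p\right)}_{\text{baroclinic source}}
  \;+\; \text{viscous terms}.
```

In incompressible flow the dilatation term vanishes, which is why its large contribution here signals the importance of compressibility in the HED regime.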
The MITIGATE toolkit was developed to assist urgent care and emergency departments in the development of antimicrobial stewardship programs. At the University of Washington, we adopted the MITIGATE toolkit in 10 urgent care centers, 9 primary care clinics, and 1 emergency department. We encountered and overcame challenges: a complex data build, choosing feasible outcomes to measure, issues with accurate coding, and maintaining positive stewardship relationships. Herein, we discuss solutions to challenges we encountered to provide guidance for those considering using this toolkit.
Bipolar disorder is associated with premature mortality, but evidence is mostly derived from Western countries. There has been no research evaluating shortened lifespan in bipolar disorder using life-years lost (LYLs), a recently developed mortality metric that takes illness onset into account for life expectancy estimation. The current study aimed to examine the extent of premature mortality in patients with bipolar disorder relative to the general population in Hong Kong (HK) in terms of standardised mortality ratio (SMR) and excess LYLs, and changes in mortality rates over time.
This population-based cohort study investigated excess mortality in 12 556 bipolar disorder patients between 2008 and 2018, by estimating all-cause and cause-specific SMRs, and LYLs. Trends in annual SMRs over the 11-year study period were assessed. Study data were retrieved from a territory-wide medical-record database of HK public healthcare services.
Patients had higher all-cause [SMR: 2.60 (95% CI: 2.45–2.76)], natural-cause [SMR: 1.90 (95% CI: 1.76–2.05)] and unnatural-cause [SMR: 8.63 (95% CI: 7.34–10.03)] mortality rates than the general population. Respiratory diseases, cardiovascular diseases and cancers accounted for the majority of deaths. Men and women with bipolar disorder had 6.78 (95% CI: 6.00–7.84) years and 7.35 (95% CI: 6.75–8.06) years of excess LYLs, respectively. The overall mortality gap remained similar over time, albeit slightly improved in men with bipolar disorder.
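The standardised mortality ratios above take the form observed deaths divided by expected deaths. A minimal sketch of this calculation, using Byar's approximation for the Poisson confidence interval and hypothetical counts (not the study's raw data), might look like:

```python
import math

def smr_byar(observed, expected, z=1.96):
    """Standardised mortality ratio with Byar's approximate 95% CI.

    SMR = observed / expected, where 'expected' comes from applying
    general-population age/sex-specific death rates to the cohort.
    Byar's approximation to the exact Poisson limits is one common
    choice; the study's own CI method is not specified here.
    """
    smr = observed / expected
    lo = observed * (1 - 1 / (9 * observed)
                     - z / (3 * math.sqrt(observed))) ** 3 / expected
    hi = (observed + 1) * (1 - 1 / (9 * (observed + 1))
                           + z / (3 * math.sqrt(observed + 1))) ** 3 / expected
    return smr, lo, hi

# Hypothetical counts for illustration only:
smr, lo, hi = smr_byar(observed=260, expected=100)
```

With cause-specific expected counts, the same function yields the natural- and unnatural-cause SMRs reported above.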
Bipolar disorder is associated with increased premature mortality and substantially reduced lifespan in a predominantly Chinese population, with excess deaths mainly attributed to natural causes. The persistent mortality gap underscores an urgent need for targeted interventions to improve the physical health of patients with bipolar disorder.
Although testing is widely regarded as critical to fighting the COVID-19 pandemic, what measure and level of testing best reflects successful infection control remains unresolved. Our aim was to compare the sensitivity of two testing metrics – population testing number and testing coverage – to population mortality outcomes and identify a benchmark for testing adequacy. We aggregated publicly available data through 12 April on testing and outcomes related to COVID-19 across 36 OECD (Organisation for Economic Co-operation and Development) countries and Taiwan. Spearman correlation coefficients were calculated between the aforementioned metrics and the following outcome measures: deaths per 1 million people, case fatality rate and case proportion of critical illness. Fractional polynomials were used to generate scatter plots to model the relationship between the testing metrics and outcomes. We found that testing coverage, but not population testing number, was highly correlated with population mortality (rs = −0.79, P = 5.975 × 10−9 vs. rs = −0.3, P = 0.05) and case fatality rate (rs = −0.67, P = 9.067 × 10−6 vs. rs = −0.21, P = 0.20). A testing coverage threshold of 15–45 signified adequate testing: below 15, testing coverage was associated with exponentially increasing population mortality; above 45, increased testing did not yield significant incremental mortality benefit. Taken together, testing coverage was better than population testing number in explaining country performance and can serve as an early and sensitive indicator of testing adequacy and disease burden.
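Spearman's rank correlation, the statistic used above, is simply the Pearson correlation of the data's ranks. A stdlib-only sketch (illustrative, not the authors' code):

```python
def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation computed on the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because only ranks matter, any monotonic relationship scores ±1, which suits a dose-response-style comparison such as testing coverage versus mortality.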
Families facing end-stage nonmalignant chronic diseases (NMCDs) are presented with similar symptom burdens and need for psycho-social–spiritual support as their counterparts with advanced cancers. However, NMCD patients tend to face more variable disease trajectories, and thus may require different anticipatory supports, delivered in familiar environments. The Life Rainbow Programme (LRP) provides holistic, transdisciplinary, community-based end-of-life care for patients with NMCDs and their caregivers. This paper reports on the 3-month outcomes using a single-group, pre–post comparison.
Patients with end-stage NMCDs were screened for eligibility by a medical team before being referred to the LRP. Patients were assessed at baseline (T0), 1 month (T1), and 3 months (T2) using the Integrated Palliative Outcome Scale (IPOS). Their hospital use in the previous month was also measured by presentations at accident and emergency services, admissions to intensive care units, and number of hospital bed-days. Caregivers were assessed at T0 and T2 using the Chinese version of the Modified Caregiver Strain Index, and self-reported health, psychological, spiritual, and overall well-being. Changes in outcomes over time for patients and caregivers were tested using paired-sample t-tests, Wilcoxon signed-rank tests, and chi-square tests.
Seventy-four patients and 36 caregivers participated in this research study. Patients reported significant improvements in all IPOS domains at both 1 and 3 months [ranging from Cohen's d = 0.495 (nausea) to 1.793 (depression and information needs fulfilled)]. Average hospital bed-days in the previous month fell from 3.50 to 1.68, comparing baseline and 1 month (p < 0.05). At 3 months, caregiver strain was significantly reduced (r = 0.332), while spiritual well-being was enhanced (r = 0.333).
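A paired-samples Cohen's d like those reported above can be computed as the mean change divided by the standard deviation of the changes. The abstract does not state which d variant was used, so this change-score form, and the numbers below, are illustrative only:

```python
import math

def paired_cohens_d(pre, post):
    """Cohen's d for paired samples: mean of the changes / SD of the changes.

    One common convention for pre-post designs; other variants divide by
    the pre-test SD instead. Assumes pre and post are matched per subject.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample SD of the change scores (n - 1 denominator)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd

# Hypothetical before/after symptom scores (higher = worse), for illustration:
d = paired_cohens_d(pre=[8, 7, 9, 6, 8], post=[5, 5, 6, 4, 5])
```

A negative d here indicates symptom scores fell; the abstract reports magnitudes, with |d| above 0.8 conventionally read as a large effect.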
After receiving 3 months of LRP services, patients with end-stage NMCDs and their caregivers experienced significant improvements in quality of life and well-being, and their hospital bed-days were reduced.
To conduct a pilot study implementing combined genomic and epidemiologic surveillance for hospital-acquired multidrug-resistant organisms (MDROs) to predict transmission between patients and to estimate the local burden of MDRO transmission.
Pilot prospective multicenter surveillance study.
The study was conducted in 8 university hospitals (2,800 beds total) in Melbourne, Australia (population 4.8 million), including 4 acute-care, 1 specialist cancer care, and 3 subacute-care hospitals.
All clinical and screening isolates from hospital inpatients (April 24 to June 18, 2017) were collected for 6 MDROs: vanA VRE, MRSA, ESBL Escherichia coli (ESBL-Ec) and Klebsiella pneumoniae (ESBL-Kp), and carbapenem-resistant Pseudomonas aeruginosa (CRPa) and Acinetobacter baumannii (CRAb). Isolates were analyzed and reported as routine by hospital laboratories, underwent whole-genome sequencing at the central laboratory, and were analyzed using open-source bioinformatic tools. MDRO burden and transmission were assessed using combined genomic and epidemiologic data.
In total, 408 isolates were collected from 358 patients; 47.5% were screening isolates. ESBL-Ec was most common (52.5%), then MRSA (21.6%), vanA VRE (15.7%), and ESBL-Kp (7.6%). Most MDROs (88.3%) were isolated from patients with recent healthcare exposure.
Combining genomics and epidemiology identified that at least 27.1% of MDROs were likely acquired in a hospital; most of these transmission events would not have been detected without genomics. The highest proportion of transmission occurred with vanA VRE (88.4% of patients).
Genomic and epidemiologic data from multiple institutions can feasibly be combined prospectively, providing substantial insights into the burden and distribution of MDROs, including in-hospital transmission. This analysis enables infection control teams to target interventions more effectively.
Shortages of personal protective equipment during the coronavirus disease 2019 (COVID-19) pandemic have led to the extended use or reuse of single-use respirators and surgical masks by frontline healthcare workers. The evidence base underpinning such practices warrants examination.
To synthesize current guidance and systematic review evidence on extended use, reuse, or reprocessing of single-use surgical masks or filtering face-piece respirators.
We used the World Health Organization, the European Centre for Disease Prevention and Control, the US Centers for Disease Control and Prevention, and Public Health England websites to identify guidance. We used Medline, PubMed, Epistemonikos, Cochrane Database, and preprint servers for systematic reviews.
Two reviewers conducted screening and data extraction. The quality of included systematic reviews was appraised using AMSTAR-2. Findings were narratively synthesized.
In total, 6 guidance documents were identified. Levels of detail and consistency across documents varied. They included 4 high-quality systematic reviews: 3 focused on reprocessing (decontamination) of N95 respirators and 1 focused on reprocessing of surgical masks. Vaporized hydrogen peroxide and ultraviolet germicidal irradiation were highlighted as the most promising reprocessing methods, but evidence on the relative efficacy and safety of different methods was limited. We found no well-established methods for reprocessing respirators at scale.
Evidence on the impact of extended use and reuse of surgical masks and respirators is limited, and gaps and inconsistencies exist in current guidance. Where extended use or reuse is being practiced, healthcare organizations should ensure that policies and systems are in place to ensure these practices are carried out safely and in line with available guidance.
Background: Detection of unusual carbapenemase-producing organisms (CPOs) in a healthcare facility may signify broader regional spread. During investigation of a VIM-producing Pseudomonas aeruginosa (VIM-CRPA) outbreak in a long-term acute-care hospital in central Florida, enhanced surveillance identified VIM-CRPA from multiple facilities, denoting potential regional emergence. We evaluated infection control and performed screening for CPOs in skilled nursing facilities (SNFs) across the region to identify potential CPO reservoirs and improve practices. Methods: All SNFs in 2 central Florida counties were offered a facility-wide point-prevalence survey (PPS) for CPOs and a nonregulatory infection control consultation. PPSs were conducted using a PCR-based screening method; specimens with a carbapenemase gene detected were cultured to identify the organisms. Infection control assessments focused on direct observations of hand hygiene (HH), environmental cleaning, and the sink splash zone. Thoroughness of environmental cleaning was evaluated using fluorescent markers applied to 6 standardized high-touch surfaces in at least 2 rooms per facility. Results: Overall, 21 (48%) SNFs in the 2-county region participated; 18 conducted PPS. Bed size ranged from 40 to 391, 5 (24%) facilities were ventilator-capable SNFs (vSNFs), and 12 had short-stay inpatient rehabilitation units. Of 1,338 residents approached, 649 agreed to rectal screening, and 14 (2.2%) carried CPOs. CPO-colonized residents were from the ventilator-capable units of 3 vSNFs (KPC-CRE, n = 7; KPC-CRPA, n = 1) and from short-stay units of 2 additional facilities (VIM-CRPA, n = 5; KPC-CRE, n = 1). Among the 5 facilities where CPO colonization was identified, the prevalence ranged from 1.1% in a short-stay unit to 16.1% in a ventilator unit. All facilities had access to soap and water in resident bathrooms; 14 (67%) had alcohol-based hand rubs accessible.
Overall, mean facility HH adherence was 52% (range, 37%–66%; mean observations per facility = 106) (Fig. 1). We observed the use of non–EPA-registered disinfectants and cross contamination from dirty to clean areas during environmental cleaning; the overall surface cleaning rate was 46% (n = 178 rooms); only 1 room had all 6 markers removed. Resident supplies were frequently stored in the sink splash zone. Conclusions: A regional assessment conducted in response to emergence of VIM-CRPA identified a relatively low CPO prevalence at participating SNFs; CPOs were primarily identified in vSNFs and among short-stay residents. Across facilities, we observed low adherence to core infection control practices that could facilitate spread of CPOs and other resistant organisms. In this region, targeting ventilator and short-stay units of SNFs for surveillance and infection control efforts may have the greatest prevention impact.
As the coronavirus disease 2019 pandemic changed patient presentation, this study aimed to prospectively identify these changes in a single ENT centre.
A seven-week prospective case series was conducted of patients urgently referred from primary care and the accident and emergency department.
There was a total of 133 referrals. Referral rates fell by 93 per cent over seven weeks, from a mean of 5.4 to 0.4 per day. Reductions were seen in referrals from both primary care (89 per cent) and the accident and emergency department (93 per cent). Presentations of otitis externa and epistaxis fell by 83 per cent, and presentations of glandular fever, tonsillitis and peritonsillar abscess fell by 67 per cent.
Coronavirus disease 2019 has greatly reduced the number of referrals into secondary care ENT. The cause of this reduction is likely patients’ increased perceived risk of contracting the virus in a medical setting. The impact of this reduction is yet to be ascertained, but it will likely result in a substantial increase in emergency pressures once the lockdown is lifted and the general public's perception of the coronavirus disease 2019 risk reduces.
There is growing interest globally in using real-world data (RWD) and real-world evidence (RWE) for health technology assessment (HTA). Optimal collection, analysis, and use of RWD/RWE to inform HTA require a conceptual framework to standardize processes and ensure consistency. However, such a framework is currently lacking in Asia, a region that is likely to benefit from RWD/RWE for at least two reasons. First, there is often limited Asian representation in clinical trials unless they are specifically conducted in Asian populations, and RWD may help to fill the evidence gap. Second, in a few Asian health systems, reimbursement decisions are not made at market entry, allowing RWD/RWE to be collected to give more certainty about the effectiveness of technologies in the local setting and inform their appropriate use. Furthermore, an alignment of RWD/RWE policies across Asia would equip decision makers with context-relevant evidence and improve timely patient access to new technologies. Using data collected from eleven health systems in Asia, this paper reviews the current landscape of RWD/RWE in Asia to inform HTA and explores a way forward to align policies within the region. It concludes with a proposal to establish an international collaboration among academics and HTA agencies in the region: the REAL World Data In ASia for HEalth Technology Assessment in Reimbursement (REALISE) working group, which seeks to develop a non-binding guidance document on the use of RWD/RWE to inform HTA for decision making in Asia.
Introduction: There are few large-scale studies assessing the true risk of epinephrine use during anaphylaxis in adults. We aimed to assess the demographics, clinical characteristics, and secondary effects of epinephrine treatment and to determine factors associated with major and minor secondary effects of epinephrine use among adults with anaphylaxis. Methods: From May 2012 to February 2018, adults presenting to the Hôpital du Sacré-Coeur de Montréal (HSCM) emergency department (ED) with anaphylaxis were recruited prospectively as part of the Cross-Canada Anaphylaxis Registry (C-CARE). Missed cases were identified through a previously validated algorithm. Data were collected on demographics, clinical characteristics, and management of anaphylaxis using a structured chart review. Multivariate logistic regression models were used to estimate factors associated with side effects of epinephrine administration. Results: Over a 6-year period, 402 adult patients presented to the ED at HSCM with anaphylaxis. The median age was 38 years (Interquartile Range [IQR]: 27, 52) and 40.4% were male. The main trigger for anaphylaxis was food (53.0%). A total of 286 patients (71.1%) received epinephrine treatment, of which 23.9% were treated in the pre-hospital setting, 47.0% received treatment in the ED, and 5.0% received epinephrine in both settings. Among patients treated with epinephrine, major secondary effects were rare (1.4% of patients), including new changes to electrocardiogram, arrhythmia, and neurological symptoms. Minor secondary effects due to epinephrine were reported in 50.0% of patients, mainly inappropriate sinus tachycardia (defined as a heart rate over 100 beats/minute; reported in 30.1% of patients). Major cardiovascular secondary effects were associated with regular use of beta-blockers (aOR 1.10 [95%CI, 1.02, 1.18]), regular use of ACE-inhibitors (aOR 1.16 [95%CI, 1.07, 1.27]), and receiving more than two doses of epinephrine (aOR 1.09 [95%CI, 1.00, 1.18]).
The model was adjusted for age, history of ischemic heart disease, trigger of anaphylaxis, presence of asthma, sex, and reaction severity. Inappropriate sinus tachycardia was more likely in females (aOR 1.18 [95%CI, 1.04, 1.33]) and palpitations, tremors, and psychomotor agitation were more likely in females (aOR 1.09 [95%CI, 1.00, 1.19]) and among those receiving more than two doses of epinephrine (aOR 1.49 [95%CI, 1.14, 1.96]). The models were adjusted for age, regular use of medications, history of ischemic heart disease, triggers of anaphylaxis, presence of asthma, reaction severity, and IV administration of epinephrine. Conclusion: The low rate of occurrence of major secondary effects of epinephrine in the treatment of anaphylaxis in our study demonstrates the overall safety of epinephrine use.
Introduction: With the transition of Emergency Medicine into competency based medical education (CBME), entrustable professional activities (EPAs) are used to evaluate residents on performed clinical duties. This study aimed to determine if implementing a case-based orientation, designed to increase recognition of available EPAs, into the CBME orientation would help residents increase the number of EPAs completed. Methods: We designed an intervention consisting of clinical cases that were reviewed by national EPA experts who identified which EPAs could be assessed from each case. A case-based session was incorporated into the 2019 CBME orientation for the McMaster Emergency Medicine Program. Postgraduate Year (PGY)1 residents read the cases and discussed which EPAs could be obtained with PGY2/faculty facilitators. The number of EPAs completed in the first two blocks of PGY1 was determined from local program data, and Student's t-test was used to compare averages between cohorts. Results: We analyzed data from 22 trainees (7 in 2017, 8 in 2018, and 7 in 2019). In the first two blocks of PGY1, the intervention cohort (2019) had a significantly higher average number of EPAs completed per trainee (47.4 [SD 11.8]) than the historical cohort (25.3 [SD 6.7]) (p < 0.001) (Cohen's d = 2.3). No significant difference existed in the number of EPAs obtained between the 2017 and 2018 cohorts, with averages of 24.3 [SD 6.8] and 26.1 [SD 7.0] per trainee, respectively (p = 0.6). Conclusion: Implementation of a case-based orientation led by CBME-experienced facilitators nearly doubled the EPA acquisition rate of our PGY1s. The consistent EPA acquisition by the 2017/2018 cohorts suggests that the post-intervention increase was not solely due to developed familiarity with the CBME curriculum.
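The cohort comparison above can be reproduced from the reported summary statistics alone. This sketch uses Welch's unequal-variance t, one reasonable choice given the differing SDs; the abstract does not specify which t-test variant was used:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    computed from per-group summary statistics."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Summary statistics reported in the abstract: 2019 cohort (n=7)
# vs pooled 2017/2018 historical cohort (n=15)
t, df = welch_t(47.4, 11.8, 7, 25.3, 6.7, 15)
```

The resulting t of roughly 4.6 on about 8 degrees of freedom is consistent with the reported p < 0.001.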
Introduction: Time-to-treatment plays a pivotal role in survival from sudden cardiac arrest (SCA). Every minute of delay in defibrillation results in a 7-10% reduction in survival. This is particularly problematic in rural and remote regions, where bystander and EMS response is often prolonged and automated external defibrillators (AEDs) are often not available. Our objective was to examine the feasibility of a novel AED drone delivery method for rural and remote SCA. A secondary objective was to compare times between AED drone delivery and ambulance response across various mock SCA resuscitations. Methods: We conducted 6 simulations in two different rural communities in southern Ontario. During phase 1 (4 simulations), a “mock” call was placed to 911 and a single AED drone and an ambulance were simultaneously dispatched from the same location to a pre-determined destination. Once on scene, trained first responders retrieved the AED from the drone and initiated resuscitative efforts on a manikin. The second phase (2 scenarios) was conducted in a similar manner, save for the drone being dispatched from a regionally optimized location for drone response. Results: Phase 1: The distance from dispatch location to scene varied from 6.6 km to 8.8 km. Mean (SD) response time from 911 call to scene arrival was 11.2 (+/- 1.0) minutes for EMS compared to 8.1 (+/- 0.1) minutes for AED drone delivery. In all four simulations, the AED drone arrived before EMS, ranging from 2.1 to 4.4 minutes faster. The mean time for trained responders to retrieve the AED and apply it to the manikin was 35 (+/- 5) seconds. No difficulties were encountered in drone activation by dispatch, drone lift-off, landing or removal of the AED from the drone by responders. Phase 2: The ambulance response distance was 20 km compared to 9 km for the drone. The drone arrived at the scene 7 and 8 minutes faster, with AED application 6 and 7 minutes prior to ambulance arrival, respectively.
Conclusion: This implementation study suggests AED drone delivery is feasible with improvements in response time during a simulated SCA scenario. These results suggest the potential for AED drone delivery to decrease time to first defibrillation in rural and remote communities. Further research is required to determine the appropriate distance for drone delivery of an AED in an integrated EMS system as well as optimal strategies to simplify bystander application of a drone delivered AED.
Introduction: Despite studies highlighting the inaccuracies of self-assessment, practicing physicians continue to rely on self-perception to maintain clinical competence. Many approaches have been proposed to augment physician performance. In the realm of Quality Improvement (QI), Audit and Feedback (A&F) has a modest effect. Educators have proposed coaching interventions, and academic constructs have invoked training for early-career clinicians. Very few of these are driven by the perceptions and needs of the end-user: the physicians. We currently lack a model for understanding physicians’ perceptions of their own practice data and the factors which would enable practice change. In this study, we sought to develop a model for data feedback which may best help physicians change practice. Methods: In a previous study, we conducted a needs analysis of 105 physicians in the Hamilton-Niagara area in order to understand which data metrics were most valuable to physicians. Using the survey results, we designed an interview guide for a qualitative study of physicians’ perspectives on A&F. By intentional sampling, we recruited 15 physicians across gender groups, types of practice (academic vs community) and durations of practice. We interviewed all 15 participants, and the interviews were transcribed. We then performed thematic analysis and extraction on all interviews using a realist framework. These results were translated into broader themes and, using a grounded theory framework, into a model for understanding how physicians relate practice data to their own sense of self. Interviews were anonymized and no identifying data were shared as part of the interview. All interviewees consented to participation at the outset and could withdraw at any time.
Results: Via stakeholder interviews with 15 key informants, we developed a model for understanding how a physician's sense of self and the nature of the data (quantity and quality) may be combined to understand the likelihood of practice change and the adoption of the change strategy. Using this model, it is possible to understand the conditions under which A&F would provide the greatest opportunity for practice change. Conclusion: Physician identity intersects with A&F data to offer insights into practice improvement. Understanding the core identity constructs of different physician groups may allow for increased uptake of A&F processes.
Background: Emergency physicians (EPs) can choose from several evidence-based pathways to diagnose pulmonary embolism (PE); however, the literature suggests that EPs frequently use computed tomography (CT) scanning as a stand-alone test for PE. This is a program of research to improve adherence to evidence-based PE diagnosis in the emergency department (ED). Aim Statement: To create a novel approach to PE diagnosis in the ED based on a framework explaining EPs’ diagnostic behaviour for PE and the barriers to using evidence-based PE testing. Measures & Design: We conducted two types of qualitative interviews: (1) EPs in 5 Canadian cities watched videos of 2 simulated cases and then explained how they would test the patient; (2) semi-structured EP interviews using the theoretical domains framework (TDF). The results of our analyses informed the construction of an explanatory framework for common EP diagnostic behaviours for PE. Barriers to evidence-based behaviour were classified into domains. A Canadian EP expert group reviewed these results along with the existing evidence on ED PE diagnostic implementation. We developed a new approach to diagnosis of PE in the ED which addresses each of our domains. Evaluation/Results: We conducted 71 interviews. We identified 4 domains, each addressed in our pathway. ‘PE is a mythical and deadly beast’: PE kills and can masquerade, so EPs look for PE in places where it does not exist and are rewarded for ‘over-testing’. Response: creating a departmental conversation about missing PE, talking about the facts, busting the myths, and giving EPs feedback on PE testing including the positive rate. ‘The end goal is CTPE’: PE creates anxiety for EPs, and ordering a CTPE hands over responsibility to the radiologist. Response: a departmental protocol for PE testing which starts with D-dimer for every patient, shifting the focus to ruling out PE with D-dimer. The protocol is automated once initiated by the EP.
‘PERC eases anxiety’: PERC is documented when it is negative and allows the EP to stop. Response: EPs can choose to use and document PERC. ‘No-one has been fighting for the Wells score’: poor understanding of its purpose and function, often at odds with gestalt. Response: the protocol does not use the Wells score. Discussion/Impact: We have developed a new diagnostic PE pathway that addresses current barriers to evidence-based practice, which we will evaluate further.
The risk factors for criminal behavior in patients with schizophrenia are not well explored. This study aimed to explore the risk factors for criminal behavior in patients with schizophrenia in rural China.
We used data from a 14-year prospective follow-up study (1994-2008) of criminal behavior among a cohort (n=510) of patients with schizophrenia in Xinjin County, Chengdu, China.
There were 489 patients (95.9%) who were followed up from 1994 to 2008. The rate of criminal behavior was 13.5% among these patients with schizophrenia during the follow-up period. Compared with female patients (6 cases, 20.0%), male patients had a significantly higher rate of violent criminal behavior (e.g., arson, sexual assault, physical assault, and murder) (24 cases, 80.0%) (p < 0.001). Bivariate analyses showed that the risk of criminal behavior was significantly associated with being unmarried, younger age, previous violent behavior, homelessness, lower family economic status, having no family caregivers, and higher scores on measures (PANSS) of positive, negative, and total symptoms of illness. In multiple logistic regression analyses, being unmarried and previous violent behavior were identified as independent predictors of increased criminal behavior in persons with schizophrenia.
The risk factors for criminal behavior among patients with schizophrenia should be understood within a particular social context. Criminal behavior may be predicted by specific characteristics of patients with schizophrenia in a rural community. These risk factors should be considered when planning community mental health care and interventions for high-risk patients and their families.
Currently there is no consensus regarding how long antipsychotic medication should be continued following a first/single psychotic episode. Clinically, patients often request discontinuation after a period of remission. This is one of the first double-blind randomized controlled studies designed to address the issue.
Patients with DSM-IV schizophrenia and related psychoses (excluding substance-induced psychosis) who had remitted well following a first/single episode, and had remained well on maintenance medication for one year, were randomized to receive either maintenance therapy with quetiapine (400 mg/day) or placebo for 12 months. Relapse was defined by the presence of (i) an increase to a threshold score in at least one of the following PANSS psychotic symptom items: delusions, hallucinatory behaviour, conceptual disorganization, unusual thought content, or suspiciousness; (ii) a CGI Severity of Illness score of 3 or above; and (iii) a CGI Improvement score of 5 or above.
178 patients were randomized, and 144 (80.9%) completed the study. The relapse rate was 33.7% (30/89) in the maintenance group and 66.3% (59/89) in the placebo group (log-rank test, chi-square=13.328, p<0.001). Relapse was not related to age or gender. Other significant predictors of relapse included medication status, pre-morbid schizotypal traits, verbal memory, and soft neurological signs.
There is a substantial risk of relapse if medication is discontinued in remitted first-episode psychosis patients after one year of maintenance therapy. Conversely, 33.7% of patients who discontinued medication remained well.
Medication discontinuation in remitted single-episode patients after a period of maintenance therapy is a major clinical decision, and thus identifying risk factors for relapse while controlling for medication status is important.
Following a first/single episode of DSM-IV schizophrenia and related psychoses, remitted patients who had remained well on maintenance medication for at least one year were randomized to receive either maintenance therapy (quetiapine 400 mg/day) or placebo for 12 months.
178 patients were randomized. Relapse rates were 33.7% (30/89) in the maintenance group and 66.3% (59/89) in the placebo group. Potential predictors were initially identified in univariate Cox regression models (p<0.1) and subsequently entered into a multivariate Cox regression model of relapse risk. Significant predictors included medication status (maintenance vs. placebo; hazard ratio, 0.41; CI, 0.25–0.68; p=0.001); having more pre-morbid schizotypal traits (hazard ratio, 2.32; CI, 1.33–4.04; p=0.003); scoring lower on the logical memory test (hazard ratio, 0.94; CI, 0.90–0.99; p=0.028); and having more soft neurological signs (disinhibition) (hazard ratio, 1.33; CI, 1.02–1.74; p=0.039).
Relapse predictors may help to inform clinical decisions about discontinuing maintenance therapy, particularly for patients with a first/single-episode psychosis following at least one year of maintenance therapy.
We are grateful to Dr TJ Yao at the Clinical Trials Center, University of Hong Kong, for statistical advice. The study was supported by an investigator-initiated trial award from AstraZeneca and by the Research Grants Council of Hong Kong (Project number: 765505).