Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities, inaccessible to laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which lies outside the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to that of full third-generation detectors at a fraction of the cost. Such sensitivity changes the expected detection rate for post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and potentially allows the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
This paper discusses the evidence for periodic human activity in the Cairngorm Mountains of Scotland from the late 9th millennium to the early 4th millennium cal BC. While contemporary paradigms for Mesolithic Europe acknowledge the significance of upland environments, the archaeological record for these areas is not yet as robust as that for the lowland zone. Results of excavation at Chest of Dee, along the headwaters of the River Dee, are set into a wider context with previously published excavations in the area. A variety of site types evidences a sophisticated relationship between people and a dynamic landscape through a period of changing climate. Archaeological benefits of the project include the ability to examine novel aspects of the archaeology leading to a more comprehensive understanding of Mesolithic lifeways. It also offers important lessons in site survival, archaeological investigation, and the management of the upland zone.
The Spoon-billed Sandpiper Calidris pygmaea is a ‘Critically Endangered’ migratory shorebird. The species faces an array of threats in its non-breeding range, making conservation intervention essential. However, conservation efforts are reliant on identifying the species’ key stopover and wintering sites. Using Maximum Entropy models trained on data from recent field surveys and satellite tracking, we predicted Spoon-billed Sandpiper distribution across the non-breeding range. Model outputs suggest only a limited number of stopover sites are suitable for migrating birds, with sites in the Yellow Sea and on the Jiangsu coast in China highlighted as particularly important. All the previously known core wintering sites were identified by the model, including the Ganges-Brahmaputra Delta, Nan Thar Island, and the Gulf of Mottama. In addition, the model highlighted sites subsequently found to be occupied, and pinpointed potential new sites meriting investigation, notably on Borneo and Sulawesi, and in parts of India and the Philippines. A comparison between the areas identified as most likely to be occupied and protected areas showed that very few locations are covered by conservation designations. Known sites must be managed for conservation as a priority, and potential new sites should be surveyed as soon as is feasible to assess occupancy status. Site protection should take place in concert with conservation interventions including habitat management, discouraging hunting, and fostering alternative livelihoods.
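For readers unfamiliar with the approach, the sketch below illustrates presence-background distribution modelling in the spirit of Maximum Entropy, using regularised logistic regression as a common stand-in; the file names and covariates are hypothetical assumptions for illustration, not the study's actual data or software.

# Hypothetical sketch of presence-background distribution modelling,
# analogous in spirit to the MaxEnt approach described above.
# File names and covariate columns are illustrative, not from the study.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# presence points (surveys / satellite tracking) and random background points,
# each row carrying environmental covariates sampled at that location
presence = pd.read_csv("presence_points.csv")      # label 1
background = pd.read_csv("background_points.csv")  # label 0

covariates = ["tidal_flat_extent", "sst", "distance_to_coast"]
X = pd.concat([presence, background])[covariates].values
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

# Regularised logistic regression on presence vs. background is a common
# stand-in for MaxEnt: both estimate relative habitat suitability.
model = make_pipeline(StandardScaler(), LogisticRegression(C=1.0, max_iter=1000))
model.fit(X, y)

# rank candidate stopover sites (hypothetical grid file) by relative suitability
grid = pd.read_csv("candidate_sites.csv")
grid["suitability"] = model.predict_proba(grid[covariates].values)[:, 1]
print(grid.sort_values("suitability", ascending=False).head())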
Introduction: Hyperkalemia is a common electrolyte disturbance associated with morbidity and mortality. Commonly used therapies for hyperkalemia include IV calcium, sodium bicarbonate, insulin, beta-adrenergic agents, ion-exchange resins, diuretics, and hemodialysis. This study aims to evaluate which treatments are more commonly used to treat hyperkalemia and to examine the factors that influence those clinical decisions. Methods: This is a retrospective chart review of all cases of hyperkalemia encountered in 2017 at a Canadian adult ED. Potassium values were classified as mild (5.5-6.5 mEq/L), moderate (>6.5-7.5 mEq/L), and severe (>7.5 mEq/L). Treatment choices were then recorded and matched to hemodynamic stability, degree of hyperkalemia, and ECG findings. Additional statistical analyses of the correlation between treatment and specific variables will be performed over the next 2 months, including logistic regression to highlight potential determinants of treatment and chi-square tests to verify randomness and to construct 95% confidence intervals. Results: 1867 ED visits were identified, of which 479 met the inclusion criteria. 89.1% of hyperkalemia cases were mild, 8.2% were moderate, and 2.7% were severe. IV insulin was used in 22.1% of cases, followed by Kayexalate in 20.5%, sodium bicarbonate in 12.3%, IV calcium in 9.4%, frusemide in 7.3%, salbutamol in 2.7%, and dialysis in 1.9%. Moderate and severe hyperkalemia were associated with higher use of insulin (79.5% and 64.3%, respectively), IV calcium (41% and 64.3%, respectively), and sodium bicarbonate (56.4% and 85.7%, respectively). Bradycardia was associated with higher insulin and IV calcium use (46.7% and 33.3%, respectively). Hypotension was associated with a similar increase in use of insulin and IV calcium (34.2% and 23.7%, respectively). There were only 15 cases of cardiac arrest, in which sodium bicarbonate and IV calcium were more frequently used (80% and 60%, respectively). Conclusion: This study demonstrates variability in the ED management of hyperkalemia. We found that insulin and Kayexalate were the two most common interventions, with degree of hyperkalemia, bradycardia, and hypotension influencing rates of treatment. Overuse of Kayexalate for emergent treatment of hyperkalemia is evident despite weak supporting evidence. Paradoxically, beta-adrenergic agents were underutilized despite their rapid effect and safer profile. The development of a widely accepted guideline may help narrow the differences in practice and potentially improve outcomes.
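As a rough illustration of the severity classification and one of the planned analyses, the sketch below bins potassium values into the classes above and runs a chi-square test of treatment use against severity; the input file and column names are hypothetical assumptions, not the study's dataset.

# Hypothetical sketch: binning potassium values into the severity classes
# above and testing one treatment against severity with a chi-square test.
import pandas as pd
from scipy.stats import chi2_contingency

def classify_k(k_meq_l: float) -> str:
    """Severity per the abstract: mild 5.5-6.5, moderate >6.5-7.5, severe >7.5 mEq/L.
    Assumes values are already >= 5.5 (the inclusion criterion)."""
    if k_meq_l <= 6.5:
        return "mild"
    if k_meq_l <= 7.5:
        return "moderate"
    return "severe"

# hypothetical chart-review extract, one row per ED visit
df = pd.read_csv("hyperkalemia_visits.csv")  # columns: potassium, given_insulin (0/1)
df["severity"] = df["potassium"].apply(classify_k)

# 3x2 contingency table: severity class vs. whether IV insulin was given
table = pd.crosstab(df["severity"], df["given_insulin"])
chi2, p, dof, _ = chi2_contingency(table)
print(table, f"\nchi2={chi2:.2f}, dof={dof}, p={p:.4f}")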
Introduction: CAEP recently developed the acute atrial fibrillation (AF) and flutter (AFL) [AAFF] Best Practices Checklist to promote optimal care and guidance on cardioversion and rapid discharge of patients with AAFF. We sought to assess the impact of implementing the Checklist in large Canadian EDs. Methods: We conducted a pragmatic stepped-wedge cluster randomized trial in 11 large Canadian ED sites in five provinces, over 14 months. All hospitals started in the control period (usual care) and then crossed over to the intervention period in random sequence, one hospital per month. We enrolled consecutive, stable patients presenting with AAFF whose symptoms required ED management. Our intervention was informed by qualitative stakeholder interviews that identified perceived barriers and enablers for rapid discharge of AAFF patients. The multifaceted intervention included local champions, presentation of the Checklist to physicians in group sessions, an online training module, a smartphone app, and targeted audit and feedback. The primary outcome was ED length of stay in minutes from time of arrival to time of disposition, analyzed at the individual patient level using linear mixed-effects regression accounting for the stepped-wedge design. We estimated a sample size of 800 patients. Results: We enrolled 844 patients with none lost to follow-up. Those in the control (N = 316) and intervention periods (N = 528) were similar for all characteristics, including mean age (61.2 vs 64.2 yrs), duration of AAFF (8.1 vs 7.7 hrs), AF (88.6% vs 82.9%), AFL (11.4% vs 17.1%), and mean initial heart rate (119.6 vs 119.9 bpm). Median lengths of stay for the control and intervention periods were 413.0 vs. 354.0 minutes, respectively (P < 0.001). Comparing control to intervention, there was an increase in the use of antiarrhythmic drugs (37.4% vs 47.4%; P < 0.01), electrical cardioversion (45.1% vs 56.8%; P < 0.01), and discharge in sinus rhythm (75.3% vs. 86.7%; P < 0.001). There was a decrease in ED consultations to cardiology and medicine (49.7% vs 41.1%; P < 0.01), and a small, statistically non-significant increase in anticoagulant prescriptions (39.6% vs 46.5%; P = 0.21). Conclusion: This multicenter implementation of the CAEP Best Practices Checklist led to a significant decrease in ED length of stay along with more ED cardioversions, fewer ED consultations, and more discharges in sinus rhythm. Widespread and rigorous adoption of the CAEP Checklist should lead to improved care of AAFF patients in all Canadian EDs.
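A minimal sketch of the kind of patient-level stepped-wedge analysis described above, assuming a flat file with one row per patient; the file and variable names are illustrative assumptions, not the trial's actual dataset.

# Hypothetical sketch of a stepped-wedge analysis: ED length of stay on
# intervention status, with fixed time-step effects and a random intercept
# per site. Column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aaff_patients.csv")
# assumed columns: los_min, intervention (0/1), step (1-14), site

model = smf.mixedlm(
    "los_min ~ intervention + C(step)",  # calendar-time effects guard against secular trends
    data=df,
    groups=df["site"],                   # random intercept for each of the 11 EDs
)
result = model.fit()
print(result.summary())  # the coefficient on `intervention` estimates the checklist effect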
Background: Emergency department (ED) overcrowding is associated with a broad spectrum of poor medical outcomes, including medical errors, mortality, higher rates of leaving without being seen, and reduced patient and physician satisfaction. The largest contributor to overcrowding is access block – the inability of admitted patients to access in-patient beds from the ED. One component of addressing access block involves streamlining the decision process to rapidly determine which hospital service will admit the patient. Aim Statement: As of Sep 2011, admission algorithms at our institution were supported and formalised. The pancreatitis algorithm clarified whether general surgery or internal medicine would admit ED patients with pancreatitis. We hypothesize that the prior uncertainty delayed the admission decision and prolonged ED length of stay (LOS) for patients with pancreatitis. Our project evaluates whether implementing a pancreatitis admission algorithm at our institution reduced ED time to disposition (TTD) and LOS. Measures & Design: A retrospective review was conducted in a tertiary care academic hospital in Montreal for all adult ED patients diagnosed with pancreatitis from Apr 2010 to Mar 2014. The data were used to plot separate run charts for ED TTD and LOS. Serial measurements of each outcome were used to monitor change and evaluate for special cause variation. The mean ED LOS and TTD before and after algorithm implementation were also compared using Student's t test. Evaluation/Results: Over four years, a total of 365 ED patients were diagnosed with pancreatitis and 287 (79%) were admitted. The mean ED LOS for patients with pancreatitis decreased following the implementation of the admission algorithm (1616 vs. 1418 mins, p = 0.05). The mean ED TTD was also reduced (1171 vs. 899 mins, p = 0.0006). A non-random signal of change was suggested by a shift above the median prior to algorithm implementation and one below the median following it. Discussion/Impact: This project demonstrates that in a busy tertiary care academic hospital, an admission algorithm helped reduce ED TTD and LOS for patients with pancreatitis. This is especially valuable when considering the potential applicability of such algorithms to other disease processes, such as gastrointestinal bleeding and congestive heart failure. Future studies demonstrating this external applicability, as well as the impact of such decision algorithms on physician decision fatigue and in non-academic institutions, are warranted.
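The sketch below shows one way to build the run chart described above and to flag a shift relative to the pre-implementation median; the file, column names, and dates are hypothetical assumptions for illustration.

# Hypothetical sketch of a run chart for monthly mean ED time-to-disposition,
# with a simple shift check against the pre-implementation median.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pancreatitis_ed.csv", parse_dates=["visit_date"])  # ttd_min per visit
monthly = df.set_index("visit_date")["ttd_min"].resample("M").mean()

baseline_median = monthly[:"2011-08"].median()  # months before the Sep 2011 algorithm

fig, ax = plt.subplots()
monthly.plot(ax=ax, marker="o")
ax.axhline(baseline_median, linestyle="--", label="pre-implementation median")
ax.axvline(pd.Timestamp("2011-09-01"), color="red", label="algorithm implemented")
ax.set_ylabel("mean ED time to disposition (min)")
ax.legend()
plt.show()

# A run of >= 6 consecutive points on one side of the median is a common
# special-cause signal; count the longest post-implementation run below it.
below = monthly["2011-09":] < baseline_median
longest = below.groupby((~below).cumsum()).cumsum().max()
print("longest run below baseline median:", longest)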
Introduction: The Brain Injury Guidelines (BIG) stratify complicated mild traumatic brain injury (mTBI) patients into 3 groups to guide hospitalization, neurosurgical consultation, and repeat head CT. BIG-1 patients could be managed safely without neurosurgical consultation or transfer. Systematic transfer to neurotrauma centers provides few benefits to this subgroup, leading to overtriage. Similarly, unnecessary clinical and radiological follow-ups utilize significant health-care resources. Objective: To validate the safety and efficacy of the BIG for complicated mTBIs. Methods: We performed a multicenter historical cohort study in 3 level-1 trauma centers in Quebec. Patients ≥16 years old assessed in the Emergency Department (ED) with complicated mTBI between 2014 and 2017 were included. Patients with penetrating trauma, cerebral aneurysm, or tumor were excluded. Clinical, demographic, and radiological data, BIG variables, TBI-related death, and neurosurgical intervention were collected using a standardized form. A second reviewer assessed all ambiguous files. Descriptive statistics and over- and undertriage rates were calculated. Results: A total of 342 patients’ records were assessed. Mean age was 63 ± 20.7 years and 236 (69%) were male. Thirty-five patients were classified under BIG-1 (10.2%), 110 under BIG-2 (32.2%), and 197 under BIG-3 (57.6%). Twenty-six patients (7%) required neurosurgical intervention, all of whom were BIG-3. 90% of TBI-related deaths occurred in BIG-3 and none were classified BIG-1. Among the 192 transfers (51%), 14 were classified under BIG-1 (7.3%) and should not have been transferred according to the guidelines, and 50 under BIG-2 (26%). In addition, 40% of BIG-1 patients received a repeat head computed tomography, although not indicated. Similarly, 7% of all patients had a neurosurgical consult even if not required. Projected implementation of the BIG would lead to 47% overtriage and 0.3% undertriage. Conclusion: Our results suggest that the Brain Injury Guidelines can safely identify patients at risk of negative outcomes and could lead to safe and effective management of complicated mTBI. Applying these guidelines to our cohort could have resulted in significantly fewer repeat head CTs, neurosurgical consults, and transfers to level-1 neurotrauma centers.
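One plausible operationalisation of the over- and undertriage rates is sketched below, treating BIG-1 as safe for local management; the field names and outcome definitions are assumptions for illustration, not the study's exact criteria.

# Hypothetical sketch of over-/undertriage computation from a BIG-classified
# cohort. Overtriage here: transferred patients the guidelines deem safe to
# manage locally; undertriage: non-transferred patients with a bad outcome.
import pandas as pd

df = pd.read_csv("mtbi_cohort.csv")  # columns: big_class (1/2/3), transferred (bool), bad_outcome (bool)

no_transfer_needed = df["big_class"] == 1  # BIG-1 is safe to manage locally per the guidelines
overtriage = (df["transferred"] & no_transfer_needed).sum() / df["transferred"].sum()
undertriage = (~df["transferred"] & df["bad_outcome"]).sum() / (~df["transferred"]).sum()

print(f"overtriage: {overtriage:.1%}, undertriage: {undertriage:.1%}")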
Introduction: An important challenge physicians face when treating acute heart failure (AHF) patients in the emergency department (ED) is deciding whether to admit or discharge, with or without early follow-up. The overall goal of our project was to improve care for AHF patients seen in the ED while avoiding unnecessary hospital admissions. The specific goal was to introduce hospital rapid referral clinics to ensure AHF patients were seen within 7 days of ED discharge. Methods: This prospective before-after study was conducted at two campuses of a large tertiary care hospital, including the EDs and specialty outpatient clinics. We enrolled AHF patients ≥50 years who presented to the ED with shortness of breath (<7 days). The 12-month before (control) period was separated from the 12-month after (intervention) period by a 3-month implementation period. Implementation included creation of rapid access AHF clinics staffed by cardiology and internal medicine, and development of referral procedures. There was extensive in-servicing of all ED staff. The primary outcome measure was hospital admission at the index visit or within 30 days. Secondary outcomes included mortality and actual access to rapid follow-up. We used segmented autoregression analysis of the monthly proportions to determine whether there was a change in admissions coinciding with the introduction of the intervention, and estimated a sample size of 700 patients. Results: The patients in the before period (N = 355) and the after period (N = 374) were similar for age (77.8 vs. 78.1 years), arrival by ambulance (48.7% vs 51.1%), comorbidities, current medications, and need for non-invasive ventilation (10.4% vs. 6.7%). Comparing the before to the after period, we observed a decrease in hospital admissions at the index visit (from 57.7% to 42.0%; P < 0.01), as well as in all admissions within 30 days (from 65.1% to 53.5%; P < 0.01). The autoregression analysis, however, demonstrated a pre-existing trend towards fewer admissions and could not attribute the decrease to the intervention (P = 0.91). Attendance at a specialty clinic amongst those discharged increased from 17.8% to 42.1% (P < 0.01), and the median days to clinic decreased from 13 to 6 days (P < 0.01). 30-day mortality did not change (4.5% vs. 4.0%; P = 0.76). Conclusion: Implementation of rapid-access dedicated AHF clinics led to considerably increased access to specialist care, much shorter follow-up times, and a possible reduction in hospital admissions. Widespread use of this approach can improve AHF care in Canada.
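A minimal sketch of a segmented (interrupted time series) regression with autoregressive errors of the kind described above, applied to monthly admission proportions; the data layout, change point, and AR(1) error structure are assumptions for illustration.

# Hypothetical sketch of segmented regression with AR(1) errors: monthly
# admission proportion with trend, level-change, and slope-change terms.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

monthly = pd.read_csv("monthly_admissions.csv")  # columns: month_index, prop_admitted
t = monthly["month_index"].to_numpy()
post = (t > 12).astype(float)  # illustrative: months 13+ follow the implementation gap

exog = np.column_stack([
    t,                # pre-existing secular trend
    post,             # level change at the intervention
    post * (t - 12),  # slope change after the intervention
])
model = ARIMA(monthly["prop_admitted"], exog=exog, order=(1, 0, 0))  # AR(1) errors
fit = model.fit()
print(fit.summary())  # the `post` terms test for change attributable to the clinics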
Bipolar disorder (BD) is associated with impaired psychosocial behaviours. Little is known about deficits in neurocognitive functions such as decision-making, which may be related both to these behaviours and to the nature of the disorder.
To determine whether decision-making impairments exist in manic (M), depressed (D), and euthymic (E) bipolar patients (BP), and to determine whether illness and course-of-illness characteristics can predict participants’ performance.
A power analysis was conducted. A total of 315 subjects were included: 45 M and 32 D inpatients and 90 E outpatients with BD I, all medicated, together with 150 age-, IQ-, and gender-matched healthy controls (HC). Decision-making ability and sensitivity to punishment frequency were assessed with the Iowa Gambling Task (IGT).
On the IGT, MBP (p < 0.001), DBP (p < 0.01) and EBP (p < 0.05) selected significantly more cards from the risky decks than HC, with no significant differences between BP groups. Unlike HC, MBP (p < 0.001), DBP (p < 0.05) and EBP (p < 0.05) showed little capacity to learn from incurred losses, with no significant differences between BP groups; like HC, however, BP preferred decks that yielded infrequent penalties over decks that yielded frequent penalties. In a multivariate analysis, decision-making impairment in the BP was significantly (p = 0.001) predicted by low level of education, high total number of admissions, and family history of BD.
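For context, the standard IGT net-score computation underlying results like these is sketched below on a hypothetical trial-level file; negative or flat block scores correspond to the risky, loss-insensitive pattern reported above. Column names are illustrative.

# Hypothetical sketch of IGT scoring: net score per 20-trial block is
# (advantageous C+D picks) minus (risky A+B picks); a rising profile
# across blocks indicates learning from losses.
import pandas as pd

trials = pd.read_csv("igt_trials.csv")  # columns: subject, trial (1-100), deck (A-D)
trials["block"] = (trials["trial"] - 1) // 20 + 1
trials["advantageous"] = trials["deck"].isin(["C", "D"])

net = (
    trials.groupby(["subject", "block"])["advantageous"]
    .apply(lambda s: s.sum() - (~s).sum())  # (C+D) - (A+B) per block
    .unstack("block")
)
print(net.mean())  # mean net score per block across subjects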
BP clearly show deficits in decision-making predicted by illness and course-of-illness characteristics. Impaired decision-making might be a trait-related neurocognitive deficit in BD and may partly explain the impaired psychosocial behaviours of BP.
Worldwide, the Irish diaspora experience elevated psychiatric morbidity across generations, not accounted for by socioeconomic position. The present study assessed the contribution of intergenerational migration and settlement-related adversity in accounting for adult mental health in second-generation Irish people.
Analysis of prospective data from a nationally representative British birth cohort, comprising 17,000 babies born in a single week in 1958 and followed up to mid-life. Common mental disorders were assessed at age 44/45.
Relative to the rest of the cohort, second-generation Irish children grew up in marked material and social disadvantage, which tracked into early adulthood. By mid-life, parity was reached between second-generation Irish cohort members and the rest of the sample on most disadvantage indicators. At age 23, Irish cohort members were more likely to screen positive for common mental disorders (OR: 1.44; 95% CI: 1.06, 1.94). This difference had reduced slightly by mid-life (OR: 1.27; 95% CI: 0.96, 1.69). Adjustment for childhood and early adulthood adversity fully attenuated the differences in adult mental health.
Social and material disadvantage experienced in childhood continues to have long-range adverse effects on the mental health of second-generation Irish cohort members at mid-life. This suggests mechanisms operating over the lifecourse, which may have important policy implications for the settlement of migrant families.
The purpose of this paper was to examine national differences in the desire of people with severe mental illness to participate in medical decision-making, across six European countries.
The data were taken from a European longitudinal observational study (CEDAR; ISRCTN75841675). A sample of 514 patients with severe mental illness from the study centers in Ulm, Germany; London, England; Naples, Italy; Debrecen, Hungary; Aalborg, Denmark; and Zurich, Switzerland was assessed for desire to participate in medical decision-making. Associations between desire for participation in decision-making and center location were analyzed with generalized estimating equations.
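A minimal sketch of a GEE analysis along the lines described, assuming a long-format file of repeated assessments; the variable names, exchangeable working correlation, and Gaussian family are illustrative assumptions, not the study's exact specification.

# Hypothetical sketch of a GEE for repeated measures: desire for
# participation modelled on centre and time, with an exchangeable working
# correlation for assessments clustered within patients.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cedar_assessments.csv")  # columns: patient_id, center, time, participation

model = smf.gee(
    "participation ~ C(center) * time",  # centre differences plus centre-specific time trends
    groups="patient_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())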
We found large cross-national differences in patients’ desire to participate in decision-making, with the center explaining 47.2% of total variance in the desire for participation (P < 0.001). Averaged over time and independent of patient characteristics, London (mean = 2.27), Ulm (mean = 2.13) and Zurich (mean = 2.14) showed significantly higher scores in desire for participation, followed by Aalborg (mean = 1.97), where scores were in turn significantly higher than in Debrecen (mean = 1.56). The lowest scores were reported in Naples (mean = 1.14). Over time, the desire for participation in decision-making increased significantly in Zurich (b = 0.23) and decreased in Naples (b = −0.14). In all other centers, values remained stable.
This study demonstrates that patients’ desire for participation in decision-making varies by location. We suggest that more research attention be focused on identifying specific cultural and social factors in each country to further explain observed differences across Europe.
Mental disorders are increasingly common among adults in both the developed and developing world and are predicted by the WHO to be the leading cause of disease burden by 2030. Many physical conditions are more prevalent among people who also have a common mental disorder. This scoping review examines the current literature on the prevention, identification, and treatment of physical health problems among people with pre-existing mental health disorders in primary care in Europe.
The scoping review framework comprised the five-stage process developed by Arksey & O’Malley (2005). The search process was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Both quantitative and qualitative studies were included, with no restriction on study design.
The initial search identified 299 studies, with a further 28 added from the hand-search (total n = 327) of which 19 were considered relevant to the review research question and included for full analysis. Depression was the mental health condition most commonly studied (nine studies), followed by depression and anxiety (seven studies), with three studies examining any mental disorder. Eleven studies examined the effects of various interventions to address physical and mental comorbidity, with the most commonly studied intervention being collaborative care.
With just 19 studies meeting our criteria for inclusion, there is clearly a paucity of research in this area. Further research is essential in order to understand the pathophysiological mechanisms underlying the association between mental disorders and chronic conditions.
Engaging in paid employment after claiming retirement benefits may be an important avenue for individuals to work longer as life expectancies rise. After separating from one's career employer, individuals may engage in paid work to stay active, to supplement their current level of retirement savings, or both. Individuals who choose not to work after claiming may be expressing their preference to stay retired, perhaps because their retirement income is sufficient. However, the decision to work after claiming may be driven by a lack of retirement planning and insufficient savings, while the lack of post-claiming work may reflect an inability to find adequate employment opportunities. We use administrative records merged with panel data from several surveys of public employees in North Carolina to study the decision to engage in paid work after claiming retirement benefits. More than 60% of active workers plan to work after claiming benefits, while only around 42% of the same sample of individuals have engaged in post-claiming paid work in the first few years after leaving public-sector employment. Despite this gap, stated work plans are strongly predictive of actual post-claiming work behavior. Our final analysis uses self-reported measures to gauge the financial well-being of our sample in the early years after leaving career employment.
When at the end of the twelfth century the universities first emerged in Italy, Spain, and France, the culture of monastic learning was already centuries old and clearly defined. Indeed, it was the monasteries’ lively discourse on the place and purpose of study in the years after the Gregorian reform that gave form and focus to the emerging intellectual program of the new, secular schools. Europe’s monasteries did not merely react to the rise of the universities; rather, they were active in their evolution, shaping their learned culture with a mature syllabus of their own. Secular masters fashioned an image which was set self-consciously in opposition to the professed path of humility. Yet as a corporate, and later collegiate, body, these masters found much inspiration in the monastery, from its cloister, a purpose-built study space, to its morning schedule of teaching and its seasonal circulation of books. In their turn, the schools extended the intellectual horizons of the monks and equipped them to participate in the clerical culture of the institutional Church. It was no easy exchange. The secular university struck out frequently at a source of such obvious cultural influence and immutable institutional strength. For their part, in almost every generation after Bernard of Clairvaux (d. 1153), monks questioned the priorities of their mental opus, and struggled to reconcile traditional ascetic and modish academic impulses.
The sermon was a feature of monastic life from early times as a vehicle for spiritual instruction, pastoral guidance, and formal ceremonial (for example, to mark the election of a superior), but it was only in the later Middle Ages that the practice of preaching to audiences inside and outside the cloister became a common occupation for monastic men and women.
This study examines the distribution options of 85 large public retirement plans covering general state employees, teachers, and local government employees. The interest rates used to price annuities vary considerably across the plans. As a result, retirees with the same monthly benefit under a single-life annuity will receive substantially different monthly benefits if they select a joint and survivor annuity. We examine the impact of this variation in annuity pricing using both cross-plan differences in interest rates and the change in annuity-option choices within one plan after new assumed interest and mortality rates altered option prices.
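To make the mechanism concrete, the sketch below shows why the assumed interest rate moves the joint-and-survivor conversion factor: the factor is a ratio of actuarial present values, and higher rates discount the joint annuity's extra late-life payments more heavily, pulling the factor toward 1. The survival curves are toy numbers, not any plan's mortality table.

# Hypothetical sketch of joint-and-survivor (J&S) pricing: the J&S benefit
# equals the single-life benefit times the ratio of actuarial present values.
def apv(survival_probs, rate):
    """Actuarial present value of a life annuity-due of 1 per year."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(survival_probs))

years = 40
retiree = [0.98 ** t for t in range(years)]   # toy retiree survival curve
spouse = [0.985 ** t for t in range(years)]   # toy spouse survival curve
# 100% J&S pays while either is alive; assuming independent lives:
# P(either alive) = P(retiree) + P(spouse) - P(both)
joint = [r + s - r * s for r, s in zip(retiree, spouse)]

for rate in (0.04, 0.06, 0.08):
    factor = apv(retiree, rate) / apv(joint, rate)
    print(f"interest {rate:.0%}: J&S benefit = {factor:.3f} x single-life benefit")

Running this prints a conversion factor that rises with the assumed interest rate, which is why plans using different rates convert the same single-life benefit into materially different joint and survivor benefits.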