A 64-year-old male with lung cancer presents to the emergency department with one week of cough and increasing shortness of breath. At triage, his temperature is 37.3 °C, heart rate 106 beats per minute, blood pressure 136/80 mmHg, and oxygen saturation 87% on room air, improving to 94% with 3 L of oxygen via nasal prongs. A chest X-ray demonstrates bilateral patchy infiltrates, and he is treated for pneumonia with antibiotics. His respiratory status nonetheless worsens, with an increasing oxygen requirement. Additional history reveals that the patient recently finished treatment for lung cancer with an immune checkpoint inhibitor.
Introduction: Acute pharyngitis is a common emergency department (ED) presentation. The Centor (Modified/McIsaac) score uses five criteria (age, tonsillar exudates, swollen tender anterior cervical nodes, absence of a cough, and history of fever) to predict Group A Streptococcus (GAS) infection. The recommendation is that patients with a Centor score of 0-1 should not undergo testing and should not be given antibiotics, patients with a score of 2-3 may warrant throat cultures, and for patients with a score ≥ 4, empiric antibiotics may be appropriate. Associated pain is often first managed with acetaminophen or non-steroidal anti-inflammatory drugs; however, recent evidence suggests that a short course of low-to-moderate-dose corticosteroids as adjunctive therapy may reduce inflammation and provide pain relief. The objective of this study was to describe the ED management of acute pharyngitis for adult patients presenting to an academic ED over a two-year study period. Methods: This was a retrospective chart review of all adult (> 17 years) patients presenting to the Mount Sinai Hospital ED with a discharge diagnosis of acute pharyngitis (ICD-10 code J02.9) from January 1st, 2016 to December 31st, 2018. Trained research personnel reviewed medical records and extracted data using a computerized data abstraction form. Results: Of the 638 patients included in the study, 286 (44.8%) had a Centor score of 0-1, 328 (51.4%) had a score of 2-3, and 24 (3.8%) had a score of ≥ 4. Of those with a Centor score of 0-1, 83 (29.0%) had a throat culture, 88 (30.8%) were prescribed antibiotics, 15 (5.2%) were positive for GAS, and 74 (25.9%) were given corticosteroids in the ED or at discharge. Of those with a Centor score of 2-3, 156 (47.6%) had a throat culture, 220 (67.1%) were prescribed antibiotics, 44 (13.4%) were positive for GAS, and 145 (44.2%) were given corticosteroids.
Of those with a Centor score ≥ 4, 14 (58.3%) had a throat culture, 18 (75.0%) were prescribed antibiotics, 7 (29.2%) were positive for GAS and 12 (50.0%) were given corticosteroids. Conclusion: As predicted, a higher Centor score was associated with higher risk of GAS infection, increased antibiotic prescribing and use of corticosteroids. Many patients with low Centor scores were prescribed antibiotics and also had throat cultures. Further work is required to understand clinical decision making for the management of acute pharyngitis.
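The scoring rule described in this abstract can be sketched in code. This is an illustrative implementation only: the five criteria and the score thresholds follow the abstract's description, the age adjustment is the standard McIsaac modification, and the function names are hypothetical.

```python
def centor_score(age, tonsillar_exudates, tender_anterior_nodes,
                 absence_of_cough, history_of_fever):
    """Modified/McIsaac Centor score: one point per clinical criterion,
    plus the standard McIsaac age adjustment (range -1 to 5)."""
    score = sum([tonsillar_exudates, tender_anterior_nodes,
                 absence_of_cough, history_of_fever])
    if age < 15:       # +1 for age 3-14
        score += 1
    elif age >= 45:    # -1 for age >= 45
        score -= 1
    return score

def recommendation(score):
    """Map a score to the management strategy described in the abstract."""
    if score <= 1:
        return "no testing, no antibiotics"
    elif score <= 3:
        return "consider throat culture"
    else:
        return "empiric antibiotics may be appropriate"

# Example: a 30-year-old with exudates, tender nodes, and fever, but a cough
print(recommendation(centor_score(30, True, True, False, True)))
```

The example patient scores 3 (three criteria present, no age adjustment), falling in the 2-3 band for which the abstract reports throat cultures may be warranted.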
Introduction: eCTAS is a real-time electronic triage decision-support tool designed to improve patient safety and quality of care by standardizing the application of the Canadian Triage and Acuity Scale (CTAS). The tool dynamically calculates a recommended CTAS score based on the presenting complaint, vital signs and selected clinical modifiers. The primary objective was to assess the consistency of CTAS score distributions across 35 emergency departments (EDs) for 16 presenting complaints pre- and post-eCTAS implementation. Methods: This retrospective cohort study used population-based administrative data from January 2016 to December 2018 from all hospital EDs in Ontario that had implemented eCTAS with at least 9 months of data. Following a 3-month stabilization period, we compared data for 6 months post-eCTAS implementation to the same 6-month period the previous year (pre-implementation) to account for potential seasonal variation, patient volume and case mix. We included triage encounters of adult (≥18 years) patients if they had one of 16 pre-specified high-volume presenting complaints. A paired-samples t-test was used to determine consistency by estimating the absolute difference in CTAS distribution for each presenting complaint, by each hospital, pre- and post-eCTAS implementation, compared to the overall average of the 35 EDs. Results: There were 183,231 triage encounters in the pre-eCTAS cohort and 179,983 in the post-eCTAS cohort from 35 EDs across the province. Triage scores were more consistent with the overall average after eCTAS implementation for 6 (37.5%) presenting complaints: chest pain (cardiac features) (p < 0.001), extremity weakness/symptoms of cerebrovascular accident (p < 0.001), fever (p < 0.001), shortness of breath (p < 0.001), syncope (p = 0.02), and hyperglycemia (p = 0.03).
Triage consistency was similar pre- and post-eCTAS implementation for the presenting complaints of altered level of consciousness, anxiety/situational crisis, confusion, depression/suicidal/deliberate self-harm, general weakness, head injury, palpitations, seizure, substance misuse/intoxication, and vertigo. Conclusion: A standardized, electronic approach to performing triage assessments increased consistency in CTAS scores across many, but not all, high-volume CEDIS complaints. This does not reflect triage accuracy, as there are no known benchmarks for triage accuracy. Improvements in consistency were greatest for sentinel presenting complaints with a minimum allowable CTAS score.
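The paired-samples t-test in the Methods above can be sketched with a stdlib-only calculation. Each hospital contributes one pair of values for a given presenting complaint: the absolute deviation of its CTAS distribution from the 35-ED average, before and after eCTAS implementation. The numbers below are invented for illustration and are not study data.

```python
import math
from statistics import mean, stdev

# Hypothetical per-hospital absolute deviations from the overall-average
# CTAS distribution for one presenting complaint (not study data).
pre_deviation = [0.12, 0.08, 0.15, 0.10, 0.20, 0.09, 0.14, 0.11]
post_deviation = [0.07, 0.06, 0.10, 0.09, 0.12, 0.08, 0.10, 0.08]

# Paired-samples t statistic: mean of the paired differences divided by
# its standard error. A positive t means deviations shrank post-eCTAS,
# i.e. triage became more consistent with the overall average.
diffs = [pre - post for pre, post in zip(pre_deviation, post_deviation)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
print(f"t = {t_stat:.2f} on {len(diffs) - 1} degrees of freedom")
```

In practice the p-values reported in the Results would come from comparing this statistic against a t distribution with n − 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`).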
Introduction: Emergency department (ED) boarding is associated with worse outcomes for critically ill patients; findings in other patient populations have been mixed. The primary objective of this study was to examine predictors of prolonged ED boarding among cancer patients receiving chemotherapy who required hospital admission from the ED. Secondary objectives were to examine the association between prolonged ED boarding and in-hospital mortality, 30-day mortality, and hospital length of stay (LOS). Methods: Using administrative databases from Ontario, we identified adult (≥ 18 years) cancer patients who received chemotherapy within 30 days prior to a hospital admission from the ED between 2013 and 2017. ED boarding time was calculated as the time from the decision to admit the patient to when the patient physically left the ED. Prolonged ED boarding was defined as ≥ 8 hours. Multivariable logistic regression was used to examine predictors of prolonged ED boarding and to determine whether prolonged boarding was associated with mortality. Multivariable quantile regression was used to determine the association between prolonged boarding and hospital LOS. Results: 45,879 patients were included in the study. Median (interquartile range (IQR)) ED LOS was 11.8 (7.0, 21.7) hours and median (IQR) ED boarding time was 4.2 (1.6, 14.2) hours. 17,053 (37.2%) patients had prolonged ED boarding. Severe ED crowding was the strongest predictor of prolonged ED boarding (odds ratio: 17.7, 95% CI: 15.0 to 20.9). Prolonged ED boarding was not associated with in-hospital mortality or 30-day mortality. Median hospital LOS was over 9 hours (p < 0.0001) longer among patients with the longest ED boarding times. Conclusion: Severe ED crowding was associated with a significant increase in the odds of prolonged ED boarding.
While our study demonstrated that prolonged boarding was not associated with increased mortality, further work is required to understand if ED boarding is associated with other adverse outcomes in this immunocompromised population.
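The boarding-time definition in the Methods above (decision-to-admit timestamp to ED departure, with ≥ 8 hours counted as prolonged) translates directly into code. The timestamps below are hypothetical and the helper name is illustrative.

```python
from datetime import datetime

PROLONGED_THRESHOLD_HOURS = 8  # study definition of prolonged boarding

def boarding_hours(decision_to_admit, left_ed):
    """Boarding time in hours: decision-to-admit until the patient
    physically leaves the ED."""
    return (left_ed - decision_to_admit).total_seconds() / 3600

# Hypothetical encounter: admit decision at 14:30, left the ED at 01:15
# the next morning.
decision = datetime(2017, 3, 1, 14, 30)
departed = datetime(2017, 3, 2, 1, 15)

hours = boarding_hours(decision, departed)
prolonged = hours >= PROLONGED_THRESHOLD_HOURS
print(f"{hours:.1f} h boarding, prolonged = {prolonged}")
```

This encounter boards for 10.75 hours and would be flagged as prolonged under the study definition.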
The number of novel psychoactive substances (NPS) available is increasing. Synthetic cannabinoids (SC) are one of many NPS sold. SC aim to emulate the effects of natural cannabis by acting on cannabinoid receptors. Despite much research into their pharmacology, there are limited data on the user experience of SC.
It is useful for psychiatrists to understand what experiences people have whilst on illicit substances. The aim of this qualitative study is to gain an initial understanding of what characterizes the experiences of those who use SC.
Forty anonymously written online reports were collected from the “Erowid experience vaults” and analysed using the Empirical Phenomenological Psychological Method.
The analysis yielded 488 meaning units (MU). These were grouped into 36 categories revealing 5 broad themes: (1) physical effects; (2) sensory distortions and distortions of perception; (3) emotional and psychological effects; (4) re-dosing, addiction and comedown effects; (5) similarities to other substances.
Synthetic cannabinoids have a mixed effect on users, with a myriad of experiences reported. Some experienced positive results from their usage, such as euphoria and relaxation; however, these were counterbalanced by those who experienced serious negative emotional and physical side effects such as anxiety, paranoia, palpitations and convulsions. The effects of SC often appear to emulate those of their natural counterpart, yet there is an unpredictability to them which can end in serious consequences. Online forum content gives us a strong baseline understanding of users’ experiences of SC. Further research is required to elucidate a more nuanced understanding.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
What drives some Islamists to become “Muslim Democrats,” downplaying religion and accepting secular democracy? This article hypothesizes that one channel of ideological change is migration to secular democracies. Drawing on an ideal point analysis of parliamentary votes from the Tunisian Islamist movement Ennahda, I find that MPs who had lived in secular democracies held more liberal voting records than their counterparts who had lived only in Tunisia. In particular, they were more likely to defend freedom of conscience and to vote against enshrining Islamic law in the constitution. Interviews with several of these MPs demonstrate that they recognize a causal effect of their experiences abroad on their ideologies, and provide support for three distinct mechanisms by which this effect may have occurred: socialization, intergroup contact, and political learning.
The (re)insurance industry faces a growing risk related to the development of information technology (IT). This growth is creating an increasingly digitally interconnected world, with more and more dependence placed on IT systems to manage processes. This is generating opportunities for new insurance products and coverages that directly address the risks companies face. However, it is also changing the risk landscape of existing classes of business within non-life insurance, where there is an inherent risk of loss from IT events that cannot be, or have not been, excluded in policy wordings, or that are changing the risk profile of traditional risks. This risk of losses to non-cyber classes of business resulting from cyber as a peril that has not been intentionally included (often by not clearly excluding it) is defined as non-affirmative cyber risk, and the level of understanding of this issue and of the cyber peril exposure from non-cyber policies varies across the market. In contract wordings, the market has remained relatively “silent” across most lines of business about potential losses resulting from IT-related events, either by not addressing the potential issue or by excluding it without explicit wording. Some classes of business recognise the exposure through the use of write-backs. Depending on the line of business, the approach will vary as to how best to turn any “silent” exposure into a known quantity, whether through robust exclusionary language, pricing or exposure monitoring. This paper proposes a framework to help insurance companies address the issue of non-affirmative cyber risk across their portfolios. Whilst the framework is not intended to be an all-encompassing solution to the issue, it has been developed to help those tasked with addressing the issue to perform a structured analysis. Each company will need to tailor the framework to fit its own structure and underwriting procedures.
Ultimately, the framework should be used to help analysts engage with management on this issue so that the risk is understood, and any risk mitigation actions can be taken if required. In the appendix, we present a worked example to illustrate how companies could implement the framework. The example is entirely fictional, is focused on non-life specialty insurance, and is intended only to help demonstrate one possible way in which to apply the framework.
This chapter reviews what is known about the fate of carbon during early differentiation of inner solar system planets. It reviews the nature of carbon fractionation in a magma ocean as compared to the core, mantle, and atmosphere, and how this may have varied between planetary bodies in the solar system. It discusses whether magma ocean processes could have established the present-day budget of carbon in Earth’s bulk silicate, and also reviews possibilities for the early temporal evolution of the mantle carbon budget through core formation, later veneer addition, and magma ocean crystallization processes.
Background: Migraine is a common disorder, most typically presenting as headache and often associated with vertigo and motion sickness. It is a genetically complex condition, with multiple genes ultimately contributing to the predisposition to and development of this episodic neurological disorder. We identified a large American family of 29 individuals, of whom 17 suffered from at least one of these disorders: migraine, vertigo, or motion sickness. Many of these individuals suffered from several of these conditions simultaneously. We hypothesized that vertigo and motion sickness may involve genes that are independent of those directly contributing to migraine susceptibility. Methods: Genome-wide linkage analysis was performed using 400 microsatellite repeat markers spaced at 10 cM throughout the genome. The members of this family were phenotyped for each condition (migraine, vertigo, and motion sickness) and analyzed separately. Statistical analysis was performed using two-point and multipoint linkage analysis, employing a number of models including autosomal recessive and dominant patterns of inheritance with high and low genetic penetrance. Results: We identified a novel locus for migraine, 9q13-q22 (maximum two-point logarithm of odds [LOD] score = 2.51). In addition, there are suggestive LOD scores that localize to different chromosomes for each phenotype: vertigo (chromosome 18, LOD score of 1.82) and motion sickness (chromosome 4, LOD score of 2.09). Conclusions: Our analysis supports our hypothesis that migraine-associated vertigo and motion sickness may involve distinct susceptibility genes.
To characterize the association of longitudinal changes in maternal anthropometric measures with neonatal anthropometry and to assess to what extent late-gestational changes in maternal anthropometry are associated with neonatal body composition.
In a prospective cohort of pregnant women, maternal anthropometry was measured at six study visits across pregnancy and after birth, neonates were measured and fat and lean mass calculated. We estimated maternal anthropometric trajectories and separately assessed rate of change in the second (15–28 weeks) and third trimester (28–39 weeks) in relation to neonatal anthropometry. We investigated the extent to which tertiles of third-trimester maternal anthropometry change were associated with neonatal outcomes.
Women were recruited from twelve US sites (2009–2013).
Non-obese women with singleton pregnancies (n 2334).
A higher rate of gestational weight gain was associated with larger-birth-weight infants with greater lean and fat mass. In contrast, higher rates of increase in other maternal anthropometric measures were not associated with infant birth weight but were associated with decreased neonatal lean mass. In the third trimester, women in the tertile of lowest change in triceps skinfold (−0·57 to −0·06 mm per week) had neonates with 35·8 g more lean mass than neonates of mothers in the middle tertile of rate of change (−0·05 to 0·06 mm per week).
The rate of change in third-trimester maternal anthropometry measures may be related to neonatal lean and fat mass yet have a negligible impact on infant birth weight, indicating that neonatal anthropometry may provide additional information over birth weight alone.
Cyber Operational Risk: Cyber risk is routinely cited, in various publications and surveys, as one of the most important sources of operational risk facing organisations today. Further, in recent years, cyber risk has entered the public consciousness through highly publicised events involving affected UK organisations such as TalkTalk, Morrisons and the NHS. Regulators and legislators are increasing their focus on this topic, with the General Data Protection Regulation (“GDPR”) a notable example. Risk actuaries and other risk management professionals at insurance companies therefore need a robust assessment of the potential losses stemming from the cyber risk that their organisations may face. They should be able to do this as part of an overall risk management framework and be able to demonstrate this to stakeholders such as regulators and shareholders. Given that cyber risks are still very much new territory for insurers and there is no commonly accepted practice, this paper describes a proposed framework in which to perform such an assessment. As part of this, we leverage two existing frameworks (the Chief Risk Officer (“CRO”) Forum cyber incident taxonomy, and the National Institute of Standards and Technology (“NIST”) framework) to describe the taxonomy of a cyber incident, and the relevant cyber security and risk mitigation items for the incident in question, respectively.
Summary of Results: Three detailed scenarios have been investigated by the working party:
∙ Employee leaks data at a general (non-life) insurer: Internal attack through social engineering, causing large compensation costs and regulatory fines, driving a 1 in 200 loss of £210.5m (c. 2% of annual revenue).
∙ Cyber extortion at a life insurer: External attack through social engineering, causing large business interruption and reputational damage, driving a 1 in 200 loss of £179.5m (c. 6% of annual revenue).
∙ Motor insurer telematics device hack: External attack through software vulnerabilities, causing large remediation / device replacement costs, driving a 1 in 200 loss of £70.0m (c. 18% of annual revenue).
Limitations: The following sets out key limitations of the work set out in this paper:
∙ While the presented scenarios are deemed material at this point in time, the threat landscape moves fast and could render specific narratives and calibrations obsolete within a short time frame.
∙ There is a lack of historical data to base certain scenarios on and therefore a high level of subjectivity is used to calibrate them.
∙ No attempt has been made to make an allowance for seasonality of renewals (a cyber event coinciding with peak renewal season could exacerbate cost impacts).
∙ No consideration has been given to the impact of the event on the share price of the company.
∙ Correlation with other risk types has not been explicitly considered.
Conclusions: Cyber risk is a very real threat and should not be ignored or treated lightly in operational risk frameworks, as it has the potential to threaten the ongoing viability of an organisation. Risk managers and capital actuaries should be aware of the various sources of cyber risk and their potential impacts to ensure that the business is sufficiently prepared for such an event. Quantifying the impact of cyber risk on the operations of an insurer poses significant challenges, not least that the threat landscape is ever-changing and there is a lack of historical experience on which to base assumptions. Given this uncertainty, this paper sets out a framework with which readers can bring consistency to the way scenarios are developed over time. It provides a common taxonomy to ensure that key aspects of cyber risk are considered, and sets out examples of how to implement the framework. It is critical that insurers endeavour to understand cyber risk better and look to refine assumptions over time as new information is received. In addition to ensuring that sufficient capital is held for key operational risks, the investment in understanding cyber risk now will help to educate senior management and could have benefits through influencing internal cyber security capabilities.
This paper presents latest thinking from the Institute and Faculty of Actuaries’ Model Risk Working Party and follows on from their Phase I work, Model Risk: Daring to Open the Black Box. This is a more practical paper and presents the contributors’ experiences of model risk gained from a wide range of financial and non-financial organisations with suggestions for good practice and proven methods to reduce model risk. After a recap of the Phase I work, examples of model risk communication are given covering communication: to the Board; to the regulator; and to external stakeholders. We present a practical framework for model risk management and quantification with examples of the key actors, processes and cultural challenge. Lessons learned are then presented from other industries that make extensive use of models and include the weather forecasting, software and aerospace industries. Finally, a series of case studies in practical model risk management and mitigation are presented from the contributors’ own experiences covering primarily financial services.
Vertigo is common in the emergency department (ED). Most aetiologies are peripheral and do not require hospitalization, but many patients still fear falling. Some patients may be taking opioid analgesic medications (for other reasons); the risk of falls leading to fractures among patients with vertigo could be potentiated by the simultaneous use of opioids.
To examine the risk of fractures in discharged ED patients with peripheral vertigo who were being prescribed opioids during the same time period.
Linked administrative databases from Ontario were used to compare discharged ED patients aged ≥65 with peripheral vertigo to patients with urinary tract infection (UTI) from 2006 to 2011. We used Cox regression analysis with an interaction term to estimate the modifying effect of an opioid prescription on the hazard of fracture within 90 days.
There were 13,012 patients with a peripheral vertigo syndrome and 76,885 with a UTI. Thirteen per cent of the vertigo cohort and 25% of the UTI cohort had access to a filled opioid prescription. Compared to vertigo patients who did not fill an opioid prescription, the adjusted hazard ratio for fracture among vertigo patients who did fill a prescription was 3.59 (95% CI 1.97–6.13). Among UTI patients who filled an opioid prescription, the hazard ratio was 1.68 (95% CI 1.43–1.97) compared to UTI patients who did not.
Patients discharged from the ED with peripheral vertigo who were also being prescribed opioids had a higher hazard of subsequent fracture compared to those who were not, and the effect was much greater than among UTI patients. These results suggest that in the acutely vertiginous older patient, opioid analgesic medications should be modified, where possible.
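The interaction term in the Cox model described above has a simple interpretation that can be checked arithmetically: on the log-hazard scale, the opioid effect in the vertigo cohort equals the opioid effect in the UTI cohort plus the interaction coefficient, so the ratio of the two reported hazard ratios estimates the exponentiated interaction. The sketch below uses the abstract's point estimates only, ignoring uncertainty.

```python
import math

# Hazard ratios for filling an opioid prescription, as reported above.
hr_opioid_vertigo = 3.59  # among peripheral-vertigo patients
hr_opioid_uti = 1.68      # among UTI comparison patients

# On the log-hazard scale: log(HR_vertigo) = log(HR_uti) + beta_interaction,
# so the ratio of the cohort-specific HRs is exp(beta_interaction).
ratio_of_hrs = hr_opioid_vertigo / hr_opioid_uti
beta_interaction = math.log(ratio_of_hrs)

print(f"ratio of HRs = {ratio_of_hrs:.2f}, "
      f"interaction coefficient = {beta_interaction:.2f}")
```

A ratio of hazard ratios above 1 (here roughly 2.1) is what the abstract's conclusion describes: the opioid effect on fracture hazard was much larger in the vertigo cohort than in the UTI cohort.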
To examine breast-feeding and complementary feeding practices during the first 6 months of life among Norwegian infants of Somali and Iraqi family origin.
A cross-sectional survey was performed during March 2013–February 2014. Data were collected using a semi-quantitative FFQ adapted from the second Norwegian national dietary survey among infants in 2006–2007.
Somali-born and Iraqi-born mothers living in eastern Norway were invited to participate.
One hundred and seven mothers/infants of Somali origin and eighty mothers/infants of Iraqi origin participated.
Breast-feeding was almost universally initiated after birth. Only 7 % of Norwegian-Somali and 10 % of Norwegian-Iraqi infants were exclusively breast-fed at 4 months of age. By 1 month of age, water had been introduced to 30 % of Norwegian-Somali and 26 % of Norwegian-Iraqi infants, and infant formula to 44 % and 34 %, respectively. Fifty-four per cent of Norwegian-Somali and 68 % of Norwegian-Iraqi infants had been introduced to solid or semi-solid foods at 4 months of age. Breast-feeding at 6 months of age was more common among Norwegian-Somali infants (79 %) compared with Norwegian-Iraqi infants (58 %; P=0·001). Multivariate analyses indicated no significant factors associated with exclusive breast-feeding at 3·5 months of age. Factors positively associated with breast-feeding at 6 months were country of origin (Somalia) and parity (>2).
Breast-feeding initiation was common among Iraqi-born and Somali-born mothers, but the exclusive breast-feeding period was shorter than recommended in both groups. The study suggests that there is a need for new culture-specific approaches to support exclusive breast-feeding and complementary feeding practices among foreign-born mothers living in Norway.
This article examines the memoirs of Indian Civil Service officers who continued to work in what became the Indian Administrative Service after independence. Rather than being understood solely as historical archives, these texts constitute a genre that can be called the ‘bureaucratic memoir’, which reveals masculinities that are both colonial and post-colonial. These memoirs, and their publication decades after independence, reveal attempts by elites to preserve the power of the bureaucracy into subsequent decades. The texts seek to disavow, but instead also reveal, the patriarchal intimacies of these elites, even as these elites were challenged by charges of corruption and failure that emerged almost from the first moments of independence.
Recent scholarship finds that new democracies are more likely than established democracies to make binding commitments to international human rights institutions. Are new democracies also better at following through on these commitments? Stated differently, does their greater willingness to join international institutions reflect a genuine commitment to human rights reform, or is it just “cheap talk”? We analyze this question using a new data set of more than 1,000 leading European Court of Human Rights (ECtHR) cases. Since new democracies face judgments that are more difficult to implement than those faced by established democracies, we employ a genetic matching algorithm to balance the data set. After controlling for bureaucratic and judicial capacity, we find that new democracies initially implement similar ECtHR judgments more quickly than established democracies, but this effect reverses the longer a judgment remains pending. Although new democracies have incentives to implement judgments quickly, they sometimes lack the checks and balances that help ensure implementation should an executive resist.
This study aimed to compare the outcomes of two frequently employed interventions for the management of tinnitus: tinnitus retraining therapy and cognitive behavioural therapy.
A systematic review of literature published up to and including February 2013 was performed. Only randomised controlled trials involving human participants were included.
Nine high-quality studies evaluating the efficacy of tinnitus retraining therapy and cognitive behavioural therapy were identified. Of these, eight assessed cognitive behavioural therapy relative to a no-treatment control and one compared tinnitus retraining therapy to tinnitus masking therapy. Each study used a variety of standardised and validated questionnaires. Outcome measures were heterogeneous, but both therapies resulted in significant improvements in quality of life scores. Depression scores improved with cognitive behavioural therapy.
Both cognitive behavioural therapy and tinnitus retraining therapy are effective for tinnitus, with neither therapy being demonstrably superior. Further research using standardised, validated questionnaires is needed so that objective comparisons can be made.
The liquid impingement erosion behavior of a zirconium-based bulk metallic glass (BMG), Zr44Ti11Cu10Ni10Be25, was evaluated in this study. For comparison, a commonly used hydroturbine steel was evaluated under the same test conditions. The BMG demonstrated more than four times the resistance to cavitation erosion of the hydroturbine steel. This unusually high erosion resistance is attributed to the BMG's uniform amorphous structure with no grain boundaries, its higher hardness, and its ability to accommodate strain through localized shear bands.