We describe the management of two linked severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreaks, predominantly amongst 18–35-year-olds, in a UK county from July to September 2021, following the lifting of national coronavirus disease 2019 (COVID-19)-associated social restrictions. One was associated with a nightclub and one with five air force bases. In the week beginning 2 August 2021, air force contact tracing teams detected 68 cases across five bases within one county; 21 (30.9%) were associated with a night-time economy venue, 13 (19.1%) with night-time economy venues in the county's main town and at least one case per base (n = 6, 8.8%) with a particular nightclub in this town, which itself had been associated with 302 cases in the previous week (coinciding with its reopening following a national lockdown). In response, Public Health England/United Kingdom Health Security Agency, air force and local authority teams collaboratively implemented communication strategies and enhanced access to SARS-CoV-2 testing and vaccination. Key challenges included encouraging behaviours that reduce the likelihood of transmission in a population who may have considered themselves at low risk of severe COVID-19. This report may inform preparation for, and management of, the easing of social restrictions in potential future pandemics, and how an outbreak in this context may be addressed.
To evaluate the clinical impact of the BioFire FilmArray Pneumonia Panel (PNA panel) in critically ill patients.
Design:
Single-center, preintervention and postintervention retrospective cohort study.
Setting:
Tertiary-care academic medical center.
Patients:
Adult ICU patients.
Methods:
Patients with quantitative bacterial cultures obtained by bronchoalveolar lavage or tracheal aspirate either before (January–March 2021, preintervention period) or after (January–March 2022, postintervention period) implementation of the PNA panel were randomly screened until 25 patients per study month (75 in each cohort) who met the study criteria were included. Antibiotic use from the day of culture collection through day 5 was compared.
Results:
The primary outcome of median time to first antibiotic change based on microbiologic data was 50 hours before the intervention versus 21 hours after the intervention (P = .0006). Also, 56 postintervention regimens (75%) were eligible for change based on PNA panel results; actual change occurred in 30 regimens (54%). Median antibiotic days of therapy (DOTs) were 8 before the intervention versus 6 after the intervention (P = .07). For the patients with antibiotic changes made based on PNA panel results, the median time to first antibiotic change was 10 hours. For patients who were initially on inadequate therapy, time to adequate therapy was 67 hours before the intervention versus 37 hours after the intervention (P = .27).
Conclusions:
The PNA panel was associated with decreased time to first antibiotic change and fewer antibiotic DOTs. Its impact may have been larger if a higher percentage of potential antibiotic changes had been implemented. The PNA panel is a promising tool to enhance antibiotic stewardship.
Clozapine is licensed for treatment-resistant psychosis and remains underutilised. This may be related to the stringent haematological monitoring requirements that are mandatory in most countries. We aimed to compare guidelines internationally and develop a novel Stringency Index. We hypothesised that the most stringent countries would have increased healthcare costs and reduced prescription rates.
Method
We conducted a literature review and survey of guidelines internationally. Guideline identification involved a literature review and consultation with clinical academics. We focused on the haematological monitoring parameters, frequency and thresholds for discontinuation and rechallenge after suspected clozapine-induced neutropenia. In addition, indicators reflecting monitoring guideline stringency were scored and visualised using a choropleth map. We developed a Stringency Index with an international panel of clozapine experts through a modified Delphi survey. The Stringency Index was compared to health expenditure per capita and clozapine prescription rates per 100 000 persons.
Results
One hundred and two countries were included, from Europe (n = 35), Asia (n = 24), Africa (n = 20), South America (n = 11), North America (n = 7) and Oceania and Australia (n = 5). Guidelines differed in the frequency of haematological monitoring and in discontinuation thresholds. Overall, 5% of included countries had explicit guidelines for clozapine rechallenge and 40% explicitly prohibited clozapine rechallenge. Furthermore, 7% of included countries had modified discontinuation thresholds for benign ethnic neutropenia. None of the guidelines specified how long haematological monitoring should continue. The most stringent guidelines were in Europe, and the least stringent were in Africa and South America. There was a positive association (r = 0.43, p < 0.001) between a country's Stringency Index and its healthcare expenditure per capita.
Conclusions
Recommendations on how haematological function should be monitored in patients treated with clozapine vary considerably between countries. It would be useful to standardise guidelines on haematological monitoring worldwide.
Clozapine is the only drug licensed for treatment-resistant schizophrenia (TRS) but the real-world clinical and cost-effectiveness of community initiation of clozapine is unclear.
Aims
The aim was to assess the feasibility and cost-effectiveness of community initiation of clozapine.
Method
This was a naturalistic study of community patients recommended for clozapine treatment.
Results
Of 158 patients recommended for clozapine treatment, 88 (56%) agreed to clozapine initiation and, of these, 58 (66%) were successfully established on clozapine. The success rate for community initiation was 65.4%, which was not significantly different from that for in-patient initiation (58.82%, χ2(1,88) = 0.47, P = 0.49). Following clozapine initiation, there was a significant reduction in median out-patient visits over 1 year (from 24.00 (interquartile range (IQR) = 14.00–41.00) to 13.00 visits (IQR = 5.00–24.00), P < 0.001) and over 2 years (from 47.50 visits (IQR = 24.75–71.00) to 22.00 (IQR = 11.00–42.00), P < 0.001), and a 74.71% decrease in psychiatric hospital bed days (z = −2.50, P = 0.01). Service-use costs decreased (1 year: –£963/patient, P < 0.001; 2 years: –£1598.10/patient, P < 0.001). Subanalyses for community-only initiation also showed significant cost reductions relative to costs prior to starting clozapine (1 year: –£827.40/patient, P < 0.001; 2 years: –£1668.50/patient, P < 0.001). Relative to before initiation, symptom severity was improved in patients taking clozapine at discharge (median Positive and Negative Syndrome Scale total score: initial visit, 80 (IQR = 71.00–104.00); discharge visit, 50.5 (IQR = 44.75–75.00); P < 0.001) and at 2-year follow-up (median Health of the Nation Outcome Scales total score: initial visit, 13.00 (IQR = 9.00–15.00); 2-year follow-up, 8.00 (IQR = 3.00–13.00); P = 0.023).
Conclusions
These findings indicate that community initiation of clozapine is feasible and is associated with significant reductions in costs, service use and symptom severity.
The Canadian Nosocomial Infection Surveillance Program conducted point-prevalence surveys in acute-care hospitals in 2002, 2009, and 2017 to identify trends in antimicrobial use.
Methods:
Eligible inpatients were identified from a 24-hour period in February of each survey year. Patients were eligible (1) if they were admitted for ≥48 hours or (2) if they had been admitted to the hospital within a month. Chart reviews were conducted. We calculated the prevalence of antimicrobial use as (number of patients receiving ≥1 antimicrobial during the survey period ÷ number of patients surveyed) × 100%.
Results:
In each survey, 28−47 hospitals participated. In 2002, 2,460 (36.5%; 95% CI, 35.3%−37.6%) of 6,747 surveyed patients received ≥1 antimicrobial. In 2009, 3,566 (40.1%, 95% CI, 39.0%−41.1%) of 8,902 patients received ≥1 antimicrobial. In 2017, 3,936 (39.6%, 95% CI, 38.7%−40.6%) of 9,929 patients received ≥1 antimicrobial. Among patients who received ≥1 antimicrobial, penicillin use increased 36.8% between 2002 and 2017, and third-generation cephalosporin use increased from 13.9% to 18.1% (P < .0001). Between 2002 and 2017, fluoroquinolone use decreased from 25.7% to 16.3% (P < .0001) and clindamycin use decreased from 25.7% to 16.3% (P < .0001) among patients who received ≥1 antimicrobial. Aminoglycoside use decreased from 8.8% to 2.4% (P < .0001) and metronidazole use decreased from 18.1% to 9.4% (P < .0001). Carbapenem use increased from 3.9% in 2002 to 6.1% in 2009 (P < .0001) and increased by 4.8% between 2009 and 2017 (P = .60).
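As an illustrative check of the prevalence calculation described in the Methods, the short Python sketch below reproduces the 2002 point estimate and an approximate 95% confidence interval from the figures reported above; it assumes a normal-approximation (Wald) interval, which is one plausible reading of how the intervals were derived.

```python
import math

# 2002 survey figures reported above
n_antimicrobial = 2460   # patients receiving >=1 antimicrobial
n_surveyed = 6747        # patients surveyed

# Prevalence = patients receiving >=1 antimicrobial / patients surveyed x 100%
prevalence = n_antimicrobial / n_surveyed

# Approximate 95% CI using a normal-approximation (Wald) interval (assumption)
se = math.sqrt(prevalence * (1 - prevalence) / n_surveyed)
lower, upper = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"Prevalence: {prevalence:.1%} (95% CI, {lower:.1%}-{upper:.1%})")
# Prints: Prevalence: 36.5% (95% CI, 35.3%-37.6%), matching the reported 2002 estimate
```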
Conclusions:
The prevalence of antimicrobial use increased between 2002 and 2009 and then stabilized between 2009 and 2017. These data provide important information for antimicrobial stewardship programs.
Having attention-deficit/hyperactivity disorder (ADHD) is a risk factor for concussion that affects both concussion diagnosis and recovery. Less is known about how ADHD and repetitive subconcussive head impacts jointly relate to neurocognitive and behavioral outcomes. This study evaluated ADHD as a moderator of the association between repetitive head impacts and both neurocognitive test performance and behavioral concussion symptoms over the course of an athletic season.
Method:
Study participants included 284 male athletes aged 13–18 years who participated in high school football. Parents completed the Strengths and Weaknesses of ADHD Symptoms and Normal Behavior (SWAN) ratings about their teen athlete before the season began. Head impacts were measured using an accelerometer worn during all practices and games. Athletes and parents completed behavioral ratings of concussion symptoms and the Attention Network Task (ANT), Digital Trail Making Task (dTMT), and Cued Task Switching Task at pre- and post-season.
Results:
Mixed model analyses indicated that neither head impacts nor ADHD symptoms were associated with post-season athlete- or parent-reported concussion symptom ratings or neurocognitive task performance. Moreover, no relationships between head impact exposure and neurocognitive or behavioral outcomes emerged when severity of pre-season ADHD symptoms was included as a moderator.
Conclusion:
Athletes’ pre-season ADHD symptoms do not appear to influence behavioral or neurocognitive outcomes following a single season of competitive football. Results are interpreted in light of several study limitations (e.g., single season, assessment of constructs) that may have influenced this study’s pattern of largely null results.
We examined whether a preadmission history of depression is associated with fewer delirium/coma-free (DCF) days, worse 1-year depression severity and cognitive impairment.
Design and measurements:
A health proxy reported history of depression. Separate models examined the effect of preadmission history of depression on: (a) intensive care unit (ICU) course, measured as DCF days; (b) depression symptom severity at 3 and 12 months, measured by the Beck Depression Inventory-II (BDI-II); and (c) cognitive performance at 3 and 12 months, measured by the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) global score.
Setting and participants:
Patients admitted to the medical/surgical ICU services were eligible.
Results:
Of 821 subjects eligible at enrollment, 261 (33%) had a preadmission history of depression. After adjusting for covariates, a preadmission history of depression was not associated with fewer DCF days (OR 0.78; 95% CI, 0.59–1.03; p = 0.077). A prior history of depression was associated with higher BDI-II scores at 3 and 12 months (3 months: OR 2.15; 95% CI, 1.42–3.24; p < 0.001; 12 months: OR 1.89; 95% CI, 1.24–2.87; p = 0.003). We did not observe an association between preadmission history of depression and cognitive performance at either 3 or 12 months (3 months: beta coefficient −0.04; 95% CI, −2.70 to 2.62; p = 0.97; 12 months: beta coefficient 1.50; 95% CI, −1.26 to 4.26; p = 0.28).
Conclusion:
Patients with a depression history prior to ICU stay exhibit a greater severity of depressive symptoms in the year after hospitalization.
Many people attempt to give meaning to their lives by pursuing projects that they believe will bear fruit after they have died. Knowing that their death will preclude them from protecting or promoting such projects, people who draw meaning from them will often attempt to secure their continuance by securing promises from others to serve as their caretakers after they die. But those who rely on such promises are faced with a problem: none of the four major accounts that have been developed to explain directed promissory obligation (the Authority View, the Trust View, the Assurance View, and the Reliance View) supports the view that we are obligated to keep our promises to persons who are now dead. But I will provide hope for those who wish to use such promises to protect the meaning with which they have endowed their lives. I will argue that while we cannot wrong a person who is now dead by breaking a promise made to her during her life, we could wrong the living by doing so. We thus (might) have reason to keep the promises that we made to those who are now dead.
The mechanics underlying ice–skate friction remain uncertain despite over a century of study. In the 1930s, the theory of self-lubrication from frictional heat supplanted an earlier hypothesis that pressure melting governed skate friction. More recently, researchers have suggested that a layer of abraded wear particles or the presence of quasi-liquid molecular layers on the surface of ice could account for its slipperiness. Here, we assess the dominant hypotheses proposed to govern ice–skate friction and describe experiments, conducted in an indoor skating rink, designed to provide observations to test these hypotheses. Our results indicate that the brittle failure of ice under rapid compression plays a strong role. Our observations did not confirm the presence of full-contact water films and are more consistent with the presence of lubricating ice-rich slurries at discontinuous high-pressure zones (HPZs). The presence of ice-rich slurries supporting skates through HPZs merges pressure-melting, abrasion and lubricating films as a unified hypothesis for why skates are so slippery across broad ranges of speeds, temperatures and normal loads. We suggest tribometer experiments to overcome the difficulties of investigating these processes during actual skating trials.
To conduct an individual patient data meta-analysis of randomised controlled trials (RCTs) of manualised psychological treatments for obsessive-compulsive disorder (OCD), and examine the differential efficacy of psychological treatments by treatment type and format.
Background
Previous meta-analyses conclude that efficacious psychological treatments for OCD exist. However, determining the efficacy of psychological treatments requires multiple forms of assessment across a range of indexes, yet most previous meta-analyses in OCD are based solely on effect sizes.
Method
We evaluated treatment efficacy across 24 RCTs (n = 1,667) by conducting clinical significance analyses (using standardised Jacobson methodology) and standardised mean difference within-group effect-size analyses. Outcomes were Yale-Brown Obsessive Compulsive Scale (Y-BOCS) scores, evaluated at post-treatment and follow-up (3-6 months post-treatment).
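To make the two analytic approaches named above concrete, here is a minimal Python sketch of a within-group standardised mean difference (Hedges' g) and a Jacobson-style reliable change index. The input values are hypothetical and the standardiser and corrections used in the meta-analysis may differ; this is an illustration of the general formulas, not the authors' exact procedure.

```python
import math

def hedges_g_within(mean_pre, mean_post, sd_pre, n):
    """Within-group standardised mean difference with small-sample correction.

    Uses the pre-treatment SD as the standardiser (one common convention).
    """
    d = (mean_pre - mean_post) / sd_pre
    correction = 1 - 3 / (4 * (n - 1) - 1)  # Hedges' small-sample correction
    return d * correction

def reliable_change_index(score_pre, score_post, sd_norm, reliability):
    """Jacobson & Truax reliable change index; |RCI| > 1.96 indicates reliable change."""
    se_measurement = sd_norm * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * se_measurement ** 2)
    return (score_pre - score_post) / s_diff

# Hypothetical Y-BOCS values, purely for illustration
print(hedges_g_within(mean_pre=26.0, mean_post=16.0, sd_pre=6.0, n=50))
print(reliable_change_index(score_pre=26.0, score_post=16.0, sd_norm=6.0, reliability=0.85))
```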
Results
Post-treatment, there was a large significant within-group effect size for treated patients (g = 1.28) and a small significant effect size for controls (g = 0.30). At follow-up, large within-group effect sizes were found for both treated patients (g = 1.45) and controls (g = 0.90). Clinical significance analyses indicated that treated patients were significantly more likely than controls to recover following an intervention, but recovery rates were low; post-intervention, only 32% of treated patients and 3% of controls recovered; rising to 38% and 21% respectively at follow-up. Regardless of allocation, only approximately 20% of patients were asymptomatic at follow-up. Across the different analysis methods, individual cognitive therapy (CT) was the most effective intervention, followed by group CT plus exposure and response prevention. Self-help interventions were generally less effective.
Conclusion
Reliance on aggregated within-group effect sizes may lead to overestimation of the efficacy of psychological treatments for OCD. More research is needed to determine the most effective treatment type and format for patients with OCD.
ABSTRACT IMPACT: Within three EDs in a regional health system in Connecticut, African American race, male gender, non-Hispanic ethnicity, lack of private insurance, and homelessness were associated with significantly increased odds of being physically restrained during a visit. OBJECTIVES/GOALS: Agitated patient encounters in the Emergency Department (ED) are on the rise, and physical restraints are used to protect staff and prevent self-harm. However, restraints are associated with safety risks and potential stigmatization of vulnerable individuals. We aim to determine the factors associated with the odds of being restrained in the ED. METHODS/STUDY POPULATION: We conducted a retrospective cohort analysis of all patients (≥18 years old) placed in restraints during an ED visit to three hospitals within a large tertiary health system from January 2013 to August 2018. We undertook descriptive analysis of the data and created a generalized linear mixed model with a binary logistic link to model restraint use and determine odds ratios for clinically significant demographic factors, including gender, race, ethnicity, insurance status, alcohol use, illicit drug use, and homelessness. Our model accounted for patients nested across the three EDs and for multiple visits per patient. RESULTS/ANTICIPATED RESULTS: Of 726,417 total ED visits, 7,090 (1%) had associated restraint orders. Restrained patients had an average age of 45 years; 64% were male, 54% Caucasian, and 29% African American; 17% had private insurance, 36% endorsed illicit substance use, 51.4% endorsed alcohol use, and 2.3% were homeless. African Americans had significantly higher odds of being restrained than Caucasians (adjusted odds ratio (AOR) 1.14 [1.08, 1.21]). Females (AOR 0.75 [0.71, 0.79]) had lower odds of being restrained than males, while patients with Medicaid (AOR 1.57 [1.46, 1.68]) and Medicare (AOR 1.70 [1.57, 1.85]) had increased odds compared with the privately insured. Illicit substance use (AOR 1.55 [1.46, 1.64]), alcohol use (AOR 1.13 [1.07, 1.20]), and homelessness (AOR 1.35 [1.14, 1.16]) were also associated with increased odds of restraint use. DISCUSSION/SIGNIFICANCE OF FINDINGS: We showed statistically significant effects of patient demographics on the odds of restraint use in the ED. The increased odds associated with race, insurance status, and substance use highlight the potential effects of implicit bias on the decision to physically restrain patients and underscore the importance of objective assessments of these patients.
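As a simplified illustration of how adjusted odds ratios such as those above are obtained, the following Python sketch fits a plain logistic regression and exponentiates the coefficients. It deliberately omits the random effects for nesting across EDs and repeated visits that the authors' mixed model included, and the data frame and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)

# Hypothetical visit-level data; the actual analysis used a mixed model nested by ED
df = pd.DataFrame({
    "restrained": np.random.binomial(1, 0.01, 5000),  # 1 = restraint order placed
    "female": np.random.binomial(1, 0.5, 5000),
    "medicaid": np.random.binomial(1, 0.3, 5000),
    "illicit_substance_use": np.random.binomial(1, 0.2, 5000),
})

# Plain logistic regression (no random effects), for illustration only
model = smf.logit("restrained ~ female + medicaid + illicit_substance_use", data=df).fit()

# Adjusted odds ratios and 95% CIs come from exponentiating the coefficients
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("AOR"), conf_int], axis=1))
```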
Why people trust is a question that has preoccupied scholars across many disciplines. Historical explorations of trust abound, but we know relatively little about the workings of trust in the history of investment. Despite becoming increasingly mediated and institutionalized in the nineteenth century, the market for stocks and shares remained local and embedded in personal relations to a significant extent. This created a complex trust environment in which old and new forms of trust co-existed. Investors sought information from the press, but they also relied upon friends to help them navigate the market. Rather than studying trust in the aggregate, this article argues that focusing on the particular allows us to appreciate trust as an emotional and ultimately imaginative process depending as much on affective stories as rational calculation. To this end, it takes the case of a Bath clergyman and workhouse schools inspector, James Clutterbuck, who solicited investments from a wide network of friends and colleagues in the 1880s and 1890s. By capturing the complex interplay of friendship, emotions, and narrative in the formation of trust, the article offers a window onto everyday financial life in late Victorian provincial England.