To report feasibility, early outcomes and challenges of implementing a 14-day threshold for undertaking surgical tracheostomy in the critically ill coronavirus disease 2019 patient.
Twenty-eight coronavirus disease 2019 patients underwent tracheostomy. Demographics, risk factors, ventilatory assistance, organ support and logistics were assessed.
The mean time from intubation to tracheostomy formation was 17.0 days (standard deviation = 4.4, range 8–26 days). Mean time to decannulation was 15.8 days (standard deviation = 9.4) and mean time to intensive care unit stepdown to a ward was 19.2 days (standard deviation = 6.8). The time from intubation to tracheostomy was strongly positively correlated with: duration of mechanical ventilation (r(23) = 0.66; p < 0.001), time from intubation to decannulation (r(23) = 0.66; p < 0.001) and time from intubation to intensive care unit discharge (r(23) = 0.71; p < 0.001).
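The correlations reported above are standard Pearson product-moment coefficients (r(23) implies 25 paired observations). As a minimal illustrative sketch with hypothetical data, not the authors' analysis code, such a coefficient can be computed as follows:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms of the denominator
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: days-to-tracheostomy vs. days of mechanical ventilation
r = pearson_r([10, 14, 17, 21, 26], [18, 22, 30, 33, 41])
```

In practice an analysis like this would use `scipy.stats.pearsonr`, which also returns the p-value; the hand-rolled version above only shows the underlying arithmetic.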
Performing a tracheostomy in coronavirus disease 2019 positive patients at 8–14 days following intubation is compatible with favourable outcomes. Multidisciplinary team input is crucial to patient selection.
In the last three decades, early intervention for psychosis (EIP) services have been established worldwide and have resulted in superior symptomatic and functional outcomes for people affected by psychotic disorders. These improved outcomes are a result of reducing delays to treatment and the provision of specialised, holistic interventions. The COVID-19 pandemic poses significant challenges to the delivery of these services, such as undetected cases or long delays to treatment. Furthermore, the COVID-19 pandemic will likely increase the mental health needs of communities, including the incidence of psychotic disorders. In this perspective piece, we provide suggestions as to how EIP services can adapt within this environment, such as utilising novel technologies. Finally, we argue that despite the economic consequences of the pandemic, the funding for mental health services, including EIP services, should be increased in line with the need for these services during and beyond the pandemic.
Innovation Concept: Research training programs for students, especially in emergency medicine (EM), may be difficult to initiate due to a lack of protected time, resources, and mentors (Chang Y, Ramnanan CJ. Academic Medicine 2015). We developed a ten-week summer program for medical students aimed at cultivating research skills through mentorship, clinical enrichment, and immersion in EM research culture through shadowing and project support.
Methods: Five second-year Ontario medical students were recruited to participate in the Summer Training and Research in Emergency Medicine (STAR-EM) program at University Health Network, Toronto, from June to August 2019. Program design followed a review of existing summer research programs and of the literature regarding challenges to EM research (McRae, Perry, Brehaut et al. CJEM 2018). The program had broad emergency physician (EP) engagement, with five EP research project mentors and over ten EPs delivering academic sessions. Curriculum development was collaborative and iterative. All projects were approved by the hospital Research Ethics Board (REB).
Curriculum, Tool or Material: Each weekly academic morning comprised small group teaching (topics including research methodology, manuscript preparation, health equity, quality improvement, and wellness), followed by EP-led group progress review of each student's project. Each student spent one half day per week in the emergency department (ED), shadowing an EP and identifying patients for recruitment into ongoing mentor-initiated ED research projects. Remaining time was spent on independent student project work. Presentation to faculty and program evaluation occurred in week 10. Scholarly output included one abstract submitted for publication per student. Program evaluation by students reflected a uniform impression that course material and mentorship were excellent (100%, n = 5). All students expressed interest in pursuing academic EM as a career.
Faculty researchers rated the program as very effective (80%, n = 4) or somewhat effective (20%, n = 1) in enhancing productivity and scholarly output.
Conclusion: The STAR-EM program provides a transferable model for other academic departments seeking to foster the development of future clinician investigators and enhance ED research culture. Program challenges included delays in REB approval for student projects and engaging reluctant staff to participate in research.
Why patients with psychosis use cannabis remains debated. The self-medication hypothesis has received some support but other evidence points towards an alleviation of dysphoria model. This study investigated the reasons for cannabis use in first-episode psychosis (FEP) and whether strength in their endorsement changed over time.
FEP inpatients and outpatients at the South London and Maudsley, Oxleas and Sussex NHS Trusts UK, who used cannabis, rated their motives at baseline (n = 69), 3 months (n = 29) and 12 months (n = 36). A random intercept model was used to test the change in strength of endorsement over the 12 months. Paired-sample t-tests assessed the differences in mean scores between the five subscales on the Reasons for Use Scale (enhancement; social motive; coping with unpleasant affect; conformity and acceptance; and relief of positive symptoms and side effects), at each time-point.
Time had a significant effect on scores when controlling for reason; average scores on each subscale were higher at baseline than at 3 months and 12 months. At each time-point, patients endorsed ‘enhancement’ followed by ‘coping with unpleasant affect’ and ‘social motive’ more highly for their cannabis use than any other reason. ‘Conformity and acceptance’ followed closely. ‘Relief of positive symptoms and side effects’ was the least endorsed motive.
Patients endorsed their reasons for use at 3 months and 12 months less strongly than at baseline. Little support for the self-medication or alleviation of dysphoria models was found. Rather, patients rated ‘enhancement’ most highly for their cannabis use.
To measure caregivers’ and clinicians’ perception of false memories in the lives of patients with memory loss due to Alzheimer’s disease (AD) and mild cognitive impairment (MCI) using a novel false memories questionnaire. Our hypothesis was that, according to clinicians and family members, false memories occur as often as forgetting.
This prospective, questionnaire-based study consisting of 20 false memory questions paired with 20 forgetting questions had two forms: one for clinicians and the other for family members of older subjects. In total, 226 clinicians and 150 family members of 49 patients with AD, 44 patients with MCI, and 57 healthy older controls (OCs) completed the questionnaire.
False memories occurred nearly as often as forgetting according to clinicians and family members of patients with MCI and AD. Family members of OCs and patients with MCI reported fewer false memories compared to those of the AD group. As Mini-Mental State Examination scores decreased, mean scores increased for both forgetting and false memories. Among clinicians, the dementia severity of the patients seen correlated with both the forgetting and false memories questionnaire scores, as well as with the reported impact of forgetting and false memories on daily life.
Patients with AD experience false memories almost as frequently as they do forgetting. Given how common false memories are in AD patients, additional work is needed to understand the clinical implications of these false memories on patients’ daily lives. The novel false memories questionnaire developed may be a valuable tool.
A systematic review and network meta-analysis were conducted to assess the relative efficacy of internal or external teat sealants given at dry-off in dairy cattle. Controlled trials were eligible if they assessed the use of internal or external teat sealants, with or without concurrent antimicrobial therapy, compared to no treatment or an alternative treatment, and measured one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, IMI during the first 30 days in milk (DIM), or clinical mastitis during the first 30 DIM. Risk of bias was based on the Cochrane Risk of Bias 2.0 tool with modified signaling questions. From 2280 initially identified records, 32 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. Use of an internal teat sealant (bismuth subnitrate) significantly reduced the risk of new IMI at calving compared to non-treated controls (RR = 0.36, 95% CI 0.25–0.72). For comparisons between antimicrobial and teat sealant groups, concerns regarding precision were seen. Synthesis of the primary research identified important challenges related to the comparability of outcomes, replication and connection of interventions, and quality of reporting of study conduct.
A systematic review and meta-analysis were conducted to determine the efficacy of selective dry-cow antimicrobial therapy compared to blanket therapy (all quarters/all cows). Controlled trials were eligible if any of the following were assessed: incidence of clinical mastitis during the first 30 DIM, frequency of intramammary infection (IMI) at calving, or frequency of IMI during the first 30 DIM. From 3480 identified records, nine trials were data extracted for IMI at calving. There was an insufficient number of trials to conduct meta-analysis for the other outcomes. Risk of IMI at calving in selectively treated cows was higher than blanket therapy (RR = 1.34, 95% CI = 1.13, 1.16), but substantial heterogeneity was present (I2 = 58%). Subgroup analysis showed that, for trials using internal teat sealants, there was no difference in IMI risk at calving between groups, and no heterogeneity was present. For trials not using internal teat sealants, there was an increased risk in cows assigned to a selective dry-cow therapy protocol, compared to blanket treatment, with substantial heterogeneity in this subgroup. However, the small number of trials and heterogeneity in the subgroup without internal teat sealants suggests that the relative risk between treatments may differ from the determined point estimates based on other unmeasured factors.
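Risk ratios like those reported in these reviews come from standard 2×2 contingency arithmetic; a formal meta-analysis pools log-RRs across trials with inverse-variance weights, but the single-trial computation can be sketched as follows (hypothetical counts, not data from the review):

```python
from math import exp, log, sqrt

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio of treated vs. control groups, with a Wald-type 95% CI
    computed on the log scale (the usual form fed into meta-analytic pooling)."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Standard error of log(RR) for a 2x2 table
    se_log_rr = sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lower = exp(log(rr) - z * se_log_rr)
    upper = exp(log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical trial: 20/100 IMI at calving in selectively treated cows
# vs. 15/100 under blanket therapy
rr, lower, upper = risk_ratio_ci(20, 100, 15, 100)
```

By construction the point estimate always lies inside its own confidence interval, which is a useful sanity check when transcribing published estimates.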
A systematic review and network meta-analysis were conducted to assess the relative efficacy of antimicrobial therapy given to dairy cows at dry-off. Eligible studies were controlled trials assessing the use of antimicrobials compared to no treatment or an alternative treatment, and assessed one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, incidence of IMI during the first 30 days in milk (DIM), or incidence of clinical mastitis during the first 30 DIM. Databases and conference proceedings were searched for relevant articles. The potential for bias was assessed using the Cochrane Risk of Bias 2.0 algorithm. From 3480 initially identified records, 45 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. The use of cephalosporins, cloxacillin, or penicillin with aminoglycoside significantly reduced the risk of new IMI at calving compared to non-treated controls (cephalosporins, RR = 0.37, 95% CI 0.23–0.65; cloxacillin, RR = 0.55, 95% CI 0.38–0.79; penicillin with aminoglycoside, RR = 0.42, 95% CI 0.26–0.72). Synthesis revealed challenges with the comparability of outcomes, replication of interventions, definitions of outcomes, and quality of reporting. The use of reporting guidelines, replication among interventions, and standardization of outcome definitions would increase the utility of primary research in this area.
A systematic review and network meta-analysis were conducted to assess the relative efficacy of antimicrobial therapy for clinical mastitis in lactating dairy cattle. Controlled trials in lactating dairy cattle with natural disease exposure were eligible if they compared an antimicrobial treatment to a non-treated control, placebo, or a different antimicrobial, for the treatment of clinical mastitis, and assessed clinical or bacteriologic cure. Potential for bias was assessed using a modified Cochrane Risk of Bias 2.0 tool. From 14775 initially identified records, 54 trials were assessed as eligible. Networks were established for bacteriologic cure by bacterial species group, and clinical cure. Disparate networks among bacteriologic cures precluded meta-analysis. Network meta-analysis was conducted for trials assessing clinical cure, but lack of precision of point estimates resulted in wide credibility intervals for all treatments, with no definitive conclusions regarding relative efficacy. Consideration of network geometry can inform future research to increase the utility of current and previous work. Replication of intervention arms and consideration of connection to existing networks would improve the future ability to determine relative efficacy. Challenges in the evaluation of bias in primary research stemmed from a lack of reporting. Consideration of reporting guidelines would also improve the utility of future research.
Early in a foodborne disease outbreak investigation, illness incubation periods can help focus case interviews, case definitions, clinical and environmental evaluations and predict an aetiology. Data describing incubation periods are limited. We examined foodborne disease outbreaks from laboratory-confirmed, single aetiology, enteric bacterial and viral pathogens reported to United States foodborne disease outbreak surveillance during 1998–2013. We grouped pathogens by clinical presentation and analysed the reported median incubation period among all illnesses from the implicated pathogen for each outbreak as the outbreak incubation period. Outbreaks from preformed bacterial toxins (Staphylococcus aureus, Bacillus cereus and Clostridium perfringens) had the shortest outbreak incubation periods (4–10 h medians), distinct from that of Vibrio parahaemolyticus (17 h median). Norovirus, salmonella and shigella had longer but similar outbreak incubation periods (32–45 h medians); campylobacter and Shiga toxin-producing Escherichia coli had the longest among bacteria (62–87 h medians); hepatitis A had the longest overall (672 h median). Our results can help guide diagnostic and investigative strategies early in an outbreak investigation to suggest or rule out specific aetiologies or, when the pathogen is known, the likely timeframe for exposure. They also point to possible differences in pathogenesis among pathogens causing broadly similar syndromes.
The study aimed to assess the clinical feasibility of employing an automatic match during cone beam computed tomography (CBCT) imaging using prostatic calcifications within the 95% isodose set as the region of interest.
Materials and methods:
CBCT images from the 5th fraction were analysed in 34 patients, evaluating the difference between standard manual soft tissue anatomy matching and automatic calcification matching. The clinical feasibility of using prostatic calcifications during matching was assessed, alongside the effect of a more automated matching process on interobserver variability.
The standard deviation values of the difference between the soft tissue match (baseline) versus automatic calcification matches fluctuated around 1 mm in all three axes for all of the matches carried out. The interobserver variability observed between the two radiographers was 0·055, 0·065 and 0·045 cm in the vertical, longitudinal and lateral axes, respectively.
The clarity of the calcifications on the CBCT images might explain the low interobserver variability displayed by the two matching radiographers. A calcification provides a clear starting point for image matching before commencing a check of volumetric coverage; if the matching process begins in the same place, it allows for a standardisation of matching technique between radiographers.
To examine factors that influence decision-making, preferences, and plans related to advance care planning (ACP) and end-of-life care among persons with dementia and their caregivers, and examine how these may differ by race.
13 geographically dispersed Alzheimer’s Disease Centers across the United States.
431 racially diverse caregivers of persons with dementia.
Survey on “Care Planning for Individuals with Dementia.”
The respondents were knowledgeable about dementia and hospice care, indicated the person with dementia would want comfort care at the end stage of illness, and reported high levels of both legal ACP (e.g., living will; 87%) and informal ACP discussions (79%) for the person with dementia. However, notable racial differences were present. Relative to white persons with dementia, African American persons with dementia were reported to have a lower preference for comfort care (81% vs. 58%) and lower rates of completion of legal ACP (89% vs. 73%). Racial differences in ACP and care preferences were also reflected in geographic differences. Additionally, African American study partners had a lower level of knowledge about dementia and reported a greater influence of religious/spiritual beliefs on the desired types of medical treatments. Notably, all respondents indicated that more information about the stages of dementia and end-of-life health care options would be helpful.
Educational programs may be useful in reducing racial differences in attitudes towards ACP. These programs could focus on the clinical course of dementia and issues related to end-of-life care, including the importance of ACP.
There is insufficient research on medical care at mass-gathering events (MGEs) on college and university campuses. Fun Day is an annual celebratory day held at Skidmore College (Saratoga Springs, New York USA), a small liberal arts college in the Northeastern United States. Fun Day is focused around an outdoor music festival; students also congregate and celebrate throughout the surrounding campus. To improve care and alleviate strain on local resources, a model was developed for the provision of emergency care by a collegiate-based, volunteer first-response service – Skidmore College Emergency Medical Services (EMS) – in coordination with a contracted, private ambulance service.
The aims of this study were to: (1) analyze medical usage rates and case mixes at Fun Day over a four-year period, and to (2) describe the collegiate-based first response model for MGEs.
Data were collected retrospectively from event staff, college administrators, and Skidmore College EMS on event-related variables, patient encounters, and medical operations at Fun Day over a four-year period (2014–2017).
Annual attendance at the music festival was estimated at 2,000 individuals. Over four years, 54 patients received emergency medical care on campus on Fun Day, and 18 (33.3%) were transported to the emergency department. On-site contracted ambulances transported 77.8% of patients who were transported to the emergency department; mutual aid was requested for the other 22.2% of transports. The mean patient presentation rate (PPR) was 7.0 (SD = 1.0) per 1,000 attendees. The mean transport-to-hospital rate (TTHR) was 2.0 (SD = 1.0) per 1,000 attendees. Thirty (55.6%) patients presented with intoxication, seven (13.0%) with laceration(s), and five (9.3%) with head trauma as the primary concern. Medical command was established by volunteer undergraduate students. Up to 16 volunteer student first responders (including emergency medical technicians [EMTs]) were stationed on campus, in addition to two contracted ambulances at the Basic Life Support (BLS) and Advanced Life Support (ALS) levels. Operational strategies included: mobile first response crews, redundant communication systems, preventative education, and harm reduction.
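The PPR and TTHR figures above are simple per-capita rates. As a minimal sketch of the arithmetic (assuming counts pooled across the four events, which is an assumption, not the authors' stated method):

```python
def rate_per_1000(events, attendance):
    """Events per 1,000 attendees (e.g. PPR or TTHR at a mass gathering)."""
    return 1000 * events / attendance

# 54 patients across four events of ~2,000 attendees each gives roughly
# 6.75 per 1,000, in line with the reported mean PPR of 7.0 (SD = 1.0);
# 18 transports gives 2.25 per 1,000, in line with the mean TTHR of 2.0.
ppr = rate_per_1000(54, 4 * 2000)
tthr = rate_per_1000(18, 4 * 2000)
```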
High medical usage rates were observed, primarily due to alcohol/illicit substance use and traumatic injuries. The provision of emergency care by a collegiate-based first response service in coordination with a contracted, private ambulance agency serves as an innovative model for mass-gathering medical care on college and university campuses.
Friedman NMG, O’Connor EK, Munro T, Goroff D. Mass-Gathering Medical Care Provided by a Collegiate-Based First Response Service at an Annual College Music Festival and Campus-Wide Celebration. Prehosp Disaster Med. 2019;34(1):98–103.
The role that vitamin D plays in pulmonary function remains uncertain. Epidemiological studies reported mixed findings for serum 25-hydroxyvitamin D (25(OH)D)–pulmonary function association. We conducted the largest cross-sectional meta-analysis of the 25(OH)D–pulmonary function association to date, based on nine European ancestry (EA) cohorts (n 22 838) and five African ancestry (AA) cohorts (n 4290) in the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium. Data were analysed using linear models by cohort and ancestry. Effect modification by smoking status (current/former/never) was tested. Results were combined using fixed-effects meta-analysis. Mean serum 25(OH)D was 68 (sd 29) nmol/l for EA and 49 (sd 21) nmol/l for AA. For each 1 nmol/l higher 25(OH)D, forced expiratory volume in the 1st second (FEV1) was higher by 1·1 ml in EA (95 % CI 0·9, 1·3; P<0·0001) and 1·8 ml (95 % CI 1·1, 2·5; P<0·0001) in AA (P for race difference = 0·06), and forced vital capacity (FVC) was higher by 1·3 ml in EA (95 % CI 1·0, 1·6; P<0·0001) and 1·5 ml (95 % CI 0·8, 2·3; P=0·0001) in AA (P for race difference = 0·56). Among EA, the 25(OH)D–FVC association was stronger in smokers: per 1 nmol/l higher 25(OH)D, FVC was higher by 1·7 ml (95 % CI 1·1, 2·3) for current smokers and 1·7 ml (95 % CI 1·2, 2·1) for former smokers, compared with 0·8 ml (95 % CI 0·4, 1·2) for never smokers. In summary, the 25(OH)D associations with FEV1 and FVC were positive in both ancestries. In EA, a stronger association was observed for smokers compared with never smokers, which supports the importance of vitamin D in vulnerable populations.