Objective: Nosocomial transmission of influenza is a major concern for infection control. We aimed to dissect the transmission dynamics of influenza, including asymptomatic transmission events, in acute care.
Design: Prospective surveillance study during 2 influenza seasons.
Participants: Volunteer sample of inpatients on medical wards and healthcare workers (HCWs).
Methods: Participants provided daily illness diaries and nasal swabs for influenza A and B detection and whole-genome sequencing for phylogenetic analyses. Contacts between study participants were tracked. Secondary influenza attack rates were calculated based on spatial and temporal proximity and phylogenetic evidence for transmission.
Results: In total, 152 HCWs and 542 inpatients were included; 16 HCWs (10.5%) and 19 inpatients (3.5%) tested positive for influenza on 109 study days. Study participants had symptoms of disease on most of the days they tested positive for influenza (83.1% and 91.9% for HCWs and inpatients, respectively). Also, 11 (15.5%) of 71 influenza-positive swabs among HCWs and 3 (7.9%) of 38 influenza-positive swabs among inpatients were collected on days without symptoms; 2 (12.5%) of 16 HCWs and 2 (10.5%) of 19 inpatients remained fully asymptomatic. The secondary attack rate was low: we recorded 1 transmission event over 159 contact days (0.6%), originating from a symptomatic case. No transmission event occurred in 61 monitored days of contacts with asymptomatic influenza-positive individuals.
Conclusions: Influenza in acute care is common, and individuals regularly shed influenza virus without exhibiting symptoms. Nevertheless, both symptomatic and asymptomatic transmission events proved rare. Healthcare-associated influenza prevention strategies based on preseason vaccination and barrier precautions for symptomatic individuals therefore appear effective.
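The attack-rate figures above are simple proportions of transmission events over monitored contact days; a minimal sketch of the arithmetic (the function name is illustrative, not from the study):

```python
def attack_rate(events: int, contact_days: int) -> float:
    """Secondary attack rate: transmission events per monitored contact day."""
    return events / contact_days

# Figures from the abstract: 1 event over 159 symptomatic contact days,
# 0 events over 61 asymptomatic contact days.
symptomatic = attack_rate(1, 159)
asymptomatic = attack_rate(0, 61)

print(f"symptomatic:  {symptomatic:.1%}")   # 0.6%, as reported
print(f"asymptomatic: {asymptomatic:.1%}")  # 0.0%
```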
Objective: To assess influenza symptoms, adherence to mask use recommendations, absenteeism, and presenteeism among acute care healthcare workers (HCWs) during influenza epidemics.
Methods: The TransFLUas influenza transmission study in acute healthcare prospectively followed HCWs over 2 consecutive influenza seasons. Symptom diaries asking about respiratory symptoms and adherence to mask use recommendations were recorded on a daily basis, and study participants provided midturbinate nasal swabs for influenza testing.
Results: In total, 152 HCWs (65.8% nurses and 13.2% physicians) were included: 89.1% of study participants reported at least 1 influenza symptom during their study season, and 77.8% suffered from respiratory symptoms. Also, 28.3% of HCWs missed at least 1 working day during the study period: 82.6% of these days were missed because of symptoms of influenza illness. Of all participating HCWs, 67.9% worked with symptoms of influenza infection on 8.8% of study days. On 0.3% of study days, symptomatic HCWs were shedding influenza virus while at work. Among HCWs with respiratory symptoms, 74.1% adhered to the policy of wearing a mask at work, on 59.1% of days with respiratory symptoms.
Conclusions: Respiratory disease is frequent among HCWs and imposes a significant economic burden on hospitals through the number of working days lost. Presenteeism with respiratory illness, including influenza, is also frequent and poses a risk to patients and staff.
This chapter comprises the following sections: names, taxonomy, subspecies and distribution, descriptive notes, habitat, movements and home range, activity patterns, feeding ecology, reproduction and growth, behavior, parasites and diseases, status in the wild, and status in captivity.
Since the beginning of 2020, the coronavirus disease (COVID-19) pandemic has dramatically influenced almost every aspect of human life. Activities requiring human gatherings have been postponed, canceled, or held entirely virtually. To compensate for the lack of in-person contact, people have increasingly turned to virtual settings online, whose advantages include increased inclusivity and accessibility and a reduced carbon footprint. However, emerging online technologies cannot fully replace in-person scientific events. In-person meetings are not susceptible to poor Internet connectivity problems, and they provide novel opportunities for socialization, creating new collaborations, and sharing ideas. To continue such activities, a hybrid model for scientific events could be a solution offering both in-person and virtual components. While participants can freely choose their mode of participation, virtual attendance would most benefit those who cannot attend in person due to such limitations. In-person portions of meetings should be organized with full consideration of prevention and safety strategies, including risk assessment and mitigation, venue and environmental sanitation, participant protection and disease prevention, and promotion of the hybrid model. This new way of interaction between scholars can be considered part of a resilience system that was previously neglected and should become part of routine practice in the scientific community.
Background: Contamination of surfaces within patient rooms and on shared equipment is a major driver of healthcare-acquired infections (HAIs). The emergence in the New York City metropolitan area of Candida auris, a multidrug-resistant fungus with extended environmental viability, has made a standardized assessment of cleaning protocols even more urgent for our multihospital academic health system. We therefore sought to create an environmental surveillance protocol to detect C. auris and to assess patient room contamination after discharge cleaning with different chemicals and methods, including touch-free application using an electrostatic sprayer. Surfaces disinfected using touch-free methods may not appear disinfected when assessed by fluorescent tracer dye or ATP bioluminescent assay. Methods: We focused on surfaces within the patient zone that are touched by the patient or healthcare personnel prior to contact with the patient. Our protocol sampled the over-bed table, call button, oxygen meter, privacy curtain, and bed frame using nylon-flocked swabs dipped in nonbacteriostatic sterile saline. We swabbed a 36-cm2 surface area at each sample location shortly after the room was disinfected, immediately inoculated the swab onto a blood agar 5% TSA plate, and then incubated the plate for 24 hours at 36°C. Contamination with common environmental bacteria was calculated as CFU per plate divided by the swabbed surface area, and a cutoff of 2.5 CFU/cm2 was used to determine whether a surface passed inspection. Limited data exist on acceptable microbial limits for healthcare settings, but the aforementioned cutoff has been used in food preparation. Results: Over a year-long period, terminal cleaning had an overall failure rate of 6.5% for 413 surfaces swabbed.
We used the protocol to compare the normal application of either peracetic acid/hydrogen peroxide or bleach using microfiber cloths to a new method using sodium dichloroisocyanurate (NaDCC) applied with microfiber cloths and electrostatic sprayers. The normal protocol had a fail rate of 9%, and NaDCC had a failure rate of 2.5%. The oxygen meter had the highest normal method failure rate (18.2%), whereas the curtain had the highest NaDCC method failure rate (11%). In addition, we swabbed 7 rooms previously occupied by C. auris–colonized patients for C. auris contamination of environmental surfaces, including the mobile medical equipment of the 4 patient care units that contained these rooms. We did not find any C. auris, and we continue data collection. Conclusions: A systematic environmental surveillance system is critical for healthcare systems to assess touch-free disinfection and identify MDRO contamination of surfaces.
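The pass/fail criterion described in Methods reduces to CFU per plate divided by swabbed area, compared against the 2.5 CFU/cm2 cutoff. A minimal sketch of that check (function and constant names are illustrative, not part of the protocol):

```python
SWABBED_AREA_CM2 = 36.0      # surface area sampled per location, per the protocol
CUTOFF_CFU_PER_CM2 = 2.5     # threshold borrowed from food-preparation standards

def passes_inspection(cfu_per_plate: float, area_cm2: float = SWABBED_AREA_CM2) -> bool:
    """A surface passes if its CFU density stays at or below the cutoff."""
    return (cfu_per_plate / area_cm2) <= CUTOFF_CFU_PER_CM2

print(passes_inspection(50))   # 50 CFU / 36 cm2 ~ 1.4 CFU/cm2 -> True
print(passes_inspection(120))  # 120 CFU / 36 cm2 ~ 3.3 CFU/cm2 -> False
```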
Glacial and lacustrine sediments from the Mongolian Altai provide paleoclimatic information for the late Pleistocene in Mongolia, for which only a few sufficiently studied archives exist. Glacial stages referred to global cooling events are reported for the last glacial maximum (27–21 ka) and the late glacial period (18–16 ka). Sedimentary archives from the first part of the last glacial period are infrequent. We present proxy data for this period from two different archives (88–63 and 57–30 ka). Due to the limitation of effective moisture, an increase of precipitation is discussed as one trigger for glacier development in the cold-arid regions of central Asia. Our pollen analysis from periods of high paleolake levels in small catchments indicate that the vegetation was sparse and of dry desert type between 42–29 and 17–11 ka. This apparent contradiction between high lake levels and dry landscape conditions, the latter supported by intensified eolian processes, points to lower temperatures and cooler conditions causing reduced evaporation to be the main trigger for the high lake levels during glacier advances. Rising temperatures that cause melting of glacier and permafrost ice and geomorphological processes play a role in paleolake conditions. Interpreting lake-level changes as regional or global paleoclimate signals requires detailed investigation of geomorphological settings and mountain–basin relationships.
The Focused Assessment with Sonography in Trauma (FAST) exam is a rapid ultrasound test to identify evidence of hemorrhage within the abdomen. Few studies examine the accuracy of paramedic-performed FAST examinations, and the appropriate duration of an ultrasound training program remains controversial. This study's purpose was to assess the accuracy of paramedic FAST exam interpretation following a one-hour didactic training session.
Following a one-hour didactic training course, the interpretation of paramedic-performed FAST exams was compared with the interpretation of physician-performed FAST examinations on a mannequin model containing 300 mL of free fluid. Results were compared using the chi-square test. Differences in accuracy rate were deemed significant if p < 0.05.
Fourteen critical care flight paramedics and four emergency physicians were voluntarily recruited. The critical care paramedics were mostly ultrasound-naive, whereas the emergency physicians all had ultrasound training. The correct interpretation of FAST scans was comparable between the two groups, with accuracies of 85.6% and 87.5% (Δ1.79; 95% CI, −33.85 to 21.82; p = 0.90) for paramedics and emergency physicians, respectively.
This study found that, following a one-hour training course, critical care paramedics were able to use ultrasound to detect free fluid on a simulated mannequin model and to interpret the FAST exam with accuracy similar to that of experienced emergency physicians. This suggests the potential for prehospital ultrasound to aid in triage and transport decisions for trauma patients while limiting the financial and logistical burden of ultrasound training.
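The accuracy comparison above was made with a chi-square test; the closely related two-proportion z-test can be sketched in a few lines of standard-library Python. The counts below are hypothetical, chosen only to roughly match the reported accuracies, and are not the study's raw data:

```python
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test (normal approximation); returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: paramedics 120/140 correct (~85.7%),
# physicians 35/40 correct (87.5%).
z, p = two_proportion_z(120, 140, 35, 40)
print(f"z = {z:.2f}, p = {p:.2f}")  # difference far from significance at p < 0.05
```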
Point-of-care ultrasound (POCUS) is an essential tool for physicians to guide treatment decisions in both hospital and prehospital settings. Despite the potential patient care and system utilization benefits of prehospital ultrasound, the financial burden of a “hands-on” training program for large numbers of paramedics remains a barrier to implementation. We conducted a prospective, observational, double-blinded study comparing paramedics with emergency physicians in their ability to generate usable abdominal ultrasound images after a 1-hour didactic training session.
Canadian aeromedical critical care paramedics were compared with emergency medicine physicians in their ability to generate adequate abdominal ultrasound images on five healthy volunteers. The quality of each scan was evaluated, using a 5-point Likert scale and the standardized QUICk Focused Assessment with Sonography in Trauma (FAST) assessment tool, by a trained expert in POCUS who was blinded to the identity of the participant.
Fourteen critical care paramedics and four emergency department (ED) physicians were voluntarily recruited. Of the paramedics, 57% had never used ultrasound before, 36% had used ultrasound without formal training, and 7% had previous training. Physicians had a higher proportion of usable scans compared with paramedics (100% v. 61.4%; Δ38.6%; 95% confidence interval, 19.3–50.28).
Paramedics were not able to produce images of interpretable quality at the same frequency as emergency medicine physicians. However, a 61.4% usable image rate following a short 1-hour didactic training session is promising for future studies, which could incorporate a brief hands-on tutorial while remaining cost-effective.
Little is known about how the Royal College of Emergency Medicine (RCEM) residency programs select their residents, which creates uncertainty regarding the alignment between current selection processes and known best practices. We seek to describe the current selection processes of Canadian RCEM programs.
An online survey was distributed to all RCEM program directors and assistant directors. The survey instrument included 22 questions and sought both qualitative and quantitative data from the following six domains: application file, letters of reference, elective selection, interview, rank order, and selection process evaluation.
We received responses from 13 of 14 programs for an aggregate response rate of 92.9%. A candidate's letters of reference were identified as the most important criterion from the paper application (38.5%). Having a high level of familiarity with the applicant was the most important characteristic of a reference letter author (46.2%). In determining rank order, 53.8% of programs weighed the interview more heavily than the paper application. Once final candidate scores are established following the interview stage, all program respondents indicated that further adjustment is made to the final rank order list. Only 1 of 13 program respondents reported ever having completed a formal evaluation of their selection process.
We have identified elements of the selection process that will inform recommendations for programs, students, and referees. We encourage programs to conduct regular reviews of their selection processes to ensure alignment with best practices.
The Fontan Outcomes Network was created to improve outcomes for children and adults with single ventricle CHD living with Fontan circulation. The network mission is to optimise longevity and quality of life by improving physical health, neurodevelopmental outcomes, resilience, and emotional health for these individuals and their families. This manuscript describes the systematic design of this new learning health network, including the initial steps in development of a national, lifespan registry, and pilot testing of data collection forms at 10 congenital heart centres.
Brain imaging studies have shown altered amygdala activity during emotion processing in children and adolescents with oppositional defiant disorder (ODD) and conduct disorder (CD) compared to typically developing children and adolescents (TD). Here we aimed to assess whether aggression-related subtypes (reactive and proactive aggression) and callous-unemotional (CU) traits predicted variation in amygdala activity and skin conductance (SC) response during emotion processing.
We included 177 participants (n = 108 cases with disruptive behaviour and/or ODD/CD and n = 69 TD), aged 8–18 years, across nine sites in Europe, as part of the EU Aggressotype and MATRICS projects. All participants performed an emotional face-matching functional magnetic resonance imaging task.
Differences between cases and TD in affective processing, as well as the specificity of activation patterns for aggression subtypes and CU traits, were assessed. Simultaneous SC recordings were acquired in a subsample (n = 63). Compared with TD, cases showed higher amygdala activity in response to negative faces (fearful and angry) v. shapes. Subtyping cases according to aggression-related subtypes did not significantly influence amygdala activity, whereas stratification based on CU traits was more sensitive and revealed decreased amygdala activity in the high-CU group. SC responses were significantly lower in cases and negatively correlated with CU traits and with reactive and proactive aggression.
Our results showed differences in amygdala activity and SC responses to emotional faces between cases with ODD/CD and TD, with CU traits moderating both central (amygdala) and peripheral (SC) responses. These insights regarding subtypes and trait-specific aggression could be used for improved diagnostics and personalized treatment.
In a modified case–control association study we tested the assumption that two polymorphisms (A118G in exon 1 and IVS2+31 in intron 2) of the human μ-opioid receptor gene (OPRM1) confer susceptibility to opioid dependence.
In contrast to classical case–control studies, both groups, opioid-dependent cases and non-opioid-dependent controls, were recruited from individuals who had had access to drugs, including opioids, and who had been sentenced for violation of the “Dangerous Drugs Act” in Germany.
For the two allelic variants of OPRM1 under study, we did not find evidence of association with opioid dependence.
Despite the absence of association, we consider the recruitment approach introduced here useful, since it putatively offers more adequate matching for case–control association studies of opioid-dependent individuals.
Objectives. – Studies on the relation between local cerebral activation and retrieval success usually compared high and low performance conditions, and thus showed performance-related activation of different brain areas. Only a few studies directly compared signal intensities of different response categories during retrieval. During verbal recognition, we recently observed increased parieto-occipital activation related to false alarms. The present study intends to replicate and extend this observation by investigating common and differential activation by veridical and false recognition.
Methods. – Fifteen healthy volunteers performed a verbal recognition paradigm using 160 learned target and 160 new distracter words. The subjects had to indicate whether they had learned the word before or not. Echo-planar MRI of blood-oxygen-level-dependent signal changes was performed during this recognition task. Words were classified post hoc according to the subjects’ responses, i.e. hits, false alarms, correct rejections and misses. Response-related fMRI-analysis was used to compare activation associated with the subjects’ recognition success, i.e. signal intensities related to the presentation of words were compared by the above-mentioned four response types.
Results. – During recognition, all word categories showed increased bilateral activation of the inferior frontal gyrus, the inferior temporal gyrus, the occipital lobe and the brainstem in comparison with the control condition. Hits and false alarms activated several areas including the left medial and lateral parieto-occipital cortex in comparison with subjectively unknown items, i.e. correct rejections and misses. Hits showed more pronounced activation in the medial, false alarms in the lateral parts of the left parieto-occipital cortex.
Conclusions. – Veridical and false recognition show common as well as different areas of cerebral activation in the left parieto-occipital lobe: increased activation of the medial parietal cortex by hits may correspond to true recognition, increased activation of the parieto-occipital cortex by false alarms may correspond to familiarity decisions. Further studies are needed to investigate the reasons for false decisions in healthy subjects and patients with memory problems.
We estimate the ecosystem service value of water supplied by the San Bernardino National Forest in Southern California under climate change projections through the 21st century. We couple water flow projections from a dynamic vegetation model with an economic demand model for residential water originating from the San Bernardino National Forest. Application of the method demonstrates how estimates of consumer welfare changes due to variation in water supply from public lands in Southern California can inform policy and land management decisions. Results suggest variations in welfare changes over time due to alterations in the projected water supply surpluses, shifting demand limited by water supply shortages or surpluses, and price increases. Results are sensitive to future climate projections—in some cases large decreases in welfare due to supply shortages—and to assumptions about the demand model.
Objective: To evaluate the National Health Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Design: Retrospective cohort study.
Setting: Eight tertiary-care referral general hospitals in California.
Methods: We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and the ICU bed adjustment.
Results: For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second-highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
Conclusions: For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease upon removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
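The SIR underlying these comparisons is simply observed events divided by the risk-adjusted predicted count, so the direction of the reported changes can be illustrated directly. The numbers below are invented for illustration and are not the study hospitals' data:

```python
def sir(observed: int, predicted: float) -> float:
    """Standardized infection ratio: observed events over risk-adjusted prediction."""
    return observed / predicted

# Illustrative hospital: removing ICU data drops observed events by roughly 25%,
# but the predicted denominator (which loses the ICU bed adjustment) shrinks
# faster, so the SIR rises -- the pattern reported in Results.
before = sir(observed=62, predicted=50.0)  # 1.24, the median unmodified SIR
after = sir(observed=46, predicted=18.0)   # higher despite fewer observed events
print(before, after)
```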
Several studies have reported diminished learning from non-social outcomes in depressed individuals. However, it is not clear how depression impacts learning from social feedback. Notably, mood disorders are commonly associated with deficits in social functioning, which raises the possibility that potential impairments in social learning may negatively affect real-life social experiences in depressed subjects.
Ninety-two participants with high (HD; N = 40) and low (LD; N = 52) depression scores were recruited. Subjects performed a learning task during which they received monetary outcomes or social feedback that they were told came from other people. Additionally, participants answered questions about their everyday social experiences. Computational models were fitted to the data, and model parameters were related to social experience measures.
HD subjects reported a reduced quality and quantity of social experiences compared to LD controls, including an increase in the amount of time spent in negative social situations. Moreover, HD participants showed lower learning rates than LD subjects in the social condition of the task. Interestingly, across all participants, reduced social learning rates predicted higher amounts of time spent in negative social situations, even when depression scores were controlled for.
These findings indicate that deficits in social learning may affect the quality of everyday social experiences. Specifically, the impaired ability to use social feedback to appropriately update future actions, which was observed in HD subjects, may lead to suboptimal interpersonal behavior in real life. This, in turn, may evoke negative feedback from others, thus bringing about more unpleasant social encounters.
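The abstract does not specify the computational model, but learning rates of this kind are typically estimated under a delta-rule (Rescorla-Wagner) update, in which a lower learning rate makes feedback shift expectations more slowly. A minimal sketch with illustrative parameter values:

```python
def delta_rule(feedback, alpha, v0=0.5):
    """Track expected value under the Rescorla-Wagner update V <- V + alpha*(r - V)."""
    v = v0
    history = []
    for r in feedback:
        v += alpha * (r - v)
        history.append(v)
    return history

positive = [1, 1, 1, 1, 1]              # repeated positive social feedback
fast = delta_rule(positive, alpha=0.5)  # LD-like: expectations adapt quickly
slow = delta_rule(positive, alpha=0.1)  # HD-like: lower social learning rate
print(fast[-1], slow[-1])               # the fast learner ends far closer to 1.0
```

Under this reading, the lower social learning rates observed in HD participants mean that the same stream of feedback updates their expectations about others less, which is one plausible route to the suboptimal interpersonal behavior discussed above.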
Anaesthesia is possibly the most pharmacology-oriented of all clinical medical specialties. What we do every day is, effectively, applied pharmacology. Yet with all this practical experience, some aspects, particularly pharmacokinetics (PK), can appear dauntingly complex. Indeed, mathematical modelling and the development of target-controlled infusions are difficult, as is the design of modern vaporisers for inhalational drugs, but, by analogy, we don’t need to be able to design a car in order to know how to drive it. However, there are certain basic PK features that will enhance your understanding and improve your ability to use these drugs appropriately.
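As one example of such a basic PK feature, drug concentration after an intravenous bolus in a simple one-compartment model declines exponentially, C(t) = (dose/V) * exp(-k*t), with elimination half-life ln(2)/k. A sketch with purely illustrative parameters, not those of any specific drug:

```python
from math import exp, log

def concentration(t_h: float, dose_mg: float = 100.0,
                  v_litres: float = 40.0, k_per_h: float = 0.35) -> float:
    """Plasma concentration after an IV bolus in a one-compartment model."""
    return (dose_mg / v_litres) * exp(-k_per_h * t_h)

half_life = log(2) / 0.35           # ~2 h for these illustrative parameters
c0 = concentration(0)               # initial concentration: dose / volume = 2.5 mg/L
c_half = concentration(half_life)   # exactly half the initial concentration
print(round(half_life, 2), c0, round(c_half, 3))
```

This is the "driving, not designing" level of understanding the paragraph argues for: one constant (the half-life) tells you how quickly plasma levels fall, without needing the full multi-compartment models behind target-controlled infusions.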