We investigated a large multistate outbreak that occurred in the United States in 2015–2016. Epidemiologic, laboratory, and traceback studies were conducted to determine the source of the infections. We identified 907 case-patients from 40 states with illness onset dates ranging from July 3, 2015 to March 2, 2016. Sixty-three percent of case-patients reported consuming cucumbers in the week before illness onset. Ten illness sub-clusters linked to events or purchase locations were identified. All sub-clusters investigated received cucumbers from a single distributor, which sourced them from a single grower in Mexico. Seventy-five cucumber samples were collected, 19 of which yielded the outbreak strain. Whole genome sequencing performed on 154 clinical isolates and 19 cucumber samples indicated that the sequenced isolates were closely related genetically to one another. This was the largest US foodborne disease outbreak in the last ten years and the third largest in the past 20 years. It was at least the fifth multistate outbreak caused by contaminated cucumbers since 2010. The outbreak is noteworthy because a recall was issued only 17 days after the outbreak was identified, which allowed for the removal of the contaminated cucumbers still available in commerce, unlike in previous cucumber-associated outbreaks. The rapid identification of and response to the outbreak by multiple public health agencies prevented it from becoming even larger.
A nationwide survey indicated that screening for asymptomatic carriers of C. difficile is an uncommon practice in US healthcare settings. A better understanding of the role of asymptomatic carriage in C. difficile transmission, and of the measures available to reduce that risk, is needed to inform best practices regarding the management of carriers.
The majority of paediatric Clostridioides difficile infections (CDI) are community-associated (CA), but few data exist regarding associated risk factors. We conducted a case–control study to evaluate CA-CDI risk factors in young children. Participants were enrolled from eight US sites during October 2014–February 2016. Case-patients were defined as children aged 1–5 years with a positive C. difficile specimen collected as an outpatient or ⩽3 days of hospital admission, who had no healthcare facility admission in the prior 12 weeks and no history of CDI. Each case-patient was matched to one control. Caregivers were interviewed regarding relevant exposures. Multivariable conditional logistic regression was performed. Of 68 pairs, 44.1% were female. More case-patients than controls had a comorbidity (33.3% vs. 12.1%; P = 0.01); recent higher-risk outpatient exposures (34.9% vs. 17.7%; P = 0.03); recent antibiotic use (54.4% vs. 19.4%; P < 0.0001); or recent exposure to a household member with diarrhoea (41.3% vs. 21.5%; P = 0.04). In multivariable analysis, antibiotic exposure in the preceding 12 weeks was significantly associated with CA-CDI (adjusted matched odds ratio, 6.25; 95% CI 2.18–17.96). Improved antibiotic prescribing might reduce CA-CDI in this population. Further evaluation of the potential role of outpatient healthcare and household exposures in C. difficile transmission is needed.
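For 1:1 matched data with a single binary exposure, the conditional logistic regression used above reduces to the classic matched-pair odds ratio computed from exposure-discordant pairs. A minimal sketch with hypothetical pair counts (illustrative only, not the study's data):

```python
import math

# Hypothetical counts of exposure-discordant matched pairs (illustrative only):
case_exposed_only = 30     # case exposed to antibiotics, matched control not
control_exposed_only = 6   # control exposed, matched case not

# For 1:1 matched data with one binary exposure, the conditional
# maximum-likelihood odds ratio is the ratio of discordant-pair counts.
matched_or = case_exposed_only / control_exposed_only

# Approximate 95% CI on the log scale.
se_log_or = math.sqrt(1 / case_exposed_only + 1 / control_exposed_only)
lo = math.exp(math.log(matched_or) - 1.96 * se_log_or)
hi = math.exp(math.log(matched_or) + 1.96 * se_log_or)
print(f"matched OR = {matched_or:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# -> matched OR = 5.00 (95% CI 2.08-12.01)
```

Only discordant pairs contribute information in a matched analysis, which is why the multivariable model in the study adjusts for additional covariates rather than relying on this simple ratio alone.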
Although disaster-related posttraumatic stress symptoms (PTSS) typically decrease in intensity over time, some youth continue to report elevated levels of PTSS many years after the disaster. The current study examines two processes that may help to explain the link between disaster exposure and enduring PTSS: caregiver emotion socialization and youth recollection qualities. One hundred and twenty-two youth (ages 12 to 17) and their female caregivers who experienced an EF-4 tornado co-reminisced about the event, and adolescents provided independent recollections between 3 and 4 years after the tornado. Adolescent individual transcripts were coded for coherence and negative personal impact, qualities that have been found to contribute to meaning making. Parent–adolescent conversations were coded for caregiver egocentrism, a construct derived from the emotion socialization literature to reflect the extent to which the caregiver centered the conversation on her own emotions and experiences. Egocentrism predicted higher youth PTSS, and this association was mediated by the coherence of adolescents’ narratives. The association between coherence and PTSS was stronger for youth who focused more on the negative personal impacts of the tornado event during their recollections. Results suggest that enduring tornado-related PTSS may be influenced in part by the interplay of caregiver emotion socialization practices and youth recollection qualities.
Due to differences in the circulation of influenza viruses, distribution and antigenic drift of A subtypes and B lineages, and susceptibility to infection in the population, the incidence of symptomatic influenza infection can vary widely between seasons and age-groups. Our goal was to estimate the symptomatic infection incidence in the Netherlands for the six seasons 2011/2012 through 2016/2017, using Bayesian evidence synthesis methodology to combine season-specific sentinel surveillance data on influenza-like illness (ILI), virus detections in sampled ILI cases and data on healthcare-seeking behaviour. Estimated age-aggregated incidence was 6.5 per 1000 persons (95% uncertainty interval (UI): 4.7–9.0) for season 2011/2012, 36.7 (95% UI: 31.2–42.8) for 2012/2013, 9.1 (95% UI: 6.3–12.9) for 2013/2014, 41.1 (95% UI: 35.0–47.7) for 2014/2015, 39.4 (95% UI: 33.4–46.1) for 2015/2016 and 27.8 (95% UI: 22.7–33.7) for season 2016/2017. Incidence varied substantially between age-groups (highest for the age-group <5 years: 23 to 47/1000, but relatively low for 65+ years: 2 to 34/1000 over the six seasons). Integration of all relevant data sources within an evidence synthesis framework has allowed the estimation – with appropriately quantified uncertainty – of the incidence of symptomatic influenza virus infection. These estimates provide valuable insight into the variation in influenza epidemics across seasons, by virus subtype and lineage, and between age-groups.
There is increasing evidence for shared genetic susceptibility between schizophrenia and bipolar disorder. Although genetic variants only convey subtle increases in risk individually, their combination into a polygenic risk score constitutes a strong disease predictor.
To investigate whether schizophrenia and bipolar disorder polygenic risk scores can distinguish people with broadly defined psychosis and their unaffected relatives from controls.
Using the latest Psychiatric Genomics Consortium data, we calculated schizophrenia and bipolar disorder polygenic risk scores for 1168 people with psychosis, 552 unaffected relatives and 1472 controls.
Patients with broadly defined psychosis had dramatic increases in schizophrenia and bipolar polygenic risk scores, as did their relatives, albeit to a lesser degree. However, the accuracy of predictive models was modest.
Although polygenic risk scores are not ready for clinical use, it is hoped that as they are refined they could help towards risk reduction advice and early interventions for psychosis.
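Mechanically, a polygenic risk score is a weighted sum of risk-allele counts, with weights taken from GWAS summary statistics. A minimal sketch; the variant count, effect sizes, and dosages below are all simulated, not the Psychiatric Genomics Consortium data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: GWAS effect sizes (log odds ratios) for M variants,
# and risk-allele dosages (0, 1 or 2) for N individuals.
M, N = 1000, 5
betas = rng.normal(0.0, 0.05, size=M)        # per-variant effect sizes
dosages = rng.integers(0, 3, size=(N, M))    # genotype dosage matrix

# A polygenic risk score is the dosage-weighted sum of effect sizes.
prs = dosages @ betas

# Scores are usually standardised against a reference distribution
# before group comparisons or predictive modelling.
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z.round(2))
```

In practice the variant set is first pruned for linkage disequilibrium and thresholded on GWAS p-values; the standardised scores then enter logistic models of case status, as in the prediction analysis above.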
Declaration of interest
R.M.M. has received honoraria for lectures from Janssen, Lundbeck, Lilly, Otsuka and Sunovion.
Herbicide resistance is ‘wicked’ in nature; therefore, results of the many educational efforts to encourage diversification of weed control practices in the United States have been mixed. It is clear that we do not sufficiently understand the totality of the grassroots obstacles, concerns, challenges, and specific solutions needed for varied crop production systems. Weed management issues and solutions vary with such variables as management styles, regions, cropping systems, and available or affordable technologies. Therefore, to help the weed science community better understand the needs and ideas of those directly dealing with herbicide resistance, seven half-day regional listening sessions were held across the United States between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide resistance management. The major goals of the sessions were to gain an understanding of stakeholders and their goals and concerns related to herbicide resistance management, to become familiar with regional differences, and to identify decision maker needs to address herbicide resistance. The messages shared by listening-session participants could be summarized by six themes: we need new herbicides; there is no need for more regulation; there is a need for more education, especially for others who were not present; diversity is hard; the agricultural economy makes it difficult to make changes; and we are aware of herbicide resistance but are managing it. The authors concluded that more work is needed to bring a community-wide, interdisciplinary approach to understanding the complexity of managing weeds within the context of the whole farm operation and for communicating the need to address herbicide resistance.
Seven half-day regional listening sessions were held between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide-resistance management. The objective of the listening sessions was to connect with stakeholders and hear their challenges and recommendations for addressing herbicide resistance. The coordinating team hired Strategic Conservation Solutions, LLC, to facilitate all the sessions. They and the coordinating team used in-person meetings, teleconferences, and email to communicate and coordinate the activities leading up to each regional listening session. The agenda was the same across all sessions and included small-group discussions followed by reporting to the full group for discussion. The planning process was the same across all the sessions, although the selection of venue, time of day, and stakeholder participants differed to accommodate the differences among regions. The listening-session format required a great deal of work and flexibility on the part of the coordinating team and regional coordinators. Overall, the participant evaluations from the sessions were positive, with participants expressing appreciation that they were asked for their thoughts on the subject of herbicide resistance. This paper details the methods and processes used to conduct these regional listening sessions and provides an assessment of the strengths and limitations of those processes.
Studies examining productive syntax have used varying elicitation methods and have tended to focus on either young children or adolescents/adults, so we lack an account of syntactic development throughout middle childhood. We describe here the results of an analysis of clause complexity in narratives produced by 354 speakers aged from four years to adulthood using the Expressive, Receptive, and Recall of Narrative Instrument (ERRNI). We show that the number of clauses per utterance increased steadily through this age range. However, the distribution of clause types depended on which of two stories was narrated, even though both stories were designed to have a similar story structure. In addition, clausal complexity was remarkably similar regardless of whether the speaker described a narrative from pictures, or whether the same narrative was recalled from memory. Finally, our findings with the youngest children showed that the task of generating a narrative from pictures may underestimate syntactic competence in those aged below five years.
To test the hypothesis that long-term care facility (LTCF) residents with Clostridium difficile infection (CDI) or asymptomatic carriage of toxigenic strains are an important source of transmission in the LTCF and in the hospital during acute-care admissions.
A 6-month cohort study with identification of transmission events was conducted based on tracking of patient movement combined with restriction endonuclease analysis (REA) and whole-genome sequencing (WGS).
Veterans Affairs hospital and affiliated LTCF.
The study included 29 LTCF residents identified as asymptomatic carriers of toxigenic C. difficile based on every-other-week perirectal screening and 37 healthcare facility-associated CDI cases (ie, diagnosis >3 days after admission or within 4 weeks of discharge to the community), including 26 hospital-associated and 11 LTCF-associated cases.
Of the 37 CDI cases, 7 (18·9%) were linked to LTCF residents with LTCF-associated CDI or asymptomatic carriage, including 3 of 26 hospital-associated CDI cases (11·5%) and 4 of 11 LTCF-associated cases (36·4%). Of the 7 transmissions linked to LTCF residents, 5 (71·4%) were linked to asymptomatic carriers versus 2 (28·6%) to CDI cases, and all involved transmission of epidemic BI/NAP1/027 strains. No incident hospital-associated CDI cases were linked to other hospital-associated CDI cases.
Our findings suggest that LTCF residents with asymptomatic carriage of C. difficile or CDI contribute to transmission both in the LTCF and in the affiliated hospital during acute-care admissions. Greater emphasis on infection control measures and antimicrobial stewardship in LTCFs is needed, and these efforts should focus on LTCF residents during hospital admissions.
Introduction: The 2015 CanMEDS framework requires all residency programs to increase their focus on Quality Improvement and Patient Safety (QIPS). We created a longitudinal (4-year), modular QIPS curriculum for FRCP emergency medicine residents at the University of Toronto (UT) using multiple educational methods. The curriculum addresses three levels of QIPS training: knowledge, practical skills at the microsystem level, and practical skills at the organization level. Aim Statement: To increase the UT FRCP emergency medicine residents' absolute score on the QIKAT-R (Quality Improvement Knowledge Application Tool Revised) by 10% after the completion of the QIPS curriculum. Methods: Physicians and other healthcare professionals with QI expertise collaboratively designed and taught the curriculum. We used the QIKAT-R as the outcome measure to evaluate QI knowledge and its applicability. The QIKAT-R is a validated measure that assesses an individual's ability to identify a QI issue within the healthcare context and propose a change initiative to address it. The first cohort of residents completed the QIKAT-R prior to the first session in 2014 (pre) and at the completion of the curriculum in 2017 (post). Each response was anonymized and scored by physicians with QI expertise. The QIKAT-R scores and comments from course evaluations are used to make yearly iterative curriculum changes. Results: The QIPS curriculum was implemented in September 2014. All nine residents in the first cohort completed the curriculum; they demonstrated an absolute increase of 19.6% (5.3/27) in the mean QIKAT-R score (13.0 ± 3.3 pre vs. 18.3 ± 3.8 post, p = 0.001). Of the pre-test responses, 26% were categorized as poor, 70% as good, and 4% as excellent, whereas of the post-test responses 11% were categorized as poor, 37% as good, and 52% as excellent (p < 0.001).
Two iterative curriculum changes were made at the end of each academic year since 2014: (1) the time between sessions was decreased to promote knowledge retention, and (2) different PGY3 QI practical project options were provided to suit residents' individual QI interests. QIKAT-R scores and resident feedback were used to evaluate the impact of the curriculum changes. Conclusion: A collaborative, modular, longitudinal QIPS curriculum for UT FRCP emergency medicine residents that met CanMEDS requirements was created using multiple educational methods. The first resident cohort to complete the curriculum demonstrated an absolute increase in QI knowledge and its applicability (as measured by the QIKAT-R) of 19.6%. Two PDSA cycles were completed to improve the curriculum with the change ideas generated from resident feedback. Ongoing challenges include limited staff availability to teach and supervise resident QI projects. Future directions include incentivising staff participation and providing mentorship for residents with a career interest in QI beyond what is offered by the curriculum.
Introduction: Health advocacy training is an important part of emergency medicine practice and education. There is little agreement, however, about how advocacy should be taught and evaluated in the postgraduate context, and there is no consolidated evidence base to guide the design and implementation of postgraduate health advocacy curricula. This literature review aims to identify existing models used for teaching and evaluating advocacy training, and to integrate these findings with current best practices in medical education to develop practical, generalizable recommendations for those involved in the design of postgraduate advocacy training programs. Methods: Ovid MEDLINE and PubMed searches combined both MeSH and non-MeSH variations on advocacy and internship and residency. Forward snowballing, incorporating grey literature searches of accreditation agencies, residency websites and reports, was also performed. Articles were excluded if unrelated to advocacy and postgraduate medical education. Results: 507 articles were identified in the search. A total of 108 peer-reviewed articles and 38 grey literature resources were included in the final analysis. Results show that many regulatory bodies and residency programs integrate advocacy training into their mission statements and curricula, but they are not prescriptive about training methods or assessment strategies. Barriers to advocacy training were identified, most notably confusion about the definition of the advocate role and a lower value placed on advocacy by trainees and educators. Common training methods included didactic modules, standardized patient encounters, and clinical exposure to vulnerable populations. Longitudinal exposure was less common but appeared the most promising, often linked to scholarly or policy objectives.
Conclusion: This review indicates that postgraduate medical education advocacy curricula are largely designed in an ad-hoc fashion, with little consistency across programs even within a given discipline. Longitudinal curriculum design appears to engage residents and allows for achievement of stated outcomes. Residency program directors from emergency medicine and other specialties may benefit from promising models in pediatrics and from a shared portal with access to advocacy curricula and the opportunity to exchange ideas related to curriculum design and implementation.
Introduction: Extracorporeal Life Support in the context of cardiac arrest (ECPR) is an emerging resuscitative therapy which has shown promising results for patients who may not otherwise survive. As a resource-intensive intervention, ECPR requires carefully selected patients to maximize its potential benefits and mitigate undue harm. This retrospective health records review sought to identify the characteristics of cardiac arrest patients presenting to two academic tertiary care Emergency Departments (EDs) in order to assess the feasibility and impact of an ECPR program. Methods: We reviewed charts for all patients aged 18–75 years presenting to two academic teaching hospitals with out-of-hospital or in-ED refractory cardiac arrest from January 2015 to December 2016. Based on a review of existing ECPR literature, we defined two sets of liberal and restrictive criteria associated with survival and applied these to our cohort for possible initiation of ECPR. The chart review was completed by one of the principal investigators, with 10% of charts randomly reviewed by a second investigator to ensure good inter-rater agreement. Any discrepancies or ambiguities found in the review were resolved collaboratively between both investigators. Results: A total of 220 charts were identified and 191 deemed eligible for inclusion in the study. The median age was 59 (IQR: 49.5-67) years and the cohort was 72% male. The initial presenting rhythm was identified as VT/VF in 47% of patients. 65% of arrests were witnessed, with immediate bystander CPR performed on 50% of patients and an additional 12% receiving CPR within 10 minutes of collapse. 60% of patients had cardiac arrest lasting less than 75 minutes. 69% of patients were identified as having a reversible cause of cardiac arrest. A favorable premorbid status was identified in 76% of patients.
Application of our two sets of ECPR inclusion criteria revealed that 17% and 3% of patients, for the liberal and restrictive criteria respectively, would have been candidates for ECPR. Conclusion: At our centre, we identified that in a two-year period, 3% to 17% of cardiac arrest patients presenting to the ED would have met inclusion criteria for ECPR, translating to an additional 0.2-1.4 patients per month admitted for critical care. These findings suggest that implementing an ECPR program at our institution could have a positive impact for patients, with only a relatively low volume of patients requiring additional resources.
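The translation from eligibility percentages to monthly ECPR volume is direct arithmetic over the two-year study window:

```python
eligible = 191   # charts included in the study
months = 24      # January 2015 - December 2016

# Apply the two sets of inclusion criteria reported above.
for label, fraction in [("restrictive", 0.03), ("liberal", 0.17)]:
    per_month = eligible * fraction / months
    print(f"{label}: {per_month:.1f} ECPR candidates per month")
# -> restrictive: 0.2 ECPR candidates per month
# -> liberal: 1.4 ECPR candidates per month
```

This reproduces the 0.2-1.4 patients-per-month range stated in the conclusion.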
The Troodos ophiolite, Cyprus, hosts the type locality for Cyprus-type, mafic volcanogenic massive sulfide (VMS) deposits. Regional soil geochemical data for Troodos are highly variable, with the Solea graben, one of three regional graben structures on Cyprus, showing enrichment in Te and Se. Of the three VMS deposits sampled within the Solea graben, Apliki exhibits the most significant enrichment in Se. Samples from the South Apliki Breccia Zone, a zone of hematite-rich breccia containing euhedral pyrite and chalcopyrite, contain up to 4953 and 3956 ppm Se in pyrite and chalcopyrite, respectively. Four paragenetic stages are identified at Apliki, and different generations of pyrite are distinguishable using trace-element chemistry analysed via laser ablation inductively coupled plasma mass spectrometry. Results indicate stage I pyrite formed under reduced conditions at high temperatures (>280°C) and contains 182 ppm Se (n = 22, σ = 253). Late stage III pyrite, which is euhedral and overprints chalcopyrite and hematite, is enriched in Se (averaging 1862 ppm; n = 23, σ = 1394). Sulfide dissolution and hematite formation displaced large amounts of Se, as hematite cannot accommodate high concentrations of Se in its crystal structure. The mechanisms proposed to explain the pronounced change in redox are twofold: fault movement leading to localized seawater ingress, coupled with a decreasing magmatic flux, generated locally oxidizing conditions and promoted sulfide dissolution. A Se/S ratio of 9280 indicates a probable magmatic component for late stage III pyrite, which is suggested as a mechanism explaining the transition from oxidizing back to reduced conditions. This study highlights the significance of changes in redox that promote sulfide dissolution, mobilization and enrichment of Se.
Vaccination programmes are considered a main contributor to the decline of infectious diseases over the 20th century. In recent years, the national vaccination coverage in the Netherlands has been declining, highlighting the need for continuous monitoring and evaluation of vaccination programmes. Our aim was to quantify the impact of long-standing vaccination programmes on notified cases in the Netherlands. We collected and digitised previously unavailable monthly case notifications of diphtheria, poliomyelitis, mumps and rubella in the Netherlands over the period 1919–2015. Poisson regression models accounting for seasonality, multi-year cycles, secular trends and auto-correlation were fit to pre-vaccination periods. Cases averted were calculated as the difference between expected cases based on model projections and observed cases. In the first 13 years of mass vaccinations, case notifications declined rapidly with 82.4% (95% credible interval (CI): 74.9–87.6) of notified cases of diphtheria averted, 92.9% (95% CI 85.0–97.2) cases of poliomyelitis, and 79.1% (95% CI 67.1–87.4) cases of mumps. Vaccination of 11-year-old girls against rubella averted 49.9% (95% CI 9.3–73.5) of cases, while universal vaccination averted 68.1% (95% CI 19.4–87.3) of cases. These findings show that vaccination programmes have contributed substantially to the reduction of infectious diseases in the Netherlands.
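The cases-averted calculation above reduces to comparing observed notifications against counterfactual model projections. A minimal sketch with invented monthly counts (not the Dutch notification data, and omitting the Poisson regression and uncertainty quantification):

```python
import numpy as np

# Hypothetical monthly case counts: model-projected expected cases in the
# absence of vaccination vs. observed cases after programme introduction.
expected = np.array([120, 150, 90, 200, 170, 140], dtype=float)
observed = np.array([30, 25, 10, 40, 35, 20], dtype=float)

# Cases averted: counterfactual expectation minus observation.
averted = expected.sum() - observed.sum()
pct_averted = 100 * averted / expected.sum()
print(f"{averted:.0f} cases averted ({pct_averted:.1f}%)")
# -> 710 cases averted (81.6%)
```

In the study itself, the expected counts come from Poisson regression models fit to pre-vaccination periods, and the credible intervals reflect the posterior uncertainty of those projections.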
Information on the morbidity burden of seasonal influenza in China is limited. A multiplier model was used to estimate the incidence and number of outpatient visits for seasonal influenza by age group for the 2015–2016 season in Beijing, the capital of China, based on reported numbers of influenza-like illness consultations and proportions of positive cases from influenza surveillance systems in Beijing, general consultation rates and other parameters from previous studies, surveys and surveillance systems. An estimated total of 1 190 200 (95% confidence interval (CI) 830 400–1 549 900) cases of influenza virus infection occurred in Beijing in the 2015–2016 season, with an attack rate of 5·5% (95% CI 3·9–7·2%). These infections resulted in an estimated 468 280 (95% CI 70 700–606 800) outpatient visits, with an attack rate of 2·2% (95% CI 0·3–2·8%). The attack rate of influenza virus infections was highest among children aged 0–4 years (31·9% (95% CI 21·9–41·9%)), followed by children aged 5–14 years (18·7% (95% CI 12·9–24·5%)). Our study demonstrated substantial influenza-related morbidity in Beijing, China, especially among preschool- and school-aged children. This suggests that the development or modification of targeted seasonal influenza vaccination strategies needs to recognize that incidence is highest in children.
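A multiplier model of this kind scales surveillance counts up by the inverse of each under-ascertainment step. A toy sketch; all parameter values below are hypothetical placeholders, not the Beijing estimates:

```python
# Hypothetical multiplier-model inputs (illustrative values only):
ili_consultations = 500_000   # ILI consultations reported by surveillance
prop_positive = 0.40          # proportion of sampled ILI cases positive for influenza
prop_seek_care = 0.35         # proportion of symptomatic cases who consult a doctor
population = 21_700_000       # catchment population

# Scale surveillance counts by test positivity, then by the inverse of the
# care-seeking proportion, to estimate symptomatic infections in the community.
infections = ili_consultations * prop_positive / prop_seek_care
attack_rate = infections / population
print(f"estimated infections: {infections:,.0f}; attack rate: {attack_rate:.1%}")
```

The study's confidence intervals come from propagating uncertainty in each multiplier (positivity, consultation rates, care-seeking), typically by resampling the parameters rather than the point calculation shown here.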
A range of endophenotypes characterise psychosis; however, there has been limited work on understanding whether and how they are interrelated.
This multi-centre study includes 8754 participants: 2212 people with a psychotic disorder, 1487 unaffected relatives of probands, and 5055 healthy controls. We investigated cognition [digit span (N = 3127), block design (N = 5491), and the Rey Auditory Verbal Learning Test (N = 3543)], electrophysiology [P300 amplitude and latency (N = 1102)], and neuroanatomy [lateral ventricular volume (N = 1721)]. We used linear regression to assess the interrelationships between endophenotypes.
The P300 amplitude and latency were not associated (regression coef. −0.06, 95% CI −0.12 to 0.01, p = 0.060), and P300 amplitude was positively associated with block design (coef. 0.19, 95% CI 0.10–0.28, p < 0.001). There was no evidence of associations between lateral ventricular volume and the other measures (all p > 0.38). All the cognitive endophenotypes were associated with each other in the expected directions (all p < 0.001). Lastly, the relationships between pairs of endophenotypes were consistent in all three participant groups, differing for some of the cognitive pairings only in the strengths of the relationships.
The P300 amplitude and latency are independent endophenotypes; the former indexes spatial visualisation and working memory, while the latter is hypothesised to index basic processing speed. Individuals with psychotic illnesses, their unaffected relatives, and healthy controls all show similar patterns of associations between endophenotypes, endorsing the theory of a continuum of psychosis liability across the population.
Using speakers of either African American English or Southern White English, we asked whether a working memory measure was linguistically unbiased, that is, equally able to distinguish between children with and without specific language impairment (SLI) across dialects, with similar error profiles and similar correlations to standardized test scores. We also examined whether the measure was affected by a child's nonmainstream dialect density. Fifty-three kindergarteners with SLI and 53 typically developing controls (70 African American English, 36 Southern White English) were given a size judgment working memory task, which involved reordering items by physical size before recall, as well as tests of syntax, vocabulary, intelligence, and nonmainstream density. Across dialects, children with SLI earned significantly poorer span scores than controls, and made more nonlist errors. Span and standardized language test performance were correlated; however, they were also both correlated with nonmainstream density. After partialing out density, span continued to differentiate the groups and correlate with syntax measures in both dialects. Thus, working memory performance can distinguish between children with and without SLI and is equally related to syntactic abilities across dialects. However, the correlation between span and nonmainstream dialect density indicates that processing-based verbal working memory tasks may not be as free from linguistic bias as often thought. Additional studies are needed to further explore this relationship.