Clostridioides difficile infection (CDI) can be prevented through infection prevention practices and antibiotic stewardship. Diagnostic stewardship (ie, strategies to improve use of microbiological testing) can also improve antibiotic use. However, little is known about the use of such practices in US hospitals, especially after multidisciplinary stewardship programs became a requirement for US hospital accreditation in 2017. Thus, we surveyed US hospitals to assess antibiotic stewardship program composition, practices related to CDI, and diagnostic stewardship.
Surveys were mailed to infection preventionists at 900 randomly sampled US hospitals between May and October 2017. Hospitals were surveyed on antibiotic stewardship programs; CDI prevention, treatment, and testing practices; and diagnostic stewardship strategies. Responses were compared by hospital bed size using weighted logistic regression.
Overall, 528 surveys were completed (59% response rate). Almost all (95%) responding hospitals had an antibiotic stewardship program. Smaller hospitals were less likely to have stewardship team members with infectious diseases (ID) training, and only 41% of hospitals met The Joint Commission accreditation standards for multidisciplinary teams. Guideline-recommended CDI prevention practices were common. Smaller hospitals were less likely to use high-tech disinfection devices, fecal microbiota transplantation, or diagnostic stewardship strategies.
Following changes in accreditation standards, nearly all US hospitals now have an antibiotic stewardship program. However, many hospitals, especially smaller hospitals, appear to struggle with access to ID expertise and with deploying diagnostic stewardship strategies. CDI prevention could be enhanced through diagnostic stewardship and by emphasizing the role of non–ID-trained pharmacists and clinicians in antibiotic stewardship.
The physiology of mesophotic Scleractinia varies with depth in response to environmental change. Previous research has documented trends in heterotrophy and photosynthesis with depth, but has not addressed between-site variation for a single species. Local-scale environmental differences between sites and heterogeneous microhabitats, arising from variation in irradiance and food availability, are likely important factors in explaining the occurrence and physiology of Scleractinia. Here, 108 colonies of Agaricia lamarcki were sampled from two locations off the coast of Utila, Honduras, distributed evenly down the observed 50 m depth range of the species. We found that depth alone was not sufficient to fully explain physiological variation. Pulse amplitude-modulation fluorometry and stable isotope analyses revealed that trends in photochemical and heterotrophic activity with depth varied markedly between sites. Our isotope analyses do not support an obligate link between photosynthetic activity and heterotrophic subsidy with increasing depth. We found that A. lamarcki colonies at the bottom of the species' depth range can be physiologically similar to those nearer the surface. As a potential explanation, we hypothesize that sites with high topographical complexity, and therefore varied microhabitats, may provide more physiological niches distributed across a larger depth range. Varied microhabitats with depth may reduce the dominance of depth as a physiological determinant. Thus, A. lamarcki may ‘avoid’ changes in environment with depth by instead existing in a subset of favourable niches. Our observations correlate with site-specific depth ranges, arguing for linking physiological and abiotic profiles when defining the distribution of mesophotic taxa.
There is increasing evidence to support integration of simulation into medical training; however, no national emergency medicine (EM) simulation curriculum exists. Using Delphi methodology, we aimed to identify and establish content validity for adult EM curricular content best suited for simulation-based training, to inform national postgraduate EM training.
A national panel of experts in EM simulation iteratively rated potential curricular topics on a 4-point scale to determine those best suited for simulation-based training. After each round, responses were analyzed. Topics scoring <2/4 were removed, and the remaining topics were resent to the panel for further rating until consensus was achieved, defined as Cronbach α ≥ 0.95. At the conclusion of the Delphi process, topics rated ≥3.5/4 were considered “core” curricular topics, while those rated 3.0–3.5 were considered “extended” curricular topics.
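The consensus criterion can be illustrated with a short sketch (our own minimal implementation, not the panel's actual software), treating raters as the "items" of the reliability analysis and topics as the observations:

```python
import statistics

def cronbach_alpha(ratings):
    """Cronbach's alpha for a topics-by-raters matrix of scores.

    `ratings` is a list of rows, one per topic, each holding k raters'
    scores. Raters are treated as 'items'; in this Delphi design, survey
    rounds continued until alpha >= 0.95, i.e. high inter-rater consistency.
    """
    k = len(ratings[0])
    # Variance of each rater's scores across topics
    rater_vars = [statistics.pvariance([row[i] for row in ratings])
                  for i in range(k)]
    # Variance of the per-topic total scores
    total_var = statistics.pvariance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(rater_vars) / total_var)
```

Perfectly concordant raters give α = 1, while divergent ratings pull α down, so the ≥0.95 cut-off operationalises "consensus".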
Forty-five experts from 13 Canadian centres participated. Two hundred eighty potential curricular topics, in 29 domains, were generated from a systematic literature review, relevant educational documents, and Delphi panellists. Three rounds of surveys were completed before consensus was achieved, with response rates ranging from 93% to 100%. Twenty-eight topics, in eight domains, reached consensus as “core” curricular topics. Thirty-five additional topics, in 14 domains, reached consensus as “extended” curricular topics.
Delphi methodology allowed for achievement of expert consensus and content validation of EM curricular content best suited for simulation-based training. These results provide a foundation for improved integration of simulation into postgraduate EM training and can be used to inform a national simulation curriculum to supplement clinical training and optimize learning.
Influenza and respiratory syncytial virus (RSV) are common causes of respiratory tract infections and place a burden on health services each winter. Systems that describe the timing and intensity of such activity will improve the public health response and the deployment of interventions against these pressures. Here we develop early warning and activity intensity thresholds for monitoring influenza and RSV using two novel data sources: general practitioner out-of-hours consultations (GP OOH) and telehealth calls (NHS 111). Moving Epidemic Method (MEM) thresholds were developed for winter 2017–2018. The NHS 111 cold/flu threshold was breached several weeks in advance of other systems. The NHS 111 RSV epidemic threshold was breached in week 41, in advance of RSV laboratory reporting. Combining the use of MEM thresholds with daily monitoring of NHS 111 and GP OOH syndromic surveillance systems provides the potential to alert to threshold breaches in real time. An advantage of using thresholds across different health systems is the ability to capture a range of healthcare-seeking behaviour, which may reflect differences in disease severity. This study also provides a quantifiable measure of seasonal RSV activity, which contributes to our understanding of RSV activity in advance of the potential introduction of new RSV vaccines.
X-ray diffraction topography is the name given to several x-ray diffraction techniques where large area x-ray beams diffracted from a crystal provide detailed information about the surface structure and internal perfection of crystal microstructures. Since x-ray topographic techniques are based on Bragg (reflection) or Laue (transmission) diffraction from a crystal lattice, they are extremely sensitive to any atomic lattice imperfections and strains. Alterations of the interplanar spacing as small as one part in ten thousand extending over a reasonable number of atomic cell lengths can be recorded as a corresponding change in the diffracted beam intensity. Line Modified-Asymmetric Crystal Topography (LM-ACT) is one such reflection technique that shows particular promise in the field of microelectronics. The LM-ACT system is designed with low angular divergence in the x-ray beam probe. Low probe beam divergence allows details of device geometries on the order of microns to be resolved in the recorded x-ray intensity variation of the diffracted beam.
The LM-ACT system was applied here to the study of integrated circuits (IC) after specific processing steps were accomplished during IC fabrication and in the final product condition. Topographs obtained from specular crystal surfaces that were implanted through a patterned mask showed contrast variations between the implanted and non-implanted regions; details of the mask patterns have been resolved on the order of a few microns. LM-ACT topographs from annealed and unannealed implanted specimens showed marked differences; as a result, it is suggested that LM-ACT would be beneficial in optimizing the processing schedule for a particular wafer/electronic system. A significant feature of the LM-ACT technique is the capability for producing high resolution stereo-pair topographs that provide quantitative information through the depth of individual process layers in an integrated circuit.
We sought to address the prior limitations of symptom checker accuracy by analysing the diagnostic and triage feasibility of online symptom checkers using a consecutive series of real-life emergency department (ED) patient encounters, addressing a complex patient population – those with hepatitis C or HIV. We aimed to study the diagnostic and triage accuracy of these symptom checkers in relation to an emergency room physician-determined diagnosis. An ED retrospective analysis was performed on 8363 consecutive adult patients. Eligible patients included 90 with HIV, 67 with hepatitis C, and 11 with both HIV and hepatitis C. Five online symptom checkers were utilised for diagnosis (Mayo Clinic, WebMD, Symptomate, Symcat, Isabel), three with triage capabilities. Symptom checker output was compared with ED physician-determined diagnosis data with regard to diagnostic accuracy and differential diagnosis listing, along with triage advice. All symptom checkers, whether for combined HIV and hepatitis C, HIV alone or hepatitis C alone, had poor diagnostic accuracy: Top 1 (<20%), Top 3 (<35%), Top 10 (<40%), Listed at All (<45%). Significant variation existed for each individual symptom checker, as some appeared more accurate at listing the diagnosis at the top of the differential, while others were more apt to list the diagnosis at all. With regard to ED triage data, a significantly higher percentage of hepatitis C patients (59.7%; 40/67) were found to have an initial diagnosis with emergent criteria than HIV patients (35.6%; 32/90). Symptom checker diagnostic capabilities are quite inferior to physician diagnostic capabilities. Complex patients such as those with HIV or hepatitis C may carry a more specific differential diagnosis, warranting symptom checkers to have diagnostic algorithms accounting for such complexity.
Symptom checkers carry the potential for real-time epidemiologic monitoring of patient symptoms, as symptom entries and subsequent symptom checker diagnosis could allow health officials a means to track illnesses in specific patient populations and geographic regions. In order to do this, accurate and reliable symptom checkers are warranted.
Mismatch negativity (MMN) is an event-related potential (ERP) component reflecting auditory predictive coding. Repeated standard tones evoke increasing positivity (‘repetition positivity’; RP), reflecting strengthening of the standard's memory trace and of the prediction that it will recur. Likewise, deviant tones preceded by more standard repetitions evoke greater negativity (‘deviant negativity’; DN), reflecting stronger prediction error signaling. These memory trace effects are also evident in the MMN difference wave. Here, we assess group differences and test-retest reliability of these indices in schizophrenia patients (SZ) and healthy controls (HC).
Electroencephalography was recorded twice, 2 weeks apart, from 43 SZ and 30 HC, during a roving standard paradigm. We examined ERPs to the third, eighth, and 33rd standards (RP), immediately subsequent deviants (DN), and the corresponding MMN. Memory trace effects were assessed by comparing amplitudes associated with the three standard repetition trains.
Compared with controls, SZ showed reduced MMNs and DNs, but normal RPs. Both groups showed memory trace effects for RP, MMN, and DN, with a trend for attenuated DNs in SZ. Intraclass correlations obtained via this paradigm indicated good-to-moderate reliabilities for overall MMN, DN, and RP, but moderate-to-poor reliabilities for components associated with short, intermediate, and long standard trains, and poor reliability of their memory trace effects.
MMN deficits in SZ reflected attenuated prediction error signaling (DN), with relatively intact predictive code formation (RP) and memory trace effects. This roving standard MMN paradigm requires additional development/validation to obtain suitable levels of reliability for use in clinical trials.
High body mass index (BMI) has been associated with lower risks of suicidal behaviour and being underweight with increased risks. However, evidence is inconsistent and sparse, particularly for women. We aim to study this relationship in a large cohort of UK women.
In total 1.2 million women, mean age 56 (s.d. 5) years, without prior suicide attempts or other major illness, recruited in 1996–2001 were followed by record linkage to national hospital admission and death databases. Cox regression yielded relative risks (RRs) and 95% confidence intervals (CIs) for attempted suicide and suicide by BMI, adjusted for baseline lifestyle factors and self-reported treatment for depression or anxiety.
After 16 (s.d. 3) years of follow-up, 4930 women attempted suicide and 642 died by suicide. The small proportion (4%) with BMI <20 kg/m2 were at clearly greater risk of attempted suicide (RR = 1.38, 95% CI 1.23–1.56) and suicide (RR = 2.10, 1.59–2.78) than women of BMI 20–24.9 kg/m2; p < 0.0001 for both comparisons. Small body size at 10 and 20 years old was also associated with increased risks. Half the cohort had BMIs >25 kg/m2 and, while risks were somewhat lower than for BMI 20–24.9 kg/m2 (attempted suicide RR = 0.91, 0.86–0.96; p = 0.001; suicide RR = 0.79, 0.67–0.93; p = 0.006), the reductions in risk were not strongly related to level of BMI.
Being underweight is associated with a definite increase in the risk of suicidal behaviour, particularly death by suicide. Residual confounding cannot be excluded for the small and inconsistent decreased risk of suicidal behaviour associated with being overweight or obese.
A proper subgraph of a connected linear graph is said to disconnect the graph if removing it leaves a disconnected graph. In this paper we characterize, in the following sense, the disconnecting subgraphs of a fixed connected graph. We define two distinct types of disconnecting subgraphs (isthmuses and articulators) which are minimal in the sense that no proper subgraph of either type can disconnect the graph. We then show that any disconnecting subgraph must contain either an isthmus or an articulator. We also define a set of subgraphs (called dense) which form a lattice. We show that the union of the minimal dense subgraphs contains all isthmuses and articulators. In terms of these subgraphs we investigate some of the consequences of assuming that a disconnecting subgraph must contain at least m points.
Legionnaires’ disease (LD) incidence in the USA has quadrupled since 2000. Health departments must detect LD outbreaks quickly to identify and remediate sources. We tested the performance of a system to prospectively detect simulated LD outbreaks in Allegheny County, Pennsylvania, USA. We generated three simulated LD outbreaks based on published outbreaks. After verifying no significant clusters existed in surveillance data during 2014–2016, we embedded simulated outbreak-associated cases into 2016, assigning simulated residences and report dates. We mimicked daily analyses in 2016 using the prospective space-time permutation scan statistic to detect clusters of ⩽30 and ⩽180 days using 365-day and 730-day baseline periods, respectively. We used recurrence interval (RI) thresholds of ⩾20, ⩾100 and ⩾365 days to define significant signals. We calculated sensitivity, specificity and positive and negative predictive values for daily analyses, separately for each embedded outbreak. Two large, simulated cooling tower-associated outbreaks were detected. As the RI threshold was increased, sensitivity and negative predictive value decreased, while positive predictive value and specificity increased. A small, simulated potable water-associated outbreak was not detected. Use of an RI threshold of ⩾100 days minimised time-to-detection while maximising positive predictive value. Health departments should consider using this system to detect community-acquired LD outbreaks.
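The day-level evaluation can be sketched as follows (a minimal illustration with hypothetical counts; the study's actual tabulations are not reproduced here). For daily analyses, the recurrence interval corresponds roughly to the reciprocal of the cluster p-value expressed in days, so an RI threshold of ⩾100 days corresponds approximately to p ⩽ 0.01:

```python
def daily_performance(tp, fp, tn, fn):
    """Classification metrics over daily analyses: a 'signal' day is one
    whose most likely cluster meets the recurrence-interval threshold;
    a 'true positive' day is a signal day within a simulated outbreak
    window."""
    return {
        "sensitivity": tp / (tp + fn),  # outbreak days that signalled
        "specificity": tn / (tn + fp),  # quiet non-outbreak days
        "ppv": tp / (tp + fp),          # signals that were real
        "npv": tn / (tn + fn),          # quiet days that were truly quiet
    }
```

Raising the RI threshold suppresses signals, which, as reported above, trades sensitivity and negative predictive value for specificity and positive predictive value.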
A cluster of Salmonella Paratyphi B variant L(+) tartrate(+) infections with indistinguishable pulsed-field gel electrophoresis patterns was detected in October 2015. Interviews initially identified nut butters, kale, kombucha, chia seeds and nutrition bars as common exposures. Epidemiologic, environmental and traceback investigations were conducted. Thirteen ill people infected with the outbreak strain were identified in 10 states with illness onset during 18 July–22 November 2015. Eight of 10 (80%) ill people reported eating Brand A raw sprouted nut butters. Brand A conducted a voluntary recall. Raw sprouted nut butters are a novel outbreak vehicle, though contaminated raw nuts, nut butters and sprouted seeds have all caused outbreaks previously. Firms producing raw sprouted products, including nut butters, should consider a kill step to reduce the risk of contamination. People at greater risk for foodborne illness may wish to consider avoiding raw products containing raw sprouted ingredients.
During the 2009 influenza pandemic, a rapid assessment of disease severity was a challenge as a significant proportion of cases did not seek medical care; care-seeking behaviour changed and the proportion asymptomatic was unknown. A random-digit-dialling telephone survey was undertaken during the 2011/12 winter season in England and Wales to address the feasibility of answering these questions. A proportional quota sampling strategy was employed based on gender, age group, geographical location, employment status and level of education. Households were recruited pre-season and re-contacted immediately following peak seasonal influenza activity. The pre-peak survey was undertaken in October 2011 with 1061 individuals recruited and the post-peak telephone survey in March 2012. Eight hundred and thirty-four of the 1061 (78.6%) participants were successfully re-contacted. Their demographic characteristics compared well to national census data. In total, 8.4% of participants self-reported an influenza-like illness (ILI) in the previous 2 weeks, with 3.2% conforming to the World Health Organization (WHO) ILI case definition. In total, 29.6% of the cases reported consulting their general practitioner, and 54.1% of the 1061 participants agreed to be re-contacted about providing biological samples. A population-based cohort was successfully recruited and followed up. Longitudinal survey methodology provides a practical tool to assess disease severity during future pandemics.
In 2016, imported Zika virus (ZIKV) infections and the presence of a potentially competent mosquito vector (Aedes albopictus) implied that ZIKV transmission in New York City (NYC) was possible. The NYC Department of Health and Mental Hygiene developed contingency plans for a urosurvey to rule out ongoing local transmission as quickly as possible if a locally acquired case of confirmed ZIKV infection was suspected. We identified tools to (1) rapidly estimate the population living in any given 150-m radius (i.e. within the typical flight distance of an Aedes mosquito) and (2) calculate the sample size needed to test and rule out further local transmission. As we expected near-zero ZIKV prevalence, methods relying on the normal approximation to the binomial distribution were inappropriate. Instead, we assumed a hypergeometric distribution, 10 missed cases at maximum, a urine assay sensitivity of 92.6% and 100% specificity. Three suspected example risk areas were evaluated, with estimated population sizes of 479–4453, corresponding to a minimum of 133–1244 urine samples. This planning exercise improved our capacity for ruling out local transmission of an emerging infection in a dense, urban environment where all residents in a suspected risk area cannot be feasibly sampled.
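The sample-size logic can be reconstructed as a short sketch from the stated assumptions: at most 10 undetected cases, 92.6% assay sensitivity, 100% specificity, and a target probability of detecting at least one case, which the abstract does not state explicitly and is assumed here to be 95%. The function name and structure are ours:

```python
from math import comb

def min_sample_size(pop, cases=10, sens=0.926, power=0.95):
    """Smallest n such that, if `cases` infections are hidden in a
    population of `pop`, sampling n residents without replacement
    detects at least one positive with probability >= `power`.

    P(miss) = sum_k P(k infected drawn; hypergeometric) * (1 - sens)^k,
    i.e. every sampled infected person must escape detection, each with
    probability (1 - sens), given a perfectly specific assay.
    """
    for n in range(1, pop + 1):
        p_miss = sum(
            comb(cases, k) * comb(pop - cases, n - k) / comb(pop, n)
            * (1 - sens) ** k
            for k in range(0, min(cases, n) + 1)
            if n - k <= pop - cases
        )
        if 1 - p_miss >= power:
            return n
    return pop
```

Under these assumptions, the smallest and largest example risk areas (populations 479 and 4453) reproduce the abstract's minimum sample sizes of 133 and 1244.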
Chronic hepatitis C virus (HCV) infection is the most common blood-borne infection in the USA. Estimating prevalence is critical for monitoring diagnosis, treatment, and cure and for allocating resources. Surveillance data reported to the New York City (NYC) Health Department, 2000–2015, were used to estimate HCV prevalence in NYC in 2015. The numbers who died, out-migrated or whose last RNA test was negative were removed from the count of people reported with HCV. A simulation model was used to remove estimates of people whose infection spontaneously cleared or was cured and to add an estimate of people unaware of infection. The surveillance-based HCV prevalence in NYC in 2015 overall was 1.4% (95% certainty level (CL) 1.2–1.6%; n ≈ 116 000, 95% CL ≈99 000–135 000) and among adults aged ⩾20 years was 1.8% (95% CL 1.5–2.0%, n ≈ 115 000, 95% CL ≈99 000–134 000), lower than the 2010 estimate among adults aged ⩾20 years of 2.4% (n ≈ 147 000). Contributors to the decrease in HCV prevalence from 2010 to 2015 include both the availability of highly effective treatment and also deaths among an ageing population. The 2015 estimate can be used to set NYC-specific HCV screening and treatment targets and monitor progress towards HCV elimination.
To integrate electronic clinical decision support tools into clinical practice and to evaluate the impact on indwelling urinary catheter (IUC) use and catheter-associated urinary tract infections (CAUTIs).
This 4-phase observational study included all inpatients at a multicampus, academic medical center between 2011 and 2015.
Phase 1 comprised best practices training and standardization of electronic documentation. Phase 2 comprised real-time electronic tracking of IUC duration. In phase 3, a triggered alert reminded clinicians of IUC duration. In phase 4, a new IUC order (1) introduced automated order expiration and (2) required consideration of alternatives and selection of an appropriate indication.
Overall, 2,121 CAUTIs, 179,070 new catheters, 643,055 catheter days, and 2,186 reinsertions occurred in 3.85 million hospitalized patient days during the study period. The CAUTI rate per 10,000 patient days decreased incrementally in each phase from 9.06 in phase 1 to 1.65 in phase 4 (relative risk [RR], 0.182; 95% confidence interval [CI], 0.153–0.216; P < .001). New catheters per 1,000 patient days declined from 53.4 in phase 1 to 39.5 in phase 4 (RR, 0.740; 95% CI, 0.730; P < .001), and catheter days per 1,000 patient days decreased from 194.5 in phase 1 to 140.7 in phase 4 (RR, 0.723; 95% CI, 0.719–0.728; P < .001). The reinsertion rate declined from 3.66% in phase 1 to 3.25% in phase 4 (RR, 0.894; 95% CI, 0.834–0.959; P = .0017).
The phased introduction of decision support tools was associated with progressive declines in new catheters, total catheter days, and CAUTIs. Clinical decision support tools offer a viable and scalable intervention to target hospital-wide IUC use and hold promise for other quality improvement initiatives.
Significant increases in excess all-cause mortality, particularly in the elderly, were observed during the winter of 2014/15 in England. With influenza A(H3N2) the dominant circulating influenza A subtype, this paper determines the contribution of influenza to this excess, controlling for weather. A standardised multivariable Poisson regression model was employed, with weekly all-cause deaths as the dependent variable, for the period 2008–2015. Adjusting for extreme temperature, a total of 26 542 (95% CI 25 301–27 804) deaths in those aged 65+ and 1942 (95% CI 1834–2052) in 15–64-year-olds were associated with influenza from week 40, 2014 to week 20, 2015. This is compatible with the circulation of influenza A(H3N2). It is the largest estimated number of influenza-related deaths in England since prior to 2008/09. The findings highlight the potential health impact of influenza and the important role of the annual influenza vaccination programme in protecting the population, including the elderly, who are vulnerable to a severe outcome.
Introduction: Our emergency department (ED) sees a low volume of high-acuity pediatric cases. A needs assessment revealed that 68% of our emergency physicians (EPs) manage pediatric patients in fewer than 25% of their shifts. The same percentage of EPs, as well as ED nurses, indicated they were uncomfortable managing a critically unwell neonate. Thus, an interprofessional curriculum focused on pediatric emergencies for ED staff was developed. In-situ simulation education was chosen as the most appropriate method to consolidate each didactic block of the curriculum and uncover important system gaps. Methods: A needs assessment was conducted, and emerging themes informed the IPE curriculum objectives. A committee of experts in simulation, pediatric emergencies and nursing education designed a full-day, RCPSC-accredited, interprofessional in-situ simulation program. Results: A progressive segmental strategy maximized learning outcomes. The initial phase (2 hrs) comprised an “early recognition of sepsis” seminar and 4 rotating skills stations (equipment familiarity, sedating the child, IV starts, and mixing IV medication). This deliberate, adaptive, customized practice was enhanced by expert facilitation at each station, directly engaging participants and providing real-time feedback. The second phase allowed interprofessional teams of MDs, RNs and physician assistants to apply knowledge gained from the didactic and skills stations to in-situ simulated emergencies. Each group participated in two pediatric emergency scenarios. Scenarios ran 20 minutes, followed by a 40-minute debrief. Each scenario had a trained debriefer and a content expert. The day concluded with a final debrief attended by all participants. Formalized checklists assessed participants' knowledge translation during simulation exercises. Participants assessed facilitators and evaluated the simulation day and curriculum via anonymous feedback forms.
Debriefing sessions were scribed, and knowledge gaps and system errors were recorded. Results were distributed to ED leaders, and responsibilities were assigned to key stakeholders to ensure accountability and improvement on system errors. All participants reported the experience to be relevant and helpful to their learning, and all requested more frequent simulation days. System gaps identified included: use of metric vs imperial measurements, non-compatible laryngoscope equipment, and inadequate identification of team personnel. As a result, the above-mentioned equipment has been replaced, and we are developing resuscitation room ID stickers for all team roles. Conclusion: Simulation as a culmination to a didactic curriculum provides a safe environment to translate acquired knowledge, increasing ED staff comfort and familiarity with rare pediatric cases. Additionally, it is an excellent tool to reveal system gaps and allows us to fill these gaps to improve departmental functioning and safety.