Ecosystem modeling, a pillar of the systems ecology paradigm (SEP), addresses questions such as: How much carbon and nitrogen are cycled within ecological sites, landscapes, or indeed the earth system? How are human activities modifying these flows? Modeling, when coupled with field and laboratory studies, represents the essence of the SEP: models embody accumulated knowledge and generate hypotheses to test understanding of ecosystem processes and behavior. Initially, ecosystem models were used primarily to improve our understanding of how the biophysical aspects of ecosystems operate. Current ecosystem models, however, are widely used to predict how large-scale phenomena such as climate change and management practices affect ecosystem dynamics, and to assess the potential consequences of these changes for economic activity and policy making. In sum, ecosystem models embedded in the SEP remain our best mechanism for integrating diverse types of knowledge about how the earth system functions and for making quantitative predictions that can be confronted with observations of reality. The modeling efforts discussed are the Century and DayCent ecosystem models, the Grassland Ecosystem Model ELM, food web models, the Savanna model, agent-based and coupled systems modeling, and Bayesian modeling.
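Century and its descendants represent soil organic matter as interacting pools that decompose with first-order kinetics. The sketch below illustrates only that general pool structure; the pool names, rate constants, and transfer fraction are hypothetical placeholders, not the actual Century parameterization.

```python
# Minimal sketch, assuming a Century-style structure: soil organic matter as
# pools decaying with first-order kinetics. All parameter values here are
# illustrative placeholders, not the Century model's parameterization.

def step_pools(active, slow, dt=1.0, litter_in=0.5,
               k_active=0.3, k_slow=0.01, transfer=0.4):
    """Advance a two-pool soil carbon model (g C m^-2) by one step (years)."""
    decomp_active = k_active * active                 # first-order decay, active pool
    decomp_slow = k_slow * slow                       # first-order decay, slow pool
    active += dt * (litter_in - decomp_active)        # litter input feeds the active pool
    slow += dt * (transfer * decomp_active - decomp_slow)  # part of decay moves to slow pool
    respired = dt * ((1 - transfer) * decomp_active + decomp_slow)  # CO2 to atmosphere
    return active, slow, respired

active, slow = 10.0, 100.0
for _ in range(50):                                   # 50 one-year steps
    active, slow, co2 = step_pools(active, slow)
print(f"after 50 yr: active={active:.2f}, slow={slow:.2f} g C m^-2")
```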
Emerging from the warehouse of knowledge about terrestrial ecosystem functioning and the application of the systems ecology paradigm, exemplified by the power of simulation modeling, tremendous strides have been made in linking the interactions of land, atmosphere, and water from local to global scales. Through integration of ecosystem, atmospheric, soil, and, more recently, social science interactions, plausible scenarios and even reasonable predictions are now possible about the outcomes of human activities. The applications of that knowledge to the effects of changing climates, human-caused nitrogen enrichment of ecosystems, and altered UV-B radiation represent challenges addressed in this chapter. The primary linkages addressed are the C, N, S, and H2O cycles and UV-B radiation. Carbon dioxide exchanges between land and the atmosphere, N additions and losses to and from lands and waters, early studies of SO2 in grassland ecosystems, and the effects of UV-B radiation on ecosystems have been mainstays of the research described in this chapter. This research knowledge has been used in international and national climate assessments, for example the IPCC reports, the US National Climate Assessment, and the Paris Climate Accord. Likewise, it has been used to develop concepts and technologies related to sustainable agriculture, C sequestration, and food security.
Perceived discrimination is associated with worse mental health. Few studies have assessed whether perceived discrimination (i) is associated with the risk of psychotic disorders and (ii) contributes to an increased risk among minority ethnic groups relative to the ethnic majority.
Methods
We used data from the European Network of National Schizophrenia Networks Studying Gene-Environment Interactions Work Package 2, a population-based case–control study of incident psychotic disorders in 17 catchment sites across six countries. We calculated odds ratios (OR) and 95% confidence intervals (95% CI) for the associations between perceived discrimination and psychosis using mixed-effects logistic regression models. We used stratified and mediation analyses to explore differences for minority ethnic groups.
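As a minimal illustration of the effect measure used here, the sketch below computes a crude (unadjusted) odds ratio with a Wald 95% confidence interval from a 2×2 table; the study's actual estimates came from adjusted mixed-effects models, and the counts below are invented.

```python
# Hedged sketch: crude odds ratio with a Wald 95% CI from a 2x2 table.
# The study used adjusted mixed-effects logistic regression; these counts
# are invented for illustration only.
import math

cases_exp, cases_unexp = 418, 582        # hypothetical exposed/unexposed cases
ctrls_exp, ctrls_unexp = 342, 658        # hypothetical exposed/unexposed controls

or_hat = (cases_exp * ctrls_unexp) / (cases_unexp * ctrls_exp)
se = math.sqrt(1/cases_exp + 1/cases_unexp + 1/ctrls_exp + 1/ctrls_unexp)
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```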
Results
The proportion reporting any perceived experience of major discrimination (e.g. unfair treatment by police, not getting hired) was higher in cases than in controls (41.8% v. 34.2%), as was the proportion reporting pervasive experiences of discrimination (≥3 types; 11.3% v. 5.5%). In fully adjusted models, the odds of psychosis were 1.20 (95% CI 0.91–1.59) for any discrimination and 1.79 (95% CI 1.19–1.59) for pervasive discrimination, compared with no discrimination. In stratified analyses, the magnitude of association for pervasive experiences of discrimination appeared stronger for minority ethnic groups (OR = 1.73, 95% CI 1.12–2.68) than for the ethnic majority (OR = 1.42, 95% CI 0.65–3.10). In exploratory mediation analysis, pervasive discrimination explained only a small share (5.1%) of the excess risk among minority ethnic groups.
Conclusions
Pervasive experiences of discrimination are associated with slightly increased odds of psychotic disorders and may minimally help explain excess risk for minority ethnic groups.
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.
Recently, artificial intelligence-powered devices have been put forward as potentially powerful tools for the improvement of mental healthcare. An important question is how these devices impact the physician–patient interaction.
Aims
Aifred is an artificial intelligence-powered clinical decision support system (CDSS) for the treatment of major depression. Here, we explore the use of a simulation centre environment in evaluating the usability of Aifred, particularly its impact on the physician–patient interaction.
Method
Twenty psychiatry and family medicine attending staff and residents were recruited to complete a 2.5-h study at a clinical interaction simulation centre with standardised patients. Each physician had the option of using the CDSS to inform their treatment choice in three 10-min clinical scenarios with standardised patients portraying mild, moderate and severe episodes of major depression. Feasibility and acceptability data were collected through self-report questionnaires, scenario observations, interviews and standardised patient feedback.
Results
All 20 participants completed the study. Initial results indicate that the tool was acceptable to clinicians and feasible for use during clinical encounters. Clinicians indicated a willingness to use the tool in real clinical practice and a significant degree of trust in the system's predictions to assist with treatment selection, and they reported that the tool helped increase patient understanding of and trust in treatment. The simulation environment allowed for the evaluation of the tool's impact on the physician–patient interaction.
Conclusions
The simulation centre allowed for direct observation of clinician use of the tool and of its impact on the clinician–patient interaction before clinical studies. It may therefore offer a useful and important environment for early testing of new technological tools. The present results will inform further tool development and clinician training materials.
Objective: To describe the epidemiologic and genomic characteristics of a severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak in a large skilled nursing facility (SNF), and the strategies that controlled transmission.
Design, Setting, and Participants:
Cohort study conducted during March 22–May 4, 2020, of all staff and residents at a 780-bed SNF in San Francisco, California.
Methods:
Contact tracing and symptom screening guided targeted testing of staff and residents; respiratory specimens were also collected through serial point-prevalence surveys (PPS) in units with confirmed cases. Cases were confirmed by real-time reverse transcription–polymerase chain reaction testing for SARS-CoV-2; whole-genome sequencing (WGS) characterized viral isolate lineages and relatedness. Infection prevention and control (IPC) interventions included restricting from work any staff who had close contact with a confirmed case; restricting movement between units; implementing surgical face masking facility-wide; and recommending PPE (isolation gown, gloves, N95 respirator, and eye protection) for clinical interactions in units with confirmed cases.
Results:
Of 725 staff and residents tested through targeted testing and serial PPS, 21 (3%) were SARS-CoV-2 positive: 16 (76%) staff and 5 (24%) residents. Fifteen (71%) cases were linked to a single unit. Targeted testing identified 17 (81%) cases; PPS identified 4 (19%). Most (71%) cases were identified before the IPC interventions. WGS was performed on SARS-CoV-2 isolates from four staff and four residents; five were of Santa Clara County lineage and the other three were of distinct lineages.
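The percentages above follow directly from the reported counts; a quick arithmetic check (counts taken from the text):

```python
# Arithmetic check of the proportions reported above (counts from the text).
positives, tested = 21, 725
staff, residents = 16, 5
assert staff + residents == positives
print(f"overall positivity: {positives / tested:.1%}")   # ~2.9%, reported as 3%
print(f"staff: {staff / positives:.0%}, residents: {residents / positives:.0%}")  # 76% / 24%
```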
Conclusions:
Early implementation of targeted testing, serial PPS, and multimodal IPC interventions limited SARS-CoV-2 transmission within the SNF.
Grass pea (Lathyrus sativus L.) has a Mediterranean origin and spread to Western Europe, Africa and South Asia. Over time, this grain legume crop has become important in South Asia, where it is often affected by waterlogging at germination; varieties whose seeds tolerate waterlogging at germination are therefore needed. This study evaluated waterlogging tolerance in a grass pea diversity panel. First, morpho-agronomic traits of 53 grass pea genotypes from seven countries (Afghanistan, Australia, Bangladesh, Cyprus, Ethiopia, Greece and Pakistan) were measured in a glasshouse. Seeds of the collection were then sown into waterlogged soil for 6 days, after which the soil was drained for 8 days. Finally, representative genotypes from each country of origin exhibiting the three survival patterns (described below) were tested to identify the effect of seed priming on germination and seedling growth in waterlogged soil. Canonical analysis of six traits (seed weight, pod length, pod width, flowering time, time to maturity and seedling survival) showed that genotypes from Bangladesh and Ethiopia were similar. There was significant variation amongst genotypes in waterlogging tolerance. Genotypes from Bangladesh and Ethiopia showed the highest seedling survival (54% and 47%), with an ability to germinate under waterlogging and then maintain growth from the first day of draining to the final sampling (Pattern 1). In contrast, genotypes from other origins either germinated during waterlogging but did not survive drainage (Pattern 2), or failed to germinate and had low seedling survival during waterlogging and drainage (Pattern 3). Priming seeds reduced seedling survival in grass pea. Despite the species' Mediterranean origin, specific ecotypes of grass pea with greater waterlogging tolerance under warm, wet conditions have been favoured in Bangladesh and Ethiopia, where adaptation to extreme precipitation events at germination, and seedling survival upon soil drainage, is critical for successful crops.
This chapter comprises the following sections: names, taxonomy, subspecies and distribution, descriptive notes, habitat, movements and home range, activity patterns, feeding ecology, reproduction and growth, behavior, parasites and diseases, status in the wild, and status in captivity.
The object described and discussed in this paper is a recently found Anglo-Saxon strap-end. Although incomplete, the strap-end is of interest for its rarity in being made of silver, for its decoration, and for its inscribed text. One element of the decoration is a depiction of the agnus dei. In the discussion, the decoration on the strap-end and its significance are set in the context of other instances of the agnus dei, both on artefacts and in manuscripts, from late Anglo-Saxon England.
This is the first report on the association between trauma exposure and depression from the Advancing Understanding of RecOvery afteR traumA (AURORA) multisite longitudinal study of adverse post-traumatic neuropsychiatric sequelae (APNS) among participants seeking emergency department (ED) treatment in the aftermath of a traumatic life experience.
Methods
We focus on participants presenting at EDs after a motor vehicle collision (MVC), which characterizes most AURORA participants, and examine associations of participant socio-demographics and MVC characteristics with 8-week depression as mediated through peritraumatic symptoms and 2-week depression.
Results
Eight-week depression prevalence was relatively high (27.8%) and was associated with several MVC characteristics (being a passenger v. driver; injuries to other people). Peritraumatic distress was associated with 2-week but not 8-week depression. Most of these associations held when controlling for peritraumatic symptoms and, to a lesser degree, for depressive symptoms at 2 weeks post-trauma.
Conclusions
These observations, coupled with substantial variation in the relative strength of the mediating pathways across predictors, raise the possibility of diverse and potentially complex underlying biological and psychological processes. These remain to be elucidated in more in-depth analyses of the rich and evolving AURORA database, with the aim of finding new targets for intervention and new tools for risk-based stratification following trauma exposure.
Cannabis use has been associated with psychosis through exposure to delta-9-tetrahydrocannabinol (Δ9-THC), its key psychoactive ingredient. Although preclinical and human evidence suggests that Δ9-THC acutely modulates glial function and hypothalamic-pituitary-adrenal (HPA) axis activity, whether differential sensitivity to the acute psychotomimetic effects of Δ9-THC is associated with differential effects of Δ9-THC on glial function and HPA-axis response has never been tested.
Methods
A double-blind, randomized, placebo-controlled, crossover study investigated whether sensitivity to the psychotomimetic effects of Δ9-THC moderates the acute effects of a single Δ9-THC dose (1.19 mg/2 ml) on myo-inositol levels, a surrogate marker of glia, in the anterior cingulate cortex (ACC), and on circadian cortisol levels, the key neuroendocrine marker of the HPA axis, in 16 healthy participants (seven males) with modest previous cannabis exposure.
Results
The Δ9-THC-induced change in ACC myo-inositol levels differed significantly between those sensitive to (Δ9-THC minus placebo; M = −0.251, s.d. = 1.242) and those not sensitive (M = 1.615, s.d. = 1.753) to the psychotomimetic effects of the drug (t(14) = 2.459, p = 0.028). Further, the Δ9-THC-induced change in cortisol levels over the study period (baseline minus 2.5 h post-drug injection) differed significantly between those sensitive to (Δ9-THC minus placebo; M = −275.4, s.d. = 207.519) and those not sensitive (M = 74.2, s.d. = 209.281) to the psychotomimetic effects of the drug (t(13) = 3.068, p = 0.009). Specifically, Δ9-THC exposure lowered ACC myo-inositol levels and disrupted the physiological diurnal cortisol decrease only in those subjects developing transient psychosis-like symptoms.
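The group comparisons reported above take the form of an independent-samples t test on drug-minus-placebo change scores; a minimal sketch with simulated values (not the study data) follows.

```python
# Minimal sketch of the between-group test reported above: a pooled-variance
# independent-samples t test on Δ9-THC-minus-placebo change scores.
# Values are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sensitive = rng.normal(-0.251, 1.242, size=8)       # psychotomimetic responders
not_sensitive = rng.normal(1.615, 1.753, size=8)    # non-responders

t, p = stats.ttest_ind(not_sensitive, sensitive)    # equal variances assumed
df = len(sensitive) + len(not_sensitive) - 2
print(f"t({df}) = {t:.3f}, p = {p:.3f}")
```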
Conclusions
These findings suggest that interindividual differences in the transient psychosis-like effects of Δ9-THC may result from its differential impact on glial function and the stress response.
Over the last two decades, heart centres have developed strategies to meet the neurodevelopmental needs of children with congenital heart disease. Since the publication of guidelines in 2012, cardiac neurodevelopmental follow-up programmes have become more widespread. Local neurodevelopmental programmes, however, have been developed independently in widely varying environments. We sought to characterise variation in structure and personnel in cardiac neurodevelopmental programmes. A 31-item survey was sent to all member institutions of the Cardiac Neurodevelopmental Outcome Collaborative. Multidisciplinary teams at each centre completed the survey. Responses were compiled in a descriptive fashion. Of the 29 invited centres, 23 responded to the survey (79%). Centres reported more anticipated neurodevelopment visits between birth and 5 years of age (median 5, range 2–8) than 5–18 years (median 2, range 0–10) with 53% of centres lacking any standard for routine neurodevelopment evaluations after 5 years of age. Estimated annual neurodevelopment clinic volume ranged from 85 to 428 visits with a median of 16% of visits involving children >5 years of age. Among responding centres, the Bayley Scales of Infant and Toddler Development and Wechsler Preschool and Primary Scale of Intelligence were the most routinely used tests. Neonatal clinical assessment was more common (64%) than routine neonatal brain imaging (23%) during hospitalisation. In response to clinical need and published guidelines, centres have established formal cardiac neurodevelopment follow-up programmes. Centres vary considerably in their approaches to routine screening and objective testing, with many centres currently focussing their resources on evaluating younger patients.
Rapid spread of coronavirus disease 2019 (COVID-19) has affected people with intellectual disability disproportionately. Existing data do not provide enough information to understand the factors associated with increased deaths in those with intellectual disability. Establishing who is at high risk is important for developing prevention strategies, given that risk factors and comorbidities in people with intellectual disability may differ from those in the general population.
Aims
To identify comorbidities, demographic and clinical factors of those individuals with intellectual disability who have died from COVID-19.
Method
We conducted an observational, descriptive case series of deaths due to COVID-19 in people with intellectual disability. Along with established risk factors observed in the general population, possible specific risk factors and comorbidities for COVID-19-related death in people with intellectual disability were examined. Comparisons between mild and moderate-to-profound intellectual disability subcohorts were undertaken.
Results
Data on 66 deaths in individuals with intellectual disability were analysed. This group was younger (mean age 64 years) than people dying of COVID-19 in the general population. High rates of moderate-to-profound intellectual disability (n = 43), epilepsy (n = 29), mental illness (n = 29), dysphagia (n = 23), Down syndrome (n = 20) and dementia (n = 15) were observed.
Conclusions
This is the first study exploring associations between possible risk factors and comorbidities and COVID-19 deaths in people with intellectual disability. Our data provide insight into possible factors contributing to these deaths, some of which varied between the mild and moderate-to-profound intellectual disability groups. This highlights an urgent need for further systematic inquiry into the possible cumulative impact of these factors and comorbidities, given the possibility of COVID-19 resurgence.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are endemic in the Chicago region. We assessed the regional impact of a CRE control intervention targeting high-prevalence facilities, that is, long-term acute-care hospitals (LTACHs) and ventilator-capable skilled nursing facilities (vSNFs). Methods: In July 2017, an academic–public health partnership launched a regional CRE prevention bundle: (1) identifying patients' CRE status by querying Illinois' XDRO registry and by periodic point-prevalence surveys reported to public health; (2) cohorting CRE patients, or placing them in private rooms, with contact precautions; (3) monitoring hand hygiene adherence, combined with general infection control education and guidance from project coordinators and public health; and (4) daily chlorhexidine gluconate (CHG) bathing. Informed by epidemiology and modeling, we targeted LTACHs and vSNFs within a 13-mile radius of the coordinating center. Illinois mandates CRE reporting to the XDRO registry, which can also be manually queried or can generate automated alerts to facilitate interfacility communication; the regional intervention promoted increased automation of these alerts to hospitals. The prespecified primary outcome was incident clinical CRE cultures reported to the XDRO registry in Cook County by month, analyzed by segmented regression modeling. A secondary outcome was colonization prevalence, measured by serial point-prevalence surveys for carbapenemase-producing organism colonization in LTACHs and vSNFs. Results: All eligible LTACHs (n = 6) and vSNFs (n = 9) participated in the intervention. One vSNF declined CHG bathing, and vSNFs that implemented CHG bathing typically bathed residents 2–3 times per week rather than daily. Overall, there were significant gaps in infection control practices, especially in vSNFs. In addition, 75 Illinois hospitals adopted automated alerts (56 during the intervention period). Mean CRE incidence in Cook County decreased from 59.0 cases per month at baseline to 40.6 cases per month during the intervention (P < .001). In a segmented regression model, there was an average reduction of 10.56 cases per month during the 24-month intervention period (P = .02) (Fig. 1), and an estimated 253 incident CRE cases were averted. Mean CRE incidence also decreased in the stratum of vSNF/LTACH intervention facilities (P = .03). However, evidence of ongoing CRE transmission, particularly in vSNFs, persisted, and CRE colonization prevalence remained high at intervention facilities (Table 1). Conclusions: A resource-intensive regional public health CRE intervention that included enhanced interfacility communication and targeted infection prevention was implemented. There was a significant decline in incident clinical CRE cases in Cook County, despite persistently high CRE colonization prevalence in intervention facilities. vSNFs, where understaffing and underresourcing were common and lengths of stay range from months to years, posed a major prevalence challenge, underscoring the need for aggressive infection control improvements in these facilities.
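The primary analysis named above is a segmented (interrupted time series) regression; a minimal sketch of that model form follows, fit to a simulated monthly series rather than the XDRO registry data.

```python
# Sketch of a segmented (interrupted time series) regression of monthly case
# counts, with level and slope change at the intervention start. The series
# is simulated; it is not the XDRO registry data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = np.arange(48)                         # 24 baseline + 24 intervention months
post = (months >= 24).astype(float)            # 1 after the intervention starts
since = np.where(post == 1, months - 24, 0)    # months elapsed since intervention

cases = 59 - 0.1 * months - 10 * post - 0.3 * since + rng.normal(0, 4, 48)

X = sm.add_constant(np.column_stack([months, post, since]))
fit = sm.OLS(cases, X).fit()
# coefficients: intercept, baseline trend, level change, post-intervention slope change
print(fit.params.round(2))
```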
Funding: The Centers for Disease Control and Prevention (SHEPheRD Contract No. 200-2011-42037)
Disclosures: M.Y.L. has received research support in the form of contributed product from OpGen and Sage Products (now part of Stryker Corporation), and has received an investigator-initiated grant from CareFusion Foundation (now part of BD).
Background: Clostridioides difficile is a major cause of antibiotic-associated colitis and the most common healthcare-associated pathogen in the United States. Interrupting the known transmission mechanisms of C. difficile in hospitals requires appropriate hand hygiene and disinfection of potentially contaminated surfaces and patient equipment. However, only limited data are available on the effectiveness of germicides against various strains of C. difficile, with and without fetal calf serum (FCS), and at multiple exposure times. For this reason, we undertook the following evaluation of germicide effectiveness. Methods: The sporicidal activity of the germicides against 5 strains of C. difficile was evaluated using a quantitative carrier test, an ASTM International standard developed by Sattar et al. In this protocol, metal carriers (1 cm diameter, 0.7 mm thick) were inoculated with 10 µL of spore suspension containing ~10³ or ~10⁶ C. difficile spores, then exposed to 50 µL of germicide for 1, 5, 10, or 20 minutes. The following C. difficile strains were used in these studies: ATCC 9689, J9, BI-9, 630, and CF-4. To determine whether C. difficile spore susceptibility was similar to that of other spores, we also tested Bacillus atrophaeus spores (ATCC strain 19659). FCS was used to simulate organic matter. Results: High-level disinfectants (eg, OPA, glutaraldehyde), chemical sterilants (eg, peracetic acid), and high concentrations of chlorine (>5,000 ppm) were generally sporicidal (>3 log10 reduction) in 5–10 minutes, and sometimes in 1 minute. This level of sporicidal activity was demonstrated for the various strains of C. difficile spores and for B. atrophaeus spores (Table 1). There did not appear to be any significant difference in inactivation of C. difficile spores (BI-9 strain) in the presence or absence of FCS (Table 2). Discussion: The sporicidal activity of disinfectants is critical because such formulations are routinely used to eliminate the risk associated with noncritical and semicritical instruments and environmental surfaces. Our data suggest that immersion in most (but not all) high-level disinfectants for 10 minutes is likely to eradicate C. difficile spores (>4 log10 reduction) from semicritical equipment (eg, endoscopes). Additionally, high concentrations of chlorine and some high-level disinfectants will kill C. difficile spores in 1 or 2 minutes.
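The sporicidal criterion above is expressed as a log10 reduction, computed from spores recovered on treated versus untreated carriers; a worked sketch with illustrative counts (not the study data) follows.

```python
# Sketch of the log10-reduction arithmetic used in a quantitative carrier test:
# spores recovered from a treated carrier vs. an untreated control. Counts are
# illustrative, not the study data.
import math

control_cfu = 1.0e6      # spores recovered from untreated control carrier
treated_cfu = 4.0e2      # spores recovered after germicide exposure
reduction = math.log10(control_cfu / treated_cfu)
print(f"log10 reduction = {reduction:.2f}")  # >3 would meet the sporicidal criterion above
```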
Funding: None
Disclosures: Drs. Rutala and Weber are consultants to PDI (Professional Disposable International)
Background: Most medical and surgical devices used in healthcare facilities are made of materials that are heat stable and are therefore sterilized by heat, primarily steam sterilization. Low-temperature sterilization methods developed for heat- and moisture-sensitive devices include ethylene oxide gas (ETO), hydrogen peroxide gas plasma (HPGP), vaporized hydrogen peroxide (VHP), and hydrogen peroxide plus ozone. This study is the first to evaluate the microbicidal activity of the FDA-cleared VHP sterilizer and other methods (Table 1) in the presence of salt and serum (10% FCS). Methods: Brushed stainless steel discs (test carriers) were inoculated with test microbes (Table 1) and subjected to 4 sterilization methods: steam, ETO, VHP, and HPGP. Results: Steam sterilization killed all 5 vegetative and 3 spore-forming test organisms in the presence of salt and serum (Table 1). Similarly, the ETO and HPGP sterilizers inactivated the test organisms with a failure rate of 1.9% each (ie, 6 of 310 for ETO and 5 of 270 for HPGP). Although steam had no failures whereas both ETO and HPGP demonstrated some failures with vegetative bacteria, the failure rate of steam did not differ significantly from that of either ETO (P > .05) or HPGP (P > .05). However, the VHP system tested failed to inactivate the test organisms in 76.3% of tests (206 of 270; P < .00001) (Table 1). Conclusions: This investigation demonstrated that steam sterilization was the most effective method, followed by ETO and HPGP and, lastly, VHP.
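Comparisons of failure rates like those above can be made with Fisher's exact test on (failures, successes) counts; the sketch below uses the ETO, HPGP, and VHP counts given in the text. The abstract does not state which test produced its P values, so this is an illustration, not a reproduction of the analysis.

```python
# Sketch of a failure-rate comparison like those reported above, using
# Fisher's exact test on (failures, total tests) counts from the text.
from scipy.stats import fisher_exact

eto = (6, 310)       # failures, total tests (~1.9%)
hpgp = (5, 270)      # (~1.9%)
vhp = (206, 270)     # (76.3%)

def p_value(a, b):
    table = [[a[0], a[1] - a[0]], [b[0], b[1] - b[0]]]
    return fisher_exact(table)[1]   # two-sided p value

print(f"ETO vs HPGP: p = {p_value(eto, hpgp):.2f}")   # similar rates, expect large p
print(f"VHP vs HPGP: p = {p_value(vhp, hpgp):.1e}")   # expect p far below .00001
```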
Funding: None
Disclosures: Dr. Rutala was a consultant to ASP (Advanced Sterilization Products)
Background: Surgical instruments that enter sterile tissue should be sterile because microbial contamination could result in disease transmission. Despite careful surgical instrument reprocessing, surgeons and other healthcare personnel (HCP) describe cases in which surgical instruments have been contaminated with organic material (eg, blood). Although most of these cases are noticed before the instrument reaches the patient, in some cases the contaminated instrument contaminates the sterile field or, rarely, the patient. In this study, we evaluated the robustness of sterilization technologies when spores and bacteria mixed with blood were placed on dirty (uncleaned) instruments. Methods: Dirty surgical instruments were inoculated with 1.5×10⁵ to 4.1×10⁷ spores or vegetative bacteria (MRSA, VRE, or Mycobacterium terrae) in the presence or absence of blood. The spores used were those most resistant to the sterilization process tested (eg, Geobacillus stearothermophilus for steam and HPGP, and Bacillus atrophaeus for ETO). Once the inoculum dried, the instruments were placed in a peel pouch and sterilized by steam, ethylene oxide (ETO), or hydrogen peroxide gas plasma (HPGP). These experiments are not representative of practice or of manufacturers' recommendations because cleaning must always precede sterilization. Results: Steam sterilization killed all G. stearothermophilus spores and M. terrae inoculated onto dirty instruments in the presence or absence of blood (Table 1). ETO failed to inactivate all test spores (B. atrophaeus) when inoculated onto dirty instruments (60% failure) and onto dirty instruments with blood (90% failure); however, ETO did kill the vegetative bacteria (MRSA, VRE) under the same 2 test conditions (ie, dirty instruments with and without blood). The failure rates of HPGP for G. stearothermophilus spores and MRSA were 60% and 40%, respectively, when mixed with blood on a dirty instrument. Conclusions: This investigation demonstrated that steam sterilization is the most robust sterilization process and is effective even when instruments were not cleaned and the test organisms (G. stearothermophilus spores and MRSA) were mixed with blood. The low-temperature sterilization technologies tested (ie, ETO, HPGP) failed to inactivate the test spores, but ETO did kill the test bacteria (ie, MRSA, VRE). These findings should assist HCP in assessing the risk of infection to patients when potentially contaminated surgical instruments enter the sterile field or are unintentionally used on patients during surgery. Our data also demonstrate the importance of thorough cleaning prior to sterilization.
Funding: None
Disclosures: Dr. Rutala was a consultant to ASP (Advanced Sterilization Products)
Background: Candida auris is an emerging fungal pathogen that is often resistant to major classes of antifungal drugs. It is considered a serious global health threat because it has caused severe infections with frequent mortality in over a dozen countries. C. auris can survive on healthcare environmental surfaces for at least 7 days, it causes outbreaks in healthcare facilities, and it has an environmental route of transmission. Thus, infection prevention strategies, such as surface disinfection and room decontamination technologies (eg, ultraviolet [UV-C] light), will be essential to controlling transmission. Unfortunately, data are limited regarding the activity of UV-C against this pathogen. In this study, a UV-C device was evaluated for its antimicrobial activity against C. auris and C. albicans. Methods: We tested the antifungal activity of a single UV-C device using the vegetative bacteria cycle, which delivers a reflected dose of 12,000 µW/cm². Testing was performed using Formica sheet carriers (7.6 × 7.6 cm; 3 × 3 inches). The carriers were inoculated with C. auris or C. albicans and placed horizontally on a surface or vertically (ie, perpendicular) relative to the vertical UV-C lamp, at distances from 1.2 m (~4 ft) to 2.4 m (~8 ft). Results: Direct UV-C, with and without FCS (log10 reductions 4.57 and 4.45, respectively), exhibited a significantly higher log10 reduction against C. auris than indirect UV-C (log10 reductions 2.41 and 1.96, respectively) (Fig. 1 and Table 1). For C. albicans, although direct UV-C produced a higher log10 reduction (with and without FCS, 5.26 and 5.07, respectively) than indirect exposure (with and without FCS, 3.96 and 3.56, respectively), the difference was not statistically significant. Vertical placement yielded statistically higher log10 reductions than horizontal placement against both C. auris and C. albicans, with and without FCS. For example, for C. auris with FCS, the log10 reduction for vertical surfaces was 4.92 (95% CI, 3.79–6.04) versus 2.87 (95% CI, 2.36–3.38) for horizontal surfaces. Conclusions: C. auris can be inactivated on environmental surfaces by UV-C as long as the factors that affect inactivation (eg, exposure time) are optimized. These data and other published UV-C data should be used in developing cycle parameters that prevent contaminated surfaces from being a source of acquisition of this globally emerging pathogen by staff or patients.
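The summaries above take the form of a mean log10 reduction with a 95% confidence interval across replicate carriers; a minimal sketch of that calculation, using simulated replicate values rather than the study measurements, follows.

```python
# Sketch: mean log10 reduction with a t-based 95% CI across replicate carriers,
# the summary form reported above. Replicate values are simulated, not the
# study measurements.
import numpy as np
from scipy import stats

reps = np.array([4.6, 5.1, 4.8, 5.3, 4.7])   # hypothetical per-carrier log10 reductions
mean = reps.mean()
sem = reps.std(ddof=1) / np.sqrt(reps.size)
half = stats.t.ppf(0.975, reps.size - 1) * sem
print(f"log10 reduction = {mean:.2f} (95% CI, {mean - half:.2f}-{mean + half:.2f})")
```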