Herbicide resistance has for decades been an increasing problem in agronomic crops such as corn and soybean. Several weed species have evolved herbicide resistance in turfgrass systems such as golf courses, sports fields, and sod production, particularly biotypes of annual bluegrass and goosegrass. The consequences of herbicide resistance in agronomic cropping systems indicate what could happen in turfgrass if herbicide resistance becomes broader in terms of species, distribution, and mechanisms of action. The turfgrass industry must take action to develop effective resistance management programs while this problem is still relatively small in scope. We propose that lessons learned from a series of national listening sessions, conducted by the Herbicide Resistance Education Committee of the Weed Science Society of America to better understand the human dimensions affecting herbicide resistance in crop production, provide tremendous insight into the themes to address when developing effective resistance management programs for the turfgrass industry.
At present, analysis of diet and bladder cancer (BC) is mostly based on the intake of individual foods. Examining food combinations offers scope to deal with the complexity and unpredictability of the diet and aims to overcome the limitations of studying nutrients and foods in isolation. This article aims to demonstrate the usability of supervised data mining methods to extract the food groups related to BC. In order to derive key food groups associated with BC risk, we applied the data mining technique C5.0 with 10-fold cross-validation in the BLadder cancer Epidemiology and Nutritional Determinants study, including data from eighteen case–control studies and one nested case–cohort study, comprising 8320 BC cases out of 31 551 participants. Dietary data on the eleven main food groups of the Eurocode 2 Core classification codebook, and relevant non-diet data (i.e. sex, age and smoking status), were available. Five key food groups were extracted; in order of importance, beverages (non-milk); grains and grain products; vegetables and vegetable products; fats, oils and their products; and meats and meat products were associated with BC risk. Since these food groups correspond to previously proposed BC-related dietary factors, data mining seems to be a promising technique in the field of nutritional epidemiology and deserves further examination.
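As a rough illustration of the supervised data-mining approach described above, the sketch below fits a decision tree with 10-fold cross-validation. It is a hedged analogue only: the study used the C5.0 algorithm (available in the R package C50), whereas this example uses scikit-learn's CART implementation, and the file name and column names are hypothetical placeholders rather than the study's actual variables.

```python
# Hedged sketch: C5.0 (R package "C50") approximated with scikit-learn's CART tree.
# File and column names are hypothetical; categorical fields are assumed to be
# numerically encoded already, and only a subset of the eleven Eurocode 2 main
# food groups is listed as a placeholder.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("bladder_cancer_diet.csv")          # hypothetical dataset
food_groups = ["beverages_nonmilk", "grains", "vegetables", "fats_oils", "meats"]
X = df[food_groups + ["sex", "age", "smoking_status"]]
y = df["bc_case"]                                     # 1 = bladder cancer case, 0 = control

tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=0)
scores = cross_val_score(tree, X, y, cv=10, scoring="roc_auc")   # 10-fold cross-validation
print(f"Mean 10-fold AUC: {scores.mean():.3f}")

# Feature importances give one view of which food groups drive the splits.
tree.fit(X, y)
for name, importance in sorted(zip(X.columns, tree.feature_importances_),
                               key=lambda pair: -pair[1])[:5]:
    print(name, round(importance, 3))
```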
We describe an ultra-wide-bandwidth, low-frequency receiver recently installed on the Parkes radio telescope. The receiver system provides continuous frequency coverage from 704 to 4032 MHz. For much of the band, the system temperature is approximately 22 K and the receiver system remains in a linear regime even in the presence of strong mobile phone transmissions. We discuss the scientific and technical aspects of the new receiver, including its astronomical objectives, as well as the feed, receiver, digitiser, and signal processor design. We describe the pipeline routines that form the archive-ready data products and how those data files can be accessed from the archives. The system performance is quantified, including the system noise and linearity, beam shape, antenna efficiency, polarisation calibration, and timing stability.
The Genomics Used to Improve DEpression Decisions (GUIDED) trial assessed outcomes associated with combinatorial pharmacogenomic (PGx) testing in patients with major depressive disorder (MDD). Analyses used the 17-item Hamilton Depression (HAM-D17) rating scale; however, studies demonstrate that the abbreviated HAM-D6 rating scale, which focuses on core depression symptoms, may have greater sensitivity for detecting differences between treatment and placebo. The sensitivity of the HAM-D6 has not, however, been tested in trials comparing two active treatment arms. Here, we evaluated the sensitivity of the HAM-D6 scale, relative to the HAM-D17 scale, when assessing outcomes for actively treated patients in the GUIDED trial.
Outpatients (N=1,298) diagnosed with MDD and an inadequate treatment response to ≥1 psychotropic medication were randomized into treatment as usual (TAU) or combinatorial PGx-guided (guided-care) arms. Combinatorial PGx testing was performed on all patients, though test reports were available only to the guided-care arm. All patients and raters were blinded to study arm until after week 8. Medications on the combinatorial PGx test report were categorized based on the level of predicted gene-drug interactions: ‘use as directed’, ‘moderate gene-drug interactions’, or ‘significant gene-drug interactions’. Patient outcomes were assessed by arm at week 8 using the HAM-D6 and HAM-D17 rating scales, including symptom improvement (percent change in scale score), response (≥50% decrease in scale score), and remission (HAM-D6 ≤4 and HAM-D17 ≤7).
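For concreteness, the week-8 outcome definitions above (percent symptom improvement, response as a ≥50% decrease, and remission at HAM-D6 ≤4 or HAM-D17 ≤7) can be computed with a short sketch like the one below; the data file and column names are assumptions, not the trial's actual data dictionary.

```python
# Hedged sketch of the week-8 outcome definitions described in the abstract.
# Thresholds follow the text; data structures and column names are illustrative.
import pandas as pd

def week8_outcomes(df: pd.DataFrame, scale: str, remission_cutoff: int) -> pd.DataFrame:
    baseline = df[f"{scale}_week0"]
    week8 = df[f"{scale}_week8"]
    out = pd.DataFrame(index=df.index)
    out["improvement_pct"] = 100 * (baseline - week8) / baseline   # percent change from baseline
    out["response"] = out["improvement_pct"] >= 50                 # >=50% decrease
    out["remission"] = week8 <= remission_cutoff                   # HAM-D6 <=4 or HAM-D17 <=7
    return out

patients = pd.read_csv("guided_trial_scores.csv")                  # hypothetical file
hamd6 = week8_outcomes(patients, "hamd6", remission_cutoff=4)
hamd17 = week8_outcomes(patients, "hamd17", remission_cutoff=7)
print(hamd6[["response", "remission"]].mean())
print(hamd17[["response", "remission"]].mean())
```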
At week 8, the guided-care arm demonstrated statistically significant symptom improvement over TAU using the HAM-D6 scale (Δ=4.4%, p=0.023), but not using the HAM-D17 scale (Δ=3.2%, p=0.069). The response rate increased significantly for guided-care compared with TAU using both the HAM-D6 (Δ=7.0%, p=0.004) and HAM-D17 (Δ=6.3%, p=0.007) scales. Remission rates were also significantly greater for guided-care versus TAU using both scales (HAM-D6 Δ=4.6%, p=0.031; HAM-D17 Δ=5.5%, p=0.005). Patients taking medication(s) predicted to have gene-drug interactions at baseline showed further increased benefit over TAU at week 8 using HAM-D6 for symptom improvement (Δ=7.3%, p=0.004), response (Δ=10.0%, p=0.001), and remission (Δ=7.9%, p=0.005). Comparatively, the magnitude of the differences in outcomes between arms at week 8 was lower using HAM-D17 (symptom improvement Δ=5.0%, p=0.029; response Δ=8.0%, p=0.008; remission Δ=7.5%, p=0.003).
Combinatorial PGx-guided care achieved significantly better patient outcomes compared with TAU when assessed using the HAM-D6 scale. These findings suggest that the HAM-D6 scale is better suited than the HAM-D17 scale for evaluating change in randomized, controlled trials comparing active treatment arms.
To measure the association between statewide adoption of the Centers for Disease Control and Prevention’s (CDC’s) Core Elements for Hospital Antimicrobial Stewardship Programs (Core Elements) and hospital-associated methicillin-resistant Staphylococcus aureus bacteremia (MRSA) and Clostridioides difficile infection (CDI) rates in the United States. We hypothesized that states with a higher percentage of reported compliance with the Core Elements have significantly lower MRSA and CDI rates.
An observational, longitudinal study of all US states.
We used 2014–2016 data from Hospital Compare, Provider of Service files, Medicare cost reports, and the CDC’s Patient Safety Atlas website. Outcomes were the MRSA standardized infection ratio (SIR) and the CDI SIR. The key explanatory variable was the percentage of hospitals in each state meeting the Core Elements. We estimated state and time fixed-effects models with time-variant controls, and we weighted our analyses by the number of hospitals in each state.
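A minimal sketch of the weighted state- and time-fixed-effects specification described above might look like the following; the panel file, control variables, and column names are assumptions rather than the study's actual dataset.

```python
# Hedged sketch of a state- and year-fixed-effects model, weighted by the number
# of hospitals per state. File, outcome, and control names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_year_panel.csv")     # hypothetical state-year panel, 2014-2016

model = smf.wls(
    "cdi_sir ~ pct_core_elements_compliant + teaching_share + mean_bed_size "
    "+ C(state) + C(year)",                     # state and year fixed effects
    data=panel,
    weights=panel["n_hospitals"],               # weight by hospitals in each state
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

print(model.params["pct_core_elements_compliant"])
```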
The percentage of hospitals reporting compliance with the Core Elements increased in all states between 2014 and 2016. A 1% increase in reported antimicrobial stewardship program (ASP) compliance was associated with a 0.3% decrease (P < .01) in CDIs in 2016 relative to 2014. We did not find an association for MRSA infections.
Increasing documentation of the Core Elements may be associated with decreases in the CDI SIR. We did not find evidence of such an association for the MRSA SIR, probably due to the short length of the study and the variety of stewardship strategies that ASPs may encompass.
The reported incidence of Clostridioides difficile infection (CDI) has increased in recent years, partly due to the broadening adoption of nucleic acid amplification tests (NAATs) replacing enzyme immunoassay (EIA) methods. Our aim was to quantify the impact of this switch on reported CDI rates using a large, multihospital, empirical dataset.
We analyzed 9 years of retrospective CDI data (2009–2017) from 47 hospitals in the southeastern United States; 37 hospitals switched to NAAT during this period, including 24 with sufficient pre- and post-switch data for statistical analyses. Poisson regression was used to quantify the NAAT-over-EIA incidence rate ratio (IRR) at hospital and network levels while controlling for longitudinal trends, the proportion of intensive care unit patient days, changes in surveillance methodology, and previously detected infection cluster periods. We additionally used change-point detection methods to identify shifts in the mean and/or slope of hospital-level CDI rates, and we compared results to recorded switch dates.
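The Poisson model for the NAAT-over-EIA incidence rate ratio described above could be sketched as below, using log patient-days as the exposure offset; the covariate and column names are illustrative assumptions, not the study's actual variables.

```python
# Hedged sketch: Poisson regression for the NAAT-over-EIA incidence rate ratio (IRR)
# with log patient-days as the exposure offset. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

monthly = pd.read_csv("hospital_month_cdi.csv")    # hypothetical hospital-month counts

fit = smf.glm(
    "cdi_cases ~ naat_in_use + time_trend + icu_patient_day_share "
    "+ surveillance_change + cluster_period",
    data=monthly,
    family=sm.families.Poisson(),
    offset=np.log(monthly["patient_days"]),        # exposure: patient-days at risk
).fit()

irr = np.exp(fit.params["naat_in_use"])            # IRR for NAAT vs EIA testing
ci_low, ci_high = np.exp(fit.conf_int().loc["naat_in_use"])
print(f"IRR = {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```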
For hospitals that transitioned to NAAT, average unadjusted CDI rates increased substantially after the test switch, from 10.9 to 23.9 per 10,000 patient days. Individual hospital IRRs ranged from 0.75 to 5.47, with a network-wide IRR of 1.75 (95% confidence interval, 1.62–1.89). Reported CDI rates changed significantly an average of 1.6 months after the switch to NAAT testing (standard deviation, 1.9 months).
Hospitals that switched from EIA to NAAT testing experienced an average post-switch increase of 75% in reported CDI rates after adjusting for other factors, and this increase was often gradual or delayed.
The Murchison Widefield Array (MWA) is an open-access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere, enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, these observing programs have recorded 20 000 h of data, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
Myocardial strain measurements are increasingly used to detect complications following heart transplantation. However, the temporal association of these changes with allograft rejection is not well defined. The aim of this study was to describe the evolution of strain measurements prior to the diagnosis of rejection in paediatric heart transplant recipients.
All paediatric heart transplant recipients (2004–2015) with at least one episode of acute rejection were identified. Longitudinal and circumferential strain measurements were assessed at the time of rejection and retrospectively on all echocardiograms back to the most recent negative biopsy. A smoothing technique (LOESS) was used to visualise the changes of each variable over time and to estimate the time preceding rejection at which alterations are first detectable.
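A minimal LOESS (LOWESS) sketch of the kind described above, for visualising how a strain measurement evolves in the weeks before rejection, might look like this; the data file and column names are illustrative assumptions.

```python
# Hedged LOWESS sketch for visualising strain trends before rejection.
# File and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

echo = pd.read_csv("strain_vs_days_to_rejection.csv")   # hypothetical long-format data

smoothed = lowess(
    echo["global_longitudinal_strain"],    # y: strain measured on each echocardiogram
    echo["days_before_rejection"],         # x: time relative to rejection diagnosis
    frac=0.5,                              # smoothing span (a tuning choice)
)

plt.scatter(echo["days_before_rejection"], echo["global_longitudinal_strain"], s=8, alpha=0.4)
plt.plot(smoothed[:, 0], smoothed[:, 1], color="k")
plt.xlabel("Days before biopsy-proven rejection")
plt.ylabel("Global longitudinal strain (%)")
plt.show()
```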
A total of 58 rejection episodes were included from 37 unique patients. In the presence of rejection, there were decrements from baseline in global longitudinal strain (−18.2 versus −14.1), global circumferential strain (−24.1 versus −19.6), longitudinal strain rate (−1 versus −0.8), circumferential strain rate (−1.3 versus −1.1), peak longitudinal early diastolic strain rate (1.3 versus 1), and peak circumferential early diastolic strain rate (1.5 versus 1.3) (p<0.01 for all). The earliest detectable changes occurred 45 days prior to rejection with simultaneous alterations in myocardial strain and ejection fraction.
Changes in graft function can be detected non-invasively prior to the diagnosis of rejection. However, changes in strain occur concurrently with a decline in ejection fraction. Strain measurements aid in the non-invasive detection of rejection, but may not facilitate earlier diagnosis compared to more traditional measures of ventricular function.
The second Singapore Mental Health Study (SMHS) – a nationwide, cross-sectional, epidemiological survey – was initiated in 2016 with the intent of tracking the state of mental health of the general population in Singapore. The study employed the same methodology as the first survey, initiated in 2010. The SMHS 2016 aimed to (i) establish the 12-month and lifetime prevalence and correlates of major depressive disorder (MDD), dysthymia, bipolar disorder, generalised anxiety disorder (GAD), obsessive compulsive disorder (OCD) and alcohol use disorder (AUD) (which included alcohol abuse and dependence) and (ii) compare the prevalence of these disorders with reference to data from the SMHS 2010.
Door-to-door household surveys were conducted with adult Singapore residents aged 18 years and above from 2016 to 2018 (n = 6126), yielding a response rate of 69.0%. The subjects were randomly selected using a disproportionate stratified sampling method and assessed using the World Health Organization Composite International Diagnostic Interview version 3.0 (WHO-CIDI 3.0). Lifetime and 12-month diagnoses of selected mental disorders, including MDD, dysthymia, bipolar disorder, GAD, OCD and AUD (alcohol abuse and alcohol dependence), were based on the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria.
The lifetime prevalence of at least one mood, anxiety or alcohol use disorder was 13.9% in the adult population. MDD had the highest lifetime prevalence (6.3%) followed by alcohol abuse (4.1%). The 12-month prevalence of any DSM-IV mental disorders was 6.5%. OCD had the highest 12-month prevalence (2.9%) followed by MDD (2.3%). Lifetime and 12-month prevalence of mental disorders assessed in SMHS 2016 (13.8% and 6.4%) was significantly higher than that in SMHS 2010 (12.0% and 4.4%). A significant increase was observed in the prevalence of lifetime GAD (0.9% to 1.6%) and alcohol abuse (3.1% to 4.1%). The 12-month prevalence of GAD (0.8% vs. 0.4%) and OCD (2.9% vs. 1.1%) was significantly higher in SMHS 2016 as compared to SMHS 2010.
The high prevalence of OCD and its increase across the two surveys need to be tackled at a population level, both by creating awareness of the disorder and by promoting early treatment. Youth emerge as a vulnerable group that is more likely to be affected by mental disorders; targeted interventions in this group, with a focus on youth-friendly and accessible care centres, may lead to earlier detection and treatment of mental disorders.
Common bean (Phaseolus vulgaris L.) is perhaps the most important grain legume in sub-Saharan Africa (SSA) smallholder systems for food security and household income. Although a wide choice of varieties is available, smallholder farmers in western Kenya realize yields that are low and variable since they operate in risky production environments. Significant seasonal variations exist in rainfall and in the severity of pests and diseases. This situation is worsened by low and declining soil fertility, coupled with the low capacity of farmers to purchase production inputs such as fertilizers, fungicides and insecticides, and land scarcity. The objective of this study was to investigate whether growing multiple bean varieties instead of a single variety can enable farmers to enhance yield stability across seasons and ensure food security. Five common bean varieties were evaluated in multiple farms for 11 seasons at Kapkerer in Nandi County, western Kenya. Data were collected on grain yield, days to 50% flowering and major diseases. In addition, daily rainfall was recorded throughout the growing seasons. The five varieties were combined in all possible ways to create 31 single- and multiple-variety production strategies. The strategies were evaluated for grain yield performance and yield stability over seasons to determine the risk of not attaining a particular yield target. Results indicated that cropping multiple bean varieties can be an effective way of reducing production risks in heterogeneous smallholder systems. Yield stability can be greatly enhanced across diverse environments, leading to improved food security, especially for resource-poor smallholder farmers operating in risk-prone environments. Although the results show that some of the single-variety strategies were high yielding, their yield stability was generally lower than that of multiple-variety strategies. Resource-poor, risk-averse farmers can greatly increase the probability of exceeding their yield targets by cropping multiple bean varieties with relatively low yields but high grain yield stability. Trading off high grain yield for yield stability might be an important strategy for minimizing bean production risks.
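For illustration, the 31 single- and multiple-variety strategies arise as the non-empty subsets of the five varieties (2^5 − 1 = 31). The sketch below enumerates them and compares mean yield against a simple stability measure (coefficient of variation across seasons); the data file, the equal-area mixing assumption, and the 1 t/ha yield target are hypothetical.

```python
# Hedged sketch: enumerate the 31 non-empty subsets of five varieties and compare
# mean yield, stability (CV across seasons), and probability of meeting a target.
# Yield data, variety names, and the 1 t/ha target are illustrative assumptions.
from itertools import combinations
import pandas as pd

yields = pd.read_csv("kapkerer_bean_yields.csv", index_col="season")  # hypothetical: 11 seasons x 5 varieties
varieties = list(yields.columns)

strategies = []
for k in range(1, len(varieties) + 1):
    for combo in combinations(varieties, k):
        per_season = yields[list(combo)].mean(axis=1)     # equal-area mix of the chosen varieties
        strategies.append({
            "strategy": "+".join(combo),
            "mean_yield": per_season.mean(),
            "cv": per_season.std() / per_season.mean(),    # lower CV = more stable over seasons
            "p_above_target": (per_season >= 1.0).mean(),  # share of seasons meeting an assumed 1 t/ha target
        })

print(len(strategies))                                     # 31 strategies
print(pd.DataFrame(strategies).sort_values("cv").head())
```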
Approximately 70% of the 30 000 known bee (Hymenoptera) species and most flower-visiting, solitary wasps (Hymenoptera) nest in the ground. However, nesting behaviours of most ground-nesting bees and wasps are poorly understood. Habitat loss, including nesting habitat, threatens populations of ground-nesting bees and wasps. Most ground-nesting bee and wasp studies implement trapping methods that capture foraging individuals, but provide little insight into the nesting preferences of these taxa. Some researchers have suggested that emergence traps may provide a suitable means by which to determine ground-nesting bee and wasp abundance. We sought to evaluate nest-site selection of ground-nesting bees and wasps using emergence traps in two study systems: (1) planted wildflower enhancement plots and fallow control plots in agricultural land; and (2) upland pine and hammock habitat in forests. Over the course of three years (2015–2017), we collected 306 ground-nesting bees and wasps across all study sites from emergence traps. In one study, we compared captures per trap between coloured pan traps and emergence traps and found that coloured pan traps captured far more ground-nesting bees and wasps than did emergence traps. Based on our emergence trap data, our results also suggest ground-nesting bees and wasps are more apt to nest within wildflower enhancement plots than in fallow control plots, and in upland pine habitats than in hammock forests. In conclusion, emergence traps have potential to be a unique tool to gain understanding of ground-nesting bee and wasp habitat requirements.
OBJECTIVES/SPECIFIC AIMS: The objective of this research was to assess the clinical impact of simulation-based team leadership training on team leadership effectiveness and patient care during actual trauma resuscitations. This translational work addresses an important gap in simulation research and medical education research. METHODS/STUDY POPULATION: Eligible trauma team leaders were randomized to the intervention (4-hour simulation-based leadership training) or control (standard training) condition. Subject-led actual trauma patient resuscitations were video recorded and coded for leadership behaviors (primary outcome) and patient care (secondary outcome) using novel leadership and trauma patient care metrics. Patient outcomes for trauma resuscitations were obtained through the Harborview Medical Center Trauma Registry and analyzed descriptively. A one-way ANCOVA was conducted to test the effectiveness of the training intervention versus the control condition for each outcome (leadership effectiveness and patient care) while accounting for pre-training performance, injury severity score, postgraduate training year, and days since training occurred. The association between leadership effectiveness and patient care was evaluated using random coefficient modeling. RESULTS/ANTICIPATED RESULTS: Sixty team leaders, 30 in each condition, completed the study. There was a significant difference in post-training leadership effectiveness [F(1,54)=30.19, p<.001, η2=.36] between the experimental and control conditions. There was no direct impact of training on patient care [F(1,54)=1.0, p=0.33, η2=.02]; however, leadership effectiveness mediated an indirect effect of training on patient care. Across all trauma resuscitations, team leader effectiveness correlated with patient care (p<0.05), as predicted by team leadership conceptual models. DISCUSSION/SIGNIFICANCE OF IMPACT: This work represents a critical step in advancing translational simulation-based research (TSR). While there are several examples of high-quality translational research programs, they primarily focus on procedural tasks and do not evaluate highly complex skills such as leadership. Complex skills present significant measurement challenges because individuals and processes are interrelated, with multiple components and an emergent nature of tasks and related behaviors. We provide evidence that simulation-based training of a complex skill (team leadership behavior) transfers to a complex clinical setting (emergency department) with highly variable clinical tasks (trauma resuscitations). Our novel team leadership training significantly improved overall leadership performance, and leadership effectiveness partially mediated the positive effect of training on patient care. This represents the first rigorous, randomized, controlled trial of a leadership- or teamwork-focused training that systematically evaluates the impact on process (leadership) and performance (patient care).
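A hedged sketch of the one-way ANCOVA described above (post-training leadership effectiveness by condition, adjusting for pre-training performance, injury severity score, postgraduate year, and days since training) could look like the following; variable names are assumptions, not the study's actual dataset.

```python
# Hedged ANCOVA sketch: outcome ~ condition + covariates, followed by a type II
# F-test for the condition effect. Variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

resus = pd.read_csv("trauma_leadership_scores.csv")   # hypothetical per-leader dataset

ancova = smf.ols(
    "post_leadership ~ C(condition) + pre_leadership + iss + pgy_year + days_since_training",
    data=resus,
).fit()
print(sm.stats.anova_lm(ancova, typ=2))               # F-test for the condition effect
```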
The Commensal Real-time Australian Square Kilometre Array Pathfinder Fast Transients survey is the first extensive astronomical survey using phased array feeds. Since January 2017, it has been searching for fast radio bursts in fly’s eye mode. Here, we present a calculation of the sensitivity and total exposure of the survey that detected the first 20 of these bursts, using the pulsars B1641-45 and B0833-45 as calibrators. The beamshape, antenna-dependent system noise, and the effects of radio-frequency interference and fluctuations during commissioning are quantified. Effective survey exposures and sensitivities are calculated as a function of the source counts distribution. Statistical (‘stat’) and systematic (‘sys’) effects are treated separately. The implied fast radio burst rate is significantly lower than the 37 sky−1 day−1 calculated using nominal exposures and sensitivities for this same sample by Shannon et al. (2018). At the Euclidean (best-fit) power-law index of −1.5 (−2.2), the rate is (sys) ± 3.6 (stat) sky−1 day−1 ((sys) ± 2.8 (stat) sky−1 day−1) above a threshold of 56.6 ± 6.6 (sys) Jy ms (40.4 ± 1.2 (sys) Jy ms). This strongly suggests that these calculations be performed for other FRB-hunting experiments, allowing meaningful comparisons to be made between them.
Major depressive disorder (MDD) is a leading cause of disease burden worldwide, with a lifetime prevalence of 17% in the United States. Here we present the results of the first prospective, large-scale, patient- and rater-blind, randomized controlled trial evaluating the clinical importance of achieving congruence between combinatorial pharmacogenomic (PGx) testing and medication selection for MDD.
A total of 1,167 outpatients diagnosed with MDD and an inadequate response to ≥1 psychotropic medications were enrolled and randomized 1:1 to a treatment as usual (TAU) arm or a PGx-guided care arm. Combinatorial PGx testing categorized medications into three groups based on the level of gene-drug interactions: use as directed, use with caution, or use with increased caution and more frequent monitoring. Patient assessments were performed at weeks 0 (baseline), 4, 8, 12 and 24. Patients, site raters, and central raters were blinded in both arms until after week 8. In the guided-care arm, physicians had access to the combinatorial PGx test result to guide medication selection. Primary outcomes utilized the Hamilton Depression Rating Scale (HAM-D17) and included symptom improvement (percent change in HAM-D17 from baseline), response (≥50% decrease in HAM-D17 from baseline), and remission (HAM-D17 <7) at the fully blinded week 8 time point. The durability of patient outcomes was assessed at week 24. Medications were considered congruent with PGx test results if they were in the ‘use as directed’ or ‘use with caution’ report categories, while medications in the ‘use with increased caution and more frequent monitoring’ category were considered incongruent. Patients who started on incongruent medications were analyzed separately according to whether they changed to congruent medications by week 8.
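The congruence classification described above can be sketched as a small data-wrangling step: medications in the ‘use as directed’ or ‘use with caution’ categories are treated as congruent, and patients on any incongruent medication at baseline are split by whether their regimen became fully congruent by week 8. Field names are assumptions, not the trial's actual data structure.

```python
# Hedged sketch of the congruence classification; file and column names are
# hypothetical, and weeks are assumed to be stored as integers (0, 4, 8, ...).
import pandas as pd

meds = pd.read_csv("patient_medications.csv")    # hypothetical: one row per patient, week, medication

CONGRUENT_BINS = {"use as directed", "use with caution"}
meds["congruent"] = meds["report_category"].isin(CONGRUENT_BINS)

regimen = (meds.groupby(["patient_id", "week"])["congruent"]
               .all()                            # a regimen is congruent only if every drug is
               .unstack("week"))

baseline_incongruent = regimen[~regimen[0]]      # on >=1 incongruent medication at week 0
switched_by_week8 = baseline_incongruent[8]      # True if fully congruent by week 8
print(switched_by_week8.value_counts())
```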
At week 8, symptom improvement for individuals in the guided-care arm was not significantly different from TAU (27.2% versus 24.4%, p=0.11). However, individuals in the guided-care arm were more likely than those in TAU to achieve remission (15% versus 10%; p<0.01) and response (26% versus 20%; p=0.01). Remission rates, response rates, and symptom reductions continued to improve in the guided-care arm through the week 24 time point. Congruent prescribing increased to 91% in the guided-care arm by week 8. Among patients who were taking one or more incongruent medications at baseline, those who changed to congruent medications by week 8 demonstrated significantly greater symptom improvement (p<0.01), response (p=0.04), and remission rates (p<0.01) compared with those who remained on incongruent medications.
Combinatorial PGx testing improves short- and long-term response and remission rates for MDD compared to standard of care. In addition, prescribing congruency with PGx-guided medication recommendations is important for achieving symptom improvement, response, and remission for MDD patients.
Funding Acknowledgements: This study was supported by Assurex Health, Inc.
Outcome analyses in large administrative databases are ideal for rare diseases such as Becker and Duchenne muscular dystrophy. Unfortunately, Becker and Duchenne do not yet have specific International Classification of Diseases-9/-10 codes. We hypothesised that an algorithm could accurately identify these patients within administrative data and improve assessment of cardiovascular morbidity.
Hospital discharges (n=13,189) for patients with muscular dystrophy, classified by International Classification of Diseases-9 code 359.1, were identified from the Pediatric Health Information System database. An identification algorithm was created and then validated at three institutions. Multi-variable generalised linear mixed-effects models were used to estimate the associations of length of stay, hospitalisation cost, and 14-day readmission with age, encounter severity, and respiratory disease, accounting for clustering within the hospital.
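The study fitted multi-variable generalised linear mixed-effects models; as a simpler, hedged stand-in, the sketch below uses a population-averaged GEE Poisson model with clustering by hospital to estimate length-of-stay rate ratios. Variable names are illustrative assumptions, not actual Pediatric Health Information System fields.

```python
# Hedged sketch: GEE Poisson model (a population-averaged alternative to the
# mixed-effects models used in the study) with clustering by hospital.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

dmd = pd.read_csv("dmd_bmd_discharges.csv")     # hypothetical discharge-level dataset

gee = smf.gee(
    "length_of_stay ~ lv_dysfunction + arrhythmia + age_years + severity + respiratory_disease",
    groups="hospital_id",                       # account for clustering within hospitals
    data=dmd,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

print(np.exp(gee.params[["lv_dysfunction", "arrhythmia"]]))   # rate ratios
```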
The identification algorithm improved identification of patients with Becker and Duchenne from 55% (code 359.1 alone) to 77%. On bivariate analysis, left ventricular dysfunction and arrhythmia were associated with increased cost of hospitalisation, length of stay, and mortality (p<0.001). After adjustment, Becker and Duchenne patients with left ventricular dysfunction and arrhythmia had increased length of stay, with rate ratios of 1.4 and 1.2 (p<0.001 and p=0.004), and increased cost of hospitalisation, with rate ratios of 1.4 and 1.4 (both p<0.001).
Our algorithm accurately identifies patients with Becker and Duchenne and can be used for future analysis of administrative data. Our analysis demonstrates the significant effects of cardiovascular disease on length of stay and hospitalisation cost in patients with Becker and Duchenne. Better recognition of the contribution of cardiovascular disease during hospitalisation, with earlier and more intensive evaluation and therapy, may help improve outcomes in this patient population.
Malnutrition remains a leading contributor to the morbidity and mortality of children under the age of 5 years and can weaken the immune system and increase the severity of concurrent infections. Livestock milk with the protective properties of human milk is a potential therapeutic to modulate intestinal microbiota and improve outcomes. The aim of this study was to develop an infection model of childhood malnutrition in the pig to investigate the clinical, intestinal and microbiota changes associated with malnutrition and enterotoxigenic Escherichia coli (ETEC) infection, and to test the ability of goat milk and milk from genetically engineered goats expressing the antimicrobial human lysozyme (hLZ) to mitigate these effects. Pigs were weaned onto a protein–energy-restricted diet and, after 3 weeks, were supplemented daily with goat milk, hLZ milk or no milk for a further 2 weeks and then challenged with ETEC. The restricted diet enriched faecal microbiota in Proteobacteria, as seen in stunted children. Before infection, hLZ milk supplementation improved barrier function and villous height to a greater extent than goat milk. Both goat and hLZ milk enriched for taxa (Ruminococcaceae) associated with weight gain. Post-ETEC infection, pigs supplemented with hLZ milk weighed more, had improved Z-scores, had longer villi, and showed more stable bacterial populations during ETEC challenge than both the goat milk and no milk groups. This model of childhood disease was developed to test the confounding effects of malnutrition and infection and demonstrated the potential of hLZ goat milk to mitigate the impacts of malnutrition and infection.
We reviewed all patients who were supported with extracorporeal membrane oxygenation and/or ventricular assist device at our institution in order to describe diagnostic characteristics and assess mortality.
A retrospective cohort study was performed including all patients supported with extracorporeal membrane oxygenation and/or ventricular assist device from our first case (8 October, 1998) through 25 July, 2016. The primary outcome of interest was mortality, which was modelled by the Kaplan–Meier method.
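A minimal Kaplan–Meier sketch for the survival modelling described above, using the lifelines package, might look like this; the file, columns, and ECMO/VAD grouping variable are assumptions rather than the institution's actual registry fields.

```python
# Hedged Kaplan-Meier sketch using lifelines; column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

support = pd.read_csv("mcs_patients.csv")       # hypothetical: one row per patient

kmf = KaplanMeierFitter()
for device, grp in support.groupby("support_type"):        # e.g. 'ECMO' or 'VAD'
    kmf.fit(grp["followup_days"], event_observed=grp["died"], label=device)
    print(device, kmf.predict(365))             # estimated survival probability at 1 year
```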
A total of 223 patients underwent 241 extracorporeal membrane oxygenation runs. Median support time was 4.0 days, ranging from 0.04 to 55.8 days, with a mean of 6.4±7.0 days. Mean (±SD) age at initiation was 727.4 days (±146.9 days). Indications for extracorporeal membrane oxygenation were stratified by primary indication: cardiac extracorporeal membrane oxygenation (n=175; 72.6%) or respiratory extracorporeal membrane oxygenation (n=66; 27.4%). The most frequent diagnosis for cardiac extracorporeal membrane oxygenation patients was hypoplastic left heart syndrome or hypoplastic left heart syndrome-related malformation (n=55 patients with HLHS who underwent 64 extracorporeal membrane oxygenation runs). For respiratory extracorporeal membrane oxygenation, the most frequent diagnosis was congenital diaphragmatic hernia (n=22). A total of 24 patients underwent 26 ventricular assist device runs. Median support time was 7 days, ranging from 0 to 75 days, with a mean of 15.3±18.8 days. Mean age at initiation of ventricular assist device was 2530.8±660.2 days (6.93±1.81 years). Cardiomyopathy/myocarditis was the most frequent indication for ventricular assist device placement (n=14; 53.8%). Survival to discharge was 42.2% for extracorporeal membrane oxygenation patients and 54.2% for ventricular assist device patients. Kaplan–Meier 1-year survival was as follows: all patients, 41.0%; extracorporeal membrane oxygenation patients, 41.0%; and ventricular assist device patients, 43.2%. Kaplan–Meier 5-year survival was as follows: all patients, 39.7%; extracorporeal membrane oxygenation patients, 39.7%; and ventricular assist device patients, 43.2%.
This single-institutional 18-year review documents the differential probability of survival for various sub-groups of patients who require support with extracorporeal membrane oxygenation or ventricular assist device. The indication for mechanical circulatory support, underlying diagnosis, age, and setting in which cannulation occurs may affect survival after extracorporeal membrane oxygenation and ventricular assist device. The Kaplan–Meier analyses in this study demonstrate that patients who survive to hospital discharge have an excellent chance of longer-term survival.