Recent years have seen an exponential increase in the variety of healthcare data captured across numerous sources. However, mechanisms to leverage these data sources to support scientific investigation have remained limited. In 2013 the Pediatric Heart Network (PHN), funded by the National Heart, Lung, and Blood Institute, developed the Integrated CARdiac Data and Outcomes (iCARD) Collaborative with the goals of leveraging available data sources to aid in efficiently planning and conducting PHN studies; supporting integration of PHN data with other sources to foster novel research otherwise not possible; and mentoring young investigators in these areas. This review describes lessons learned through the development of iCARD, initial efforts and scientific output, challenges, and future directions. This information can aid in the use and optimisation of data integration methodologies across other research networks and organisations.
Patients with congenital diaphragmatic hernias often have concomitant congenital heart disease (CHD), with small left-sided cardiac structures as a frequent finding. The goal of this study is to evaluate which left-sided heart structures are affected in neonates with congenital diaphragmatic hernias.
Retrospective review of neonates between May 2007 and April 2015 with a diagnosis of a congenital diaphragmatic hernia was performed. Clinical and echocardiographic data were extracted from the electronic medical record and indexed to body surface area and compared to normative values. Univariable regression models assessed for associations between different variables and length of stay.
Data from 52 patients showed decreased mean z scores for the LVIDd (–3.16), LVIDs (–3.05), aortic annulus (–1.68), aortic sinuses (–2.11), transverse arch (–3.11), and sinotubular junction (–1.47) with preservation of the aorta at the diaphragm compared to age-matched normative data with similar body surface areas. Regression analysis showed a percent reduction in length of stay per 1 mm size increase for LVIDd (8%), aortic annulus (27%), aortic sinuses (18%), sinotubular junctions (20%), and transverse arches (25%).
Patients with congenital diaphragmatic hernias have significantly smaller left-sided heart structures compared to age-matched normative data. Aortic preservation at the diaphragm provides evidence for a mass effect aetiology with increased right-to-left shunting at the fetal ductus resulting in decreased size. Additionally, length of stay appears to be prolonged with decreasing size of several of these structures. These data provide quantitative evidence of smaller left-sided heart structures in patients with congenital diaphragmatic hernias.
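A z score, as used above, expresses how far an observed measurement falls from the BSA-matched normative mean in units of the normative standard deviation. A minimal sketch; the normative values below are illustrative, not the study's reference tables:

```python
def z_score(measure_mm, norm_mean_mm, norm_sd_mm):
    """z = (observed - normative mean) / normative SD."""
    return (measure_mm - norm_mean_mm) / norm_sd_mm

# Hypothetical neonate: LVIDd of 14.0 mm where BSA-matched norms
# are 17.5 +/- 1.1 mm (made-up reference values for illustration).
print(round(z_score(14.0, 17.5, 1.1), 2))  # -3.18
```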
Cardiac surgery-associated acute kidney injury is common. In order to improve our understanding of acute kidney injury, we formed the multi-centre Neonatal and Pediatric Heart and Renal Outcomes Network. Our main goals are to describe neonatal kidney injury epidemiology, evaluate variability in diagnosis and management, identify risk factors, investigate the impact of fluid overload, and explore associations with outcomes.
The Neonatal and Pediatric Heart and Renal Outcomes Network collaborative includes representatives from paediatric cardiac critical care, cardiology, nephrology, and cardiac surgery. The collaborative sites and infrastructure are part of the Pediatric Cardiac Critical Care Consortium. An acute kidney injury module was developed and merged into the existing infrastructure. A total of 22 participating centres provided data on 100–150 consecutive neonates who underwent cardiac surgery within the first 30 post-natal days. Additional acute kidney injury variables were abstracted by chart review and merged with the corresponding record in the quality improvement database. Exclusion criteria included >1 operation in the 7-day study period, pre-operative renal replacement therapy, pre-operative serum creatinine >1.5 mg/dl, and need for extracorporeal support in the operating room or within 24 hours after the index operation.
A total of 2240 neonatal patients were enrolled across 22 centres. The incidence of acute kidney injury was 54% (stage 1 = 31%, stage 2 = 13%, and stage 3 = 9%).
Neonatal and Pediatric Heart and Renal Outcomes Network represents the largest multi-centre study of neonatal kidney injury. This new network will enhance our understanding of kidney injury and its complications.
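The staging reported above (stages 1–3) follows the convention of grading acute kidney injury by serum creatinine rise. A sketch of KDIGO-style staging logic; the cutoffs are the standard KDIGO creatinine criteria and may differ from the network's exact neonatal definition:

```python
def kdigo_stage(baseline_scr, current_scr):
    """Stage AKI by serum creatinine (mg/dl) using KDIGO-style cutoffs:
    stage 1: >=1.5x baseline (or an absolute rise >=0.3 mg/dl),
    stage 2: >=2x baseline, stage 3: >=3x baseline."""
    ratio = current_scr / baseline_scr
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_scr - baseline_scr) >= 0.3:
        return 1
    return 0

print(kdigo_stage(0.5, 1.1))  # 2 (2.2x baseline)
```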
We observed that pediatric S. aureus hospitalizations across 39 pediatric hospitals decreased 36%, from 26.3 to 16.8 infections per 1,000 admissions, between 2009 and 2016, with methicillin-resistant S. aureus (MRSA) infections decreasing by 52% and methicillin-susceptible S. aureus infections by 17%. Similar decreases were observed in days of therapy of anti-MRSA antibiotics.
Few studies have investigated the patterns of posttraumatic stress disorder (PTSD) symptom change in prolonged exposure (PE) therapy. In this study, we aimed to understand the patterns of PTSD symptom change in both PE and present-centered therapy (PCT).
Participants were active duty military personnel (N = 326, 89.3% male, 61.2% white, 32.5 years old) randomized to spaced-PE (S-PE; 10 sessions over 8 weeks), PCT (10 sessions over 8 weeks), or massed-PE (M-PE; 10 sessions over 2 weeks). Using latent profile analysis, we determined the optimal number of PTSD symptom change classes over time and analyzed whether baseline and follow-up variables were associated with class membership.
Five classes, namely rapid responder (7–17%), steep linear responder (14–22%), gradual responder (30–34%), non-responder (27–33%), and symptom exacerbation (7–13%) classes, characterized each treatment. No baseline clinical characteristics predicted class membership for S-PE and M-PE; in PCT, more negative baseline trauma cognitions predicted membership in the non-responder v. gradual responder class. Class membership was robustly associated with PTSD, trauma cognitions, and depression up to 6 months after treatment for both S-PE and M-PE but not for PCT.
Distinct profiles of treatment response emerged that were similar across interventions. By and large, no baseline variables predicted responder class. Responder status was a strong predictor of future symptom severity for PE, whereas response to PCT was not as strongly associated with future symptoms.
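Latent profile analysis groups participants by their symptom trajectories and selects the number of classes using fit indices such as BIC. The study presumably used dedicated LPA software; a rough analogue with scikit-learn's Gaussian mixture models on simulated two-class trajectory data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated PTSD scores at 4 time points for two latent trajectory classes:
# rapid responders (steep decline) and non-responders (essentially flat).
rapid = np.array([60, 35, 20, 15]) + rng.normal(0, 4, size=(80, 4))
flat = np.array([60, 58, 57, 56]) + rng.normal(0, 4, size=(80, 4))
X = np.vstack([rapid, flat])

# Fit 1-5 class models and pick the class count with the lowest BIC.
bics = {k: GaussianMixture(k, n_init=5, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
print(best_k)  # 2 with these simulated data
```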
Seven half-day regional listening sessions were held between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide-resistance management. The objective of the listening sessions was to connect with stakeholders and hear their challenges and recommendations for addressing herbicide resistance. The coordinating team hired Strategic Conservation Solutions, LLC, to facilitate all the sessions. They and the coordinating team used in-person meetings, teleconferences, and email to communicate and coordinate the activities leading up to each regional listening session. The agenda was the same across all sessions and included small-group discussions followed by reporting to the full group for discussion. The planning process was the same across all the sessions, although the selection of venue, time of day, and stakeholder participants differed to accommodate the differences among regions. The listening-session format required a great deal of work and flexibility on the part of the coordinating team and regional coordinators. Overall, the participant evaluations from the sessions were positive, with participants expressing appreciation that they were asked for their thoughts on the subject of herbicide resistance. This paper details the methods and processes used to conduct these regional listening sessions and provides an assessment of the strengths and limitations of those processes.
Herbicide resistance is ‘wicked’ in nature; therefore, results of the many educational efforts to encourage diversification of weed control practices in the United States have been mixed. It is clear that we do not sufficiently understand the totality of the grassroots obstacles, concerns, challenges, and specific solutions needed for varied crop production systems. Weed management issues and solutions vary with such variables as management styles, regions, cropping systems, and available or affordable technologies. Therefore, to help the weed science community better understand the needs and ideas of those directly dealing with herbicide resistance, seven half-day regional listening sessions were held across the United States between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide resistance management. The major goals of the sessions were to gain an understanding of stakeholders and their goals and concerns related to herbicide resistance management, to become familiar with regional differences, and to identify decision maker needs to address herbicide resistance. The messages shared by listening-session participants could be summarized by six themes: we need new herbicides; there is no need for more regulation; there is a need for more education, especially for others who were not present; diversity is hard; the agricultural economy makes it difficult to make changes; and we are aware of herbicide resistance but are managing it. The authors concluded that more work is needed to bring a community-wide, interdisciplinary approach to understanding the complexity of managing weeds within the context of the whole farm operation and for communicating the need to address herbicide resistance.
Novel approaches to improving disaster response have begun to include the use of big data and information and communication technology (ICT). However, there remains a dearth of literature on the use of these technologies in disasters. We have conducted an integrative literature review on the role of ICT and big data in disasters. Included in the review were 113 studies that met our predetermined inclusion criteria. Most studies used qualitative methods (39.8%, n=45) over mixed methods (31%, n=35) or quantitative methods (29.2%, n=33). Nearly 80% (n=88) covered only the response phase of disasters and only 15% (n=17) of the studies addressed disasters in low- and middle-income countries. The 4 most frequently mentioned tools were geographic information systems, social media, patient information, and disaster modeling. We suggest testing ICT and big data tools more widely, especially outside of high-income countries, as well as in nonresponse phases of disasters (eg, disaster recovery), to increase an understanding of the utility of ICT and big data in disasters. Future studies should also include descriptions of the intended users of the tools, as well as implementation challenges, to assist other disaster response professionals in adapting or creating similar tools. (Disaster Med Public Health Preparedness. 2019;13:353–367)
The authors developed a practical and clinically useful model to predict the risk of psychosis that utilizes clinical characteristics empirically demonstrated to be strong predictors of conversion to psychosis in clinical high-risk (CHR) individuals. The model is based upon the Structured Interview for Psychosis Risk Syndromes (SIPS) and accompanying clinical interview, and yields scores indicating one's risk of conversion.
Baseline data, including demographic and clinical characteristics measured by the SIPS, were obtained on 199 CHR individuals seeking evaluation in the early detection and intervention for mental disorders program at the New York State Psychiatric Institute at Columbia University Medical Center. Each patient was followed for up to 2 years or until they developed a syndromal DSM-IV disorder. A LASSO logistic fitting procedure was used to construct a model for conversion specifically to a psychotic disorder.
At 2 years, 64 patients (32.2%) converted to a psychotic disorder. The top five variables with relatively large standardized effect sizes included SIPS subscales of visual perceptual abnormalities, dysphoric mood, unusual thought content, disorganized communication, and violent ideation. The concordance index (c-index) was 0.73, indicating a moderately strong ability to discriminate between converters and non-converters.
The prediction model performed well in classifying converters and non-converters and revealed SIPS measures that are relatively strong predictors of conversion, comparable with the risk calculator published by NAPLS (c-index = 0.71), but requiring only a structured clinical interview. Future work will seek to externally validate the model and enhance its performance with the incorporation of relevant biomarkers.
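For a binary outcome such as conversion, the concordance index is the proportion of converter/non-converter pairs in which the converter received the higher predicted risk (equivalent to the area under the ROC curve). A minimal sketch with hypothetical risk scores:

```python
from itertools import product

def c_index(risk_scores, converted):
    """Concordance index for a binary outcome: the proportion of
    converter/non-converter pairs in which the converter received
    the higher predicted risk; ties count as 0.5."""
    pos = [r for r, y in zip(risk_scores, converted) if y]
    neg = [r for r, y in zip(risk_scores, converted) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

print(c_index([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```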
For this study, we adapted the Montgomery Borgatta Caregiver Burden Scale, used widely in the United States, to the Saudi Arabian context. To produce an Arabic, culturally sensitive version of the scale, we conducted semi-structured interviews with 20 Saudi family caregivers. The Arabic version of the scale was tested, and participants were asked to comment on the appropriateness of items for the construct of “caregiver burden” using the repertory grid technique and laddering procedure – two constructivist methods derived from personal construct theory. From interview findings, we examined the content of the items and the caregiver burden construct itself. Our findings suggest that the use of constructivist methods to refine constructs and quantitative instruments is highly informative. This strategy is feasible even when little is known about the investigated constructs in the target culture and further elucidates our understanding of cross-cultural variations or invariance of different versions of the scale.
Depression contributes to persistent opioid analgesic use (OAU). Treating depression may increase opioid cessation.
To determine if adherence to antidepressant medications (ADMs) v. non-adherence was associated with opioid cessation in patients with a new depression episode after >90 days of OAU.
Patients with non-cancer, non-HIV pain (n = 2821), with a new episode of depression following >90 days of OAU, were eligible if they received ≥1 ADM prescription from 2002 to 2012. ADM adherence was defined as >80% of days covered. Opioid cessation was defined as ≥182 days without a prescription refill. Confounding was controlled by inverse probability of treatment weighting.
In weighted data, the incidence rate of opioid cessation was significantly (P = 0.007) greater in patients who adhered v. those who did not adhere to antidepressants (57.2/1000 v. 45.0/1000 person-years). ADM adherence was significantly associated with opioid cessation (odds ratio (OR) = 1.24, 95% CI 1.05–1.46).
ADM adherence, compared with non-adherence, is associated with opioid cessation in non-cancer pain. Opioid taper and cessation may be more successful when depression is treated to remission.
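Inverse probability of treatment weighting, as used above, fits a propensity model for the exposure (here, ADM adherence) and weights each patient by the inverse probability of the exposure actually received, balancing measured confounders between groups. A simplified sketch on simulated data with a single confounder:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
confounder = rng.normal(size=n)  # e.g. depression severity
# Probability of adherence depends on the confounder.
adherent = rng.binomial(1, 1 / (1 + np.exp(-confounder)))

# Propensity score P(adherent | confounder), then inverse-probability weights.
ps = LogisticRegression().fit(
    confounder.reshape(-1, 1), adherent).predict_proba(
    confounder.reshape(-1, 1))[:, 1]
weights = np.where(adherent == 1, 1 / ps, 1 / (1 - ps))

# After weighting, the confounder should be balanced between groups.
mean_adh = np.average(confounder[adherent == 1], weights=weights[adherent == 1])
mean_non = np.average(confounder[adherent == 0], weights=weights[adherent == 0])
print(round(abs(mean_adh - mean_non), 2))  # near 0 after weighting
```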
The objective of this panel was to generate recommendations to promote the engagement of front-line emergency department (ED) clinicians in clinical and implementation research.
Panel members conducted semi-structured interviews with 37 Canadian adult and pediatric emergency medicine researchers to elicit barriers and facilitators to clinician engagement in research activities, and to glean strategies for promoting clinician engagement.
Responses were organized by themes, and, based on these responses, recommendations were developed and refined in an iterative fashion by panel members.
We offer eight recommendations to promote front-line clinician engagement in clinical research activities. Recommendations to promote clinician engagement specifically address the creation of a research-friendly culture in the ED, minimizing the burden of data collection on clinical staff through the careful design of data collection tools and the use of research staff, and communication between researchers and clinical staff to promote adherence to study protocols.
The objective of Panel 2b was to present an overview of and recommendations for the conduct of implementation trials and multicentre studies in emergency medicine.
Panel members engaged methodologists to discuss the design and conduct of implementation and multicentre studies. We also conducted semi-structured interviews with 37 Canadian adult and pediatric emergency medicine researchers to elicit barriers and facilitators to conducting these kinds of studies.
Responses were organized by themes, and, based on these responses, recommendations were developed and refined in an iterative fashion by panel members.
We offer eight recommendations to facilitate multicentre clinical and implementation studies, along with guidance for conducting implementation research in the emergency department. Recommendations for multicentre studies reflect the importance of local study investigators and champions, requirements for research infrastructure and staffing, and the cooperation and communication between the coordinating centre and participating sites.
An internationally approved and globally used classification scheme for the diagnosis of CHD has long been sought. The International Paediatric and Congenital Cardiac Code (IPCCC), which was produced and has been maintained by the International Society for Nomenclature of Paediatric and Congenital Heart Disease (the International Nomenclature Society), is used widely, but has spawned many “short list” versions that differ in content depending on the user. Thus, efforts to have a uniform identification of patients with CHD using a single up-to-date and coordinated nomenclature system continue to be thwarted, even if a common nomenclature has been used as a basis for composing various “short lists”. In an attempt to solve this problem, the International Nomenclature Society has linked its efforts with those of the World Health Organization to obtain a globally accepted nomenclature tree for CHD within the 11th iteration of the International Classification of Diseases (ICD-11). The International Nomenclature Society has submitted a hierarchical nomenclature tree for CHD to the World Health Organization that is expected to serve increasingly as the “short list” for all communities interested in coding for congenital cardiology. This article reviews the history of the International Classification of Diseases and of the IPCCC, and outlines the process used in developing the ICD-11 congenital cardiac disease diagnostic list and the definitions for each term on the list. An overview of the content of the congenital heart anomaly section of the Foundation Component of ICD-11, published herein in its entirety, is also included. Future plans for the International Nomenclature Society include linking again with the World Health Organization to tackle procedural nomenclature as it relates to cardiac malformations. 
By doing so, the Society will continue its role in standardising nomenclature for CHD across the globe, thereby promoting research and better outcomes for fetuses, children, and adults with congenital heart anomalies.
OBJECTIVES/SPECIFIC AIMS: The goal of this study is to develop an effective and efficient STI preventive intervention among college students following the principles and phases of MOST. METHODS/STUDY POPULATION: As part of the preparation phase, an explicit conceptual model, drawing heavily on theory and prior research, was used to translate the existing science into 5 candidate intervention components (ie, descriptive norms, injunctive norms, expectancies, perceived benefits of protective behavioral strategies, and self-efficacy). For the optimization phase, in Fall 2016 all first-year students (n=3547) from 4 universities were recruited to participate. Students were randomized to 1 of 32 different experimental conditions, each a combination of the candidate intervention components. Component effectiveness was evaluated using data from an immediate post-intervention survey on respective component mediators (eg, alcohol and sex-related descriptive norms). After a second factorial experiment (Fall 2017), only those intervention components that meet the pre-specified criterion of d≥0.15 will be included in the optimized intervention. The evaluation phase will evaluate the effectiveness of the optimized STI preventive intervention via a randomized controlled trial (Fall 2018). RESULTS/ANTICIPATED RESULTS: Preliminary results from the first factorial experiment suggest that the descriptive norms and injunctive norms intervention components were significantly effective in reducing post-intervention perceived alcohol prevalence (β=−0.28, p<0.001), approval of alcohol (β=−0.33, p<0.001), and sex-related norms (β=−0.23, p<0.001). These results, in combination with process data, are being used to inform revisions of the intervention components to be included in a second factorial screening experiment.
DISCUSSION/SIGNIFICANCE OF IMPACT: This study demonstrates how an iterative approach to engineering an STI preventive intervention using MOST can affect the behaviors of college students and serve as a foundation for other translational science.
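The 32 experimental conditions correspond to a 2^5 full factorial over the five candidate components, the screening design MOST typically employs. A sketch of how such a design supports efficient estimation of component main effects; the effect sizes below are illustrative, loosely echoing the reported norms effects:

```python
import itertools
import numpy as np

# The 5 candidate components yield a 2**5 = 32-condition factorial design.
components = ["descriptive_norms", "injunctive_norms", "expectancies",
              "pbs_benefits", "self_efficacy"]
design = np.array(list(itertools.product([-1, 1], repeat=5)))  # effect coding
print(design.shape)  # (32, 5)

# With effect coding, the columns are mutually orthogonal, so each
# component's main effect is estimated from all 32 cells at once.
rng = np.random.default_rng(0)
true_effects = np.array([-0.28, -0.33, 0.0, 0.0, 0.0])  # illustrative
outcome = design @ true_effects + rng.normal(0, 0.05, size=32)
est = np.linalg.lstsq(design, outcome, rcond=None)[0]
```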
Accurate weed emergence models are valuable tools for scheduling planting, cultivation, and herbicide applications. Multiple models predicting giant ragweed emergence have been developed, but none have been validated in diverse crop rotation and tillage systems, which have the potential to influence weed emergence patterns. This study evaluated the performance of published giant ragweed emergence models across various crop rotations and spring tillage dates in southern Minnesota. Across experiments, the most robust model was a mixed-effects Weibull (flexible sigmoidal function) model predicting emergence in relation to hydrothermal time accumulation with a base temperature of 4.4 C, a base soil matric potential of −2.5 MPa, and two random effects determined by overwinter growing degree days (GDD) (10 C) and precipitation accumulated during seedling recruitment. The deviations in emergence between individual plots and the fixed-effects model were distinguished by the positive association between the lower horizontal asymptote (Drop) and maximum daily soil temperature during seedling recruitment. This finding indicates that crops and management practices that increase soil temperature will have a shorter lag phase at the start of giant ragweed emergence compared with practices promoting cool soil temperatures. Thus, crops with early-season crop canopies such as perennial crops and crops planted in early spring and in narrow rows will likely have a slower progression of giant ragweed emergence. This research provides a valuable assessment of published giant ragweed emergence models and illustrates that accurate emergence models can be used to time field operations and improve giant ragweed control across diverse cropping systems.
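The hydrothermal-time Weibull model described above can be sketched as follows. The accumulation rule uses the stated base temperature (4.4 C) and base matric potential (−2.5 MPa); the Weibull parameters are illustrative rather than the fitted values:

```python
import math

def hydrothermal_increment(mean_temp_c, matric_potential_mpa,
                           base_temp=4.4, base_psi=-2.5):
    """Daily hydrothermal time accrues only when soil is warmer than the
    base temperature (4.4 C) and wetter than the base matric potential
    (-2.5 MPa)."""
    if mean_temp_c > base_temp and matric_potential_mpa > base_psi:
        return mean_temp_c - base_temp
    return 0.0

def weibull_emergence(htt, upper=1.0, drop=0.0, rate=0.02, shape=2.0):
    """Sigmoidal (Weibull) cumulative emergence fraction; 'drop' stands in
    for the lower-asymptote term discussed in the text. Parameter values
    are illustrative, not the fitted ones."""
    return drop + (upper - drop) * (1.0 - math.exp(-((rate * htt) ** shape)))

# Accumulate HTT over a short run of (temperature C, matric potential MPa) days.
days = [(3.0, -0.1), (8.0, -0.1), (12.0, -3.0), (15.0, -0.5), (18.0, -0.2)]
htt = sum(hydrothermal_increment(t, psi) for t, psi in days)
print(round(htt, 1))  # 27.8: day 1 is too cold, day 3 too dry
```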
To assess the burden of bloodstream infections (BSIs) among pediatric hematology-oncology (PHO) inpatients, to propose a comprehensive, all-BSI tracking approach, and to discuss how such an approach helps better inform within-center and across-center differences in CLABSI rates.
Prospective cohort study
US multicenter, quality-improvement, BSI prevention network
PHO centers across the United States who agreed to follow a standardized central-line–maintenance care bundle and track all BSI events and central-line days every month.
Infections were categorized as CLABSI (stratified by mucosal barrier injury–related, laboratory-confirmed BSI [MBI-LCBI] versus non–MBI-LCBI) and secondary BSI, using National Healthcare Safety Network (NHSN) definitions. Single positive blood cultures (SPBCs) with NHSN defined common commensals were also tracked.
Between 2013 and 2015, 34 PHO centers reported 1,110 BSIs. Among them, 708 (63.8%) were CLABSIs, 170 (15.3%) were secondary BSIs, and 232 (20.9%) were SPBCs. Most SPBCs (75%) occurred in patients with profound neutropenia; 22% of SPBCs were viridans group streptococci. Among the CLABSIs, 51% were MBI-LCBI. Excluding SPBCs, CLABSI rates were higher (88% vs 77%) and secondary BSI rates were lower (12% vs 23%) after the NHSN updated the definition of secondary BSI (P<.001). Preliminary analyses showed across-center differences in CLABSI versus secondary BSI and between SPBC and CLABSI versus non-CLABSI rates.
Tracking all BSIs, not just CLABSIs in PHO patients, is a patient-centered, clinically relevant approach that could help better assess across-center and within-center differences in infection rates, including CLABSI. This approach enables informed decision making by healthcare providers, payors, and the public.
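CLABSI surveillance conventionally expresses rates per 1,000 central-line days, which is what makes the across-center comparisons above possible. A trivial sketch; the denominator below is hypothetical, since the network's actual line-day totals are not given here:

```python
def bsi_rate_per_1000_line_days(events, central_line_days):
    """Standard infection-rate calculation used in CLABSI surveillance:
    events per 1,000 central-line days."""
    return 1000.0 * events / central_line_days

# 708 CLABSIs over a hypothetical 450,000 central-line days.
print(round(bsi_rate_per_1000_line_days(708, 450_000), 2))  # 1.57
```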
The influence of baseline severity has been examined for antidepressant medications but has not been studied properly for cognitive–behavioural therapy (CBT) in comparison with pill placebo.
To synthesise evidence regarding the influence of initial severity on efficacy of CBT from all randomised controlled trials (RCTs) in which CBT, in face-to-face individual or group format, was compared with pill-placebo control in adults with major depression.
A systematic review and an individual-participant data meta-analysis using mixed models that included trial effects as random effects. We used multiple imputation to handle missing data.
We identified five RCTs, and we were given access to individual-level data (n = 509) for all five. The analyses revealed that the difference in changes in Hamilton Rating Scale for Depression between CBT and pill placebo was not influenced by baseline severity (interaction P = 0.43). Removing the non-significant interaction term from the model, the difference between CBT and pill placebo was a standardised mean difference of –0.22 (95% CI –0.42 to –0.02, P = 0.03, I² = 0%).
Patients suffering from major depression can expect as much benefit from CBT across the wide range of baseline severity. This finding can help inform individualised treatment decisions by patients and their clinicians.
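The severity-by-treatment interaction test described above can be sketched on simulated individual-participant data. For a dependency-free illustration, trial is entered as fixed dummies rather than the random effect the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated individual-participant data from 5 trials (illustrative only):
# HRSD change as a function of treatment, baseline severity, and trial,
# with no true severity-by-treatment interaction.
n = 500
trial = rng.integers(0, 5, n)
cbt = rng.integers(0, 2, n)
baseline = rng.normal(22, 4, n)
change = -6 - 3 * cbt + 0.2 * baseline + 0.5 * trial + rng.normal(0, 5, n)

# Design matrix: intercept, treatment, baseline, interaction, trial dummies.
# (The paper modelled trial as a random effect; fixed dummies are a
# simplified approximation of that adjustment.)
X = np.column_stack([np.ones(n), cbt, baseline, cbt * baseline] +
                    [(trial == t).astype(float) for t in range(1, 5)])
beta = np.linalg.lstsq(X, change, rcond=None)[0]
interaction = beta[3]  # near 0: the CBT effect does not vary with severity
```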