Optimizing research on the developmental origins of health and disease (DOHaD) involves implementing initiatives that maximize the use of available cohort study data; achieving sufficient statistical power to support subgroup analysis; and using participant data with adequate follow-up and exposure heterogeneity. It also involves being able to undertake comparison, cross-validation, or replication across data sets. To meet these requirements, cohort study data need to be findable, accessible, interoperable, and reusable (FAIR), and, in particular, they often need to be harmonized. Harmonization is required to achieve or improve comparability of the putatively equivalent measures collected by different studies on different individuals. Although the characteristics of the research initiatives generating and using harmonized data vary extensively, all are confronted by similar issues. Having to collate, understand, process, host, and co-analyze data from individual cohort studies is particularly challenging. The scientific success and timely management of projects can be facilitated by an ensemble of factors. The current document provides an overview of the ‘life course’ of research projects requiring harmonization of existing data and highlights key elements to be considered from the inception to the end of the project.
Delirium is a serious neuropsychiatric condition characterized by an acute change in cognition and attention that affects a significant proportion of hospitalized older adults and is associated with significant morbidity and mortality. Prevention of delirium is an important part of the care of hospitalized older adults. The Hospital Elder Life Program is a multicomponent intervention that has been shown to reduce the incidence of delirium. As many cases of delirium are overlooked, its diagnosis is important and can be achieved using the Confusion Assessment Method, which relies on four cardinal features of delirium: acute onset, inattention, altered level of consciousness, and disorganized thinking. The etiology of delirium is often multifactorial with contributions from predisposing factors (such as sensory impairment, chronic illness, and cognitive impairment) and precipitating factors (such as infection, polypharmacy, or illness). Once diagnosed, delirium should be evaluated with a thorough history, complete physical, medication review, and targeted tests in an effort to identify these factors. Management should focus on addressing the noted precipitating and predisposing factors with limited use of low-dose antipsychotic medications in patients at risk of self-harm.
Exposure to maternal hyperglycemia in utero has been associated with adverse metabolic outcomes in offspring. However, few studies have investigated the relationship between maternal hyperglycemia and offspring cortisol levels. We assessed associations of gestational diabetes mellitus (GDM) with cortisol biomarkers in two longitudinal prebirth cohorts: Project Viva included 928 mother–child pairs and Gen3G included 313 mother–child pairs. In Project Viva, GDM was diagnosed in N = 48 (5.2%) women using a two-step procedure (50 g glucose challenge test, if abnormal followed by 100 g oral glucose tolerance test [OGTT]), and in N = 29 (9.3%) women participating in Gen3G using one-step 75 g OGTT. In Project Viva, we measured cord blood glucocorticoids and child hair cortisol levels during mid-childhood (mean (SD) age: 7.8 (0.8) years) and early adolescence (mean (SD) age: 13.2 (0.9) years). In Gen3G, we measured hair cortisol at 5.4 (0.3) years. We used multivariable linear regression to examine associations of GDM with offspring cortisol, adjusting for child age and sex, maternal prepregnancy body mass index, education, and socioeconomic status. We additionally adjusted for child race/ethnicity in the cord blood analyses. In both Project Viva and Gen3G, we observed null associations of GDM and maternal glucose markers in pregnancy with cortisol biomarkers in cord blood at birth (β = 16.6 nmol/L, 95% CI −60.7, 94.0 in Project Viva) and in hair samples during childhood (β = −0.56 pg/mg, 95% CI −1.16, 0.04 in Project Viva; β = 0.09 pg/mg, 95% CI −0.38, 0.57 in Gen3G). Our findings do not support the hypothesis that maternal hyperglycemia is related to hypothalamic–pituitary–adrenal axis activity.
Since the advent of direct-acting antiviral therapy, the elimination of hepatitis C virus (HCV) as a public health concern is now possible. However, identification of those who remain undiagnosed, and re-engagement of those who are diagnosed but remain untreated, will be essential to achieve this. We examined the extent of HCV infection among individuals undergoing liver function tests (LFTs) in primary care. Residual biochemistry samples for 6007 patients, who had venous blood collected in primary care for LFTs between July 2016 and January 2017, were tested for HCV antibody. Through data linkage to national and sentinel HCV surveillance databases, we also examined the extent of diagnosed infection, attendance at specialist services, and HCV treatment for those found to be HCV positive. Overall HCV antibody prevalence was 4.0% and was highest for males (5.0%), those aged 37–50 years (6.2%), and those with an ALT result of 70 or greater (7.1%). Of those testing positive, 68.9% had been diagnosed with HCV in the past, 84.9% of them before the study period. Most (92.5%) of those diagnosed with chronic infection had attended specialist liver services, and while 67.7% had ever been treated, only 38% had successfully cleared infection. More than half of HCV-positive people required assessment, and potentially treatment, for their HCV infection but were not engaged with services during the study period. LFTs in primary care are a key opportunity to diagnose, re-diagnose, and re-engage patients with HCV infection and highlight the importance of GPs in efforts to eliminate HCV as a public health concern.
To describe the genomic analysis and epidemiologic response related to a slow and prolonged methicillin-resistant Staphylococcus aureus (MRSA) outbreak.
Design:
Prospective observational study.
Setting:
Neonatal intensive care unit (NICU).
Methods:
We conducted an epidemiologic investigation of a NICU MRSA outbreak involving serial baby and staff screening to identify opportunities for decolonization. Whole-genome sequencing was performed on MRSA isolates.
Results:
A NICU with excellent hand hygiene compliance and longstanding minimal healthcare-associated infections experienced an MRSA outbreak involving 15 babies and 6 healthcare personnel (HCP). In total, 12 cases occurred slowly over a 1-year period (mean, 30.7 days apart) followed by 3 additional cases 7 months later. Multiple progressive infection prevention interventions were implemented, including contact precautions and cohorting of MRSA-positive babies, hand hygiene observers, enhanced environmental cleaning, screening of babies and staff, and decolonization of carriers. Only decolonization of HCP found to be persistent carriers of MRSA was successful in stopping transmission and ending the outbreak. Genomic analyses identified bidirectional transmission between babies and HCP during the outbreak.
Conclusions:
In comparison to fast outbreaks, “slow and sustained” outbreaks may be more common in units with strong existing infection prevention practices, where a series of breaches must align to result in a case. We identified a slow outbreak that persisted among staff and babies and was stopped only by identifying and decolonizing persistent MRSA carriage among staff. A repeated decolonization regimen was successful in allowing previously persistent carriers to safely continue work duties.
This paper introduces a new approach to the quantitative study of democratization. Building on the comparative case-study and large-N literature, it outlines an episode approach that identifies the discrete beginning of a period of political liberalization, traces its progression, and classifies episodes as successful versus different types of failing outcomes, thus avoiding potentially fallacious assumptions of unit homogeneity. We provide a description and analysis of all 383 liberalization episodes from 1900 to 2019, offering new insights on democratic “waves”. We also demonstrate the value of this approach by showing that while several established covariates are valuable for predicting the ultimate outcomes, none explain the onset of a period of liberalization.
The Hierarchical Taxonomy of Psychopathology (HiTOP) has emerged out of the quantitative approach to psychiatric nosology. This approach identifies psychopathology constructs based on patterns of co-variation among signs and symptoms. The initial HiTOP model, which was published in 2017, is based on a large literature that spans decades of research. HiTOP is a living model that undergoes revision as new data become available. Here we discuss advantages and practical considerations of using this system in psychiatric practice and research. We especially highlight limitations of HiTOP and ongoing efforts to address them. We describe differences and similarities between HiTOP and existing diagnostic systems. Next, we review the types of evidence that informed development of HiTOP, including populations in which it has been studied and data on its validity. The paper also describes how HiTOP can facilitate research on genetic and environmental causes of psychopathology as well as the search for neurobiologic mechanisms and novel treatments. Furthermore, we consider implications for public health programs and prevention of mental disorders. We also review data on clinical utility and illustrate clinical application of HiTOP. Importantly, the model is based on measures and practices that are already used widely in clinical settings. HiTOP offers a way to organize and formalize these techniques. This model already can contribute to progress in psychiatry and complement traditional nosologies. Moreover, HiTOP seeks to facilitate research on linkages between phenotypes and biological processes, which may enable construction of a system that encompasses both biomarkers and precise clinical description.
One in six nursing home residents and staff with positive SARS-CoV-2 tests ≥90 days after initial infection had specimen cycle thresholds (Ct) <30. Individuals with specimen Ct <30 were more likely to report symptoms but did not differ from individuals with high-Ct specimens on other clinical and testing data.
Background: Healthcare facilities have experienced many challenges during the COVID-19 pandemic, including limited personal protective equipment (PPE) supplies. Healthcare personnel (HCP) rely on PPE, vaccines, and other infection control measures to prevent SARS-CoV-2 infections. We describe PPE concerns reported by HCP who had close contact with COVID-19 patients in the workplace and tested positive for SARS-CoV-2. Method: The CDC collaborated with Emerging Infections Program (EIP) sites in 10 states to conduct surveillance for SARS-CoV-2 infections in HCP. EIP staff interviewed HCP with positive SARS-CoV-2 viral tests (ie, cases) to collect data on demographics, healthcare roles, exposures, PPE use, and concerns about their PPE use during COVID-19 patient care in the 14 days before the HCP’s SARS-CoV-2 positive test. PPE concerns were qualitatively coded as being related to supply (eg, low quality, shortages); use (eg, extended use, reuse, lack of fit test); or facility policy (eg, lack of guidance). We calculated and compared the percentages of cases reporting each concern type during the initial phase of the pandemic (April–May 2020), during the first US peak of daily COVID-19 cases (June–August 2020), and during the second US peak (September 2020–January 2021). We compared percentages using mid-P or Fisher exact tests (α = 0.05). Results: Among 1,998 HCP cases occurring during April 2020–January 2021 who had close contact with COVID-19 patients, 613 (30.7%) reported ≥1 PPE concern (Table 1). The percentage of cases reporting supply or use concerns was higher during the first peak period than the second peak period (supply concerns: 12.5% vs 7.5%; use concerns: 25.5% vs 18.2%). Conclusions: Although lower percentages of HCP cases overall reported PPE concerns after the first US peak, our results highlight the importance of developing capacity to produce and distribute PPE during times of increased demand.
The difference we observed among selected groups of cases may indicate that PPE access and use were more challenging for some, such as nonphysicians and nursing home HCP. These findings underscore the need to ensure that PPE is accessible and used correctly by HCP for whom use is recommended.
Virtual reality has emerged as a unique educational modality for medical trainees. However, incorporation of virtual reality curricula into formal training programmes has been limited. We describe a multi-centre effort to develop, implement, and evaluate the efficacy of a virtual reality curriculum for residents participating in paediatric cardiology rotations.
Methods:
A virtual reality software program (“The Stanford Virtual Heart”) was utilised. Users are placed “inside the heart” and explore non-traditional views of cardiac anatomy. Modules for six common congenital heart lesions were developed, including narrative scripts. A prospective case–control study was performed involving three large paediatric residency programmes. From July 2018 to June 2019, trainees participating in an outpatient cardiology rotation completed a 27-question, validated assessment tool. From July 2019 to February 2020, trainees completed the virtual reality curriculum and assessment tool during their cardiology rotation. Qualitative feedback on the virtual reality experience was also gathered. Intervention and control group performances were compared using univariate analyses.
Results:
There were 80 trainees in the control group and 52 in the intervention group. Trainees in the intervention group achieved higher scores on the assessment (20.4 ± 2.9 versus 18.8 ± 3.8 out of 27 questions answered correctly, p = 0.01). Further analysis showed significant improvement in the intervention group for questions specifically testing visuospatial concepts. In total, 100% of users recommended integration of the programme into the residency curriculum.
Conclusions:
Virtual reality is an effective and well-received adjunct to clinical curricula for residents participating in paediatric cardiology rotations. Our results support continued virtual reality use and expansion to include other trainees.
Understanding how cardiovascular structure and physiology guide management is critically important in paediatric cardiology. However, few validated educational tools are available to assess trainee knowledge. To address this deficit, paediatric cardiologists and fellows from four institutions collaborated to develop a multimedia assessment tool for use with medical students and paediatric residents. This tool was developed in support of a novel 3-dimensional virtual reality curriculum created by our group.
Methods:
Educational domains were identified, and questions were iteratively developed by a group of clinicians from multiple centres to assess understanding of key concepts. To evaluate content validity, content experts completed the assessment and reviewed items, rating item relevance to educational domains using a 4-point Likert scale. An item-level content validity index was calculated for each question, and a scale-level content validity index was calculated for the assessment tool, with scores of ≥0.78 and ≥0.90, respectively, representing excellent content validity.
Results:
The mean content expert assessment score was 92% (range 88–97%). Two questions were answered correctly by ≤50% of content experts. The item-level content validity index for 29 of 32 questions was ≥0.78, and the scale-level content validity index was 0.92. Qualitative feedback included suggestions for future improvement. Questions with ≤50% content expert agreement and item-level content validity index scores <0.78 were removed, yielding a 27-question assessment tool.
Conclusions:
We describe a multi-centre effort to create and validate a multimedia assessment tool which may be implemented within paediatric trainee cardiology curricula. Future efforts may focus on content refinement and expansion to include additional educational domains.
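The validity indices described above follow a standard recipe: each item's relevance rating is dichotomized (3–4 = relevant on the 4-point scale), the item-level CVI is the proportion of experts rating the item relevant, and the scale-level CVI (averaging method) is the mean of the item-level values. A minimal sketch, with hypothetical expert ratings rather than the study's actual data:

```python
# Sketch of content validity index (CVI) computation, assuming each expert
# rates each item's relevance on a 4-point Likert scale, with 3-4 counted
# as "relevant". Ratings below are hypothetical, for illustration only.

def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_avg(all_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

# Hypothetical ratings: 3 items, 6 experts each
ratings = [
    [4, 4, 3, 4, 3, 4],  # I-CVI = 1.00
    [4, 3, 2, 4, 4, 3],  # I-CVI ~ 0.83
    [2, 3, 4, 4, 3, 2],  # I-CVI ~ 0.67 (would fall below the 0.78 cutoff)
]
print([round(item_cvi(r), 2) for r in ratings])
print(round(scale_cvi_avg(ratings), 2))
```

With the thresholds reported above (I-CVI ≥0.78, S-CVI ≥0.90), the third hypothetical item would be flagged for removal.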
OBJECTIVES/GOALS: Using the covariate-rich Veteran Health Administration data, estimate the association between Proton Pump Inhibitor (PPI) use and severe COVID-19, rigorously adjusting for confounding using propensity score (PS)-weighting. METHODS/STUDY POPULATION: We assembled a national retrospective cohort of United States veterans who tested positive for SARS-CoV-2, with information on 33 covariates including comorbidity diagnoses, lab values, and medications. Current outpatient PPI use was compared to non-use (two or more fills and pills on hand at admission vs no PPI prescription fill in prior year). The primary composite outcome was mechanical ventilation use or death within 60 days; the secondary composite outcome included ICU admission. PS-weighting mimicked a 1:1 matching cohort, allowing inclusion of all patients while achieving good covariate balance. The weighted cohort was analyzed using logistic regression. RESULTS/ANTICIPATED RESULTS: Our analytic cohort included 97,674 veterans with SARS-CoV-2 testing, of whom 14,958 (15.3%) tested positive (6,262 [41.9%] current PPI users, 8,696 [58.1%] non-users). After weighting, all covariates were well balanced with standardized mean differences less than a threshold of 0.1. Prior to PS-weighting (no covariate adjustment), we observed higher odds of the primary (9.3% vs 7.5%; OR 1.27, 95% CI 1.13-1.43) and secondary (25.8% vs 21.4%; OR 1.27, 95% CI 1.18-1.37) outcomes among PPI users vs non-users. After PS-weighting, PPI use vs non-use was not associated with the primary (8.2% vs 8.0%; OR 1.03, 95% CI 0.91-1.16) or secondary (23.4% vs 22.9%; OR 1.03, 95% CI 0.95-1.12) outcomes. DISCUSSION/SIGNIFICANCE: The associations between PPI use and severe COVID-19 outcomes that have been previously reported may be due to limitations in the covariates available for adjustment. With respect to COVID-19, our robust PS-weighted analysis provides patients and providers with further evidence for PPI safety.
To examine the association between adherence to plant-based diets and mortality.
Design:
Prospective study. We calculated a plant-based diet index (PDI) by assigning positive scores to plant foods and reverse scores to animal foods. We also created a healthful PDI (hPDI) and an unhealthful PDI (uPDI) by further separating the healthy plant foods from less-healthy plant foods.
Setting:
The VA Million Veteran Program.
Participants:
315 919 men and women aged 19–104 years who completed a food frequency questionnaire (FFQ) at baseline.
Results:
We documented 31 136 deaths during the follow-up. A higher PDI was significantly associated with lower total mortality (hazard ratio (HR) comparing extreme deciles = 0·75, 95 % CI: 0·71, 0·79, Ptrend < 0·001). We observed an inverse association between hPDI and total mortality (HR comparing extreme deciles = 0·64, 95 % CI: 0·61, 0·68, Ptrend < 0·001), whereas uPDI was positively associated with total mortality (HR comparing extreme deciles = 1·41, 95 % CI: 1·33, 1·49, Ptrend < 0·001). Similar significant associations of PDI, hPDI and uPDI were also observed for CVD and cancer mortality. The associations between the PDI and total mortality were consistent among African and European American participants, and participants free from CVD and cancer and those who were diagnosed with major chronic disease at baseline.
Conclusions:
A greater adherence to a plant-based diet was associated with substantially lower total mortality in this large population of veterans. These findings support recommending plant-rich dietary patterns for the prevention of major chronic diseases.
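The index construction described in this abstract (positive scores for plant foods, reverse scores for animal foods) is typically implemented by ranking each food group's intake into quintiles and summing quintile scores. A minimal sketch, with illustrative food-group labels rather than the study's exact groupings:

```python
# Sketch of plant-based diet index (PDI) scoring: intake of each food group
# is ranked into quintiles (1-5); plant groups contribute the quintile score
# directly, animal groups contribute the reverse score (6 - quintile).
# Food-group sets below are illustrative, not the study's actual groupings.

PLANT = {"whole_grains", "fruits", "vegetables", "nuts", "legumes"}
ANIMAL = {"dairy", "eggs", "meat", "fish"}

def pdi(quintiles):
    """quintiles: dict mapping food group -> quintile of intake (1..5)."""
    score = 0
    for group, q in quintiles.items():
        if group in PLANT:
            score += q        # higher plant intake -> higher score
        elif group in ANIMAL:
            score += 6 - q    # higher animal intake -> lower score
    return score

# Hypothetical participant with high plant and low animal intake
example = {"whole_grains": 5, "fruits": 4, "vegetables": 5,
           "nuts": 3, "legumes": 4, "dairy": 2, "eggs": 1, "meat": 1, "fish": 3}
print(pdi(example))  # high score -> greater plant-based adherence
```

The hPDI and uPDI variants apply the same mechanics but additionally split plant foods into healthy and less-healthy subsets, scoring them in opposite directions.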
Despite evidence favoring perioperative antibiotic prophylaxis (ABP) use in patients undergoing craniotomy to reduce rates of surgical site infections (SSIs), standardized protocols are lacking. We describe demographic characteristics, risk factors, and ABP choice in patients with craniotomy complicated with SSI.
Design:
Retrospective case series from January 1, 2017, through December 31, 2020.
Setting:
Tertiary-care referral center.
Patients:
Adults who underwent craniotomy and were diagnosed with an SSI.
Methods:
Logistic regression to estimate odds ratios and 95% confidence intervals to identify factors associated with SSIs.
Results:
In total, 5,328 patients undergoing craniotomy were identified during the study period; 59 (1.1%) suffered an SSI. Compared with non-SSI cases, patients with SSI had a significantly higher frequency of emergency procedures: 13.5% versus 5.8% (P = .02; odds ratio [OR], 2.52; 95% confidence interval [CI], 1.10–5.06; P = .031). Patients with SSI had a higher rate of a dirty (5.1% vs 0.9%) and lower rate of clean-contaminated (3.3% vs 14.5%) wound class than those without infection (P = .002). Nearly all patients received ABP before craniotomy (98.3% in the SSI group vs 99.6% in the non-SSI group; P = .10). Combination of vancomycin and cefazolin as dual therapy was more prevalent in the group of patients without infection (n = 1,761, 34.1%) than those with SSI (n = 4, 6.8%) (P < .001) and was associated with decreased odds of SSI (OR, 0.17; 95% CI, 0.005–0.42; P ≤ .001).
Conclusions:
SSIs are frequently seen after emergency neurosurgical procedures and with a dirty wound classification. The combination of prophylactic cefazolin and vancomycin is associated with a decreased risk of SSI.
To assess the prevalence of antibiotic-resistant gram-negative bacteria (R-GNB) among patients without recent hospitalization and to examine the influence of outpatient antibiotic exposure on the risk of acquiring R-GNB in this population.
Design:
2-year retrospective cohort study.
Setting:
Regional Veterans Affairs healthcare system.
Patients:
Outpatients at 13 community-based clinics.
Methods:
We examined the rate of acquisition of R-GNB within 90 days following an outpatient visit from 2018 to 2019. We used clinical and administrative databases to determine and summarize prescriptions for systemic antibiotics, associated infectious diagnoses, and subsequent R-GNB acquisition among patients without recent hospitalizations. We also calculated the odds ratio of R-GNB acquisition following antibiotic exposure.
Results:
During the 2-year study period, 7,215 patients had outpatient visits with microbiological cultures obtained within 90 days. Of these patients, 206 (2.9%) acquired an R-GNB. Among patients receiving antibiotics at the visit, 4.6% acquired an R-GNB compared to 2.7% among patients who did not receive antibiotics, yielding an unadjusted odds ratio of 1.75 (95% confidence interval, 1.18–2.52) for R-GNB acquisition following an outpatient visit with versus without an antibiotic exposure. Regardless of R-GNB occurrence, >50% of antibiotic prescriptions were issued at visits without an infectious disease diagnosis or issued without documentation of an in-person or telehealth clinical encounter.
Conclusions:
Although the rate of R-GNB acquisition was low (2.9%), the 1.75-fold increased odds of acquiring an R-GNB following outpatient antibiotic exposure highlights the importance of antimicrobial stewardship efforts in outpatient settings. Specific opportunities include reducing antibiotics prescribed without an infectious diagnosis or a clinical visit.
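The unadjusted odds ratio reported above is the standard 2x2-table estimate, (a·d)/(b·c), usually paired with a Woolf (log-based) confidence interval. A sketch with hypothetical cell counts (the abstract reports only percentages, not the actual counts):

```python
# Odds ratio with Woolf 95% CI from a 2x2 exposure-outcome table.
# Cell counts below are hypothetical, not the study's actual data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a=exposed w/ outcome, b=exposed w/o, c=unexposed w/ outcome, d=unexposed w/o."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts roughly mirroring the reported 4.6% vs 2.7% rates
or_, lo, hi = odds_ratio_ci(46, 954, 27, 973)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted analysis would instead model the outcome with logistic regression including covariates, but the 2x2 computation is what an "unadjusted odds ratio" denotes.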
There is limited understanding of treatment pathways for paediatric sleep-disordered breathing. This study explored current UK pathways and what is important to well-being for parents and children.
Method
The study comprised in-depth qualitative interviews (n = 22) with parents of children (2–9 years) with symptoms of sleep-disordered breathing referred to a regional ENT clinic (n = 11), general practitioners who might refer these children to ENT (n = 5) and hospital doctors involved in treating these children (n = 6). Interviews were audio recorded, transcribed verbatim, anonymised and analysed thematically.
Results
General practitioners report rarely seeing children with sleep-disordered breathing; conversely, hospital doctors identify unsuspected issues. Parents are worried their child will stop breathing, but routes to referral and diagnosis are not straightforward. Modern technology can aid investigation and diagnosis. Patient weight is an issue for both general practitioners and hospital doctors. Adenotonsillectomy is the treatment of choice, and more information on paediatric sleep-disordered breathing is needed.
Conclusion
Guidelines for the management of paediatric sleep-disordered breathing are needed.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
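The core ingredient of the multistate approach described here is a matrix of transition probabilities between clinical states, estimated from daily patient assessments. The sketch below shows a simple discrete-time, count-based estimate; it is an illustration of the general technique, not the authors' implementation (which may use continuous-time intensity models), and the state names and trajectories are hypothetical:

```python
# Discrete-time multistate sketch: estimate P(next state | current state)
# from daily state sequences. States and trajectories are hypothetical
# simplifications of COVID-19 severity levels.
from collections import Counter

STATES = ["ward", "critical", "discharged", "dead"]

def transition_matrix(trajectories):
    """Row-normalized transition counts over consecutive daily assessments."""
    counts = Counter()
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[(cur, nxt)] += 1
    matrix = {}
    for s in STATES:
        total = sum(counts[(s, t)] for t in STATES)
        if total:
            matrix[s] = {t: counts[(s, t)] / total for t in STATES}
    return matrix

# Hypothetical daily trajectories for three patients
trajs = [
    ["ward", "ward", "critical", "critical", "ward", "discharged"],
    ["ward", "critical", "dead"],
    ["ward", "ward", "discharged"],
]
P = transition_matrix(trajs)
print(P["ward"])
```

Unlike a single time-to-event analysis, the estimated matrix captures movement in both directions, such as improvement from critical illness back to the ward, which is what makes the longitudinal assessments informative.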