Studies suggest that alcohol consumption and alcohol use disorders have distinct genetic backgrounds.
We examined whether polygenic risk scores (PRS) for consumption and problem subscales of the Alcohol Use Disorders Identification Test (AUDIT-C, AUDIT-P) in the UK Biobank (UKB; N = 121 630) correlate with alcohol outcomes in four independent samples: an ascertained cohort, the Collaborative Study on the Genetics of Alcoholism (COGA; N = 6850), and population-based cohorts: Avon Longitudinal Study of Parents and Children (ALSPAC; N = 5911), Generation Scotland (GS; N = 17 461), and an independent subset of UKB (N = 245 947). Regression models and survival analyses tested whether the PRS were associated with the alcohol-related outcomes.
In COGA, AUDIT-P PRS was associated with alcohol dependence, AUD symptom count, and maximum drinks (R² = 0.47–0.68%, p = 2.0 × 10⁻⁸ to 1.0 × 10⁻¹⁰), and with increased likelihood of onset of alcohol dependence (hazard ratio = 1.15, p = 4.7 × 10⁻⁸); AUDIT-C PRS was not an independent predictor of any phenotype. In ALSPAC, the AUDIT-C PRS was associated with alcohol dependence (R² = 0.96%, p = 4.8 × 10⁻⁶). In GS, AUDIT-C PRS was a better predictor of weekly alcohol use (R² = 0.27%, p = 5.5 × 10⁻¹¹), while AUDIT-P PRS was more strongly associated with problem drinking (R² = 0.40%, p = 9.0 × 10⁻⁷). Lastly, AUDIT-P PRS was associated with ICD-based alcohol-related disorders in the UKB subset (R² = 0.18%, p < 2.0 × 10⁻¹⁶).
AUDIT-P PRS was associated with a range of alcohol-related phenotypes across population-based and ascertained cohorts, while AUDIT-C PRS showed less utility in the ascertained cohort. We show that AUDIT-P is genetically correlated with both use and misuse and demonstrate the influence of ascertainment schemes on PRS analyses.
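The core analysis described above regresses an alcohol phenotype on a polygenic score plus covariates and reports the incremental variance explained (R²) by the PRS. A minimal sketch of that logic, using simulated data and invented effect sizes (none of the variable names or numbers come from the study), might look like this:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 2000
prs = rng.normal(size=n)                      # standardized polygenic score (simulated)
covars = rng.normal(size=(n, 2))              # illustrative covariates, e.g. age, sex
phenotype = 0.1 * prs + covars @ np.array([0.2, -0.1]) + rng.normal(size=n)

# Incremental R^2: full model (covariates + PRS) minus covariates-only model.
incremental_r2 = (r_squared(np.column_stack([covars, prs]), phenotype)
                  - r_squared(covars, phenotype))
print(f"incremental R^2 from PRS: {incremental_r2:.4f}")
```

In practice the published estimates also adjust for genetic principal components and use ancestry-matched samples; this sketch only shows the incremental-R² bookkeeping.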
Introduction: When elderly patients residing in Long Term Care (LTC) present to the Emergency Department (ED), their care can be complicated by threats to patient safety created by ineffective transitions of care. Though standardized inpatient handover tools exist, no universal tool has yet been adopted for transfers to the ED. In this study, we surveyed relevant stakeholders and identified what information is essential in the transitions of care for this vulnerable population. Methods: We performed a descriptive, cross-sectional electronic survey distributed to physicians and nurses in ED and LTC settings, paramedics, and patient advocates in two Canadian cities. The survey was kept open for a one-month period with weekly formal reminders sent. Questions were generated after performing a literature review that assessed the current landscape of transitional care in this population. These were either multiple-choice or free-text entry questions aimed at identifying what information is essential in transitional periods. Results: A total of 191 health care providers (HCPs) and 22 patient advocates (PAs) responded to the survey. Among the HCPs, 38% were paramedics, 38% worked in the ED, and 24% were in LTC. In this group, only 41% of respondents were aware of existing handover protocols. Of the proposed informational items in transitional care, 100% of respondents in both groups indicated that items including reason for transfer and advance care directives were essential. Other areas identified as necessary were past medical history and baseline functional status. Furthermore, the majority of PAs identified that items such as primary language, bowel and bladder incontinence, and spiritual beliefs should be included. Conclusion: This survey demonstrated that an improved handover culture needs to be established when caring for LTC patients in the ED.
Education needs to be provided surrounding existing protocols to ensure that health care providers are aware of them. Furthermore, we identified what information is essential to the transitional care of these patients according to HCPs and PAs. These findings will be used to generate a simple, one-page handover form. The next iteration of this project will pilot this handover form in an attempt to create safer transitions to the ED for this at-risk population.
Summary: In this paper we build on work investigating the feasibility of human immunodeficiency virus (HIV) testing in emergency departments (EDs), estimating the prevalence of hepatitis B, C and HIV infections among persons attending two inner-London EDs and identifying factors associated with testing positive in an ED. We also undertook molecular characterisation to examine the diversity of the viruses circulating in these individuals and the presence of clinically significant mutations that impact treatment and control.
Blood-borne virus (BBV) testing in non-traditional settings is feasible, with emergency departments (EDs) potentially effective at reaching vulnerable and underserved populations. We investigated the feasibility of BBV testing within two inner-London EDs. Residual biochemistry samples from adults (≥18 years) attending the Royal Free London Hospital (RFH) or the University College London Hospital (UCLH) ED between January and June 2015 were tested for human immunodeficiency virus (HIV) Ag/Ab, anti-hepatitis C (HCV) and HBsAg. PCR and sequence analysis were conducted on reactive samples. Sero-prevalence among persons attending RFH and UCLH with residual samples (1287 and 1546, respectively) was 1.1% and 1.0% for HBsAg, 1.6% and 2.3% for anti-HCV, 0.9% and 1.6% for HCV RNA, and 1.3% and 2.2% for HIV. At RFH, HBsAg positivity was more likely among persons of black vs. white ethnicity (odds ratio 9.08; 95% confidence interval 2.72–30), and anti-HCV positivity was less likely among females (0.15, 95% CI 0.04–0.50). At UCLH, HBsAg positivity was more likely among persons of non-white ethnicity (13.34, 95% CI 2.20–80.86 (Asian); 8.03, 95% CI 1.12–57.61 (black); and 8.11, 95% CI 1.13–58.18 (other/mixed)). Anti-HCV positivity was more likely among 36–55 year olds vs. ≥56 years (7.69, 95% CI 2.24–26.41) and less likely among females (0.24, 95% CI 0.09–0.65). Persons positive for HIV markers were more likely to be of black vs. white ethnicity (4.51, 95% CI 1.63–12.45), less likely to have a single ED attendance (0.39, 95% CI 0.17–0.88), and less likely to be female (0.12, 95% CI 0.04–0.42). These results indicate that BBV testing in EDs is feasible, providing a basis for further studies to explore provider and patient acceptability, referral into care and cost-effectiveness.
The construct of self-concept lies at the core of the positive psychology revolution. Historically one of the cornerstone constructs in the social sciences, self-concept has been adapted to focus on how healthy individuals can thrive in life. In this chapter we differentiate between the historical unidimensional perspective of self-concept (centered on self-esteem) and the evolving multifaceted models discriminating between different aspects of self (such as specific academic, social, physical, and emotional components). The chapter covers:
the definition of self-concept and the reason it is so important;
historical and evolving perspectives of self-concept;
general and domain-specific theoretical models with associated empirical research regarding self-concept, motivation, and performance;
the way different self-concept domains vary as a function of gender and age;
the impact of specific psychological and social traits on self-concept development;
the differentiation between multidimensional perspectives of personality and self-concept;
theoretical models of academic self-concept formation and its relation to achievement;
frame of reference effects in self-concept formation;
a construct-validity approach to self-concept enhancement interventions; and directions for further research.
A diagnosis of dissociative identity disorder (DID) is controversial and prone to under- and misdiagnosis. From the moment of first seeking treatment for symptoms to the time of an accurate DID diagnosis, individuals received an average of four other diagnoses and spent an average of 7 years, with reports of up to 12 years, in mental health services.
To investigate whether data-driven pattern recognition methodologies applied to structural brain images can provide biomarkers to aid DID diagnosis.
Structural brain images of 75 participants were included: 32 female individuals with DID and 43 matched healthy controls. Individuals with DID were recruited from psychiatry and psychotherapy out-patient clinics. Probabilistic pattern classifiers were trained to discriminate cohorts based on measures of brain morphology.
The pattern classifiers were able to accurately discriminate between individuals with DID and healthy controls with high sensitivity (72%) and specificity (74%) on the basis of brain structure. These findings provide evidence for a biological basis for distinguishing between DID-affected and healthy individuals.
We propose a pattern of neuroimaging biomarkers that could be used to help distinguish individuals with DID from healthy controls at the individual level. This is important and clinically relevant because the DID diagnosis is controversial and individuals with DID are often misdiagnosed. Ultimately, the application of pattern recognition methodologies could prevent unnecessary suffering of individuals with DID through an earlier accurate diagnosis, which would facilitate faster and targeted interventions.
Declaration of interest
The authors declare no competing financial interests.
Introduction: Primary care paramedics (PCPs) have limited options to provide analgesia during transport, so timely pain relief is often significantly delayed. Inhaled nitrous oxide is considered usual care for PCPs but is limited in effectiveness. Intranasal (IN) ketamine has been shown to provide effective analgesia with no deleterious effects on cardiorespiratory function, and thus may provide rapid, easily administered and well-tolerated analgesia in prehospital transports. Methods: This was a randomized, double-blind pilot trial. Patients with an acute painful condition reporting a pain score of 5 or more on an 11-point verbal numeric rating scale (VNRS) were included. Exclusion criteria were age under 18 years, known intolerance to ketamine, non-traumatic chest pain, altered mental status, pregnancy and nasal occlusion. Patients were randomized to 0.75 mg/kg of IN ketamine or IN saline. All patients received inhaled nitrous oxide. The primary outcome was the proportion of patients experiencing a reduction in VNRS pain score of two points or more (a clinically significant pain reduction) at 30 minutes. Secondary outcomes were patient-reported comfort, patient and provider satisfaction, and incidence of adverse events. Results: 40 patients were enrolled, 20 in each group. 80% of IN ketamine patients compared to 60% of placebo patients reported a 2-point reduction in VNRS pain score by 30 minutes. 50% of ketamine vs 25% of placebo patients reported feeling moderately or much better. 85% of ketamine vs 75% of placebo patients reported any improvement in subjective comfort. 80% of ketamine patients reported minor adverse effects compared to 52% of placebo patients. No serious adverse effects were reported.
Conclusion: The addition of IN ketamine to usual care with nitrous oxide appears to result in a greater proportion of patients reporting a clinically significant reduction in VNRS pain score and improved subjective comfort, with a greater incidence of minor adverse effects. These findings will be used to power a definitive randomized double-blind trial.
Identifying genetic relationships between complex traits in emerging adulthood can provide useful etiological insights into risk for psychopathology. College-age individuals are under-represented in genomic analyses thus far, and the majority of work has focused on the clinical disorder or cognitive abilities rather than normal-range behavioral outcomes.
This study examined a sample of emerging adults 18–22 years of age (N = 5947) to construct an atlas of polygenic risk for 33 traits predicting relevant phenotypic outcomes. Twenty-eight hypotheses were tested based on the previous literature on samples of European ancestry, and the availability of rich assessment data allowed for polygenic predictions across 55 psychological and medical phenotypes.
Polygenic risk for schizophrenia (SZ) in emerging adults predicted anxiety, depression, nicotine use, trauma, and family history of psychological disorders. Polygenic risk for neuroticism predicted anxiety, depression, phobia, panic, and neuroticism, and was correlated with polygenic risk for cardiovascular disease.
These results demonstrate the extensive impact of genetic risk for SZ, neuroticism, and major depression on a range of health outcomes in early adulthood. Minimal cross-ancestry replication of these phenomic patterns of polygenic influence underscores the need for more genome-wide association studies of non-European populations.
The current study used data from two longitudinal samples to test whether self-regulation, depressive symptoms, and aggression/antisociality were mediators in the relation between a polygenic score indexing serotonin (5-HT) functioning and alcohol use in adolescence. The results from an independent genome-wide association study of 5-hydroxyindoleacetic acid in the cerebrospinal fluid were used to create 5-HT polygenic risk scores. Adolescents and/or parents reported on adolescents’ self-regulation (Time 1), depressive symptoms (Time 2), aggression/antisociality (Time 2), and alcohol use (Time 3). The results showed that 5-HT polygenic risk did not predict self-regulation. However, adolescents with higher levels of 5-HT polygenic risk showed greater depression and aggression/antisociality. Adolescents’ aggression/antisociality mediated the relation between 5-HT polygenic risk and later alcohol use. Deficits in self-regulation also predicted depression and aggression/antisociality, and indirectly predicted alcohol use through aggression/antisociality. Pathways to alcohol use were especially salient for males from families with low parental education in one of the two samples. The results provide insights into the longitudinal mechanisms underlying the relation between 5-HT functioning and alcohol use (i.e., earlier aggression/antisociality). There was no evidence that genetically based variation in 5-HT functioning predisposed individuals to deficits in self-regulation. Genetically based variation in 5-HT functioning and self-regulation might be separate, transdiagnostic risk factors for several types of psychopathology.
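The mediation logic described above (polygenic score → aggression/antisociality → alcohol use) is commonly estimated with a product-of-coefficients approach. A hedged sketch on simulated data, with all effect sizes and variable names invented for illustration, might look like this:

```python
import numpy as np

def ols_slope(x, y):
    """OLS slope of y on a single predictor x (intercept included)."""
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

rng = np.random.default_rng(1)
n = 3000
prs = rng.normal(size=n)                          # simulated 5-HT polygenic score
aggression = 0.15 * prs + rng.normal(size=n)      # Time 2 mediator
alcohol = 0.25 * aggression + rng.normal(size=n)  # Time 3 outcome

a = ols_slope(prs, aggression)                    # path a: PRS -> mediator
# Path b: mediator -> outcome, adjusting for the PRS (direct path).
X = np.column_stack([np.ones(n), aggression, prs])
b = float(np.linalg.lstsq(X, alcohol, rcond=None)[0][1])
indirect = a * b                                  # product-of-coefficients indirect effect
print(f"a={a:.3f}, b={b:.3f}, indirect effect={indirect:.4f}")
```

Published mediation analyses of this kind would additionally use longitudinal structural equation models with covariates and bootstrapped confidence intervals for the indirect effect; this only shows the a×b decomposition.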
To evaluate the impact of multidrug-resistant gram-negative rod (MDR-GNR) infections on mortality and healthcare resource utilization in community hospitals.
Two matched case-control analyses.
Six community hospitals participating in the Duke Infection Control Outreach Network from January 1, 2010, through December 31, 2012.
Adult patients admitted to study hospitals during the study period.
Patients with MDR-GNR bloodstream and urinary tract infections were compared with 2 groups: (1) patients with infections due to nonMDR-GNR and (2) control patients representative of the nonpsychiatric, non-obstetric hospitalized population. Four outcomes were assessed: mortality, direct cost of hospitalization, length of stay, and 30-day readmission rates. Multivariable regression models were created to estimate the effect of MDR status on each outcome measure.
No mortality difference was seen in either analysis. Patients with MDR-GNR infections had 2.03 times the odds of 30-day readmission compared with patients with nonMDR-GNR infections (95% CI, 1.04–3.97; P=.04). There was no difference in hospital direct costs between patients with MDR-GNR infections and patients with nonMDR-GNR infections. Hospitalizations for patients with MDR-GNR infections cost $5,320.03 more (95% CI, $2,366.02–$8,274.05; P<.001) and lasted 3.40 days longer (95% CI, 1.41–5.40; P<.001) than hospitalizations for control patients.
Our study provides novel data regarding the clinical and financial impact of MDR gram-negative bacterial infections in community hospitals. There was no difference in mortality between patients with MDR-GNR infections and patients with nonMDR-GNR infections or control patients.
Monochorionic twins share a single placenta and are connected with each other through vascular anastomoses. Unbalanced inter-twin blood transfusion may lead to various complications, including twin-to-twin transfusion syndrome (TTTS) and twin anemia polycythemia sequence (TAPS). TAPS was first described less than a decade ago, and the pathogenesis of TAPS results from slow blood transfusion from donor to recipient through a few minuscule vascular anastomoses. This gradually leads to anemia in the donor and polycythemia in the recipient, in the absence of twin oligo-polyhydramnios sequence (TOPS). TAPS may occur spontaneously in 3–5% of monochorionic twins or after laser surgery for TTTS. The prevalence of post-laser TAPS varies from 2% to 16% of TTTS cases, depending on the rate of residual anastomoses. Pre-natal diagnosis of TAPS is currently based on discordant measurements of the middle cerebral artery peak systolic velocity (MCA-PSV; >1.5 multiples of the median [MoM] in donors and <1.0 in recipients). Post-natal diagnosis is based on large inter-twin hemoglobin (Hb) difference (>8 g/dL), and at least one of the following: reticulocyte count ratio >1.7 or minuscule placental anastomoses. Management includes expectant management, and intra-uterine blood transfusion (IUT) with or without partial exchange transfusion (PET) or fetoscopic laser surgery. Post-laser TAPS can be prevented by using the Solomon laser surgery technique. Short-term neonatal outcome ranges from isolated inter-twin Hb differences to severe neonatal morbidity and neonatal death. Long-term neonatal outcome in post-laser TAPS is comparable with long-term outcome after treated TTTS. This review summarizes the current knowledge after 10 years of research on the pathogenesis, diagnosis, management, and outcome in TAPS.
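The antenatal and postnatal diagnostic thresholds quoted above can be expressed as small screening helpers. This is purely an illustrative encoding of the stated criteria, not clinical software; function and parameter names are invented:

```python
def taps_antenatal(mca_psv_mom_donor, mca_psv_mom_recipient, tops_present):
    """Antenatal TAPS screen per the criteria above: donor MCA-PSV > 1.5 MoM,
    recipient MCA-PSV < 1.0 MoM, in the absence of TOPS."""
    return (mca_psv_mom_donor > 1.5
            and mca_psv_mom_recipient < 1.0
            and not tops_present)

def taps_postnatal(hb_diff_g_dl, retic_ratio=None, minuscule_anastomoses=False):
    """Postnatal TAPS criteria: inter-twin Hb difference > 8 g/dL plus at least
    one of reticulocyte count ratio > 1.7 or minuscule placental anastomoses."""
    supporting = ((retic_ratio is not None and retic_ratio > 1.7)
                  or minuscule_anastomoses)
    return hb_diff_g_dl > 8 and supporting
```

For example, `taps_antenatal(1.6, 0.9, False)` meets the antenatal criteria, while a Hb difference of 9 g/dL without either supporting finding does not meet the postnatal criteria.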
The literature on the neurobiology of emotional processing in panic disorder (PD) remains inconsistent. Clinical heterogeneity may contribute to this inconsistency.
To investigate differences in brain activity between PD and healthy controls using the emotional faces fMRI paradigm.
To elucidate neurobiological mechanisms underlying emotional processing in PD and previously identified subtypes (Pattyn et al., 2015).
The main analysis compared the neural processing of different emotional facial expressions from a large group of PD patients (n = 73) versus healthy controls (n = 58) originating from the Netherlands Study of Depression and Anxiety (NESDA). A second analysis divided the PD group into the three previously identified subgroups: a cognitive-autonomic (n = 22), an autonomic (n = 16) and an aspecific subgroup (n = 35). The fusiform gyrus, the anterior cingulate cortex and the insula were used in a ROI approach.
Comparing PD patients with healthy controls, decreased activity in the left fusiform gyrus was observed in response to angry faces. The subgroup analysis showed more activity in the anterior cingulate cortex in response to neutral faces in the cognitive-autonomic subgroup versus the autonomic subgroup, and decreased activity in the left fusiform gyrus in response to angry faces compared with the aspecific subgroup. Less activity was observed in the right insula in response to neutral faces in the autonomic subgroup versus the aspecific subgroup.
Reduced activity in the left fusiform gyrus differentiated panic disorder patients from healthy controls. In accordance with clinical subtyping, the between-subtype differences indicate that a phenomenological approach could provide more insight into the underlying neurobiological mechanisms of emotional processing in PD.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
To describe the epidemiology of complex surgical site infection (SSI) following commonly performed surgical procedures in community hospitals and to characterize trends of SSI prevalence rates over time for MRSA and other common pathogens.
We prospectively collected SSI data at 29 community hospitals in the southeastern United States from 2008 through 2012. We determined the overall prevalence rates of SSI for commonly performed procedures during this 5-year study period. For each year of the study, we then calculated prevalence rates of SSI stratified by causative organism. We created log-binomial regression models to analyze trends of SSI prevalence over time for all pathogens combined and specifically for MRSA.
A total of 3,988 complex SSIs occurred following 532,694 procedures (prevalence rate, 0.7 infections per 100 procedures). SSIs occurred most frequently after small bowel surgery, peripheral vascular bypass surgery, and colon surgery. Staphylococcus aureus was the most common pathogen. The prevalence rate of SSI decreased from 0.76 infections per 100 procedures in 2008 to 0.69 infections per 100 procedures in 2012 (prevalence rate ratio [PRR], 0.90; 95% confidence interval [CI], 0.82–1.00). A more substantial decrease in MRSA SSI (PRR, 0.69; 95% CI, 0.54–0.89) was largely responsible for this overall trend.
The prevalence of MRSA SSI decreased from 2008 to 2012 in our network of community hospitals. This decrease in MRSA SSI prevalence led to an overall decrease in SSI prevalence over the study period.
To determine whether daily chlorhexidine gluconate (CHG) bathing of intensive care unit (ICU) patients leads to a decrease in hospital-acquired infections (HAIs), particularly infections caused by methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE).
Interrupted time series analysis.
The study included 33 community hospitals participating in the Duke Infection Control Outreach Network from January 2008 through December 2013.
All ICU patients at study hospitals during the study period.
Of the 33 hospitals, 17 hospitals implemented CHG bathing during the study period, and 16 hospitals that did not perform CHG bathing served as controls. Primary pre-specified outcomes included ICU central-line–associated bloodstream infections (CLABSIs), primary bloodstream infections (BSI), ventilator-associated pneumonia (VAP), and catheter-associated urinary tract infections (CAUTIs). MRSA and VRE HAIs were also evaluated.
Chlorhexidine gluconate (CHG) bathing was associated with a significant downward trend in incidence rates of ICU CLABSI (incidence rate ratio [IRR], 0.96; 95% confidence interval [CI], 0.93–0.99), ICU primary BSI (IRR, 0.96; 95% CI, 0.94–0.99), VRE CLABSIs (IRR, 0.97; 95% CI, 0.97–0.98), and all combined VRE infections (IRR, 0.96; 95% CI, 0.93–1.00). No significant trend in MRSA infection incidence rates was identified prior to or following the implementation of CHG bathing.
In this multicenter, real-world analysis of the impact of CHG bathing, hospitals that implemented CHG bathing attained a decrease in ICU CLABSIs, ICU primary BSIs, and VRE CLABSIs. CHG bathing did not affect rates of specific or overall infections due to MRSA. Our findings support daily CHG bathing of ICU patients.
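The interrupted time series design above estimates trend changes as incidence rate ratios (IRRs) from segmented Poisson regression. A self-contained sketch on simulated monthly counts, where the change-point, exposures, and effect size are all invented for illustration, might look like this:

```python
import numpy as np

def poisson_irls(X, y, offset, iters=50):
    """Poisson regression with log link fitted by iteratively reweighted
    least squares; `offset` is log(exposure), e.g. log central-line days."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta + offset)
        z = X @ beta + (y - mu) / mu       # working response (offset removed)
        WX = X * mu[:, None]               # Poisson working weights are mu
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

rng = np.random.default_rng(0)
months = np.arange(48.0)
post = (months >= 24).astype(float)            # CHG bathing starts at month 24
since = np.where(post > 0, months - 24, 0.0)   # months since implementation
exposure = np.full(48, 30_000.0)               # central-line days (illustrative)
true_rate = 0.005 * np.exp(-0.04 * since)      # simulated 4%/month post-CHG decline
counts = rng.poisson(exposure * true_rate)

X = np.column_stack([months, post, since])     # segmented-regression design
beta = poisson_irls(X, counts, np.log(exposure))
irr_post_trend = float(np.exp(beta[3]))        # IRR per month after implementation
print(f"post-implementation trend IRR: {irr_post_trend:.3f}")
```

A full analysis would add hospital-level clustering and confidence intervals from the Fisher information; the sketch only shows how a per-month trend IRR falls out of the segmented model.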
We examine three-dimensional (3D) effects on the flapping dynamics of a flag, modelled as a thin membrane, in uniform fluid inflow. We consider periodic spanwise variations of the flag displacement (ignoring edge effects), so that the 3D effects are characterized by the dimensionless spanwise wavelength, normalized by the chord length. We perform linear stability analysis (LSA) to show that stability increases as the spanwise wavelength decreases, with the purely 2D mode being the most unstable. To confirm the LSA and to study nonlinear responses of 3D flapping, we obtain direct numerical simulations, up to Reynolds number 1000 based on the chord length, coupling solvers for the Navier–Stokes equations and for a thin membrane structure undergoing arbitrarily large displacement. For nonlinear flapping evolution, we identify and characterize the effect of the spanwise wavelength on the distinct flag motions and wake vortex structures, corresponding to spanwise standing-wave (SW) and travelling-wave (TW) modes, in the absence and presence of cross-flow, respectively. For both SW and TW, the response is characterized by an initial instability growth phase (I), followed by a nonlinear development phase (II) consisting of multiple unstable 3D modes, and tending, in long time, towards a quasi-steady limit-cycle response (III) dominated by a single (most unstable) mode. Phase I follows closely the predictions of LSA for initial instability and growth rates, with the latter increased for TW owing to suppression of restoring forces by the cross-flow. Phase II is characterized by multiple competing flapping modes with energy cascading towards the more unstable mode(s); the wake is characterized by interwoven (SW) and oblique continuous (TW) shed vortices. In phase III, the persistent single dominant mode for SW is the (most unstable) 2D flag displacement with a continuous parallel wake structure, while for TW it is the fundamental oblique travelling-wave flag displacement at the given spanwise wavelength, with continuous oblique shedding. The transition to phase III occurs more slowly for greater spanwise wavelength. For the total forces, drag decreases for both SW and TW with decreasing spanwise wavelength, while lift is negligible in phases I and II and comparable in magnitude to drag in phase III for any wavelength.
To determine the association (1) between shorter operative duration and surgical site infection (SSI) and (2) between surgeon median operative duration and SSI risk among first-time hip and knee arthroplasties.
Retrospective cohort study.
A total of 43 community hospitals located in the southeastern United States.
Adults who developed SSIs according to National Healthcare Safety Network criteria within 365 days of first-time knee or hip arthroplasties performed between January 1, 2008 and December 31, 2012.
Log-binomial regression models estimated the association (1) between operative duration and SSI outcome and (2) between surgeon median operative duration and SSI outcome. Hip and knee arthroplasties were evaluated in separate models. Each model was adjusted for American Society of Anesthesiology score and patient age.
A total of 25,531 hip arthroplasties and 42,187 knee arthroplasties were included in the study. The risk of SSI in knee arthroplasties with an operative duration shorter than the 25th percentile was 0.40 times the risk of SSI in knee arthroplasties with an operative duration between the 25th and 75th percentile (risk ratio [RR], 0.40; 95% confidence interval [CI], 0.38–0.56; P<.01). Short operative duration did not demonstrate significant association with SSI for hip arthroplasties (RR, 1.04; 95% CI, 0.79–1.37; P=.36). Knee arthroplasty surgeons with shorter median operative durations had a lower risk of SSI than surgeons with typical median operative durations (RR, 0.52; 95% CI, 0.43–0.64; P<.01).
Short operative durations were not associated with a higher SSI risk for knee or hip arthroplasty procedures in our analysis.
Infect. Control Hosp. Epidemiol. 2015;36(12):1431–1436
To evaluate seasonal variation in the rate of surgical site infections (SSI) following commonly performed surgical procedures.
Retrospective cohort study.
We analyzed 6 years (January 1, 2007, through December 31, 2012) of data from the 15 most commonly performed procedures in 20 hospitals in the Duke Infection Control Outreach Network. We defined summer as July through September. First, we performed 3 separate Poisson regression analyses (unadjusted, multivariable, and polynomial) to estimate prevalence rates and prevalence rate ratios of SSI following procedures performed in summer versus nonsummer months. Then, we stratified our results to obtain estimates based on procedure type and organism type. Finally, we performed a sensitivity analysis to test the robustness of our findings.
We identified 4,543 SSI following 441,428 surgical procedures (overall prevalence rate, 1.03/100 procedures). The rate of SSI was significantly higher during the summer compared with the remainder of the year (1.11/100 procedures vs 1.00/100 procedures; prevalence rate ratio, 1.11 [95% CI, 1.04–1.19]; P=.002). Stratum-specific SSI calculations revealed higher SSI rates during the summer for both spinal (P=.03) and nonspinal (P=.004) procedures and revealed higher rates during the summer for SSI due to either gram-positive cocci (P=.006) or gram-negative bacilli (P=.004). Multivariable regression analysis and sensitivity analyses confirmed our findings.
The rate of SSI following commonly performed surgical procedures was higher during the summer compared with the remainder of the year. Summer SSI rates remained elevated after stratification by organism and spinal versus nonspinal surgery, and rates did not change after controlling for other known SSI risk factors.
Infect. Control Hosp. Epidemiol. 2015;36(9):1011–1016
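The summer versus non-summer comparison above reduces to a rate ratio with a Wald confidence interval on the log scale. A stdlib-only sketch follows; the stratified counts are invented solely to reproduce the reported rates (1.11 vs 1.00 per 100 procedures) and are not the study's actual data:

```python
import math

def rate_ratio_test(cases_1, exposure_1, cases_0, exposure_0):
    """Rate ratio of group 1 vs group 0 with a Wald 95% CI and a two-sided
    p-value, both computed on the log scale (Poisson approximation)."""
    rr = (cases_1 / exposure_1) / (cases_0 / exposure_0)
    se = math.sqrt(1 / cases_1 + 1 / cases_0)          # SE of log rate ratio
    ci = (math.exp(math.log(rr) - 1.96 * se),
          math.exp(math.log(rr) + 1.96 * se))
    p = math.erfc(abs(math.log(rr)) / (se * math.sqrt(2)))  # two-sided normal p
    return rr, ci, p

# Hypothetical summer vs non-summer SSI counts and procedure volumes.
rr, ci, p = rate_ratio_test(1221, 110_000, 3310, 331_000)
print(f"PRR {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p={p:.3f}")
```

The study's reported estimates additionally come from adjusted Poisson regression models; this unadjusted calculation only illustrates where a prevalence rate ratio and its interval come from.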
We conducted a developmental analysis of genetic moderation of the effect of the Fast Track intervention on adult externalizing psychopathology. The Fast Track intervention enrolled 891 children at high risk to develop externalizing behavior problems when they were in kindergarten. Half of the enrolled children were randomly assigned to receive 10 years of treatment, with a range of services and resources provided to the children and their families, and the other half to usual care (controls). We previously showed that the effect of the Fast Track intervention on participants' risk of externalizing psychopathology at age 25 years was moderated by a variant in the glucocorticoid receptor gene. Children who carried copies of the A allele of the single-nucleotide polymorphism rs10482672 had the highest risk of externalizing psychopathology if they were in the control arm of the trial and the lowest risk if they were in the treatment arm. In this study, we test a developmental hypothesis about the origins of this "for better and for worse" Gene × Intervention interaction (G × I): that the observed G × I effect on adult psychopathology is mediated by the proximal impact of intervention on childhood externalizing problems and adolescent substance use and delinquency. We analyzed longitudinal data tracking the 270 European American children in the Fast Track randomized control trial with available genetic information (129 intervention children, 141 control group peers, 69% male) from kindergarten through age 25 years. Results show that the same pattern of "for better and for worse" susceptibility to intervention observed at the age-25 follow-up was evident already during childhood. At the elementary school follow-ups and at the middle/high school follow-ups, rs10482672 predicted better adjustment among children receiving the Fast Track intervention and worse adjustment among children in the control condition.
In turn, these proximal G × I effects early in development mediated the ultimate G × I effect on externalizing psychopathology at age 25 years. We discuss the contribution of these findings to the growing literature on genetic susceptibility to environmental intervention.
Funguria rarely represents true infection in the urinary tract. Excluding yeast from the catheter-associated urinary tract infection (CAUTI) surveillance definition reduced CAUTI rates by nearly 25% in community hospitals and at an academic, tertiary-care medical center.