Peripheral low-grade inflammation in depression is increasingly seen as a therapeutic target. We aimed to establish the prevalence of low-grade inflammation in depression, using different C-reactive protein (CRP) levels, through a systematic literature review and meta-analysis.
We searched the PubMed database from its inception to July 2018, and selected studies that assessed depression using a validated tool/scale, and allowed the calculation of the proportion of patients with low-grade inflammation (CRP >3 mg/L) or elevated CRP (>1 mg/L).
After quality assessment, 37 studies comprising 13 541 depressed patients and 155 728 controls were included. Based on the meta-analysis of 30 studies, the prevalence of low-grade inflammation (CRP >3 mg/L) in depression was 27% (95% CI 21–34%); this prevalence was not associated with sample source (inpatient, outpatient or population-based), antidepressant treatment, participant age, BMI or ethnicity. Based on the meta-analysis of 17 studies of depression and matched healthy controls, the odds ratio for low-grade inflammation in depression was 1.46 (95% CI 1.22–1.75). The prevalence of elevated CRP (>1 mg/L) in depression was 58% (95% CI 47–69%), and the meta-analytic odds ratio for elevated CRP in depression compared with controls was 1.47 (95% CI 1.18–1.82).
About a quarter of patients with depression show evidence of low-grade inflammation, and over half of patients show mildly elevated CRP levels. There are significant differences in the prevalence of low-grade inflammation between patients and matched healthy controls. These findings suggest that inflammation could be relevant to a large number of patients with depression.
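The pooled prevalence figures above come from a random-effects meta-analysis of study-level proportions. As a generic sketch of how such pooling is typically done (DerSimonian–Laird on the logit scale, with illustrative counts; not the paper's exact model or data), the calculation looks like this:

```python
import math

def pooled_prevalence_dl(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale. A generic sketch of standard meta-analytic practice,
    not a reconstruction of the reviewed paper's specific model."""
    yi, vi = [], []
    for e, n in zip(events, totals):
        p = e / n
        yi.append(math.log(p / (1 - p)))      # logit-transformed prevalence
        vi.append(1 / e + 1 / (n - e))        # approximate variance of the logit
    wi = [1 / v for v in vi]                  # fixed-effect (inverse-variance) weights
    ybar = sum(w * y for w, y in zip(wi, yi)) / sum(wi)
    q = sum(w * (y - ybar) ** 2 for w, y in zip(wi, yi))   # Cochran's Q
    c = sum(wi) - sum(w * w for w in wi) / sum(wi)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)  # between-study variance estimate
    wstar = [1 / (v + tau2) for v in vi]      # random-effects weights
    mu = sum(w * y for w, y in zip(wstar, yi)) / sum(wstar)
    se = math.sqrt(1 / sum(wstar))
    inv = lambda x: 1 / (1 + math.exp(-x))    # back-transform to a proportion
    return inv(mu), inv(mu - 1.96 * se), inv(mu + 1.96 * se)

# Hypothetical counts of patients with CRP >3 mg/L per study
pooled, ci_low, ci_high = pooled_prevalence_dl([30, 50, 20], [100, 200, 80])
```

The back-transformed pooled estimate and its 95% CI are the kind of quantities reported above (e.g. 27%, 95% CI 21–34%).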
Human exposure to lead may induce a variety of adverse effects on health including haematological, neurobehavioural, cardiovascular and renal changes and therefore continues to be a public health concern (Needleman 1989). Lead is dispersed in the environment from where it may be inhaled or ingested by man. Environmental exposure may arise from a number of potential sources: typically industrial emissions, exhaust from petrol engines, drinking water, foodstuffs, paint, soldered cans, lead glazed earthenware, dust and soil. A further source is tobacco smoke.
One of the founding concepts of the high-entropy alloy (HEA) field was that the lattice structures of multicomponent solid solution phases are highly distorted. The displacement of the constituent atoms, away from their ideal locations (local lattice strain), has been widely cited as the reason for a number of the observed physical and mechanical properties. However, very little data directly characterizing these lattice distortions exist and, thus, the validity of this hypothesis remains an open question. Here, the concept is reviewed by considering the underlying principles of the lattice distortions, the suitability of different assessment methods, and the direct experimental data currently available. It is found that, at present, there is no clear evidence that the lattice distortions in HEAs are significantly greater than those of conventional alloys. However, so few alloys have been appropriately characterized that this conclusion cannot be considered overarching and further research is required.
The value of the nosological distinction between non-affective and affective psychosis has frequently been challenged. We aimed to investigate the transdiagnostic dimensional structure and associated characteristics of psychopathology at First Episode Psychosis (FEP). Regardless of diagnostic categories, we expected that positive symptoms would occur more frequently in ethnic minority groups and in more densely populated environments, and that negative symptoms would be associated with indices of neurodevelopmental impairment.
This study included 2182 FEP individuals recruited across six countries, as part of the EUropean network of national schizophrenia networks studying Gene–Environment Interactions (EU-GEI) study. Symptom ratings were analysed using multidimensional item response modelling in Mplus to estimate five theory-based models of psychosis. We used multiple regression models to examine demographic and context factors associated with symptom dimensions.
A bifactor model, composed of one general factor and five specific dimensions of positive, negative, disorganization, manic and depressive symptoms, best represented the associations among ratings of psychotic symptoms. Positive symptoms were more common in ethnic minority groups. Urbanicity was associated with a higher score on the general factor. Men presented with more negative and fewer depressive symptoms than women. Early age at first contact with psychiatric services was associated with higher scores on the negative, disorganized, and manic symptom dimensions.
Our results suggest that the bifactor model of psychopathology holds across diagnostic categories of non-affective and affective psychosis at FEP, and demographic and context determinants map onto general and specific symptom dimensions. These findings have implications for tailoring symptom-specific treatments and inform research into the mood-psychosis spectrum.
Patient days and days present were compared to directly measured person time to quantify how choice of different denominator metrics may affect antimicrobial use rates. Overall, days present were approximately one-third higher than patient days. This difference varied among hospitals and units and was influenced by short length of stay.
Postpartum psychosis has recently been the focus of an in-depth storyline on a British television soap opera watched by millions of viewers.
This research explored how the storyline and concomitant increase in public awareness of postpartum psychosis have been received by women who have recovered from the condition.
Nine semistructured, one-to-one interviews were conducted with women who had experienced postpartum psychosis. Thematic analysis consistent with Braun and Clarke's six-step approach was used to generate themes from the data.
Public exposure provided by the postpartum psychosis portrayal was deemed highly valuable, and its mixed reception encompassed potentially therapeutic benefits in addition to harms.
Public awareness of postpartum psychosis strongly affects women who have experienced postpartum psychosis. This research highlights the complexity of using television drama for public education and may enable mental health organisations to better focus future practices of raising postpartum psychosis awareness.
Declaration of interest
GB is chair of Action on Postpartum Psychosis. JH is director of Action on Postpartum Psychosis. IJ is a trustee of Action on Postpartum Psychosis and was a consultant to the BBC (television company) on the EastEnders storyline. CD is a trustee of Action on Postpartum Psychosis, a trustee of Bipolar UK, vice chair of the Maternal Mental Health Alliance, and was a consultant to the BBC (television company) on the EastEnders storyline.
A predictive risk stratification tool (PRISM) to estimate a patient's risk of an emergency hospital admission in the following year was trialled in general practice in an area of the United Kingdom. PRISM's introduction coincided with a new incentive payment (‘QOF’) in the regional contract for family doctors to identify and manage the care of people at high risk of emergency hospital admission.
Alongside the trial, we carried out a complementary qualitative study of processes of change associated with PRISM's implementation. We aimed to describe how PRISM was understood, communicated, adopted, and used by practitioners, managers, local commissioners and policy makers. We gathered data through focus groups, interviews and questionnaires at three time points (baseline, mid-trial and end-trial). We analyzed data thematically, informed by Normalisation Process Theory (1).
All groups showed high awareness of PRISM, but raised concerns about whether it could identify patients not already known to services, and about whether there were sufficient community-based services to respond to the care needs identified. All practices reported using PRISM to fulfil their QOF targets, but after the QOF reporting period ended, only two practices continued to use it. Family doctors said PRISM changed their awareness of patients and focused them on targeting the highest-risk patients, though they were uncertain about the potential for positive impact on this group.
Though external factors supported its uptake in the short term, with a focus on the highest risk patients, PRISM did not become a sustained part of normal practice for primary care practitioners.
New approaches are needed to safely reduce emergency admissions to hospital by targeting interventions effectively in primary care. A predictive risk stratification tool (PRISM) identifies each registered patient's risk of an emergency admission in the following year, allowing practitioners to identify and manage those at higher risk. We evaluated the introduction of PRISM in primary care in one area of the United Kingdom, assessing its impact on emergency admissions and other service use.
We conducted a randomized stepped wedge trial with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. PRISM was implemented in eleven primary care practice clusters (total thirty-two practices) over a year from March 2013. We analyzed routine linked data outcomes for 18 months.
We included outcomes for 230,099 registered patients, assigned to ranked risk groups.
Overall, the rate of emergency admissions was higher in the intervention phase than in the control phase: adjusted difference in number of emergency admissions per participant per year at risk, δ = 0.011 (95% CI 0.010 to 0.013). Patients in the intervention phase spent more days in hospital per year: adjusted δ = 0.029 (95% CI 0.026 to 0.031). Both effects were consistent across risk groups.
Primary care activity also increased in the intervention phase overall: δ = 0.011 (95% CI 0.007 to 0.014), except for the two highest-risk groups, which showed a decrease in the number of days with recorded activity.
Introduction of a predictive risk model in primary care was associated with increased emergency episodes across the general practice population and at each risk level, in contrast to the intended purpose of the model. Future evaluation work could assess the impact of targeting of different services to patients across different levels of risk, rather than the current policy focus on those at highest risk.
Emergency admissions to hospital are a major financial burden on health services. In one area of the United Kingdom (UK), we evaluated a predictive risk stratification tool (PRISM) designed to support primary care practitioners to identify and manage patients at high risk of admission. We assessed the costs of implementing PRISM and its impact on health services costs. At the same time as the study, but independent of it, an incentive payment (‘QOF’) was introduced to encourage primary care practitioners to identify high risk patients and manage their care.
We conducted a randomized stepped wedge trial in thirty-two practices, with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. We analysed routine linked data on patient outcomes for 18 months (February 2013 – September 2014). We assigned standard unit costs in pound sterling to the resources utilized by each patient. Cost differences between the two study phases were used in conjunction with differences in the primary outcome (emergency admissions) to undertake a cost-effectiveness analysis.
We included outcomes for 230,099 registered patients. We estimated a PRISM implementation cost of GBP0.12 per patient per year.
Costs of emergency department attendances, outpatient visits, emergency and elective admissions to hospital, and general practice activity were higher per patient per year in the intervention phase than in the control phase (adjusted δ = GBP76, 95% CI GBP46 to GBP106), an effect that was consistent across, and generally increased with, risk level.
Despite low reported use of PRISM, it was associated with increased healthcare expenditure. This effect was unexpected and in the opposite direction to that intended. We cannot disentangle the effects of introducing the PRISM tool from those of imposing the QOF targets; however, since across the UK predictive risk stratification tools for emergency admissions have been introduced alongside incentives to focus on patients at risk, we believe that our findings are generalizable.
A link between infection, inflammation, neurodevelopment and adult illnesses has been proposed. The objective of this study was to examine the association between infection burden during childhood – a critical period of development for the immune and nervous systems – and subsequent systemic inflammatory markers and general intelligence. In the Avon Longitudinal Study of Parents and Children, a prospective birth cohort in England, we examined the association of exposure to infections during childhood, assessed at seven follow-ups between age 1·5 and 7·5 years, with subsequent: (1) serum interleukin 6 and C-reactive protein (CRP) levels at age 9; (2) intelligence quotient (IQ) at age 8. We also examined the relationship between inflammatory markers and IQ. Very high infection burden (90+ percentile) was associated with higher CRP levels, but this relationship was explained by body mass index (adjusted odds ratio (OR) 1·19; 95% confidence interval (CI) 0·95–1·50), maternal occupation (adjusted OR 1·23; 95% CI 0·98–1·55) and atopic disorders (adjusted OR 1·24; 95% CI 0·98–1·55). Higher CRP levels were associated with lower IQ; adjusted β = −0·79 (95% CI −1·31 to −0·27); P = 0·003. There was no strong evidence for an association between infection and IQ. The findings indicate that childhood infections do not have an independent, lasting effect on circulating inflammatory marker levels subsequently in childhood; however, elevated inflammatory markers may be harmful for intellectual development/function.
Shape of the carcass is considered important commercially and is usually assessed using a subjective score for conformation. Carcasses of higher conformation are perceived to have higher lean to bone (L:B) ratios and to give joints of better shape at a given weight, characterised as shorter and having a greater thickness of muscle. Some of these benefits have been shown, but so has a positive association between conformation and fatness. Purchas et al. (1991) proposed that muscularity indices could be used as an alternative to the conformation score. The objectives of this study were to investigate the relationships between muscularity, shape of joints and composition within breeds, and the relationships between different muscularity indices. Knowledge of the latter relationships is important to determine how many indices are required to adequately describe carcass muscularity.
Currently fewer than 50% of UK lambs produce carcasses of acceptable quality for the domestic and export markets, which compromises the competitiveness of sheep farming. Carcass quality can be changed by selection, and this is now being taken advantage of in terminal sire breeds and, to a lesser extent, in hill breeds. However, little attention has yet been focused on the crossing breeds, which have relatively poor carcass quality, in spite of the large impact such breeds have on the slaughter generation. Recently, a long-term project began to develop breeding programmes relevant to crossing sire (‘longwool’) breeds. Its objective is to produce a selection index to improve carcass quality without compromising the reproductive performance or maternal ability of these breeds. The Bluefaced Leicester is the most prevalent crossing sire breed with its crossbred (‘Mule’) daughters out of draft hill ewes accounting for 89% of crossbred (longwool x hill) ewes in the UK (Pollot, 1998).
With increasing emphasis in the meat sector on better and more consistent quality, carcass leanness and conformation is now an important issue for sheep breeders. In 1999, only 47% of all carcasses in the UK met the target specifications for weight, fat and conformation (MLC, 2000), highlighting the potential for improvement. In the current stratified crossbreeding system, crossbred wether lambs are a by-product of the production of dam line ewes for the lowland sector. If their carcass quality is sufficient, they can give a valuable boost to the economics of the breeding programme. Genetic improvement of carcass quality in crossing sire breeds would benefit the crossbred wethers, as well as filter through to the terminal sire cross lambs produced by the crossbred ewes. This work aims to assess the influence of selection index and live conformation score of crossing sires (in this case Bluefaced Leicesters) on growth and carcass quality traits of their crossbred progeny, as a first step towards designing a genetic improvement programme for crossing sire sheep.
To identify developmental sub-groups of depressive symptoms during the second decade of life, a critical period of brain development, using data from a prospective birth cohort. To test whether childhood intelligence and inflammatory markers are associated with subsequent persistent depressive symptoms.
IQ, a proxy for neurodevelopment, was measured at age 8 years. Interleukin 6 (IL-6) and C-reactive protein, typical inflammatory markers, were measured at age 9 years. Depressive symptoms were measured six times between 10 and 19 years using the Short Mood and Feelings Questionnaire (SMFQ); scores were coded as a binary variable and then used in latent class analysis to identify developmental sub-groups of depressive symptoms.
Longitudinal SMFQ data from 9156 participants yielded three distinct population sub-groups of depressive symptoms: no symptoms (81.2%); adolescent-onset symptoms (13.2%); persistent symptoms (5.6%). Lower IQ and higher IL-6 levels in childhood were independently associated with subsequent persistent depressive symptoms in a linear, dose–response fashion, but not with adolescent-onset symptoms. Compared with the group with no symptoms the adjusted odds ratio for persistent depressive symptoms per s.d. increase in IQ was 0.80 (95% CI, 0.68–0.95); that for IL-6 was 1.20 (95% CI, 1.03–1.39). Evidence for an association with IL-6 remained after controlling for initial severity of depressive symptoms at 10 years. There was no evidence that IL-6 moderated or mediated the IQ-persistent depressive symptom relationship.
The results indicate potentially important roles for two distinct biological processes, neurodevelopment and inflammation, in the aetiology of persistent depressive symptoms in young people.
OBJECTIVES/SPECIFIC AIMS: Nonalcoholic fatty liver disease (NAFLD) is the most common cause of liver disease in the United States and increases risk for cirrhosis and liver cancer. Identifying modifiable risk factors for NAFLD could allow better targeting of prevention programs. Insulin resistance (IR) plays a significant role in the development and progression of NAFLD. IR is also an important precursor to the development of type 2 diabetes (T2DM). However, the development and duration of IR during young adulthood and its association with NAFLD and T2DM in midlife is unclear. To test whether trajectories of IR using homeostatic model assessment (HOMA-IR) change throughout early adulthood are associated with risk of prevalent NAFLD and T2DM among persons with NAFLD in midlife independent of current or baseline HOMA-IR. METHODS/STUDY POPULATION: Participants from the CARDIA study, a prospective multicenter population-based biracial cohort of adults (baseline age 18–30 years), underwent HOMA-IR measurement (≥8 h fasting and not pregnant) at baseline (1985–1986) and follow-up exam years 7, 10, 15, 20, and 25. At Year 25 (Y25, 2010–2011), liver fat was assessed by noncontrast computed tomography (CT). NAFLD was defined as CT liver attenuation <51 Hounsfield Units after exclusion of other causes of liver fat (alcohol/hepatitis/medications). Latent mixture modeling was used to identify 25-year trajectories in HOMA-IR over time. Multivariable logistic regression models were used to assess associations between HOMA-IR trajectory groups and prevalent NAFLD with adjustment for baseline or Y25 HOMA-IR. RESULTS/ANTICIPATED RESULTS: Among 3060 participants, we identified 3 distinct trajectory groups for HOMA-IR for individuals free from diabetes in middle adulthood: qualitatively low-stable (46.7% of the cohort), moderate-increasing (42.0%), and high-increasing (11.3%) with a NAFLD prevalence at Y25 of: 8.3%, 33.4%, and 63.5%, respectively (p-trend<0.0001). 
After adjustment for confounders (baseline smoking status, alcohol use, body mass index, physical activity score, systolic blood pressure, antihypertensive medication use, and total/HDL cholesterol ratio) and baseline HOMA-IR, increasing HOMA-IR trajectories were associated with greater NAFLD prevalence compared with the low-stable trajectory group [odds ratio (95% CI): 5.8 (4.3–7.9) and 22.3 (14.2–34.9) for moderate and high, respectively]. These associations were attenuated, but remained significant, even after controlling for current Y25 HOMA-IR [OR=3.6 (2.6–5.0) for moderate and 5.9 (3.4–10.3) for high (referent: low)]. Among participants with NAFLD (n=511), high-increasing HOMA-IR trajectory was associated with greater prevalent [OR=6.5 (1.6–25.7)] and incident [OR=8.7 (2.2–34.4)] T2DM at Y25 independent of confounders and Y25 HOMA-IR (referent: low-stable). DISCUSSION/SIGNIFICANCE OF IMPACT: In this community-based sample of individuals free from diabetes at baseline, an increasing HOMA-IR trajectory through young adulthood was associated with greater NAFLD prevalence in midlife. Knowledge of changes in IR throughout adulthood provides new information on the risk of T2DM among persons with NAFLD in midlife independent of current level of IR. These findings highlight early identification of increasing IR as a potential target for primary prevention of T2DM in the setting of NAFLD.
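The trajectory analysis above is built on repeated HOMA-IR measurements. The standard HOMA-IR formula is fasting glucose (mg/dL) × fasting insulin (µU/mL) / 405, equivalently glucose (mmol/L) × insulin (µU/mL) / 22.5. A minimal sketch of the computation (the threshold shown in the usage comment is purely illustrative, not taken from this study):

```python
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance (HOMA-IR).

    Standard formula: fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405,
    which equals glucose (mmol/L) x insulin (uU/mL) / 22.5.
    Requires fasting (>= 8 h) measurements, per the study protocol.
    """
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Example: fasting glucose 90 mg/dL, fasting insulin 9 uU/mL -> HOMA-IR of 2.0
value = homa_ir(90.0, 9.0)
```

Repeating this calculation at each exam year (baseline and years 7, 10, 15, 20, 25) yields the per-participant series that the latent mixture model groups into low-stable, moderate-increasing and high-increasing trajectories.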
Sleep loss may trigger mood episodes in people with bipolar disorder but individual differences could influence vulnerability to this trigger.
To determine whether bipolar subtype (bipolar disorder type I (BD-I) or type II (BD-II)) and gender were associated with vulnerability to the sleep loss trigger.
During a semi-structured interview, 3140 individuals (68% women) with bipolar disorder (66% BD-I) reported whether sleep loss had triggered episodes of high or low mood. DSM-IV diagnosis of bipolar subtype was derived from case notes and interview data.
Sleep loss triggering episodes of high mood was associated with female gender (odds ratio (OR) = 1.43, 95% CI 1.17–1.75, P<0.001) and BD-I subtype (OR = 2.81, 95% CI 2.26–3.50, P<0.001). Analyses of sleep loss triggering low mood were not significant following adjustment for confounders.
Gender and bipolar subtype may increase vulnerability to high mood following sleep deprivation. This should be considered in situations where patients encounter sleep disruption, such as shift work and international travel.
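The odds ratios reported above summarise associations from 2×2 cross-tabulations (e.g. gender × whether sleep loss triggered high mood), with confidence intervals typically obtained by the Wald (Woolf) method on the log scale. A generic sketch of that calculation, using made-up counts rather than the study's data:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald (Woolf) 95% CI from a 2x2 table:
        a = exposed cases,   b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases.
    A textbook formula sketch; the study's reported ORs were
    additionally adjusted for confounders in regression models."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustrative counts only: 20/100 exposed vs 10/100 unexposed reporting the trigger
or_, lower, upper = odds_ratio_ci(20, 80, 10, 90)
```

An OR above 1 with a CI excluding 1 (as for the female-gender and BD-I results above) indicates an association unlikely to be due to chance at the 5% level.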