We examined Clostridioides difficile infection (CDI) prevention practices and their relationship with hospital-onset healthcare facility-associated CDI rates (CDI rates) in Veterans Affairs (VA) acute-care facilities.
From January 2017 to February 2017, we conducted an electronic survey of CDI prevention practices and hospital characteristics in the VA. We linked survey data with CDI rate data for the period January 2015 to December 2016. We stratified facilities according to whether their overall CDI rate per 10,000 bed days of care was above or below the national VA mean CDI rate. We examined whether specific CDI prevention practices were associated with an increased risk of a CDI rate above the national VA mean CDI rate.
All 126 facilities responded (100% response rate). Since implementing CDI prevention practices in July 2012, 60 of 123 facilities (49%) reported a decrease in CDI rates; 22 of 123 facilities (18%) reported an increase, and 41 of 123 (33%) reported no change. Facilities reporting an increase in the CDI rate (vs those reporting a decrease) after implementing prevention practices were 2.54 times more likely to have CDI rates that were above the national mean CDI rate. Whether a facility’s CDI rates were above or below the national mean CDI rate was not associated with self-reported cleaning practices, duration of contact precautions, availability of private rooms, or certification of infection preventionists in infection prevention.
We found considerable variation in CDI rates. We were unable to identify which particular CDI prevention practices (i.e., bundle components) were associated with lower CDI rates.
The science of studying diamond inclusions for understanding Earth history has developed significantly over the past decades, with new instrumentation and techniques applied to diamond sample archives revealing the stories contained within diamond inclusions. This chapter reviews what diamonds can tell us about the deep carbon cycle over the course of Earth’s history. It reviews how the geochemistry of diamonds and their inclusions inform us about the deep carbon cycle, the origin of the diamonds in Earth’s mantle, and the evolution of diamonds through time.
Evidence suggests that sub-optimal maternal nutrition has implications for the developing offspring. We have previously shown that exposure to a low-protein diet during gestation was associated with upregulation of genes associated with cholesterol transport and packaging within the placenta. This study aimed to elucidate the effect of altering maternal dietary linoleic acid (LA; omega-6) to alpha-linolenic acid (ALA; omega-3) ratios as well as total fat content on placental expression of genes associated with cholesterol transport. The potential for maternal body mass index (BMI) to be associated with expression of these genes in human placental samples was also evaluated. Placentas were collected from 24 Wistar rats at 20-day gestation (term = 21–22-day gestation) that had been fed one of four diets containing varying fatty acid compositions during pregnancy, and from 62 women at the time of delivery. Expression of 14 placental genes associated with cholesterol packaging and transfer was assessed in rodent and human samples by quantitative real-time polymerase chain reaction. In rats, placental mRNA expression of ApoA2, ApoC2, Cubn, Fgg, Mttp and Ttr was significantly elevated (3–30-fold) in animals fed a high-LA (36% fat) diet, suggesting increased cholesterol transport across the placenta in this group. In women, maternal BMI was associated with few, inconsistent alterations in gene expression. In summary, sub-optimal maternal nutrition is associated with alterations in the expression of genes associated with cholesterol transport in a rat model. This may contribute to altered fetal development and potentially programme disease risk in later life. Further investigation of human placenta in response to specific dietary interventions is required.
Cognitive impairment is strongly linked with persistent disability in people with mood disorders, but the factors that explain cognitive impairment in this population are unclear.
To estimate the total effect of (a) bipolar disorder and (b) major depression on cognitive function, and the magnitude of the effect that is explained by potentially modifiable intermediate factors.
Cross-sectional study using baseline data from the UK Biobank cohort. Participants were categorised as having bipolar disorder (n = 2709), major depression (n = 50 975) or no mood disorder (n = 102 931 and n = 105 284). The outcomes were computerised tests of reasoning, reaction time and memory. The potential mediators were cardiometabolic disease and psychotropic medication. Analyses were informed by graphical methods and controlled for confounding using regression, propensity score-based methods and G-computation.
Group differences of small magnitude were found on a visuospatial memory test. Z-score differences for the bipolar disorder group were in the range −0.23 to −0.17 (95% CI −0.39 to −0.03) across different estimation methods, and for the major depression group they were approximately −0.07 (95% CI −0.10 to −0.03). One-quarter of the effect was mediated via psychotropic medication in the bipolar disorder group (−0.05; 95% CI −0.09 to −0.01). No evidence was found for mediation via cardiometabolic disease.
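The "one-quarter mediated" figure above follows from simple arithmetic on the reported estimates. A minimal sketch of that calculation, using a midpoint of the reported total-effect range as an assumed working value (the midpoint itself is not an estimate reported in the abstract):

```python
# Illustrative arithmetic only: figures are taken from the abstract above.
# The total effect for the bipolar group is reported as roughly -0.17 to -0.23;
# -0.20 is an assumed midpoint, not a reported point estimate.
total_effect = -0.20      # z-score difference, bipolar vs no mood disorder
indirect_effect = -0.05   # portion mediated via psychotropic medication

proportion_mediated = indirect_effect / total_effect
print(f"proportion mediated = {proportion_mediated:.0%}")
```

Dividing the indirect effect by the total effect gives roughly one-quarter, matching the abstract's statement.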
In a large community-based sample in middle to early old age, bipolar disorder and depression were associated with lower visuospatial memory performance, in part potentially due to psychotropic medication use. Mood disorders and their treatments will have increasing importance for population cognitive health as the proportion of older adults continues to grow.
Declaration of interest
I.J.D. is a UK Biobank participant. J.P.P. is a member of the UK Biobank Steering Committee.
This is a copy of the slides presented at the meeting but not formally written up for the volume.
Stripe domains in ferroelectric thin films form in order to minimize the total energy of the film. It has been known for some time that a stable configuration is reached when the decrease in elastic energy from domain formation is balanced by the energetic costs of domain wall formation, local elastic strains in the substrate, and internal electric field formation from domain polarizations. The size and strain of each domain are determined by the lattice mismatch and the energetic costs of interface formation. Recent piezoresponse force microscopy measurements have shown that BiFeO3 (BFO) films on SrRuO3/SrTiO3 (001) substrates form striped polarization domains. Since the details of the local structure and polarization cannot be measured at the same time with conventional techniques, we have used synchrotron x-ray microdiffraction to study these effects. Probing only a few domains at a time with the submicron x-ray spot resulted in a diffraction pattern near the substrate (103) reflection consisting of several BFO peaks. We have unambiguously assigned these peaks to individual structural variants. Based on these results, we propose a physical model that includes the striped domains. The structural variants within the stripes are similar to those predicted for elastic-energy-minimizing stripe patterns in rhombohedral films. The local piezoelectric properties were measured using time-resolved microdiffraction in order to examine the role of the striped domains in the linear response of the film. The out-of-plane piezoelectric coefficient d33 was approximately 50 pm/V, and the piezoelectric strain remained proportional to the electric field up to 0.55%, the maximum strain we measured. The projections of the in-plane piezoelectric coefficients onto the reciprocal space maps for different structural variants had vastly different values due to the differences in orientation of the domains.
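The linear response quoted above (d33 of about 50 pm/V, strain up to 0.55%) permits a quick back-of-envelope check of the field scale involved. A minimal sketch, in which the film thickness is a hypothetical value chosen only to convert field into an applied voltage:

```python
# Back-of-envelope check of the linear piezoelectric response: strain = d33 * E.
# d33 and the maximum strain come from the abstract; the 200 nm film thickness
# is a hypothetical value used only to express the field as a voltage.
d33 = 50e-12          # out-of-plane piezoelectric coefficient, m/V (50 pm/V)
max_strain = 0.0055   # 0.55%, the maximum measured strain

field = max_strain / d33       # electric field implied by the linear relation, V/m
thickness = 200e-9             # assumed film thickness, m (hypothetical)
voltage = field * thickness    # corresponding applied voltage, V

print(f"field = {field:.2e} V/m, voltage across 200 nm = {voltage:.0f} V")
```

The implied field of order 10^8 V/m is large but within the range ferroelectric thin films can sustain, which is consistent with the claim that the response stayed linear up to the maximum measured strain.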
The cognitive process of worry, which keeps negative thoughts in mind and elaborates the content, contributes to the occurrence of many mental health disorders. Our principal aim was to develop a straightforward measure of general problematic worry suitable for research and clinical treatment. Our secondary aim was to develop a measure of problematic worry specifically concerning paranoid fears.
An item pool concerning worry in the past month was evaluated in 250 non-clinical individuals and 50 patients with psychosis in a worry treatment trial. Exploratory factor analysis and item response theory (IRT) informed the selection of scale items. IRT analyses were repeated with the scales administered to 273 non-clinical individuals, 79 patients with psychosis and 93 patients with social anxiety disorder. Other clinical measures were administered to assess concurrent validity. Test-retest reliability was assessed with 75 participants. Sensitivity to change was assessed with 43 patients with psychosis.
A 10-item general worry scale (Dunn Worry Questionnaire; DWQ) and a five-item paranoia worry scale (Paranoia Worries Questionnaire; PWQ) were developed. All items were highly discriminative (DWQ a = 1.98–5.03; PWQ a = 4.10–10.7), indicating that small increases in latent worry lead to a high probability of item endorsement. The DWQ was highly informative across a wide range of the worry distribution, whilst the PWQ had greatest precision at clinical levels of paranoia worry. The scales demonstrated excellent internal reliability, test-retest reliability, concurrent validity and sensitivity to change.
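The link between the discrimination parameter a and item endorsement can be illustrated with the standard two-parameter logistic (2PL) item response function. A minimal sketch, assuming a 2PL model; the difficulty b and the latent-trait values are illustrative, not estimates from the study:

```python
import math

# Two-parameter logistic (2PL) IRT item response function.
# a = discrimination, b = difficulty, theta = latent worry level.
# The a value below sits inside the DWQ range quoted in the abstract;
# b = 0 and the theta grid are illustrative assumptions.
def p_endorse(theta, a, b):
    """Probability of endorsing an item at latent trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

a, b = 5.0, 0.0  # a highly discriminative item centred at b = 0
for theta in (-0.5, 0.0, 0.5):
    print(f"theta = {theta:+.1f}: P(endorse) = {p_endorse(theta, a, b):.2f}")
```

With a discrimination of 5, moving only half a standard deviation above the item's difficulty pushes the endorsement probability from near 0.5 to above 0.9, which is the behaviour the abstract describes.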
The new measures of general problematic worry and worry about paranoid fears have excellent psychometric properties.
Crisis resolution teams (CRTs) offer brief, intensive home treatment for people experiencing mental health crisis. CRT implementation is highly variable; positive trial outcomes have not been reproduced in scaled-up CRT care.
To evaluate a 1-year programme to improve CRTs’ model fidelity in a non-masked, cluster-randomised trial (part of the Crisis team Optimisation and RElapse prevention (CORE) research programme, trial registration number: ISRCTN47185233).
Fifteen CRTs in England received an intervention, informed by the US Implementing Evidence-Based Practice project, involving support from a CRT facilitator, online implementation resources and regular team fidelity reviews. Ten control CRTs received no additional support. The primary outcome was patient satisfaction, measured by the Client Satisfaction Questionnaire (CSQ-8), completed by 15 patients per team at CRT discharge (n = 375). Secondary outcomes included CRT model fidelity, continuity of care, staff well-being, in-patient admissions, in-patient bed use and CRT readmissions.
All CRTs were retained in the trial. Median follow-up CSQ-8 score was 28 in each group: the adjusted average in the intervention group was higher than in the control group by 0.97 (95% CI −1.02 to 2.97) but this was not significant (P = 0.34). There were fewer in-patient admissions, lower in-patient bed use and better staff psychological health in intervention teams. Model fidelity rose in most intervention teams and was significantly higher than in control teams at follow-up. There were no significant effects for other outcomes.
The CRT service improvement programme did not achieve its primary aim of improving patient satisfaction. It showed some promise in improving CRT model fidelity and reducing acute in-patient admissions.
Suicidal behaviour is common in acute psychiatric wards, resulting in distress and burden for patients, carers and society. Although psychological therapies for suicidal behaviour are effective in out-patient settings, there is little research on their effectiveness for in-patients who are suicidal.
Our primary objective was to determine whether cognitive–behavioural suicide prevention therapy (CBSP) was feasible and acceptable, compared with treatment as usual (TAU) for in-patients who are suicidal. Secondary aims were to assess the impact of CBSP on suicidal thinking, behaviours, functioning, quality of life, service use, cost-effectiveness and psychological factors associated with suicide.
A single-blind pilot randomised controlled trial comparing TAU to TAU plus CBSP in in-patients in acute psychiatric wards who are suicidal (the Inpatient Suicide Intervention and Therapy Evaluation (INSITE) trial, trial registration: ISRCTN17890126). The intervention consisted of TAU plus up to 20 CBSP sessions, over 6 months continuing in the community following discharge. Participants were assessed at baseline and at 6 weeks and 6 months post-baseline.
A total of 51 individuals were randomised (27 to TAU, 24 to TAU plus CBSP) of whom 37 were followed up at 6 months (19 in TAU, 18 in TAU plus CBSP). Engagement, attendance, safety and user feedback indicated that the addition of CBSP to TAU for in-patients who are acutely suicidal was feasible and acceptable while on in-patient wards and following discharge. Economic analysis suggests the intervention could be cost-effective.
Psychological therapy can be delivered safely to patients who are suicidal although modifications are required for this setting. Findings indicate a larger, definitive trial should be conducted.
Declaration of interest
The trial was hosted by Greater Manchester Mental Health NHS Trust (formerly Manchester Mental Health and Social Care NHS Trust). The authors are affiliated to the University of Manchester, Greater Manchester Mental Health NHS Foundation Trust, Lancashire Care NHS Foundation Trust and the Manchester Academic Health Sciences Centre. Y.A. is a trustee for a North-West England branch of the charity Mind.
Previous literature on partisan campaign behavior shows that third-party candidates do not have the same presence online as major-party candidates, and these differences have been linked regularly to campaign finance. Twitter, however, has changed the online campaigning game. Because Twitter is essentially free, third-party candidates can level the playing field with major-party candidates who have more financial resources. The question asked in this article is whether this is actually the case. Evans, Cordova, and Sipole (2014) showed that in 2012, third-party candidates were less likely to have accounts on Twitter; however, those who had accounts tweeted more often than major-party candidates. This article updates those findings to consider the behavior of third-party candidates during the 2014 and 2016 congressional races. Using a dataset of all candidates for both the US House and the US Senate, we show that the gap between major- and minor-party candidates on Twitter has begun to close. Third-party candidates, however, continue to communicate with their followers on Twitter in a different way than Democrats and Republicans do.
A national survey investigated the implementation of mental health crisis resolution teams (CRTs) in England. CRTs were mapped and team managers completed an online survey.
Ninety-five per cent of mapped CRTs (n = 233) completed the survey. Few CRTs adhered fully to national policy guidelines. CRT implementation and local acute care system contexts varied substantially. Access to CRTs for working-age adults appears to have improved, compared with a similar survey in 2012, despite no evidence of higher staffing levels. Specialist CRTs for children and for older adults with dementia have been implemented in some areas but are uncommon.
A national mandate and policy guidelines have been insufficient to implement CRTs fully as planned. Programmes to support adherence to the CRT model and CRT service improvement are required. Clearer policy guidance is needed on requirements for crisis care for young people and older adults.
A predictive risk stratification tool (PRISM) to estimate a patient's risk of an emergency hospital admission in the following year was trialled in general practice in an area of the United Kingdom. PRISM's introduction coincided with a new incentive payment (‘QOF’) in the regional contract for family doctors to identify and manage the care of people at high risk of emergency hospital admission.
Alongside the trial, we carried out a complementary qualitative study of processes of change associated with PRISM's implementation. We aimed to describe how PRISM was understood, communicated, adopted, and used by practitioners, managers, local commissioners and policy makers. We gathered data through focus groups, interviews and questionnaires at three time points (baseline, mid-trial and end-trial). We analyzed data thematically, informed by Normalisation Process Theory (1).
All groups showed high awareness of PRISM, but raised concerns about whether it could identify patients not yet known, and about whether there were sufficient community-based services to respond to care needs identified. All practices reported using PRISM to fulfil their QOF targets, but after the QOF reporting period ended, only two practices continued to use it. Family doctors said PRISM changed their awareness of patients and focused them on targeting the highest-risk patients, though they were uncertain about the potential for positive impact on this group.
Though external factors supported its uptake in the short term, with a focus on the highest risk patients, PRISM did not become a sustained part of normal practice for primary care practitioners.
Emergency admissions to hospital are a major financial burden on health services. In one area of the United Kingdom (UK), we evaluated a predictive risk stratification tool (PRISM) designed to support primary care practitioners to identify and manage patients at high risk of admission. We assessed the costs of implementing PRISM and its impact on health services costs. At the same time as the study, but independent of it, an incentive payment (‘QOF’) was introduced to encourage primary care practitioners to identify high risk patients and manage their care.
We conducted a randomized stepped wedge trial in thirty-two practices, with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. We analysed routine linked data on patient outcomes for 18 months (February 2013 – September 2014). We assigned standard unit costs in pounds sterling to the resources utilized by each patient. Cost differences between the two study phases were used in conjunction with differences in the primary outcome (emergency admissions) to undertake a cost-effectiveness analysis.
We included outcomes for 230,099 registered patients. We estimated a PRISM implementation cost of GBP0.12 per patient per year.
Costs of emergency department attendances, outpatient visits, emergency and elective admissions to hospital, and general practice activity were higher per patient per year in the intervention phase than in the control phase (adjusted δ = GBP76; 95% CI, GBP46–GBP106), an effect that was consistent and generally increased with risk level.
Despite low reported use of PRISM, it was associated with increased healthcare expenditure. This effect was unexpected and in the opposite direction to that intended. We cannot disentangle the effects of introducing the PRISM tool from those of imposing the QOF targets; however, since across the UK predictive risk stratification tools for emergency admissions have been introduced alongside incentives to focus on patients at risk, we believe that our findings are generalizable.
Various theories of democratic governance posit that citizens should vote for incumbent politicians when they provide good service, and vote for the opposition when service delivery is poor. But does electoral accountability work as theorized, especially in developing country contexts? Studying Southern African democracies, where infrastructural investment in basic services has expanded widely but not universally, we contribute a new empirical answer to this question. Analyzing the relationship between service provision and voting, we find a surprising negative relationship: improvements in service provision predict decreases in support for dominant party incumbents. Though stronger in areas where opposition parties control local government, the negative relationship persists even in those areas where local government is run by the nationally dominant party. Survey data provide suggestive evidence that citizen concerns about corruption and ratcheting preferences for service delivery may be driving citizen attitudes and behaviors. Voters may thus be responsive to service delivery, but perhaps in ways that are more nuanced than extant theories previously recognized.