Most research on the causes of women's underrepresentation examines one of two stages of the political pipeline: the development of nascent political ambition or specific aspects of the campaign and election process. In this article, we make a different kind of contribution. We build on the growing literature on gender, psychology, and representation to provide an analysis of what kinds of men and women make it through the political pipeline at each stage. This allows us to draw some conclusions about the ways in which the overall process is similar and different for women and men. Using surveys of the general U.S. population (N = 1,939) and elected municipal officials such as mayors and city councilors (N = 2,354) that measure the distribution of Big Five personality traits, we find that roughly the same types of men and women have nascent political ambition; there is just an intercept shift for sex. In contrast, male and female elected officials have different personality profiles. These differences do not reflect underlying distributions in the general population or the population of political aspirants. In short, our data suggest that socialization into political ambition is similar for men and women, but campaign and election processes are not.
Reducing food portion size could reduce energy intake. However, it is unclear at what point consumers respond to reductions by increasing intake of other foods. We predicted that a change in served portion size would only result in significant additional eating within the same meal if the resulting portion size was no longer visually perceived as ‘normal’. Participants in two crossover experiments (Study 1: n 45; Study 2: n 37; adults, 51 % female) were served different-sized lunchtime portions on three occasions that were perceived by a previous sample of participants as ‘large-normal’, ‘small-normal’ and ‘smaller than normal’, respectively. Participants were able to serve themselves additional helpings of the same food (Study 1) or dessert items (Study 2). In Study 1 there was a small but significant increase in additional intake when participants were served the ‘smaller than normal’ compared with the ‘small-normal’ portion (m difference = 161 kJ, P = 0·002, d = 0·35), but there was no significant difference between the ‘small-normal’ and ‘large-normal’ conditions (m difference = 88 kJ, P = 0·08, d = 0·24). A similar pattern was observed in Study 2 (m difference = 149 kJ, P = 0·06, d = 0·18; m difference = 83 kJ, P = 0·26, d = 0·10). However, smaller portion sizes were each associated with a significant reduction in total meal intake. The findings provide preliminary evidence that reductions that result in portions appearing ‘normal’ in size may limit additional eating, but confirmatory research is needed.
At GE Research, we are combining “physics” with artificial intelligence and machine learning to advance manufacturing design, processing, and inspection, turning innovative technologies into real products and solutions across our industrial portfolio. This article provides a snapshot of how this physical plus digital transformation is evolving at GE.
Shiga toxin-producing Escherichia coli (STEC) infection can cause serious illness including haemolytic uraemic syndrome. The role of socio-economic status (SES) in differential clinical presentation and exposure to potential risk factors amongst STEC cases has not previously been reported in England. We conducted an observational study using a dataset of all STEC cases identified in England, 2010–2015. Odds ratios for clinical characteristics of cases and foodborne, waterborne and environmental risk factors were estimated using logistic regression, stratified by SES, adjusting for baseline demographic factors. Incidence was higher in the highest SES group compared to the lowest (RR 1.54, 95% CI 1.19–2.00). Odds of Accident and Emergency attendance (OR 1.35, 95% CI 1.10–1.75) and hospitalisation (OR 1.71, 95% CI 1.36–2.15) because of illness were higher in the most disadvantaged compared to the least, suggesting potential lower ascertainment of milder cases or delayed care-seeking behaviour in disadvantaged groups. Advantaged individuals were significantly more likely to report salad/fruit/vegetable/herb consumption (OR 1.59, 95% CI 1.16–2.17), non-UK or UK travel (OR 1.76, 95% CI 1.40–2.27; OR 1.85, 95% CI 1.35–2.56) and environmental exposures (walking in a paddock, OR 1.82, 95% CI 1.22–2.70; soil contact, OR 1.52, 95% CI 1.09–2.13), suggesting that other unmeasured risks, such as person-to-person transmission, could be more important in the most disadvantaged group.
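The stratified odds ratios and confidence intervals reported above are standard epidemiological quantities. As a minimal sketch, here is how an odds ratio and a Wald 95% CI are derived from a 2×2 exposure table; the counts below are made up for illustration, not the study's data:

```python
import math

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls, z=1.96):
    """Odds ratio with Wald 95% confidence interval from a 2x2 table."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 40 of 200 cases exposed vs. 25 of 200 controls exposed
print(odds_ratio_ci(40, 160, 25, 175))  # OR = 1.75 with its 95% CI
```

The study's estimates additionally adjust for demographic factors via logistic regression, which this crude-table sketch does not capture.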
OBJECTIVES/SPECIFIC AIMS: The Life’s Simple 7 (LS7) metric was created by the American Heart Association with the goal of educating the public on seven modifiable factors that contribute to heart health. While it is well documented that these ideal health behaviors lower risk of cardiovascular disease (CVD) in the general population, the association between the LS7 ideal health metrics and end-stage renal disease (ESRD) risk has not been examined in a lower socioeconomic population at high risk for both ESRD and CVD. Our objective is to examine the association between the LS7 score and incident ESRD in a cohort of white and black men and women in the southeastern US, where rates of CVD and ESRD are high. METHODS/STUDY POPULATION: The Southern Community Cohort Study recruited ~86,000 low-income blacks and whites in the southeastern US (2002-2009). Utilizing a nested case-control design, our analysis included 1628 incident cases of ESRD identified via linkage of the cohort with the United States Renal Data System (USRDS) from January 1, 2002 to March 31, 2015. Controls (n = 4884) were individually matched 3:1 with ESRD cases based on age, sex, and race. Demographic, medical, and lifestyle information was obtained via baseline questionnaire. The AHA definitions for ideal health were used for non-smoking (never or quit >12 months), body mass index (BMI <25 kg/m2) and physical activity (>75 min/week of vigorous physical activity or >150 min/week of moderate/vigorous activity). Modified definitions were used for consuming a healthy diet [Healthy Eating Index (HEI10) score >70] and for blood pressure, fasting plasma glucose, and total cholesterol, based on self-reported absence of a diagnosis of hypertension, diabetes, and hypercholesterolemia, respectively. The number of ideal health parameters was summed to generate the LS7 score, which ranged from 0-7 with higher scores indicating more ideal health.
Adjusted odds ratios (95% confidence intervals) for incident ESRD associated with LS7 score were calculated using conditional logistic regression models, adjusting for income and education. The SCCS ESRD case-cohort dataset will be available by TS 2019 and analyses will be completed to adjust for baseline estimated glomerular filtration rate (eGFR) as a marker of kidney function and to examine whether eGFR modifies the relationship between LS7 and incident ESRD. RESULTS/ANTICIPATED RESULTS: At baseline, mean age was 54 years, 55% (3600) of participants were women, and 87% (5656) were black. A total of 58% (943) of ESRD cases were non-smokers compared to 54% (2633) of controls. ESRD cases had higher prevalence of BMI >25 kg/m2 (81% vs. 74%), hypertension (84% vs. 59%), hypercholesterolemia (48% vs. 34%), and diabetes (66% vs. 22%) compared to controls. A total of 18% (839) of controls and 12% (194) of ESRD cases met ideal exercise recommendations, and 20% of cases (302) and of controls (916) had a HEI10 score above 70. The median LS7 score for controls and ESRD cases was 3 and 2, respectively; 17% (983) of participants had a low score (0-1) while 2% (105) met 6 or 7 ideal health metrics. Higher LS7 score was associated with lower odds of ESRD (P-trend <0.001). Participants with LS7 score >3 (above median) had 75% reduced odds of ESRD (OR 0.25; 95% CI 0.22, 0.29) compared to those with a score of 2 or less. DISCUSSION/SIGNIFICANCE OF IMPACT: In the SCCS population, the presence of any 3 or more ideal health behaviors is associated with reduced odds of developing ESRD. The components of the LS7 represent important modifiable risk factors that may be targets for future patient-driven interventions. The attributable risk due to each factor is needed to determine which ideal behaviors are the most beneficial.
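The LS7 scoring described in the methods is a simple count of binary ideal-health indicators. A minimal sketch, with thresholds paraphrased from the abstract and hypothetical field names (the study's actual variable coding is not given):

```python
def ls7_score(p):
    """Sum of ideal-health indicators (0-7), per the modified AHA definitions above."""
    ideal = [
        p["nonsmoker"],                                          # never smoked or quit >12 months
        p["bmi"] < 25,                                           # kg/m^2
        p["vigorous_min_wk"] > 75 or p["moderate_min_wk"] > 150, # physical activity
        p["hei10"] > 70,                                         # Healthy Eating Index
        not p["hypertension_dx"],                                # no self-reported diagnosis
        not p["diabetes_dx"],
        not p["hypercholesterolemia_dx"],
    ]
    return sum(ideal)

example = {"nonsmoker": True, "bmi": 27.0, "vigorous_min_wk": 0,
           "moderate_min_wk": 180, "hei10": 55, "hypertension_dx": True,
           "diabetes_dx": False, "hypercholesterolemia_dx": False}
print(ls7_score(example))  # → 4
```

Higher scores indicate more ideal health; the analysis then compares ESRD odds across score strata.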
Investing in global health and development requires making difficult choices about what policies to pursue and what level of resources to devote to different initiatives. Methods of economic evaluation are well established and widely used to quantify and compare the impacts of alternative investments. However, if not well conducted and clearly reported, these evaluations can lead to erroneous conclusions. Differences in analytic methods and assumptions can obscure important differences in impacts. To increase the comparability of these evaluations, improve their quality, and expand their use, this special issue includes a series of papers developed to support reference case guidance for benefit-cost analysis. In this introductory article, we discuss the background and context for this work, summarize the process we are following, describe the overall framework, and introduce the articles that follow.
The mouth may be presented and understood in different ways, be subject to judgement by others and, as we age, may intrude on everyday life due to problems that affect oral health. However, research that considers older people's experiences concerning their mouths and teeth is limited. This paper reports on qualitative research with 43 people in England and Scotland, aged 65–91, exploring the significance of the mouth over the lifecourse. It uses the concept of ‘mouth talk’ to explore narratives of maintaining, losing and replacing teeth. Participants engaged in ‘mouth talk’ to downplay the impact of the mouth, demonstrate socially appropriate ageing, and distance themselves from ‘real’ old age by retaining a moral identity and sense of self. They also found means to challenge dominant discourses of ageing in how they spoke about missing teeth. Referring to Leder's notion of ‘dys-appearance’ and Gilleard and Higgs’ work on the social imaginary of the fourth age, the study illustrates the ways in which ‘mouth talk’ can contribute to sustaining a sense of self in later life, presenting the ageing mouth, with and without teeth, as an absent presence. It also argues for the importance of listening to stories of the mouth in order to expand understanding of people's approaches to oral health in older age.
OBJECTIVES/SPECIFIC AIMS: Discrimination within the healthcare system and physician distrust have been associated with adverse clinical outcomes for people living with HIV; however, many studies do not link these variables to biological data. We hypothesize that perceived healthcare discrimination and physician distrust are associated with higher longitudinal viremia among HIV-positive women. METHODS/STUDY POPULATION: A 2006 cross-sectional survey assessed healthcare-based discrimination and physician trust in 92 HIV-positive and 46 high-risk HIV-negative women from the Washington DC Women’s Interagency HIV Study (DC-WIHS). In addition, we identified HIV viral load trajectories and demographics from the HIV-positive women who contributed ≥4 semi-annual visits from 1994 to 2015. Viral suppression was defined by assay detection limits (<80 to <20 copies/mL). Group-based probability trajectory analyses grouped women based on longitudinal viral load patterns and identified 3 groups: sustained viremia (n=32) with low viral suppression over time, intermittent viremia (n=27) with varying suppression over time, and non-viremia (n=33) with high longitudinal viral suppression. Ordinal logistic regression models assessed trajectory group and discrimination variables, controlling for demographics, using stepwise selection with a significance level of α=0.05. RESULTS/ANTICIPATED RESULTS: Most women were African American (60%), insured at the time of visit (89%) and nonsmokers (56%). While physician trust did not differ by HIV viral trajectory group, trust was lower among HIV-negative women compared with HIV-positive women (p=0.03). Over 1 in 5 HIV-positive women reported discrimination in the healthcare system based on HIV status (21.3%). Report of discrimination based on drug/alcohol use was higher among HIV-negative participants (19.2% vs. 6.5%, p=0.01).
Among women with longitudinal sustained viremia, reports of discrimination based on race/ethnicity (29%, p=0.004) and sexual orientation (15.6%, p=0.008) were higher than within the non-viremic and intermittent trajectory groups. DISCUSSION/SIGNIFICANCE OF IMPACT: Physician trust was not associated with increased longitudinal viral suppression among HIV-positive women in Washington, DC. Lack of physician trust among high-risk HIV-negative women could have implications for uptake of prevention methods. Reports of discrimination vary between HIV-positive and HIV-negative women in the Washington, DC area. The finding of healthcare system distrust among HIV-negative women has implications outside the realm of HIV, as this lack of trust may affect risk for other disease states among similar populations of women.
Extensively drug-resistant (XDR) tuberculosis (TB) poses a threat to public health due to its complicated, expensive and often unsuccessful treatment. A cluster of three XDR TB cases was detected among foreign medical students of a Romanian university. The contact investigations included tuberculin skin testing or interferon gamma release assay, chest X-ray, sputum smear microscopy, culture, drug susceptibility testing, genotyping and whole-genome sequencing (WGS), and were directed at students, personnel of the university, family members and other close contacts of the cases. These investigations increased the total number of cases to seven. All confirmed cases shared a very similar WGS profile. Two more cases were epidemiologically linked, but no laboratory confirmation exists. Despite all efforts, the source of the outbreak was not identified, but transmission was controlled. The investigation was conducted by a team including epidemiologists and microbiologists from five countries (Finland, Israel, Romania, Sweden and the UK) and from the European Centre for Disease Prevention and Control. Our report shows how countries can collaborate to control the spread of XDR TB by exchanging information about cases and their contacts, enabling the identification of additional cases and transmission events and supporting the source investigation.
Glyphosate-resistant (GR) common waterhemp (CW) is a localized weed in Ontario and one of the most problematic weeds in the US Corn Belt. First confirmed in Ontario in 2014, GR CW had been confirmed in forty fields in three counties in Ontario as of 2015. Historically, the primary POST herbicides used for the control of CW in soybean were glyphosate, acifluorfen and fomesafen, but resistance to all three has been confirmed in many US states. Research was conducted in 2015 and 2016 to determine the control of GR CW with some of the new herbicide-resistant soybean technologies, including glufosinate (LibertyLink), 2,4-D and glyphosate (Enlist), and isoxaflutole, mesotrione, and glufosinate (HPPD-resistant). Glyphosate-resistant CW was controlled (≥90%) all season with a two-pass weed control system, defined here as a PRE herbicide followed by a POST herbicide, across all herbicide-resistant soybean technologies evaluated. At 12 WAA, the two-pass programs in the LibertyLink, Enlist, and HPPD-resistant systems controlled GR CW up to 98, 98, and 92%, respectively, and at 4 WAA reduced GR CW densities to 0 to 2% of the weedy control. The two-pass programs provided greater GR CW control than PRE or POST herbicides alone. This study found that the use of two-pass weed control programs in glufosinate-resistant, glyphosate DMA/2,4-D choline-resistant and HPPD-resistant soybean can provide excellent control of GR CW and can be a valuable tool for reducing the selection intensity for herbicide-resistant weeds. Through the rotational use of different technologies, growers may be able to better manage their weed populations and reduce the risk of resistance compared with the repeated use of a single herbicide.
A controversy at the 2016 IUCN World Conservation Congress on the topic of closing domestic ivory markets (the 007, or so-called James Bond, motion) has given rise to a debate on IUCN's value proposition. A cross-section of authors who are engaged in IUCN but not employed by the organization, and with diverse perspectives and opinions, here argue for the importance of safeguarding and strengthening the unique technical and convening roles of IUCN, providing examples of what has and has not worked. Recommendations for protecting and enhancing IUCN's contribution to global conservation debates and policy formulation are given.
Accurate and reproducible patient positioning is a critical step in radiotherapy for breast cancer. This has seen the use of permanent skin markings become standard practice in many centres. Permanent skin markings may have a negative impact on long-term cosmetic outcome, which may, in turn, have psychological implications in terms of body image. The aim of this study was to investigate the feasibility of using a semi-permanent tattooing device for the administration of skin marks for breast radiotherapy set-up.
Materials and methods
This was designed as a phase II double-blinded randomised controlled study comparing our standard permanent tattoos with the Precision Plus Micropigmentation (PPMS) device method. Patients referred for radical breast radiotherapy were eligible for the study. Each study participant had three marks applied using a randomised combination of the standard permanent and PPMS methods and was blinded to the type of each mark. Follow-up was at routine appointments until 24 months post radiotherapy. Participants and a blind assessor were invited to score the visibility of each tattoo at each follow-up using a Visual Analogue Scale. Tattoo scores at each time point and change in tattoo scores at 24 months were analysed by a general linear model using the patient as a fixed effect and the type of tattoo (standard or research) as a covariate. A simple questionnaire was used to assess radiographer feedback on using the PPMS device.
In total, 60 patients were recruited to the study, of whom 55 were available for follow-up at 24 months. Semi-permanent tattoos remained visible at 24 months but demonstrated a greater degree of fade than the permanent tattoos at 24 months (the final time point) post completion of radiotherapy. This difference was not statistically significant, although it was more apparent for the patient scores (p=0·071) than the blind assessor scores (p=0·27). No semi-permanent tattoos required re-marking before the end of radiotherapy and no adverse skin reactions were observed.
The PPMS presents a safe and feasible alternative to our permanent tattooing method. An extended period of follow-up is required to fully assess the extent of semi-permanent tattoo fade.
Glyphosate-resistant (GR) common waterhemp is the fifth GR weed species confirmed in Canada, and the fourth in Ontario. As of 2017, GR common waterhemp has been confirmed in Lambton, Essex, and Chatham-Kent counties in Ontario. Greenhouse and field dose–response experiments revealed that GR common waterhemp in Ontario had a resistance level of 4.5 and 28, respectively, when compared with known glyphosate-susceptible populations. At 12 wk after application, pyroxasulfone/flumioxazin (240 g ai ha−1), pyroxasulfone/sulfentrazone (300 g ai ha−1), and S-metolachlor/metribuzin (1,943 g ai ha−1) controlled GR common waterhemp 97%, 92%, and 87%, respectively. Pyroxasulfone/sulfentrazone or S-metolachlor/metribuzin applied PRE followed by acifluorfen (600 g ai ha−1) or fomesafen (240 g ai ha−1) applied POST controlled GR common waterhemp 98% and performed better than PRE or POST alone. This research is the first to determine the resistance factor of GR common waterhemp in Ontario and identifies control strategies in soybean to mitigate the impact of common waterhemp interference in soybean crop production.
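The resistance levels quoted above are resistance factors: the ratio of the dose giving a 50% response (GR50) in the resistant population to that in a known susceptible population, estimated from dose-response experiments. A minimal sketch using log-linear interpolation between bracketing doses; the response data below are illustrative only, not the study's fitted values:

```python
import math

def gr50_interpolate(doses, responses):
    """Estimate the dose giving a 50% response by log-linear interpolation
    between the two observed doses that bracket 50%."""
    for (d1, r1), (d2, r2) in zip(zip(doses, responses), zip(doses[1:], responses[1:])):
        if r1 >= 50 >= r2:
            frac = (r1 - 50) / (r1 - r2)
            return math.exp(math.log(d1) + frac * (math.log(d2) - math.log(d1)))
    raise ValueError("50% response not bracketed by the observed doses")

# Illustrative biomass (% of untreated control) at increasing glyphosate doses (g ae/ha)
doses = [112.5, 225, 450, 900, 1800, 3600]
susceptible = [70, 45, 20, 8, 3, 1]
resistant = [98, 95, 88, 70, 40, 15]

# Resistance factor = GR50(resistant) / GR50(susceptible)
rf = gr50_interpolate(doses, resistant) / gr50_interpolate(doses, susceptible)
print(round(rf, 1))
```

In practice the study would fit a full dose-response model (e.g. log-logistic) rather than interpolate, but the resistance factor is computed from the resulting GR50 estimates in the same way.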
Developing countries are experiencing an increase in total demand for livestock commodities, as populations and per capita demands increase. Increased production is therefore required to meet this demand and maintain food security. Production increases will lead to proportionate increases in greenhouse gas (GHG) emissions unless offset by reductions in the emissions intensity (Ei) (i.e. the amount of GHG emitted per kg of commodity produced) of livestock production. It is therefore important to identify measures that can increase production whilst reducing Ei cost-effectively. This paper seeks to do this for smallholder agro-pastoral cattle systems in Senegal; ranging from low input to semi-intensified, they are representative of a large proportion of the national cattle production. Specifically, it identifies a shortlist of mitigation measures with potential for application to the various herd systems and estimates their GHG emissions abatement potential (using the Global Livestock Environmental Assessment Model) and cost-effectiveness. Limitations and future requirements are identified and discussed. This paper demonstrates that the Ei of meat and milk from livestock systems in a developing region can be reduced through measures that would also benefit food security, many of which are likely to be cost-beneficial. The ability to make such quantification can assist future sustainable development efforts.
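Emissions intensity (Ei) as defined above is simply GHG emitted per kg of commodity produced. A minimal sketch with illustrative numbers (not GLEAM estimates), showing how a mitigation measure that raises output can cut Ei even when absolute emissions rise slightly:

```python
def emissions_intensity(total_ghg_kg_co2e, output_kg):
    """Ei = GHG emitted per kg of commodity produced (kg CO2e / kg)."""
    return total_ghg_kg_co2e / output_kg

# Hypothetical herd: a productivity measure raises milk output 30,000 -> 50,000 kg
# while total emissions rise only slightly, so Ei falls.
baseline = emissions_intensity(1_200_000, 30_000)  # 40.0 kg CO2e per kg
improved = emissions_intensity(1_250_000, 50_000)  # 25.0 kg CO2e per kg
print(f"Ei reduced by {(1 - improved / baseline):.0%}")
```

This is the core accounting identity behind the paper's claim that production increases need not mean proportionate emissions increases: abatement potential is assessed per unit of product, not per animal.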
The livestock sector is one of the fastest growing subsectors of the agricultural economy and, while it makes a major contribution to global food supply and economic development, it also consumes significant amounts of natural resources and alters the environment. In order to improve our understanding of the global environmental impact of livestock supply chains, the Food and Agriculture Organization of the United Nations has developed the Global Livestock Environmental Assessment Model (GLEAM). The purpose of this paper is to provide a review of GLEAM. Specifically, it explains the model architecture, methods and functionality, that is, the types of analysis that the model can perform. The model focuses primarily on the quantification of greenhouse gas emissions arising from the production of the 11 main livestock commodities. The model inputs and outputs are managed and produced as raster data sets, with a spatial resolution of 0.05 decimal degrees. GLEAM v1.0 consists of five distinct modules: (a) the Herd Module; (b) the Manure Module; (c) the Feed Module; (d) the System Module; (e) the Allocation Module. In terms of the modelling approach, GLEAM has several advantages. For example, spatial information on livestock distributions and crop yields enables rations to be derived that reflect the local availability of feed resources in developing countries. GLEAM also contains a herd model that enables livestock statistics to be disaggregated and variation in livestock performance and management to be captured. Priorities for future development of GLEAM include: improving data quality and the methods used to perform emissions calculations; extending the scope of the model to include selected additional environmental impacts and to enable predictive modelling; and improving the utility of GLEAM output.
New channels of communication such as social media (e.g. Facebook and Twitter) and social marketing campaigns (i.e. campaigns focused on enabling, encouraging and supporting behavioural change among target audiences) can represent useful strategies to challenge the stigma attached to mental disorders.
To evaluate the efficacy of the social marketing campaign of the Time to Change (SMC-TTC) anti-stigma programme on the target population in England during 2009–2014.
To assess the impact of the SMC-TTC anti-stigma programme in terms of:
– use of the social media channels;
– levels of awareness of the SMC-TTC;
– changes in knowledge, attitude, and behaviour related to mental disorders.
Participants completed the Mental Health Knowledge Schedule (MAKS), the Community Attitudes toward Mental Illness scale (CAMI) and the Reported and Intended Behaviour Scale (RIBS), together with an ad hoc schedule on socio-demographic characteristics.
In total, 10,526 people were interviewed. Growing usage of the SMC-TTC media channels, and a growing level of awareness of the campaign, was found (P < 0.001). Being aware of the SMC-TTC was associated with higher scores on the MAKS (OR = 0.95, CI = 0.68 to 1.21; P < 0.001), on the “tolerance and support” CAMI subscale (OR = 0.12, CI = 0.09 to 0.16; P < 0.001) and on the RIBS (OR = 0.71, CI = 0.51 to 0.92; P < 0.001), controlling for confounders.
In the general population, SMC-TTC has been found to be effective in improving attitudes and behaviours towards people with mental disorders.
Considering these promising results obtained in England, social media may represent a possible way forward for challenging stigma. Future ongoing evaluation of the SMC-TTC may further shed light on the essential role of social media in reducing stigma and discrimination.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Hippocampal dysfunction is considered central to many neurobiological models of schizophrenia, yet there are few longitudinal in vivo neuroimaging studies that have investigated the relationship between antipsychotic treatment and morphologic changes within specific hippocampal subregions among patients with psychosis.
A total of 29 patients experiencing a first episode of psychosis with little or no prior antipsychotic exposure received structural neuroimaging examinations at illness onset and then following 12 weeks of treatment with either risperidone or aripiprazole in a double-blind randomized clinical trial. In addition, 29 healthy volunteers received structural neuroimaging examinations at baseline and 12-week time points. We manually delineated six hippocampal subregions [i.e. anterior cornu ammonis (CA) 1–3, posterior CA1–3, subiculum, dentate gyrus/CA4, entorhinal cortex, and fimbria] from 3T magnetic resonance images using an established method with high inter- and intra-rater reliability.
Following antipsychotic treatment, patients demonstrated significant reductions in dentate gyrus/CA4 volume and increases in subiculum volume. Healthy volunteers demonstrated non-significant volumetric changes in these subregions across the two time points. We observed a significant quadratic (i.e. inverted-U) association between changes in dentate gyrus/CA4 volume and cumulative antipsychotic dosage between the scans.
This study provides the first evidence to our knowledge regarding longitudinal in vivo volumetric changes within specific hippocampal subregions in patients with psychosis following antipsychotic treatment. The finding of a non-linear relationship between changes in dentate gyrus/CA4 subregion volume and antipsychotic exposure may provide new avenues into understanding dosing strategies for therapeutic interventions relevant to neurobiological models of hippocampal dysfunction in psychosis.
In England, during 2009–2014, the ‘Time to Change’ anti-stigma programme included a social marketing campaign (SMC) using mass media channels, social media and social contact events, but the efficacy of such an approach has not yet been evaluated.
The target population included people aged between their mid-twenties and mid-forties, from middle-income groups. Participants were recruited through an online market research panel, before and after each burst of the campaign (mean number of unique participants per burst: 956.9 ± 170.2). Participants completed an online questionnaire evaluating knowledge [Mental Health Knowledge Schedule (MAKS)], attitudes [Community Attitudes toward Mental Illness (CAMI)] and behaviours [Reported and Intended Behaviour Scale (RIBS)]. Socio-demographic data and the level of awareness of the SMC were also collected.
A total of 10,526 people were interviewed. Increasing usage of the SMC media channels, as well as an increasing level of awareness of the SMC, was found (P < 0.001). Being aware of the SMC was found to be associated with higher scores on the MAKS (OR = 0.95, CI = 0.68 to 1.21; P < 0.001), on the ‘tolerance and support’ CAMI subscale (OR = 0.12, CI = 0.09 to 0.16; P < 0.001), and on the RIBS (OR = 0.71, CI = 0.51 to 0.92; P < 0.001), controlling for confounders.
The SMC represents an important way to effectively reduce stigma. Taking these positive findings into account, further population-based campaigns using social media may represent an effective strategy to challenge stigma.