Both acute and chronic pain can disrupt reward processing. Moreover, prolonged prescription opioid use and depressed mood are common in chronic pain samples. Despite the prevalence of these risk factors for anhedonia, little is known about anhedonia in chronic pain populations.
We conducted a large-scale, systematic study of anhedonia in chronic pain, focusing on its relationship with opioid use/misuse, pain severity, and depression. Chronic pain patients across four distinct samples (N = 488) completed the Snaith–Hamilton Pleasure Scale (SHAPS), measures of opioid use, pain severity and depression, as well as the Current Opioid Misuse Measure (COMM). We used a meta-analytic approach to determine reference levels of anhedonia in healthy samples spanning a variety of countries and diverse age groups, extracting SHAPS scores from 58 published studies totaling 2664 psychiatrically healthy participants.
Compared to healthy samples, chronic pain patients showed higher levels of anhedonia, with ~25% of patients scoring above the standard anhedonia cut-off. This difference was not primarily driven by depression levels, which explained less than 25% of variance in anhedonia scores. None of opioid use duration, opioid dose, or pain severity alone was significantly associated with anhedonia. Yet, there was a clear effect of opioid misuse, with opioid misusers (COMM ⩾13) reporting greater anhedonia than non-misusers. Opioid misuse remained a significant predictor of anhedonia even after controlling for pain severity, depression and opioid dose.
Study results suggest that both chronic pain and opioid misuse contribute to anhedonia, which may, in turn, drive further pain and misuse.
Through autonomic and affective mechanisms, adverse childhood experiences (ACEs) may disrupt the capacity to regulate negative emotions, increasing craving and exacerbating risk for opioid use disorder (OUD) among individuals with chronic pain who are receiving long-term opioid analgesic pharmacotherapy. This study examined associations between ACEs, heart rate variability (HRV) during emotion regulation, and negative emotional cue-elicited craving among a sample of female opioid-treated chronic pain patients at risk for OUD. A sample of women (N = 36, mean age = 51.2 ± 9.5) with chronic pain receiving long-term opioid analgesic pharmacotherapy (mean morphine equivalent daily dose = 87.1 ± 106.9 mg) was recruited from primary care and pain clinics to complete a randomized task in which they viewed and reappraised negative affective stimuli while HRV and craving were assessed. Both ACEs and duration of opioid use significantly predicted blunted HRV during negative emotion regulation and increased negative emotional cue-elicited craving. Analysis of study findings from a multiple-levels-of-analysis approach suggests that exposure to childhood abuse occasions later emotion dysregulation and appetitive responding toward opioids in negative affective contexts among adult women with chronic pain. This vulnerable clinical population should therefore be assessed for OUD risk when a course of extended, high-dose opioids is initiated for pain management.
The USA is currently enduring an opioid crisis. Identifying cost-effective, easy-to-implement behavioral measures that predict treatment outcomes in opioid misusers is a crucial scientific, therapeutic, and epidemiological goal.
The current study used a mixed cross-sectional and longitudinal design to test whether a behavioral choice task, previously validated in stimulant users, was associated with increased opioid misuse severity at baseline, and whether it predicted change in opioid misuse severity at follow-up. At baseline, data from 100 prescription opioid-treated chronic pain patients were analyzed; at follow-up, data were analyzed in 34 of these participants who were non-misusers at baseline. During the choice task, participants chose under probabilistic contingencies whether to view opioid-related images in comparison with affectively pleasant, unpleasant, and neutral images. Following previous procedures, we also assessed insight into choice behavior, operationalized as whether (yes/no) participants correctly self-reported the image category they chose most often.
At baseline, higher rates of choosing to view opioid images in direct comparison with pleasant images were associated with opioid misuse and impaired insight into choice behavior; the combination of these produced especially elevated opioid-related choice behavior. In longitudinal analyses of individuals who were initially non-misusers, higher baseline opioid v. pleasant choice behavior predicted more opioid misuse behaviors at follow-up.
These results indicate that greater relative allocation of behavior toward opioid stimuli and away from stimuli depicting natural reinforcement is associated with concurrent opioid misuse and portends vulnerability toward future misuse. The choice task may provide important medical information to guide opioid-prescribing practices.
This paper presents the findings from the first qualitative study to consider the relationship between intersex experience and law, representing a significant contribution to a currently under-researched area of law. Since 2013 there has been a global move towards the legal recognition of intersex, with Australia, Germany and Malta all using different techniques to construct and regulate intersex embodiment. This paper is the first to compare and problematise these differing legal approaches in the legal literature. In doing so it demonstrates that many of these approaches are grounded in ideas of formal equality that lead to the entrenchment of vulnerability and fail to build resilience for the intersex community. Through engagement with the intersex community a more contextual account of substantive equality is enabled, encouraging new approaches to law and social justice. Our qualitative study revealed that prevention of non-therapeutic medical interventions on the bodies of children was understood to be the key method to achieving equality for intersex embodied people. Whilst this is the cornerstone of intersex-led legislative reform, such an approach necessitates support through a mixture of formal and substantive equality methods such as anti-discrimination law, education and enforcement procedures. This paper concludes by offering a series of recommendations to legislators capable of enabling substantive intersex equality.
In many respects, Sweden was once considered a pioneering nation in the regulation of juridical gender. In 1631, Sweden became the first country in the world to establish a population registry for its inhabitants – a registry that reflected acknowledgement over time of the existence of persons who could not be classified simply as ‘male’ or ‘female’. In 1972, Sweden also became the first country in the world to enact a national scheme for changing one's registered gender, providing gender-affirming medical care in connection with such changes. The Gender Classification Act at the centre of this scheme, however, also appears to have been the first national law to endorse not only sterilisation of transgender persons, but to provide a mechanism for parents to seek changes of registered gender for children born with intersex conditions, as well as gender-conforming surgery on those children, without the children's consent. The Act emerged from the first known governmental investigation in the world to recognise the class of intersex persons – a class defined as persons who suffer from social conflict with gender registration. Though the experts that led the investigation concluded that the classes of ‘males’ and ‘females’ did not scientifically permit easy categorisation of many individuals, they nevertheless advocated that these classes should be maintained for socio-legal purposes and that all persons should be determined to ‘belong’ to one of these classes under strict legal controls. This legal scheme is expected to undergo significant transformation in 2018, as the government has announced plans to abolish the Gender Classification Act and to authorise simpler administrative changes of registered gender without any medical preconditions, with at least one such change as a matter of right.
Sweden, however, also shapes and controls juridical gender through a more far-reaching law that medicalises gender identity as binary. The Population Registration Act directs medical personnel to provide data on the gender of an infant as part of the child's juridical registration, despite governmental concessions that no sound definition exists of what constitutes a ‘male’ or ‘female’.
Twenty years ago, standard clinical practice regarding the treatment of infants with intersex conditions and differences of sex development reached a critical turning point, particularly on the question of what gender to assign the affected children and whether their gender assignments should be reinforced with surgical interventions. Prior to this period, prominent clinicians tended to minimise as anomalous the statements of aggrieved adults whose bodies had been surgically modified in childhood and who suffered physical pain, genital dysfunction, and loss of fertility and sexual sensitivity, as well as those who considered the interventions a violation of their identity and personal integrity. By 1997, however, case studies had been published confirming rejections of gender assignments by older minors and adults who had been subjected to medical gender-conforming procedures in childhood. These disclosures led to the first reform guidelines proposing that while a social gender assignment for infants could be expected to continue, clinical practice should be more open to recognition of diverse gender identities and resistant to surgical interventions designed to reinforce an assigned gender. On the whole, however, clinical practitioners did not appear to embrace these guidelines. Rather, while the calls for practice changes were expected to ‘accelerate the re-examination of the clinical care of the intersex patient’, they instead marked the start of a period of a ‘crisis in clinical management’, one in which many clinicians found it difficult to change their practices without scientific evidence conclusively proving that all gender-conforming medical interventions are too risk-laden or unnecessary to support gender assignment on infants and young children.
Concerned about the quality of evidence supporting clinical practice, a group of prominent expert-clinicians organised several invitational gatherings of their colleagues to review gender-assignment practices and the medical interventions used to reinforce them. The first of these gatherings took place in Chicago in 2005 and was dubbed the Chicago Consensus, which led to the publication of a Consensus Statement the following year. This Consensus Statement recommended caution for a limited number of interventions but acknowledged that the lack of long-term outcome data was a ‘major shortfall’ of clinical practice, including gender assignment in infancy.
OBJECTIVES/SPECIFIC AIMS: The purpose of the present secondary data analysis was to examine the effect of moderate-severe disturbed sleep before the start of radiation therapy (RT) on subsequent RT-induced pain. METHODS/STUDY POPULATION: Analyses were performed on 676 RT-naïve breast cancer patients (mean age 58, 100% female) scheduled to receive RT from a previously completed nationwide, multicenter, phase II randomized controlled trial examining the efficacy of oral curcumin on radiation dermatitis severity. The trial was conducted at 21 community oncology practices throughout the US affiliated with the University of Rochester Cancer Center NCI’s Community Oncology Research Program (URCC NCORP) Research Base. Sleep disturbance was assessed using a single-item question from the modified MD Anderson Symptom Inventory (SI) on a 0–10 scale, with higher scores indicating greater sleep disturbance. Total subjective pain as well as the subdomains of pain (sensory, affective, and perceived) were assessed by the short-form McGill Pain Questionnaire. Pain at treatment site (pain-Tx) was also assessed using a single-item question from the SI. These assessments were included for pre-RT (baseline) and post-RT. For the present analyses, patients were dichotomized into 2 groups: those who had moderate-severe disturbed sleep at baseline (score≥4 on the SI; n=101) versus those who had mild or no disturbed sleep (control group; score=0–3 on the SI; n=575). RESULTS/ANTICIPATED RESULTS: Prior to the start of RT, breast cancer patients with moderate-severe disturbed sleep at baseline were younger; less likely to have had lumpectomy or partial mastectomy but more likely to have had total mastectomy and chemotherapy; more likely to be on sleep, anti-anxiety/depression, and prescription pain medications; and more likely to suffer from depression or anxiety disorder than the control group (all p’s≤0.02).
Spearman rank correlations showed that changes in sleep disturbance from baseline to post-RT were significantly correlated with concurrent changes in total pain (r=0.38; p<0.001), sensory pain (r=0.35; p<0.001), affective pain (r=0.21; p<0.001), perceived pain intensity (r=0.37; p<0.001), and pain-Tx (r=0.35; p<0.001). In total, 92% of patients with moderate-severe disturbed sleep at baseline reported post-RT total pain compared with 79% of patients in the control group (p=0.006). Generalized linear estimating equations, after controlling for baseline pain and other covariates (baseline fatigue and distress, age, sleep medications, anti-anxiety/depression medications, prescription pain medications, and depression or anxiety disorder), showed that patients with moderate-severe disturbed sleep at baseline had significantly higher mean values of post-RT total pain (by 39%; p=0.033), post-RT sensory pain (by 41%; p=0.046), and post-RT affective pain (by 55%; p=0.035) than the control group. Perceived pain intensity (p=0.066) and pain-Tx (p=0.086) at post-RT were not significantly different between the 2 groups. DISCUSSION/SIGNIFICANCE OF IMPACT: These findings suggest that moderate-severe disturbed sleep prior to RT is an important predictor for worsening of pain at post-RT in breast cancer patients. There could be several plausible reasons for this. Sleep disturbance, such as sleep loss and sleep continuity disturbance, could result in impaired sleep related recovery and repair of tissue damage associated with cancer and its treatment; thus, resulting in the amplification of pain. Sleep disturbance may also reduce pain tolerance threshold through increased sensitization of the central nervous system. In addition, pain and sleep disturbance may share common neuroimmunological pathways. Sleep disturbance may modulate inflammation, which in turn may contribute to increased pain. 
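The Spearman rank correlations reported above relate change scores in sleep disturbance to concurrent change scores in pain. A minimal sketch of that computation follows; the change scores are illustrative placeholders, not the study's data, and the tie-handling (average ranks) matches the standard definition of Spearman's rho:

```python
def ranks(xs):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend the run of tied values
        avg = (i + j) / 2 + 1            # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Illustrative baseline-to-post-RT change scores (not the study's data).
delta_sleep = [2, -1, 0, 3, 1, -2, 4, 0]
delta_pain = [5, 0, 1, 6, 2, -1, 7, 2]
print(round(spearman(delta_sleep, delta_pain), 2))
```

In practice one would use a library routine (e.g. `scipy.stats.spearmanr`), which also returns a p-value; the hand-rolled version above just makes the rank-then-correlate logic explicit.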
Further research is needed to confirm these findings and to determine whether interventions targeting sleep disturbance early in treatment could be a viable approach to reducing pain after RT.
Insomnia is underrecognized and inadequately managed, with close to 60% of cancer survivors experiencing insomnia at some point in the treatment trajectory. The objective of this study was to further understand predisposing, precipitating, and perpetuating factors in the development and maintenance of insomnia in cancer survivors.
A heterogeneous sample of 63 patients who had completed active treatment was recruited. Participants were required to have a score >7 on the Insomnia Severity Index and meet the diagnostic criteria for insomnia disorder. Open-ended, semistructured interviews were conducted to elicit participants’ experiences with sleep problems. Data were analyzed using both an a priori set of codes and codes that emerged from the data.
The mean age of the sample was 60.5 years, with 30% identifying as non-white and 59% reporting their sex as female. The cancer types represented were heterogeneous with the two most common being breast (30%) and prostate (21%). Participants described an inherited risk for insomnia, anxious temperament, and insufficient ability to relax as predisposing factors. Respondents were split as to whether they classified their cancer diagnosis as the precipitating factor for their insomnia. Participants reported several behaviors that are known to perpetuate problems with sleep including napping, using back-lit electronics before bed, and poor sleep hygiene. One of the most prominent themes identified was the use of sleeping medications. Participants reported that they were reluctant to take medication but felt that it was the only option to treat their insomnia and that it was encouraged by their doctors.
Significance of results
Insomnia is a prevalent, but highly treatable, disorder in cancer survivors. Patient and provider education is needed to change the individual and organizational behaviors that contribute to the development and maintenance of insomnia and to increase access to evidence-based nonpharmacological interventions.
Introduction: Pulmonary embolism (PE) is a common cardiovascular condition with high mortality rates if left untreated. Given the non-specific and varied symptoms of PE, its diagnosis remains challenging and approaches can lend themselves to inefficiencies through over-testing and over-diagnosis. Clinicians rely on a multi-component and sequential approach, including clinical risk assessment, rule-out biomarkers, and diagnostic imaging. This study assessed the potential cost-effectiveness of different diagnostic algorithms. Methods: A cost-utility model was developed with an upfront decision tree capturing the diagnostic accuracy and a Markov cohort model reflecting the lifetime disease progression and clinical utility of each diagnostic strategy. Fifty-seven diagnostic strategies were evaluated, each a permutation of clinical risk assessments, rule-out biomarkers, and diagnostic imaging modalities. Diagnostic test accuracy was informed by systematic reviews and meta-analyses, and costs (2016 CAD) were obtained from Canadian costing databases to reflect a health-care payer perspective. Separate scenario analyses were conducted on patients contra-indicated for computed tomography (CT) or who were pregnant, as this entails a comparison of a different set of diagnostic strategies. Results: Six diagnostic strategies formed the efficiency frontier. Diagnosing patients with PE was generally cost-effective if willingness-to-pay was greater than $1,481 per quality-adjusted-life year (QALY). CT dominated other imaging modalities given its greater diagnostic accuracy, lower rates of non-diagnostic findings, and lowest overall costs. The use of clinical prediction rules to determine clinical pre-test probability of PE and the application of rule-out tests for patients with low-to-moderate risk of PE may be cost-effective while reducing the proportion of patients requiring CT and lowering radiation exposure.
At a willingness-to-pay of $50,000 per QALY, the strategy of Wells (2 tier) → d-dimer → CT → CT was the most likely cost-effective diagnostic strategy. However, different diagnostic strategies were considered cost-effective for pregnant patients and those contra-indicated for CT. Conclusion: This study highlighted the value of economic modelling to inform judicious use of resources in achieving a diagnosis for PE. These findings, in conjunction with a recent health technology assessment, may help to inform clinical practice and guidelines. Which strategy is considered cost-effective ultimately reflects one's willingness to trade off misdiagnosis against over-diagnosis.
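The willingness-to-pay comparison behind this kind of analysis can be sketched with a toy incremental cost-effectiveness ratio (ICER) calculation. The strategy names, costs, and QALY values below are illustrative placeholders, not outputs of the study's model:

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained of strategy B over strategy A."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical strategies: (expected cost in CAD, expected QALYs per patient).
baseline_strategy = (500.0, 10.00)   # e.g. minimal work-up (made-up numbers)
ct_pathway = (1_240.0, 10.50)        # e.g. a CT-based pathway (made-up numbers)

ratio = icer(*baseline_strategy, *ct_pathway)  # incremental cost per extra QALY
willingness_to_pay = 50_000                    # CAD per QALY, threshold used above

print(f"ICER = ${ratio:,.0f}/QALY")
print("cost-effective" if ratio <= willingness_to_pay else "not cost-effective")
```

A strategy is adopted when its ICER against the next-best option on the efficiency frontier falls below the decision-maker's willingness-to-pay; dominated strategies (costlier and less effective than some alternative) are excluded before this comparison.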
Our understanding of the interactions between the Roman Empire and indigenous societies (or ‘barbarians’) that lay within or surrounding its borders has undergone considerable advances over the last 30 years. Stemming initially from a colonial perspective, which saw the Roman Empire as ‘civilising’ those who were subsumed into it, the study of these interactions now includes a wealth of diverse post-processual or post-colonial approaches that stress the complexity of interactions within and between these social groups. Even with these advances, the self-imposed opposition between prehistoric and Roman studies, whether in theoretical stance, approach or research frameworks, remains constant in modern scholarly debate (Hingley 2012: 629). As a consequence, and despite extensive debate to the contrary, the divide between ‘Romans’ and ‘natives’ endures in our current interpretations of the contact between pre-Roman and Roman society.
Designers can involve users in the design process. The challenge lies in reaching multiple users and finding the best way to use their input in the design process. Affordance based design (ABD) is a design method that focuses in part on the perceived or existing interactions between the user and the artifact. The shape and physical characteristics of the product enable the user to perceive some of its affordances. The goal of this research is to use ABD, along with an optimization tool, to evolve the shape of products toward better perceived solutions using the input from users. A web application has been developed that evolves design concepts using an interactive multi-objective genetic algorithm (IGA) relying on the user assessment of product affordances. As a proof of concept, a steering wheel is designed using the application by having users rate specific affordances of solutions presented to them. The results show that the design concepts evolve toward better perceived solutions, allowing designers to explore more solutions that reflect the preferences of end users. Relationships between affordances and product design variables are also explored, revealing that specific affordances can be targeted with changes in design parameter values and highlighting the tie between physical characteristics and affordances.
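An interactive genetic algorithm of the kind the web application implements can be sketched as follows. In the real system, fitness comes from users rating product affordances; here that feedback is stubbed with a stand-in scoring function, and all parameters (population size, mutation rate, the encoding of design variables as values in [0, 1)) are assumptions for illustration only:

```python
import random

def rate_affordances(design):
    # Stand-in for aggregated user ratings of a design's affordances;
    # pretends users prefer mid-range values of each design variable.
    return -sum((v - 0.5) ** 2 for v in design)

def evolve(pop_size=20, n_vars=5, generations=30, mutation=0.1, seed=1):
    """Evolve a population of candidate designs toward better-rated shapes."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=rate_affordances, reverse=True)
        parents = ranked[: pop_size // 2]        # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation:          # occasional random-reset mutation
                child[rng.randrange(n_vars)] = rng.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=rate_affordances)

best = evolve()
print(best)
```

Because the top half of each generation is carried over unchanged (elitism), the best rating never degrades between generations; in the interactive setting, the main design concern is keeping the number of ratings requested per generation small enough that users stay engaged.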
Skin self-examination (SSE) is a crucial preventive health behaviour in melanoma survivors, as it facilitates early detection. Physician endorsement of SSE is important for the initiation and maintenance of this behaviour. This study focussed on the preliminary validation of a new nine-item measure assessing physician support of SSE in melanoma patients. English and French versions of this measure were administered to 188 patients diagnosed with melanoma in the context of a longitudinal study investigating predictors and facilitators of SSE. Structural validity was investigated using exploratory factor analysis conducted in Mplus, and convergent and divergent validity were assessed using bivariate correlations conducted in SPSS. Results suggest that the scale is a unidimensional and reliable measure of physician support for SSE. Given the uncertainty regarding the optimal frequency of SSE for at-risk individuals, we recommend that future psychometric evaluations of this scale consider tailoring items according to the most up-to-date research on SSE effectiveness.
Vitamin D deficiency is a global public health concern. Studies of serum 25-hydroxyvitamin D (25(OH)D) determinants in young women are limited and few include objective covariates. Our aims were to define the prevalence of vitamin D deficiency and examine serum 25(OH)D correlates in an exploratory study of women aged 16–25 years. We studied 348 healthy females living in Victoria, Australia, recruited through Facebook. Data collected included serum 25(OH)D assayed by liquid chromatography-tandem MS, relevant serum biochemistry, soft tissue composition by dual-energy X-ray absorptiometry, skin melanin density, Fitzpatrick skin type, sun exposure using UV dosimeters and lifestyle factors. Mean serum 25(OH)D was 68 (sd 27) nmol/l and 26 % were vitamin D deficient (25(OH)D <50 nmol/l). The final model explained 56 % of 25(OH)D variance. Serum sex hormone-binding globulin levels, creatinine levels, sun exposure measured by UV dosimeters, a positive attitude towards sun tanning, typically spending >2 h in the sun in summer daily, holidaying in the most recent summer period, serum Fe levels, height and multivitamin use were positively associated with 25(OH)D. Fat mass and a blood draw in any season except summer were inversely associated with 25(OH)D. Vitamin D deficiency is common in young women. Factors such as hormonal contraception, sun exposure and sun-related attitudes, as well as dietary supplement use are essential to consider when assessing vitamin D status. Further investigation into methods to safely optimise vitamin D status and to improve understanding of the impact of vitamin D status on long-term health outcomes is required.
Variation in terrestrial productivity and biomass impacts evolution through linkages between productivity and biodiversity and through the types of resources available for consumption by herbivores. Geographic variation in terrestrial plant carbon is known on a global scale for extant biomes and is strongly correlated with precipitation, temperature, and the area of wetlands. Although estimates of extant terrestrial plant carbon density are still somewhat uncertain, the highest densities clearly occur in tropical and temperate rainforests, and the lowest occur in deserts, semideserts, and arctic/alpine tundra. Patterns of variation in ancient terrestrial plant carbon can be estimated through the correlation between biome/climate and carbon density, provided individual biomes show little change through time in primary productivity or density of plant carbon.
Density of terrestrial plant carbon has been estimated on a global scale for the latest Cretaceous, late Paleocene/Eocene, middle-late Eocene, early Miocene, and Holocene/Recent using the biomal reconstructions of Wolfe (1984), Upchurch (this symposium), and others. Latest Cretaceous (Maastrichtian) estimates indicate a relatively low value of 700-800 gigatons, which may underestimate carbon due to the presence of extensive latest Cretaceous coastal wetlands. However, much of this figure is readily explained by extensive deserts in Asia and little evidence for areally extensive tropical rainforest.
A major increase in terrestrial plant carbon occurred during the Paleocene/earliest Eocene in conjunction with a major areal increase in rainforest. During the early Miocene, global terrestrial plant carbon was approximately 1200-1300 gigatons. This figure decreased by about half between the early Miocene and Holocene/Recent. The decrease in terrestrial carbon density resulted from a decrease in the area of tropical and subtropical forests and an increase in the area of deserts, grasslands, and mediterranean woodlands/chaparral.
The Cretaceous rise of flowering plants marked an important transition in the modernization of terrestrial ecosystems. Well documented is the diversification of angiosperm pollen during the mid-Cretaceous and the migration of angiosperms from low latitudes to middle and high latitudes during the Barremian to Cenomanian. Global compilations of “species” diversity indicate a rapid rise in angiosperm diversity during the Albian to Cenomanian. This rise parallels a decline in the species diversity of archaic pteridophytes and the gymnosperm orders Cycadales, Bennettitales, Ginkgoales, Czekanowskiales, and Caytoniales. Late Cretaceous floras show more gradual trends in species diversity than mid-Cretaceous floras.
Megafloral reconstructions of vegetation and climate for North America and other continents indicate warm temperatures in coastal regions of middle to high latitudes. Cretaceous biomes, however, often cannot be compared closely with Recent biomes. During much of the Cretaceous, conifers and other gymnosperms shared dominance with angiosperms in tropical and subtropical vegetation, unlike the Recent. During the Late Cretaceous, tropical rainforest was areally restricted. The few known leaf megafloras from equatorial regions indicate subhumid, rather than rainforest, conditions. Desert and semi-desert were widespread at lower latitudes and are documented by the occurrence of evaporite minerals in China, Africa, Spain, Mexico, and South America. Mid-latitude vegetation consisted of open-canopy broadleaved and coniferous evergreen woodlands that existed under subhumid conditions and low seasonality. High-latitude vegetation of the Northern Hemisphere consisted of coniferous and broadleaved deciduous forest, rather than boreal forest and tundra. High-latitude vegetation from coastal regions of the Southern Hemisphere consisted of evergreen conifers and angiosperms. Rainforest conditions appear to have been largely restricted to polar latitudes.
Data on relative abundance, though often incomplete, indicate that angiosperms became ecologically important in tropical to warm subtropical broadleaved evergreen forests and woodlands by the Cenomanian. However, their rise to dominance took longer in other biomes. Conifers formed an important component of many Late Cretaceous biomes, and the persistence of archaic gymnosperms was strongly influenced by climate. Deciduous Ginkgoales, Czekanowskiales, Bennettitales, and Caytoniales are rare to absent in Late Cretaceous megafloras from warm subtropical to tropical climates, but they persist in megafloras from cooler climates. Archaic conifers such as Frenelopsis occur in megafloras representing low-latitude desert and semi-desert, but they are generally absent in more humid assemblages. Within mid-latitude broadleaved and coniferous evergreen woodland from North America, conifers show evidence for co-dominance with angiosperms into the early Maastrichtian. However, this co-dominance appears to have ended by latest Maastrichtian, which implies that vegetational reorganization occurred during the last few million years of the Cretaceous in North America.