In the past decade, network analysis (NA) has been applied to psychopathology to quantify complex symptom relationships. This statistical technique has demonstrated much promise, as it allows researchers to model relationships across many symptoms simultaneously and to identify central symptoms that may predict important clinical outcomes. However, network models are highly influenced by node selection, which could limit the generalizability of findings. The current study (N = 6850) tests a comprehensive, cognitive–behavioral model of eating-disorder symptoms using items from two widely used measures (the Eating Disorder Examination Questionnaire and the Eating Pathology Symptoms Inventory).
We used NA to identify central symptoms and compared networks across the duration of illness (DOI), as chronicity is one of the only known predictors of poor outcome in eating disorders (EDs).
Our results suggest that eating when not hungry and feeling fat were the most central symptoms across groups. There were no significant differences in network structure across DOI, meaning the connections between symptoms remained relatively consistent. However, differences emerged in central symptoms: cognitive symptoms related to overvaluation of weight/shape were central in individuals with shorter DOI, whereas behavioral symptoms were more central in individuals with medium and long DOI.
Our results have important implications for the treatment of individuals with enduring EDs, as they may have different core maintaining symptoms. Additionally, our findings highlight the importance of using comprehensive, theoretically or empirically derived models for NA.
The seeds of most Australian acacias have pronounced physical dormancy (PY). While fire and hot water (HW) treatments cause the lens to ‘pop’ almost instantaneously, for many Acacia species the increase in germination percentage can be gradual. If PY is broken instantly by HW treatment, why is germination often an extended process? Control and HW treatments were performed on seeds of 48 species of Acacia. Seeds were placed on a moist substrate and imbibition was assessed by frequently weighing individual seeds. In the two soft-seeded species all control seeds were fully imbibed within 6–24 h, while in hard-seeded species very few control seeds imbibed over several weeks. In 10 species over 50% of the HW-treated seeds imbibed within 30 h, but mostly the percentage of imbibed seeds gradually increased over several weeks. Some seeds in a replicate would imbibe early, while others would remain unimbibed for many days or weeks and then, remarkably, become fully imbibed in less than 24 h. While HW treatment broke PY almost instantaneously, it appeared that in many Acacia species some other part of the testa slowed water from reaching the embryo. Staggered imbibition may be a way of ensuring that not all seeds in a population germinate after small rain events. Thus it appears that the lens acts as a ‘fire gauge’ while some other part of the seed coat acts as a ‘rain gauge’.
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that Frascati criteria are too liberal, resulting in a high false positive rate. Meyer et al. recommended more conservative revisions to HAND criteria, including exploration of other commonly used methods for identifying neurocognitive impairment (NCI) in HIV, such as the global deficit score (GDS). This study compares NCI classifications by Frascati, Meyer, and GDS methods in relation to neuroimaging markers of brain integrity in HIV.
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing and brain structural magnetic resonance imaging and magnetic resonance spectroscopy. Participants were classified using Frascati criteria versus Meyer criteria: concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired by Frascati criteria but unimpaired by Meyer criteria. To compare the GDS with Meyer criteria, the same groupings were formed using GDS criteria in place of Frascati criteria.
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
There are high rates of obesity and low self-esteem in patients with psychosis. The occurrence of negative voice content directly about appearance is therefore plausible. Derogatory comments about appearance are likely to be distressing, increase depression and contribute to social withdrawal.
To systematically assess the occurrence of voice content regarding appearance and identify correlates.
Sixty patients experiencing verbal auditory hallucinations at least once a week in the context of non-affective psychosis completed a measure assessing positive and negative voice content about appearance. They also completed assessments about body image, self-esteem, psychiatric symptoms and well-being.
Fifty-five (91.7%) participants reported hearing voices comment on their appearance. A total of 54 (90%) patients reported negative voice content about their appearance, with 30 (50%) experiencing negative appearance comments on a daily basis. The most common negative comment was ‘the voices tell me that I am ugly’ (n = 48, 80%). There were 39 (65%) patients who reported positive voice content on appearance. The most frequent positive comment was ‘I look as nice as other people’ (n = 26, 43.3%). Negative voice content about appearance was associated with body image concerns, paranoia, voice hearing severity, depression, worry, negative self-beliefs and safety-seeking behaviours. Positive appearance voice content was associated with greater body esteem and well-being and lower levels of depression and insomnia.
Voice content about appearance is very common for patients seen in clinical services. Negative voice content may reflect – and subsequently reinforce – negative beliefs about one's appearance, low self-esteem, worry and paranoia.
Shifts in ceramic technology are often assumed to reflect wider social changes. Closer attention, however, needs to be directed to the fundamental issue of production. Shifts in the ceramic record of the Tao River Valley in north-western China (c. 2100 BC) are no exception and the relationships between ceramic form, clay recipes and communities of practice have not been previously investigated for this region. Here, petrographic analysis demonstrates that, despite major shifts in ceramic form and surface treatment, production techniques, raw materials and exchange relationships show surprising continuity through time.
The period before the formation of a persecutory delusion may provide causal insights. Patient accounts are invaluable in informing this understanding.
To inform the understanding of delusion formation, we asked patients about the occurrence of potential causal factors – identified from a cognitive model – before delusion onset.
A total of 100 patients with persecutory delusions completed a checklist about their subjective experiences in the weeks before belief onset. The checklist included items concerning worry, images, low self-esteem, poor sleep, mood dysregulation, dissociation, manic-type symptoms, aberrant salience, hallucinations, substance use and stressors. Time to reach certainty in the delusion was also assessed.
Most commonly it took patients several months to reach delusion certainty (n = 30), although other patients took a few weeks (n = 24), years (n = 21), knew instantly (n = 17) or took a few days (n = 6). The most frequent experiences occurring before delusion onset were: low self-confidence (n = 84); excessive worry (n = 80); not feeling like normal self (n = 77); difficulties concentrating (n = 77); going over problems again and again (n = 75); being very negative about the self (n = 75); images of bad things happening (n = 75); and sleep problems (n = 75). The average number of experiences occurring was high (mean 23.5, s.d. = 8.7). The experiences clustered into six main types, with patients reporting an average of 5.4 (s.d. = 1.0) different types.
Patients report numerous different experiences in the period before full persecutory delusion onset that could be contributory causal factors, consistent with a complex multifactorial view of delusion occurrence. This study, however, relied on retrospective self-report and could not determine causality.
While hot-water drilling is a well-established technique used to access the subsurface of ice masses, drilling into high-elevation (≳ 4000 m a.s.l.) debris-covered glaciers faces specific challenges. First, restricted transport capacity limits individual equipment items to a volume and mass that can be slung by small helicopters. Second, low atmospheric oxygen and pressure reduces the effectiveness of combustion, limiting a system's ability to pump and heat water. Third, thick supraglacial debris, which is both highly uneven and unstable, inhibits direct access to the ice surface, hinders the manoeuvring of equipment and limits secure sites for equipment placement. Fourth, englacial debris can slow the drilling rate such that continued drilling becomes impracticable and/or boreholes deviate substantially from vertical. Because of these challenges, field-based englacial and subglacial data required to calibrate numerical models of high-elevation debris-covered glaciers are scarce or absent. Here, we summarise our experiences of hot-water drilling over two field seasons (2017–2018) at the debris-covered Khumbu Glacier, Nepal, where we melted 27 boreholes up to 192 m length, at elevations between 4900 and 5200 m a.s.l. We describe the drilling equipment and operation, evaluate the effectiveness of our approach and suggest equipment and methodological adaptations for future use.
Methamphetamine (MA) dependence contributes to neurotoxicity and neurocognitive deficits. Although combined alcohol and MA misuse is common, how alcohol consumption relates to neurocognitive performance among MA users remains unclear. We hypothesized that alcohol and MA use would synergistically diminish neurocognitive functioning, such that greater reported alcohol consumption would exert larger negative effects on neurocognition among MA-dependent individuals compared to MA-nonusing persons.
Eighty-seven MA-dependent (MA+) and 114 MA-nonusing (MA−) adults underwent neuropsychological and substance use assessments. Linear and logistic regressions examined the interaction between MA status and lifetime average drinks per drinking day on demographically corrected global neurocognitive T scores and impairment rates, controlling for recent alcohol use, lifetime cannabis use, WRAT reading performance, and lifetime depression.
MA+ displayed moderately higher rates of impairment and lower T scores compared to MA−. Lifetime alcohol use significantly interacted with MA status to predict global impairment (ORR = 0.70, p = .003) such that greater lifetime alcohol use increased likelihood of impairment in MA−, but decreased likelihood of impairment in MA+. Greater lifetime alcohol use predicted poorer global T scores among MA− (b = −0.44, p = .030) but not MA+ (b = 0.08, p = .586).
Contrary to expectations, greater lifetime alcohol use related to reduced risk of neurocognitive impairment among MA users. Findings are supported by prior research identifying neurobiological mechanisms by which alcohol may attenuate stimulant-driven vasoconstriction and brain thermotoxicity. Replication and examination of neurophysiologic mechanisms underlying alcohol use in the context of MA dependence are warranted to elucidate whether alcohol confers a degree of neuroprotection.
The near-threatened Lilian’s Lovebird Agapornis lilianae is a small parrot endemic to the Zambezi basin in south-east Africa. The species has a fragmented distribution predominantly within mopane woodlands and is widely referred to as a mopane specialist. The harvesting of mopane trees for charcoal production and timber is having widespread impacts on this woodland habitat, raising concerns over its capacity to support biodiversity. This study aimed to understand the key drivers determining the occurrence of Lilian’s Lovebird in the mopane woodlands of Zambia, focusing particularly on aspects of woodland structure, including the size and density of trees. We used a MaxEnt species distribution model based on historical species occurrence data to inform the selection of 116 survey plots in the Luangwa, Luano and Zambezi valleys. Each plot was sampled for Lilian’s Lovebirds and its woodland structure was described. Occurrence of Lilian’s Lovebird was found to be positively associated with the size of mopane trees (both height and diameter at breast height), suggesting that large ‘cathedral’ mopane trees provide a key resource for the species and that conservation efforts should focus on the protection of sites containing large trees. No Lilian’s Lovebirds were recorded in areas where they previously occurred to the west of Lower Zambezi National Park, and there was an absence of ‘cathedral’ mopane habitat in this area.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased likelihood of SA. SA participants reported greater independence in everyday functioning, employment, and health-related quality of life than non-SA participants. Conclusions: Despite combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH.
SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
Objective: Post-stroke cognitive impairment is common, but mechanisms and risk factors are poorly understood. Frailty may be an important risk factor for cognitive impairment after stroke. We investigated the association between pre-stroke frailty and acute post-stroke cognition. Methods: We studied consecutively admitted acute stroke patients in a single urban teaching hospital during three recruitment waves between May 2016 and December 2017. Cognition was assessed using the Mini-Montreal Cognitive Assessment (min=0; max=12). A Frailty Index was used to generate frailty scores for each patient (min=0; max=100). Clinical and demographic information were collected, including pre-stroke cognition, delirium, and stroke severity. We conducted univariate and multiple-linear regression analyses with covariates forced in (covariates included were: age, sex, stroke severity, stroke type, pre-stroke cognitive impairment, delirium, previous stroke/transient ischemic attack) to investigate the association between pre-stroke frailty and post-stroke cognition. Results: Complete data were available for 154 stroke patients. Mean age was 68 years (SD=11; range=32–97); 93 (60%) were male. Median mini-Montreal Cognitive Assessment score was 8 (IQR=4–12). Mean Frailty Index score was 18 (SD=11). Pre-stroke cognitive impairment was apparent in 13/154 (8%) patients. Pre-stroke frailty was significantly associated with lower post-stroke cognition (Standardized-Beta=−0.40; p<0.001) and this association was independent of covariates (Unstandardized-Beta=−0.05; p=0.005). Additional significant variables in the multiple regression model were age (Unstandardized-Beta=−0.05; p=0.002), delirium (Unstandardized-Beta=−2.81; p<0.001), pre-stroke cognitive impairment (Unstandardized-Beta=−2.28; p=0.001), and stroke severity (Unstandardized-Beta=−0.20; p<0.001).
Conclusions: Pre-stroke frailty may be a moderator of post-stroke cognition, independent of other well-established post-stroke cognitive impairment risk factors. (JINS, 2019, 25, 501–506)
The article focuses on the “legitimate interest in performance” requirement, which is now at the heart of the new test on penalty clauses but which was left undefined by the Supreme Court in Cavendish Square Holding BV v Talal El Makdessi and ParkingEye Ltd v Beavis. It seeks to bring clarity to what is meant by “legitimate interest in performance” by examining other areas of the law of remedies for breach of contract where concepts of legitimate interest have featured in the court’s reasoning. It also makes suggestions as to what considerations are or might be relevant in determining whether a contracting party has a legitimate interest in performance, in particular a legitimate interest that goes beyond compensation.