This study investigates the capacity of pre/perinatal factors to predict attention-deficit/hyperactivity disorder (ADHD) symptoms in childhood. It also explores whether the predictive accuracy of a pre/perinatal model varies for different groups in the population. We used the ABCD (Adolescent Brain Cognitive Development) cohort from the United States (N = 9975). Pre/perinatal information and the Child Behavior Checklist were reported by the parent when the child was aged 9–10. Forty variables that are generally known at birth were entered as potential predictors, including maternal substance use, obstetric complications and child demographics. Elastic net regression with 5-fold cross-validation was performed, and subsequently stratified by sex, race/ethnicity, household income and parental psychopathology. Seventeen pre/perinatal variables were identified as robust predictors of ADHD symptoms in this cohort. The model explained just 8.13% of the variance in ADHD symptoms on average (95% CI = 5.6%–11.5%). Predictive accuracy of the model varied significantly by subgroup, particularly across income groups, and several pre/perinatal factors appeared to be sex-specific. Results suggest we may be able to predict childhood ADHD symptoms with modest accuracy from birth. This study needs to be replicated using prospectively measured pre/perinatal data.
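As a rough illustration of the modelling approach described in this abstract (elastic net regression with 5-fold cross-validation), the sketch below uses scikit-learn on synthetic data. The data, variable counts and signal structure are hypothetical stand-ins, not the ABCD dataset or the authors' actual pipeline.

```python
# Minimal sketch: elastic net regression with 5-fold cross-validation,
# reporting mean variance explained (R^2) across folds.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 500, 40                       # e.g. 40 pre/perinatal predictors
X = rng.normal(size=(n, p))
# Weak signal from a few predictors plus substantial noise,
# mimicking a modest-accuracy prediction setting
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=3.0, size=n)

model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean R^2 across folds: {scores.mean():.3f}")
```

In this kind of setup, predictors surviving the elastic net penalty (non-zero coefficients) would be reported as robust predictors, and the cross-validated R² quantifies the variance explained.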
Patients with social anxiety disorder (SAD) have a range of negative thoughts and beliefs about how they think they come across to others. These include specific fears about doing or saying something that will be judged negatively (e.g. ‘I’ll babble’, ‘I’ll have nothing to say’, ‘I’ll blush’, ‘I’ll sweat’, ‘I’ll shake’, etc.) and more persistent negative self-evaluative beliefs such as ‘I am unlikeable’, ‘I am foolish’, ‘I am inadequate’, ‘I am inferior’, ‘I am weird/different’ and ‘I am boring’. Some therapists may take the presence of such persistent negative self-evaluations as indicating a separate problem of ‘low self-esteem’, rather than seeing them as a core feature of SAD. This may lead to a delay in addressing the persistent negative self-evaluations until the last stages of treatment, as might typically be done in cognitive therapy for depression. It might also prompt therapist drift from the core interventions of NICE-recommended cognitive therapy for social anxiety disorder (CT-SAD). Therapists may be tempted to devote considerable time to interventions for ‘low self-esteem’. Our experience from almost 30 years of treating SAD within the framework of the Clark and Wells (1995) model is that when these digressions come at the cost of core CT-SAD techniques, they have limited value. This article clarifies the role of persistent negative self-evaluations in SAD and shows how these beliefs can be more helpfully addressed from the start, and throughout the course of CT-SAD, using a range of experiential techniques.
Key learning aims
(1) To recognise persistent negative self-evaluations as a key feature of SAD.
(2) To understand that persistent negative self-evaluations are central in the Clark and Wells (1995) cognitive model and how to formulate these as part of SAD.
(3) To be able to use all the experiential interventions in cognitive therapy for SAD to address these beliefs.
We assessed breakpoint changes of 13,101 Enterobacterales and Pseudomonas aeruginosa isolates from the past decade. All β-lactams and fluoroquinolones demonstrated decreased susceptibilities following breakpoint changes. Enterobacter cloacae experienced the largest average decrease in susceptibility amongst the Enterobacterales at 5.3% and P. aeruginosa experienced an average decrease in susceptibility of 9.3%.
Surveys are a powerful technique in cognitive behavioural therapy (CBT). A form of behavioural experiment, surveys can be used to test beliefs, normalise symptoms and experiences, and generate compassionate perspectives. In this article, we discuss why and when to use surveys in CBT interventions for a range of psychological disorders. We also present a step-by-step guide to collaboratively designing surveys with patients, selecting the appropriate recipients, sending out surveys, discussing responses and using key learning as a part of therapy. In doing so, we hope to demonstrate that surveys are a flexible, impactful, time-efficient, individualised technique which can be readily and effectively integrated into CBT interventions.
Key learning aims
After reading this article, it is hoped that readers will be able to:
(1) Conceptualise why surveys can be useful in cognitive behavioural therapy.
(2) Implement collaborative and individualised survey design, delivery and feedback as part of a CBT intervention.
Amiodarone may be considered for patients with junctional ectopic tachycardia refractory to treatment with sedation, analgesia, cooling, and electrolyte replacements. There are currently no published pediatric data regarding the hemodynamic effects of the newer amiodarone formulation, PM101, which is devoid of the hypotensive agents used in the original amiodarone formulation. We performed a single-center, retrospective, descriptive study from January 2012 to December 2020 in a pediatric ICU. Thirty-three patients (22 male, 11 female) aged between 1.1 and 1,460 days who developed post-operative junctional ectopic tachycardia or other tachyarrhythmias requiring PM101 were included. Data analysis was performed on hemodynamic parameters (mean arterial pressures and heart rate) and total PM101 (mg/kg) from hour 0 of amiodarone administration to hour 72. Adverse outcomes were defined as Vasoactive-Inotropic Score >20, patients requiring ECMO or CPR, or patient death. There was no statistically significant decrease in mean arterial pressures within the 6 hours of PM101 administration (p > 0.05), but there was a statistically significant therapeutic decrease in heart rate for resolution of tachyarrhythmia (p < 0.05). Patients received up to 25 mg/kg over an 8-hour period for rate control. Average rate control was achieved within 11.91 hours and average rhythm control within 62 hours. There were four adverse events around the time of PM101 administration, three of which were determined not to be associated with the medication. PM101 is safe and effective in the pediatric cardiac surgical population. Our study demonstrated that PM101 can be used in a more aggressive dosing regimen than previously reported in the pediatric literature with the prior formulation.
Thirty years after the discovery of an Early Neolithic timber hall at Balbridie in Scotland was reported in Antiquity, new analysis of the site's archaeobotanical assemblage, featuring 20 000 cereal grains preserved when the building burnt down in the early fourth millennium BC, provides new insights into early farming practices. The results of stable isotope analyses of cereals from Balbridie, alongside archaeobotanical and stable isotope results from three other sites, indicate that while cereals were successfully cultivated in well-established plots without manuring at Balbridie, a variety of manuring strategies was implemented at the other sites. These differences reinforce the picture of variability in cultivation practices across Neolithic North-west Europe.
This article analyzes the impact of state policies since the 1970s on household food security in several Mapuche communities in the Araucanía region of Chile (Region IX). The author highlights key transformations in the national economy and food system and endeavors to link these to local phenomena, in particular the absorption of local livelihood strategies and food systems into capitalist markets and the high incidence of food insecurity. The article concludes that a reconceptualization of macroeconomic and indigenous policies is required to rebuild the material and social foundations of rural Mapuche communities, foundations that provide the bases from which their inhabitants can reconstruct a mutually beneficial relationship with the broader Chilean society and avert the continued acceleration of tension and violence.
Psychotic experiences (PE) are common in the general population, in particular in childhood, adolescence and young adulthood. PE have been shown to be associated with an increased risk for later psychotic disorders, mental disorders, and poorer functioning. Recent findings have highlighted the relevance of PE to many fields of healthcare, including treatment response in clinical services for anxiety and depression, healthcare costs and service use. Despite the relevance of PE to many areas of mental health and healthcare research, there remains a gap in information between PE researchers and experts in other fields. With this narrative review, we aim to bridge that gap by providing a broad overview of the current state of PE research and future directions, under the following headings: (1) Definition and Measurement of PE; (2) Risk Factors for PE; (3) PE and Health; (4) PE and Psychosocial Functioning; (5) Interventions for PE; (6) Future Directions.
We use a mathematical model to investigate the effect of basal topography and ice surface slope on transport and deposition of sediment within a water-filled subglacial channel. In our model, three zones of different behaviour occur. In the zone furthest upstream, variations in basal topography lead to sediment deposition under a wide range of conditions. In this first zone, even very small and gradually varying basal undulations (~5 m amplitude) can lead to the deposition of sediment within a modelled channel. Deposition is concentrated on the downstream gradient of subglacial ridges, and on the upstream gradient of subglacial troughs. The thickness and steepness of the ice sheet has a substantial impact on deposition rates, with shallow ice profiles strongly promoting both the magnitude and extent of sediment deposition. In a second zone, all sediment is transported downstream. Finally, a third zone close to the ice margin is characterised by high rates of sediment deposition. The existence of these zones has implications for esker formation and the dynamics of the subglacial environment.
Internationally, an increasing proportion of emergency department visits are mental health related. Concurrently, psychiatric wards are often occupied above capacity. Healthcare providers have introduced short-stay, hospital-based crisis units offering a therapeutic space for stabilisation, assessment and appropriate referral. Research lags behind roll-out, and a review of the evidence is urgently needed to inform policy and further introduction of similar units.
Aims
This systematic review aims to evaluate the effectiveness of short-stay, hospital-based mental health crisis units.
Method
We searched EMBASE, Medline, CINAHL and PsycINFO up to March 2021. All designs incorporating a control or comparison group were eligible for inclusion, and all effect estimates with a comparison group were extracted and combined meta-analytically where appropriate. We assessed study risk of bias with Risk of Bias in Non-Randomized Studies – of Interventions and Risk of Bias in Randomized Trials.
Results
Data from twelve studies across six countries (Australia, Belgium, Canada, The Netherlands, UK and USA) and 67 505 participants were included. Data indicated that units delivered benefits on many outcomes. Units could reduce psychiatric holds (42% after intervention compared with 49.8% before intervention; difference = 7.8%; P < 0.0001) and increase out-patient follow-up care (χ2 = 37.42, d.f. = 1; P < 0.001). Meta-analysis indicated a significant reduction in length of emergency department stay (by 164.24 min; 95% CI −261.24 to −67.23 min; P < 0.001) and number of in-patient admissions (odds ratio 0.55, 95% CI 0.43–0.68; P < 0.001).
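The pooled estimates reported above (e.g. an odds ratio with a 95% CI) are the product of inverse-variance meta-analysis. The sketch below shows fixed-effect pooling of study-level odds ratios on the log scale; the input ORs and CIs are hypothetical examples, not the included studies' actual estimates.

```python
# Minimal sketch of fixed-effect inverse-variance pooling of odds ratios.
# Study estimates below are hypothetical placeholders.
import math

def pool_odds_ratios(studies):
    """Pool (odds_ratio, ci_low, ci_high) tuples on the log-OR scale.

    The standard error of each log-OR is recovered from the width of
    its 95% confidence interval: se = (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2            # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Two hypothetical studies favouring the intervention
or_pooled, lo, hi = pool_odds_ratios([(0.50, 0.35, 0.72),
                                      (0.62, 0.44, 0.87)])
print(f"pooled OR {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A random-effects model (e.g. DerSimonian–Laird) would additionally incorporate between-study heterogeneity into the weights; the fixed-effect version above is the simplest case.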
Conclusions
Short-stay mental health crisis units are effective for reducing emergency department wait times and in-patient admissions. Further research should investigate the impact of units on patient experience, and clinical and social outcomes.
Therapist cognitions about trauma-focused psychological therapies can affect our implementation of evidence-based therapies for post-traumatic stress disorder (PTSD), potentially reducing their effectiveness. Based on observations gleaned from teaching and supervising one of these treatments, cognitive therapy for PTSD (CT-PTSD), ten common ‘misconceptions’ were identified. These included misconceptions about the suitability of the treatment for some types of trauma and/or emotions, the need for stabilisation prior to memory work, the danger of ‘retraumatising’ patients with memory-focused work, the risks of using memory-focused techniques with patients who dissociate, the remote use of trauma-focused techniques, and the perception of trauma-focused CBT as inflexible. In this article, these misconceptions are analysed in light of existing evidence and guidance is provided on using trauma-focused CT-PTSD with a broad range of presentations.
Key learning aims
(1) To recognise common misconceptions about trauma-focused CBT for PTSD and the evidence against them.
(2) To widen understanding of the application of cognitive therapy for PTSD (CT-PTSD) to a broad range of presentations.
(3) To increase confidence in the formulation-driven, flexible, active and creative delivery of CT-PTSD.
Cognitive therapy for social anxiety disorder (CT-SAD) is recommended by NICE (2013) as a first-line intervention. Uptake in routine services is limited by the need for up to fourteen 90-minute face-to-face sessions, some of which take place out of the office. An internet-based version of the treatment (iCT-SAD) with remote therapist support may achieve similar outcomes with less therapist time.
Methods
102 patients with social anxiety disorder were randomised to iCT-SAD, CT-SAD, or waitlist (WAIT) control, each for 14 weeks. WAIT patients were randomised to the treatments after the wait. Assessments were at pre-treatment/wait, mid-treatment/wait, post-treatment/wait, and follow-ups 3 and 12 months after treatment. The pre-registered (ISRCTN 95 458 747) primary outcome was the social anxiety disorder composite, which combines 6 independent assessor and patient self-report scales of social anxiety. Secondary outcomes included disability, general anxiety, depression and a behaviour test.
Results
CT-SAD and iCT-SAD were both superior to WAIT on all measures. iCT-SAD did not differ from CT-SAD on the primary outcome at post-treatment or follow-up. Total therapist time in iCT-SAD was 6.45 h. CT-SAD required 15.8 h for the same reduction in social anxiety. Mediation analysis indicated that change in process variables specified in cognitive models accounted for 60% of the improvements associated with either treatment. Unlike the primary outcome, there was a significant but small difference in favour of CT-SAD on the behaviour test.
Conclusions
When compared to conventional face-to-face therapy, iCT-SAD can more than double the amount of symptom change associated with each therapist hour.
Electroanatomic mapping systems are increasingly used during ablations to decrease the need for fluoroscopy and therefore radiation exposure. For left-sided arrhythmias, transseptal puncture is a common procedure performed to gain access to the left side of the heart. We aimed to demonstrate the radiation exposure associated with transseptal puncture.
Methods:
Data were retrospectively collected from the Catheter Ablation with Reduction or Elimination of Fluoroscopy registry. Patients with left-sided accessory pathway-mediated tachycardia, with a structurally normal heart, who had a transseptal puncture, and were under 22 years of age were included. Those with previous ablations, concurrent diagnostic or interventional catheterisation, and missing data for fluoroscopy use or procedural outcomes were excluded. Patients with a patent foramen ovale who did not have a transseptal puncture were selected as the control group using the same criteria. Procedural outcomes were compared between the two groups.
Results:
There were 284 patients in the transseptal puncture group and 70 in the patent foramen ovale group. The transseptal puncture group had a significantly higher mean procedure time (158.8 versus 131.4 minutes, p = 0.002), rate of fluoroscopy use (38% versus 7%, p < 0.001), and mean fluoroscopy time (2.4 versus 0.6 minutes, p < 0.001). The acute success and complication rates were similar.
Conclusions:
Performing transseptal puncture remains a common reason to utilise fluoroscopy in the era of non-fluoroscopic ablation. Better tools are needed to make non-fluoroscopic transseptal puncture more feasible.
Automated virtual reality therapies are being developed to increase access to psychological interventions. We assessed the experience with one such therapy of patients diagnosed with psychosis, including satisfaction, side effects, and positive experiences of access to the technology. We tested whether side effects affected therapy.
Methods
In a clinical trial 122 patients diagnosed with psychosis completed baseline measures of psychiatric symptoms, received gameChange VR therapy, and then completed a satisfaction questionnaire, the Oxford-VR Side Effects Checklist, and outcome measures.
Results
79 (65.8%) patients were very satisfied with VR therapy, 37 (30.8%) were mostly satisfied, 3 (2.5%) were indifferent/mildly dissatisfied, and 1 (0.8%) person was quite dissatisfied. The most common side effects were: difficulties concentrating because of thinking about what might be happening in the room (n = 17, 14.2%); lasting headache (n = 10, 8.3%); and the headset causing feelings of panic (n = 9, 7.4%). Side effects formed three factors: difficulties concentrating when wearing a headset, feelings of panic using VR, and worries following VR. The occurrence of side effects was not associated with number of VR sessions, therapy outcomes, or psychiatric symptoms. Difficulties concentrating in VR were associated with slightly lower satisfaction. VR therapy provision and engagement made patients feel: proud (n = 99, 81.8%); valued (n = 97, 80.2%); and optimistic (n = 96, 79.3%).
Conclusions
Patients with psychosis were generally very positive towards the VR therapy, valued having the opportunity to try the technology, and experienced few adverse effects. Side effects did not significantly impact VR therapy. Patient experience of VR is likely to facilitate widespread adoption.
Adverse drug reactions (ADRs) are associated with increased morbidity, mortality, and resource utilization. Drug-drug interactions (DDIs) are among the most common causes of ADRs, and estimates suggest that up to 22% of patients take interacting medications. DDIs are often due to the propensity for agents to induce or inhibit enzymes responsible for the metabolism of concomitantly administered drugs. However, this phenomenon is further complicated by genetic variants of such enzymes. The aim of this study is to quantify and describe potential drug-drug, drug-gene, and drug-drug-gene interactions in a community-based patient population.
Methods
A regional pharmacy with retail outlets in Arkansas provided deidentified prescription data from March 2020 for 4761 individuals. Drug-drug and drug-drug-gene interactions were assessed utilizing the logic incorporated into GenMedPro, a commercially available digital gene-drug interaction software program that incorporates variants of 9 pharmacokinetic (PK) and 2 pharmacodynamic (PD) genes to evaluate DDIs and drug-gene interactions. The data were first assessed for composite drug-drug interaction risk, and each individual was stratified to a risk category using the logic incorporated in GenMedPro. To calculate the frequency of potential drug-gene interactions, genotypes were imputed and allocated to the cohort according to each gene’s frequency in the general population. Potential genotypes were randomly allocated to the population 100 times in a Monte Carlo simulation. Potential drug-drug, gene-drug, or gene-drug-drug interaction risk was characterized as minor, moderate, or major.
Results
Based on prescription data only, the probability of a DDI of any impact (mild, moderate, or major) was 26% [95% CI: 0.248-0.272] in the population. This probability increased to 49.6% [95% CI: 0.484-0.507] when simulated genetic polymorphisms were additionally assessed. When assessing only major impact interactions, there was a 7.8% [95% CI: 0.070-0.085] probability of drug-drug interactions and 10.1% [95% CI: 0.095-0.108] probability with the addition of genetic contributions. The probability of drug-drug-gene interactions of any impact was correlated with the number of prescribed medications, with an approximate probability of 77%, 85%, and 94% in patients prescribed 5, 6, or 7+ medications, respectively. When stratified by specific drug class, antidepressants (19.5%), antiemetics (21.4%), analgesics (16%), antipsychotics (15.6%), and antiparasitics (49.7%) had the highest probability of major drug-drug-gene interaction.
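The Monte Carlo step described in the Methods — repeatedly allocating genotypes to the cohort according to population allele frequencies and tallying the resulting interaction risk — can be sketched as below. The frequencies, cohort and interaction rule are simplified hypothetical placeholders, not GenMedPro's actual logic or the study's data.

```python
# Sketch of Monte Carlo genotype allocation: genotypes are drawn at random
# according to a population frequency, and the fraction of patients flagged
# for a drug-gene interaction is averaged over repeated simulations.
# All frequencies and the flagging rule are hypothetical.
import random

def simulate_interaction_risk(n_patients, variant_freq, on_interacting_drug,
                              n_sims=100, seed=1):
    """Mean fraction of patients with a flagged drug-gene interaction
    across n_sims random genotype allocations."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_sims):
        hits = 0
        for patient in range(n_patients):
            carries_variant = rng.random() < variant_freq
            # Simplified rule: an interaction is flagged only when a
            # variant carrier is also prescribed an interacting drug
            if carries_variant and on_interacting_drug[patient]:
                hits += 1
        fractions.append(hits / n_patients)
    return sum(fractions) / n_sims

drug_flags = [i % 4 == 0 for i in range(1000)]   # 25% take an interacting drug
risk = simulate_interaction_risk(1000, 0.3, drug_flags)
print(f"simulated drug-gene interaction probability: {risk:.3f}")
```

With a 30% variant frequency and 25% of patients on an interacting drug, the simulated probability converges toward 0.30 × 0.25 = 0.075; the study's version layered 11 genes and severity tiers on top of this basic scheme.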
Conclusions
In a community-based population of outpatients, the probability of drug-drug interaction risk increases when genetic polymorphisms are attributed to the population. These data suggest that pharmacogenetic testing may be useful in predicting drug interactions, drug-gene interactions, and severity of interactions when proactively evaluating patient medication profiles.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
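At its simplest, the multistate idea described above amounts to tabulating patients' daily clinical states into a first-order transition probability matrix. The sketch below illustrates this with hypothetical states and trajectories; it is not the study's actual model, which would typically be fitted with dedicated multistate survival software.

```python
# Minimal sketch: estimate daily transition probabilities between clinical
# states from patient state sequences. States and sequences are hypothetical.
from collections import Counter

STATES = ["ward", "icu", "discharged", "dead"]

def transition_matrix(sequences):
    """Estimate P(next state | current state) from daily state sequences."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive-day pairs
            counts[(a, b)] += 1
    matrix = {}
    for a in STATES:
        row_total = sum(counts[(a, b)] for b in STATES)
        if row_total:
            matrix[a] = {b: counts[(a, b)] / row_total for b in STATES}
    return matrix

# Hypothetical daily trajectories for three patients
seqs = [["ward", "ward", "icu", "icu", "ward", "discharged"],
        ["ward", "icu", "dead"],
        ["ward", "ward", "discharged"]]
tm = transition_matrix(seqs)
print(tm["ward"])
```

Unlike a single time-to-event analysis, every daily assessment contributes information here, which is what makes the longitudinal approach statistically efficient.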
Many patients with mental health disorders become increasingly isolated at home due to anxiety about going outside. A cognitive perspective on this difficulty is that threat cognitions lead to the safety-seeking behavioural response of agoraphobic avoidance.
Aims:
We sought to develop a brief questionnaire, suitable for research and clinical practice, to assess a wide range of cognitions likely to lead to agoraphobic avoidance. We also included two additional subscales assessing two types of safety-seeking defensive responses: anxious avoidance and within-situation safety behaviours.
Method:
198 patients with psychosis and agoraphobic avoidance and 1947 non-clinical individuals completed the item pool and measures of agoraphobic avoidance, generalised anxiety, social anxiety, depression and paranoia. Factor analyses were used to derive the Oxford Cognitions and Defences Questionnaire (O-CDQ).
Results:
The O-CDQ consists of three subscales: threat cognitions (14 items), anxious avoidance (11 items), and within-situation safety behaviours (8 items). Separate confirmatory factor analyses demonstrated a good model fit for all subscales. The cognitions subscale was significantly associated with agoraphobic avoidance (r = .672, p < .001), social anxiety (r = .617, p < .001), generalised anxiety (r = .746, p < .001), depression (r = .619, p < .001) and paranoia (r = .655, p < .001). Additionally, both the O-CDQ avoidance (r = .867, p < .001) and within-situation safety behaviours (r = .757, p < .001) subscales were highly correlated with agoraphobic avoidance. The O-CDQ demonstrated excellent internal consistency (cognitions Cronbach’s alpha = .93, avoidance Cronbach’s alpha = .94, within-situation Cronbach’s alpha = .93) and test–retest reliability (cognitions ICC = 0.88, avoidance ICC = 0.92, within-situation ICC = 0.89).
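For reference, the internal-consistency statistic reported above (Cronbach's alpha) has a simple closed form: α = k/(k−1) × (1 − Σ item variances / total-score variance). The sketch below computes it on simulated item responses; the data are hypothetical, not the O-CDQ validation sample.

```python
# Minimal sketch: Cronbach's alpha for a multi-item scale.
# Simulated responses stand in for real questionnaire data.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))                       # shared trait
responses = latent + rng.normal(scale=0.5, size=(200, 14))  # 14 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Because all 14 simulated items load on one latent trait with modest noise, alpha comes out high, mirroring the strong internal consistency reported for the O-CDQ subscales.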
Conclusions:
The O-CDQ, consisting of three separate scales, has excellent psychometric properties and may prove a helpful tool for understanding agoraphobic avoidance across mental health disorders.
Young people with social disability and severe and complex mental health problems have poor outcomes, frequently struggling with treatment access and engagement. Outcomes may be improved by enhancing care and providing targeted psychological or psychosocial intervention.
Aims
We aimed to test the hypothesis that adding social recovery therapy (SRT) to enhanced standard care (ESC) would improve social recovery compared with ESC alone.
Method
A pragmatic, assessor-masked, randomised controlled trial (PRODIGY: ISRCTN47998710) was conducted in three UK centres. Participants (n = 270) were aged 16–25 years, with persistent social disability, defined as under 30 hours of structured activity per week, social impairment for at least 6 months and severe and complex mental health problems. Participants were randomised to ESC alone or SRT plus ESC. SRT was an individual psychosocial therapy delivered over 9 months. The primary outcome was time spent in structured activity 15 months post-randomisation.
Results
We randomised 132 participants to SRT plus ESC and 138 to ESC alone. Mean weekly hours in structured activity at 15 months increased by 11.1 h for SRT plus ESC (mean 22.4, s.d. = 21.4) and 16.6 h for ESC alone (mean 27.7, s.d. = 26.5). There was no significant difference between arms; treatment effect was −4.44 (95% CI −10.19 to 1.31, P = 0.13). Missingness was consistently greater in the ESC alone arm.
Conclusions
We found no evidence for the superiority of SRT as an adjunct to ESC. Participants in both arms made large, clinically significant improvements on all outcomes. When providing comprehensive evidence-based standard care, there are no additional gains by providing specialised SRT. Optimising standard care to ensure targeted delivery of existing interventions may further improve outcomes.