Violent criminal offenders with personality disorders (PDs) can cause immense harm, but are often deemed untreatable. We conducted a randomized clinical trial to test the effectiveness of long-term psychotherapy for rehabilitating offenders with PDs.
We compared schema therapy (ST), an evidence-based psychotherapy for PDs, to treatment-as-usual (TAU) at eight high-security forensic hospitals in the Netherlands. Patients in both conditions received multiple treatment modalities and differed only in the individual, study-specific therapy they received. One hundred three male offenders with antisocial, narcissistic, borderline, or paranoid PDs, or Cluster B PD-not-otherwise-specified, were assigned to 3 years of ST or TAU and assessed every 6 months. Primary outcomes were rehabilitation, involving gradual reintegration into the community, and PD symptoms.
Patients in both conditions showed moderate to large improvements in outcomes. ST was superior to TAU on both primary outcomes – rehabilitation (i.e. attaining supervised and unsupervised leave) and PD symptoms – and six of nine secondary outcomes, with small to moderate advantages over TAU. ST patients moved more rapidly through rehabilitation (supervised leave, treatment*time: F(5, 308) = 9.40, p < 0.001; unsupervised leave, treatment*time: F(5, 472) = 3.45, p = 0.004), and showed faster improvements on PD scales (treatment*time: t(1387) = −2.85, p = 0.005).
These findings contradict pessimistic views on the treatability of violent offenders with PDs, and support the effectiveness of long-term psychotherapy for rehabilitating these patients, facilitating their re-entry into the community.
In this study, we examined the relationship between polygenic liability for depression and number of stressful life events (SLEs) as risk factors for early-onset depression treated in inpatient, outpatient or emergency room settings at psychiatric hospitals in Denmark.
Data were drawn from the iPSYCH2012 case-cohort sample, a population-based sample of individuals born in Denmark between 1981 and 2005. The sample included 18 532 individuals who were diagnosed with depression by a psychiatrist by age 31 years, and a comparison group of 20 184 individuals. Information on SLEs was obtained from nationwide registers and operationalized as a time-varying count variable. Hazard ratios and cumulative incidence rates were estimated using Cox regressions.
Risk for depression increased by 35% with each standard deviation increase in polygenic liability (p < 0.0001), and 36% (p < 0.0001) with each additional SLE. There was a small interaction between polygenic liability and SLEs (β = −0.04, p = 0.0009). The probability of being diagnosed with depression in a hospital-based setting between ages 15 and 31 years ranged from 1.5% among males in the lowest quartile of polygenic liability with 0 events by age 15, to 18.8% among females in the highest quartile of polygenic liability with 4+ events by age 15.
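The way the reported Cox-model effects combine can be sketched numerically. The snippet below treats the published hazard ratios (1.35 per standard deviation of polygenic liability, 1.36 per SLE) and the interaction coefficient (−0.04) as the only terms in the linear predictor; this is an illustrative simplification, and the `hazard_ratio` helper is hypothetical rather than part of the study's analysis code:

```python
import math

# Log-hazard coefficients implied by the reported results:
# HR 1.35 per SD of polygenic liability, HR 1.36 per additional SLE,
# and an interaction coefficient of -0.04 (all other model terms omitted).
beta_prs = math.log(1.35)
beta_sle = math.log(1.36)
beta_int = -0.04

def hazard_ratio(prs_sd, n_sle):
    """Hazard ratio vs. a reference person (mean polygenic score, 0 SLEs),
    under a Cox model with a PRS x SLE interaction term."""
    lp = beta_prs * prs_sd + beta_sle * n_sle + beta_int * prs_sd * n_sle
    return math.exp(lp)

# One SD above the mean with two SLEs: the small negative interaction
# slightly tempers the multiplicative combination of the main effects.
print(round(hazard_ratio(1, 2), 2))
print(round(hazard_ratio(0, 1), 2))  # recovers the per-event HR of ~1.36
```

The negative interaction means the combined hazard is a little less than the product of the two main effects, consistent with the small β reported above.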
These findings suggest that although there is minimal interaction between polygenic liability and SLEs as risk factors for hospital-treated depression, combining information on these two important risk factors could potentially be useful for identifying high-risk individuals.
Stem cells give rise to the entirety of cells within an organ. Maintaining stem cell identity and coordinately regulating stem cell divisions are crucial for proper development. In plants, mobile proteins, such as WUSCHEL-RELATED HOMEOBOX 5 (WOX5) and SHORTROOT (SHR), regulate divisions in the root stem cell niche. However, how these proteins coordinately function to establish systemic behaviour is not well understood. We propose a non-cell autonomous role for WOX5 in the cortex endodermis initial (CEI) and identify a regulator, ANGUSTIFOLIA (AN3)/GRF-INTERACTING FACTOR 1, that coordinates CEI divisions. Here, we show with a multi-scale hybrid model integrating ordinary differential equations (ODEs) and agent-based modeling that quiescent center (QC) and CEI divisions have different dynamics. Specifically, by combining continuous models to describe regulatory networks and agent-based rules, we model systemic behaviour, which led us to predict cell-type-specific expression dynamics of SHR, SCARECROW, WOX5, AN3 and CYCLIND6;1, and experimentally validate CEI cell divisions. Taken together, our results show an interdependency between CEI and QC divisions.
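The hybrid-modelling idea, continuous ODEs inside each cell coupled to discrete agent-based division rules, can be sketched in miniature. All equations, parameter values, and the `Cell` class below are illustrative stand-ins, not the published model:

```python
# Minimal sketch of a hybrid ODE/agent-based model: each cell agent carries
# an ODE for a division-promoting factor (a stand-in for CYCLIND6;1), and an
# agent-based rule triggers division when the factor crosses a threshold.
DT = 0.1          # forward-Euler time step
THRESHOLD = 1.0   # division threshold for the factor

class Cell:
    def __init__(self, name, production):
        self.name = name
        self.production = production  # cell-type-specific synthesis rate
        self.level = 0.0
        self.divisions = 0

    def step(self):
        # Continuous part: dx/dt = production - 0.5*x, integrated with Euler
        self.level += DT * (self.production - 0.5 * self.level)
        if self.level >= THRESHOLD:   # discrete agent-based rule
            self.level = 0.0          # reset the factor after division
            self.divisions += 1

# Different synthesis rates give the two cell types different division
# dynamics, echoing the QC-vs-CEI contrast described in the abstract.
qc, cei = Cell("QC", production=0.6), Cell("CEI", production=1.2)
for _ in range(500):
    qc.step()
    cei.step()
print(qc.divisions, cei.divisions)
```

The faster-producing cell divides more often over the same simulated interval, which is the kind of cell-type-specific dynamics such models are built to predict.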
In April 2019, the U.S. Fish and Wildlife Service (USFWS) released its recovery plan for the jaguar Panthera onca after several decades of discussion, litigation and controversy about the status of the species in the USA. The USFWS estimated that potential habitat, south of the Interstate-10 highway in Arizona and New Mexico, had a carrying capacity of c. six jaguars, and so focused its recovery programme on areas south of the USA–Mexico border. Here we present a systematic review of the modelling and assessment efforts over the last 25 years, with a focus on areas north of Interstate-10 in Arizona and New Mexico, outside the recovery unit considered by the USFWS. Despite differences in data inputs, methods, and analytical extent, the nine previous studies found support for potential suitable jaguar habitat in the central mountain ranges of Arizona and New Mexico. Applying slightly modified versions of the USFWS model and recalculating an Arizona-focused model over both states provided additional confirmation. Extending the area of consideration also substantially raised the carrying capacity of habitats in Arizona and New Mexico, from six to 90 or 151 adult jaguars, using the modified USFWS models. This review demonstrates the crucial ways in which choosing the extent of analysis influences the conclusions of a conservation plan. More importantly, it opens a new opportunity for jaguar conservation in North America that could help address threats from habitat losses, climate change and border infrastructure.
This study aimed to evaluate and compare simultaneous integrated boost-based volumetric modulated arc therapy (SIB-VMAT) of head-and-neck plans optimised using segmented and non-segmented intermediate-risk target volumes.
Materials and methods:
CT data of 20 patients with locally advanced laryngeal cancer treated with radical chemoradiation were included retrospectively. Both segmented [planning target volume (PTV) IR!] and non-segmented PTV (PTV IR) volumes were created for the intermediate-risk volume. Correspondingly, two VMAT plans were generated for every CT dataset. Dosimetry parameters obtained from cumulative dose volume histogram and the quality indices such as conformity and homogeneity indices were evaluated for both plans and were statistically analysed.
Maximum dose of PTV IR! was observed to be higher in the non-segmented plans (7281·45 versus 7075·75 cGy) and was statistically significant (p = 0·002). Homogeneity index (HI) of PTV IR! in segmented plans fared better compared to non-segmented plans (0·1 versus 0·12, p = 0·01). All other dosimetry parameters were found to be similar in both plans.
This study shows that using segmented volumes for planning will lead to more homogenous plans with regard to intermediate- and low-risk volumes, especially under controlled settings.
This SHEA white paper identifies knowledge gaps and challenges in healthcare epidemiology research related to coronavirus disease 2019 (COVID-19) with a focus on core principles of healthcare epidemiology. These gaps, revealed during the worst phases of the COVID-19 pandemic, are described in 10 sections: epidemiology, outbreak investigation, surveillance, isolation precaution practices, personal protective equipment (PPE), environmental contamination and disinfection, drug and supply shortages, antimicrobial stewardship, healthcare personnel (HCP) occupational safety, and return to work policies. Each section highlights three critical healthcare epidemiology research questions with detailed description provided in supplementary materials. This research agenda calls for translational studies from laboratory-based basic science research to well-designed, large-scale studies and health outcomes research. Research gaps and challenges related to nursing homes and social disparities are included. Collaborations across various disciplines, expertise and across diverse geographic locations will be critical.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
We summarize some of the past year's most important findings within climate change-related research. New research has improved our understanding of Earth's sensitivity to carbon dioxide, finds that permafrost thaw could release more carbon emissions than expected and that the uptake of carbon in tropical ecosystems is weakening. Adverse impacts on human society include increasing water shortages and impacts on mental health. Options for solutions emerge from rethinking economic models, rights-based litigation, strengthened governance systems and a new social contract. The disruption caused by COVID-19 could be seized as an opportunity for positive change, directing economic stimulus towards sustainable investments.
A synthesis is made of ten fields within climate science where there have been significant advances since mid-2019, through an expert elicitation process with broad disciplinary scope. Findings include: (1) a better understanding of equilibrium climate sensitivity; (2) abrupt thaw as an accelerator of carbon release from permafrost; (3) changes to global and regional land carbon sinks; (4) impacts of climate change on water crises, including equity perspectives; (5) adverse effects on mental health from climate change; (6) immediate effects on climate of the COVID-19 pandemic and requirements for recovery packages to deliver on the Paris Agreement; (7) suggested long-term changes to governance and a social contract to address climate change, learning from the current pandemic; (8) updated positive cost–benefit ratio and new perspectives on the potential for green growth in the short- and long-term perspective; (9) urban electrification as a strategy to move towards low-carbon energy systems; and (10) rights-based litigation as an increasingly important method to address climate change, with recent clarifications on the legal standing and representation of future generations.
Social media summary
Stronger permafrost thaw, COVID-19 effects and growing mental health impacts among highlights of latest climate science.
There is ongoing debate regarding the relationship between clinical symptoms and cognition in schizophrenia spectrum disorders (SSD). The present study aimed to explore the potential relationships between symptoms, with an emphasis on negative symptoms, and social and non-social cognition.
Hierarchical cluster analysis with k-means optimisation was conducted to characterise clinical subgroups using the Scale for the Assessment of Negative Symptoms and Scale for the Assessment of Positive Symptoms in n = 130 SSD participants. Emergent clusters were compared on the MATRICS Consensus Cognitive Battery, which measures non-social cognition and emotion management as well as demographic and clinical variables. Spearman’s correlations were then used to investigate potential relationships between specific negative symptoms and emotion management and non-social cognition.
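The clustering step can be illustrated with a minimal k-means pass. The two-dimensional scores, the choice of k = 2, and the fixed initial centres below are purely illustrative (the study derived four subgroups from SANS/SAPS symptom profiles):

```python
import numpy as np

# Toy k-means on hypothetical (negative, positive) symptom severity scores.
def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # assign each point to its nearest centre (Euclidean distance)
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned points
        centers = np.array([points[labels == c].mean(axis=0)
                            for c in range(len(centers))])
    return labels, centers

pts = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],    # low-symptom profiles
                [4.0, 4.2], [4.1, 3.9], [3.8, 4.0]])   # high-symptom profiles
labels, centers = kmeans(pts, centers=pts[[0, 3]].copy())
print(labels)
```

In practice the initial centres would come from the hierarchical step, and the resulting clusters would then be compared on external measures such as the MATRICS battery.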
Four distinct clinical subgroups were identified: 1. high hallucinations, 2. mixed symptoms, 3. high negative symptoms, and 4. relatively asymptomatic. The high negative symptom subgroup was found to have significantly poorer emotion management than the high hallucination and relatively asymptomatic subgroups. No further differences between subgroups were observed. Correlation analyses revealed avolition-apathy and anhedonia-asociality were negatively correlated with emotion management, but not non-social cognition. Affective flattening and alogia were not associated with either emotion management or non-social cognition.
The present study identified associations between negative symptoms and emotion management within social cognition, but not with any domain of non-social cognition. This relationship may be specific to motivation, anhedonia and apathy, but not expressive deficits. This suggests that targeted interventions for social cognition may also result in parallel improvement in some specific negative symptoms.
This is the first report on the association between trauma exposure and depression from the Advancing Understanding of RecOvery afteR traumA (AURORA) multisite longitudinal study of adverse post-traumatic neuropsychiatric sequelae (APNS) among participants seeking emergency department (ED) treatment in the aftermath of a traumatic life experience.
We focus on participants presenting at EDs after a motor vehicle collision (MVC), the index trauma for most AURORA participants, and examine associations of participant socio-demographics and MVC characteristics with 8-week depression as mediated through peritraumatic symptoms and 2-week depression.
Eight-week depression prevalence was relatively high (27.8%) and associated with several MVC characteristics (being passenger v. driver; injuries to other people). Peritraumatic distress was associated with 2-week but not 8-week depression. Most of these associations held when controlling for peritraumatic symptoms and, to a lesser degree, depressive symptoms at 2-weeks post-trauma.
These observations, coupled with substantial variation in the relative strength of the mediating pathways across predictors, raise the possibility of diverse and potentially complex underlying biological and psychological processes that remain to be elucidated. More in-depth analyses of the rich and evolving AURORA database may identify new targets for intervention and new tools for risk-based stratification following trauma exposure.
To characterize associations between exposures within and outside the medical workplace with healthcare personnel (HCP) SARS-CoV-2 infection, including the effect of various forms of respiratory protection.
We collected data from international participants via an online survey.
In total, 1,130 HCP (244 cases with laboratory-confirmed COVID-19, and 886 controls who remained healthy throughout the pandemic) from 67 countries not meeting prespecified exclusion criteria (ie, healthy but not working, missing workplace exposure data, COVID symptoms without lab confirmation) were included in this study.
Respondents were queried regarding workplace exposures, respiratory protection, and extra-occupational activities. Odds ratios for HCP infection were calculated using multivariable logistic regression and sensitivity analyses controlling for confounders and known biases.
HCP infection was associated with non–aerosol-generating contact with COVID-19 patients (adjusted OR, 1.4; 95% CI, 1.04–1.9; P = .03) and extra-occupational exposures including gatherings of ≥10 people, patronizing restaurants or bars, and public transportation (adjusted OR range, 3.1–16.2). Respirator use during aerosol-generating procedures (AGPs) was associated with lower odds of HCP infection (adjusted OR, 0.4; 95% CI, 0.2–0.8, P = .005), as was exposure to intensive care and dedicated COVID units, negative pressure rooms, and personal protective equipment (PPE) observers (adjusted OR range, 0.4–0.7).
COVID-19 transmission to HCP was associated with medical exposures currently considered lower-risk and multiple extra-occupational exposures, and exposures associated with proper use of appropriate PPE were protective. Closer scrutiny of infection control measures surrounding healthcare activities and medical settings considered lower risk, and continued awareness of the risks of public congregation, may reduce the incidence of HCP infection.
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data. 
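The basic operation a calibration curve supports, mapping a measured 14C age to a calendar age equivalent, can be sketched as a table lookup. The toy curve below is a made-up monotone table, not IntCal20 data; real curves contain wiggles and plateaus that make the calendar-age distribution multimodal and require probabilistic calibration software rather than simple inversion:

```python
import numpy as np

# Made-up monotone calibration table (NOT IntCal20 data):
cal_bp = np.array([0.0, 1000, 2000, 3000, 4000, 5000])   # calendar ages, cal BP
c14_bp = np.array([0.0, 950, 1980, 2890, 3700, 4450])    # curve's 14C ages, BP

def calibrate(c14_age):
    """Invert the (monotone) toy curve by linear interpolation to get a
    calendar age estimate for a measured radiocarbon age."""
    return float(np.interp(c14_age, c14_bp, cal_bp))

# A measured 14C age of 3295 BP falls between two curve points, so the
# calendar estimate is interpolated between 3000 and 4000 cal BP.
print(calibrate(3295.0))
```

Note how the curve's departure from the 1:1 line is exactly the compensation for fluctuating atmospheric 14C concentration that the abstract describes.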
We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
Prescribing metrics, cost, and surrogate markers are often used to describe the value of antimicrobial stewardship (AMS) programs. However, process measures are only indirectly related to clinical outcomes and may not represent the total effect of an intervention. We determined the global impact of a multifaceted AMS initiative for hospitalized adults with common infections.
Single center, quasi-experimental study.
Hospitalized adults with urinary, skin, and respiratory tract infections discharged from family medicine and internal medicine wards before (January 2017–June 2017) and after (January 2018–June 2018) an AMS initiative on a family medicine ward were included. A series of AMS-focused initiatives comprised the development and dissemination of handheld prescribing tools, AMS positive feedback cases, and academic modules. We compared the effect on an ordinal end point consisting of clinical resolution, adverse drug events, and antimicrobial optimization between the preintervention and postintervention periods.
In total, 256 subjects were included before and after an AMS intervention. Excessive durations of therapy were reduced from 40.3% to 22% (P < .001). Patients without an optimized antimicrobial course were more likely to experience clinical failure (OR, 2.35; 95% CI, 1.17–4.72). The likelihood of a better global outcome was greater in the family medicine intervention arm (62.0%, 95% CI, 59.6–67.1) than in the preintervention family medicine arm.
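The "likelihood of a better global outcome" for an ordinal endpoint can be estimated as a pairwise win probability (a DOOR-style statistic). The sketch below uses hypothetical outcome ranks, not the study's data:

```python
# Sketch of the "probability of a better global outcome" for an ordinal
# endpoint (higher rank = better outcome). All data below are hypothetical.
def prob_better(intervention, comparison):
    """P(a random intervention patient has a better outcome than a random
    comparison patient), counting ties as 1/2."""
    wins = 0.0
    for a in intervention:
        for b in comparison:
            if a > b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(intervention) * len(comparison))

post = [3, 3, 2, 3, 1, 2, 3, 3]   # hypothetical post-intervention ranks
pre  = [2, 1, 3, 2, 2, 1, 3, 2]   # hypothetical pre-intervention ranks
print(round(prob_better(post, pre), 3))
```

A value of 0.5 indicates no difference between arms; the study's reported 62.0% corresponds to this kind of probability favouring the intervention arm.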
Collaborative, targeted feedback with prescribing metrics, AMS cases, and education improved global outcomes for hospitalized adults on a family medicine ward.
The objective of this paper is to demonstrate that the gradient-constrained discounted Steiner point algorithm (GCDSPA) described in an earlier paper by the authors is applicable to a class of real mine planning problems, by using the algorithm to design a part of the underground access in the Rubicon gold mine near Kalgoorlie in Western Australia. The algorithm is used to design a decline connecting two ore bodies so as to maximize the net present value (NPV) associated with the connector. The connector is to break out from the access infrastructure of one ore body and extend to the other ore body. There is a junction on the connector where it splits in two near the second ore body. The GCDSPA is used to obtain the optimal location of the junction and the corresponding NPV. The result demonstrates that the GCDSPA can be used to solve certain problems in mine planning for which currently available methods cannot provide optimal solutions.
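The discounted-value trade-off that drives junction placement can be illustrated with a brute-force one-dimensional search. This is not the GCDSPA itself (which handles gradient constraints and exact optimality conditions), and all geometry and financial figures below are hypothetical:

```python
import math

# Brute-force sketch of the junction-placement trade-off behind discounted
# Steiner point methods. All figures are hypothetical.
RATE = 0.10           # annual discount rate
ADVANCE = 1500.0      # decline development rate, metres/year
COST_PER_M = 0.005    # development cost, $M per metre
VALUE_A, VALUE_B = 40.0, 25.0                        # ore values, $M
BODY_A, BODY_B = (3000.0, -400.0), (2800.0, 600.0)   # body positions, metres

def npv(j):
    """NPV when the shared trunk runs from the break-out point to (j, 0)
    and then branches to the two ore bodies; each body's value is
    discounted by the time taken to reach it."""
    branch_a = math.hypot(BODY_A[0] - j, BODY_A[1])
    branch_b = math.hypot(BODY_B[0] - j, BODY_B[1])
    t_a, t_b = (j + branch_a) / ADVANCE, (j + branch_b) / ADVANCE
    cost = COST_PER_M * (j + branch_a + branch_b)
    return VALUE_A / (1 + RATE) ** t_a + VALUE_B / (1 + RATE) ** t_b - cost

# Grid search over candidate junction positions along the trunk.
best_j = max(range(0, 2801, 10), key=npv)
print(best_j, round(npv(best_j), 2))
```

Even in this crude sketch the optimum junction sits strictly between the break-out point and the ore bodies, balancing shared development cost against discounted access time, which is the qualitative behaviour the GCDSPA optimises exactly under gradient constraints.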
Lewy body dementia, consisting of both dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), is considerably under-recognised clinically compared with its frequency in autopsy series.
This study investigated the clinical diagnostic pathways of patients with Lewy body dementia to assess if difficulties in diagnosis may be contributing to these differences.
We reviewed the medical notes of 74 people with DLB and 72 with non-DLB dementia matched for age, gender and cognitive performance, together with 38 people with PDD and 35 with Parkinson's disease, matched for age and gender, from two geographically distinct UK regions.
The cases of individuals with DLB took longer to reach a final diagnosis (1.2 v. 0.6 years, P = 0.017), underwent more scans (1.7 v. 1.2, P = 0.002) and had more alternative prior diagnoses (0.8 v. 0.4, P = 0.002), than the cases of those with non-DLB dementia. Individuals diagnosed in one region of the UK had significantly more core features (2.1 v. 1.5, P = 0.007) than those in the other region, and were less likely to have dopamine transporter imaging (P < 0.001). For patients with PDD, more than 1.4 years prior to receiving a dementia diagnosis: 46% (12 of 26) had documented impaired activities of daily living because of cognitive impairment, 57% (16 of 28) had cognitive impairment in multiple domains, with 38% (6 of 16) having both, and 39% (9 of 23) already receiving anti-dementia drugs.
Our results show the pathway to diagnosis of DLB is longer and more complex than for non-DLB dementia. There were also marked differences between regions in the thresholds clinicians adopt for diagnosing DLB and also in the use of dopamine transporter imaging. For PDD, a diagnosis of dementia was delayed well beyond symptom onset and even treatment.
In response to advancing clinical practice guidelines regarding concussion management, service members, like athletes, complete a baseline assessment prior to participating in high-risk activities. While several studies have established test stability in athletes, no investigation to date has examined the stability of baseline assessment scores in military cadets. The objective of this study was to assess the test–retest reliability of a baseline concussion test battery in cadets at U.S. Service Academies.
All cadets participating in the Concussion Assessment, Research, and Education (CARE) Consortium investigation completed a standard baseline battery that included memory, balance, symptom, and neurocognitive assessments. Annual baseline testing was completed during the first 3 years of the study. A two-way mixed-model analysis of variance (intraclass correlation coefficient, ICC(3,1)) and Kappa statistics were used to assess the stability of the metrics at 1-year and 2-year time intervals.
ICC values for the 1-year test interval ranged from 0.28 to 0.67 and from 0.15 to 0.57 for the 2-year interval. Kappa values ranged from 0.16 to 0.21 for the 1-year interval and from 0.29 to 0.31 for the 2-year test interval. Across all measures, the observed effects were small, ranging from 0.01 to 0.44.
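ICC(3,1) values like those reported can be computed from the two-way ANOVA mean squares. The score matrix below (5 hypothetical cadets over 3 annual sessions) is illustrative only:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, single measurement, consistency.
    scores: (n subjects x k sessions) array of test scores."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # sessions
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical baseline scores: rows = cadets, columns = annual sessions.
scores = np.array([[50, 52, 49],
                   [40, 43, 41],
                   [60, 58, 61],
                   [45, 49, 47],
                   [55, 53, 56]], dtype=float)
print(round(icc_3_1(scores), 2))
```

Values near 1 indicate stable year-to-year rankings; the 0.15–0.67 range reported above therefore reflects substantial within-subject variability across annual baselines.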
This investigation noted less than optimal reliability for the most common concussion baseline assessments. While none of the assessments met or exceeded the accepted clinical threshold, the effect sizes were relatively small suggesting an overlap in performance from year-to-year. As such, baseline assessments beyond the initial evaluation in cadets are not essential but could aid concussion diagnosis.
To institute facility-wide Kamishibai card (K-card) rounding for central venous catheter (CVC) maintenance bundle education and adherence and to evaluate its impact on bundle reliability and central-line–associated bloodstream infection (CLABSI) rates.
Quality improvement project.
Inpatient units at a large, academic freestanding children’s hospital.
Data for inpatients with a CVC in place for ≥1 day between November 1, 2017 and October 31, 2018 were included.
A K-card was developed based on 7 core elements in our CVC maintenance bundle. During monthly audits, auditors used the K-cards to ask bedside nurses standardized questions and to conduct medical record documentation reviews in real time. Adherence to every bundle element was required for the audit to be considered “adherent.” We recorded bundle reliability prospectively, and we compared reliability and CLABSI rates at baseline and 1 year after the intervention.
During the study period, 2,321 K-card audits were performed for 1,051 unique patients. Overall maintenance bundle reliability increased significantly from 43% at baseline to 78% at 12 months after implementation (P < .001). The hospital-wide CLABSI rate decreased from 1.35 during the 12-month baseline period to 1.17 during the 12-month intervention period, but the change was not statistically significant (incidence rate ratio [IRR], 0.87; 95% confidence interval [CI], 0.60–1.24; P = .41).
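The incidence-rate-ratio comparison can be sketched with a Wald interval on the log scale. The event counts and line-days below are hypothetical, not the study's data:

```python
import math

# Sketch of an incidence-rate-ratio (IRR) comparison with a Wald 95% CI
# computed on the log scale. Counts and line-days are hypothetical.
def irr_with_ci(events_post, days_post, events_pre, days_pre, z=1.96):
    """IRR = (events_post/days_post) / (events_pre/days_pre)."""
    irr = (events_post / days_post) / (events_pre / days_pre)
    se = math.sqrt(1 / events_post + 1 / events_pre)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

irr, lo, hi = irr_with_ci(35, 30000, 40, 29600)
print(round(irr, 2), round(lo, 2), round(hi, 2))
# A confidence interval that spans 1.0 means the observed decrease is not
# statistically significant at the 5% level, as in the study's result.
```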
Hospital-wide CVC K-card rounding facilitated standardized data collection, discussion of reliability, and real-time feedback to nurses. Maintenance bundle reliability increased after implementation, accompanied by a nonsignificant decrease in the CLABSI rate.
Accurate near-field measurements for the characterization of either deterministic or stochastic electromagnetic fields require a relevant process that removes the influence of the probes, transmission lines, and measurement circuits. The main part of the experimental work presented here concerns a calibration procedure for a test setup consisting of a microstrip test structure and a scanning loop probe. The calibration characteristic, obtained by comparing measured and simulated results, is then used to convert the measured voltage into the magnetic field across and along the microstrip line at a specific height above it. By performing measurements and simulations of the same test structure with the loop probe in the presence of an additional scanning probe, the influence of the additional probe on the measured output is thoroughly investigated and relevant corrections are given. These corrections can be important when a two-point correlation measurement is required, especially at scanning points where the two probes are mutually close.
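The voltage-to-field conversion step described above can be sketched as follows. The calibration-factor table and all values are hypothetical, assuming a single scalar factor per frequency (a real characteristic would also account for probe orientation and the corrections for the second probe):

```python
import numpy as np

# Hypothetical probe calibration characteristic: volts of probe output per
# (A/m) of magnetic field at the scan height, tabulated versus frequency.
freqs_mhz = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
cal_factor = np.array([0.020, 0.055, 0.090, 0.120, 0.150])  # V per (A/m)

def voltage_to_h(v_measured, f_mhz):
    """Interpolate the calibration factor at the working frequency and
    convert the measured probe voltage to magnetic field strength (A/m)."""
    af = np.interp(f_mhz, freqs_mhz, cal_factor)
    return v_measured / af

print(round(voltage_to_h(0.0045, 500.0), 3))
```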