Although diagnostic instability in first-episode psychosis (FEP) is of major concern, little is known about its determinants. This very long-term follow-up study aimed to examine the diagnostic stability of FEP diagnoses, the baseline predictors of diagnostic change and the timing of diagnostic change.
This was a longitudinal and naturalistic study of 243 subjects with FEP who were assessed at baseline and reassessed after a mean follow-up of 21 years. The diagnostic stability of DSM-5 psychotic disorders was examined using prospective and retrospective consistencies, logistic regression was used to establish the predictors of diagnostic change, and survival analysis was used to compare time to diagnostic change across diagnostic categories.
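The two consistency measures used here can be illustrated with a minimal sketch (the diagnoses below are invented and the function names are ours, not the authors'). Prospective consistency is the proportion of subjects with a given baseline diagnosis who retain it at follow-up; retrospective consistency is the proportion with a given follow-up diagnosis who already carried it at baseline:

```python
# Toy sketch of diagnostic-consistency measures (hypothetical data).

def prospective_consistency(pairs, dx):
    """Of subjects diagnosed `dx` at baseline, what fraction keep it at follow-up?"""
    baseline = [(b, f) for b, f in pairs if b == dx]
    return sum(1 for b, f in baseline if f == dx) / len(baseline)

def retrospective_consistency(pairs, dx):
    """Of subjects diagnosed `dx` at follow-up, what fraction had it at baseline?"""
    follow = [(b, f) for b, f in pairs if f == dx]
    return sum(1 for b, f in follow if b == dx) / len(follow)

# (baseline diagnosis, follow-up diagnosis) for five hypothetical subjects
pairs = [
    ("schizophrenia", "schizophrenia"),
    ("schizophrenia", "schizophrenia"),
    ("brief psychotic disorder", "schizophrenia"),
    ("bipolar disorder", "bipolar disorder"),
    ("brief psychotic disorder", "bipolar disorder"),
]

print(prospective_consistency(pairs, "schizophrenia"))              # 1.0
print(round(retrospective_consistency(pairs, "schizophrenia"), 2))  # 0.67
```

High prospective but lower retrospective consistency is exactly the pattern expected when an unstable category (here, brief psychotic disorder) drains into a stable one.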
The overall diagnostic stability was 47.7%. Schizophrenia and bipolar disorder were the most stable diagnoses, with other categories having low stability. Predictors of diagnostic change to schizophrenia included a family history of schizophrenia, obstetric complications, developmental delay, poor premorbid functioning in several domains, long duration of untreated continuous psychosis, spontaneous dyskinesia, lack of psychosocial stressors, longer duration of index admission, and poor early treatment response. Most of these variables also predicted diagnostic change to bipolar disorder, but in the opposite direction and with smaller effect sizes. There were no significant differences between specific diagnoses regarding time to diagnostic change. By the 10-year follow-up, around 80% of the diagnostic changes had already occurred.
FEP diagnoses other than schizophrenia or bipolar disorder should be considered as provisional. Considering baseline predictors of diagnostic change may help to enhance diagnostic accuracy and guide therapeutic interventions.
In Tanzania, there are high rates of suicidal thoughts and behavior among people living with HIV (PLWH), yet few instruments exist for effective screening and referral. To address this gap, we developed and validated Swahili translations of the Columbia Suicide Severity Rating Scale (C-SSRS) Screen Version and two accompanying scales assessing self-efficacy to avoid suicidal action and reasons for living. We administered a structured survey to 80 PLWH attending two HIV clinics in Moshi, Tanzania. Factor analysis of the items revealed four subscales: suicide intensity, self-efficacy to avoid suicide, fear and social concern about suicide, and family and spirituality deterrents to suicide. The area under the receiver operating characteristic curve showed that only suicide intensity and fear and social concern met the prespecified cutoff of ≥0.7 in accurately identifying patients with a plan and intent to act on suicidal thoughts. This study provides early evidence that brief screening of intensity of suicidality in the past month, as assessed by the C-SSRS Screen Version, is a strong, resource-efficient strategy for identifying suicide risk in the Tanzanian setting. Patients who report little fear of dying and low concern about social perceptions of suicide may also be at increased risk.
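The ≥0.7 cutoff refers to the area under the ROC curve, which has a direct probabilistic reading: the chance that a randomly chosen patient with plan and intent scores higher on the subscale than a randomly chosen patient without. A minimal rank-based sketch (the scores and labels below are invented, not study data):

```python
# AUC as a pairwise-comparison probability (ties count half). Hypothetical data.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical subscale scores; label 1 = plan and intent present
scores = [3, 5, 2, 4, 1, 5]
labels = [0, 1, 0, 1, 0, 1]

print(auc(scores, labels))          # 1.0: every positive outranks every negative
print(auc(scores, labels) >= 0.7)   # True: meets the prespecified cutoff
```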
Researchers have identified genetic and neural risk factors for externalizing behaviors. However, it has not yet been determined if genetic liability is conferred in part through associations with more proximal neurophysiological risk markers.
Participants from the Collaborative Study on the Genetics of Alcoholism, a large, family-based study of alcohol use disorders, were genotyped, and polygenic scores for externalizing (EXT PGS) were calculated. Associations with target P3 amplitude from a visual oddball task (P3) and broad endorsement of externalizing behaviors (indexed via self-report of alcohol and cannabis use and antisocial behavior) were assessed in participants of European (EA; N = 2851) and African ancestry (AA; N = 1402). Analyses were also stratified by age (adolescents, aged 12–17, and young adults, aged 18–32).
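At its core, a polygenic score is a weighted sum of effect-allele counts, with per-variant weights taken from an external GWAS. The sketch below illustrates only the arithmetic; the variant names and weights are invented, not the EXT GWAS summary statistics, and real pipelines add steps such as LD-based variant pruning:

```python
# Polygenic-score arithmetic sketch (invented variants and weights).

weights = {"rsA": 0.12, "rsB": -0.05, "rsC": 0.08}  # hypothetical GWAS betas

def polygenic_score(genotype, weights):
    # genotype maps variant -> count of effect alleles carried (0, 1 or 2)
    return sum(beta * genotype.get(v, 0) for v, beta in weights.items())

person = {"rsA": 2, "rsB": 1, "rsC": 0}
print(round(polygenic_score(person, weights), 2))  # 0.19 = 0.12*2 - 0.05*1
```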
The EXT PGS was significantly associated with higher levels of externalizing behaviors among EA adolescents and young adults as well as AA young adults. P3 was inversely associated with externalizing behaviors among EA young adults. EXT PGS was not significantly associated with P3 amplitude and therefore, there was no evidence that P3 amplitude indirectly accounted for the association between EXT PGS and externalizing behaviors.
Both the EXT PGS and P3 amplitude were significantly associated with externalizing behaviors among EA young adults. However, these associations with externalizing behaviors appear to be independent of each other, suggesting that they may index different facets of externalizing.
The earliest monumentality in Western Europe is associated with megalithic structures, but where did the builders of these monuments live? Here, the authors focus on west-central France, one of the earliest centres of megalithic building in Atlantic Europe, commencing in the mid fifth millennium BC. They report on an enclosure at Le Peu (Charente), dated to the Middle Neolithic (c. 4400 BC), and defined by a ditch with two ‘crab claw’ entrances and a double timber palisade flanked by two timber structures—possibly defensive bastions. Inside, timber buildings—currently the earliest known in the region—were possibly home to the builders of the nearby Tusson long mounds.
The term “blue justice” was coined in 2018 during the 3rd World Small-Scale Fisheries Congress. Since then, academic engagement with the concept has grown rapidly. This article reviews 5 years of blue justice scholarship and synthesizes some of the key perspectives, developments, and gaps. We then connect this literature to wider relevant debates by reviewing two key areas of research – first on blue injustices and second on grassroots resistance to these injustices. Much of the early scholarship on blue justice focused on injustices experienced by small-scale fishers in the context of the blue economy. In contrast, more recent writing and the empirical cases reviewed here suggest that intersecting forms of oppression render certain coastal individuals and groups vulnerable to blue injustices. These developments signal an expansion of the blue justice literature to a broader set of affected groups and underlying causes of injustice. Our review also suggests that while grassroots resistance efforts led by coastal communities have successfully stopped unfair exposure to environmental harms, preserved their livelihoods and ways of life, defended their culture and customary rights, renegotiated power distributions, and proposed alternative futures, these efforts have been underemphasized in the blue justice scholarship and in the marine and coastal literature more broadly. We conclude with some suggestions for understanding and supporting blue justice now and into the future.
Helminth species of Neotropical bats are poorly known. In Mexico, few studies have been conducted on helminths of bats, especially in regions such as the Yucatan Peninsula where Chiroptera is the mammalian order with the greatest number of species. In this study, we characterized morphologically and molecularly the helminth species of bats and explored their infection levels and parasite–host interactions in the Yucatan Peninsula, Mexico. One hundred and sixty-three bats (representing 21 species) were captured between 2017 and 2022 in 15 sites throughout the Yucatan Peninsula. Conventional morphological techniques and molecular tools were used with the 28S gene to identify the collected helminths. Host–parasite network analyses were carried out to explore interactions by focusing on the level of host species. Helminths were found in 44 (26.9%) bats of 12 species. Twenty helminth taxa were recorded (7 trematodes, 3 cestodes and 10 nematodes), including 4 new host records for the Americas. Prevalence and mean intensity of infection values ranged from 7.1 to 100% and from 1 to 56, respectively. Molecular analyses confirmed the identity of some helminths at species and genus levels; however, some sequences did not correspond to any of the species available on GenBank. The parasite–host network suggests that most of the helminths recorded in bats were host-specific. The highest helminth richness was found in insectivorous bats. This study increases our knowledge of helminths parasitizing Neotropical bats, adding new records and nucleotide sequences.
Over-flexibility in the definition of Friston blankets obscures a key distinction between observational and interventional inference. The latter requires that cognizers form not just a causal representation of the world but also of their own boundary and relationship with it, in order to diagnose the consequences of their actions. We suggest this locates the blanket in the eye of the beholder.
Only a limited number of patients with major depressive disorder (MDD) respond to a first course of antidepressant medication (ADM). We investigated the feasibility of creating a baseline model to determine which patients beginning ADM treatment in the US Veterans Health Administration (VHA) would be among the responders.
A 2018–2020 national sample of n = 660 VHA patients receiving ADM treatment for MDD completed an extensive baseline self-report assessment near the beginning of treatment and a 3-month self-report follow-up assessment. Using baseline self-report data along with administrative and geospatial data, an ensemble machine learning method was used to develop a model for 3-month treatment response defined by the Quick Inventory of Depression Symptomatology Self-Report and a modified Sheehan Disability Scale. The model was developed in a 70% training sample and tested in the remaining 30% test sample.
In total, 35.7% of patients responded to treatment. The prediction model had an area under the ROC curve (s.e.) of 0.66 (0.04) in the test sample. A strong gradient in probability (s.e.) of treatment response was found across three subsamples of the test sample using training sample thresholds for high [45.6% (5.5)], intermediate [34.5% (7.6)], and low [11.1% (4.9)] probabilities of response. Baseline symptom severity, comorbidity, treatment characteristics (expectations, history, and aspects of current treatment), and protective/resilience factors were the most important predictors.
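The gradient reported above can be reproduced schematically: tercile cut-points are derived from the training sample's predicted probabilities, then applied to test-sample predictions, and observed response rates are compared across the resulting high/intermediate/low groups. Every number below is hypothetical:

```python
# Sketch of the training-threshold gradient check (hypothetical probabilities).

def tercile_thresholds(train_probs):
    s = sorted(train_probs)
    n = len(s)
    return s[n // 3], s[2 * n // 3]  # approximate 33rd/67th percentile cuts

def observed_rates(test_probs, responded, lo, hi):
    groups = {"low": [], "intermediate": [], "high": []}
    for p, y in zip(test_probs, responded):
        key = "low" if p < lo else "high" if p >= hi else "intermediate"
        groups[key].append(y)
    return {k: sum(v) / len(v) for k, v in groups.items() if v}

lo, hi = tercile_thresholds([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
rates = observed_rates([0.15, 0.35, 0.70, 0.65, 0.10, 0.45],
                       [0, 0, 1, 1, 0, 1], lo, hi)
print(rates)  # a monotone gradient: low 0.0, intermediate 0.5, high 1.0
```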
Although these results are promising, parallel models to predict response to alternative treatments based on data collected before initiating treatment would be needed for such models to help guide treatment selection.
We report on the generation and delivery of 10.2 PW peak power laser pulses, using the High Power Laser System at the Extreme Laser Infrastructure – Nuclear Physics facility. In this work we demonstrate for the first time, to the best of our knowledge, the compression and propagation of full energy, full aperture, laser pulses that reach a power level of more than 10 PW.
To determine whether the DCTclock can detect differences across groups of patients seen in the memory clinic for suspected dementia.
Patients (n = 123) were classified into the following groups: cognitively normal (CN), subtle cognitive impairment (SbCI), amnestic cognitive impairment (aMCI), and mixed/dysexecutive cognitive impairment (mx/dysMCI). Nine outcome variables included a combined command/copy total score and four command and four copy indices measuring drawing efficiency, simple/complex motor operations, information processing speed, and spatial reasoning.
Total combined command/copy score distinguished between groups in all comparisons with medium to large effects. The mx/dysMCI group had the lowest total combined command/copy scores out of all groups. The mx/dysMCI group scored lower than the CN group on all command indices (p < .050, all analyses); and lower than the SbCI group on drawing efficiency (p = .011). The aMCI group scored lower than the CN group on spatial reasoning (p = .019). Smaller effect sizes were obtained for the four copy indices.
These results suggest that DCTclock command/copy parameters can dissociate CN, SbCI, and MCI subtypes. The larger effect sizes for command clock indices suggest these metrics are sensitive in detecting early cognitive decline. Additional research with a larger sample is warranted.
Fewer than half of patients with major depressive disorder (MDD) respond to psychotherapy. Pre-emptively informing patients of their likelihood of responding could be useful as part of a patient-centered treatment decision-support plan.
This prospective observational study examined a national sample of 807 patients beginning psychotherapy for MDD at the Veterans Health Administration. Patients completed a self-report survey at baseline and 3-months follow-up (data collected 2018–2020). We developed a machine learning (ML) model to predict psychotherapy response at 3 months using baseline survey, administrative, and geospatial variables in a 70% training sample. Model performance was then evaluated in the 30% test sample.
32.0% of patients responded to treatment after 3 months. The best ML model had an AUC (SE) of 0.652 (0.038) in the test sample. Among the one-third of patients ranked by the model as most likely to respond, 50.0% in the test sample responded to psychotherapy. In comparison, among the remaining two-thirds of patients, <25% responded to psychotherapy. The model selected 43 predictors, of which nearly all were self-report variables.
Patients with MDD could pre-emptively be informed of their likelihood of responding to psychotherapy using a prediction tool based on self-report data. This tool could meaningfully help patients and providers in shared decision-making, although parallel information about the likelihood of responding to alternative treatments would be needed to inform decision-making across multiple treatments.
From 2014 to 2020, we compiled radiocarbon ages from the lower 48 states, creating a database of more than 100,000 archaeological, geological, and paleontological ages that will be freely available to researchers through the Canadian Archaeological Radiocarbon Database. Here, we discuss the process used to compile ages, general characteristics of the database, and lessons learned from this exercise in “big data” compilation.
Telecommunications data are being explored by many countries as a new source of data that can be incorporated into their national statistical systems. In particular, “mobile positioning data” are increasingly being used to study population movements and population distributions. However, the legal, ethical, and technical complexities of working with this type of data often pose many barriers, which can prevent the data from being used at the times when it is most urgently needed. We demonstrate how having a robust public–private partnership framework, a privacy-preserving technical setup, and a communications strategy already in place, prior to an emergency, can enable governments to harness the advantages of telecommunications data at the times when it is most valuable. However, even once these foundations are in place, the challenges of competing priorities, managing expectations, and maintaining communication with data consumers during a pandemic mean that the potential of the data is not automatically translated into direct impact. This highlights the importance of sensitisation exercises, targeted at potential data users, to make clear the potential and limitations of the data, as well as the importance of being able to maintain direct communication with data users. The views expressed in this work belong solely to the authors and should not be interpreted as the views of their institutions.
To determine how well machine learning algorithms can classify mild cognitive impairment (MCI) subtypes and Alzheimer’s disease (AD) using features obtained from the digital Clock Drawing Test (dCDT).
dCDT protocols were administered to 163 patients diagnosed with AD (n = 59), amnestic MCI (aMCI; n = 26), combined mixed/dysexecutive MCI (mixed/dys MCI; n = 43), and patients without MCI (non-MCI; n = 35) using standard clock drawing command and copy procedures, that is, draw the face of the clock, put in all of the numbers, and set the hands for “10 after 11.” A digital pen and custom software recorded patients' drawings. Three hundred and fifty features were evaluated for maximum information/minimum redundancy. The best subset of features was used to train classification models to determine diagnostic accuracy.
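The maximum-information/minimum-redundancy criterion is commonly implemented as a greedy search: each step selects the feature with the highest mutual information with the diagnostic label, penalized by its average mutual information with the features already chosen. The sketch below uses invented binary features (not actual dCDT variables) and our own function names; it illustrates the general technique, not the authors' implementation:

```python
# Greedy max-relevance / min-redundancy feature selection sketch.
from collections import Counter
from math import log2

def mutual_info(x, y):
    # Mutual information (in bits) between two discrete sequences.
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mrmr(features, label, k):
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        def score(name):
            relevance = mutual_info(features[name], label)
            redundancy = (sum(mutual_info(features[name], features[s])
                              for s in selected) / len(selected)
                          if selected else 0.0)
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical binary features: f2 duplicates f1 (pure redundancy),
# while f3 carries complementary information about the label.
f1 = [0, 0, 1, 1, 0, 0, 1, 1]
f3 = [0, 1, 0, 1, 0, 1, 0, 1]
label = [a | b for a, b in zip(f1, f3)]
features = {"f1": f1, "f2": list(f1), "f3": f3}

print(mrmr(features, label, 2))  # the redundant copy f2 is never chosen
```

Because f2 is an exact copy of f1, its redundancy penalty cancels its relevance once either copy is selected, so the complementary feature f3 is picked instead.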
A neural network employing information-theoretic feature selection approaches achieved the best 2-group classification results, with 10-fold cross-validation accuracies at or above 83%: AD versus non-MCI = 91.42%; AD versus aMCI = 91.49%; AD versus mixed/dys MCI = 84.05%; aMCI versus mixed/dys MCI = 84.11%; aMCI versus non-MCI = 83.44%; and mixed/dys MCI versus non-MCI = 85.42%. A follow-up two-group non-MCI versus all-MCI analysis yielded comparable results (83.69%). Two-group classification was achieved with 25–125 dCDT features, depending on the groups compared. Three- and four-group analyses yielded lower but still promising levels of classification accuracy.
Early identification of emergent neurodegenerative illness is critical for better disease management. Applying machine learning to standard neuropsychological tests promises to be an effective first-line screening method for classifying non-MCI and MCI subtypes.
Electroconvulsive therapy (ECT) is an effective treatment for several psychiatric disorders. However, many questions remain about its mechanisms of action and the factors that improve its results. One frequent question is whether the anesthetic drug affects seizure duration and blood pressure.
Our principal aim is to describe the anesthetic drugs used in patients treated with ECT at Hospital del Mar. We also aim to determine whether seizure duration and systolic arterial pressure differ between the anesthetic drugs used (propofol, thiopental and etomidate).
Material and methods
We used the ECT database of Hospital del Mar. It contains patient-level information such as age, principal diagnosis, medical history and pharmacological treatment at the start of ECT, as well as session-level information: arterial pressure at baseline and at 2 and 5 minutes, the anesthetic drug used, and seizure duration.
We analysed the general characteristics of the population and the differences in seizure duration and arterial pressure between the three anesthetic drugs.
Propofol was used in 1140 sessions, thiopental in 61 sessions and etomidate in 54 sessions. The difference in mean seizure duration between propofol and etomidate was statistically significant (p < 0.05). Etomidate and thiopental produced greater increases in arterial pressure than propofol.
Further research is needed on the factors that improve seizure duration and minimize adverse effects on blood pressure.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Large prospective observational studies have cast doubt on the common assumption that endovascular thrombectomy (EVT) is superior to intravenous thrombolysis for patients with acute basilar artery occlusion (BAO). The purpose of this study was to retrospectively review our experience for patients with BAO undergoing EVT with modern endovascular devices.
All consecutive patients undergoing EVT with either a second-generation stent retriever or direct aspiration thrombectomy for BAO at our regional stroke center from January 1, 2013 to March 1, 2019 were included. The primary outcome measure was functional outcome at 1 month using the modified Rankin Scale (mRS) score. Multivariable logistic regression was used to assess the association between patient characteristics and dichotomized mRS.
A total of 43 consecutive patients underwent EVT for BAO. The average age was 67 years, and 61% were male. Overall, 37% (16/43) of patients achieved a good functional outcome. Successful reperfusion was achieved in 72% (31/43) of cases. The median (interquartile range) stroke onset to treatment time was 420 (270–639) minutes (7 hours) for all patients. The procedure-related complication rate was 9% (4/43). On multivariate analysis, the posterior circulation Alberta Stroke Program Early Computed Tomography Score and the Basilar Artery on Computed Tomography Angiography score were associated with improved functional outcome.
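A note on the endpoint: "good functional outcome" on the mRS is conventionally dichotomized as scores 0–2 (functional independence) versus 3–6. The abstract does not state the exact cutoff used, so the sketch below should be read as an assumption, not the study's definition:

```python
# mRS dichotomization sketch (hypothetical scores; the 0-2 cutoff is the
# common convention, assumed here rather than stated by the study).

def good_outcome(mrs):
    return mrs <= 2  # 0-2 = functional independence; 3-6 = poor outcome

scores = [0, 2, 3, 4, 6, 1, 5, 2, 4, 3]  # hypothetical 1-month mRS values
rate = sum(good_outcome(s) for s in scores) / len(scores)
print(f"good functional outcome: {rate:.0%}")  # 40% in this toy sample
```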
EVT appears to be safe and feasible in patients with BAO. Our finding that time to treatment and successful reperfusion were not associated with improved outcome is likely due to including patients with established infarcts. Given the variability of collaterals in the posterior circulation, the paradigm of utilizing a tissue window may assist in patient selection for EVT. Magnetic resonance imaging may be a reasonable option to determine the extent of ischemia in certain situations.
Studies suggest that alcohol consumption and alcohol use disorders have distinct genetic backgrounds.
We examined whether polygenic risk scores (PRS) for consumption and problem subscales of the Alcohol Use Disorders Identification Test (AUDIT-C, AUDIT-P) in the UK Biobank (UKB; N = 121 630) correlate with alcohol outcomes in four independent samples: an ascertained cohort, the Collaborative Study on the Genetics of Alcoholism (COGA; N = 6850), and population-based cohorts: Avon Longitudinal Study of Parents and Children (ALSPAC; N = 5911), Generation Scotland (GS; N = 17 461), and an independent subset of UKB (N = 245 947). Regression models and survival analyses tested whether the PRS were associated with the alcohol-related outcomes.
In COGA, AUDIT-P PRS was associated with alcohol dependence, AUD symptom count, maximum drinks (R2 = 0.47–0.68%, p = 2.0 × 10−8–1.0 × 10−10), and increased likelihood of onset of alcohol dependence (hazard ratio = 1.15, p = 4.7 × 10−8); AUDIT-C PRS was not an independent predictor of any phenotype. In ALSPAC, the AUDIT-C PRS was associated with alcohol dependence (R2 = 0.96%, p = 4.8 × 10−6). In GS, AUDIT-C PRS was a better predictor of weekly alcohol use (R2 = 0.27%, p = 5.5 × 10−11), while AUDIT-P PRS was more associated with problem drinking (R2 = 0.40%, p = 9.0 × 10−7). Lastly, AUDIT-P PRS was associated with ICD-based alcohol-related disorders in the UKB subset (R2 = 0.18%, p < 2.0 × 10−16).
AUDIT-P PRS was associated with a range of alcohol-related phenotypes across population-based and ascertained cohorts, while AUDIT-C PRS showed less utility in the ascertained cohort. We show that AUDIT-P is genetically correlated with both use and misuse and demonstrate the influence of ascertainment schemes on PRS analyses.
Neuroanatomical abnormalities in first-episode psychosis (FEP) tend to be subtle and widespread. The vast majority of previous studies have used small samples, and therefore may have been underpowered. In addition, most studies have examined participants at a single research site, and therefore the results may be specific to the local sample investigated. Consequently, the findings reported in the existing literature are highly heterogeneous. This study aimed to overcome these issues by testing for neuroanatomical abnormalities in individuals with FEP that are expressed consistently across several independent samples.
Structural magnetic resonance imaging data were acquired from a total of 572 FEP patients and 502 age- and gender-comparable healthy controls at five sites. Voxel-based morphometry was used to investigate differences in grey matter volume (GMV) between the two groups. Statistical inferences were made at p < 0.05 after family-wise error correction for multiple comparisons.
FEP showed a widespread pattern of decreased GMV in fronto-temporal, insular and occipital regions bilaterally; these decreases were not dependent on anti-psychotic medication. The region with the most pronounced decrease – gyrus rectus – was negatively correlated with the severity of positive and negative symptoms.
This study identified a consistent pattern of fronto-temporal, insular and occipital abnormalities in five independent FEP samples; furthermore, the extent of these alterations is dependent on the severity of symptoms and duration of illness. This provides evidence for reliable neuroanatomical alterations in FEP, expressed above and beyond site-related differences in anti-psychotic medication, scanning parameters and recruitment criteria.