Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to—and perhaps even harm—patients.
Depression in dementia is common and disabling, and causes significant distress to patients and carers. Despite widespread use of antidepressants for depression in dementia, there is no evidence of therapeutic efficacy, and their use is potentially harmful in this patient group. Depression in dementia has poor outcomes and effective treatments are urgently needed. Understanding why antidepressants are ineffective in depression in dementia could provide insight into their mechanism of action and aid identification of new therapeutic targets. In this review we discuss why depression in dementia may be a distinct entity, current theories of how antidepressants work and how these mechanisms of action may be affected by disease processes in dementia. We also consider why clinicians continue to prescribe antidepressants in dementia, and novel approaches to understand and identify effective treatments for patients living with depression and dementia.
The New York Bight is undergoing rapid anthropogenic change amidst an apparent increase in baleen whale sightings. Though survey efforts have increased in recent years, the lack of published knowledge on baleen whale occurrence prior to these efforts impedes effective assessments of distributional or behavioural shifts due to increasing human activities. Here we synthesize opportunistic sightings of baleen whales from 1998–2017, which represent the majority of sightings data prior to recent survey efforts, and which are largely unpublished. Humpback and fin whales were the most commonly sighted species, followed by North Atlantic right whales and North Atlantic minke whales. Important behaviours such as feeding and nursing were observed, and most species (including North Atlantic right whales) were seen during all seasons. Baleen whales overlapped with multiple anthropogenic use areas, and all species, most notably North Atlantic right whales, were sighted outside the spatial and temporal bounds of the Seasonal Management Areas for North Atlantic right whales. These opportunistic data are vital for providing a baseline and context of baleen whales in the New York Bight prior to broad-scale survey efforts, and facilitate interpretation of current and future observations and trends, which can more accurately inform effective management and mitigation efforts.
Ice streams are warmed by shear strain, both vertical shear near the bed and lateral shear at the margins. Warm ice deforms more easily, establishing a positive feedback loop in an ice stream where fast flow leads to warm ice and then to even faster flow. Here, we use radar attenuation measurements to show that the Siple Coast ice streams are colder than previously thought, which we hypothesize is due to along-flow advection of cold ice from upstream. We interpret the attenuation results within the context of previous ice-temperature measurements from nearby sites where hot-water boreholes were drilled. These in-situ temperatures are notably colder than model predictions, both in the ice streams and in an ice-stream shear margin. We then model ice temperature using a 1.5-dimensional numerical model which includes a parameterization for along-flow advection. Compared to analytical solutions, we find depth-averaged temperatures that are colder by 0.7°C in the Bindschadler Ice Stream, 2.7°C in the Kamb Ice Stream and 6.2–8.2°C in the Dragon Shear Margin of Whillans Ice Stream, closer to the borehole measurements at all locations. Modelled cooling corresponds to shear-margin thermal strengthening by 3–3.5 times compared to the warm-ice case, which must be compensated by some other weakening mechanism such as material damage or ice-crystal fabric anisotropy.
To examine the costs and cost-effectiveness of mirtazapine compared to placebo over a 12-week follow-up.
Economic evaluation in a double-blind randomized controlled trial of mirtazapine vs. placebo.
Community settings and care homes in 26 UK centers.
People with probable or possible Alzheimer’s disease and agitation.
The primary outcome was the incremental cost of participants’ health and social care per 6-point difference in CMAI score at 12 weeks. Secondary cost-utility analyses examined participants’ and unpaid carers’ gain in quality-adjusted life years (derived from EQ-5D-5L, DEMQOL-Proxy-U, and DEMQOL-U) from the health and social care and societal perspectives.
One hundred and two participants were allocated to each group; 81 mirtazapine and 90 placebo participants completed a 12-week assessment (87 and 95, respectively, completed a 6-week assessment). Mirtazapine and placebo groups did not differ on mean CMAI scores or health and social care costs over the study period, before or after adjustment for center and living arrangement (independent living/care home). On the primary outcome, neither mirtazapine nor placebo could be considered a cost-effective strategy with a high level of confidence. Groups did not differ in terms of participant self- or proxy-rated or carer self-rated quality of life scores, health and social care or societal costs, before or after adjustment.
On cost-effectiveness grounds, the use of mirtazapine cannot be recommended for agitated behaviors in people living with dementia. Effective and cost-effective medications for agitation in dementia remain to be identified in cases where non-pharmacological strategies for managing agitation have been unsuccessful.
Only a limited number of patients with major depressive disorder (MDD) respond to a first course of antidepressant medication (ADM). We investigated the feasibility of creating a baseline model to identify likely responders among patients beginning ADM treatment in the US Veterans Health Administration (VHA).
A 2018–2020 national sample of n = 660 VHA patients receiving ADM treatment for MDD completed an extensive baseline self-report assessment near the beginning of treatment and a 3-month self-report follow-up assessment. Using baseline self-report data along with administrative and geospatial data, an ensemble machine learning method was used to develop a model for 3-month treatment response defined by the Quick Inventory of Depression Symptomatology Self-Report and a modified Sheehan Disability Scale. The model was developed in a 70% training sample and tested in the remaining 30% test sample.
In total, 35.7% of patients responded to treatment. The prediction model had an area under the ROC curve (s.e.) of 0.66 (0.04) in the test sample. A strong gradient in probability (s.e.) of treatment response was found across three subsamples of the test sample using training sample thresholds for high [45.6% (5.5)], intermediate [34.5% (7.6)], and low [11.1% (4.9)] probabilities of response. Baseline symptom severity, comorbidity, treatment characteristics (expectations, history, and aspects of current treatment), and protective/resilience factors were the most important predictors.
Although these results are promising, parallel models to predict response to alternative treatments based on data collected before initiating treatment would be needed for such models to help guide treatment selection.
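The headline metric in the study above, the area under the ROC curve, can be computed directly from predicted scores using the rank-based (Mann–Whitney) identity, without plotting a curve. The sketch below is illustrative only; the labels and scores are synthetic placeholders, not VHA data.

```python
def auc(y_true, y_score):
    """Area under the ROC curve via the rank-based (Mann-Whitney U) identity:
    the probability that a randomly chosen responder is scored higher than a
    randomly chosen non-responder, counting ties as half."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one case in each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Synthetic example: a model that ranks all responders above non-responders
print(auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

On this reading, an AUC of 0.66, as in the test sample above, means the model ranks a randomly chosen responder above a randomly chosen non-responder about two-thirds of the time.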
To investigate factors associated with suicidal ideation (SI) around the time of dementia diagnosis. We hypothesised relatively preserved cognition, co-occurring physical and psychiatric disorders, functional impairments, and dementia diagnosis subtype would be associated with a higher risk of SI.
Cross-sectional study using routinely collected electronic mental healthcare records.
National Health Service secondary mental healthcare services in South London, UK, serving a population of over 1.36 million residents.
Patients who received a diagnosis of dementia (Alzheimer’s, vascular, mixed Alzheimer’s/vascular, or dementia with Lewy bodies) between 1 Nov 2007–31 Oct 2021: 18,252 people were identified during the observation period.
A natural language processing algorithm was used to identify clinician-recorded SI around the time of dementia diagnosis. Sociodemographic and clinical characteristics were also measured around the time of diagnosis. We compared people diagnosed with non-Alzheimer’s dementia to those with Alzheimer’s and used statistical models to adjust for putative confounders.
15.1% of patients had recorded SI, which was more common in dementia with Lewy bodies compared to other dementia diagnoses studied. After adjusting for sociodemographic and clinical factors, SI was more frequent in those with depression and dementia with Lewy bodies and less common in those with impaired activities of daily living and in vascular dementia. Agitated behaviour and hallucinations were not associated with SI in the final model.
Our findings highlight the importance of identifying and treating depressive symptoms in people with dementia and the need for further research into under-researched dementia subtypes.
The rate of normal birth outcomes (i.e. full-term births without intervention) for women with severe mental illness (SMI – psychotic and bipolar disorders) is not known. We examined rates of birth without intervention (spontaneous labour onset, spontaneous vaginal delivery without instruments, no episiotomy and no indication of pre- or post-delivery anaesthesia) in women with SMI (584 pregnancies) compared with a control population (70 942 pregnancies). Outcome ratios were calculated standardising for age. Women with SMI were less likely to have a birth without intervention (29.5%) relative to the control population (36.8%) (standardised outcome ratio 0.74, 95% CI 0.63–0.87).
Fewer than half of patients with major depressive disorder (MDD) respond to psychotherapy. Pre-emptively informing patients of their likelihood of responding could be useful as part of a patient-centered treatment decision-support plan.
This prospective observational study examined a national sample of 807 patients beginning psychotherapy for MDD at the Veterans Health Administration. Patients completed a self-report survey at baseline and 3-months follow-up (data collected 2018–2020). We developed a machine learning (ML) model to predict psychotherapy response at 3 months using baseline survey, administrative, and geospatial variables in a 70% training sample. Model performance was then evaluated in the 30% test sample.
32.0% of patients responded to treatment after 3 months. The best ML model had an AUC (SE) of 0.652 (0.038) in the test sample. Among the one-third of patients ranked by the model as most likely to respond, 50.0% in the test sample responded to psychotherapy. In comparison, among the remaining two-thirds of patients, <25% responded to psychotherapy. The model selected 43 predictors, of which nearly all were self-report variables.
Patients with MDD could pre-emptively be informed of their likelihood of responding to psychotherapy using a prediction tool based on self-report data. This tool could meaningfully help patients and providers in shared decision-making, although parallel information about the likelihood of responding to alternative treatments would be needed to inform decision-making across multiple treatments.
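The ranked-thirds comparison above (50% response in the model's top third v. under 25% in the remainder) amounts to sorting patients by predicted probability and splitting the list. A minimal sketch, with invented probabilities and outcomes rather than study data:

```python
def response_rate_by_ranked_third(probs, responded):
    """Sort patients by predicted response probability (descending), then
    compare observed response rates in the top third v. the remaining two-thirds."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    cut = len(order) // 3
    top, rest = order[:cut], order[cut:]

    def rate(idx):
        return sum(responded[i] for i in idx) / len(idx)

    return rate(top), rate(rest)

# Invented data: 6 patients, so 2 land in the top third
top_rate, rest_rate = response_rate_by_ranked_third(
    [0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 0, 1, 0]
)
```

In this toy example the top third responds at 100% and the remainder at 25%; the study's real gradient (50% v. <25%) is what would drive shared decision-making.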
To describe the epidemiology of patients with nonintestinal carbapenem-resistant Enterobacterales (CRE) colonization and to compare clinical outcomes of these patients to those with CRE infection.
A secondary analysis of Consortium on Resistance Against Carbapenems in Klebsiella and other Enterobacteriaceae 2 (CRACKLE-2), a prospective observational cohort.
A total of 49 US short-term acute-care hospitals.
Patients hospitalized with CRE isolated from clinical cultures, April 30, 2016, through August 31, 2017.
We described characteristics of patients in CRACKLE-2 with nonintestinal CRE colonization and assessed the impact of site of colonization on clinical outcomes. We then compared outcomes of patients defined as having nonintestinal CRE colonization to all those defined as having infection. The primary outcome was a desirability of outcome ranking (DOOR) at 30 days. Secondary outcomes were 30-day mortality and 90-day readmission.
Of 547 patients with nonintestinal CRE colonization, 275 (50%) were from the urinary tract, 201 (37%) were from the respiratory tract, and 71 (13%) were from a wound. Patients with urinary tract colonization were more likely to have a more desirable clinical outcome at 30 days than those with respiratory tract colonization, with a DOOR probability of better outcome of 61% (95% confidence interval [CI], 53%–71%). When compared to 255 patients with CRE infection, patients with CRE colonization had a similar overall clinical outcome, as well as 30-day mortality and 90-day readmission rates when analyzed in aggregate or by culture site. Sensitivity analyses demonstrated similar results using different definitions of infection.
Patients with nonintestinal CRE colonization had outcomes similar to those with CRE infection. Clinical outcomes may be influenced more by culture site than classification as “colonized” or “infected.”
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Individuals present in lower Manhattan during the 9/11 World Trade Center (WTC) disaster suffered from significant physical and psychological trauma. Studies of longitudinal psychological distress among those exposed to trauma have been limited to relatively short durations of follow-up among smaller samples.
The current study longitudinally assessed heterogeneity in trajectories of psychological distress among WTC Health Registry enrollees – a prospective cohort health study of responders, students, employees, passersby, and residents in the affected area (N = 30 839) – throughout a 15-year period following the WTC disaster. Rescue/recovery status and exposure to traumatic events of 9/11, as well as sociodemographic factors and health status, were assessed as risk factors for trajectories of psychological distress.
Five psychological distress trajectory groups were found: none-stable, low-stable, moderate-increasing, moderate-decreasing, and high-stable. Of the study sample, 78.2% were classified as belonging to the none-stable or low-stable groups. Female sex, younger age at the time of 9/11, and lower education and income were associated with a higher probability of being in a greater distress trajectory group relative to the none-stable group. Greater exposure to traumatic events of 9/11 was associated with a higher probability of a greater distress trajectory, and community members (passersby, residents, and employees) were more likely to be in greater distress trajectory groups – especially in the moderate-increasing [odds ratio (OR) 2.31 (1.97–2.72)] and high-stable groups [OR 2.37 (1.81–3.09)] – compared to the none-stable group.
The current study illustrated the heterogeneity in psychological distress trajectories following the 9/11 WTC disaster, and identified potential avenues for intervention in future disasters.
ABSTRACT IMPACT: The potential to use vaginal pH as a low-cost, non-invasive diagnostic test at the point of CIN2 diagnosis to predict worsening of cervical disease. OBJECTIVES/GOALS: We previously reported that persistence/progression of cervical intraepithelial neoplasia-2 (CIN2) was uncommon in women living with HIV (WLH) from the Women’s Interagency HIV Study (WIHS, now MWCCS). Here we examined additional factors that may influence CIN2 natural history. METHODS/STUDY POPULATION: A total of 337 samples from 94 WLH with a confirmed CIN2 diagnosis were obtained from the MWCCS. 42 cervicovaginal HPV types and 34 cervicovaginal cytokines/chemokines were measured at CIN2 diagnosis (94 samples) and 6–12 months prior to CIN2 diagnosis (79 samples). Covariates, including CD4 count and vaginal pH, were abstracted from core MWCCS visits. Logistic regression models were used to explore CIN2 regression (CIN1, normal) vs. persistence/progression (CIN2, CIN3). Log-rank tests, the Kaplan–Meier method, and Cox regression modeling were used to determine CIN2 regression rates. RESULTS/ANTICIPATED RESULTS: The most prevalent HPV types were HPV54 (21.6%) and HPV53 (21.3%). 33 women (35.1%) had a subsequent CIN2/CIN3 diagnosis (median 12.5 years follow-up). Each additional hr-HPV type detected at the pre-CIN2 visit was associated with increased odds of CIN2 persistence/progression (OR 2.27, 95% CI 1.15, 4.50). Higher vaginal pH (aOR 2.27, 95% CI 1.15, 4.50) and bacterial vaginosis (aOR 5.08, 95% CI 1.30, 19.94) at the CIN2 diagnosis visit were associated with higher odds of CIN2 persistence/progression. Vaginal pH >4.5 at CIN2 diagnosis was also associated with unadjusted time to CIN2 persistence/progression (log-rank p=0.002) and a higher rate of CIN2 persistence/progression (adjusted hazard ratio [aHR] 3.37, 95% CI 1.26, 8.99). Cervicovaginal cytokine/chemokine levels were not associated with CIN2 persistence/progression.
DISCUSSION/SIGNIFICANCE OF FINDINGS: We found relatively low prevalence of HPV16/18 in this cohort. Elevated vaginal pH at the time of CIN2 diagnosis may be a useful indicator of CIN2 persistence/progression and the rate of persistence/progression.
Understanding place-based contributors to health requires geographically and culturally diverse study populations, but sharing location data is a significant challenge to multisite studies. Here, we describe a standardized and reproducible method to perform geospatial analyses for multisite studies. Using census tract-level information, we created software for geocoding and geospatial data linkage that was distributed to a consortium of birth cohorts located throughout the USA. Individual sites performed geospatial linkages and returned tract-level information for 8810 children to a central site for analyses. Our generalizable approach demonstrates the feasibility of geospatial analyses across study sites to promote collaborative translational research.
Recently, artificial intelligence-powered devices have been put forward as potentially powerful tools for the improvement of mental healthcare. An important question is how these devices impact the physician–patient interaction.
Aifred is an artificial intelligence-powered clinical decision support system (CDSS) for the treatment of major depression. Here, we explore the use of a simulation centre environment in evaluating the usability of Aifred, particularly its impact on the physician–patient interaction.
Twenty psychiatry and family medicine attending staff and residents were recruited to complete a 2.5-h study at a clinical interaction simulation centre with standardised patients. Each physician had the option of using the CDSS to inform their treatment choice in three 10-min clinical scenarios with standardised patients portraying mild, moderate and severe episodes of major depression. Feasibility and acceptability data were collected through self-report questionnaires, scenario observations, interviews and standardised patient feedback.
All 20 participants completed the study. Initial results indicate that the tool was acceptable to clinicians and feasible for use during clinical encounters. Clinicians indicated a willingness to use the tool in real clinical practice, a significant degree of trust in the system's predictions to assist with treatment selection, and reported that the tool helped increase patient understanding of and trust in treatment. The simulation environment allowed for the evaluation of the tool's impact on the physician–patient interaction.
The simulation centre allowed for direct observations of clinician use and impact of the tool on the clinician–patient interaction before clinical studies. It may therefore offer a useful and important environment in the early testing of new technological tools. The present results will inform further tool development and clinician training materials.
The majority of psychological treatment research is dedicated to investigating the effectiveness of cognitive behavioural therapy (CBT) across different conditions, populations and contexts. We aimed to summarise the current systematic review evidence and evaluate the consistency of CBT's effect across different conditions. We included reviews of CBT randomised controlled trials in any population, condition, format or context, with any type of comparator, published in English. We searched DARE, Cochrane, MEDLINE, EMBASE, PsycINFO, CINAHL, CDAS, and OpenGrey between 1992 and January 2019. Reviews were quality assessed, and their data extracted and summarised. The effects upon health-related quality of life (HRQoL) were pooled within condition groups. If the across-condition heterogeneity was I2 < 75%, we pooled effects using a random-effects panoramic meta-analysis. We summarised 494 reviews (221 128 participants), representing 14/20 physical and 13/20 mental conditions (World Health Organisation's International Classification of Diseases). Most reviews were lower-quality (351/494), investigated face-to-face CBT (397/494), and were conducted in adults (378/494). Few reviews included trials conducted in Asia, South America or Africa (45/494). CBT produced a modest benefit across conditions on HRQoL (standardised mean difference 0.23; 95% confidence interval 0.14–0.33, I2 = 32%). The associated prediction interval (−0.05 to 0.50) suggested CBT will remain effective in conditions for which we do not currently have available evidence. While some gaps remain in the completeness of the evidence base, we need to recognise the consistent evidence for the general benefit which CBT offers.
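The prediction interval quoted above is wider than the confidence interval because it adds the between-condition variance (tau-squared) to the sampling variance of the pooled estimate. A sketch of that calculation, using a normal critical value and an assumed tau-squared (the abstract does not report tau-squared, so the 0.017 below is a hypothetical value chosen purely for illustration):

```python
import math

def prediction_interval_95(pooled, se, tau2, crit=1.96):
    """Approximate 95% prediction interval for a new condition's effect under a
    random-effects model: pooled +/- crit * sqrt(tau^2 + se^2).
    crit=1.96 is a normal approximation; a t quantile with k-2 df is more usual."""
    half = crit * math.sqrt(tau2 + se * se)
    return pooled - half, pooled + half

# SMD 0.23 with 95% CI 0.14-0.33 implies se ~ (0.33 - 0.14) / (2 * 1.96)
se = (0.33 - 0.14) / (2 * 1.96)
lo, hi = prediction_interval_95(0.23, se, tau2=0.017)  # tau2 is an assumption
```

With these assumed inputs the interval comes out close to the reported −0.05 to 0.50, showing how modest between-condition heterogeneity widens the range of effects expected in as-yet-unstudied conditions.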
In the treatment of psychosis, agitation and aggression in Alzheimer's disease, guidelines emphasise the need to ‘use the lowest possible dose’ of antipsychotic drugs, but provide no information on optimal dosing.
This analysis investigated the pharmacokinetic profiles of risperidone and 9-hydroxy (OH)-risperidone, and how these related to treatment-emergent extrapyramidal side-effects (EPS), using data from The Clinical Antipsychotic Trials of Intervention Effectiveness in Alzheimer's Disease study (Clinicaltrials.gov identifier: NCT00015548).
A statistical model, which described the concentration–time course of risperidone and 9-OH-risperidone, was used to predict peak, trough and average concentrations of risperidone, 9-OH-risperidone and ‘active moiety’ (combined concentrations) (n = 108 participants). Logistic regression was used to investigate the associations of pharmacokinetic biomarkers with EPS. Model-based predictions were used to simulate the dose adjustments needed to avoid EPS.
The model showed an age-related reduction in risperidone clearance (P < 0.0001), reduced renal elimination of 9-OH-risperidone (elimination half-life 27 h), and slower active moiety clearance in 22% of patients (concentration-to-dose ratio: 20.2 (s.d. = 7.2) v. 7.6 (s.d. = 4.9) ng/mL per mg/day, Mann–Whitney U-test, P < 0.0001). Higher trough 9-OH-risperidone and active moiety concentrations (P < 0.0001) and lower Mini-Mental State Examination (MMSE) scores (P < 0.0001), were associated with EPS. Model-based predictions suggest the optimum dose ranged from 0.25 mg/day (85 years, MMSE of 5), to 1 mg/day (75 years, MMSE of 15), with alternate day dosing required for those with slower drug clearance.
Our findings argue for age- and MMSE-related dose adjustments and suggest that a single measure of the concentration-to-dose ratio could be used to identify those with slower drug clearance.
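The single concentration-to-dose measure the authors propose is simple arithmetic: trough concentration divided by daily dose. A minimal sketch follows; the cut-off is a hypothetical value placed between the two reported group means (20.2 v. 7.6 ng/mL per mg/day), not a threshold taken from the study.

```python
def concentration_to_dose_ratio(trough_ng_ml, dose_mg_per_day):
    """Concentration-to-dose (C/D) ratio in ng/mL per mg/day."""
    if dose_mg_per_day <= 0:
        raise ValueError("dose must be positive")
    return trough_ng_ml / dose_mg_per_day

# Hypothetical cut-off between the reported group means; NOT from the study
SLOW_CLEARANCE_CUTOFF = 14.0

def flags_slow_clearance(trough_ng_ml, dose_mg_per_day):
    """Flag patients whose C/D ratio suggests slower active moiety clearance."""
    return concentration_to_dose_ratio(trough_ng_ml, dose_mg_per_day) >= SLOW_CLEARANCE_CUTOFF
```

A patient near the slow-clearance group mean (around 20 ng/mL per mg/day) would be flagged for the alternate-day dosing the model predictions suggest, whereas one near 7.6 would not.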