Health technology assessment (HTA) is widely acknowledged to be an inherently value-based activity that draws on normative reasoning alongside empirical evidence. Yet the language used to conceptualise and articulate HTA's normative aspects is unnuanced, imprecise, and inconsistently employed, undermining transparency and preventing proper scrutiny of the rationales on which decisions are based. This paper – developed through a cross-disciplinary collaboration of 24 researchers with expertise in healthcare priority-setting – seeks to address this problem by offering clear definitions of key terms and distinguishing between the types of normative commitment invoked during HTA, thereby providing a novel conceptual framework for the articulation of reasoning. Through application to a hypothetical case, we illustrate how this framework can operate as a practical tool through which HTA practitioners and policymakers can enhance the transparency and coherence of their decision-making, while enabling others to hold them more easily to account. The framework is offered as a starting point for further discussion amongst those who wish to enhance the legitimacy and fairness of HTA by facilitating practical public reasoning, in which decisions are made on behalf of the public, in public view, through a chain of reasoning that withstands ethical scrutiny.
Knowledge graphs have become a common approach for knowledge representation. Yet, the application of graph methodology remains elusive due to the sheer number and complexity of knowledge sources. In addition, semantic incompatibilities hinder efforts to harmonize and integrate across these diverse sources. As part of The Biomedical Translator Consortium, we have developed a knowledge graph–based question-answering system designed to augment human reasoning and accelerate translational scientific discovery: the Translator system. We have applied the Translator system to answer biomedical questions in the context of a broad array of diseases and syndromes, including Fanconi anemia, primary ciliary dyskinesia, multiple sclerosis, and others. A variety of collaborative approaches have been used to research and develop the Translator system. One recent approach involved the establishment of a monthly “Question-of-the-Month (QotM) Challenge” series. Herein, we describe the structure of the QotM Challenge; the six challenges that have been conducted to date on drug-induced liver injury, cannabidiol toxicity, coronavirus infection, diabetes, psoriatic arthritis, and ATP1A3-related phenotypes; the scientific insights that have been gleaned during the challenges; and the technical issues that were identified over the course of the challenges and that can now be addressed to foster further development of the prototype Translator system. We close with a discussion of large language models such as ChatGPT and highlight differences between those models and the Translator system.
Participation in outdoor activities has accelerated in the past several years. The authors were tasked with providing medical care for the Union Cycliste Internationale (UCI) Mountain Bike World Cup in Snowshoe, West Virginia (USA) in September 2021. The Hartman and Arbon models were designed to predict patient presentation and hospital transport rates, as well as needed medical resources, at urban mass-gathering events. However, there is a lack of standardized methods to predict injury, illness, and insult severity at rural mass gatherings.
This study aimed to determine whether the Arbon model would predict, within 10%, the number of patient presentations to be expected and to determine if the event classification provided by the Hartman model would adequately predict resources needed during the event.
Race data were collected from UCI event officials, and injury data were collected from participants at the time of presentation for medical care. Predicted presentation and transport rates were calculated using the Arbon model and then compared to the observed presentation rates. Likewise, the event classification provided by the Hartman model was compared to the resources utilized during the event.
During the event, 34 patients presented for medical care and eight patients required some level of transport to a medical facility. The Arbon predictive model for the 2021 event yielded 30.3 expected patient presentations. There were 34 total patient presentations during the 2021 race, approximately 11% more than predicted. The Hartman model yielded a score of four. Based on this score, this race would be classified as an “intermediate” event, requiring multiple Advanced Life Support (ALS) and Basic Life Support (BLS) personnel and transport units.
The Arbon model predicted the patient presentation rate closely enough to allow for effective pre-event planning and resource allocation, differing from the observed data by only four patient presentations. Nonetheless, the Arbon model under-predicted patient presentations, and the Hartman model under-estimated the resources needed, given the high-risk nature of downhill cycling. The event required physician-level skills and air medical services to care for patients safely. Further evaluation of rural events will be needed to determine whether there is a general need for physician presence at smaller events involving inherently risky activities, or whether this recurring cycling event is an outlier.
Electroconvulsive therapy (ECT) is the most effective intervention for patients with treatment-resistant depression. A clinical decision support tool could guide patient selection to improve the overall response rate and avoid ineffective treatments with adverse effects. Initial small-scale, single-center studies indicate that both structural magnetic resonance imaging (sMRI) and functional MRI (fMRI) biomarkers may predict ECT outcome, but it is not known whether those results generalize to data from other centers. The objective of this study was to develop and validate neuroimaging biomarkers for ECT outcome in a multicenter setting.
Multimodal data (i.e., clinical, sMRI, and resting-state fMRI) were collected from seven centers of the Global ECT-MRI Research Collaboration (GEMRIC). We used data from 189 depressed patients to evaluate which data modalities, or combinations thereof, could provide the best predictions of treatment remission (HAM-D score ⩽7) using a support vector machine classifier.
Remission classification using a combination of gray matter volume and functional connectivity led to well-performing models, with an average area under the curve (AUC) of 0.82–0.83 when trained and tested on samples from the three largest centers (N = 109); performance remained acceptable when validated using leave-one-site-out cross-validation (AUC 0.70–0.73).
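The leave-one-site-out evaluation described above can be sketched as follows. Everything here is synthetic and illustrative: the feature dimensions, site assignments, and the linear-kernel SVM are assumptions for the sketch, not details taken from the GEMRIC analysis.

```python
# Sketch: leave-one-site-out cross-validation of an SVM remission
# classifier on synthetic multimodal features (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p = 189, 50                       # patients, imaging features (assumed)
X = rng.normal(size=(n, p))          # stand-in for GM volume + connectivity
w = rng.normal(size=p)
y = (X @ w + rng.normal(scale=2.0, size=n) > 0).astype(int)  # remission label
site = rng.integers(0, 7, size=n)    # 7 centers, as in the study design

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
aucs = []
for train, test in LeaveOneGroupOut().split(X, y, groups=site):
    model.fit(X[train], y[train])    # train on 6 sites
    scores = model.decision_function(X[test])
    aucs.append(roc_auc_score(y[test], scores))  # test on the held-out site

print(f"leave-one-site-out mean AUC: {np.mean(aucs):.2f}")
```

Holding out an entire center per fold means the reported AUC reflects transfer to an unseen scanner and population, rather than within-site fit, which is why it is the stricter of the two validation schemes reported.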
These results show that multimodal neuroimaging data can be used to predict remission with ECT for individual patients across different treatment centers, despite significant variability in clinical characteristics across centers. Future development of a clinical decision support tool applying these biomarkers may be feasible.
Several hypotheses may explain the association between substance use, posttraumatic stress disorder (PTSD), and depression. However, few studies have utilized a large multisite dataset to understand this complex relationship. Our study assessed the relationship between alcohol and cannabis use trajectories and PTSD and depression symptoms across 3 months in recently trauma-exposed civilians.
In total, 1618 (1037 female) participants provided self-report data on past 30-day alcohol and cannabis use and PTSD and depression symptoms during their emergency department (baseline) visit. We reassessed participants' substance use and clinical symptoms 2, 8, and 12 weeks posttrauma. Latent class mixture modeling determined alcohol and cannabis use trajectories in the sample. Changes in PTSD and depression symptoms were assessed across alcohol and cannabis use trajectories via a mixed-model repeated-measures analysis of variance.
Three trajectory classes (low, high, and increasing use) provided the best model fit for both alcohol and cannabis use. The low alcohol use class exhibited lower PTSD symptoms at baseline than the high use class; the low cannabis use class exhibited lower PTSD and depression symptoms at baseline than the high and increasing use classes; these symptoms increased greatly at week 8 and declined at week 12. Participants who were already using alcohol and cannabis exhibited greater PTSD and depression symptoms at baseline, which increased at week 8 and decreased at week 12.
Our findings suggest that alcohol and cannabis use trajectories are associated with the intensity of posttrauma psychopathology. These findings could potentially inform the timing of therapeutic strategies.
The rise of neurotechnologies, especially in combination with artificial intelligence (AI)-based methods for brain data analytics, has raised concerns around the protection of mental privacy, mental integrity, and cognitive liberty – often framed as “neurorights” in ethical, legal, and policy discussions. Several states are now looking at incorporating neurorights into their constitutional legal frameworks, and international institutions and organizations, such as UNESCO and the Council of Europe, are taking an active interest in developing international policy and governance guidelines on this issue. However, in many discussions of neurorights, the philosophical assumptions, ethical frames of reference, and legal interpretations are either not made explicit or conflict with each other. The aim of this multidisciplinary work is to provide conceptual, ethical, and legal foundations for a common minimalist understanding of mental privacy, mental integrity, and cognitive liberty, in order to support scholarly, legal, and policy discussions.
Sackett et al. (2022) identified previously unnoticed flaws in the way range restriction corrections have been applied in prior meta-analyses of personnel selection tools. They offered revised estimates of operational validity, which are often quite different from the prior estimates. The present paper attempts to draw out the applied implications of that work. We aim to a) present a conceptual overview of the critique of prior approaches to correction, b) outline the implications of this new perspective for the relative validity of different predictors and for the tradeoff between validity and diversity in selection system design, c) highlight the need to attend to variability in meta-analytic validity estimates, rather than just the mean, d) summarize reactions encountered to date to Sackett et al., and e) offer a series of recommendations regarding how to go about correcting validity estimates for unreliability in the criterion and for range restriction in applied work.
Training in disaster medicine can be partly theoretical, but it must include a substantial practical component. Although part of this training can be delivered through virtual reality or computer-based exercises, full-scale disaster exercises bringing together all of the relevant disciplines are of great value for this learning. Exercises of such magnitude are difficult to carry out in the civilian setting for reasons of resources and cost. We therefore sought to develop this disaster medicine course with the three French-speaking civilian universities and, for the practical part, with the Royal Military School.
Collaboration agreements were established between three civilian universities (ULB, UCLouvain, ULiège) and the Royal Military School. The army provides the infrastructure of the Belgian military units to organize the exercises, along with personnel, moulage (casualty make-up) supplies, vehicles, and security, all free of charge. The army also organizes coordination meetings during the year ahead of the exercises.
The exercises are organized under completely safe conditions on military terrain, isolated from the civilian environment and without disturbing the daily life of civilians. Access is authorized and organized for the various disciplines (firefighters, police, Red Cross, and other participants). Nearly 100 people (simulated victims, firefighters, police officers, etc.) and 50 vehicles per exercise make each scenario entirely believable. Different scenarios are repeated six times to complete the training of 80 students.
This civilian–military collaboration has made it possible to set up high-quality training that integrates a large component of full-scale exercises, at no cost and in complete safety. These exercises conclude the course by putting into practice all the knowledge acquired during the theoretical part and the virtual exercises.
OBJECTIVES/GOALS: In 2020, Baylor College of Medicine held a datathon to introduce a data warehouse, identify its capabilities/limitations, foster collaborations, and engage trainees. The event was held again in 2022, applying lessons learned (e.g., tools for data self-service and team communication). METHODS/STUDY POPULATION: Senior faculty reviewed proposals with an emphasis on feasibility, impact, and relevance to quality improvement or population health. Selected teams worked with Information Technology (IT) for 2 months and presented findings at a 1-day event. Surveys were administered to participants before and after the event to evaluate their background, team characteristics, collaborations, knowledge before and after the datathon, perceived value of the datathon, and plans for future work. Descriptive statistics of respondents’ self-reports were tabulated. RESULTS/ANTICIPATED RESULTS: In 2022, 19 of 36 projects were accepted (13/33 in 2020). At both events, most projects studied quality improvement or clinical outcomes. Of 82 participants in 2022, 54 completed surveys. In 2022, 72% had no prior datathon experience (48% in 2020). Median effort was 10 person-hours, and median IT time was 20% (20 person-hours and 10% in 2020). Seven respondents finished and 21 partially finished their projects (1 and 11 in 2020); 92% made new collaborations (91% in 2020). Respondents strongly agreed that the experience was valuable (n=28), that they would participate in future datathons (n=30), and that they would use the warehouse for future work (n=25). Twenty-seven respondents have planned abstracts and 25 have planned manuscripts. DISCUSSION/SIGNIFICANCE: The 2022 datathon had more participants with less experience, potentially due to improved promotion and training opportunities. Fewer person-hours and a higher percentage of IT time were required compared with 2020, and more projects were completed, possibly due to increased IT efficiency.
OBJECTIVES/GOALS: The goal of this study was to develop a clinically applicable technique to increase the precision of in vivo dose monitoring during radiation therapy by mapping the dose deposition and resolving the temporal dose accumulation while the treatment is being delivered in real time. METHODS/STUDY POPULATION: Ionizing radiation acoustic imaging (iRAI) is a novel imaging concept with the potential to map the delivered radiation dose onto anatomic structure in real time during external beam radiation therapy without interrupting the clinical workflow. The iRAI system consisted of a custom-designed two-dimensional (2D) matrix transducer array with an integrated preamplifier array, driven by a clinic-ready ultrasound imaging platform. The feasibility of iRAI volumetric imaging for mapping dose delivery and real-time monitoring of temporal dose accumulation in a clinical treatment plan was investigated with a phantom, a rabbit model, and a cancer patient. RESULTS/ANTICIPATED RESULTS: The total dose deposition and temporal dose accumulation in 3D space of a clinical C-shape treatment plan in a targeted region were first imaged and optimized in a phantom. Then, semi-quantitative iRAI measurements were achieved in an in vivo rabbit model. Finally, for the first time, real-time visualization of radiation dose delivered deep in a patient with liver metastases was performed with a clinical linear accelerator. These studies demonstrate the potential of iRAI to monitor and quantify the radiation dose deposition during treatment. DISCUSSION/SIGNIFICANCE: Described here is the pioneering role of an iRAI system in mapping the 3D radiation dose deposition of a complex clinical radiotherapy treatment plan. iRAI offers a cost-effective and practical solution for real-time visualization of 3D radiation dose delivery, potentially leading to personalized radiotherapy with optimal efficacy and safety.
This paper used data from the Apathy in Dementia Methylphenidate Trial 2 (NCT02346201) to conduct a planned cost consequence analysis to investigate whether treatment of apathy with methylphenidate is economically attractive.
A total of 167 patients with clinically significant apathy randomized to either methylphenidate or placebo were included. The Resource Utilization in Dementia Lite instrument assessed resource utilization for the past 30 days and the EuroQol five dimension five level questionnaire assessed health utility at baseline, 3 months, and 6 months. Resources were converted to costs using standard sources and reported in 2021 USD. A repeated measures analysis of variance compared change in costs and utility over time between the treatment and placebo groups. A binary logistic regression was used to assess cost predictors.
Costs were not significantly different between groups whether the cost of methylphenidate was excluded (F(2,330) = 0.626, ηp2 = 0.004, p = 0.535) or included (F(2,330) = 0.629, ηp2 = 0.004, p = 0.534). Utility improved with methylphenidate treatment as there was a group by time interaction (F(2,330) = 7.525, ηp2 = 0.044, p < 0.001).
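As a consistency check, the reported partial eta-squared values follow directly from the F statistics and their degrees of freedom via the standard identity ηp² = F·df1 / (F·df1 + df2). A minimal sketch, using only the numbers reported above:

```python
# Recover partial eta squared from an F statistic and its degrees of freedom:
# eta_p^2 = F*df1 / (F*df1 + df2).
def partial_eta_squared(f_stat, df1, df2):
    return f_stat * df1 / (f_stat * df1 + df2)

# Values reported above: F(2,330) = 0.626 (costs) and F(2,330) = 7.525 (utility).
print(round(partial_eta_squared(0.626, 2, 330), 3))  # 0.004
print(round(partial_eta_squared(7.525, 2, 330), 3))  # 0.044
```

Both computed values match the ηp² figures quoted in the results, which is a quick way for readers to verify effect-size reporting in repeated-measures ANOVA summaries.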
Results from this study indicated that there was no evidence for a difference in resource utilization costs between methylphenidate and placebo treatment. However, utility improved significantly over the 6-month follow-up period. These results can aid in decision-making to improve quality of life in patients with Alzheimer’s disease while considering the burden on the healthcare system.
KarXT combines the M1/M4-preferring muscarinic receptor agonist xanomeline and the peripherally restricted anticholinergic trospium. In the phase 2 EMERGENT-1 study, KarXT met the primary endpoint of a significant reduction in Positive and Negative Syndrome Scale (PANSS) total score through week 5 vs placebo, improved other key secondary efficacy measures, and was generally well tolerated.
EMERGENT-2 was a phase 3, randomized, double-blind, placebo-controlled, 5-week trial of KarXT in acutely psychotic patients with schizophrenia in the inpatient setting. Eligible patients were randomized 1:1 to KarXT or matched placebo. Dosing of KarXT (mg xanomeline/mg trospium) started at 50 mg/20 mg BID and increased to a maximum of 125 mg/30 mg BID. The primary efficacy endpoint was change from baseline to week 5 in PANSS total score. Key secondary endpoints included change from baseline to week 5 in PANSS positive subscale, PANSS negative subscale, and PANSS negative Marder factor scores compared with placebo. Efficacy analyses were performed using the modified intent-to-treat population (patients with ≥1 dose of study medication, a baseline PANSS assessment, and ≥1 postbaseline PANSS assessment). All patients receiving ≥1 dose of study drug were included in safety analyses.
A total of 252 US patients were enrolled. KarXT demonstrated a statistically significant and clinically meaningful 9.6-point reduction from baseline to week 5 (effect size=0.61) in PANSS total score vs placebo (p<0.0001); a significant improvement in PANSS total score was evident starting at week 2 (the first postbaseline rating) and continued through the end of the study. KarXT also met key secondary endpoints. Results at week 5 included a 2.9-point reduction in PANSS positive subscale score with KarXT vs placebo (p<0.0001), a 1.8-point reduction in PANSS negative subscale score with KarXT vs placebo (p=0.0055), and a 2.2-point reduction in PANSS negative Marder factor score with KarXT vs placebo (p=0.0022). KarXT was generally well tolerated. Overall discontinuation rates were similar with KarXT (25%) and placebo (21%). Overall treatment-emergent adverse event (TEAE) rates were 75% for KarXT and 58% for placebo. Discontinuation rates related to TEAEs were similar between KarXT (7%) and placebo (6%). Rates of serious TEAEs were similar with KarXT and placebo (2% in each group); no serious TEAEs were determined to be drug related. The most common TEAEs (≥5%) with KarXT were all mild to moderate in severity and included constipation, dyspepsia, nausea, vomiting, headache, blood pressure increases, dizziness, gastroesophageal reflux disease, abdominal discomfort, and diarrhea. KarXT was not associated with sedation/somnolence, weight gain, or extrapyramidal symptoms.
KarXT has the potential to be the first in a new class of treatments for patients with schizophrenia and a promising alternative to postsynaptic dopamine D2 receptor antagonists.
“Comprehensive Healthcare for America” is a largely single-payer reform proposal that, by applying the insights of behavioral economics, may be able to rally patients and clinicians sufficiently to overcome the opposition of politicians and vested interests to providing all Americans with less complicated and less costly access to needed healthcare.
Obesity is highly prevalent and disabling, especially in individuals with severe mental illness including bipolar disorders (BD). The brain is a target organ for both obesity and BD. Yet, we do not understand how cortical brain alterations in BD and obesity interact.
We obtained body mass index (BMI) and MRI-derived regional cortical thickness and surface area from 1231 individuals with BD and 1601 control individuals from 13 countries within the ENIGMA-BD Working Group. We jointly modeled the statistical effects of BD and BMI on brain structure using mixed-effects models and tested for interaction and mediation. We also investigated the impact of medications on the BMI-related associations.
BMI and BD additively impacted the structure of many of the same brain regions. Both BMI and BD were negatively associated with cortical thickness, but not surface area. In most regions the number of jointly used psychiatric medication classes remained associated with lower cortical thickness when controlling for BMI. In a single region, fusiform gyrus, about a third of the negative association between number of jointly used psychiatric medications and cortical thickness was mediated by association between the number of medications and higher BMI.
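The "about a third mediated" result refers to the indirect share (medications → BMI → thickness) of the total medication–thickness association. A minimal product-of-coefficients sketch on synthetic data illustrates the computation; the variable roles and effect sizes here are illustrative assumptions, not the ENIGMA-BD estimates:

```python
# Sketch of product-of-coefficients mediation on synthetic data:
# X = number of medication classes, M = BMI, Y = cortical thickness.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)                       # medication load (standardized)
m = 0.5 * x + rng.normal(size=n)             # BMI partly driven by medications
y = -0.4 * x - 0.3 * m + rng.normal(size=n)  # thickness lowered by both

def slopes(target, *preds):
    """OLS slope coefficients (intercept dropped) via least squares."""
    A = np.column_stack([np.ones(n), *preds])
    return np.linalg.lstsq(A, target, rcond=None)[0][1:]

(c_total,) = slopes(y, x)          # total effect of X on Y
c_direct, b = slopes(y, x, m)      # direct effect and M -> Y path
(a,) = slopes(m, x)                # X -> M path
prop_mediated = a * b / c_total    # indirect effect as share of total

print(f"proportion mediated: {prop_mediated:.2f}")
```

With the effect sizes assumed above, the true proportion mediated is 0.15/0.55 ≈ 0.27, and the estimate lands near that value; formal analyses would add confidence intervals (e.g., via bootstrapping) and covariate adjustment.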
We confirmed consistent associations between higher BMI and lower cortical thickness, but not surface area, across the cerebral mantle, in regions which were also associated with BD. Higher BMI in people with BD indicated more pronounced brain alterations. BMI is important for understanding the neuroanatomical changes in BD and the effects of psychiatric medications on the brain.
Backward magical contagion describes instances in which individuals (sources) express discomfort or pleasure when something connected to them (medium; e.g., hair, a diary) falls into the possession of a negatively- or positively-perceived individual (recipient). The reaction seems illogical, since it is made clear that the source will never experience the object again, and the psychological effect appears to reverse the standard forward model of causality. Backward magical contagion was originally believed to be a belief held only within traditional cultures. Two studies examined negative backward contagion in adult Americans in online surveys. Study 1 indicated that backward contagion effects occur commonly, particularly when a recipient knows of the medium’s source. Study 2 showed that backward contagion effects tend to be neutralized when the recipient burns the object, as opposed to just possessing it or discarding it. Ironically, in traditional cultures, burning is a particularly potent cause of backward contagion.
Using work-from-home (WFH) scores obtained by matching Philippine occupations with U.S. O*NET occupations, this paper estimates that only 12.38% of all workers can WFH and that 25.7% of Philippine occupations are teleworkable, mostly in the following occupational groups: professionals, clerical support workers, and technicians and associate professionals. The education; real estate; and professional, scientific, and technical sectors account for the largest share of teleworkable jobs. Workers who belong to lower per capita income deciles, are male, have lower levels of education, are self-employed, are aged 55 or older, or work in sectors such as agriculture and retail are less likely to be in teleworkable occupations.