We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model selection based on the deviance information criterion (DIC), we show that our model provides a good fit to the observed data by sharing information across the sites. We also propose a simple approach for evaluating the efficacy of the multi-site experiment by comparing the results to those that would be expected in hypothetical single-site experiments with the same sample size.
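The DIC-based model selection mentioned above can be sketched as follows. This is a minimal illustration of the standard DIC formula (DIC = mean deviance + effective number of parameters), not the authors' implementation, and the deviance values are invented for the example:

```python
# Sketch of the deviance information criterion (DIC) from posterior samples of
# the deviance D = -2 * log-likelihood. DIC = D_bar + pD, where the effective
# number of parameters pD = D_bar - D(posterior mean of the parameters).
def dic(deviance_samples, deviance_at_posterior_mean):
    d_bar = sum(deviance_samples) / len(deviance_samples)  # posterior mean deviance
    p_d = d_bar - deviance_at_posterior_mean               # effective parameters
    return d_bar + p_d

samples = [102.1, 99.8, 101.3, 100.5]  # illustrative deviance draws
print(dic(samples, 99.0))              # → 102.85
```

Among candidate models, the one with the smallest DIC is preferred, balancing fit (mean deviance) against complexity (pD).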
We investigated the impact of COVID-19 restrictions on the duration of untreated psychosis (DUP). First-episode psychosis admissions (n = 101) to the STEP Clinic in Connecticut showed a DUP reduction (P = 0.0015) during the pandemic, with the median falling from 208 days pre-pandemic to 56 days in the early pandemic period, before rising again to 154 days (P = 0.0281). Time from psychosis onset to antipsychotic prescription decreased significantly in the pandemic (P = 0.0183), with the median falling from 117 to 35 days. This cohort study demonstrates an association between greater pandemic restrictions and marked DUP reduction, and provides insights for future early detection efforts.
Growing evidence suggests that direct oral anticoagulants (DOACs) may be suitable for cerebral venous thrombosis (CVT). The optimal strategy regarding lead-in parenteral anticoagulation (PA) prior to DOAC is unknown.
Methods:
In this post hoc analysis of the retrospective ACTION-CVT study, we compared patients treated with DOACs as part of routine care: those given “very early” DOAC (no PA), “early” (<5 days PA) and “delayed” (5–21 days PA). We compared baseline characteristics and outcomes between the very early/early and delayed groups. The primary outcome was a composite of day-30 CVT recurrence/extension, new peripheral venous thromboembolism, cerebral edema and intracranial hemorrhage.
Results:
Of 231 patients, 11.7% had very early DOAC, 64.5% early (median [IQR] 2 [1–2] days) and 23.8% delayed (5 [5–6] days). More patients in the delayed group had severe clinical/radiological presentations; more patients in the very early/early group had isolated headaches. Outcomes were better in the very early/early group (90-day modified Rankin Scale score of 0–2: 94.3% vs. 83.9%). Primary outcome events were rare and did not differ significantly between groups (2.4% very early/early vs. 2.1% delayed; adjusted HR 1.49 [95% CI 0.17–13.11]).
Conclusions:
In this cohort of patients receiving DOAC for CVT as part of routine care, >75% had <5 days of PA. Those with very early/early initiation of DOAC had less severe clinical presentations. Low event rates and baseline differences between groups preclude conclusions about safety or effectiveness. Further prospective data will inform care.
Infection control guidelines for cystic fibrosis (CF) stress cleaning of environmental surfaces and patient care equipment in CF clinics. This multicenter study measured cleanliness of frequently touched surfaces in CF clinics using an ATP bioluminescence assay to assess the effectiveness of cleaning/disinfection and the impact of feedback.
Methods:
Eight surfaces were tested across 19 clinics (10 pediatric, 9 adult) over 5 rounds of testing. Rounds 1 and 2 served as the uncleaned baseline, and Round 3 occurred after routine cleaning. Rounds 4 and 5 were performed after cleaning, following feedback provided to staff. The primary outcome was the pass rate, defined as <250 relative light units.
Results:
Of the 750 tests performed, 72% of surfaces passed at baseline, and 79%, 83%, and 85% of surfaces passed in Rounds 3, 4, and 5, respectively. The overall pass rate was significantly higher in adult compared to pediatric clinics (86% vs 71%; P < 0.001). In pediatric clinics, blood pressure equipment and computer keyboards in the pulmonary function lab consistently passed, but the exam room patient/visitor chairs consistently failed in all rounds. In adult clinics, blood pressure equipment, keyboards in exam rooms, and exam tables passed in all rounds, and no surface consistently failed.
Conclusion:
We demonstrate the feasibility of using an ATP bioluminescence assay to measure the cleanliness of patient care equipment and surfaces in CF clinics. Pass rates improved after cleaning and feedback for certain surfaces. Surfaces were more challenging to keep clean in clinics caring for younger patients.
Military Servicemembers and Veterans are at elevated risk for suicide but rarely disclose suicidal thoughts to their leaders or clinicians. We developed an algorithm to identify posts containing suicide-related content on a military-specific social media platform.
Methods
Publicly shared social media posts (n = 8449) from a military-specific social media platform were reviewed and labeled by our team for the presence or absence of suicidal thoughts and behaviors, and used to train several machine learning models to identify such posts.
Results
The best performing model was a deep learning (RoBERTa) model that incorporated post text and metadata and detected the presence of suicidal posts with relatively high sensitivity (0.85), specificity (0.96), precision (0.64), F1 score (0.73), and an area under the precision-recall curve of 0.84. Compared to non-suicidal posts, suicidal posts were more likely to contain explicit mentions of suicide, descriptions of risk factors (e.g. depression, PTSD) and help-seeking, and first-person singular pronouns.
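As a quick sanity check, the reported F1 score follows from the reported precision and sensitivity (recall), since F1 is their harmonic mean:

```python
# F1 is the harmonic mean of precision and recall; with the reported
# precision (0.64) and sensitivity/recall (0.85), F1 comes out to ~0.73,
# matching the value reported above.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.64, 0.85), 2))  # → 0.73
```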
Conclusions
Our results demonstrate the feasibility and potential promise of using social media posts to identify at-risk Servicemembers and Veterans. Future work will use this approach to deliver targeted interventions to social media users at risk for suicide.
Unlike other causes of stroke, symptoms in cerebral venous thrombosis (CVT) can be nonspecific at onset with gradual worsening over time. To explore potential opportunities for earlier diagnosis, we analyzed healthcare interactions in the week prior to hospitalization for patients admitted with incident CVT in British Columbia (BC).
Methods:
We constructed a population-based cohort (2000–2017) using linked patient-level administrative data to identify patients aged ≥18 diagnosed with CVT in BC. We used descriptive analyses to summarize the frequency and types of healthcare encounters within the 7 and 3 days prior to hospitalization. Multivariable logistic regression modeling was performed to examine risk factors associated with prior encounters.
Results:
The cohort included 554 patients (mean age 50.9 years, 55.4% female). Within the 7 days prior to CVT hospitalization, 57.9% of patients had ≥1 outpatient encounter and 5.6% had ≥1 inpatient encounter. In the 3 days prior to hospitalization, 46.8% of patients had ≥1 outpatient encounter and 2.0% had ≥1 inpatient encounter. Women more frequently had outpatient interactions within 7 days (64.8% women vs. 35.2% men, p < 0.001) and 3 days (51.8% vs. 48.2%, p = 0.01) before admission. Common provider specialties for outpatient encounters were general practice (58.0%), emergency (8.3%) and neurology (5.7%). Females had higher odds (OR = 1.79) of having ≥1 outpatient encounter after adjusting for confounding.
Conclusions:
Within our Canadian cohort, over half of patients had a healthcare encounter within 7 days before their hospitalization with incident CVT. Women more commonly had an outpatient encounter preceding hospital admission.
Targeted spraying application technologies have the capacity to drastically reduce herbicide inputs, but to be successful, the performance of both machine vision–based weed detection and actuator efficiency needs to be optimized. This study assessed (1) the performance of spotted spurge recognition in ‘Latitude 36’ bermudagrass turf canopy using the You Only Look Once (YOLOv3) real-time multiobject detection algorithm and (2) the impact of various nozzle densities on model efficiency and projected herbicide reduction under simulated conditions. The YOLOv3 model was trained and validated with a data set of 1,191 images. The simulation design consisted of four grid matrix regimes (3 × 3, 6 × 6, 12 × 12, and 24 × 24), which would then correspond to 3, 6, 12, and 24 nonoverlapping nozzles, respectively, covering a 50-cm-wide band. Simulated efficiency testing was conducted using 50 images containing predictions (labels) generated with the trained YOLO model and by applying each of the grid matrixes to individual images. The model resulted in prediction accuracy of an F1 score of 0.62, precision of 0.65, and a recall value of 0.60. Increased nozzle density (from 3 to 12) improved actuator precision and predicted herbicide-use efficiency with a reduction in the false hits ratio from ∼30% to 5%. The area required to ensure herbicide deposition to all spotted spurge detected within images was reduced to 18%, resulting in ∼80% herbicide savings compared to broadcast application. Slightly greater precision was predicted with 24 nozzles but was not statistically different from the 12-nozzle scenario. Using this turf/weed model as a basis, optimal actuator efficacy and herbicide savings would occur by increasing nozzle density from 1 to 12 nozzles within the context of a single band.
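The grid-matrix nozzle simulation can be sketched as follows. This is a hypothetical illustration, not the study's implementation; the helper `sprayed_fraction` and the bounding-box coordinates are invented for the example. A nozzle cell fires if any detected weed bounding box overlaps it, and finer grids spray a smaller fraction of the image:

```python
import math

# Hypothetical sketch of the grid-matrix nozzle simulation: divide the image
# into an n x n grid and fire every cell that a detected weed bounding box
# overlaps. Finer grids spray a smaller fraction of non-target area.
def sprayed_fraction(boxes, width, height, n):
    """boxes: list of (x1, y1, x2, y2) detections; n x n grid over the image."""
    cw, ch = width / n, height / n
    fired = set()
    for x1, y1, x2, y2 in boxes:
        for i in range(int(x1 // cw), min(math.ceil(x2 / cw), n)):
            for j in range(int(y1 // ch), min(math.ceil(y2 / ch), n)):
                fired.add((i, j))
    return len(fired) / (n * n)

boxes = [(10, 10, 60, 60), (200, 150, 240, 200)]  # invented detections
for n in (3, 6, 12, 24):
    print(n, round(sprayed_fraction(boxes, 480, 360, n), 3))
```

Running this shows the sprayed fraction shrinking as the grid is refined, mirroring the reported drop in false hits as nozzle density increases from 3 to 12 and the diminishing returns beyond that.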
Interlayer swelling of hydrated montmorillonite is an important issue in clay mineralogy. Although the swelling behavior of montmorillonite under ambient conditions has been investigated comprehensively, the effects of basin conditions on the hydration and swelling behaviors of montmorillonite have not been characterized thoroughly. In the present study, molecular dynamics simulations were employed to reveal the swelling behavior and changes in the interlayer structure of Na-montmorillonite under the high temperatures and pressures of basin conditions. According to the calculated immersion energy, the monolayer hydrate becomes more stable than the bilayer hydrate at a burial depth of 7 km (a temperature of 518 K and a lithostatic pressure of 1.04 kbar). With increasing burial depth, the basal spacings of the monolayer and bilayer hydrates change to varying degrees. The density-distribution profiles of interlayer species show that the hydrate structures vary with temperature and pressure, especially for the bilayer hydrate. With increasing depth, more Na+ ions distribute closer to the clay layers. The mobility of interlayer water and ions increases with increasing temperature, while increasing pressure decreases their mobility.
Objective:
To assess the turnaround time (TAT) and cost-benefit of on-site C. auris screening and its impact on length of stay (LOS) and costs compared to reference laboratories.
Design:
Before-and-after retrospective cohort study.
Setting:
Large-tertiary medical center.
Methods:
We validated an on-site polymerase chain reaction-based testing platform for C. auris and retrospectively reviewed hospitalized adults who screened negative before and after platform implementation. We constructed multivariable models to assess the association of screening negative with hospital LOS/cost in the pre- and postimplementation periods. We adjusted for confounders such as demographics and indwelling device use, and compared TATs for all samples tested.
Results:
The sensitivity and specificity of the testing platform were 100% and 98.11%, respectively, compared to send-out testing. The clinical cohort included 287 adults in the preimplementation period and 1,266 in the postimplementation period. The TAT was reduced by more than 2 days (median 3 days [IQR: 2.0, 7.0] vs 0.42 days [IQR: 0.24, 0.81]; p < 0.001). Median LOS was significantly lower in the postimplementation period; however, this was no longer evident after adjustment. The postimplementation period was associated with a $6,965 reduction in total cost (95% CI: −$481 to $14,412; p = 0.067). The median adjusted total cost per patient was $7,045 (IQR: $3,805, $13,924) lower in the post- vs the preimplementation period.
Conclusions:
Our assessment did not find a statistically significant change in LOS; nevertheless, on-site testing was not cost-prohibitive for the institution. The value of on-site testing may be supported if an institutional C. auris reduction strategy emphasizes faster TATs.
Machine learning (ML) approaches are a promising avenue for identifying vocal markers of neuropsychiatric disorders, such as schizophrenia. While recent studies have shown that voice-based ML models can reliably predict diagnosis and clinical symptoms of schizophrenia, it is unclear to what extent such ML markers generalize to new speech samples collected using a different task or in a different language; assessing generalization performance is, however, crucial for testing their clinical applicability.
Objectives
In this research, we systematically assessed the generalizability of ML models across contexts and languages relying on a large cross-linguistic dataset of audio recordings of patients with schizophrenia and controls.
Methods
We trained ML models of vocal markers of schizophrenia on a large cross-linguistic dataset of audio recordings of 231 patients with schizophrenia and 238 matched controls (>4,000 recordings in Danish, German, Mandarin and Japanese). We developed a rigorous pipeline to minimize overfitting, including a cross-validated training set and Mixture of Experts (MoE) models. We tested the generalizability of the ML models on: (i) different participants speaking the same language (hold-out test set); (ii) different participants speaking a different language. Finally, we compared the predictive performance of: (i) models trained on a single language (e.g., Danish); (ii) MoE models, i.e., ensembles of models (experts) each trained on a single language, whose predictions are combined using a weighted sum; and (iii) multi-language models trained on multiple languages (e.g., Danish and German).
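The weighted-sum MoE combination described in (ii) can be sketched as follows; the languages are from the study, but the probabilities and weights below are invented placeholders, not the study's fitted models:

```python
# Hedged sketch of the Mixture-of-Experts combination: each per-language
# expert outputs a probability, and the ensemble prediction is their
# weighted sum. All numeric values are illustrative placeholders.
def moe_predict(expert_probs, weights):
    """expert_probs: language -> P(patient); weights: language -> weight (sum to 1)."""
    return sum(weights[lang] * p for lang, p in expert_probs.items())

expert_probs = {"danish": 0.80, "german": 0.55, "mandarin": 0.40, "japanese": 0.60}
weights = {"danish": 0.4, "german": 0.3, "mandarin": 0.2, "japanese": 0.1}
print(round(moe_predict(expert_probs, weights), 3))  # → 0.625
```

In practice the weights would themselves be learned (e.g., from validation performance per expert), which is what distinguishes an MoE from a plain average of experts.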
Results
Model performance was comparable to state-of-the-art findings (F1: 70%-80%) when trained and tested on participants speaking the same language (out-of-sample performance). Crucially, however, the ML models did not generalize well, showing a substantial decrease in performance (close to chance) when trained on one language and tested on another (e.g., trained on Danish and tested on German). MoE and multi-language models performed somewhat better (F1: 55%-60%), but still far below what is required for clinical applicability.
Conclusions
Our results show that the cross-linguistic generalizability of ML models of vocal markers of schizophrenia is very limited. This is a problem if the goal is to translate these vocal markers into effective clinical applications. We argue that more emphasis needs to be placed on collecting large open datasets to test the generalizability of voice-based ML models, for example across different speech tasks or across the heterogeneous clinical profiles that characterize schizophrenia spectrum disorder.
Background: Tuberculosis is an airborne disease caused by Mycobacterium tuberculosis. Intracranial tuberculoma is a rare complication of extrapulmonary tuberculosis resulting from hematogenous spread to subpial and subependymal regions, and can occur with or without meningitis. Methods: A 3-year-old male who recently emigrated from Sudan presented to the emergency department with right-sided seizures lasting 30 minutes, which were aborted with levetiracetam and midazolam. CT head revealed a multilobulated left supratentorial mass with solid and cystic components measuring 8.0 x 4.8 x 6.5 cm. The patient underwent successful surgical resection of the mass, which was positive for Mycobacterium tuberculosis. He was started on rifampin, isoniazid, pyrazinamide, ethambutol, and a fluoroquinolone, and discharged home in stable condition. Results: A literature review on pediatric intracranial tuberculoma was performed, which included 48 studies (n=49). The mean age was 8.8 ± 5.4 years with a slight female predilection (59%). Solitary tuberculomas predominated (63%) and were preferentially managed with both surgical resection and antitubercular therapy (ATT), whereas multifocal tuberculomas were preferentially managed with ATT. Conclusions: Intracranial tuberculoma is a rare but treatable cause of space-occupying lesions in children. Clinicians should maintain a high level of suspicion in patients from endemic regions and involve the infectious disease service early in the patient’s care.
Background: To localize cortical speech areas, methods such as fMRI are commonly used, but the Wada test can also determine whether a region is critical to a particular task. We report a case of a left-handed patient with a left frontal tumour in whom fMRI language paradigms produced activation in both left and right Broca’s and Wernicke’s areas. Methods: All imaging used a 3 Tesla Siemens Skyra scanner. The patient performed five speech tasks: word reading, picture naming, semantic questions, pseudohomophone reading, and word generation. All preprocessing and statistical analyses for functional images were performed using Brain Voyager QX. Results: The fMRI results revealed right hemisphere dominance for language processing. A Wada test was performed to confirm whether the regions in the left hemisphere were critical to speech. The patient experienced speech arrest during the Wada test, confirming that, despite bilateral speech activation, the left hemisphere speech regions were required for speech production. Conclusions: This case emphasizes the importance of preoperative fMRI in assessing the location of eloquent cortices adjacent to a tumour; the Wada test is still warranted for examining the necessity of left hemisphere language regions when fMRI fails to show clear left-lateralization.
This study aimed to compare the pre- and post-operative vestibular and equilibrium functions of patients with cholesteatoma-induced labyrinthine fistulas who underwent different management methods.
Methods
Data from 49 patients with cholesteatoma-induced labyrinthine fistulas who underwent one of three surgical procedures were retrospectively analysed. The three management options were fistula repair, obliteration and canal occlusion.
Results
Patients underwent fistula repair (n = 8), canal occlusion (n = 18) or obliteration (n = 23). Patients in the fistula repair and canal occlusion groups suffered from post-operative vertigo and imbalance that persisted for longer than in the obliteration group. Despite the different management strategies, all patients achieved complete recovery of equilibrium function through sustained rehabilitation exercises.
Conclusion
Complete removal of the cholesteatoma matrix overlying the fistula is reliable for preventing iatrogenic hearing deterioration due to unremitting labyrinthitis. Thus, among the three fistula treatments, obliteration is the optimal method for preserving post-operative vestibular functions.
There is evidence that child maltreatment is associated with shorter telomere length in early life.
Aims
This study aims to examine whether child maltreatment is associated with telomere length in middle-aged and older adults.
Method
This was a retrospective cohort study of 141 748 UK Biobank participants aged 37–73 years at recruitment. Leukocyte telomere length was measured with quantitative polymerase chain reaction, then log-transformed and scaled to have unit standard deviation. Child maltreatment was retrospectively reported by participants. Linear regression was used to analyse the association.
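The preprocessing and model described above (log-transform, unit-SD scaling, linear regression on maltreatment exposure) can be sketched on synthetic data. Everything below is simulated purely to illustrate the analysis steps; it is not UK Biobank data, and the effect size is built in by construction:

```python
import numpy as np

# Illustrative sketch: log-transform telomere length, scale to unit SD, then
# regress on the number of maltreatment types with ordinary least squares.
# The data are simulated with a built-in negative effect of maltreatment.
rng = np.random.default_rng(0)
maltreatment = rng.integers(0, 4, size=500).astype(float)    # 0-3 types reported
telomere = np.exp(rng.normal(8.0, 0.2, 500) - 0.1 * maltreatment)

y = np.log(telomere)
y = (y - y.mean()) / y.std()                                 # unit-SD scaling

X = np.column_stack([np.ones_like(y), maltreatment])         # intercept + exposure
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[1], 3))  # slope is negative: more maltreatment, shorter telomeres
```

The coefficient on maltreatment is then interpretable in standard-deviation units of (log) telomere length, matching the scale of the β values reported in the Results.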
Results
After adjusting for sociodemographic characteristics, participants with three or more types of maltreatment presented with the shortest telomere lengths (β = −0.05, 95% CI −0.07 to −0.03; P < 0.0001), followed by those with two types of maltreatment (β = −0.02, 95% CI −0.04 to 0.00; P = 0.02), relative to those who reported none. When adjusted for depression and post-traumatic stress disorder, the telomere lengths of participants with three or more types of maltreatment were still shorter (β = −0.04, 95% CI −0.07 to −0.02; P = 0.0008). The telomere lengths of those with one type of maltreatment were not significantly different from those who had none. When mutually adjusted, physical abuse (β = −0.05, 95% CI −0.07 to −0.03; P < 0.0001) and sexual abuse (β = −0.02, 95% CI −0.04 to 0.00; P = 0.02) were independently associated with shorter telomere length.
Conclusions
Our findings show that child maltreatment is associated with shorter telomere length in middle-aged and older adults, independent of sociodemographic and mental health factors.