Introduction: Emergency department (ED) crowding is a major problem across Canada. We studied the ability of artificial intelligence methods to improve patient flow through the ED by predicting patient disposition using information available at triage and shortly after patients’ arrival in the ED. Methods: This retrospective study included all visits to an urban, academic, adult ED between May 2012 and June 2019. For each visit, 489 variables were extracted, including triage data collected for use in the Canadian Triage and Acuity Scale (CTAS) and information regarding laboratory tests, radiological tests, consultations and admissions. A training set consisting of all visits from April 2012 up to December 2018 was used to train 5 classes of machine learning models to predict admission to the hospital from the ED. The models were trained to predict admission at the time of the patient's arrival in the ED and every 30 minutes after arrival, until 6 hours into the ED stay. Model performance was compared using the area under the ROC curve (AUC) on a test set consisting of all visits from January 2019 to June 2019. Results: The study included 536,332 visits, and the admission rate was 15.0%. Gradient boosting models generally outperformed the other machine learning models. A gradient boosting model using all data available at 2 hours after patient arrival in the ED yielded a test-set AUC of 0.92 [95% CI 0.91-0.93], while a model using only data available at triage yielded an AUC of 0.90 [95% CI 0.89-0.91]. Prediction quality generally improved the later in the patient's ED stay the prediction was made, reaching an AUC of 0.95 [95% CI 0.93-0.96] at 6 hours after arrival. A gradient boosting model using 20 variables available at 2 hours after patient arrival in the ED yielded an AUC of 0.91 [95% CI 0.89-0.93].
A gradient boosting model making predictions at 2 hours after arrival in the ED, using only variables that are available at all EDs in the province of Quebec, yielded an AUC of 0.91 [95% CI 0.89-0.92]. Conclusion: Machine learning can predict admission to hospital from the ED using variables that are collected as part of routine ED care. Machine learning tools may potentially help ED physicians make faster and more appropriate disposition decisions, decrease unnecessary testing and alleviate ED crowding.
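The modelling pipeline described above (train a gradient boosting classifier on earlier visits, evaluate AUC on later visits) can be sketched as follows. This is an illustrative example on synthetic data, not the study's code; all variable counts and the simulated data are hypothetical.

```python
# Minimal sketch: gradient boosting admission prediction with a temporal
# train/test split and test-set AUC. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 20))             # stand-in for triage/lab variables
logits = X[:, 0] + 0.5 * X[:, 1] - 2.0   # keeps the admission rate low
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Temporal split, as in the study: earlier visits train, later visits test
X_train, X_test = X[:4000], X[4000:]
y_train, y_test = y[:4000], y[4000:]

model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.2f}")
```

In practice the 95% CI on the AUC would typically come from bootstrap resampling of the test set.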
Introduction: The Canadian Syncope Risk Score (CSRS) is a validated risk tool, developed using best practices of conventional biostatistics, for predicting 30-day serious adverse events (SAE) after an Emergency Department (ED) visit for syncope. We sought to improve on the predictive ability of the CSRS, and to compare it with physician judgement, using modern machine learning (ML) methods from artificial intelligence (AI) research. Methods: We used the prospective multicenter cohort data collected for CSRS derivation and validation at 11 EDs across Canada over an 8-year period. The same 43 candidate variables considered for CSRS development were used to train and validate four classes of ML models to predict 30-day SAE (death, arrhythmias, MI, structural heart disease, pulmonary embolism, hemorrhage) after ED disposition. Physician judgement was modeled using two variables: referral for consultation and hospitalization. We compared the area under the curve (AUC) for the three models. Results: The proportion of patients who suffered a 30-day SAE was 3.6% in the derivation cohort (N = 4030) and 3.4% in the validation cohort (N = 2290). Characteristics of the two cohorts were similar, with no distributional shift. The best-performing ML model, a gradient-boosted tree model, used all 43 variables as predictors, as opposed to the 9 final CSRS predictors. The AUCs for the three models on the validation data were: best ML model 0.91 (95% CI 0.87–0.93), CSRS 0.87 (95% CI 0.83–0.90) and physician judgement 0.79 (95% CI 0.74–0.84). The most important predictors in the ML model were the same as the CSRS predictors. Conclusion: An ML model developed for risk stratification of ED syncope showed slightly better discrimination than the CSRS, though the difference was not statistically significant. Both the ML model and the CSRS were better predictors of poor outcomes after syncope than physician judgement.
ML models can achieve discrimination similar to traditional statistical models and outperform physician judgement, given their ability to use all candidate variables.
Acute change in mental status (ACMS), defined by the Confusion Assessment Method, is used to identify infections in nursing home residents. A medical record review revealed that none of 15,276 residents had an ACMS documented. Using the revised McGeer criteria with a possible ACMS definition, we identified 296 residents and 21 additional infections. The use of a possible ACMS definition should be considered for retrospective nursing home infection surveillance.
Hope is considered an important factor in recovery from severe mental illness. So far it has been studied in patients with depression, anxiety disorders and post-traumatic stress disorder, whereas empirical studies involving people with psychosis are scarce and their results inconclusive.
We aimed to evaluate the relationship between
(i) hope and positive as well as negative psychotic symptoms and
(ii) hope and depression in people with psychosis.
In this cross-sectional study 148 patients with schizophrenia and schizo-affective disorder were interviewed by a psychologist who rated the positive and negative symptoms on the Positive and Negative Syndrome Scale (PANSS). Hope and depression were measured using the self-assessment scales Integrative Hope Scale (IHS) and the Center for Epidemiologic Studies Depression Scale (CES-D).
No statistically significant correlation was found between hope and positive symptoms (r = .071, p = .414). Hope and negative symptoms, however, showed a statistically significant negative correlation (r = -.196, p = .023), as did hope and depression (r = -.255, p = .003). This latter relationship remained significant after controlling for negative symptoms in a partial correlation (r = -.216, p = .013).
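The partial-correlation step reported above (hope versus depression, controlling for negative symptoms) can be sketched by correlating the residuals after regressing the control variable out of both measures. The data below are simulated, not the study's PANSS, IHS or CES-D scores.

```python
# Hypothetical sketch of a partial correlation via residualization.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 148
neg = rng.normal(size=n)                             # negative symptoms (PANSS)
hope = -0.2 * neg + rng.normal(size=n)               # hope (IHS)
dep = 0.3 * neg - 0.25 * hope + rng.normal(size=n)   # depression (CES-D)

def residuals(y, x):
    """Residuals of y after a simple linear regression on x."""
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

r, p = stats.pearsonr(residuals(hope, neg), residuals(dep, neg))
print(f"partial r = {r:.3f}, p = {p:.3f}")
```

Dedicated implementations (e.g., `pingouin.partial_corr`) give the same result and handle multiple covariates.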
While hope appears unrelated to positive symptoms, a significant correlation with negative symptoms and depression was found. These results emphasise the potential importance of hope as a target variable to support recovery in patients with psychosis. However, prospective studies are needed to clarify the causal relationships between hope and symptoms of psychotic disorders.
Reasons for differences in the effect sizes of studies on complex interventions, such as assertive outreach, between the US and the UK are much debated. One suggested explanation is a potential difference in the quality of standard care between the two countries.
We aimed to
(i) empirically establish the comparability of research results on complex interventions for people with severe mental illness (SMI) from the UK and the US, and
(ii) explore developments over time in standard care in both countries by comparing studies that use “treatment as usual” (TAU) as the control intervention.
We conducted a systematic review and meta-analysis of RCTs conducted in the UK or the US
(i) involving people with SMI,
(ii) comparing complex interventions with TAU, and
(iii) using the outcome relapse or readmission to hospital.
The risk ratios for relapse/readmission were very similar, favouring the experimental treatment both in the UK (RR 0.80, 95% CI 0.73–0.88) and in the US (RR 0.87, 95% CI 0.79–0.95). The evolution over time of the effects of experimental interventions relative to TAU shows a slightly different pattern for the two countries.
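For reference, a risk ratio and its 95% CI of the kind reported above are conventionally computed on the log scale. The counts below are hypothetical, chosen only so the result matches an RR of 0.80; they are not the meta-analysis data.

```python
# Sketch of a risk ratio with a 95% CI via the log-RR standard error.
import math

a, n1 = 120, 1000   # events / total, experimental arm (hypothetical)
b, n2 = 150, 1000   # events / total, TAU arm (hypothetical)

rr = (a / n1) / (b / n2)
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A meta-analysis would pool such log-RRs across trials, weighting each by the inverse of its variance.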
The broadly similar total RR for relapse/readmission in both countries confirms the comparability of studies conducted in the UK and the US and suggests no significant overall difference in the quality of standard care. The chronological development of effects, however, reflects developments in TAU over time which differ between the two countries.
Negative symptoms have previously been reported during the psychosis prodrome; however, our understanding of their relationship with treatment-phase negative symptoms remains unclear.
We report the prevalence of psychosis prodrome onset negative symptoms (PONS) and ascertain whether these predict negative symptoms at first presentation for treatment.
Presence of expressivity or experiential negative symptom domains was established at first presentation for treatment using the Scale for Assessment of Negative Symptoms (SANS) in 373 individuals with a first episode psychosis. PONS were established using the Beiser Scale. The relationship between PONS and negative symptoms at first presentation was ascertained, and regression analyses assessed this relationship independent of confounding.
PONS prevalence was 50.3% in the schizophrenia spectrum group (n = 155) and 31.2% in the non-schizophrenia spectrum group (n = 218). In the schizophrenia spectrum group, PONS had a significant unadjusted (χ2 = 10.41, P < 0.001) and adjusted (OR = 2.40, 95% CI = 1.11–5.22, P = 0.027) association with first-presentation experiential symptoms; however, this relationship was not evident in the non-schizophrenia spectrum group. PONS did not predict expressivity symptoms in either diagnostic group.
PONS are common in schizophrenia spectrum diagnoses, and predict experiential symptoms at first presentation. Further prospective research is needed to examine whether negative symptoms commence during the psychosis prodrome.
Obsessive-compulsive disorder (OCD) is a highly disabling condition with frequent early onset. Adult/adolescent OCD has been extensively investigated, but little is known about the prevalence and clinical characteristics of geriatric patients with OCD (G-OCD, ≥ 65 years). The present study aimed to assess the prevalence of G-OCD and associated socio-demographic and clinical correlates in a large international sample.
Data from 416 outpatients participating in the ICOCS network were assessed and categorized into 2 groups, age < 65 vs ≥ 65 years, and then divided on the basis of the median age of the sample (age < 42 vs ≥ 42 years). Socio-demographic and clinical variables were compared between groups (Pearson chi-squared and t tests).
G-OCD compared with younger patients represented a significant minority of the sample (6% vs 94%, P < .001), showing a significantly later age at onset (29.4 ± 15.1 vs 18.7 ± 9.2 years, P < .001), a more frequent adult onset (75% vs 41.1%, P < .001) and a less frequent use of cognitive-behavioural therapy (CBT) (20.8% vs 41.8%, P < .05). Female gender was more represented in G-OCD patients, though not at a statistically significant level (75% vs 56.4%, P = .07). When the whole sample was divided on the basis of the median age, previous results were confirmed for older patients, including a significantly higher presence of women (52.1% vs 63.1%, P < .05).
G-OCD compared with younger patients represented a small minority of the sample and showed later age at onset, more frequent adult onset and lower CBT use. Age at onset may influence course and overall management of OCD, with additional investigation needed.
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that the Frascati criteria are too liberal, resulting in a high false-positive rate. Meyer et al. recommended more conservative revisions to the HAND criteria, including exploring other commonly used methodologies for neurocognitive impairment (NCI) in HIV, such as the global deficit score (GDS). This study compares NCI classifications by the Frascati, Meyer, and GDS methods in relation to neuroimaging markers of brain integrity in HIV.
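As a sketch of the GDS approach mentioned above: under the commonly used convention, each test's demographically corrected T-score maps to a deficit score from 0 (no deficit, T ≥ 40) to 5 (severe, T < 20), and a mean deficit score of 0.5 or more across the battery indicates impairment. The T-scores below are hypothetical.

```python
# Illustrative global deficit score (GDS) computation; T-scores are made up.
def deficit_score(t):
    """Map a demographically corrected T-score to a 0-5 deficit score."""
    if t >= 40: return 0
    if t >= 35: return 1
    if t >= 30: return 2
    if t >= 25: return 3
    if t >= 20: return 4
    return 5

t_scores = [44, 38, 33, 51, 29, 41]   # hypothetical test battery
gds = sum(deficit_score(t) for t in t_scores) / len(t_scores)
impaired = gds >= 0.5                 # conventional GDS impairment cutoff
print(f"GDS = {gds:.2f}, impaired = {impaired}")
```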
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing, brain structural magnetic resonance imaging and magnetic resonance spectroscopy. Participants were classified using Frascati versus Meyer criteria as concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired by Frascati criteria but unimpaired by Meyer criteria. To compare the GDS with the Meyer criteria, the same groupings were formed using GDS criteria in place of Frascati criteria.
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within the normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis-of-variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased the likelihood of SA. SA participants reported greater independence in everyday functioning, employment, and health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH.
SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than by HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
Antibiotic use tracking in nursing homes is necessary for stewardship and regulatory requirements but may be burdensome. We used pharmacy data to evaluate whether once-weekly sampling of antibiotic use can estimate total use; we found no significant difference between estimated and measured antibiotic use.
Significant ethnic and socio-economic disparities exist in infectious disease (ID) rates in New Zealand, so accurate measures of these characteristics are required. This study compared methods of ascribing ethnicity and socio-economic status. Children in the Growing Up in New Zealand longitudinal cohort were ascribed to self-prioritised, total response and single-combined ethnic groups. Socio-economic status was measured using household income, and both census-derived and survey-derived deprivation indices. Rates of ID hospitalisation were compared using linked administrative data. Self-prioritised ethnicity was simplest to use. Total response accounted for mixed ethnicity and allowed overlap between groups. Single-combined ethnicity required aggregation of small groups to maintain power but offered greater detail. Regardless of the method used, Māori and Pacific children, and children in the most socio-economically deprived households, had a greater risk of ID hospitalisation. Risk differences between the self-prioritised and total response methods were not significant for Māori and Pacific children, but single-combined ethnicity revealed a diversity of risk within these groups. Household income was affected by non-random missing data. The census-derived deprivation index offered a high level of completeness, with some risk of multicollinearity and concerns regarding the ecological fallacy. The survey-derived index required extra questions but was acceptable to participants and provided individualised data. Based on these results, the use of single-combined ethnicity and an individualised survey-derived index of deprivation are recommended where sample size and data structure allow.