Background:
The phase 3 PREEMPT trials established the safety and efficacy of 155-195U onabotulinumtoxinA in adults with chronic migraine (CM). This analysis of the PREDICT study (NCT02502123) evaluated the real-world effectiveness and safety of 155U, 156-195U, and 195U onabotulinumtoxinA in CM.
Methods:
Patients received onabotulinumtoxinA approximately every 12 weeks (≤7 treatment cycles [Tx], per the Canadian product monograph). The primary endpoint was mean change from baseline in Migraine-Specific Quality of Life (MSQ) at Tx4. Headache days and physician and patient satisfaction were also evaluated. The analysis stratified the safety population (≥1 onabotulinumtoxinA dose) into 3 groups (155U, 156-195U, 195U) by the dose received on ≥3 of the first 4 Tx.
Results:
184 patients received ≥1 onabotulinumtoxinA dose (155U, n=68; 156-195U, n=65; 195U, n=13 on ≥3 Tx). Headache days decreased over time compared to baseline (Tx4: -7.1 [6.7] 155U; -6.5 [6.7] 156-195U; -11.2 [6.4] 195U). Physicians rated most patients as improved, and the majority of patients were satisfied at the final visit (80.8% 155U; 83.6% 156-195U; 90% 195U). Treatment-emergent adverse events (TEAEs) were reported in 18/68 (26.5%) patients in the 155U group, 41/65 (63.1%) in the 156-195U group, and 10/13 (76.9%) in the 195U group; treatment-related TEAEs occurred in 9 (13.2%), 10 (15.4%), and 3 (23.1%), respectively; serious TEAEs occurred in 0, 3 (4.6%), and 1 (7.7%), none treatment-related.
Conclusions:
Long-term treatment with 155U, 156-195U, and 195U onabotulinumtoxinA in PREDICT was a safe and effective CM treatment. No new safety signals were identified.
To evaluate the utility of autologous bone-flap swab cultures performed at the time of cranioplasty in predicting postcranioplasty surgical site infection (SSI).
Design:
Retrospective cohort study.
Participants:
Patients undergoing craniectomy (with bone-flap storage in tissue bank), followed by delayed autologous bone-flap replacement cranioplasty between January 1, 2010, and November 30, 2020.
Setting:
Tertiary-care academic hospital.
Methods:
We framed the bone-flap swab culture taken at the time of cranioplasty as a diagnostic test for predicting postcranioplasty SSI. We calculated sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios.
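As an illustration of this diagnostic-test framing, the sketch below computes the listed metrics from a generic 2×2 table of culture result versus subsequent SSI; the counts used are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch: diagnostic-test metrics from a 2x2 table of
# bone-flap culture result (test) vs. postcranioplasty SSI (outcome).
def diagnostic_metrics(tp, fp, fn, tn):
    """Return the metrics named in the Methods from 2x2 table counts."""
    sensitivity = tp / (tp + fn)               # P(culture positive | SSI)
    specificity = tn / (tn + fp)               # P(culture negative | no SSI)
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

# Made-up counts for illustration only (not the cohort's actual 2x2 table).
print(diagnostic_metrics(tp=1, fp=160, fn=15, tn=106))
```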
Results:
Among 282 unique eligible cases, 16 (5.6%) developed SSI after cranioplasty. A high percentage of bone-flap swab cultures were positive at the time of craniectomy (66.7%) and cranioplasty (59.5%). Most organisms from bone-flap swab cultures were Cutibacterium acnes or coagulase-negative staphylococci (76%–85%), and most SSI pathogens were methicillin-susceptible Staphylococcus aureus (38%). Bone-flap swab culture had poor sensitivity (0.07; 95% CI, 0.01–0.31), specificity (0.4; 95% CI, 0.34–0.45), and positive likelihood ratio (0.12) for predicting postcranioplasty SSI.
Conclusion:
Overall, autologous bone-flap swab cultures performed at the time of cranioplasty have poor utility in predicting postcranioplasty SSI. Eliminating this low-value practice would significantly reduce laboratory workload and the associated healthcare costs.
To evaluate different prospective audit-and-feedback models on antimicrobial prescribing at a rehabilitation hospital.
Design:
Retrospective interrupted time series (ITS) and qualitative methods.
Setting:
A 178-bed rehabilitation hospital within an academic health sciences center.
Methods:
ITS analysis was used to analyze monthly days of therapy (DOT) per 1,000 patient-days (PD) and monthly urine cultures ordered per 1,000 PD. We compared 2 sequential intervention periods to the baseline: (1) a period when a dedicated antimicrobial stewardship (AMS) pharmacist performed prospective audit and feedback and provided urine-culture education, followed by (2) a period when ward pharmacists performed audit and feedback. We also conducted an electronic survey of physicians and semistructured interviews with pharmacists.
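A minimal sketch of the segmented (interrupted time series) Poisson model implied by this design is shown below. The data, column names, and period boundaries are synthetic placeholders, and the published analysis may have parameterized level and slope terms differently.

```python
# Sketch of a segmented Poisson regression for monthly DOT with patient-days
# as the exposure and indicator terms for the two intervention periods.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(36)
df = pd.DataFrame({
    "month": months,
    "patient_days": rng.integers(4500, 5500, months.size),
    "ams_period": ((months >= 12) & (months < 24)).astype(int),   # AMS pharmacist period
    "ward_period": (months >= 24).astype(int),                    # ward pharmacist period
})
# Synthetic DOT counts with a lower rate during the AMS-pharmacist period.
rate = 0.35 * np.where(df["ams_period"] == 1, 0.76, 1.0)
df["dot"] = rng.poisson(rate * df["patient_days"])

fit = smf.glm("dot ~ month + ams_period + ward_period", data=df,
              family=sm.families.Poisson(), exposure=df["patient_days"]).fit()
print(np.exp(fit.params))   # exponentiated coefficients = incidence rate ratios (IRR)
```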
Results:
Audit and feedback conducted by an AMS pharmacist resulted in a 24.3% relative reduction in total DOT per 1,000 PD (incidence rate ratio [IRR], 0.76; 95% confidence interval [CI], 0.58–0.99; P = .04), whereas we detected no difference between ward pharmacist audit and feedback and the baseline (IRR, 1.20; 95% CI, 0.53–2.70; P = .65). We detected no statistically significant change in monthly urine-culture orders between the AMS pharmacist period and the baseline (level coefficient, 0.81; 95% CI, 0.65–1.01; P = .07). Compared to baseline, the ward pharmacist period showed a statistically significant increase in urine-culture ordering over time (slope coefficient, 1.04; 95% CI, 1.01–1.08; P = .02). The barrier most identified by pharmacists was insufficient time.
Conclusions:
Audit and feedback conducted by an AMS pharmacist in a rehabilitation hospital was associated with decreased antimicrobial use.
Major depressive disorder (MDD) is the leading cause of disability worldwide. Patients with MDD have high rates of comorbidity with mental and physical conditions, one of which is chronic pain. Chronic pain conditions themselves are also associated with significant disability, and the large number of patients with MDD who have chronic pain drives high levels of disability and compounds healthcare burden. The management of depression in patients who also have chronic pain can be particularly challenging due to underlying mechanisms that are common to both conditions, and because many patients with these conditions are already taking multiple medications. For these reasons, healthcare providers may be reluctant to treat such patients. The Canadian Network for Mood and Anxiety Treatments (CANMAT) guidelines provide evidence-based recommendations for the management of MDD and comorbid psychiatric and medical conditions such as anxiety, substance use disorder, and cardiovascular disease; however, comorbid chronic pain is not addressed. In this article, we provide an overview of the pathophysiological and clinical overlap between depression and chronic pain and review evidence-based pharmacological recommendations in current treatment guidelines for MDD and for chronic pain. Based on clinical experience with MDD patients with comorbid pain, we recommend rapidly and aggressively treating depression according to CANMAT treatment guidelines, using antidepressant medications with analgesic properties, while addressing pain with first-line pharmacotherapy as treatment for depression is optimized. We review options for treating pain symptoms that remain after response to antidepressant treatment is achieved.
Multiple treatments are effective for major depressive disorder (MDD), but the outcomes of each treatment vary broadly among individuals. Accurate prediction of outcomes is needed to help select a treatment that is likely to work for a given person. We aim to examine the performance of machine learning methods in delivering replicable predictions of treatment outcomes.
Methods
Of 7732 non-duplicate records identified through a literature search, we retained 59 eligible reports and extracted data on sample, treatment, predictors, machine learning method, and treatment outcome prediction. A minimum sample size of 100 and an adequate validation method were used to identify adequate-quality studies. The effects of study features on prediction accuracy were tested with mixed-effects models. Fifty-four of the studies provided accuracy estimates or other estimates that allowed calculation of the balanced accuracy of treatment outcome prediction.
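For reference, balanced accuracy is the mean of sensitivity and specificity, which corrects for the class imbalance common in treatment-outcome data. The sketch below shows the calculation on a hypothetical confusion matrix, not on any of the included studies.

```python
# Hypothetical sketch: balanced accuracy from a 2x2 confusion matrix of
# predicted vs. observed treatment outcome (e.g., response vs. non-response).
def balanced_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # correctly identified responders
    specificity = tn / (tn + fp)   # correctly identified non-responders
    return (sensitivity + specificity) / 2

print(balanced_accuracy(tp=40, fp=20, fn=25, tn=60))  # ~0.68 on made-up counts
```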
Results
Eight adequate-quality studies reported a mean accuracy of 0.63 [95% confidence interval (CI) 0.56–0.71], which was significantly lower than a mean accuracy of 0.75 (95% CI 0.72–0.78) in the other 46 studies. Among the adequate-quality studies, accuracies were higher when predicting treatment resistance (0.69) and lower when predicting remission (0.60) or response (0.56). The choice of machine learning method, feature selection, and the ratio of features to individuals were not associated with reported accuracy.
Conclusions
The negative relationship between study quality and prediction accuracy, combined with a lack of independent replication, invites caution when evaluating the potential of machine learning applications for personalizing the treatment of depression.
In this essay, we honour the memory of Oliver Williamson by reflecting on Chiles and McMackin's 1996 Academy of Management Review article ‘Integrating variable risk preferences, trust, and transaction cost economics’. The article, which built on Williamson's work in transaction cost economics (TCE), went on to attract attention not only from the authors’ home discipline of management and organisation studies, but also from other business disciplines, the professions and the social sciences. After revisiting the article's origins and core arguments, we turn to selectively (re)view TCE's development since 1996 through the lens of this article, focusing on trust, risk and subjective costs. We cover conceptual and empirical developments in each of these areas and reflect on how our review contributes to previous debates concerning trade-offs implicit in relaxing TCE's behavioural assumptions. We conclude by reflecting on key points of learning from our review and possible implications for future research.
We previously reported that bipolar disorder (BD) patients with clinically significant weight gain (CSWG; ⩾7% of baseline weight) in the 12 months after their first manic episode experienced greater limbic brain volume loss than patients without CSWG. It is unknown whether CSWG is also a risk factor for progressive neurochemical abnormalities.
Methods
We investigated whether 12-month CSWG predicted greater 12-month decreases in hippocampal N-acetylaspartate (NAA) and greater increases in glutamate + glutamine (Glx) following a first manic episode. In BD patients (n = 58) and healthy comparator subjects (HS; n = 34), we measured baseline and 12-month hippocampal NAA and Glx using bilateral 3-Tesla single-voxel proton magnetic resonance spectroscopy. We used general linear models for repeated measures to investigate whether CSWG predicted neurochemical changes.
Results
Thirty-three percent of patients and 18% of HS experienced CSWG. After correcting for multiple comparisons, CSWG in patients predicted a greater decrease in left hippocampal NAA (effect size = −0.52, p = 0.005). CSWG also predicted a greater decrease in left hippocampal NAA in HS with a similar effect size (−0.53). A model including patients and HS found an effect of CSWG on Δleft NAA (p = 0.007), but no diagnosis effect and no diagnosis × CSWG interaction, confirming that CSWG had similar effects in patients and HS.
Conclusion
CSWG is a risk factor for decreasing hippocampal NAA in BD patients and HS. These results suggest that the well-known finding of reduced NAA in BD may result from higher body mass index in patients rather than BD diagnosis.
Brief measures of the subjective experience of stress with good predictive capability are important in a range of community mental health and research settings. Large-scale implementation of such a measure for screening could facilitate early risk detection and intervention opportunities. Few such measures, however, have been developed and validated in epidemiological and longitudinal community samples. We designed a new single-item measure of the subjective level of stress (SLS-1) and tested its validity and its ability to predict long-term mental health outcomes over up to 12 months in two separate studies.
Methods
We first examined the content and face validity of the SLS-1 with a panel consisting of mental health experts and laypersons. Two studies were conducted to examine its validity and predictive utility. In study 1, we tested the convergent and divergent validity as well as incremental validity of the SLS-1 in a large epidemiological sample of young people in Hong Kong (n = 1445). In study 2, in a consecutively recruited longitudinal community sample of young people (n = 258), we first performed the same procedures as in study 1 to ensure replicability of the findings. We then examined in this longitudinal sample the utility of the SLS-1 in predicting long-term depressive, anxiety and stress outcomes assessed at 3 months and 6 months (n = 182) and at 12 months (n = 84).
Results
The SLS-1 demonstrated good content and face validity. Findings from the two studies showed that the SLS-1 was moderately to strongly correlated with a range of mental health outcomes, including depressive, anxiety, stress, and distress symptoms. We also demonstrated its ability to explain variance in symptoms beyond other known personal and psychological factors. Using the longitudinal sample in study 2, we further showed the significant predictive capability of the SLS-1 for long-term symptom outcomes up to 12 months, even when accounting for demographic characteristics.
Conclusions
The findings altogether support the validity and predictive utility of the SLS-1 as a brief measure of stress with strong indications of both concurrent and long-term mental health outcomes. Given the value of brief measures of mental health risks at a population level, the SLS-1 may have potential for use as an early screening tool to inform early preventative intervention work.
Many studies document cognitive decline following specific types of acute illness hospitalizations (AIH), such as surgery, critical care, or those complicated by delirium. However, cognitive decline may be a complication following all types of AIH. This systematic review summarizes longitudinal observational studies documenting cognitive changes following AIH in the majority admitted population and conducts a meta-analysis (MA) to assess the quantitative effect of AIH on post-hospitalization cognitive decline (PHCD).
Methods:
We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Selection criteria were defined to identify studies of older adults exposed to AIH with cognitive measures. In total, 6566 titles were screened and 46 reports were reviewed qualitatively, of which seven contributed data to the MA. Risk of bias was assessed using the Newcastle–Ottawa Scale.
Results:
The qualitative review suggested increased cognitive decline following AIH, but several reports were particularly vulnerable to bias. Domain-specific outcomes following AIH included declines in memory and processing speed. Increasing age and the severity of illness were the most consistent risk factors for PHCD. PHCD was supported by MA of seven eligible studies with 41,453 participants (Cohen’s d = −0.25, 95% CI [−0.49, −0.02], I² = 35%).
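To make the pooled estimate concrete, the sketch below shows a standard DerSimonian–Laird random-effects pooling of Cohen's d with an I² heterogeneity estimate. The effect sizes and variances are hypothetical, not those of the seven included studies, and the review's exact model may differ.

```python
# Hypothetical sketch: DerSimonian-Laird random-effects meta-analysis of
# Cohen's d with the I^2 heterogeneity statistic.
import numpy as np

d = np.array([-0.10, -0.20, -0.35, -0.15, -0.30, -0.25, -0.40])  # made-up effect sizes
v = np.array([0.010, 0.020, 0.015, 0.030, 0.025, 0.012, 0.018])  # made-up variances

w = 1 / v                                             # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)                    # Cochran's Q
dof = d.size - 1
tau2 = max(0.0, (Q - dof) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - dof) / Q) * 100                    # % variance due to heterogeneity

w_re = 1 / (v + tau2)                                 # random-effects weights
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96*se:.2f} to {d_pooled + 1.96*se:.2f}), I2 = {I2:.0f}%")
```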
Conclusions:
There is preliminary evidence that AIH exposure accelerates or triggers cognitive decline in the elderly patient. PHCD reported in specific contexts could be subsets of a larger phenomenon and caused by overlapping mechanisms. Future research must clarify the trajectory, clinical significance, and etiology of PHCD: a priority in the face of an aging population with increasing rates of both cognitive impairment and hospitalization.
To compare long-term survival of Parkinson’s disease (PD) patients with deep brain stimulation (DBS) to matched controls, and examine whether DBS was associated with differences in injurious falls, long-term care, and home care.
Methods:
Using administrative health data (Ontario, Canada), we examined DBS outcomes within a cohort of individuals diagnosed with PD between 1997 and 2012. Patients receiving DBS were matched with non-DBS controls by age, sex, PD diagnosis date, time with PD, and a propensity score. Survival between groups was compared using the log-rank test and marginal Cox proportional hazards regression. Cumulative incidence function curves and marginal subdistribution hazard models were used to assess effects of DBS on falls, long-term care admission, and home care use, with death as a competing risk.
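The sketch below illustrates the core of this survival comparison using the lifelines package: a log-rank test and a Cox proportional-hazards model on a toy matched cohort. Column names and data are hypothetical; the study's marginal model additionally accounted for matched sets (e.g., via a cluster-robust variance) and used competing-risks methods for the non-fatal outcomes, which are not shown here.

```python
# Hypothetical sketch of the survival comparison: log-rank test and Cox model.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Toy matched cohort: follow-up time (years), death indicator, DBS exposure.
df = pd.DataFrame({
    "years": [2.1, 5.4, 3.3, 7.8, 1.2, 6.5, 4.0, 8.1, 2.9, 5.0],
    "died":  [1,   0,   1,   0,   1,   0,   1,   0,   1,   0],
    "dbs":   [1,   1,   0,   0,   1,   0,   0,   1,   1,   0],
})

dbs, ctl = df[df["dbs"] == 1], df[df["dbs"] == 0]
lr = logrank_test(dbs["years"], ctl["years"],
                  event_observed_A=dbs["died"], event_observed_B=ctl["died"])
print(f"log-rank p = {lr.p_value:.2f}")

# Hazard ratio for DBS vs. control; a marginal model over matched sets would
# additionally pass cluster_col for a robust variance estimate.
cph = CoxPHFitter().fit(df, duration_col="years", event_col="died")
cph.print_summary()
```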
Results:
There were 260 DBS recipients matched with 551 controls. Patients undergoing DBS did not experience a significant survival advantage compared to controls (log-rank test p = 0.50; HR: 0.89, 95% CI: 0.65–1.22). Among patients <65 years of age, DBS recipients had a significantly reduced risk of death (HR: 0.49, 95% CI: 0.28–0.84). Patients receiving DBS were more likely than controls to receive care for falls (HR: 1.56, 95% CI: 1.19–2.05) and home care (HR: 1.59, 95% CI: 1.32–1.90), while long-term care admission was similar between groups.
Conclusions:
Receiving DBS may increase survival for younger PD patients. Future studies should examine whether survival benefits may be attributed to effects of DBS on PD or to the absence of comorbidities that influence mortality.
An estimated 1.4 million people per year attend Emergency Departments in the UK following head trauma.1 Approximately 10% of these patients have a moderate or severe Traumatic Brain Injury (TBI) with a Glasgow Coma Score (GCS) <12 or <9 respectively. TBI results in more than 3600 Intensive Care admissions per year2 and remains the leading cause of mortality in patients aged under 25 with 6–10 brain injury deaths per 100 000 of population per annum.1
To examine whether sociodemographic characteristics and health care utilization are associated with receiving deep brain stimulation (DBS) surgery for Parkinson’s disease (PD) in Ontario, Canada.
Methods:
Using health administrative data, we identified a cohort of individuals aged 40 years or older diagnosed with incident PD between 1995 and 2009. A case-control design was used to examine whether select factors were associated with DBS for PD. Patients were classified as cases if they underwent DBS surgery at any point from 1 year after cohort entry until December 31, 2016. Conditional logistic regression modeling was used to estimate the adjusted odds of DBS surgery for sociodemographic and health care utilization indicators.
Results:
A total of 46,237 individuals with PD were identified, with 543 (1.2%) receiving DBS surgery. Individuals residing in northern Ontario were more likely than southern patients to receive DBS surgery [adjusted odds ratio (AOR) = 2.23, 95% confidence interval (CI) = 1.15–4.34]; however, regional variations were not observed after accounting for medication use among older adults (AOR = 1.04, 95% CI = 0.26–4.21). Patients living in neighborhoods with the highest concentration of visible minorities were less likely to receive DBS surgery compared to patients living in predominantly white neighborhoods (AOR = 0.27, 95% CI = 0.16–0.46). Regular neurologist care and use of multiple PD medications were positively associated with DBS surgery.
Conclusions:
Variations in use of DBS may reflect differences in access to care, specialist referral pathways, health-seeking behavior, or need for DBS. Future studies are needed to understand drivers of potential disparities in DBS use.
The existing literature on chronic pain points to the effects of anxiety sensitivity, pain hypervigilance, and pain catastrophizing on pain-related fear; however, the nature of these relationships remains unclear. The three dispositional factors may affect one another in the prediction of pain adjustment outcomes. The addition of one disposition may increase the association between another disposition and outcomes, a phenomenon known in statistical terms as a suppressor effect.
Objective
This study examined the possible statistical suppressor effects of anxiety sensitivity, pain hypervigilance and pain catastrophizing in predicting pain-related fear and adjustment outcomes (disability and depression).
Methods
Chinese patients with chronic musculoskeletal pain (n = 401) completed a battery of assessments on pain intensity, depression, anxiety sensitivity, pain vigilance, pain catastrophizing, and pain-related fear. Multiple regression analyses assessed the mediating/moderating role of pain hypervigilance. Structural equation modeling (SEM) was used to evaluate suppression effects.
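A minimal sketch of the Sobel test used in such mediation models is given below. The path coefficients and standard errors are hypothetical placeholders, with path a running from the disposition (e.g., anxiety sensitivity) to the mediator (pain hypervigilance) and path b from the mediator to pain-related fear, adjusted for the disposition.

```python
# Hypothetical sketch: Sobel z test for an indirect (mediated) effect a*b.
import math

def sobel_z(a, se_a, b, se_b):
    """a: predictor -> mediator path; b: mediator -> outcome path (adjusted)."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Made-up coefficients for illustration only.
z = sobel_z(a=0.45, se_a=0.06, b=0.38, se_b=0.05)
print(f"Sobel z = {z:.2f}")  # compared against the standard normal distribution
```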
Results
Our results showed that pain hypervigilance mediated the effects of anxiety sensitivity (Model 1: Sobel z = 4.86) and pain catastrophizing (Model 3: Sobel z = 5.08) on pain-related fear. A net suppression effect of pain catastrophizing on anxiety sensitivity was found in SEM when both anxiety sensitivity and pain catastrophizing were included in the same full model to predict disability (Model 9: CFI = 0.95) and depression (Model 10: CFI = 0.93) (all P < 0.001) (see Figs. 3 and 4, Figs. 1 and 2).
Conclusions
Our findings showed that pain hypervigilance mediated the relationship of two dispositional factors, pain catastrophic cognition and anxiety sensitivity, with pain-related fear. The net suppression effects of pain catastrophizing suggest that anxiety sensitivity enhanced the effect of pain catastrophic cognition on pain hypervigilance.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
A body of evidence has accrued supporting the Fear-Avoidance Model (FAM) of chronic pain, which postulates that pain-related fear mediates the relationships between pain catastrophizing and pain anxiety in affecting pain-related outcomes. Yet relatively little data address the extent to which the FAM can be extended to understand chronic pain in the Chinese population and its impact on quality of life (QoL).
Objective
This study explored the relationships between FAM components and their effects on QoL in a Chinese sample.
Methods
A total of 401 Chinese patients with chronic musculoskeletal pain completed measures of three core FAM components (pain catastrophizing, pain-related fear, and pain anxiety) and QoL. Cross-sectional structural equation modeling (SEM) assessed the goodness of fit of the FAM for two QoL outcomes, Physical (Model 1) and Mental (Model 2). In both models, pain catastrophizing was hypothesized to underpin pain-related fear, thereby influencing pain anxiety and subsequently QoL outcomes.
Results
Results of SEM evidenced adequate data-model fit (CFI ≥ 0.90) for the two models tested (Model 1: CFI = 0.93; Model 2: CFI = 0.94). Specifically, pain catastrophizing significantly predicted pain-related fear (Model 1: std β = 0.90; Model 2: std β = 0.91), which in turn significantly predicted pain anxiety (Model 1: std β = 0.92; Model 2: std β = 0.929) and QoL outcomes in a negative direction (Model 1: std β = −0.391; Model 2: std β = −0.651) (all P < 0.001) (Table 1, Fig. 1).
Conclusion
Our data substantiated the existing FAM literature and offered evidence for the cross-cultural validity of the FAM in the Chinese population with chronic pain.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
To assess whether a self-reported β-lactam allergy is associated with an increased risk of surgical site infection (SSI) across a broad range of procedures and to determine whether this association is mediated by the receipt of an alternate antibiotic to cefazolin.
Design:
Retrospective cohort study.
Participants:
Surgical procedures sampled by an institutional National Surgical Quality Improvement Program database over an 18-month period (January 2017 to June 2018) from 7 surgical specialties.
Setting:
Tertiary-care academic hospital.
Results:
Of the 3,589 surgical procedures included in the study, 369 (10.3%) were performed in patients with a reported β-lactam allergy. Those with a reported β-lactam allergy were significantly less likely to receive cefazolin (38.8% vs 95.5%) or metronidazole (20.3% vs 26.1%) and were more likely to receive clindamycin (52.0% vs 0.2%), gentamicin (3.5% vs 0%), or vancomycin (2.2% vs 0.1%) than those without allergy. An SSI occurred in 154 of 3,220 procedures (4.8%) in patients without reported allergy and 27 of 369 (7.3%) with reported allergy. In the multivariable regression model, a reported β-lactam allergy was associated with a statistically significant increase in SSI risk (adjusted odds ratio [aOR], 1.61; 95% confidence interval [CI], 1.04–2.51; P = .03). This effect was completely mediated by receipt of an alternate antibiotic to cefazolin (indirect effect aOR, 1.68; 95% CI, 1.17–2.34; P = .005).
Conclusions:
Self-reported β-lactam allergy was associated with an increased SSI risk mediated through receipt of alternate antibiotic prophylaxis. Safely increasing use of cefazolin prophylaxis in patients with reported β-lactam allergy can potentially lower the risk of SSIs.
Patients with major depressive disorder (MDD) display cognitive deficits in acutely depressed and remitted states. Childhood maltreatment is associated with cognitive dysfunction in adults, but its impact on cognition and treatment-related cognitive outcomes in adult MDD has received little consideration. We investigated whether, compared to patients without maltreatment and healthy participants, adult MDD patients with childhood maltreatment display greater cognitive deficits in acute depression, smaller treatment-associated cognitive improvements, and lower cognitive performance in remission.
Methods
Healthy and acutely depressed MDD participants were enrolled in a multi-center MDD predictive marker discovery trial. MDD participants received 16 weeks of standardized antidepressant treatment. Maltreatment and cognition were assessed with the Childhood Experience of Care and Abuse interview and the CNS Vital Signs battery, respectively. Cognitive scores and change from baseline to week 16 were compared amongst MDD participants with (DM+, n = 93) and without maltreatment (DM−, n = 90), and healthy participants with (HM+, n = 22) and without maltreatment (HM−, n = 80). Separate analyses in MDD participants who remitted were conducted.
Results
DM+ had lower baseline global cognition, processing speed, and memory v. HM−, with no significant baseline differences amongst DM−, HM+, and HM− groups. There were no significant between-group differences in cognitive change over 16 weeks. Post-treatment remitted DM+, but not remitted DM−, scored significantly lower than HM− in working memory and processing speed.
Conclusions
Childhood maltreatment was associated with cognitive deficits in depressed and remitted adults with MDD. Maltreatment may be a risk factor for more severe and persistent cognitive deficits in adult MDD.
The study of predeath grief is hampered by measures that are often lengthy and not clearly differentiated from other caregiving outcomes, most notably burden. We aimed to validate a new 11-item Caregiver Grief Questionnaire (CGQ) assessing two dimensions of predeath grief, namely relational deprivation and emotional pain.
Design:
Cross-sectional survey.
Setting:
Community and psychogeriatric clinics.
Participants:
173 caregivers of relatives with Alzheimer’s disease (AD) of different degrees of severity (63 mild, 60 moderate, and 50 severe).
Measurements:
Besides the CGQ, measures of caregiver burden and depressive symptoms, and care-recipients’ neuropsychiatric symptoms and functional impairment were assessed.
Results:
Confirmatory factor analysis supported the hypothesized 2-factor model over the 1-factor model, and both subscales were only moderately correlated with burden. Two-week test-retest reliabilities were excellent. Caregivers of relatives with mild AD reported less grief than those caring for relatives with more severe disease. Z tests revealed significantly different correlational patterns for the two dimensions, with emotional pain more related to global burden and depressive symptoms, and relational deprivation more related to care-recipients’ functional impairment. Both dimensions were mildly correlated with the neuropsychiatric symptoms (especially disruptive behaviors and psychotic symptoms) of the care-recipient.
Conclusions:
Results supported the reliability and validity of the two-dimensional measure of predeath grief. As a brief measure, it can be readily added to research instruments to facilitate study of this important phenomenon along with other caregiving outcomes.
Global inequity in access to and availability of essential mental health services is well recognized. The mental health treatment gap is approximately 50% in all countries, with up to 90% of people in the lowest-income countries lacking access to required mental health services. Increased investment in global mental health (GMH) has spurred innovation in mental health service delivery in low- and middle-income countries (LMICs). Situational analyses in areas where mental health services and systems are poorly developed and resourced are essential when planning for research and implementation; however, little guidance is available to inform methodological approaches to conducting these types of studies. This scoping review provides an analysis of methodological approaches to situational analysis in GMH, including an assessment of the extent to which situational analyses incorporate equity in study designs. It is intended as a resource that identifies current gaps and areas for future development in GMH. Formative research, including situational analysis, is an essential first step in conducting robust implementation research, an area of study in GMH that will help to promote improved availability of, access to, and reach of mental health services for people living with mental illness in LMICs. While strong leadership in this field exists, there remain significant opportunities for enhanced research representing different LMICs and regions.
Loneliness and social networks have been extensively studied in relation to cognitive impairments, but how they interact with each other in relation to cognition is still unclear. This study aimed at exploring the interaction of loneliness and various types of social networks in relation to cognition in older adults.
Design:
A cross-sectional study.
Setting:
Face-to-face interviews.
Participants:
497 older adults with normal global cognition were interviewed.
Measurements:
Loneliness was assessed with the Chinese 6-item De Jong Gierveld Loneliness Scale. The confiding network was defined as the people with whom one could share inner feelings, whereas the non-confiding network was computed by subtracting the confiding network from the total network size. Cognitive performance was expressed as a global composite z-score of the Cantonese version of the Mini-Mental State Examination (CMMSE), the categorical verbal fluency test (CVFT), and delayed recall. Linear regression was used to test the main effects of loneliness and the size of the various networks, and their interaction, on cognitive performance, adjusting for sociodemographic, physical, and psychological confounders.
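A minimal sketch of this moderation (interaction) model is shown below using statsmodels. The data are synthetic and the covariate set is reduced to age for brevity, so it only illustrates how the loneliness × non-confiding-network product term enters the regression.

```python
# Hypothetical sketch: linear regression of the cognitive composite z-score
# on loneliness, non-confiding network size, and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 497
df = pd.DataFrame({
    "loneliness": rng.integers(0, 7, n),        # 6-item loneliness score
    "nonconfiding": rng.integers(0, 20, n),     # non-confiding network size
    "age": rng.integers(60, 90, n),
})
# Synthetic outcome with a small built-in interaction effect.
df["cognition_z"] = (0.002 * df["loneliness"] * df["nonconfiding"]
                     - 0.05 * df["loneliness"]
                     - 0.01 * (df["age"] - 75)
                     + rng.normal(0, 1, n))

# 'loneliness:nonconfiding' is the interaction term of interest.
fit = smf.ols("cognition_z ~ loneliness * nonconfiding + age", data=df).fit()
print(fit.params)
```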
Results:
A significant interaction was found between loneliness and the non-confiding network on cognitive performance (B = .002, β = .092, t = 2.099, p = .036). Further analysis showed a significant interaction between loneliness and the number of family members in the non-confiding network on cognition (B = .021, β = .119, t = 2.775, p = .006).
Conclusions:
The results suggest that a non-confiding relationship with family members might put lonely older adults at risk of cognitive impairment. Our study may have implications for designing psychosocial interventions for those who are vulnerable to loneliness, as an early prevention of neurocognitive impairment.