To examine the costs and cost-effectiveness of mirtazapine compared with placebo over a 12-week follow-up.
Economic evaluation in a double-blind randomized controlled trial of mirtazapine vs. placebo.
Community settings and care homes in 26 UK centers.
People with probable or possible Alzheimer’s disease and agitation.
The primary outcome was the incremental cost of participants’ health and social care per 6-point difference in Cohen-Mansfield Agitation Inventory (CMAI) score at 12 weeks. Secondary cost-utility analyses examined participants’ and unpaid carers’ gains in quality-adjusted life years (derived from EQ-5D-5L, DEMQOL-Proxy-U, and DEMQOL-U) from the health and social care and societal perspectives.
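The cost-utility analyses rest on quality-adjusted life years computed from serial utility scores. A common approach, and only an illustration here since the trial's exact computation is not described in this abstract, is the area under the utility curve by the trapezoidal rule:

```python
def qalys(utilities, weeks):
    """Quality-adjusted life years as the area under the utility curve,
    trapezoidal rule, with time converted from weeks to years."""
    total = 0.0
    for i in range(len(utilities) - 1):
        dt_years = (weeks[i + 1] - weeks[i]) / 52.0
        total += (utilities[i] + utilities[i + 1]) / 2.0 * dt_years
    return total

# Hypothetical utility scores at baseline, 6 and 12 weeks (not trial data):
q = qalys([0.70, 0.72, 0.74], [0, 6, 12])
```

A full year at perfect health (utility 1.0) yields exactly one QALY under this rule, which is a useful sanity check.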
One hundred and two participants were allocated to each group; 81 mirtazapine and 90 placebo participants completed a 12-week assessment (87 and 95, respectively, completed a 6-week assessment). Mirtazapine and placebo groups did not differ on mean CMAI scores or health and social care costs over the study period, before or after adjustment for center and living arrangement (independent living/care home). On the primary outcome, neither mirtazapine nor placebo could be considered a cost-effective strategy with a high level of confidence. Groups did not differ in terms of participant self- or proxy-rated or carer self-rated quality of life scores, health and social care or societal costs, before or after adjustment.
On cost-effectiveness grounds, the use of mirtazapine cannot be recommended for agitated behaviors in people living with dementia. Effective and cost-effective medications for agitation in dementia remain to be identified in cases where non-pharmacological strategies for managing agitation have been unsuccessful.
The coronavirus disease 2019 (COVID-19) pandemic and ensuing restrictions have negatively affected the mental health and well-being of the general population, and there is increasing evidence suggesting that lockdowns have led to a disruption of health services. In March 2020, South Africa introduced a lockdown in response to the COVID-19 pandemic, entailing the suspension of all non-essential activities and a complete ban of tobacco and alcohol sales. We studied the effect of the lockdown on mental health care utilisation rates in private-sector care in South Africa.
We conducted an interrupted time-series analysis using insurance claims from 1 January 2017 to 1 June 2020 of beneficiaries 18 years or older from a large private sector medical insurance scheme. We calculated weekly outpatient consultation and hospital admission rates for organic mental disorders, substance use disorders, serious mental disorders, depression, anxiety, other mental disorders, any mental disorder and alcohol withdrawal syndrome. We calculated adjusted odds ratios (OR) for the effect of the lockdown on weekly outpatient consultation and hospital admission rates and the weekly change in rates during the lockdown until 1 June 2020.
In total, 710 367 persons were followed up for a median of 153 weeks. Hospital admission rates (OR 0.38; 95% confidence interval (CI) 0.33–0.44) and outpatient consultation rates (OR 0.74; 95% CI 0.63–0.87) for any mental disorder decreased substantially after the introduction of the lockdown and did not recover to pre-lockdown levels by 1 June 2020. Health care utilisation rates for alcohol withdrawal syndrome doubled after the introduction of the lockdown, but the statistical uncertainty around the estimates was large (OR 2.24; 95% CI 0.69–7.24).
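The adjusted odds ratios above come from interrupted time-series regression models, but the underlying odds-ratio arithmetic can be illustrated with a minimal sketch; the 2×2 counts in the usage line are hypothetical, not study data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald confidence interval on the
    log scale. Cells: a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi

# Hypothetical counts only:
oratio, lo, hi = odds_ratio_ci(10, 90, 20, 80)
```

A CI that spans 1, as for the alcohol withdrawal estimate above, is exactly why the authors avoid strong conclusions there.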
Mental health care utilisation rates for inpatient and outpatient services decreased substantially after the introduction of the lockdown. Hospital admissions and outpatient consultations for alcohol withdrawal syndrome increased after the introduction of the lockdown, but statistical uncertainty precludes strong conclusions about a potential unintended effect of the alcohol sales ban. Governments should integrate strategies for ensuring access and continuity of essential mental health services during lockdowns in pandemic preparedness planning.
The aetiology and pathophysiology of alcohol dependence are complex, derived from genetics, parenting, peer and societal norms and rules, pharmacology and mental health comorbidity. This chapter explores some of the contributing factors that make alcohol dependence a more complex phenomenon than one merely of personal choice. It explores the various aspects of family history that contribute to the heritability of alcohol use disorders, and summarises relevant social and psychological factors. It then provides a nuanced understanding of the pharmacological rationale underpinning withdrawal and relapse prevention treatment, including an understanding of the role of GABA, glutamate and opioid systems. The chapter concludes by highlighting the ways in which patients with psychiatric disorders may be at greater risk of alcohol use disorders. In doing so, consideration is given to trauma-informed principles of care in light of the high prevalence of childhood trauma in this group.
We have developed the bispectral electroencephalography (BSEEG) method for detection of delirium and prediction of poor outcomes.
To improve the BSEEG method by introducing a new EEG device.
In a prospective cohort study, EEG data were obtained and BSEEG scores were calculated. BSEEG scores were filtered on the basis of standard deviation (s.d.) values to exclude signals with high noise. Both non-filtered and s.d.-filtered BSEEG scores were analysed. BSEEG scores were compared with the results of three delirium screening scales: the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), the Delirium Rating Scale-Revised-98 (DRS) and the Delirium Observation Screening Scale (DOSS). Additionally, the 365-day mortalities and the length of stay (LOS) in the hospital were analysed.
We enrolled 279 elderly participants and obtained 620 BSEEG recordings; 142 participants were categorised as BSEEG-positive, reflecting slower EEG activity. BSEEG scores were higher in the CAM-ICU-positive group than in the CAM-ICU-negative group. There were significant correlations between BSEEG scores and scores on the DRS and the DOSS. The mortality rate of the BSEEG-positive group was significantly higher than that of the BSEEG-negative group. The LOS of the BSEEG-positive group was longer than that of the BSEEG-negative group. BSEEG scores after s.d. filtering showed stronger correlations with delirium screening scores and better prediction of mortality.
We confirmed, with a new EEG device, the usefulness of the BSEEG method for detecting delirium, assessing delirium severity and predicting patient outcomes.
Obsessive–compulsive disorder (OCD) is a severe psychiatric disorder characterized by its heterogeneous nature and by different dimensions of obsessive–compulsive (OC) symptoms. Serotonin reuptake inhibitors (SRIs) are used to treat OCD, but up to 40% to 60% of patients do not show a significant improvement with these medications. In this study, we aimed to test the impact of brain-derived neurotrophic factor (BDNF) Val66Met polymorphism on the efficacy of antidepressants in OCD overall, and in relation to the different OC dimensions.
In a 6-month prospective treatment study, 69 Caucasian OCD patients were treated with escitalopram for 24 weeks or with escitalopram for 12 weeks followed by paroxetine for an additional 12-week period. Patients were genotyped and assessed for treatment response. The main clinical outcomes were improvement of the Yale-Brown Obsessive–Compulsive Scale score and in different OC symptom dimension scores.
The Val/Val group comprised 43 (62%) patients; the Val/Met and Met/Met group together comprised 26 (38%) patients. Forty-two patients were classified as responders at 12 weeks and 38 at 24 weeks; no significant association was found between BDNF Val66Met and SRI response at 12 or 24 weeks. In analyses of the different OC symptom dimensions, the Met allele was associated with a slightly reduced score in the aggressive/checking dimension at 6 months (P = .048).
Our findings do not support the usefulness of BDNF Val66Met genotyping to predict overall response to treatment with SRIs in OCD; they do, however, suggest a better outcome at 6 months in the aggressive/checking symptom dimension for patients carrying the Met allele.
Clinical substance misuse presentations are commonly managed out of hours by Psychiatry Core Trainees (CTs). However, specialist teaching is not included in the Maudsley Training Program (MTP) induction. We aimed to investigate whether this was of clinical concern and, if so, to identify interventions to address it.
The association between substance misuse disorders and mental illness is widely recognised. The Adult Psychiatric Morbidity Survey 2014 reported that half of people dependent on drugs other than cannabis were receiving mental health treatment. Substance use substantially impacts clinical risk; 57% of patient suicides in 2017 had a history of substance misuse. It also affects emergency psychiatric services: 55-80% of patients detained under S136 are intoxicated. Therefore, it is imperative for patient safety that CTs can assess and manage these patients appropriately.
The Royal College of Psychiatrists recognises the need for specialist substance misuse knowledge and skills, and lists these as a key ‘Intended Learning Outcome’ for CTs. Unfortunately, the availability of specialist drug and alcohol service placements for CTs has declined significantly; only one placement is available per MTP rotation. CTs must therefore rely on teaching to gain these competencies.
Using a cross-sectional survey, we explored CTs’ confidence in recognising and managing substance misuse presentations and their knowledge of where to seek guidance, and asked for teaching suggestions. We surveyed two CT1 cohorts, in 2017 and 2019.
Fifty-one CTs took the survey. Of these, 92% did not feel prepared to manage acute substance intoxication or withdrawal, and 96% would like relevant teaching at the start of CT1. Furthermore, 67% did not know where they could seek guidance.
CTs felt confident in recognising and managing alcohol-related presentations. However, they were less confident in recognising opioid withdrawal, safely prescribing opioid substitution therapy (OST) and knowing the usual doses of OST (65%, 94% and 94%, respectively, rated ‘neither confident nor not confident’ or below). CTs were also not confident in recognising GBL and cannabinoid withdrawal, principles of harm minimisation, assessing readiness to change, delivering Brief Interventions or teaching patients to use Naloxone.
The results were strikingly similar between cohorts, supporting the reliability of our findings and indicating that CTs’ lack of substance misuse knowledge is a significant clinical concern.
To address this deficit of knowledge, we are writing an introductory lecture with supporting guidance in the induction pack, developing an online video resource, and moving key substance misuse lectures to earlier in the MTP taught programme.
The aim of the current study was to establish whether the neighbourhood food environment, characterised by the healthiness of food outlets, the diversity of food outlets and fast-food outlet density within a 500 m or 1000 m street network buffer around the home address, contributed to ethnic differences in diet quality.
Cross-sectional cohort study.
Amsterdam, the Netherlands.
Data on adult participants of Dutch, South-Asian Surinamese, African Surinamese, Turkish and Moroccan descent (n total 4728) in the HELIUS study were analysed.
The neighbourhood food environment of ethnic minority groups living in Amsterdam is less supportive of a healthy diet and less diverse than that of participants of Dutch origin. For example, participants of Turkish, Moroccan and South-Asian Surinamese descent reside in neighbourhoods with a significantly higher fast-food outlet density (≤1000 m) than participants of Dutch descent. However, we found no evidence that neighbourhood food environment characteristics directly contributed to ethnic differences in diet quality.
Although ethnic minority groups lived in less healthy food environments than participants of ethnic Dutch origin, this did not contribute to ethnic differences in diet quality. Future research should investigate other direct or indirect consequences of residing in less supportive food environments and gain a better understanding of how different ethnic groups make use of their neighbourhood food environment.
The incidence of surgical site infections (SSI) may be underreported if the data are not routinely validated for accuracy. Our goal was to compare the SSI rates communicated by a large network of Swiss hospitals with the results of on-site surveillance quality audits.
Retrospective cohort study.
In total, 81,957 knee and hip prosthetic arthroplasties from 125 hospitals and 33,315 colorectal surgeries from 110 hospitals were included in the study.
Hospitals had at least 2 external audits to assess the surveillance quality. The 50-point standardized score per audit summarizes quantitative and qualitative information from both structured interviews and a random selection of patient records. We calculated the mean National Healthcare Safety Network (NHSN) risk index adjusted infection rates in both surgery groups.
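The NHSN risk index used for adjustment above is, in its classic NNIS form, a 0–3 point count; the sketch below assumes that form, so treat it as illustrative rather than the network's exact implementation.

```python
def nnis_risk_index(asa_class, wound_class, duration_min, t_point_min):
    """Classic NNIS/NHSN surgical risk index (0-3): one point each for
    ASA physical status class >= 3, a contaminated or dirty wound
    (class >= 3), and an operation lasting longer than the
    procedure-specific 75th-percentile duration cut-off (the 'T point')."""
    return (int(asa_class >= 3)
            + int(wound_class >= 3)
            + int(duration_min > t_point_min))
```

Stratum-specific infection rates can then be compared across hospitals within each risk-index level, which is what "risk index adjusted" rates refer to.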
The median NHSN adjusted infection rate per hospital was 1.0% (interquartile range [IQR], 0.6%–1.5%) with median audit score of 37 (IQR, 33–42) for knee and hip arthroplasty, and 12.7% (IQR, 9.0%–16.6%), with median audit score 38 (IQR, 35–42) for colorectal surgeries. We observed a wide range of SSI rates and surveillance quality, with discernible clustering for public and private hospitals, and both lower infection rates and audit scores for private hospitals. Infection rates increased with audit scores for knee and hip arthroplasty (P value for the slope = .002), and this was also the case for planned (P = .002), and unplanned (P = .02) colorectal surgeries.
Surveillance systems without routine evaluation of validity may underestimate the true incidence of SSIs. Audit quality should be taken into account when interpreting SSI rates, perhaps by adjusting infection rates for those hospitals with lower audit scores.
To estimate the minimum prevalence of adult hereditary ataxias (HA) and spastic paraplegias (HSP) in Eastern Quebec and to evaluate the proportion of associated mutations in identified genes.
We conducted a descriptive cross-sectional study of patients who met clinical criteria for the diagnosis of HA (n = 241) and HSP (n = 115) in the East of the Quebec province between January 2007 and July 2019. The primary outcome was the prevalence per 100,000 persons with a 95% confidence interval (CI). The secondary outcome was the frequency of mutations identified by targeted next-generation sequencing (NGS) approach. Minimum carrier frequency for identified variants was calculated based on allele frequency values and the Hardy–Weinberg (HW) equation.
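The carrier-frequency step follows directly from the Hardy–Weinberg relation: for a recessive pathogenic allele with frequency q, the heterozygous-carrier frequency is 2pq with p = 1 − q. A minimal sketch; the allele frequency in the usage line is illustrative, not a value from the study.

```python
def carrier_frequency(allele_freq):
    """Heterozygous-carrier frequency 2pq under Hardy-Weinberg
    equilibrium, where q is the pathogenic allele frequency
    and p = 1 - q."""
    p = 1.0 - allele_freq
    return 2.0 * p * allele_freq

# Illustrative allele frequency of 1%:
carriers = carrier_frequency(0.01)  # -> 0.0198, i.e. about 1 in 50
```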
The minimum prevalence of HA in Eastern Quebec was estimated at 6.47/100 000 [95% CI, 6.44–6.51], divided into 3.73/100 000 for autosomal recessive (AR) ataxias and 2.67/100 000 for autosomal dominant (AD) ataxias. The minimum prevalence of HSP was 4.17/100 000 [95% CI, 4.14–4.20], with 2.05/100 000 for AD-HSP and 2.12/100 000 for AR-HSP. In total, 52.4% of patients had a confirmed genetic diagnosis. AR cerebellar ataxia type 1 (2.67/100 000) and AD spastic paraplegia SPG4 (1.18/100 000) were the most prevalent disorders identified. Mutations were identified in 23 genes, and molecular alterations in 7 trinucleotide repeat expansions; the most common mutations were c.15705–12 A > G in SYNE1 and c.1529C > T (p.A510V) in SPG7.
We described the minimum prevalence of genetically defined adult HA and HSP in Eastern Quebec. This study provides a framework for international comparisons and service planning.
Background: Certain nursing home (NH) resident care tasks have a higher risk of multidrug-resistant organism (MDRO) transfer to healthcare personnel (HCP), which can result in transmission to residents if HCPs fail to perform recommended infection prevention practices. However, data on HCP-resident interactions are limited and do not account for intrafacility practice variation. Understanding differences in interactions, by HCP role and unit, is important for informing MDRO prevention strategies in NHs. Methods: In 2019, we conducted serial intercept interviews; each HCP was interviewed 6–7 times for the duration of a unit’s dayshift at 20 NHs in 7 states. The next day, staff on a second unit within the facility were interviewed during the dayshift. HCP on 38 units were interviewed to identify HCP–resident care patterns. All unit staff were eligible for interviews, including certified nursing assistants (CNAs), nurses, physical or occupational therapists, physicians, midlevel practitioners, and respiratory therapists. HCP were asked to list which residents they had cared for (within resident rooms or common areas) since the prior interview. Respondents selected from 14 care tasks. We classified units into 1 of 4 types: long-term, mixed, short stay or rehabilitation, or ventilator or skilled nursing. Interactions were classified based on the risk of HCP contamination after task performance. We compared proportions of interactions associated with each HCP role and performed clustered linear regression to determine the effect of unit type and HCP role on the number of unique task types performed per interaction. Results: Intercept interviews described 7,050 interactions and 13,843 care tasks. Except in ventilator or skilled nursing units, CNAs had the greatest proportion of care interactions (interfacility range, 50%–60%) (Fig. 1).
In ventilator and skilled nursing units, interactions are evenly shared between CNAs and nurses (43% and 47%, respectively). On average, CNAs in ventilator and skilled nursing units perform the most unique task types (2.5 task types per interaction, Fig. 2) compared to other unit types (P < .05). Compared to CNAs, most other HCP types had significantly fewer task types (0.6–1.4 task types per interaction, P < .001). Across all facilities, 45.6% of interactions included tasks that were higher-risk for HCP contamination (eg, transferring, wound and device care, Fig. 3). Conclusions: Focusing infection prevention education efforts on CNAs may be most efficient for preventing MDRO transmission within NHs because CNAs have the most HCP–resident interactions and complete more tasks per visit. Studies of HCP-resident interactions are critical to improving understanding of transmission mechanisms and to targeting MDRO prevention interventions.
Funding: Centers for Disease Control and Prevention (grant no. U01CK000555-01-00)
Disclosures: Scott Fridkin, consulting fee, vaccine industry (spouse)
Telestroke is an effective way to improve care and health outcomes for stroke patients. This study evaluates the cost-effectiveness of a French telestroke network.
A decision analysis model was built using population-based data. We compared short-term clinical outcomes and costs for the management of acute ischemic stroke patients before and after the implementation of a telestroke network from the point of view of the national health insurance system. Three effectiveness endpoints were used: hospital death, death at 3 months, and severe disability 3 months after stroke (assessed with the modified Rankin scale). Most clinical and economic parameters were estimated from the medical files of 742 retrospectively included patients. Sensitivity analyses were performed.
The analyses revealed that the telestroke strategy was more effective and slightly more costly than the reference strategy (25 disability cases avoided per 1,000 at 3 months, 6.7 avoided hospital deaths, and 13 avoided deaths at 3 months for an extra cost of EUR 97, EUR 138, and EUR 154, respectively). The results remained robust in the sensitivity analyses.
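The comparison above is an incremental cost-effectiveness calculation. Reading the reported figures as per-patient values (an assumption on our part), the extra cost per disability case avoided can be sketched as:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect for the new strategy versus the reference."""
    if delta_effect == 0:
        raise ValueError("strategies have equal effect; ICER is undefined")
    return delta_cost / delta_effect

# EUR 97 extra cost per patient and 25 disability cases avoided per
# 1,000 patients at 3 months, interpreted per patient (0.025 cases):
cost_per_case_avoided = icer(97.0, 25 / 1000)  # about EUR 3,880 per case
```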
In France, telestroke is an effective strategy for improving patient outcomes and, despite the extra cost, it has a legitimate place in the national health care system.
To gain insight into Dutch food bank recipients’ perceptions of the content of the food parcels and their dietary intake, and into how the parcels contribute to their overall dietary intake.
Eleven semi-structured focus group discussions were conducted. Focus group topics were based on Anderson’s definition of food insecurity: the lack of availability of nutritionally adequate foods and the assured ability to acquire foods in socially acceptable ways. Data were coded and analysed with Atlas.ti 7.0 software, using the framework approach.
Seven food banks throughout the Netherlands.
A total of 44 Dutch food bank recipients.
Food bank recipients were not always satisfied with the amount, quality, variation and type of foods in the food parcel. For the participants who could afford to, supplementing the food parcel was reported as main reason for buying foods, and price was the most important aspect in selecting these foods. Participants were not satisfied with their dietary intake; they mainly reported not having enough to eat. The content of the food parcel importantly influenced participants’ overall dietary intake. Finally, participants reported struggling with their feelings of dissatisfaction, while also being grateful for the foods they receive.
This study suggests that, despite their best efforts, food banks are not meeting food bank recipients’ needs. Our results provide valuable directions for improving the content of the food parcels by increasing the quantity, quality and variation in the foods supplied. Whether this also improves the dietary intake of recipients needs to be determined.
Recently, increased attention has been drawn to the composition of the intestinal microbiota and its possible role in metabolic syndrome and type 2 diabetes (T2DM). However, potential variation in gut microbiota composition across ethnic groups is rarely considered despite observed unequal prevalence for these diseases. Our objective was therefore to study the gut microbiota composition across health, metabolic syndrome and T2DM in a multi-ethnic population residing in the same geographical area. 16S rRNA gene sequencing was performed on fecal samples from 3926 participants in the HELIUS cohort (Amsterdam, The Netherlands), representing 6 ethnic groups (Dutch, Ghanaian, Moroccan, Turkish, and Surinamese of either African or South-Asian descent). Included participants completed a questionnaire and underwent a physical examination and overnight fasted blood sampling. Gut microbiota composition was compared across metabolic status (diabetes with and without metformin use, metabolic syndrome and its individual components, health) and ethnicities using Wilcoxon–Mann–Whitney tests and logistic regressions. Overall, the gut microbiota alpha-diversity (richness, Shannon index and phylogenetic diversity) decreased with worsening of the metabolic state (comparing health to metabolic syndrome to T2DM), but this was only partially reproduced in ethnic-specific analyses. In line with this, a lower alpha-diversity was found in relation to all metabolic syndrome components as well as in T2DM subjects using metformin compared to non-users. Alterations, mainly decreased abundances, were also observed at the genus level (many Clostridiales) in metabolic syndrome subjects and more strongly in T2DM subjects, with differences across ethnic groups. In particular, we observed decreased abundances of members of the Peptostreptococcaceae family and of Turicibacter and an increased abundance of a member of the Enterobacteriaceae family.
Our data highlight several compositional differences in the gut microbiota of individuals with metabolic syndrome or T2DM. These features, confirming prior observations, give some insights into potential key intestinal bacteria related to a worsening of metabolic state. Our results also underscore possible ethnic-specific profiles associated with these microbiota alterations that should be further explored.
Considering the important role that paid support workers play in care of older people with dementia, it is vital that researchers and relevant organisations understand the factors that lead to them feeling valued for the work that they do, and the consequences of such valuing (or lack thereof). The current study employed semi-structured interviews to understand the individual experiences of 15 support workers based both in residential care homes and private homes. The General Inductive Approach was used to analyse the interview transcriptions and to develop a conceptual model that describes the conditions that lead to support workers feeling valued for the work that they do. This model consists of organisational or individual strategies, the context in which support work takes place, and various interactions, actions and intervening conditions that facilitate or prevent support workers feeling valued. A significant finding in this research was the role of interpersonal relationships and interactions which underlie all other aspects of the conceptual model developed here. By understanding the importance of how employers, families of older adults with dementia and peers interact with support workers, we may promote not only the quality of work that support workers deliver, but also the wellbeing of the support workers themselves.
Early-life environmental and nutritional exposures are considered to contribute to the differences in cardiovascular disease (CVD) burden. Among sub-Saharan African populations, the association between CVD risk and markers of early-life exposures, such as leg length and sitting height, is yet to be investigated. This study assessed the association between leg length, sitting height, and estimated 10-year atherosclerotic cardiovascular disease (ASCVD) risk among Ghanaian-born populations in Europe and Ghana. We constructed sex-specific quintiles for sitting height and leg length for 3250 participants aged 40–70 years (mean age 52 years; men 39.6%; women 60.4%) in the cross-sectional multicenter Research on Diabetes and Obesity among African Migrants study. Ten-year risk of ASCVD was estimated using the Pooled Cohort Equations; risk ≥7.5% was defined as “elevated” CVD risk. Prevalence ratios (PR) were estimated to determine the associations between sitting height, leg length, and estimated 10-year ASCVD risk. For both men and women, mean sitting height and leg length were highest in Europe and lowest in rural Ghana. Sitting height was inversely associated with 10-year ASCVD risk among all women (PR for 1 standard deviation increase of sitting height: 0.75; 95% confidence interval: 0.67, 0.85). Among men, an inverse association between sitting height and 10-year ASCVD risk was significant on adjustment for study site and adult and parental education, but attenuated when further adjusted for height. No association was found between leg length and estimated 10-year ASCVD risk. Early-life and childhood exposures that influence sitting height could be important determinants of ASCVD risk in this adult population.
Review findings on the role of dietary patterns in preventing depression are inconsistent, possibly due to variation in assessment of dietary exposure and depression. We studied the association between dietary patterns and depressive symptoms in six population-based cohorts and meta-analysed the findings using a standardised approach that defined dietary exposure, depression assessment and covariates.
Included were cross-sectional data from 23 026 participants in six cohorts: InCHIANTI (Italy), LASA, NESDA, HELIUS (the Netherlands), ALSWH (Australia) and Whitehall II (UK). Analysis of incidence was based on three cohorts with repeated measures of depressive symptoms at 5–6 years of follow-up in 10 721 participants: Whitehall II, InCHIANTI and ALSWH. Three a priori dietary patterns were investigated in relation to depressive symptoms: the Mediterranean diet score (MDS), the Alternative Healthy Eating Index (AHEI-2010) and the Dietary Approaches to Stop Hypertension (DASH) diet. Cohort-level analyses adjusted for a fixed set of confounders; meta-analysis used a random-effects model.
Cross-sectional and prospective analyses showed statistically significant inverse associations of the three dietary patterns with depressive symptoms (continuous and dichotomous). In cross-sectional analysis, the association of diet with depressive symptoms using a cut-off yielded an adjusted OR of 0.87 (95% confidence interval 0.84–0.91) for MDS, 0.93 (0.88–0.98) for AHEI-2010, and 0.94 (0.87–1.01) for DASH. Similar associations were observed prospectively: 0.88 (0.80–0.96) for MDS; 0.95 (0.84–1.06) for AHEI-2010; 0.90 (0.84–0.97) for DASH.
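The random-effects pooling behind the meta-analysed odds ratios above can be sketched with the DerSimonian–Laird estimator; the cohort estimates in the usage line are hypothetical, not the study's data.

```python
import math

def pool_random_effects(log_ors, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of per-cohort log odds
    ratios with their standard errors; returns (pooled OR, lower, upper).
    Assumes at least two cohorts."""
    w = [1.0 / s ** 2 for s in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)        # between-study variance
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            math.exp(pooled - z * se),
            math.exp(pooled + z * se))

# Hypothetical cohort ORs of 0.85, 0.90 and 0.95 with their SEs:
or_, lo, hi = pool_random_effects(
    [math.log(0.85), math.log(0.90), math.log(0.95)], [0.05, 0.08, 0.06])
```

When the cohorts are homogeneous the between-study variance collapses to zero and the estimator reduces to inverse-variance fixed-effect pooling.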
Population-scale observational evidence indicates that adults following a healthy dietary pattern have fewer depressive symptoms and lower risk of developing depressive symptoms.
The objective of the present study is to summarise trends in under- and over-nutrition in pregnant women on the Thailand–Myanmar border. Refugees contributed data from 1986 to 2016 and migrants from 1999 to 2016 for weight at first antenatal consultation. BMI and gestational weight gain (GWG) data were available during 2004–2016, when height was routinely measured. Risk factors for low (<18·5 kg/m2) and high (≥23 kg/m2) BMI were analysed. A total of 48 062 pregnancies over 30 years were available for weight analysis, and 14 646 pregnancies over 13 years (2004–2016) had BMI measured in the first trimester (<14 weeks’ gestational age). Mean weight at first antenatal consultation in any trimester increased over the 30-year period by 2·0 to 5·2 kg for all women. First trimester BMI has been increasing on average by 0·5 kg/m2 for refugees and 0·6 kg/m2 for migrants, every 5 years. The proportion of women with low BMI in the first trimester decreased from 16·7 to 12·7 % for refugees and 23·1 to 20·2 % for migrants, whereas high BMI increased markedly from 16·9 to 33·2 % for refugees and 12·3 to 28·4 % for migrants. Multivariate analysis demonstrated low BMI as positively associated with being Burman, Muslim, primigravid, having malaria during pregnancy and smoking, and negatively associated with refugee as opposed to migrant status. High BMI was positively associated with being Muslim and literate, and negatively associated with age, primigravida, malaria, anaemia and smoking. Mean GWG was 10·0 (sd 3·4), 9·5 (sd 3·6) and 8·3 (sd 4·3) kg for the low, normal and high WHO BMI categories for Asians, respectively.
To assess the validity of multivariable models for predicting risk of surgical site infection (SSI) after colorectal surgery based on routinely collected data in national surveillance networks.
Retrospective analysis performed on 3 validation cohorts.
Colorectal surgery patients in Switzerland, France, and England, 2007–2017.
We determined calibration and discrimination (ie, area under the curve, AUC) of the COLA (contamination class, obesity, laparoscopy, American Society of Anesthesiologists [ASA]) multivariable risk model and the National Healthcare Safety Network (NHSN) multivariable risk model in each cohort. A new score was constructed based on multivariable analysis of the Swiss cohort following colorectal surgery, then based on colon and rectal surgery separately.
We included 40,813 patients who had undergone elective or emergency colorectal surgery to validate the COLA score, 45,216 patients to validate the NHSN colon and rectal surgery risk models, and 46,320 patients in the construction of a new predictive model. The COLA score’s predictive ability was poor, with AUC values of 0.64 (95% confidence interval [CI], 0.63–0.65), 0.62 (95% CI, 0.58–0.67), and 0.60 (95% CI, 0.58–0.61) in the Swiss, French, and English cohorts, respectively. The NHSN colon-specific model (AUC, 0.61; 95% CI, 0.61–0.62) and the rectal surgery–specific model (AUC, 0.57; 95% CI, 0.53–0.61) showed limited predictive ability. The new predictive score showed poor predictive accuracy for colorectal surgery overall (AUC, 0.65; 95% CI, 0.64–0.66), for colon surgery (AUC, 0.65; 95% CI, 0.65–0.66), and for rectal surgery (AUC, 0.63; 95% CI, 0.60–0.66).
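The AUC figures above have a rank-statistic interpretation: the AUC equals the probability that a randomly chosen infected patient receives a higher predicted risk than a randomly chosen uninfected one (with ties counting half). A minimal sketch with hypothetical scores:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    the fraction of positive/negative pairs ranked correctly,
    with ties counting half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted SSI risks for infected vs. uninfected patients:
example = auc([0.30, 0.22, 0.15], [0.25, 0.10, 0.05])
```

On this reading, an AUC of 0.60–0.65 means the models rank an infected patient above an uninfected one only slightly more often than chance (0.5), which is why the authors call the predictive ability poor.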
Models based on routinely collected data in SSI surveillance networks poorly predict individual risk of SSI following colorectal surgery. Further models that include other more predictive variables could be developed and validated.