Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task, and have been shown to improve assessment reliability and discrimination. It is unclear what effect they will have on assessor stringency/leniency – we hypothesize that they will reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs. We also examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study, including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations away from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners' assessment scores were compared to the overall mean of all learners. This ensures that the assessor was not simply awarding outlying scores due to working with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases.
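The two-step outlier screen described in the Methods can be sketched in code. The two-standard-deviation threshold and the two steps follow the abstract; the data layout, function name, and the `k` parameter are illustrative assumptions, not part of the study:

```python
from statistics import mean, stdev

def find_outlier_assessors(assessments, k=2.0):
    """Two-step outlier-assessor screen (illustrative sketch).

    `assessments` is a list of (assessor, learner, score) tuples.
    The k=2 standard-deviation threshold follows the abstract; the
    data structures are hypothetical.
    """
    scores = [s for _, _, s in assessments]
    overall_mean, overall_sd = mean(scores), stdev(scores)

    # Step 1: flag assessors whose mean awarded score lies more than
    # k standard deviations from the mean of all completed assessments.
    by_assessor = {}
    for assessor, _, score in assessments:
        by_assessor.setdefault(assessor, []).append(score)
    candidates = {a for a, s in by_assessor.items()
                  if abs(mean(s) - overall_mean) > k * overall_sd}

    # Step 2: keep a candidate only if their learners' scores (from all
    # assessors) are typical overall, i.e. the deviation is not explained
    # by the candidate having worked with outlier learners.
    by_learner = {}
    for _, learner, score in assessments:
        by_learner.setdefault(learner, []).append(score)
    outliers = set()
    for assessor in candidates:
        learners = {l for a, l, _ in assessments if a == assessor}
        learner_scores = [s for l in learners for s in by_learner[l]]
        if abs(mean(learner_scores) - overall_mean) <= k * overall_sd:
            outliers.add(assessor)
    return outliers
```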
Conclusion: Our proposed method successfully identified outlier assessors, and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels, and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With the ideal assessment tool, most of the score variability would be explained by variability in learners' performances. In reality, however, much of the observed variability is explained by other factors. The purpose of this study is to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study, including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, PGY level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase, because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. Our G study revealed that 21% of total score variance was explained by a combination of post-graduate year (PGY) level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase.
An average of 51 vs 27 forms/learner were required to achieve a reliability of 0.80 in the pre- and post-implementation phases, respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners' performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of the learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs, and could be adopted in other emergency medicine training programs.
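The "forms per learner to reach a reliability of 0.80" figure is the kind of projection produced by a decision (D) study on the G-study variance components: the generalizability coefficient for n forms is the learner variance divided by the learner variance plus the error variance over n. A minimal sketch of that calculation, using made-up variance components rather than the study's actual estimates:

```python
import math

def forms_needed(var_learner, var_error, target=0.80):
    """D-study projection (illustrative): smallest number of forms n
    such that var_learner / (var_learner + var_error / n) >= target.

    Variance components here are hypothetical placeholders; the study's
    own G-study estimates are not reported in the abstract.
    """
    # Solving the reliability inequality for n gives:
    #   n >= (target / (1 - target)) * (var_error / var_learner)
    n = (target * var_error) / ((1 - target) * var_learner)
    # Round away tiny floating-point error before taking the ceiling.
    return math.ceil(round(n, 6))
```

For example, with a learner variance of 1.0 and an error variance of 6.75, the projection is 27 forms, matching the order of magnitude reported above (the actual components behind 51 and 27 are not given in the abstract).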
Little is known about the experiences of people living alone with dementia in the community and their non-resident relatives and friends who support them. In this paper, we explore their respective attitudes and approaches to the future, particularly regarding the future care and living arrangements of those living with dementia. The study is based on a qualitative secondary analysis of interviews with 24 people living alone with early-stage dementia in North Wales, United Kingdom, and one of their relatives or friends who supported them. All but four of the dyads were interviewed twice over 12 months (a total of 88 interviews). In the analysis, it was observed that several people with dementia expressed the desire to continue living at home for ‘as long as possible’. A framework approach was used to investigate this theme in more depth, drawing on concepts from existing studies of people living with dementia and across disciplines. Similarities and differences in the future outlook and temporal orientation of the participants were identified. The results support previous research suggesting that the future outlook of people living with early-stage dementia can be interpreted in part as a response to their situation and a way of coping with the threats that it is perceived to present, and not just an impaired view of time. Priorities for future research are highlighted in the discussion.
Neurobiological models of auditory verbal hallucination (AVH) have been advanced by symptom capture functional magnetic resonance imaging (fMRI), where participants self-report hallucinations during scanning. To date, regions implicated are those involved with language, memory and emotion. However, previous studies focus on chronic schizophrenia, and thus are limited by factors such as medication use and illness duration. Studies also lack detailed phenomenological descriptions of AVHs. This study investigated the neural correlates of AVHs in patients with first episode psychosis (FEP) using symptom capture fMRI with a rich description of AVHs. We hypothesised that intrusive AVHs would be associated with dysfunctional salience network activity.
Sixteen FEP patients with frequent AVH completed four psychometrically validated tools to provide an objective measure of the nature of their AVHs. They then underwent fMRI symptom capture, utilising general linear models analysis to compare activity during AVH to the resting brain.
Symptom capture of AVH was achieved in nine patients who reported intrusive, malevolent and uncontrollable AVHs. Significant activity in the right insula and superior temporal gyrus (cluster size 141 mm3), and the left parahippocampal and lingual gyri (cluster size 121 mm3), P < 0.05 FDR corrected, was recorded during the experience of AVHs.
These results suggest salience network dysfunction (in the right insula) together with memory and language processing area activation in intrusive, malevolent AVHs in FEP. This finding concurs with others from chronic schizophrenia, suggesting these processes are intrinsic to psychosis itself and not related to length of illness or prolonged exposure to antipsychotic medication.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
In the context of untimely access to community formal services, unmet needs of persons with dementia (PwD) and their carers may compromise their quality of life.
The Actifcare EU-JPND project (www.actifcare.eu) focuses on access to and (non) utilization of dementia formal care in eight countries (The Netherlands, Germany, United Kingdom, Sweden, Norway, Ireland, Italy, Portugal), as related to unmet needs and quality of life. Evaluations included systematic reviews, qualitative explorations, and a European cohort study (PwD in early/intermediate phases and their primary carers; n = 453 dyads; 1 year follow-up). Preliminary Portuguese results are presented here (FCT-JPND-HC/0001/2012).
(1) extensive systematic searches on access to/utilization of services; (2) focus groups of PwD, carers and health/social professionals; (3) prospective study (n = 66 dyads from, e.g., primary care, hospital outpatient services, Alzheimer Portugal).
In Portugal, nationally representative data are scarce regarding health/social services utilization in dementia. There are important barriers to access to community services, according to users, carers and professionals, whose views do not always coincide. The Portuguese cohort participants were 66 PwD (62.1% female, 77.3 ± 6.2 years, 55.5% Alzheimer's/mixed subtypes, MMSE 17.8 ± 4.8, CDR1 89.4%) and 66 carers (66.7% female, 64.9 ± 15.0 years, 56.1% spouses), with considerable unmet needs in some domains.
All Actifcare milestones are being reached. The consortium is now analyzing international differences in (un) timely access to services and its impact on quality of life and needs for care (e.g., formal community support is weaker in Portugal than in many European countries). National best-practice recommendations in dementia are also in preparation.
Abstract submitted on behalf of the Actifcare Eu-JPND consortium.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Evidence suggests that early trauma may have a negative effect on cognitive functioning in individuals with psychosis, yet the relationship between childhood trauma and cognition among those at clinical high risk (CHR) for psychosis remains unexplored. Our sample consisted of 626 CHR children and 279 healthy controls who were recruited as part of the North American Prodrome Longitudinal Study 2. Childhood trauma up to the age of 16 (psychological, physical, and sexual abuse, emotional neglect, and bullying) was assessed by using the Childhood Trauma and Abuse Scale. Multiple domains of cognition were measured at baseline and at the time of psychosis conversion, using standardized assessments. In the CHR group, there was a trend for better performance in individuals who reported a history of multiple types of childhood trauma compared with those with no/one type of trauma (Cohen d = 0.16). A history of multiple trauma types was not associated with greater cognitive change in CHR converters over time. Our findings tentatively suggest there may be different mechanisms that lead to CHR states. CHR individuals who have experienced multiple types of childhood trauma may have more typically developing premorbid cognitive functioning than those who reported minimal trauma. Further research is needed to unravel the complexity of factors underlying the development of at-risk states.
A systematic review and network meta-analysis were conducted to assess the relative efficacy of internal or external teat sealants given at dry-off in dairy cattle. Controlled trials were eligible if they assessed the use of internal or external teat sealants, with or without concurrent antimicrobial therapy, compared to no treatment or an alternative treatment, and measured one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, IMI during the first 30 days in milk (DIM), or clinical mastitis during the first 30 DIM. Risk of bias was based on the Cochrane Risk of Bias 2.0 tool with modified signaling questions. From 2280 initially identified records, 32 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. Use of an internal teat sealant (bismuth subnitrate) significantly reduced the risk of new IMI at calving compared to non-treated controls (RR = 0.36, 95% CI 0.25–0.72). For comparisons between antimicrobial and teat sealant groups, concerns regarding precision were seen. Synthesis of the primary research identified important challenges related to the comparability of outcomes, replication and connection of interventions, and quality of reporting of study conduct.
A systematic review and meta-analysis were conducted to determine the efficacy of selective dry-cow antimicrobial therapy compared to blanket therapy (all quarters/all cows). Controlled trials were eligible if any of the following were assessed: incidence of clinical mastitis during the first 30 DIM, frequency of intramammary infection (IMI) at calving, or frequency of IMI during the first 30 DIM. From 3480 identified records, nine trials were data extracted for IMI at calving. There was an insufficient number of trials to conduct meta-analysis for the other outcomes. Risk of IMI at calving in selectively treated cows was higher than blanket therapy (RR = 1.34, 95% CI = 1.13, 1.16), but substantial heterogeneity was present (I2 = 58%). Subgroup analysis showed that, for trials using internal teat sealants, there was no difference in IMI risk at calving between groups, and no heterogeneity was present. For trials not using internal teat sealants, there was an increased risk in cows assigned to a selective dry-cow therapy protocol, compared to blanket treatment, with substantial heterogeneity in this subgroup. However, the small number of trials and heterogeneity in the subgroup without internal teat sealants suggests that the relative risk between treatments may differ from the determined point estimates based on other unmeasured factors.
A systematic review and network meta-analysis were conducted to assess the relative efficacy of antimicrobial therapy given to dairy cows at dry-off. Eligible studies were controlled trials assessing the use of antimicrobials compared to no treatment or an alternative treatment, and assessed one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, incidence of IMI during the first 30 days in milk (DIM), or incidence of clinical mastitis during the first 30 DIM. Databases and conference proceedings were searched for relevant articles. The potential for bias was assessed using the Cochrane Risk of Bias 2.0 algorithm. From 3480 initially identified records, 45 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. The use of cephalosporins, cloxacillin, or penicillin with aminoglycoside significantly reduced the risk of new IMI at calving compared to non-treated controls (cephalosporins, RR = 0.37, 95% CI 0.23–0.65; cloxacillin, RR = 0.55, 95% CI 0.38–0.79; penicillin with aminoglycoside, RR = 0.42, 95% CI 0.26–0.72). Synthesis revealed challenges with a comparability of outcomes, replication of interventions, definitions of outcomes, and quality of reporting. The use of reporting guidelines, replication among interventions, and standardization of outcome definitions would increase the utility of primary research in this area.
Although behavior therapy reduces tic severity, it is unknown whether it improves co-occurring psychiatric symptoms and functional outcomes for adults with Tourette's disorder (TD). This information is essential for effective treatment planning. This study examined the effects of behavior therapy on psychiatric symptoms and functional outcomes in older adolescents and adults with TD.
A total of 122 individuals with TD or a chronic tic disorder participated in a clinical trial comparing behavior therapy to psychoeducation and supportive therapy. At baseline, posttreatment, and follow-up visits, participants completed assessments of tic severity, co-occurring symptoms (inattention, impulsiveness, hyperactivity, anger, anxiety, depression, obsessions, and compulsions), and psychosocial functioning. We compared changes in tic severity, psychiatric symptoms, and functional outcomes using repeated-measures and one-way analyses of variance.
At posttreatment, participants receiving behavior therapy reported greater reductions in obsessions compared to participants in supportive therapy ($\eta_p^2 = 0.04$, p = 0.04). Across treatments, a positive treatment response on the Clinical Global Impression of Improvement scale was associated with a reduced disruption in family life ($\eta_p^2 = 0.05$, p = 0.02) and improved functioning in a parental role ($\eta_p^2 = 0.37$, p = 0.02). Participants who responded positively to eight sessions of behavior therapy had an improvement in tic severity ($\eta_p^2 = 0.75$, p < 0.001), inattention ($\eta_p^2 = 0.48$, p < 0.02), and functioning ($\eta_p^2 = 0.39$–$0.42$, p < 0.03–0.04) at the 6-month follow-up.
Behavior therapy has a therapeutic benefit for co-occurring obsessive symptoms in the short-term, and reduces tic severity and disability in adults with TD over time. Additional treatments may be necessary to address co-occurring symptoms and improve functional outcomes.
Innovation Concept: The outcome of emergency medicine training is to produce physicians who can competently run an emergency department (ED) shift. While many workplace-based ED assessments focus on discrete tasks of the discipline, others emphasize assessment of performance across the entire shift. However, the quality of assessments is generally poor and these tools often lack validity evidence. The use of entrustment scale anchors may help to address these psychometric issues. The aim of this study was to develop and gather validity evidence for a novel tool to assess a resident's ability to independently run an ED shift. Methods: Through a nominal group technique, local and national stakeholders identified dimensions of performance reflective of a competent ED physician. These dimensions were included in a new tool that was piloted in the Department of Emergency Medicine at the University of Ottawa during a 4-month period. Psychometric characteristics of the items were calculated, and a generalizability analysis was used to determine the reliability of scores. An ANOVA was conducted to determine whether scores increased as a function of training level (junior = PGY1-2, intermediate = PGY3, senior = PGY4-5), and varied by ED treatment area. Safety for independent practice was analyzed with a dichotomous score. Curriculum, Tool or Material: The developed Ottawa Emergency Department Shift Observation Tool (O-EDShOT) includes 12 items rated on a 5-point entrustment scale with a global assessment item and 2 short-answer questions. Eight hundred and thirty-three assessments were completed by 78 physicians for 45 residents. Mean scores differed significantly by training level (p < .001): junior residents received lower ratings (3.48 ± 0.69) than intermediate residents (3.98 ± 0.48), who in turn received lower ratings than senior residents (4.54 ± 0.42). Scores did not vary by ED treatment area (p > .05).
Residents judged to be safe to independently run the shift had significantly higher mean scores than those judged not to be safe (4.74 ± 0.31 vs 3.75 ± 0.66; p < .001). Fourteen observations per resident, the typical number recorded during a 1-month rotation, were required to achieve a reliability of 0.80. Conclusion: The O-EDShOT successfully discriminated between junior, intermediate and senior-level residents regardless of ED treatment area. Multiple sources of evidence support the O-EDShOT producing valid scores for assessing a resident's ability to independently run an ED shift.
Unlike for many other respiratory infections, the seasonality of pertussis is not well understood. While evidence of seasonal fluctuations in pertussis incidence has been noted in some countries, there have been conflicting findings including in the context of Australia. We investigated this issue by analysing the seasonality of pertussis notifications in Australia using monthly data from January 1991 to December 2016. Data were made available for all states and territories in Australia except for the Australian Capital Territory and were stratified into age groups. Using a time-series decomposition approach, we formulated a generalised additive model where seasonality is expressed using cosinor terms to estimate the amplitude and peak timing of pertussis notifications in Australia. We also compared these characteristics across different jurisdictions and age groups. We found evidence that pertussis notifications exhibit seasonality, with peaks observed during the spring and summer months (November–January) in Australia and across different states and territories. During peak months, notifications are expected to increase by about 15% compared with the yearly average. Peak notifications for children <5 years occurred 1–2 months later than the general population, which provides support to the theory that older household members remain an important source of pertussis infection for younger children. In addition, our results provide a more comprehensive spatial picture of seasonality in Australia, a feature lacking in previous studies. Finally, our findings suggest that seasonal forcing may be useful to consider in future population transmission models of pertussis.
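In a cosinor model of the kind described above, seasonality enters the regression as a paired sine/cosine covariate, and the amplitude and peak timing are recovered from the two fitted coefficients. A minimal sketch of that recovery step (the coefficient values in the example are made up, not the study's estimates):

```python
import math

def cosinor_summary(beta_cos, beta_sin, period=12):
    """Given fitted coefficients for cos(2*pi*t/period) and
    sin(2*pi*t/period) in a cosinor regression, return the seasonal
    amplitude and the peak time (in the same units as `period`,
    e.g. months since the start of the cycle).
    """
    # Amplitude of a*cos(theta) + b*sin(theta) is sqrt(a^2 + b^2).
    amplitude = math.hypot(beta_cos, beta_sin)
    # The combined wave peaks at phase atan2(b, a); convert from
    # radians back to time units.
    phase = math.atan2(beta_sin, beta_cos) % (2 * math.pi)
    peak_time = phase * period / (2 * math.pi)
    return amplitude, peak_time
```

On a log-linear (e.g. Poisson/GAM) scale, an amplitude of about 0.14 corresponds to the roughly 15% peak-month excess over the yearly average reported above, since exp(0.14) ≈ 1.15.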
Childhood adversity is associated with poor mental and physical health outcomes across the life span. Alterations in the hypothalamic–pituitary–adrenal axis are considered a key mechanism underlying these associations, although findings have been mixed. These inconsistencies suggest that other aspects of stress processing may underlie variations in these associations, and that differences in adversity type, sex, and age may be relevant. The current study investigated the relationship between childhood adversity, stress perception, and morning cortisol, and examined whether differences in adversity type (generalized vs. threat and deprivation), sex, and age had distinct effects on these associations. Salivary cortisol samples, daily hassle stress ratings, and retrospective measures of childhood adversity were collected from a large sample of youth at risk for serious mental illness including psychoses (n = 605, mean age = 19.3). Results indicated that childhood adversity was associated with increased stress perception, which subsequently predicted higher morning cortisol levels; however, these associations were specific to threat exposures in females. These findings highlight the role of stress perception in stress vulnerability following childhood adversity and highlight potential sex differences in the impact of threat exposures.
Community-acquired pneumonia (CAP) results in substantial numbers of hospitalisations and deaths in older adults. There are known lifestyle and medical risk factors for pneumococcal disease but the magnitude of the additional risk is not well quantified in Australia. We used a large population-based prospective cohort study of older adults in the state of New South Wales (45 and Up Study) linked to cause-specific hospitalisations, disease notifications and death registrations from 2006 to 2015. We estimated the age-specific incidence of CAP hospitalisation (ICD-10 J12-18), invasive pneumococcal disease (IPD) notification and presumptive non-invasive pneumococcal CAP hospitalisation (J13 + J18.1, excluding IPD), comparing those with at least one risk factor to those with no risk factors. The hospitalised case-fatality rate (CFR) included deaths in a 30-day window after hospitalisation. Among 266 951 participants followed for 1 850 000 person-years there were 8747 first hospitalisations for CAP, 157 IPD notifications and 305 non-invasive pneumococcal CAP hospitalisations. In persons 65–84 years, 54.7% had at least one identified risk factor, increasing to 57.0% in those ⩾85 years. The incidence of CAP hospitalisation in those ⩾65 years with at least one risk factor was twofold higher than in those without risk factors, 1091/100 000 (95% confidence interval (CI) 1060–1122) compared with 522/100 000 (95% CI 501–545), and IPD in equivalent groups was almost threefold higher (18.40/100 000 (95% CI 14.61–22.87) vs. 6.82/100 000 (95% CI 4.56–9.79)). The CFR increased with age but there was limited difference by risk status, except in those aged 45 to 64 years. Adults ⩾65 years with at least one risk factor have much higher rates of CAP and IPD, suggesting that additional risk factor-based vaccination strategies may be cost-effective.
A point-prevalence study of antimicrobial use among inpatients at 5 public hospitals in Sri Lanka revealed that 54.6% were receiving antimicrobials: 43.1% in medical wards, 68.0% in surgical wards, and 97.6% in intensive care wards. Amoxicillin-clavulanate was most commonly used for major indications. Among patients receiving antimicrobials, 31.0% received potentially inappropriate therapy.
To examine the impact of temporal bone virtual reality surgical simulator use in the undergraduate otorhinolaryngology curriculum.
Medical students attended a workshop involving the use of a temporal bone virtual reality surgical simulator. Students completed a pre-workshop questionnaire on career interests. A post-workshop questionnaire evaluated the perceived usefulness and enjoyment of the virtual reality surgical simulator experience, and assessed changes in their interest in ENT.
Thirty-two fifth-year University of Auckland medical students were recruited. The majority of students (53.1 per cent) had already chosen their career path. The simulator experience was useful for: stimulating thoughts around career plans (71.9 per cent), providing hands-on experience (93.8 per cent) and teaching disease processes (93.8 per cent). After the workshop, 53.1 per cent of students were more interested in a career in ENT.
Virtual reality may be a fun and engaging way of teaching ENT. Furthermore, it could help guide student career planning.
Much of the interest in youth at clinical high risk (CHR) of psychosis has been in understanding conversion. Recent literature has suggested that less than 25% of those who meet established criteria for being at CHR of psychosis go on to develop a psychotic illness. However, little is known about the outcome of those who do not make the transition to psychosis. The aim of this paper was to examine clinical symptoms and functioning in the second North American Prodrome Longitudinal Study (NAPLS 2) of those individuals who, by the end of 2 years in the study, had not developed psychosis.
In NAPLS-2, 278 CHR participants completed 2-year follow-ups and had not made the transition to psychosis. At 2 years, the sample was divided into three groups – those whose symptoms were in remission, those who were still symptomatic and those whose symptoms had become more severe.
There was no difference between those who remitted early in the study compared with those who remitted at one or two years. At 2 years, those in remission had fewer symptoms and improved functioning compared with the two symptomatic groups. However, all three groups had poorer social functioning and cognition than healthy controls.
A detailed examination of the clinical and functional outcomes of those who did not make the transition to psychosis did not contribute to predicting who may make the transition or who may have an earlier remission of attenuated psychotic symptoms.
We investigated risk factors for severe acute lower respiratory infections (ALRI) among hospitalised children <2 years, with a focus on the interactions between virus and age. Statistical interactions between age and respiratory syncytial virus (RSV), influenza, adenovirus (ADV) and rhinovirus on the risk of ALRI outcomes were investigated. Of 1780 hospitalisations, 228 (12.8%) were admitted to the intensive care unit (ICU). The median (range) length of stay (LOS) in hospital was 3 (1–27) days. An increase of 1 month of age was associated with a decreased risk of ICU admission (rate ratio (RR) 0.94; 95% confidence intervals (CI) 0.91–0.98) and with a decrease in LOS (RR 0.96; 95% CI 0.95–0.97). Associations between RSV, influenza, ADV positivity and ICU admission and LOS were significantly modified by age. Children <5 months old were at the highest risk from RSV-associated severe outcomes, while children >8 months were at greater risk from influenza-associated ICU admissions and long hospital stay. Children with ADV had increased LOS across all ages. In the first 2 years of life, the effects of different viruses on ALRI severity varies with age. Our findings help to identify specific ages that would most benefit from virus-specific interventions such as vaccines and antivirals.
To identify predictors of disagreement with antimicrobial stewardship prospective audit and feedback recommendations (PAFR) at a free-standing children’s hospital.
Retrospective cohort study of audits performed during the antimicrobial stewardship program (ASP) from March 30, 2015, to April 17, 2017.
The ASP included audits of antimicrobial use and communicated PAFR to the care team, with follow-up on adherence to recommendations. The primary outcome was disagreement with PAFR. Potential predictors for disagreement, including patient-level, antimicrobial, programmatic, and provider-level factors, were assessed using bivariate and multivariate logistic regression models.
In total, 4,727 antimicrobial audits were performed during the study period, resulting in 1,323 PAFR (28%); 187 of these recommendations (15%) were not followed due to disagreement. Providers were more likely to disagree with PAFR when the patient had a gastrointestinal infection (odds ratio [OR], 5.50; 95% confidence interval [CI], 1.99–15.21), febrile neutropenia (OR, 6.14; 95% CI, 2.08–18.12), skin or soft-tissue infections (OR, 6.16; 95% CI, 1.92–19.77), or had been admitted for 31–90 days at the time of the audit (OR, 2.08; 95% CI, 1.36–3.18). The longer the duration since the attending provider had been trained (ie, the more years of experience), the more likely they were to disagree with PAFR recommendations (OR, 1.02; 95% CI, 1.01–1.04).
Evaluation of our program confirmed patient-level predictors of PAFR disagreement and identified additional programmatic and provider-level factors, including years of attending experience. Stewardship interventions focused on specific diagnoses and antimicrobials are unlikely to result in programmatic success unless these factors are also addressed.
Many women experience both vasomotor menopausal symptoms (VMS) and depressed mood at midlife, but little is known regarding the prospective bi-directional relationships between VMS and depressed mood and the role of sleep difficulties in both directions.
A pooled analysis was conducted using data from 21 312 women (median: 50 years, interquartile range 49−51) in eight studies from the InterLACE consortium. The degree of VMS, sleep difficulties, and depressed mood was self-reported and categorised as never, rarely, sometimes, and often (if reporting frequency) or never, mild, moderate, and severe (if reporting severity). Multivariable logistic regression models were used to examine the bi-directional associations adjusted for within-study correlation.
At baseline, the prevalence of VMS (40%, range 13–62%) and depressed mood (26%, 8–41%) varied substantially across studies, and a strong dose-dependent association between VMS and likelihood of depressed mood was found. Over 3 years of follow-up, women with often/severe VMS at baseline were more likely to have subsequent depressed mood compared with those without VMS (odds ratios (OR) 1.56, 1.27–1.92). Women with often/severe depressed mood at baseline were also more likely to have subsequent VMS than those without depressed mood (OR 1.89, 1.47–2.44). With further adjustment for the degree of sleep difficulties at baseline, the OR of having a subsequent depressed mood associated with often/severe VMS was attenuated and no longer significant (OR 1.13, 0.90–1.40). Conversely, often/severe depressed mood remained significantly associated with subsequent VMS (OR 1.80, 1.38–2.34).
Difficulty in sleeping largely explained the relationship between VMS and subsequent depressed mood, but it had little impact on the relationship between depressed mood and subsequent VMS.