The traditional medical education system has produced scientifically grounded and clinically skilled physicians who have served medicine and society. Sweeping changes launched around the turn of the millennium have revolutionized undergraduate and postgraduate medical education across the world (Gutierrez et al. 2016; Shelton et al. 2017; Samarasekera et al. 2018). Training has moved from being time-based to being outcome-based, shifting away from the apprenticeship model towards a more structured and systematic approach that emphasizes learning and the development of skills.
Medical education has changed considerably from models based mainly on knowledge acquisition and duration of training towards the achievement of predefined learning outcomes (Krackov and Pohl 2011). In such a competency-based approach to education, effective feedback has become an integral and important constituent of teaching and learning.
In the learning process, feedback means sharing observations, concerns and suggestions with another person. Feedback helps to maximize learning and development by raising an individual’s awareness of their areas of strength and of relative weakness or need, as well as outlining the actions required to improve performance.
Detailed and prompt feedback coupled with clear opportunities to improve enables individuals to achieve previously agreed milestones such as curriculum outcomes (Krackov and Pohl 2011) or continuing professional development (CPD) objectives.
The duration of immunity after first severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and the extent to which prior immunity prevents reinfection are uncertain and remain an important question in the context of new variants. In this retrospective population-based matched observational study, we identified cases whose first polymerase chain reaction (PCR)-positive test for primary SARS-CoV-2 infection fell between 1 March 2020 and 30 September 2020. Each case was matched by age, sex, upper tier local authority of residence and testing route to one individual testing negative by PCR in the same week (control). After a 90-day pre-follow-up period for cases and controls, any subsequent positive tests up to 31 December 2020, and deaths within 28 days of testing positive, were identified; this follow-up encompassed an essentially vaccine-free period. We analysed the results using conditional logistic regression. There were 517 870 individuals in the matched cohort, with 2815 reinfections and 12 098 first infections. The protective effect of a prior SARS-CoV-2 PCR-positive episode was 78% (odds ratio (OR) 0.22, 0.21–0.23). Protection rose to 82% (OR 0.18, 0.17–0.19) in a sensitivity analysis excluding 933 individuals with a first test between March and May and a subsequent positive test between June and September 2020. Amongst individuals testing positive by PCR during follow-up, reinfection cases had 77% lower odds of symptoms at the second episode (adjusted OR 0.23, 0.20–0.26) and 45% lower odds of dying in the 28 days after reinfection (adjusted OR 0.55, 0.42–0.71). Prior SARS-CoV-2 infection offered protection against reinfection in this population. There was some evidence that reinfections increased with the alpha variant compared with the wild-type SARS-CoV-2 variant, highlighting the importance of continued monitoring as new variants emerge.
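To make the matched design concrete, the sketch below shows how a conditional logistic regression of reinfection on prior infection status, conditioned on the matched pair, could be fitted with statsmodels. This is not the authors' code: the column names (prior_positive, reinfected, match_id) and the simulated data are hypothetical, and the actual analysis also involved the matching factors and follow-up rules described above.

```python
# Minimal sketch, not the study's code: conditional logistic regression on a
# 1:1 matched cohort (case = prior PCR-positive, control = PCR-negative in the
# same week). All column names and data are hypothetical/simulated.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_pairs = 1000
df = pd.DataFrame({
    "match_id": np.repeat(np.arange(n_pairs), 2),   # one case + one control per pair
    "prior_positive": np.tile([1, 0], n_pairs),     # exposure: prior SARS-CoV-2 infection
})
# Simulate subsequent infection during follow-up with a strongly protective
# prior infection (true OR roughly 0.2, as reported in the abstract).
p = 1 / (1 + np.exp(1.0 + 1.6 * df["prior_positive"]))
df["reinfected"] = rng.binomial(1, p)

model = ConditionalLogit(df["reinfected"], df[["prior_positive"]],
                         groups=df["match_id"])
result = model.fit()
print(np.exp(result.params))      # odds ratio for prior_positive
print(np.exp(result.conf_int()))  # 95% confidence interval on the OR scale
```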
Online peer support platforms have been shown to provide a supportive space that can enhance social connectedness and personal empowerment. Some studies have analysed forum messages, showing that users describe a range of advantages, and some disadvantages, of their use. However, direct examination of users’ experiences of such platforms is rare and may be particularly informative for enhancing their helpfulness. This study aimed to understand users’ experiences of the Support, Hope and Recovery Online Network (SHaRON), an online cognitive behavioural therapy-based peer support platform for adults with mild to moderate anxiety or depression. Platform users (n = 88) completed a survey on their use of different platform features, their feelings about using the platform, and their overall experience. Responses were analysed descriptively and using thematic analysis. Results indicated that most features were generally well used, with the exception of private messaging. Many participants described feeling well supported and finding the information and resources helpful; the majority of recent users (81%) rated the platform as helpful overall. However, some participants described feeling uncomfortable about posting messages, and others did not find the platform helpful and gave suggestions for improvements. Around half had not used the platform in the past 3 months, for reasons including feeling better or forgetting about it. Some described that simply knowing the platform was there was helpful, even without regular use. The findings highlight what is arguably a broader range of user experiences than observed in previous studies, which may have important implications for the enhancement of SHaRON and other platforms.
Key learning aims
(1) To understand what an online peer support platform is and how this can be used to support users’ mental health.
(2) To learn how users described their experience of the SHaRON platform.
(3) To understand the benefits that online peer support may provide.
(4) To consider what users found helpful and unhelpful, and how this might inform the further development of these platforms.
Supporting Antarctic scientific investigation is the job of the national Antarctic programmes, the government entities charged with delivering their countries’ Antarctic research strategies. This requires sustained investment in people, innovative technologies, Antarctic infrastructure, and vessels with icebreaking capabilities. The recent endorsement of the International Maritime Organization (IMO) Polar Code (2015) means that countries must address challenges related to an ageing icebreaking vessel fleet. Many countries have recently invested in and begun, or completed, builds of new icebreaking polar research vessels. These vessels incorporate innovative technologies to increase fuel efficiency and reduce noise output, and their designs address ways to protect the Antarctic environment. This paper is the result of a Council of Managers of National Antarctic Programs (COMNAP) project on new vessel builds which began in 2018. It considers the recent vessel builds of Australia’s RSV Nuyina, China’s MV Xue Long 2, France’s L’Astrolabe, Norway’s RV Kronprins Haakon, Peru’s BAP Carrasco, and the United Kingdom’s RRS Sir David Attenborough. The paper provides examples of purposeful consideration of science support requirements and environmental sustainability in vessel designs and operations.
Encephalitis causes high morbidity and mortality. An incidence of 4.3 cases of encephalitis per 100 000 population has been reported in the UK. We performed a retrospective evaluation of the diagnosis and management of adults admitted to hospital with a clinical diagnosis of encephalitis/meningoencephalitis. Clinical, laboratory and radiological data were collated from electronic records. Thirty-six patients were included (median age 55 years; 24 (67%) male). The aetiology was confirmed over nine months in 25 (69%): 16 infections (six viral, seven bacterial, two parasitic and one viral and parasitic co-infection), 7 autoimmune, 1 metabolic and 1 neoplastic. Of 24 patients with fever, 15 (63%) had an infection. The median time to computed tomography, magnetic resonance imaging and electroencephalography (EEG) was 1, 8 and 3 days, respectively. Neuroimaging was abnormal in 25 (69%), and 17 (89%) of those who underwent EEG had abnormal findings. Only 19 (53%) received aciclovir treatment. Six (17%) made good recoveries, 16 (44%) had moderate disability, 8 (22%) severe disability and 6 (17%) died. Outcomes were worse for those with an infectious cause. In summary, a diagnosis was made in 69.4% of patients admitted with encephalitis/meningoencephalitis. Autoimmune causes are important to consider at an early stage because they respond well to treatment. Only 53% of patients received aciclovir on admission. Neuroimaging and EEG studies were delayed. This work informed further development of the clinical algorithm for managing these patients.
Interfacility patient movement plays an important role in the dissemination of antimicrobial-resistant organisms throughout healthcare systems. We evaluated how 3 alternative measures of interfacility patient sharing were associated with C. difficile infection incidence in Ontario acute-care facilities.
Design:
The cohort included adult acute-care facility stays of ≥3 days between April 2003 and March 2016. We measured 3 facility-level metrics of patient sharing: general patient importation, incidence-weighted patient importation, and C. difficile case importation. Each of the 3 patient-sharing metrics was examined against the incidence of C. difficile infection in the facility per 1,000 stays, using Poisson regression models.
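As an illustration of this kind of facility-level model (not the authors' code), the sketch below fits a Poisson regression of C. difficile infection counts with an offset for the number of stays, entering the patient-sharing metric on the log2 scale so that the exponentiated coefficient is a risk ratio per doubling. The column names, the adjustment covariate, and the simulated data are hypothetical.

```python
# Minimal sketch, not the study's code: facility-level Poisson regression of
# C. difficile infection counts with log(stays) as offset; the exposure enters
# as log2 so exp(coefficient) is the rate ratio per doubling of importation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_facilities = 120
facilities = pd.DataFrame({
    "importation": rng.lognormal(mean=3.0, sigma=0.8, size=n_facilities),
    "stays": rng.integers(5_000, 100_000, size=n_facilities),
    "teaching": rng.integers(0, 2, size=n_facilities),  # hypothetical facility covariate
})
facilities["log2_importation"] = np.log2(facilities["importation"])
# Simulate counts around 9.3 per 1,000 stays with a true RR of ~1.4 per doubling.
centred = facilities["log2_importation"] - facilities["log2_importation"].mean()
facilities["cdi_cases"] = rng.poisson(facilities["stays"] / 1000 * 9.3 * 1.4 ** centred)

fit = smf.glm(
    "cdi_cases ~ log2_importation + teaching",
    data=facilities,
    family=sm.families.Poisson(),
    offset=np.log(facilities["stays"]),
).fit()
print(f"RR per doubling of importation: {np.exp(fit.params['log2_importation']):.2f}")
```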
Results:
The analyzed cohort included 6.70 million stays at risk of C. difficile infection across 120 facilities. Over the 13-year period, we included 62,189 new cases of healthcare-associated C. difficile infection (CDI) (incidence, 9.3 per 1,000 stays). After adjustment for facility characteristics, general importation was not strongly associated with C. difficile infection incidence (risk ratio [RR] per doubling, 1.10; 95% confidence interval [CI], 0.97–1.24; proportional change in variance [PCV], −2.0%). Incidence-weighted importation (RR per doubling, 1.18; 95% CI, 1.06–1.30; PCV, −8.4%) and C. difficile case importation (RR per doubling, 1.43; 95% CI, 1.29–1.58; PCV, −30.1%) were strongly associated with C. difficile infection incidence.
Conclusions:
In this 13-year study of acute-care facilities in Ontario, interfacility variation in C. difficile infection incidence was associated with importation of patients from other high-incidence acute-care facilities or specifically of patients with a recent history of C. difficile infection. Regional infection control strategies should consider the potential impact of importation of patients at high risk of C. difficile shedding from outside facilities.
Clostridium difficile spores play an important role in transmission and can survive in the environment for several months. Optimal methods for measuring environmental C. difficile are unknown. We sought to determine whether increased sample surface area improved detection of C. difficile from environmental samples.
Setting
Samples were collected from 12 patient rooms in a tertiary-care hospital in Toronto, Canada.
Methods
Samples represented small surface-area and large surface-area floor and bedrail pairs from single-bed rooms of patients with low (without prior antibiotics), medium (with prior antibiotics), and high (C. difficile infected) shedding risk. Presence of C. difficile in samples was measured using quantitative polymerase chain reaction (qPCR) with targets on the 16S rRNA and toxin B genes and using enrichment culture.
Results
Of the 48 samples, 64·6% were positive by 16S qPCR (geometric mean, 13·8 spores); 39·6% were positive by toxin B qPCR (geometric mean, 1·9 spores); and 43·8% were positive by enrichment culture. By 16S qPCR, each 10-fold increase in sample surface area yielded 6·6 times (95% CI, 3·2–13) more spores. Floor surfaces yielded 27 times (95% CI, 4·9–181) more spores than bedrails, and rooms of C. difficile–positive patients yielded 11 times (95% CI, 0·55–164) more spores than those of patients without prior antibiotics. Toxin B qPCR and enrichment culture returned analogous findings.
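One way to express the surface-area effect reported above (not necessarily the authors' exact model) is a regression of log10 spore count on log10 surface area, so that 10 raised to the coefficient gives the fold change in yield per 10-fold increase in area. The sketch below uses hypothetical column names and simulated values.

```python
# Minimal sketch, not the study's code: log-log regression relating spore
# yield to sampled surface area; 10**coefficient is the fold change per
# 10-fold increase in area. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 48
samples = pd.DataFrame({
    "surface_area_cm2": rng.choice([100, 1_000, 10_000], size=n),
    "is_floor": rng.integers(0, 2, size=n),  # 1 = floor sample, 0 = bedrail sample
})
# Simulate yields rising roughly 6.6-fold per 10-fold increase in area,
# with floors yielding more than bedrails.
log10_mu = (0.5
            + np.log10(6.6) * np.log10(samples["surface_area_cm2"] / 100)
            + 1.0 * samples["is_floor"])
samples["spores"] = 10 ** (log10_mu + rng.normal(0, 0.3, size=n))

samples["log10_area"] = np.log10(samples["surface_area_cm2"])
samples["log10_spores"] = np.log10(samples["spores"])
fit = smf.ols("log10_spores ~ log10_area + is_floor", data=samples).fit()
print(f"Fold change per 10-fold increase in area: {10 ** fit.params['log10_area']:.1f}")
print(f"Floor vs bedrail fold difference: {10 ** fit.params['is_floor']:.1f}")
```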
Conclusions
Clostridium difficile spores were identified in most floor and bedrail samples, and increased surface area improved detection. Future research aiming to understand the role of environmental C. difficile in transmission should prefer samples with large surface areas.
Antibiotic use varies widely between hospitals, but the influence of antimicrobial stewardship programs (ASPs) on this variability is not known. We aimed to determine the key structural and strategic aspects of ASPs associated with differences in risk-adjusted antibiotic utilization across facilities.
Design
Observational study of acute-care hospitals in Ontario, Canada
Methods
A survey was sent to hospitals asking about both structural (8 elements) and strategic (32 elements) components of their ASP. Antibiotic use from hospital purchasing data was acquired for January 1 to December 31, 2014. Crude and adjusted defined daily doses per 1,000 patient days, accounting for hospital and aggregate patient characteristics, were calculated across facilities. Rate ratios (RR) of defined daily doses per 1,000 patient days were compared for hospitals with and without each antimicrobial stewardship element of interest.
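To illustrate the outcome metric and comparison described here (the abstract does not specify the exact regression, so this is only a sketch), the code below computes defined daily doses per 1,000 patient-days for each hospital and estimates an adjusted rate ratio for hospitals with versus without a given ASP element, using a log-linked Poisson-family model with an offset for patient-days. All column names and data are hypothetical.

```python
# Minimal sketch, not the study's code: DDD per 1,000 patient-days and an
# adjusted rate ratio for hospitals with vs without an ASP element.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_hospitals = 73
hospitals = pd.DataFrame({
    "total_ddd": rng.integers(20_000, 400_000, size=n_hospitals),
    "patient_days": rng.integers(50_000, 500_000, size=n_hospitals),
    "audit_feedback": rng.integers(0, 2, size=n_hospitals),  # ASP element present?
    "teaching": rng.integers(0, 2, size=n_hospitals),        # hypothetical adjustment covariate
})
hospitals["ddd_per_1000_pd"] = 1000 * hospitals["total_ddd"] / hospitals["patient_days"]
print(hospitals["ddd_per_1000_pd"].describe())  # crude variability across facilities

# Log-linked model with an offset so exp(coefficient) is a rate ratio.
fit = smf.glm(
    "total_ddd ~ audit_feedback + teaching",
    data=hospitals,
    family=sm.families.Poisson(),
    offset=np.log(hospitals["patient_days"]),
).fit()
print(f"Rate ratio, audit & feedback vs none: {np.exp(fit.params['audit_feedback']):.2f}")
```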
Results
Of 127 eligible hospitals, 73 (57%) participated in the study. There was a 7-fold range in antibiotic use across these facilities (min, 253 defined daily doses per 1,000 patient days; max, 1,872 defined daily doses per 1,000 patient days). The presence of designated funding or resources for the ASP (RRadjusted, 0·87; 95% CI, 0·75–0·99), prospective audit and feedback (RRadjusted, 0·80; 95% CI, 0·67–0·96), and intravenous-to-oral conversion policies (RRadjusted, 0·79; 95% CI, 0·64–0·99) were associated with lower risk-adjusted antibiotic use.
Conclusions
Wide variability in antibiotic use across hospitals may be partially explained by both structural and strategic ASP elements. The presence of funding and resources, prospective audit and feedback, and intravenous-to-oral conversion should be considered priority elements of a robust ASP.
We studied the antibody response to tetanus toxoid and measles by age following vaccination in children aged 4 months to 6 years in Entebbe, Uganda. Serum samples were obtained from 113 children aged 4–15 months at the Mother-Child Health Clinic (MCHC), Entebbe Hospital, and from 203 of the 206 children aged between 12 and 75 months recruited through the Outpatients Department (OPD). Antibodies to measles were quantified by plaque reduction neutralisation test (PRNT) and with the Siemens IgG EIA; the VaccZyme IgG EIA was used to quantify anti-tetanus antibodies. Sera from 96 of 113 (85.0%) children attending the MCHC contained measles PRNT titres below the protective level (120 mIU/ml), as did sera from 24 of 203 (11.8%) children attending the OPD. There was no detectable decline in anti-measles antibody concentrations between 1 and 6 years. The anti-tetanus antibody titres in all 113 children attending the MCHC and in 189 of 203 (93.1%) children attending the OPD were >0.15 IU/ml by EIA, a level considered protective. The overall concentration of anti-tetanus antibody was sixfold higher in children under 12 months than in the older children, with geometric mean concentrations of 3.15 IU/ml and 0.49 IU/ml, respectively. For each doubling in age between 4 and 64 months, the anti-tetanus antibody concentration declined by 50%, and as time since administration of the third DTP vaccination doubled, the concentration declined by 39%. The low measles antibody prevalence in the children presenting at the MCHC is consistent with current measles epidemiology in Uganda, where a significant number of measles cases occur in children under 1 year of age and earlier vaccination may be indicated. The consistent fall in anti-tetanus antibody titre over time following vaccination supports the need for further vaccine boosters at age 4–5 years, as recommended by the WHO.
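The "decline per doubling" figures above correspond to regressing log antibody concentration on log2(age) (and analogously on log2 of time since the third DTP dose); the exponentiated coefficient gives the fold change per doubling. The sketch below illustrates the idea; it is not the authors' analysis, and the data and column names are simulated and hypothetical.

```python
# Minimal sketch, not the study's code: decline in anti-tetanus antibody per
# doubling of age, estimated as exp(coefficient) from a regression of
# log(concentration) on log2(age). Data are simulated and hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
children = pd.DataFrame({"age_months": rng.uniform(4, 64, size=n)})
# Simulate a ~50% drop per doubling of age from ~3 IU/ml at 4 months.
children["tetanus_iu_ml"] = (3.0 * 0.5 ** np.log2(children["age_months"] / 4)
                             * np.exp(rng.normal(0, 0.4, size=n)))

children["log2_age"] = np.log2(children["age_months"])
children["log_titre"] = np.log(children["tetanus_iu_ml"])
fit = smf.ols("log_titre ~ log2_age", data=children).fit()
print(f"Decline per doubling of age: {1 - np.exp(fit.params['log2_age']):.0%}")
```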
Social injustices, structural and personal crises, and intensifying stress on some citizens appear to be growing preoccupations in contemporary society and social policy. In this context, the concept of vulnerability has come to play a prominent role in academic, governmental and everyday accounts of the human condition. Policy makers and practitioners are now concerned with addressing vulnerability through an expansive range of interventions. As this special issue draws attention to, a vulnerability zeitgeist or ‘spirit of the time’ has been traced in contemporary welfare and disciplinary arrangements (Brown, 2014, 2015), which now informs a range of interventions and approaches to social problems, both in the UK and internationally. As prominent examples, ‘vulnerable’ people are legally entitled to ‘priority need’ in English social housing allocations (Carr and Hunter, 2008), vulnerable victims of crime are seen as requiring special responses in the UK criminal justice system (see Roulstone et al., 2011; Walklate, 2011), ‘vulnerable adults’ have designated ‘protections’ under British law (Dunn et al., 2008; Clough, 2014) and vulnerable migrants and refugees are increasingly prioritised within international immigration processes (Peroni and Timmer, 2013). There is a long tradition in the field of social policy of critiquing the implications of particular concepts as mechanisms of governance, from poverty (Townsend, 1979; Lister, 2004) and social exclusion (Levitas, 1998; Young, 1999) to risk (Beck, 1992; Kemshall, 2002) and resilience (Ecclestone and Lewis, 2014; Wright, 2016). Yet while vulnerability seems to be one of the latest buzzwords gathering political and cultural momentum, critiques and empirical studies of how it is operationalised in different policy and practice contexts are less well elaborated.
Hospital-acquired infections (HAIs) develop rapidly after brief and transient exposures, and ecological exposures are central to their etiology. However, many studies of HAI risk do not correctly account for the timing of outcomes relative to exposures, and they ignore ecological factors. We aimed to describe statistical practice in the most cited HAI literature as it relates to these issues, and to demonstrate how to implement models that account for them.
METHODS
We conducted a literature search to identify 8 frequently cited articles whose primary outcomes were incident HAIs, that were based on individual-level data, and that used multivariate statistical methods. Next, using an inpatient cohort of incident Clostridium difficile infection (CDI), we compared 3 valid strategies for assessing risk factors for incident infection: a cohort study with time-fixed exposures, a cohort study with time-varying exposures, and a case-control study with time-varying exposures.
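As an illustration of the second strategy (a cohort analysis with time-varying exposures), the sketch below lays out stays in a counting-process (start-stop) format and fits a Cox model with the lifelines package. This is not the code used in the study; the toy data, column names, and the choice of lifelines are assumptions for illustration only, and ecological covariates could be added as further columns in the same layout.

```python
# Minimal sketch, not the study's code: a Cox model with a time-varying
# exposure using a start-stop ("counting process") data layout.
# Toy data; column names are hypothetical. Requires the lifelines package.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is one interval of one stay; the exposure can change between
# intervals, and 'cdi' flags incident infection at the end of the interval.
intervals = pd.DataFrame({
    "stay_id":        [1, 1, 2, 3, 4, 4, 5],
    "start":          [0, 5, 0, 0, 0, 4, 0],
    "stop":           [5, 10, 12, 7, 4, 11, 9],
    "on_antibiotics": [0, 1, 0, 0, 0, 1, 1],
    "cdi":            [0, 1, 0, 1, 0, 0, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="stay_id", event_col="cdi",
        start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for the time-varying antibiotic exposure
```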
RESULTS
Of the 8 studies identified in the literature scan, 3 did not adjust for time-at-risk, 6 did not assess the timing of exposures in a time-window prior to outcome ascertainment, 6 did not include ecological covariates, and 6 did not account for the clustering of outcomes in time and space. Our 3 modeling strategies yielded similar risk-factor estimates for CDI risk.
CONCLUSIONS
Several common statistical methods can be used to augment standard regression methods to improve the identification of HAI risk factors.
Intakes of micronutrient-rich foods are low among Indian women of reproductive age. We investigated whether consumption of a food-based micronutrient-rich snack increased markers of blood micronutrient concentrations when compared with a control snack. Non-pregnant women (n 222) aged 14–35 years living in a Mumbai slum were randomised to receive a treatment snack (containing green leafy vegetables, dried fruit and whole milk powder), or a control snack containing foods of low micronutrient content such as wheat flour, potato and tapioca. The snacks were consumed under observation 6 d per week for 12 weeks, compliance was recorded, and blood was collected at 0 and 12 weeks. Food-frequency data were collected at both time points. Compliance (defined as the proportion of women who consumed ≥ 3 snacks/week) was >85 % in both groups. We assessed the effects of group allocation on 12-week nutrient concentrations using ANCOVA models with respective 0-week concentrations, BMI, compliance, standard of living, fruit and green leafy vegetable consumption and use of synthetic nutrients as covariates. The treatment snack significantly increased β-carotene concentrations (treatment effect: 47·1 nmol/l, 95 % CI 6·5, 87·7). There was no effect of group allocation on concentrations of ferritin, retinol, ascorbate, folate or vitamin B12. The present study shows that locally sourced foods can be made into acceptable snacks that may increase serum β-carotene concentrations among women of reproductive age. However, no increase in circulating concentrations of the other nutrients measured was observed.
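To show the form of the model described above (not the authors' code), the sketch below fits an ANCOVA of the 12-week β-carotene concentration on treatment group with the 0-week value and other covariates, using hypothetical column names and simulated data.

```python
# Minimal sketch, not the study's code: ANCOVA of 12-week β-carotene on
# treatment group, adjusting for baseline and other covariates.
# Column names and data are hypothetical/simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 222
trial = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=n),        # 1 = micronutrient-rich snack
    "carotene_wk0": rng.normal(300, 60, size=n),    # nmol/l at baseline
    "bmi": rng.normal(21, 3, size=n),
    "compliance": rng.uniform(0.85, 1.0, size=n),
})
trial["carotene_wk12"] = (trial["carotene_wk0"] + 47 * trial["treatment"]
                          + rng.normal(0, 80, size=n))

ancova = smf.ols("carotene_wk12 ~ treatment + carotene_wk0 + bmi + compliance",
                 data=trial).fit()
low, high = ancova.conf_int().loc["treatment"]
print(f"Adjusted treatment effect: {ancova.params['treatment']:.1f} nmol/l "
      f"(95% CI {low:.1f}, {high:.1f})")
```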
The General Medical Council suggests that leadership skills are a core requirement for all doctors. The NHS Institute for Innovation and Improvement goes further, stating that doctors have not only an intrinsic leadership role but also a responsibility to contribute to the effective running of healthcare organisations. The Medical Leadership Competency Framework (MLCF) outlines a structure with domains, elements and competency outcomes, all of which are clearly spelled out with examples and methods of learning for different stages of a medical career. The revised curriculum for postgraduate training in psychiatry contains many aspects of the MLCF, both complementing and supplementing its emphasis on the development of personal qualities and skills. This article highlights this approach and describes how the development of leadership and management skills fits with the current structures for training in psychiatry.
The current shift in emphasis in teaching and learning in medicine is towards outcome-based learning. This means that greater importance is placed on the daily clinical performance of the doctor. The direct observation of the interaction between doctor and patient is at the heart of any schedule for the assessment of a doctor in clinical practice. The mini-Assessed Clinical Encounter (mini-ACE) was introduced with a particular eye to the learning and assessment needs of doctors at an early stage of career development, in psychiatry as a whole or within a chosen specialty. The mini-ACE enables a structured observation of an aspect of clinical practice, with accompanying assessment and feedback on defined areas of competence. The tool is of limited value for more advanced trainees or established psychiatrists because of its emphasis on competence rather than performance and its lack of content validity for the work of more senior practitioners, the latter owing to its focus on small elements of the assessment of a patient rather than the whole.
Description
The mini-ACE, which was developed from the mini-Clinical Evaluation Exercise (mini-CEX), is a method both for assessing the clinical skills of the trainee and for offering immediate feedback. It involves a single senior health professional (almost always a doctor) observing a trainee while they conduct a patient assessment in any of a variety of settings. The mini-CEX was itself a modification of the traditional long case assessment; in the mini-ACE, the trainee conducts a focused history and/or mental state/physical examination. After asking the trainee for a diagnosis and treatment plan, the assessor rates the trainee using a structured format and then provides educational feedback. Each trainee must be assessed on several different occasions by different assessors and over a range of conditions and settings. The mini-ACE should be conducted as a routine part of the clinical and educational programme.