Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task and have been shown to improve assessment reliability and discrimination. It is unclear what effect they have on assessor stringency/leniency; we hypothesize that they will reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs. We also examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations away from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners' assessment scores were compared to the overall mean of all learners. This ensures that the assessor was not simply awarding outlying scores because they worked with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases.
Conclusion: Our proposed method successfully identified outlier assessors, and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
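The two-step screen described in the Methods above could be sketched as follows. This is an illustrative implementation only: the tuple-based data layout is an assumption, and the abstract does not specify the threshold used in step 2, so the same two-standard-deviation rule is assumed there as well.

```python
import statistics

def find_outlier_assessors(assessments, k=2.0):
    """Two-step outlier screen sketched from the abstract's Methods.

    `assessments` is a list of (assessor_id, learner_id, score) tuples.
    Step 1: flag assessors whose mean awarded score lies more than
    k standard deviations from the mean of all assessment scores.
    Step 2: confirm a flagged assessor only if the overall mean of
    their learners' scores (across all assessors) is NOT itself
    outlying, i.e. the deviation is not explained by outlier learners.
    """
    scores = [s for _, _, s in assessments]
    grand_mean = statistics.mean(scores)
    grand_sd = statistics.stdev(scores)

    # Group scores per assessor and per learner
    by_assessor, by_learner = {}, {}
    for a, l, s in assessments:
        by_assessor.setdefault(a, []).append(s)
        by_learner.setdefault(l, []).append(s)

    # Step 1: potential outliers by mean awarded score
    potential = {a for a, ss in by_assessor.items()
                 if abs(statistics.mean(ss) - grand_mean) > k * grand_sd}

    # Step 2: compare the flagged assessor's learners to the overall mean
    # (assumed threshold: the same k-SD rule as step 1)
    outliers = set()
    for a in potential:
        learners = {l for aa, l, _ in assessments if aa == a}
        learner_scores = [s for l in learners for s in by_learner[l]]
        if abs(statistics.mean(learner_scores) - grand_mean) <= k * grand_sd:
            outliers.add(a)
    return outliers
```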
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels, and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With the ideal assessment tool, most of the score variability would be explained by variability in learners' performances. In reality, however, much of the observed variability is explained by other factors. The purpose of this study is to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study, including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, post-graduate year (PGY) level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase, because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases respectively. Our G study revealed that 21% of total score variance was explained by a combination of PGY level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase.
An average of 51 vs 27 forms per learner was required to achieve a reliability of 0.80 in the pre- and post-implementation phases respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners' performances with the O-EDShOT than with norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of a learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted by other emergency medicine training programs.
To examine genetic influences on the anatomy of the Corpus Callosum (CC) in Bipolar Disorder (BD) by examining first-degree relatives in addition to BD patients.
We compared CC size and shape in 180 individuals: 70 with BD, 45 of their unaffected first-degree relatives, and 75 healthy controls. The CC was extracted from a mid-sagittal slice of T1-weighted magnetic resonance images; its total area, length and curvature were compared across groups. A non-parametric permutation method was used to test for alterations in callosal width at 39 points.
Validating our previous findings, a significant global reduction in CC thickness was seen in BD patients, with disproportionate thinning in the anterior body. First-degree relatives did not differ in CC size or shape from controls. Duration of illness was associated with thinning in the anterior body, whereas lithium treatment was associated with a thicker anterior CC midbody.
Global and regional CC thinning is a disease-related feature of BD and may not represent a marker of familial predisposition.
Evidence suggests that the subjective experience of auditory verbal hallucinations (AVHs) cannot be explained by any of the existing cognitive models, highlighting the need to properly investigate the actual, lived experience of AVHs and to derive models/theories that fit the complexity of this experience.
Via phenomenological interviews and ethnographic diary methods, we aim to gain a deeper insight into the experience of AVHs.
To explore the phenomenological quality of AVHs, as they happen/reveal themselves to consciousness, without relying on existing suppositions.
Participants with First Episode Psychosis were recruited from the Birmingham Early Intervention Service (EIS), BSMHFT. In-depth 'walking interviews' were carried out with each participant, together with standardised assessment measures of voices. Prior to interviews, participants were asked to complete a diary and take photographs, further capturing aspects of their AVH experiences.
20 participants have completed interviews to date. Emerging themes cover the form and quality of voices (i.e. as being separate to self, imposing, compelling etc.), and participants' understanding and management of these experiences.
Authentic descriptions gleaned from participants have the potential to increase our understanding of the relationship between the phenomenology and neurobiology of AVHs and, in turn, the experience as a whole.
Numerous studies have applied novel multivariate statistical approaches to the analysis of brain alterations in patients with schizophrenia. However, the diagnostic accuracy of the reported predictive models differs widely, making it difficult to evaluate the overall potential of these studies to inform clinical diagnosis.
We conducted a comprehensive literature search to identify all studies reporting performance of neuroimaging-based multivariate predictive models for the differentiation of patients with schizophrenia from healthy control subjects. The robustness of the results, as well as the effect of potentially confounding continuous variables (e.g. age, gender ratio, year of publication), was investigated.
The final sample consisted of n=37 studies including n=1491 patients with schizophrenia and n=1488 healthy controls. Meta-analysis of the complete sample showed a sensitivity of 80.7% (95%-CI: 77.0 to 83.9%) and a specificity of 80.2% (95%-CI: 76.7 to 83.3%). Separate analyses for the different imaging modalities showed similar diagnostic accuracy for the structural MRI studies (sensitivity 77.3%, specificity 78.7%), the fMRI studies (sensitivity 81.4%, specificity 82.4%) and the resting-state fMRI studies (sensitivity 86.9%, specificity 80.3%). Moderator analysis showed significant effects of patient age on sensitivity (p=0.021) and of positive-to-negative symptom ratio on specificity (p=0.028), indicating better diagnostic accuracy in older patients and in patients with predominantly positive symptoms.
Our analysis indicates an overall sensitivity and specificity of around 80% for neuroimaging-based predictive models differentiating patients with schizophrenia from healthy controls. The results underline the potential applicability of neuroimaging-based predictive models for the diagnosis of schizophrenia.
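Pooling per-study sensitivities or specificities of this kind is commonly done by inverse-variance weighting on the logit scale. The sketch below is a minimal fixed-effect illustration, not the meta-analytic model the authors actually used (which is not stated in the abstract), and the counts in the test are invented.

```python
import math

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling on the logit scale.

    `events`/`totals` are per-study counts (e.g. true positives and
    diseased subjects when pooling sensitivity). Returns the pooled
    proportion and its 95% confidence interval.
    """
    logits, weights = [], []
    for e, n in zip(events, totals):
        # 0.5 continuity correction guards against e == 0 or e == n
        p = (e + 0.5) / (n + 1.0)
        logits.append(math.log(p / (1 - p)))
        var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)
        weights.append(1.0 / var)
    pooled = sum(w * x for w, x in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    return inv_logit(pooled), (inv_logit(pooled - 1.96 * se),
                               inv_logit(pooled + 1.96 * se))
```

A random-effects model (e.g. DerSimonian-Laird) would typically be preferred when between-study heterogeneity is expected, as in this literature.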
The national implementation of competency-based medical education (CBME) has prompted an increased interest in identifying and tracking clinical and educational outcomes for emergency medicine training programs. For the 2019 Canadian Association of Emergency Physicians (CAEP) Academic Symposium, we developed recommendations for measuring outcomes in emergency medicine training in the context of CBME to assist educational leaders and systems designers in program evaluation.
We conducted a three-phase study to generate educational and clinical outcomes for emergency medicine (EM) education in Canada. First, we elicited expert and community perspectives on the best educational and clinical outcomes through a structured consultation process using a targeted online survey. We then qualitatively analyzed these responses to generate a list of suggested outcomes. Last, we presented these outcomes to a diverse assembly of educators, trainees, and clinicians at the CAEP Academic Symposium for feedback and endorsement through a voting process.
Academic Symposium attendees endorsed the measurement and linkage of CBME educational and clinical outcomes. Twenty-five outcomes (15 educational, 10 clinical) were derived from the qualitative analysis of the survey results and the most important short- and long-term outcomes (both educational and clinical) were identified. These outcomes can be used to help measure the impact of CBME on the practice of Emergency Medicine in Canada to ensure that it meets both trainee and patient needs.
Q fever (caused by Coxiella burnetii) is thought to have an almost world-wide distribution, but few countries have conducted national serosurveys. We measured Q fever seroprevalence using residual sera from diagnostic laboratories across Australia. Individuals aged 1–79 years in 2012–2013 were sampled to be proportional to the population distribution by region, distance from metropolitan areas and gender. A 1/50 serum dilution was tested for the Phase II IgG antibody against C. burnetii by indirect immunofluorescence. We calculated crude seroprevalence estimates by age group and gender, as well as age-standardised national and metropolitan/non-metropolitan seroprevalence estimates. Of 2785 sera, 99 tested positive. Age-standardised seroprevalence was 5.6% (95% confidence interval (CI) 4.5%–6.8%), and similar in metropolitan (5.5%; 95% CI 4.1%–6.9%) and non-metropolitan regions (6.0%; 95% CI 4.0%–8.0%). More males were seropositive (6.9%; 95% CI 5.2%–8.6%) than females (4.2%; 95% CI 2.9%–5.5%) with peak seroprevalence at 50–59 years (9.2%; 95% CI 5.2%–13.3%). Q fever seroprevalence for Australia was higher than expected (especially in metropolitan regions) and higher than estimates from the Netherlands (2.4%; pre-outbreak) and US (3.1%), but lower than for Northern Ireland (12.8%). Robust country-specific seroprevalence estimates, with detailed exposure data, are required to better understand who is at risk and the need for preventive measures.
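Age-standardised estimates of the kind reported above follow from direct standardisation: each age group's crude prevalence is weighted by that group's share of a standard population. A generic sketch (not the authors' code; the stratum counts in the test are invented):

```python
def age_standardised_prevalence(stratum_positives, stratum_totals, std_weights):
    """Direct age standardisation.

    `stratum_positives`/`stratum_totals` give positives and sample size
    per age group; `std_weights` is each group's share of the standard
    population and must sum to 1.
    """
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights must sum to 1"
    crude = [p / n for p, n in zip(stratum_positives, stratum_totals)]
    # Weighted sum of age-specific crude prevalences
    return sum(w * c for w, c in zip(std_weights, crude))
```

This removes the effect of differing age structures when comparing, e.g., metropolitan and non-metropolitan samples.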
Evidence suggests that early trauma may have a negative effect on cognitive functioning in individuals with psychosis, yet the relationship between childhood trauma and cognition among those at clinical high risk (CHR) for psychosis remains unexplored. Our sample consisted of 626 CHR children and 279 healthy controls who were recruited as part of the North American Prodrome Longitudinal Study 2. Childhood trauma up to the age of 16 (psychological, physical, and sexual abuse, emotional neglect, and bullying) was assessed by using the Childhood Trauma and Abuse Scale. Multiple domains of cognition were measured at baseline and at the time of psychosis conversion, using standardized assessments. In the CHR group, there was a trend for better performance in individuals who reported a history of multiple types of childhood trauma compared with those with no/one type of trauma (Cohen d = 0.16). A history of multiple trauma types was not associated with greater cognitive change in CHR converters over time. Our findings tentatively suggest there may be different mechanisms that lead to CHR states. Individuals at clinical high risk who have experienced multiple types of childhood trauma may have more typically developing premorbid cognitive functioning than those who reported minimal trauma. Further research is needed to unravel the complexity of factors underlying the development of at-risk states.
The Canadian Resident Matching Service (CaRMS) selection process has come under scrutiny due to the increasing number of unmatched medical graduates. In response, we outline our residency program's selection process including how we have incorporated best practices and novel techniques.
We selected file reviewers and interviewers to mitigate gender bias and increase diversity. Four residents and two attending physicians rated each file using a standardized, cloud-based file review template that allowed simultaneous rating. We interviewed applicants using four standardized stations with two or three interviewers per station. We used heat maps to review rating discrepancies and converted ratings to Z-scores to remove between-rater variance. The number of person-hours required to conduct our selection process was quantified, and the process outcomes were described statistically and graphically.
We received between 75 and 90 CaRMS applications during each application cycle between 2017 and 2019. Our overall process required 320 person-hours annually, excluding attendance at the social events and administrative assistant duties. Our preliminary interview and rank lists were developed using weighted Z-scores and modified through an organized discussion informed by heat mapped data. The difference between the Z-scores of applicants surrounding the interview invitation threshold was 0.18-0.3 standard deviations. Interview performance significantly impacted the final rank list.
We describe a rigorous resident selection process for our emergency medicine training program which incorporated simultaneous cloud-based rating, Z-scores, and heat maps. This standardized approach could inform other programs looking to adopt a rigorous selection process while providing applicants guidance and reassurance of a fair assessment.
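Per-rater Z-score standardization of the kind described in this selection process can be sketched as follows. This is a hypothetical illustration: the data layout is assumed, and the unweighted averaging below stands in for the abstract's "weighted Z-scores", whose weights are not specified.

```python
import statistics

def zscore_by_rater(ratings):
    """Convert each rater's raw scores to Z-scores within that rater,
    removing rater-specific leniency/stringency before ranking.

    `ratings` maps rater -> {applicant: raw_score}; each rater must
    score at least two applicants. Returns applicant -> mean Z-score
    across the raters who scored them (unweighted; the actual process
    used weights not specified in the abstract).
    """
    z_by_applicant = {}
    for rater, scores in ratings.items():
        mu = statistics.mean(scores.values())
        sd = statistics.stdev(scores.values())
        for applicant, s in scores.items():
            z_by_applicant.setdefault(applicant, []).append((s - mu) / sd)
    return {a: statistics.mean(zs) for a, zs in z_by_applicant.items()}
```

A lenient rater's uniformly high raw scores and a stringent rater's uniformly low ones map onto the same Z-scale, so applicants can be ranked without rater leniency dominating the list.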
A systematic review and network meta-analysis were conducted to assess the relative efficacy of internal or external teat sealants given at dry-off in dairy cattle. Controlled trials were eligible if they assessed the use of internal or external teat sealants, with or without concurrent antimicrobial therapy, compared to no treatment or an alternative treatment, and measured one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, IMI during the first 30 days in milk (DIM), or clinical mastitis during the first 30 DIM. Risk of bias was based on the Cochrane Risk of Bias 2.0 tool with modified signaling questions. From 2280 initially identified records, 32 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. Use of an internal teat sealant (bismuth subnitrate) significantly reduced the risk of new IMI at calving compared to non-treated controls (RR = 0.36, 95% CI 0.25–0.72). For comparisons between antimicrobial and teat sealant groups, concerns regarding precision were seen. Synthesis of the primary research identified important challenges related to the comparability of outcomes, replication and connection of interventions, and quality of reporting of study conduct.
A systematic review and meta-analysis were conducted to determine the efficacy of selective dry-cow antimicrobial therapy compared to blanket therapy (all quarters/all cows). Controlled trials were eligible if any of the following were assessed: incidence of clinical mastitis during the first 30 DIM, frequency of intramammary infection (IMI) at calving, or frequency of IMI during the first 30 DIM. From 3480 identified records, nine trials had data extracted for IMI at calving. There was an insufficient number of trials to conduct meta-analysis for the other outcomes. Risk of IMI at calving in selectively treated cows was higher than with blanket therapy (RR = 1.34, 95% CI = 1.13, 1.16), but substantial heterogeneity was present (I2 = 58%). Subgroup analysis showed that, for trials using internal teat sealants, there was no difference in IMI risk at calving between groups, and no heterogeneity was present. For trials not using internal teat sealants, there was an increased risk in cows assigned to a selective dry-cow therapy protocol, compared to blanket treatment, with substantial heterogeneity in this subgroup. However, the small number of trials and heterogeneity in the subgroup without internal teat sealants suggests that the relative risk between treatments may differ from the determined point estimates based on other unmeasured factors.
A systematic review and network meta-analysis were conducted to assess the relative efficacy of antimicrobial therapy given to dairy cows at dry-off. Eligible studies were controlled trials assessing the use of antimicrobials compared to no treatment or an alternative treatment, and assessed one or more of the following outcomes: incidence of intramammary infection (IMI) at calving, incidence of IMI during the first 30 days in milk (DIM), or incidence of clinical mastitis during the first 30 DIM. Databases and conference proceedings were searched for relevant articles. The potential for bias was assessed using the Cochrane Risk of Bias 2.0 algorithm. From 3480 initially identified records, 45 trials had data extracted for one or more outcomes. Network meta-analysis was conducted for IMI at calving. The use of cephalosporins, cloxacillin, or penicillin with aminoglycoside significantly reduced the risk of new IMI at calving compared to non-treated controls (cephalosporins, RR = 0.37, 95% CI 0.23–0.65; cloxacillin, RR = 0.55, 95% CI 0.38–0.79; penicillin with aminoglycoside, RR = 0.42, 95% CI 0.26–0.72). Synthesis revealed challenges with the comparability of outcomes, replication of interventions, definitions of outcomes, and quality of reporting. The use of reporting guidelines, replication among interventions, and standardization of outcome definitions would increase the utility of primary research in this area.
We conducted a systematic review and network meta-analysis to determine the comparative efficacy of antibiotics used to control bovine respiratory disease (BRD) in beef cattle on feedlots. The information sources for the review were: MEDLINE®, MEDLINE In-Process and MEDLINE® Daily, AGRICOLA, Epub Ahead of Print, Cambridge Agricultural and Biological Index, Science Citation Index, Conference Proceedings Citation Index – Science, the Proceedings of the American Association of Bovine Practitioners, World Buiatrics Congress, and the United States Food and Drug Administration Freedom of Information New Animal Drug Applications summaries. The eligible population was weaned beef cattle raised in intensive systems. The interventions of interest were injectable antibiotics used at the time the cattle arrived at the feedlot. The outcome of interest was the diagnosis of BRD within 45 days of arrival at the feedlot. The network meta-analysis included data from 46 studies and 167 study arms identified in the review. The results suggest that macrolides are the most effective antibiotics for the reduction of BRD incidence. Injectable oxytetracycline effectively controlled BRD compared with no antibiotics; however, it was less effective than macrolide treatment. Because oxytetracycline is already commonly used to prevent, control, and treat BRD in groups of feedlot cattle, the use of injectable oxytetracycline for BRD control might have advantages from an antibiotic stewardship perspective.
Vaccination against putative causal organisms is a frequently used and preferred approach to controlling bovine respiratory disease complex (BRD) because it reduces the need for antibiotic use. Because, in the USA, approximately 90% of feedlots use vaccines and 90% of beef cattle receive them, information about their comparative efficacy would be useful for selecting a vaccine. We conducted a systematic review and network meta-analysis of studies assessing the comparative efficacy of vaccines to control BRD when administered to beef cattle at or near their arrival at the feedlot. We searched MEDLINE, MEDLINE In-Process, MEDLINE Daily Epub Ahead of Print, AGRICOLA, Cambridge Agricultural and Biological Index, Science Citation Index, and Conference Proceedings Citation Index – Science and hand-searched the conference proceedings of the American Association of Bovine Practitioners and World Buiatrics Congress. We found 53 studies that reported BRD morbidity within 45 days of feedlot arrival. The largest connected network of studies, which involved 17 vaccine protocols from 14 studies, was included in the meta-analysis. Consistent with previous reviews, we found little compelling evidence that vaccines used at or near arrival at the feedlot reduce the incidence of BRD diagnosis.
In recent years, the discovery of massive quasars at high redshift has provided a striking challenge to our understanding of the origin and growth of supermassive black holes in the early Universe. Mounting observational and theoretical evidence indicates the viability of massive seeds, formed by the collapse of supermassive stars, as a progenitor model for such early, massive accreting black holes. Although considerable progress has been made in our theoretical understanding, many questions remain regarding how (and how often) such objects may form, how they live and die, and how next generation observatories may yield new insight into the origin of these primordial titans. This review focusses on our present understanding of this remarkable formation scenario, based on the discussions held at the Monash Prato Centre from November 20 to 24, 2017, during the workshop ‘Titans of the Early Universe: The Origin of the First Supermassive Black Holes’.
Community-acquired pneumonia (CAP) results in substantial numbers of hospitalisations and deaths in older adults. There are known lifestyle and medical risk factors for pneumococcal disease but the magnitude of the additional risk is not well quantified in Australia. We used a large population-based prospective cohort study of older adults in the state of New South Wales (45 and Up Study) linked to cause-specific hospitalisations, disease notifications and death registrations from 2006 to 2015. We estimated the age-specific incidence of CAP hospitalisation (ICD-10 J12-18), invasive pneumococcal disease (IPD) notification and presumptive non-invasive pneumococcal CAP hospitalisation (J13 + J18.1, excluding IPD), comparing those with at least one risk factor to those with no risk factors. The hospitalised case-fatality rate (CFR) included deaths in a 30-day window after hospitalisation. Among 266 951 participants followed for 1 850 000 person-years there were 8747 first hospitalisations for CAP, 157 IPD notifications and 305 non-invasive pneumococcal CAP hospitalisations. In persons 65–84 years, 54.7% had at least one identified risk factor, increasing to 57.0% in those ⩾85 years. The incidence of CAP hospitalisation in those ⩾65 years with at least one risk factor was twofold higher than in those without risk factors, 1091/100 000 (95% confidence interval (CI) 1060–1122) compared with 522/100 000 (95% CI 501–545), and IPD in equivalent groups was almost threefold higher (18.40/100 000 (95% CI 14.61–22.87) vs. 6.82/100 000 (95% CI 4.56–9.79)). The CFR increased with age but there were limited differences by risk status, except in those aged 45 to 64 years. Adults ⩾65 years with at least one risk factor have much higher rates of CAP and IPD suggesting that additional risk factor-based vaccination strategies may be cost-effective.
Environmental and biological factors contribute to sleep development during infancy. Parenting plays a particularly important role in modulating infant sleep, potentially via the serotonin system, which is itself involved in regulating infant sleep. We hypothesized that maternal neglect and serotonin system dysregulation would be associated with daytime sleep in infant rhesus monkeys. Subjects were nursery-reared infant rhesus macaques (n = 287). During the first month of life, daytime sleep-wake states were rated bihourly (0800–2100). Infants were considered neglected (n = 16) if before nursery-rearing, their mother repeatedly failed to retrieve them. Serotonin transporter genotype and concentrations of cerebrospinal fluid 5-hydroxyindoleacetic acid (5-HIAA) were used as markers of central serotonin system functioning. t tests showed that neglected infants were observed sleeping less frequently, weighed less, and had higher 5-HIAA than non-neglected nursery-reared infants. Regression revealed that serotonin transporter genotype moderated the relationship between 5-HIAA and daytime sleep: in subjects possessing the Ls genotype, there was a positive correlation between 5-HIAA and daytime sleep, whereas in subjects possessing the LL genotype there was no association. These results highlight the pivotal roles that parents and the serotonin system play in sleep development. Daytime sleep alterations observed in neglected infants may partially derive from serotonin system dysregulation.
During the summer of 2016, the Hawaii Department of Health responded to the second-largest domestic foodborne hepatitis A virus (HAV) outbreak in the post-vaccine era. The epidemiological investigation included case finding and investigation, sequencing of RNA-positive clinical specimens, product trace-back and virologic testing and sequencing of HAV RNA from the product. Additionally, an online survey open to all Hawaii residents was conducted to estimate baseline commercial food consumption. We identified 292 confirmed HAV cases, of whom 11 (4%) were possible secondary cases. Seventy-four (25%) were hospitalised and there were two deaths. Among all cases, 94% reported eating at Oahu or Kauai Island branches of Restaurant Chain A, with 86% of those cases reporting raw scallop consumption. In contrast, a food consumption survey conducted during the outbreak indicated 25% of Oahu residents patronised Restaurant Chain A in the 7 weeks before the survey. Product trace-back revealed a single distributor that supplied scallops imported from the Philippines to Restaurant Chain A. Recovery, amplification and sequence comparison of HAV recovered from scallops revealed viral sequences matching those from case-patients. Removal of product from implicated restaurants and vaccination of those potentially exposed led to the cessation of the outbreak. This outbreak further highlights the need for improved imported food safety.