Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task and have been shown to improve assessment reliability and discrimination. It is unclear what effect they will have on assessor stringency/leniency; we hypothesize that they will reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs. We also examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners' assessment scores were compared to the overall mean of all learners. This ensures that the assessor was not simply awarding outlying scores because they worked with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases. Conclusion: Our proposed method successfully identified outlier assessors and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
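The two-step screen described in the Methods lends itself to a direct implementation. Below is a minimal sketch in Python, assuming a flat table of assessments with hypothetical column names (assessor, learner, score); the two-standard-deviation threshold follows the abstract, while the exact rule used in step 2 to compare a flagged assessor's learners against the overall learner mean is not specified there, so the check coded here is one plausible reading.

```python
import pandas as pd

def find_outlier_assessors(df: pd.DataFrame, n_sd: float = 2.0) -> list:
    """Two-step outlier-assessor screen.

    Expects one row per completed assessment with hypothetical columns
    'assessor', 'learner' and 'score'.
    """
    overall_mean = df["score"].mean()
    overall_sd = df["score"].std()

    # Step 1: an assessor is a potential outlier if the mean of the
    # scores they awarded is more than n_sd SDs from the overall mean.
    assessor_means = df.groupby("assessor")["score"].mean()
    potential = assessor_means[
        (assessor_means - overall_mean).abs() > n_sd * overall_sd
    ].index

    # Step 2: keep a flagged assessor only if their learners look typical,
    # i.e. the outlying scores are not explained by outlying learners.
    # The comparison rule below (learner-cohort mean within n_sd SDs of
    # the learner-mean distribution) is one plausible operationalisation.
    learner_means = df.groupby("learner")["score"].mean()
    outliers = []
    for assessor in potential:
        their_learners = df.loc[df["assessor"] == assessor, "learner"].unique()
        cohort_mean = learner_means.loc[their_learners].mean()
        if abs(cohort_mean - learner_means.mean()) <= n_sd * learner_means.std():
            outliers.append(assessor)
    return outliers
```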
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With the ideal assessment tool, most of the score variability would be explained by variability in learners' performances. In reality, however, much of the observed variability is explained by other factors. The purpose of this study is to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, post-graduate year (PGY) level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. Our G study revealed that 21% of total score variance was explained by a combination of PGY level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase. An average of 51 vs 27 forms per learner is required to achieve a reliability of 0.80 in the pre- and post-implementation phases, respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners' performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of the learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted in other emergency medicine training programs.
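The reported "forms per learner needed for reliability 0.80" comes from decision-study arithmetic on the estimated variance components. The sketch below illustrates that arithmetic for a simple one-facet (learner x form) design with made-up variance components; the study's actual G study involved more facets (learner, rater, form, PGY level), so these numbers will not reproduce the reported 51 vs 27 forms.

```python
# Decision-study (D-study) arithmetic for a one-facet (learner x form)
# design. With variance components var_learner and var_residual, the
# generalizability coefficient of a mean over n forms is
#   E(rho^2) = var_learner / (var_learner + var_residual / n),
# which inverts to the number of forms needed for a target reliability.
# The components below are made-up stand-ins, not the study's estimates.

def reliability(var_learner: float, var_residual: float, n_forms: int) -> float:
    return var_learner / (var_learner + var_residual / n_forms)

def forms_needed(var_learner: float, var_residual: float, target: float = 0.80) -> float:
    return (target / (1.0 - target)) * (var_residual / var_learner)

print(forms_needed(0.15, 2.0))   # ~53.3 forms
print(forms_needed(0.60, 4.0))   # ~26.7 forms
```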
Catechol-O-methyltransferase (COMT) has a central role in brain dopamine, noradrenalin and adrenalin signaling, and has been suggested to be involved in the pathogenesis and pharmacological treatment of affective disorders. The functional single nucleotide polymorphism (SNP) in exon 4 (Val158Met, rs4680) influences COMT enzyme activity. The Val158Met polymorphism is a commonly studied variant in psychiatric genetics, and initial studies in schizophrenia and bipolar disorder presented evidence for association with the Met allele. In unipolar depression, while some investigations point to an association with the Met/Met genotype and others have found a link between the Val/Val genotype and depression, most studies cannot detect any difference in Val158Met allele frequency between depressed individuals and controls.
In the present study, we further elucidated the impact of COMT polymorphisms, including Val158Met, in MDD. We investigated 1,250 subjects with a DSM-IV and/or ICD-10 diagnosis of major depression (MDD) and 1,589 control subjects from the UK. A total of 24 SNPs spanning the COMT gene were successfully genotyped using the Illumina HumanHap610-Quad Beadchip (22 SNPs), the SNPlex™ genotyping system (1 SNP), and Sequenom MassARRAY® iPLEX Gold (1 SNP). Statistical analyses were implemented using PASW Statistics 18, FINETTI (http://ihg.gsf.de/cgi-bin/hw/hwa1.pl), UNPHASED version 3.0.10 and Haploview 4.0.
Neither single-marker nor haplotypic association was found with the functional Val158Met polymorphism or with any of the other SNPs genotyped. Our findings do not provide evidence that COMT plays a role in MDD or that this gene explains part of the genetic overlap with bipolar disorder.
As awareness of ADHD has increased throughout the world, interest has grown beyond the constellation of ADHD symptoms to include long-term effects and the impact on people's lives.
To examine the consequences of childhood ADHD and the relevance of these outcomes in different world regions.
This analysis examined the publication trends of studies of long-term outcomes of ADHD over time and among world regions.
Study identification followed Cochrane guidelines. Twelve databases were searched for reports published in English 1980–2010. Limiting criteria were designed to maximize study inclusion while maintaining a high level of study rigor: the studies were to
(1) be peer-reviewed,
(2) be primary study reports,
(3) include a comparator group or baseline, and
(4) report outcome results measured for a mean of 8 years after the start of the study (for prospective studies; the range across all studies was 6 months to 40 years), or in late adolescence or adulthood.
The fully-defined electronic search yielded 4615 citations, which were then reviewed manually based on titles and abstracts, yielding a final total of 371 studies.
Study publication trends analysed included publication year, country and world region of origin, outcome types, and study types. In general, the number of studies published per year globally has increased substantially (from 2 in 1980 to more than 40 per year in 2007 and 2008), with differences observed between Europe and North America.
Analysis of publication trends can provide insight into outcomes of ADHD and the focus of specific world regions.
As awareness of ADHD has increased worldwide, interest has grown beyond the constellation of ADHD symptoms, to include long-term impact on people's lives and society in general.
To examine the results of studies of long-term life consequences of ADHD.
To identify areas of life affected long-term by ADHD and differences in outcomes with and without ADHD treatment.
Following Cochrane guidelines, 12 databases were searched for studies published in English (1980–2010). Limiting criteria maximized study inclusion while maintaining high study rigor: (1) peer-reviewed, (2) primary study reports, (3) including a comparator condition, and (4) reporting long-term outcomes (mean 8 years, range 6 months to 40 years from study start for prospective studies; subjects in adolescence or adulthood for retrospective or cross-sectional studies). The fully-defined electronic search yielded 4615 citations. Manual review based on titles and abstracts yielded 340 studies included in this analysis of outcomes.
The majority of studies (86%, 243 of 281; studies of untreated ADHD only) showed that untreated ADHD has substantial negative long-term outcomes, encompassing nine broad-ranging areas of life: non-medicinal drug use/addictive behaviour, antisocial behaviour, academic achievement, occupational achievement, public services use, self-esteem, social function, obesity, and driving outcomes. In contrast, most studies including ADHD pharmacotherapy and/or non-pharmacotherapy (94%, 46 of 49) showed that compared with baseline or untreated ADHD, long-term outcomes improved or stabilized with treatment of ADHD.
ADHD has notable negative long-term consequences, and this negative impact may be reduced with treatment of ADHD. Supported by Shire Development Inc.
Numerous studies have applied novel multivariate statistical approaches to the analysis of brain alterations in patients with schizophrenia. However, the diagnostic accuracy of the reported predictive models varies widely, making it difficult to evaluate the overall potential of these studies to inform clinical diagnosis.
We conducted a comprehensive literature search to identify all studies reporting the performance of neuroimaging-based multivariate predictive models for differentiating patients with schizophrenia from healthy control subjects. The robustness of the results, as well as the effect of potentially confounding continuous variables (e.g. age, gender ratio, year of publication), was investigated.
The final sample consisted of n=37 studies including n=1491 patients with schizophrenia and n=1488 healthy controls. Meta-analysis of the complete sample showed a sensitivity of 80.7% (95%-CI: 77.0 to 83.9%) and a specificity of 80.2% (95%-CI: 76.7 to 83.3%). Separate analyses for the different imaging modalities showed similar diagnostic accuracy for the structural MRI studies (sensitivity 77.3%, specificity 78.7%), the fMRI studies (sensitivity 81.4%, specificity 82.4%) and the resting-state fMRI studies (sensitivity 86.9%, specificity 80.3%). Moderator analysis showed significant effects of patient age on sensitivity (p=0.021) and of the positive-to-negative symptom ratio on specificity (p=0.028), indicating better diagnostic accuracy in older patients and in patients with predominantly positive symptoms.
Our analysis indicates an overall sensitivity and specificity of around 80% for neuroimaging-based predictive models differentiating patients with schizophrenia from healthy controls. The results underline the potential applicability of neuroimaging-based predictive models to the diagnosis of schizophrenia.
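The pooled sensitivity and specificity above are standard meta-analytic quantities. As an illustration only (the abstract does not state the exact pooling model), a fixed-effects, inverse-variance pooling of per-study proportions on the logit scale can be sketched as follows, with invented study counts:

```python
import math

def pool_logit(events, totals):
    """Fixed-effects (inverse-variance) pooling of per-study proportions
    (e.g. true positives / diseased = sensitivity) on the logit scale.
    Returns the pooled proportion with a 95% confidence interval."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0               # continuity correction
        p = e / n
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / (1.0 / e + 1.0 / (n - e)))  # 1 / Var(logit)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(pooled), expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)

# Invented per-study counts: correctly classified patients / all patients.
print(pool_logit(events=[40, 25, 33], totals=[50, 30, 40]))
```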
Animal research suggests that weight gain may be caused by olanzapine-induced melatonin suppression. We conducted a pilot study of psychiatric patients treated with olanzapine to examine whether melatonin was suppressed and, if so, the dose needed to replace this deficit. The relationship between melatonin and metabolic indices was also examined.
Ten patients with schizophrenia (N=3), schizoaffective disorder (N=3), or bipolar disorder (N=4) completed the study. All patients were male, with an average age of 50.6 years. Patients were treated with olanzapine for 5 weeks, then randomized to either 0.3 mg (N=4) or 3 mg (N=6) of melatonin supplementation in addition to the olanzapine for another 6 weeks. We obtained baseline, week-6, and week-12 measures of the major metabolite of melatonin in the urine, 6-sulfatoxymelatonin (aMT6s), adjusted for creatinine excretion. We measured weight, glycemia, cholesterol, and triglycerides weekly.
Olanzapine treatment was associated with a trend toward decreases in melatonin from baseline to week-6 (p=.14). Analysis of a subsample of patients diagnosed with schizoaffective or bipolar disorder showed significant decreases from baseline to week-6 (p=.02). Supplementation with both 0.3 mg and 3 mg of melatonin increased urinary melatonin levels from week-6 to week-12 (p=.12 and p=.02, respectively). Total cholesterol increased initially but demonstrated a trend toward decrease when melatonin was supplemented (p=.10).
Olanzapine appears to be related to melatonin suppression. Melatonin supplementation reverses this suppression and may have the potential to reverse metabolic effects associated with olanzapine. Further studies are needed to examine the metabolic effects of olanzapine with melatonin.
Cocaine abuse continues to be epidemic, and yet there are no FDA approved medications for the treatment of cocaine use disorders within the United States.
This 12-week, prospective, double-blind, randomized, placebo-controlled study examined the effectiveness of quetiapine (Seroquel XR™) versus matched placebo for the treatment of cocaine dependence in non-psychotic individuals. Sixty individuals with a diagnosis of cocaine dependence were randomized in this study. Those randomized to quetiapine (N=29) were titrated up to a target dose of 400 mg/day, while those in the placebo arm (N=31) were given a matched placebo. All subjects had weekly clinic visits and a cognitive-behavioral therapy group session. Outcome measures included questionnaires of cocaine use, money spent on cocaine, and urine drug screens (UDS).
The drop-out rate was substantial at 68%; however, there were no differences between the two arms of the study. Using a repeated-measures ANCOVA, the subjects in this study as a whole improved by reducing their self-reported use of cocaine (p=.018) and self-reported money spent on cocaine (p=.041) over the course of the study. However, the quetiapine group was not significantly different from the placebo group. In addition, Cox regression analyses yielded non-significant differences (p = .65) between groups in predicting sobriety, defined as three weeks of negative UDS.
This study did not find group differences between the quetiapine and placebo arms, indicating that quetiapine does not appear to be beneficial in the treatment of cocaine dependence.
Tourniquets (TQs) save lives. Although military-approved TQs appear more effective than improvised TQs in controlling exsanguinating extremity hemorrhage, their bulk may preclude everyday carry (EDC) by civilian lay-providers, limiting availability during emergencies.
The purpose of the current study was to compare the efficacy of three novel commercial TQ designs to a military-approved TQ.
Nine Emergency Medicine residents evaluated four different TQ designs: Gen 7 Combat Application Tourniquet (CAT7; control), Stretch Wrap and Tuck Tourniquet (SWAT-T), Gen 2 Rapid Application Tourniquet System (RATS), and Tourni-Key (TK). Popliteal artery flow cessation was determined using a ZONARE ZS3 ultrasound. Steady state maximal generated force was measured for 30 seconds with a thin-film force sensor.
Success rates for distal arterial flow cessation were 89% CAT7; 67% SWAT-T; 89% RATS; and 78% TK (H 0.89; P = .83). Mean application times were 10.4 (SD = 1.7) seconds CAT7; 23.1 (SD = 9.0) seconds SWAT-T; 11.1 (SD = 3.8) seconds RATS; and 20.0 (SD = 7.1) seconds TK (F 9.71; P < .001). Steady state maximal forces were 29.9 (SD = 1.2) N CAT7; 23.4 (SD = 0.8) N SWAT-T; 33.0 (SD = 1.3) N RATS; and 41.9 (SD = 1.3) N TK.
All novel TQ systems were non-inferior to the military-approved CAT7. Mean application times were less than 30 seconds for all four designs. The size of these novel TQs may make them more conducive to lay-provider EDC, thereby increasing community resiliency and improving the response to high-threat events.
There is increasing interest in the clinical and aetiological overlap between autism spectrum disorders and schizophrenia spectrum disorders, reported to co-occur at both diagnostic and trait levels. Individually, sub-clinical autistic and psychotic traits are associated with poor clinical outcomes, including increased depressive symptomatology, self-harming behaviour and suicidality. However, the implications when both traits co-occur remain poorly understood. The study aimed to (1) examine the relationship between autistic and psychotic traits and (2) determine if their co-occurrence increases depressive symptomatology, self-harm and suicidality.
Cross-sectional data from a self-selecting (online and poster advertising) sample of the adult UK population (n = 653) were collected using an online survey. Validated self-report measures were used to assess sub-clinical autistic and psychotic traits, depressive symptomatology, self-harming behaviour and suicidality. Correlation and regression analyses were performed.
A positive correlation between sub-clinical autistic and positive psychotic traits was confirmed (rs = 0.509, p < 0.001). Overall, autistic traits and psychotic traits were, independently, significant predictors of depression, self-harm and suicidality. Intriguingly, however, depression was associated with a negative interaction between the autistic domain attention to detail and psychotic traits.
This study supports previous findings that sub-clinical autistic and psychotic traits are largely independently associated with depression, self-harm and suicidality, and is novel in finding that their combined presence has no additional effect on depression, self-harm or suicidality. These findings highlight the importance of considering both autistic and psychotic traits and their symptom domains in research and when developing population-based depression prevention and intervention strategies.
Q fever (caused by Coxiella burnetii) is thought to have an almost world-wide distribution, but few countries have conducted national serosurveys. We measured Q fever seroprevalence using residual sera from diagnostic laboratories across Australia. Individuals aged 1–79 years in 2012–2013 were sampled to be proportional to the population distribution by region, distance from metropolitan areas and gender. A 1/50 serum dilution was tested for the Phase II IgG antibody against C. burnetii by indirect immunofluorescence. We calculated crude seroprevalence estimates by age group and gender, as well as age standardised national and metropolitan/non-metropolitan seroprevalence estimates. Of 2785 sera, 99 tested positive. Age standardised seroprevalence was 5.6% (95% confidence interval (CI) 4.5%–6.8%), and similar in metropolitan (5.5%; 95% CI 4.1%–6.9%) and non-metropolitan regions (6.0%; 95% CI 4.0%–8.0%). More males were seropositive (6.9%; 95% CI 5.2%–8.6%) than females (4.2%; 95% CI 2.9%–5.5%), with peak seroprevalence at 50–59 years (9.2%; 95% CI 5.2%–13.3%). Q fever seroprevalence for Australia was higher than expected (especially in metropolitan regions) and higher than estimates from the Netherlands (2.4%; pre-outbreak) and US (3.1%), but lower than for Northern Ireland (12.8%). Robust country-specific seroprevalence estimates, with detailed exposure data, are required to better understand who is at risk and the need for preventive measures.
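For readers unfamiliar with direct age standardisation, the calculation weights each age group's crude seroprevalence by that group's share of a standard population. A minimal sketch follows; the overall totals match the 2785 sera and 99 positives reported above, but the age-group split and standard weights are invented, so the output will not reproduce the reported 5.6%.

```python
# Direct age standardisation: weight each age group's crude seroprevalence
# by that group's share of a standard population. Totals match the 2785
# sera / 99 positives above, but the split and weights are invented.

age_groups  = ["1-19", "20-39", "40-59", "60-79"]
positives   = [10,     25,      40,      24]      # seropositive sera per group
tested      = [600,    800,     800,     585]     # sera tested per group
std_weights = [0.25,   0.30,    0.28,    0.17]    # standard population shares (sum to 1)

crude = [p / n for p, n in zip(positives, tested)]
standardised = sum(w * c for w, c in zip(std_weights, crude))
print(f"age-standardised seroprevalence: {standardised:.1%}")
```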
We conducted a systematic review and network meta-analysis to determine the comparative efficacy of antibiotics used to control bovine respiratory disease (BRD) in beef cattle on feedlots. The information sources for the review were: MEDLINE®, MEDLINE In-Process and MEDLINE® Daily, AGRICOLA, Epub Ahead of Print, Cambridge Agricultural and Biological Index, Science Citation Index, Conference Proceedings Citation Index – Science, the Proceedings of the American Association of Bovine Practitioners, World Buiatrics Congress, and the United States Food and Drug Administration Freedom of Information New Animal Drug Applications summaries. The eligible population was weaned beef cattle raised in intensive systems. The interventions of interest were injectable antibiotics used at the time the cattle arrived at the feedlot. The outcome of interest was the diagnosis of BRD within 45 days of arrival at the feedlot. The network meta-analysis included data from 46 studies and 167 study arms identified in the review. The results suggest that macrolides are the most effective antibiotics for the reduction of BRD incidence. Injectable oxytetracycline effectively controlled BRD compared with no antibiotics; however, it was less effective than macrolide treatment. Because oxytetracycline is already commonly used to prevent, control, and treat BRD in groups of feedlot cattle, the use of injectable oxytetracycline for BRD control might have advantages from an antibiotic stewardship perspective.
Vaccination against putative causal organisms is a frequently used and preferred approach to controlling bovine respiratory disease complex (BRD) because it reduces the need for antibiotic use. Because approximately 90% of feedlots use and 90% of beef cattle receive vaccines in the USA, information about their comparative efficacy would be useful for selecting a vaccine. We conducted a systematic review and network meta-analysis of studies assessing the comparative efficacy of vaccines to control BRD when administered to beef cattle at or near their arrival at the feedlot. We searched MEDLINE, MEDLINE In-Process, MEDLINE Daily, Epub Ahead of Print, AGRICOLA, Cambridge Agricultural and Biological Index, Science Citation Index, and Conference Proceedings Citation Index – Science and hand-searched the conference proceedings of the American Association of Bovine Practitioners and World Buiatrics Congress. We found 53 studies that reported BRD morbidity within 45 days of feedlot arrival. The largest connected network of studies, which involved 17 vaccine protocols from 14 studies, was included in the meta-analysis. Consistent with previous reviews, we found little compelling evidence that vaccines used at or near arrival at the feedlot reduce the incidence of BRD diagnosis.
This proposed contribution to the special issue of ILWCH offers a theoretical re-consideration of the Liberian project. If, as is commonly supposed in its historiography and across contemporary discourse regarding its fortunes into the twenty-first century, Liberia is a notable, albeit contested, instance of the modern era's correctable violence in that it stands as an imperfect realization of the emancipated slave, the liberated colony, and the freedom to labor unalienated, then such representation continues to hide more than it reveals. This essay, instead, reads Liberia as an instructive leitmotif for the conversion of racial slavery's synecdochical plantation system in the Americas into the plantation of the world writ large: the global scene of antiblackness and the immutable qualification for enslavement accorded black positionality alone. Transitions between political economic systems—from slave trade to “re-colonization,” from Firestone occupation to dictatorial-democratic regimes—reemerge from this re-examination as crucial but inessential to understanding Liberia's position, and thus that of black laboring subjects, in the modern world. I argue that slavery is the simultaneous primitive accumulation of black land and bodies, but that this reality largely escapes current conceptualization of not only the history of labor but also that of enslavement. In other words, the African slave trade (driven first by Arabs in the Indian Ocean region, then Europeans in the Mediterranean, and, subsequently, Euro-Americans in the Atlantic) did not simply leave as its corollary effect, or byproduct, the underdevelopment of African societies. The trade in African flesh was at once the co-production of a geography of desire in which blackness is perpetually fungible at every scale, from the body to the nation-state to its soil—all treasures not simply for violation and exploitation, but more importantly, for accumulation and all manner of usage. The Liberian project elucidates this ongoing reality in distinctive ways—especially when we regard it through the lens of the millennium-plus paradigm of African enslavement. Conceptualizing slavery's “afterlife” entails exploring the ways that emancipation extended, not ameliorated, the chattel condition, and as such, impugns the efficacy of key analytic categories like “settler,” “native,” “labor,” and “freedom” when applied to black existence. Marronage, rather than colonization or emancipation, situates Liberia within the intergenerational struggle of, and over, black work against social death. Read as enslavement's conversion, this essay neither impugns nor heralds black action and leadership on the Liberian project at a particular historical moment, but rather agitates for centering black thought on the ongoing issue of black fungibility and social captivity that Liberia exemplifies. I argue that such a reading of Liberia presents a critique of both settler colonialism and of a certain conceptualization of the black radical tradition and its futures in heavily optimist, positivist, and political economic terms that are enjoying considerable favor in leading discourse on black struggle today.
Innovation Concept: The outcome of emergency medicine training is to produce physicians who can competently run an emergency department (ED) shift. While many workplace-based ED assessments focus on discrete tasks of the discipline, others emphasize assessment of performance across the entire shift. However, the quality of assessments is generally poor and these tools often lack validity evidence. The use of entrustment scale anchors may help to address these psychometric issues. The aim of this study was to develop and gather validity evidence for a novel tool to assess a resident's ability to independently run an ED shift. Methods: Through a nominal group technique, local and national stakeholders identified dimensions of performance reflective of a competent ED physician. These dimensions were included in a new tool that was piloted in the Department of Emergency Medicine at the University of Ottawa during a 4-month period. Psychometric characteristics of the items were calculated, and a generalizability analysis was used to determine the reliability of scores. An ANOVA was conducted to determine whether scores increased as a function of training level (junior = PGY1-2, intermediate = PGY3, senior = PGY4-5) and varied by ED treatment area. Safety for independent practice was analyzed with a dichotomous score. Curriculum, Tool or Material: The developed Ottawa Emergency Department Shift Observation Tool (O-EDShOT) includes 12 items rated on a 5-point entrustment scale, with a global assessment item and 2 short-answer questions. Eight hundred and thirty-three assessments were completed by 78 physicians for 45 residents. Mean scores differed significantly by training level (p < .001), with junior residents receiving lower ratings (3.48 ± 0.69) than intermediate residents (3.98 ± 0.48), who in turn received lower ratings than senior residents (4.54 ± 0.42). Scores did not vary by ED treatment area (p > .05). Residents judged to be safe to independently run the shift had significantly higher mean scores than those judged not to be safe (4.74 ± 0.31 vs 3.75 ± 0.66; p < .001). Fourteen observations per resident, the typical number recorded during a 1-month rotation, were required to achieve a reliability of 0.80. Conclusion: The O-EDShOT successfully discriminated between junior, intermediate and senior-level residents regardless of ED treatment area. Multiple sources of evidence support the O-EDShOT producing valid scores for assessing a resident's ability to independently run an ED shift.
Unlike for many other respiratory infections, the seasonality of pertussis is not well understood. While evidence of seasonal fluctuations in pertussis incidence has been noted in some countries, there have been conflicting findings, including in the Australian context. We investigated this issue by analysing the seasonality of pertussis notifications in Australia using monthly data from January 1991 to December 2016. Data were made available for all states and territories in Australia except the Australian Capital Territory and were stratified into age groups. Using a time-series decomposition approach, we formulated a generalised additive model in which seasonality is expressed using cosinor terms to estimate the amplitude and peak timing of pertussis notifications in Australia. We also compared these characteristics across different jurisdictions and age groups. We found evidence that pertussis notifications exhibit seasonality, with peaks observed during the spring and summer months (November–January) in Australia and across different states and territories. During peak months, notifications are expected to increase by about 15% compared with the yearly average. Peak notifications for children <5 years occurred 1–2 months later than in the general population, which provides support to the theory that older household members remain an important source of pertussis infection for younger children. In addition, our results provide a more comprehensive spatial picture of seasonality in Australia, a feature lacking in previous studies. Finally, our findings suggest that seasonal forcing may be useful to consider in future population transmission models of pertussis.
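The cosinor approach mentioned above represents a 12-month cycle with a sine/cosine pair, so the seasonal amplitude and peak month fall out of two regression coefficients. The following sketch fits such a pair by ordinary least squares on simulated log counts, a simplification of the generalised additive model used in the study; all data and parameter values are invented.

```python
import numpy as np

# Cosinor sketch: model log monthly counts with a 12-month sine/cosine
# pair; amplitude and peak timing are recovered from the coefficients.
# OLS on simulated log counts stands in for the study's GAM.
rng = np.random.default_rng(0)
months = np.arange(120)                                   # 10 years, monthly
lam = 100 * np.exp(0.15 * np.cos(2 * np.pi * (months - 11) / 12))
counts = rng.poisson(lam)                                 # invented notifications

X = np.column_stack([
    np.ones(len(months)),
    np.cos(2 * np.pi * months / 12),
    np.sin(2 * np.pi * months / 12),
])
beta, *_ = np.linalg.lstsq(X, np.log(counts + 0.5), rcond=None)

amplitude = np.hypot(beta[1], beta[2])                    # on the log scale
peak_month = (np.arctan2(beta[2], beta[1]) * 12 / (2 * np.pi)) % 12
print(f"peak-month increase over average: {np.expm1(amplitude):.0%}; "
      f"peak at month index {peak_month:.1f}")
```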
The role that vitamin D plays in pulmonary function remains uncertain. Epidemiological studies have reported mixed findings for the serum 25-hydroxyvitamin D (25(OH)D)–pulmonary function association. We conducted the largest cross-sectional meta-analysis of the 25(OH)D–pulmonary function association to date, based on nine European ancestry (EA) cohorts (n 22 838) and five African ancestry (AA) cohorts (n 4290) in the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium. Data were analysed using linear models by cohort and ancestry. Effect modification by smoking status (current/former/never) was tested. Results were combined using fixed-effects meta-analysis. Mean serum 25(OH)D was 68 (sd 29) nmol/l for EA and 49 (sd 21) nmol/l for AA. For each 1 nmol/l higher 25(OH)D, forced expiratory volume in the 1st second (FEV1) was higher by 1·1 ml in EA (95 % CI 0·9, 1·3; P<0·0001) and 1·8 ml (95 % CI 1·1, 2·5; P<0·0001) in AA (P for race difference=0·06), and forced vital capacity (FVC) was higher by 1·3 ml in EA (95 % CI 1·0, 1·6; P<0·0001) and 1·5 ml (95 % CI 0·8, 2·3; P=0·0001) in AA (P for race difference=0·56). Among EA, the 25(OH)D–FVC association was stronger in smokers: per 1 nmol/l higher 25(OH)D, FVC was higher by 1·7 ml (95 % CI 1·1, 2·3) for current smokers and 1·7 ml (95 % CI 1·2, 2·1) for former smokers, compared with 0·8 ml (95 % CI 0·4, 1·2) for never smokers. In summary, the 25(OH)D associations with FEV1 and FVC were positive in both ancestries. In EA, a stronger association was observed for smokers compared with never smokers, which supports the importance of vitamin D in vulnerable populations.
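The two-stage analysis described above (linear models by cohort, then fixed-effects meta-analysis) amounts to inverse-variance weighting of the per-cohort 25(OH)D coefficients. A minimal sketch with hypothetical slopes and standard errors:

```python
import numpy as np

def fixed_effects_combine(betas, ses):
    """Inverse-variance (fixed-effects) combination of per-cohort slopes,
    e.g. ml of FEV1 per nmol/l of 25(OH)D. Returns estimate and 95% CI."""
    w = 1.0 / np.square(ses)
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, beta - 1.96 * se, beta + 1.96 * se

# Hypothetical per-cohort coefficients and standard errors:
print(fixed_effects_combine(np.array([1.2, 0.9, 1.4, 1.0]),
                            np.array([0.3, 0.2, 0.5, 0.25])))
```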
We investigated risk factors for severe acute lower respiratory infections (ALRI) among hospitalised children <2 years, with a focus on the interactions between virus and age. Statistical interactions between age and respiratory syncytial virus (RSV), influenza, adenovirus (ADV) and rhinovirus on the risk of ALRI outcomes were investigated. Of 1780 hospitalisations, 228 (12.8%) were admitted to the intensive care unit (ICU). The median (range) length of stay (LOS) in hospital was 3 (1–27) days. An increase of 1 month of age was associated with a decreased risk of ICU admission (rate ratio (RR) 0.94; 95% confidence interval (CI) 0.91–0.98) and with a decrease in LOS (RR 0.96; 95% CI 0.95–0.97). Associations between RSV, influenza and ADV positivity and ICU admission and LOS were significantly modified by age. Children <5 months old were at the highest risk of RSV-associated severe outcomes, while children >8 months were at greater risk of influenza-associated ICU admission and long hospital stay. Children with ADV had increased LOS across all ages. In the first 2 years of life, the effects of different viruses on ALRI severity vary with age. Our findings help to identify specific ages that would most benefit from virus-specific interventions such as vaccines and antivirals.
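Effect modification of this kind is typically tested with an age-by-virus interaction term in a count model, with exponentiated coefficients read as rate ratios. The abstract does not specify the model family, so the following is a sketch only, using a Poisson GLM on simulated data with hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Sketch: does age (months) modify the effect of RSV positivity on length
# of stay? The age:virus interaction carries the effect modification;
# exponentiated coefficients are rate ratios. Data are simulated and the
# column names hypothetical.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"age_months": rng.integers(0, 24, n),
                   "rsv": rng.integers(0, 2, n)})
mu = np.exp(1.0 + 0.4 * df["rsv"] - 0.02 * df["age_months"]
            - 0.03 * df["rsv"] * df["age_months"])
df["los"] = rng.poisson(mu)

fit = smf.glm("los ~ age_months * rsv", data=df,
              family=sm.families.Poisson()).fit()
print(np.exp(fit.params))   # 'age_months:rsv' row is the interaction RR
```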
Mechanical forces during machine milking induce changes in teat condition, which can be differentiated into short-term and long-term changes. Machine milking-induced short-term changes in teat condition (STC) are defined as tissue responses to a single milking and have been associated with the risk of new intramammary infection. However, their association with teat characteristics, such as teat-end shape, has not been investigated by rigorous methods. The primary objective was to determine the association of STC, as measured by ultrasonography, with teat-end shape. The secondary objective was to describe possible differences in the recovery time of teat tissue after machine milking among teats with different teat-end shapes. Holstein cows (n=128) were enrolled in an observational study, housed in free-stall pens with sand bedding and milked three times a day. Ultrasonography of the left front and right hind teat was performed after teat preparation before milking (t−1), immediately after milking (t0) and 1, 3, 5 and 7 h after milking (t1, t3, t5, t7). The teat tissue parameters measured from ultrasound scans were teat canal length, teat-end diameter, teat-end diameter at the midpoint between the distal and proximal end of the teat canal, teat wall thickness, and teat cistern width. Teat-end shape was assessed visually and classified into three categories: pointed, flat and round. Multivariable linear regression analyses showed differences in the relative change of teat tissue parameters (compared with t−1) at t0 among teats with different teat-end shapes, with most parameters showing the largest change for round teats. The premilking values were reached (recovery time) after 7 h in teats with a pointed teat-end shape, whereas recovery time was greater than 7 h in teats with flat and round teat-end shapes. Under the same liner and milking machine conditions, teats with a round teat-end shape had the most severe short-term changes. The results of this observational study indicated that teat-end shape may be one of the factors that contribute to the severity of STC.
The history of British saints on the Continent is notoriously difficult to research – and I deliberately use the words ‘British’ and ‘Briton’ even where others might prefer ‘Breton’, because for the sixth century it is usually impossible to make a definite distinction between those who originated in Great Britain and those who came from Brittany. The majority of our sources are late: the most substantial body of material is hagiographic, but the Vita Winwaloei was written by Wrdestin and Clement in the first years of the ninth century, that of Machutus (Malo) by Bili around 860, and that of Paul Aurelian by Wrmonoc in 884. Of the two Lives of Gildas, the earliest appears to belong to the eleventh century, and the second, by Caradoc of Llancarfan, to the twelfth. The first Life of Samson (VIS) would seem to have been composed initially during the seventh century, which is when the author himself claims to have been active, and there are certain linguistic and terminological features in the Life that support such a date. There may, of course, have been a subsequent moment of what French scholars are now describing as réécriture, but even so the fact that the text makes no mention of a diocese of Dol surely indicates that the work as we have it antedates the foundation of the see, whose existence is not clearly attested before the mid-ninth century.
For Samson, unlike Gildas, Paul Aurelian, Winwaloe (Gwennolé), and Malo, we at least have the evidence of the subscription list of the Council of Paris, which can be dated by means of the other signatories to the period 556 to 573. The Council provides us with a useful point of departure for considering the activities of British ascetics in the Merovingian world. Having considered the early evidence for Samson, I will turn to that relating to the Irish saint Columbanus, which arguably gives us our most extensive block of dateable evidence for the influence of Britons on the Continent, before returning to what the Vita Samsonis has to say about the saint's Continental career, and the ways in which it complements and differs from the Columbanian material.