The catastrophic declines of three species of ‘Critically Endangered’ Gyps vultures in South Asia were caused by unintentional poisoning by the non-steroidal anti-inflammatory drug (NSAID) diclofenac. Despite a ban on its veterinary use in 2006 (India, Nepal, Pakistan) and 2010 (Bangladesh), residues of diclofenac have continued to be found in cattle carcasses and in dead wild vultures. Another NSAID, meloxicam, has been shown to be safe to vultures. From 2012 to 2018, we undertook covert surveys of pharmacies in India, Nepal and Bangladesh to investigate the availability and prevalence of NSAIDs for the treatment of livestock. The purpose of the study was to establish whether diclofenac continued to be sold for veterinary use, whether the availability of meloxicam had increased, and which other veterinary NSAIDs were available. The availability of diclofenac declined in all three countries, virtually disappearing from pharmacies in Nepal and Bangladesh, highlighting the advances made in these two countries to reduce this threat to vultures. In India, diclofenac still accounted for 10–46% of all NSAIDs offered for sale for livestock treatment in 2017, suggesting weak enforcement of existing regulations and a continued high risk to vultures. Availability of meloxicam increased in all countries and it was the most common veterinary NSAID in Nepal (89.9% in 2017). Although the most widely available NSAID in India in 2017, meloxicam accounted for only 32% of products offered for sale. In Bangladesh, meloxicam was less commonly available than the vulture-toxic NSAID ketoprofen (28% and 66%, respectively, in 2018), despite the partial government ban on ketoprofen in 2016. Eleven different NSAIDs were recorded, several of which are known or suspected to be toxic to vultures. Conservation priorities should include awareness raising, stricter implementation of current bans, bans on other vulture-toxic veterinary NSAIDs, especially aceclofenac and nimesulide, and safety-testing of other NSAIDs on Gyps vultures to identify safe and toxic drugs.
Levamisole is an increasingly common cutting agent used with cocaine. Both cocaine and levamisole can have local and systemic effects on patients.
A retrospective case series was conducted of patients with a cocaine-induced midline destructive lesion or levamisole-induced vasculitis, who presented to a Dundee hospital or the practice of a single surgeon in Paisley, from April 2016 to April 2019. A literature review on the topic was also carried out.
Nine patients from the two centres were identified. One patient appeared to have levamisole-induced vasculitis, with raised proteinase 3, perinuclear antineutrophil cytoplasmic antibody positivity and arthralgia that improved on systemic steroids. The other eight patients had features of a cocaine-induced midline destructive lesion.
As the use of cocaine increases, ENT surgeons will see more of the complications associated with it. This paper highlights some of the diagnostic issues and proposes a management strategy as a guide to this complex patient group. Often, multidisciplinary management is needed.
Introduction: Paramedics commonly administer intravenous dextrose to severely hypoglycemic patients. Typically, the treatment provided is a 25 g ampule of 50% dextrose (D50). This dose of D50 is meant to ensure a return to consciousness. However, this dose may be unnecessary and lead to harm or difficulties regulating blood glucose post treatment. We hypothesize that a lower dose such as dextrose 10% (D10), or titrating the D50 to the desired level of consciousness, may be optimal and avoid adverse events. Methods: We systematically searched Medline, Embase, CINAHL and Cochrane Central on June 5, 2019. PRISMA guidelines were followed. GRADE methods and risk-of-bias assessments were applied to determine the certainty of the evidence. We included primary literature investigating the use of intravenous dextrose in hypoglycemic diabetic patients presenting to paramedics or the emergency department. Outcomes of interest were related to the safe and effective reversal of symptoms and blood glucose levels (BGL). Results: 660 abstracts were screened and 40 full-text articles reviewed, with eight studies included. Data from three randomized controlled trials and five observational studies were analyzed. A single RCT comparing D10 to D50 was identified; its primary significant finding was a post-treatment glycemic profile 3.2 mmol/L higher in the D50 group, with no other significant differences between groups. When comparing pooled data from all included studies, we found higher symptom resolution in the D10 group than in the D50 group (99.8% vs. 94.9%). However, mean time to resolution was approximately 4 minutes longer with D10 (4.1 minutes for D50 vs. 8 minutes for D10). Subsequent doses were needed more often in the D10 group (23.0% vs. 16.5% for D50). The post-treatment glycemic profile was lower in the D10 group, at 5.9 mmol/L versus 8.5 mmol/L for D50. Both treatments achieved nearly complete resolution of hypoglycemia: 98.7% (D50) and 99.2% (D10). No adverse events were observed in the D10 group (0/871), compared with 12/133 in the D50 group. Conclusion: D10 may be as effective as D50 at resolving symptoms and correcting hypoglycemia. Although the desired effect can take several minutes longer, there appear to be fewer adverse events. The lower post-treatment glycemic profile may make ongoing glucose management less challenging for patients.
Introduction: The Prehospital Evidence-based Practice (PEP) program is an online, freely accessible, continuously updated repository of appraised EMS research evidence. This report is an analysis of published evidence for EMS interventions used to assess and treat patients suffering from hypoglycemia. Methods: PubMed was systematically searched in June 2019. One author screened titles, abstracts and full texts for relevance. Trained appraisers reviewed full-text articles, scored each on a three-point Level of Evidence (LOE) scale (based on study design and quality) and a three-point Direction of Evidence (DOE) scale (supportive, neutral, or opposing findings for each intervention's primary outcome), abstracted the primary outcome and setting, and assigned an outcome category (patient or process). Second-party appraisal was conducted for all included studies. The level and direction of evidence for each intervention were plotted in an evidence matrix, based on the appraisals. Results: Twenty-nine studies were included and appraised for seven interventions: five drugs (dextrose 50% (D50), dextrose 10% (D10), glucagon, oral glucose and thiamine), one assessment tool (point-of-care (POC) glucose testing) and one call disposition (treat-and-release). The most frequently reported primary outcomes related to clinical improvement (n = 15, 51.7%), feasibility/safety (n = 8, 27.6%) and diagnostics (n = 6, 20.7%). The majority of outcomes were patient focused (n = 18, 62.0%). Conclusion: EMS interventions for treating hypoglycemia are informed by high-quality supportive evidence. Both D50 and D10 are supported by high-quality evidence, suggesting D10 may be an effective alternative to the standard D50. “Treat-and-release” practices for hypoglycemia are supported by moderate-quality evidence for the patient-related outcomes of relapse, patient preference and complications. This body of evidence is high quality, patient focused and conducted in the prehospital setting, and thus generalizable to paramedic practice.
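The evidence matrix described here is essentially a 3 × 3 grid keyed by direction and level of evidence. A minimal sketch of the construction, with placeholder appraisals rather than actual PEP scores:

```python
# Sketch of a PEP-style evidence matrix: each intervention is placed in a
# 3x3 grid by Direction of Evidence (rows) and Level of Evidence (columns).
# The intervention scores below are illustrative placeholders, not PEP data.
from collections import defaultdict

LOE = {1: "high", 2: "moderate", 3: "low"}            # level of evidence
DOE = {1: "supportive", 2: "neutral", 3: "opposing"}  # direction of evidence

# (intervention, loe, doe) -- hypothetical appraisals for illustration
appraisals = [
    ("D50", 1, 1),
    ("D10", 1, 1),
    ("glucagon", 2, 1),
    ("oral glucose", 2, 1),
    ("thiamine", 3, 2),
    ("POC glucose testing", 2, 1),
    ("treat-and-release", 2, 1),
]

matrix = defaultdict(list)
for name, loe, doe in appraisals:
    matrix[(doe, loe)].append(name)

for doe in (1, 2, 3):
    for loe in (1, 2, 3):
        cell = ", ".join(matrix[(doe, loe)]) or "-"
        print(f"{DOE[doe]:>10} / {LOE[loe]:<8}: {cell}")
```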
Authors have recently suggested that family enmeshment is not synonymous with high levels of closeness or cohesion. A model proposed by Green and Werner clarifies the cohesion-enmeshment domain by distinguishing between closeness-caregiving and intrusiveness as separate relationship processes. This paper examines the cross-cultural applicability of this perspective through a study of 61 married couples in France. The French version of the California Inventory for Family Assessment (CIFA), a self-report measure designed to assess clinically relevant marital dimensions, was employed. In general, spouses' reports of their marital process demonstrated high internal consistency reliabilities. Factor analysis showed meaningful factor structures distinguishing closeness-caregiving and intrusiveness, as predicted, as well as openness of communication. Significant correlations were obtained between CIFA scales and scores on the Marital Adjustment Test. These results are similar for French and American couples. Research implications for studying relationships among French-speaking couples are highlighted.
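Internal consistency reliabilities of the kind reported for the CIFA scales are conventionally computed as Cronbach's alpha. A minimal sketch of that computation on fabricated item responses (not CIFA data):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
# The response matrix below is fabricated for illustration (rows = respondents,
# columns = scale items); it is not CIFA data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, k_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(61, 1))                         # common factor
responses = latent + rng.normal(scale=0.8, size=(61, 4))  # 4 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```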
One influential view is that vulnerability to major depressive disorder (MDD) is associated with a proneness to experience negative emotions in general. In contrast, blame attribution theories emphasise the importance of blaming oneself rather than others for negative events. Our previous exploratory study provided support for the attributional hypothesis that patients with remitted MDD show no overall bias towards negative emotions, but a selective bias towards emotions entailing self-blame relative to emotions that entail blaming others. More specifically, we found a decreased proneness for contempt/disgust towards others relative to oneself (i.e. self-contempt bias). Here, we report a definitive test of the competing general negative versus specific attributional bias theories of MDD.
We compared a medication-free remitted MDD (n = 101) and a control group (n = 70) with no family or personal history of MDD on a previously validated experimental test of moral emotions. The task measures proneness to specific emotions associated with different types of self-blame (guilt, shame, self-contempt/disgust, self-indignation/anger) and blame of others (other-indignation/anger, other-contempt/disgust) whilst controlling for the intensity of unpleasantness.
We confirmed the hypothesis that patients with MDD exhibit an increased self-contempt bias with a reduction in contempt/disgust towards others. Furthermore, they also showed a decreased proneness for indignation/anger towards others.
This corroborates the prediction that vulnerability to MDD is associated with an imbalance of specific self- and other-blaming emotions rather than a general increase in negative emotions. This has important implications for neurocognitive models and calls for novel focussed interventions to rebalance blame in MDD.
We describe an ultra-wide-bandwidth, low-frequency receiver recently installed on the Parkes radio telescope. The receiver system provides continuous frequency coverage from 704 to 4032 MHz. For much of the band (…), the system temperature is approximately 22 K and the receiver system remains in a linear regime even in the presence of strong mobile phone transmissions. We discuss the scientific and technical aspects of the new receiver, including its astronomical objectives, as well as the feed, receiver, digitiser, and signal processor design. We describe the pipeline routines that form the archive-ready data products and how those data files can be accessed from the archives. The system performance is quantified, including the system noise and linearity, beam shape, antenna efficiency, polarisation calibration, and timing stability.
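The quoted system temperature can be put in context with the standard single-dish radiometer equation; a sketch under textbook definitions (the only value taken from the abstract is T_sys ≈ 22 K; all other symbols are generic, not from the receiver description):

```latex
% Standard radiometer equation for a single-dish flux-density limit.
% T_sys ~ 22 K is quoted above; the other symbols are generic definitions.
\[
  \Delta S_{\min} \simeq
    \frac{2 k_B T_{\mathrm{sys}}}{A_{\mathrm{eff}} \sqrt{n_p \,\Delta\nu\, \tau}}
\]
% k_B: Boltzmann constant; A_eff: effective collecting area;
% n_p: number of polarisations summed; \Delta\nu: bandwidth;
% \tau: integration time. A wider \Delta\nu (up to the 3328 MHz covered by
% this receiver) and a lower T_sys both reduce the detectable flux density.
```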
Horseweed is one of Kentucky’s most common and problematic weeds in no-till soybean production systems. Emergence in the fall and spring necessitates control at these times because horseweed is best managed when small. Control is typically achieved through herbicides or cover crops (CCs); integrating these practices can lead to more sustainable weed management. Two years of field experiments were conducted over 2016 to 2017 and 2017 to 2018 in Versailles, KY, to examine the use of fall herbicide (FH; namely, saflufenacil or none), spring herbicide (SH; namely, 2,4-D; dicamba; or none), and CC (namely, cereal rye or none) for horseweed management prior to soybean. Treatments were examined with a fully factorial design to assess potential interactions. The CC biomass in 2016 to 2017 was higher than in 2017 to 2018, and both herbicide programs reduced winter weed biomass in that year. The CC reduced horseweed density while growing and after termination in 1 yr. The FH reduced horseweed density through mid-spring. The FH also killed winter weeds that may have suppressed horseweed emergence; higher horseweed density resulted by soybean planting unless the CC was present to suppress the additional spring emergence. If either FH or CC was used, SH typically did not result in additional horseweed control. The SH killed emerged plants but did not provide residual control of a late horseweed flush in 2017 to 2018. These results suggest CCs can help manage spring flushes of horseweed emergence when nonresidual herbicide products are used, though this effect was short-lived when less CC biomass was present.
Oxidative stress is implicated in the aetiology of schizophrenia, and the antioxidant defence system (AODS) may be protective in this illness. We examined the major antioxidant glutathione (GSH) in prefrontal brain and its correlates with clinical and demographic variables in schizophrenia.
GSH levels were measured in the dorsolateral prefrontal region of 28 patients with chronic schizophrenia using a magnetic resonance spectroscopy sequence specifically adapted for GSH. We examined correlations of GSH levels with age, age at onset of illness, duration of illness, and clinical symptoms.
We found a negative correlation between GSH levels and age at onset (r = −0.46, p = 0.015), and a trend-level positive relationship between GSH and duration of illness (r = 0.34, p = 0.076).
Our findings are consistent with a possible compensatory upregulation of the AODS with longer duration of illness and suggest that the AODS may play a role in schizophrenia.
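Correlations of the kind reported above (e.g. r = −0.46, p = 0.015) are standard Pearson coefficients with two-sided p-values. A minimal sketch on synthetic stand-in data, not patient measurements:

```python
# Pearson correlation with a two-sided p-value, as used for the GSH vs.
# age-at-onset association above; the data here are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age_at_onset = rng.normal(24, 6, size=28)                 # hypothetical, n = 28
gsh = 2.0 - 0.03 * age_at_onset + rng.normal(0, 0.2, 28)  # induce a negative trend

r, p = stats.pearsonr(age_at_onset, gsh)
print(f"r = {r:.2f}, p = {p:.3f}")
```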
Introduction: Early and accurate diagnosis of critical conditions is essential in emergency medical services (EMS). Serum lactate testing may be used to identify patients with a worse prognosis, including sepsis. Recently, the use of a point-of-care lactate (POCL) test has been evaluated in guiding treatment of patients with sepsis. Operating as part of the Prehospital Evidence Based Practice (PEP) program, the authors sought to identify and describe the body of evidence for POCL use in EMS and the emergency department (ED) for patients with sepsis. Methods: Following PEP methodology, PubMed was searched in a systematic manner in May 2018. Title and abstract screening were conducted by the program coordinator. These studies were collected, appraised and added to the existing body of literature contained within the PEP database. Evidence appraisal was conducted by two reviewers, who assigned both a level of evidence (LOE) on a novel three-tier scale and a direction of evidence (supportive, neutral or opposing, based on the primary outcome). Data on setting and study design were also extracted. Results: Eight studies were included in our analysis. Three were conducted in the ED setting, each investigating the POCL test's ability to predict severe sepsis, ICU admission or death; all three found supportive results for POCL. A systematic review of POCL use in the ED determined that the test can also improve time to treatment. Five of the eight studies were conducted prehospitally. Two were supportive of POCL use in the prehospital setting, in terms of feasibility and the ability to predict sepsis; both study sites used this early information as part of initiating a “sepsis alert” pathway. The other three prehospital studies provided neutral support for POCL: one demonstrated moderate ability of POCL to predict severe illness, and two found poor agreement between prehospital POCL and serum lactate values. Conclusion: Limited low- and moderate-quality evidence suggests POCL may be feasible and helpful in predicting sepsis in the prehospital setting. However, there is sparse and inconsistent support for specific important outcomes, including accuracy.
Introduction: The Prehospital Evidence-Based Practice (PEP) program is an online, freely accessible, continuously updated Emergency Medical Services (EMS) evidence repository. This summary describes the research evidence for the identification and management of adult patients suffering from sepsis syndrome or septic shock. Methods: PubMed was searched in a systematic manner. One author reviewed titles and abstracts for relevance and two authors appraised each study selected for inclusion. Primary outcomes were extracted. Studies were scored by trained appraisers on a three-point Level of Evidence (LOE) scale (based on study design and quality) and a three-point Direction of Evidence (DOE) scale (supportive, neutral, or opposing findings based on the study's primary outcome for each intervention). The LOE and DOE of each intervention were plotted on an evidence matrix (DOE × LOE). Results: Eighty-eight studies were included for 15 interventions listed in PEP. The interventions with the most evidence were related to identification tools (ID) (n = 26, 30%) and early goal-directed therapy (EGDT) (n = 21, 24%). ID tools included the Systemic Inflammatory Response Syndrome (SIRS) criteria, the quick Sequential Organ Failure Assessment (qSOFA) and other unique measures. The most common primary outcomes were related to diagnosis (n = 30, 34%), mortality (n = 40, 45%) and treatment goals (e.g. time to antibiotic) (n = 14, 16%). The evidence rank for the supported interventions was: supportive-high quality (n = 1, 7%) for crystalloid infusion; supportive-moderate quality (n = 7, 47%) for identification tools, prenotification, point-of-care lactate, titrated oxygen and temperature monitoring; and supportive-low quality (n = 1, 7%) for vasopressors. The benefit of prehospital antibiotics and EGDT remains inconclusive, with a neutral DOE. There is moderate-level evidence opposing use of high-flow oxygen. Conclusion: EMS sepsis interventions are informed primarily by moderate-quality supportive evidence. Several standard treatments are well supported by moderate- to high-quality evidence, as are identification tools. However, some standard in-hospital therapies, such as antibiotics and EGDT, are not supported by evidence in the prehospital setting. Based on primary outcomes, no identification tool appears superior. This evidence analysis can guide selection of appropriate prehospital therapies.
Introduction: Long-term immobility has detrimental effects for critically ill patients admitted to the intensive care unit (ICU), including ICU-acquired weakness. Early mobilization of patients admitted to the ICU has been demonstrated to be a safe, feasible and effective strategy to improve patient outcomes. The optimal mobilization of trauma ICU patients has not been extensively studied. Our objective was to determine the impact of an early mobilization protocol on outcomes among trauma patients admitted to the ICU. Methods: We analyzed all adult trauma patients (>18 years old) admitted to the ICU over 2-year periods prior to and following implementation of an early mobilization protocol, allowing for a 1-year transition period. Data were collected from the Nova Scotia Trauma Registry. We compared patient characteristics and outcomes (mortality, length of stay [LOS], ventilator days) between the pre- and post-implementation groups. Associations between early mobilization and clinical outcomes were estimated using binary and linear regression models. Results: Overall, 526 patients were included in the analysis (292 pre-implementation, 234 post-implementation). The study population ranged in age from 18 to 92 years (mean 49.0 ± 20.4 years) and 74.3% of patients were male. The pre- and post-implementation groups were similar in age, sex, and injury severity. In-hospital mortality was reduced in the post-implementation group (25.3% vs. 17.5%; p = 0.031), as was ICU mortality (21.6% vs. 12.8%; p = 0.009). We did not observe any difference in overall hospital LOS, ICU LOS, or ventilator days between the two groups. Compared to the pre-implementation period, trauma patients admitted to the ICU following protocol implementation were less likely to die in hospital (OR = 0.52, 95% CI 0.30–0.91; p = 0.021) or in the ICU (OR = 0.40, 95% CI 0.21–0.76; p = 0.005). Results were similar following a sensitivity analysis limited to patients with blunt or penetrating injuries. Conclusion: We found that trauma patients admitted to the ICU during the post-implementation period had decreased odds of in-hospital mortality and ICU mortality. Ours is the first study to demonstrate a significant reduction in trauma mortality following implementation of an ICU mobility protocol.
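Adjusted odds ratios like those above come from binary (logistic) regression. A minimal sketch of that estimation on a simulated cohort; the variable names and effect sizes are invented, not registry data:

```python
# Sketch of the binary (logistic) regression used to estimate mortality odds
# ratios; the cohort below is simulated, not Nova Scotia Trauma Registry data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 526
post = rng.integers(0, 2, n)                 # 1 = post-implementation period
age = rng.normal(49, 20, n)
logit = -1.0 - 0.65 * post + 0.02 * (age - 49)   # true OR for post ~ exp(-0.65)
died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([post, age]))
fit = sm.Logit(died, X).fit(disp=0)
or_post = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"OR (post vs. pre) = {or_post:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```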
Introduction: Previous systematic reviews suggest early mobilization in the intensive care unit (ICU) population is feasible, safe, and may improve outcomes. Only one review investigated mobilization specifically in trauma ICU patients, and it failed to identify any relevant articles. The objective of the present systematic review was to conduct an up-to-date search of the literature to assess the effect of early mobilization in adult trauma ICU patients on mortality, length of stay (LOS) and duration of mechanical ventilation. Methods: We performed a systematic search of four electronic databases (Ovid MEDLINE, Embase, CINAHL, Cochrane Library) and the grey literature. To be included, studies must have compared early mobilization to delayed or no mobilization among trauma patients admitted to the ICU. Meta-analysis was performed to determine the effect of early mobilization on mortality, hospital LOS, ICU LOS, and duration of mechanical ventilation. Results: The search yielded 2,975 records from the four databases and 7 records from grey literature and bibliographic searches; of these, 9 articles met all eligibility criteria and were included in the analysis. Seven studies were performed in the United States, one in China and one in Norway. Study populations included neurotrauma (3 studies), blunt abdominal trauma (2 studies), mixed injury types (2 studies) and burns (1 study). Cohorts ranged in size from 15 to 1,132 patients (median, 63) and varied in inclusion criteria. Most studies used some form of stepwise progressive mobility protocol. Two studies used simple ambulation as the mobilization measure, and one employed upright sitting as its only intervention. Time to commencement of the intervention was variable across studies, and only two studies specified the timing of mobilization initiation. We did not detect a difference in mortality with early mobilization, although the pooled risk ratio (RR) was reduced (RR 0.90, 95% CI 0.74 to 1.09). Hospital LOS and ICU LOS were decreased with early mobilization, though these differences did not reach significance. Duration of mechanical ventilation was significantly shorter in the early mobilization group (mean difference −1.18, 95% CI −2.17 to −0.19). Conclusion: Our review identified few studies that examined mobilization of critically ill trauma patients in the ICU. On meta-analysis, early mobilization was found to reduce duration of mechanical ventilation, but the effects on mortality and LOS were not significant.
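A pooled risk ratio such as the RR 0.90 (95% CI 0.74 to 1.09) above is conventionally obtained by inverse-variance weighting of per-study log risk ratios. A minimal fixed-effect sketch with invented study estimates:

```python
# Fixed-effect, inverse-variance pooling of study risk ratios, the standard
# construction behind a pooled RR. The per-study RRs and CIs are invented.
import numpy as np

studies = [  # (RR, lower 95% CI, upper 95% CI) -- hypothetical
    (0.85, 0.60, 1.20),
    (0.95, 0.70, 1.29),
    (0.88, 0.55, 1.41),
]

log_rr = np.log([s[0] for s in studies])
# back out standard errors from the CI width on the log scale
se = (np.log([s[2] for s in studies]) - np.log([s[1] for s in studies])) / (2 * 1.96)
w = 1 / se**2                                   # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f}")
```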
Influenza and respiratory syncytial virus (RSV) are common causes of respiratory tract infections and place a burden on health services each winter. Systems to describe the timing and intensity of such activity will improve the public health response and deployment of interventions to these pressures. Here we develop early warning and activity intensity thresholds for monitoring influenza and RSV using two novel data sources: general practitioner out-of-hours consultations (GP OOH) and telehealth calls (NHS 111). Moving Epidemic Method (MEM) thresholds were developed for winter 2017–2018. The NHS 111 cold/flu threshold was breached several weeks in advance of other systems. The NHS 111 RSV epidemic threshold was breached in week 41, in advance of RSV laboratory reporting. Combining the use of MEM thresholds with daily monitoring of NHS 111 and GP OOH syndromic surveillance systems provides the potential to alert to threshold breaches in real-time. An advantage of using thresholds across different health systems is the ability to capture a range of healthcare-seeking behaviour, which may reflect differences in disease severity. This study also provides a quantifiable measure of seasonal RSV activity, which contributes to our understanding of RSV activity in advance of the potential introduction of new RSV vaccines.
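The Moving Epidemic Method sets a pre-epidemic threshold from the highest pre-epidemic values of past seasons. A heavily simplified sketch of that idea, not the full published MEM algorithm; all rates below are invented:

```python
# Simplified MEM-style pre-epidemic threshold: take the highest pre-epidemic
# weekly rates from past seasons and set the threshold at the upper limit of
# a one-sided 95% CI of their log-scale mean. This is a reduced illustration
# of MEM, not the full published algorithm; the historical rates are invented.
import numpy as np
from scipy import stats

# one peak pre-epidemic value per historical season (hypothetical rates)
pre_epidemic_peaks = np.array([12.0, 9.5, 14.2, 11.1, 10.3])

logs = np.log(pre_epidemic_peaks)
upper = logs.mean() + stats.t.ppf(0.95, df=len(logs) - 1) * stats.sem(logs)
threshold = np.exp(upper)
print(f"epidemic threshold = {threshold:.1f} consultations per 100,000")

# each week in season: signal an alert when the observed rate breaches it
this_week_rate = 13.4
print("alert!" if this_week_rate > threshold else "below threshold")
```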
Mismatch negativity (MMN) is an event-related potential (ERP) component reflecting auditory predictive coding. Repeated standard tones evoke increasing positivity (‘repetition positivity’; RP), reflecting strengthening of the standard's memory trace and the prediction it will recur. Likewise, deviant tones preceded by more standard repetitions evoke greater negativity (‘deviant negativity’; DN), reflecting stronger prediction error signaling. These memory trace effects are also evident in MMN difference wave. Here, we assess group differences and test-retest reliability of these indices in schizophrenia patients (SZ) and healthy controls (HC).
Electroencephalography was recorded twice, 2 weeks apart, from 43 SZ and 30 HC, during a roving standard paradigm. We examined ERPs to the third, eighth, and 33rd standards (RP), immediately subsequent deviants (DN), and the corresponding MMN. Memory trace effects were assessed by comparing amplitudes associated with the three standard repetition trains.
Compared with controls, SZ showed reduced MMNs and DNs, but normal RPs. Both groups showed memory trace effects for RP, MMN, and DN, with a trend for attenuated DNs in SZ. Intraclass correlations obtained via this paradigm indicated good-to-moderate reliabilities for overall MMN, DN and RP, but moderate-to-poor reliabilities for components associated with short, intermediate, and long standard trains, and poor reliability of their memory trace effects.
MMN deficits in SZ reflected attenuated prediction error signaling (DN), with relatively intact predictive code formation (RP) and memory trace effects. This roving standard MMN paradigm requires additional development/validation to obtain suitable levels of reliability for use in clinical trials.
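The test-retest reliabilities above are intraclass correlations; for a two-session design a common choice is ICC(2,1) (two-way random effects, absolute agreement, single measurement). A minimal sketch on simulated amplitudes, not the recorded EEG data:

```python
# Shrout & Fleiss ICC(2,1) computed from a two-way ANOVA decomposition.
# The session data below are simulated ERP amplitudes, not study data.
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    """Y: shape (n_subjects, k_sessions)."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)   # subjects
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)   # sessions
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(3)
true_amp = rng.normal(-2.0, 0.8, size=43)             # stable subject effect
sessions = np.column_stack(
    [true_amp + rng.normal(0, 0.5, 43) for _ in range(2)]  # two sessions
)
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```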
To identify potential participants for clinical trials, electronic health records (EHRs) are searched at potential sites. As an alternative, we investigated the use of medical devices employed for real-time diagnostic decisions to support trial enrollment.
To project cohorts for a trial in acute coronary syndromes (ACS), we used electrocardiograph-based algorithms that identify ACS or ST-elevation myocardial infarction (STEMI) and prompt clinicians to offer patients trial enrollment. We searched six hospitals’ electrocardiograph systems for electrocardiograms (ECGs) meeting the planned trial’s enrollment criterion: ECGs with STEMI or >75% probability of ACS by the acute cardiac ischemia time-insensitive predictive instrument (ACI-TIPI). We revised the ACI-TIPI regression to require only data available directly from the electrocardiograph, yielding the e-ACI-TIPI, which was developed and tested on the same data as the original ACI-TIPI (development set n = 3,453; test set n = 2,315). We also tested both instruments on data from emergency department electrocardiographs from across the US (n = 8,556). We then used the ACI-TIPI and e-ACI-TIPI to identify potential cohorts for the ACS trial and compared performance to cohorts identified from EHR data at the same hospitals.
Receiver operating characteristic (ROC) curve areas on the test set were excellent (0.89 for ACI-TIPI and 0.84 for e-ACI-TIPI), as was calibration. On the national electrocardiographic database, ROC areas were 0.78 and 0.69, respectively, with very good calibration. When tested for detection of patients with >75% ACS probability, both electrocardiograph-based methods identified eligible patients well, and better than EHRs did.
Using data from medical devices such as electrocardiographs may provide accurate projections of available cohorts for clinical trials.
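Discrimination and calibration of the kind reported above can be checked as below. A minimal sketch on simulated labels and predicted ACS probabilities; none of the numbers reproduce ACI-TIPI outputs:

```python
# ROC area plus a simple calibration table (observed event rate per
# predicted-probability decile) on simulated data; labels and predicted
# probabilities are invented, not ACI-TIPI outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 2315
y = rng.integers(0, 2, n)                           # 1 = ACS, hypothetical labels
noise = rng.normal(0, 1.2, n)
p_hat = 1 / (1 + np.exp(-(2.0 * y - 1.0 + noise)))  # simulated instrument output

print(f"ROC area = {roc_auc_score(y, p_hat):.2f}")

bins = np.quantile(p_hat, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(p_hat, bins[1:-1]), 0, 9)
for d in range(10):
    mask = idx == d
    print(f"decile {d}: predicted {p_hat[mask].mean():.2f}, "
          f"observed {y[mask].mean():.2f}")

# trial-eligibility screen analogous to the >75% ACS criterion above
print(f"eligible fraction = {(p_hat > 0.75).mean():.2%}")
```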
Legionnaires’ disease (LD) incidence in the USA has quadrupled since 2000. Health departments must detect LD outbreaks quickly to identify and remediate sources. We tested the performance of a system to prospectively detect simulated LD outbreaks in Allegheny County, Pennsylvania, USA. We generated three simulated LD outbreaks based on published outbreaks. After verifying no significant clusters existed in surveillance data during 2014–2016, we embedded simulated outbreak-associated cases into 2016, assigning simulated residences and report dates. We mimicked daily analyses in 2016 using the prospective space-time permutation scan statistic to detect clusters of ⩽30 and ⩽180 days using 365-day and 730-day baseline periods, respectively. We used recurrence interval (RI) thresholds of ⩾20, ⩾100 and ⩾365 days to define significant signals. We calculated sensitivity, specificity and positive and negative predictive values for the daily analyses, separately for each embedded outbreak. Two large, simulated cooling tower-associated outbreaks were detected. As the RI threshold was increased, sensitivity and negative predictive value decreased, while positive predictive value and specificity increased. A small, simulated potable water-associated outbreak was not detected. Use of an RI threshold of ⩾100 days minimised time-to-detection while maximising positive predictive value. Health departments should consider using this system to detect community-acquired LD outbreaks.
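The daily-analysis evaluation can be pictured as scoring a stream of recurrence intervals against known outbreak days at each RI threshold. A minimal sketch with simulated values, not the Allegheny County data:

```python
# Score daily scan-statistic signals against known outbreak days: a signal
# fires when the cluster recurrence interval (RI) meets the threshold, and
# sensitivity/specificity/PPV follow from the daily confusion counts.
# All values below are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
days = 366
outbreak = np.zeros(days, dtype=bool)
outbreak[150:180] = True                 # hypothetical 30-day outbreak window

# simulated RI output of a scan statistic: larger during the outbreak
ri = rng.exponential(10, days)
ri[outbreak] += rng.exponential(300, outbreak.sum())

for threshold in (20, 100, 365):
    signal = ri >= threshold
    tp = np.sum(signal & outbreak); fp = np.sum(signal & ~outbreak)
    fn = np.sum(~signal & outbreak); tn = np.sum(~signal & ~outbreak)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    print(f"RI >= {threshold:3d}: sensitivity {sens:.2f}, "
          f"specificity {spec:.2f}, PPV {ppv:.2f}")
```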
During the 2009 influenza pandemic, a rapid assessment of disease severity was a challenge as a significant proportion of cases did not seek medical care; care-seeking behaviour changed and the proportion asymptomatic was unknown. A random-digit-dialling telephone survey was undertaken during the 2011/12 winter season in England and Wales to address the feasibility of answering these questions. A proportional quota sampling strategy was employed based on gender, age group, geographical location, employment status and level of education. Households were recruited pre-season and re-contacted immediately following peak seasonal influenza activity. The pre-peak survey was undertaken in October 2011, with 1061 individuals recruited, and the post-peak telephone survey in March 2012. Eight hundred and thirty-four of the 1061 (78.6%) participants were successfully re-contacted. Their demographic characteristics compared well to national census data. In total, 8.4% of participants self-reported an influenza-like illness (ILI) in the previous 2 weeks, with 3.2% conforming to the World Health Organization (WHO) ILI case definition. Of these cases, 29.6% reported consulting their general practitioner, and 54.1% of the 1061 participants agreed to be re-contacted about providing biological samples. A population-based cohort was successfully recruited and followed up. Longitudinal survey methodology provides a practical tool to assess disease severity during future pandemics.
Significant increases in excess all-cause mortality, particularly in the elderly, were observed during the winter of 2014/15 in England. With influenza A(H3N2) the dominant circulating influenza A subtype, this paper determines the contribution of influenza to this excess, controlling for weather. A standardised multivariable Poisson regression model was employed with weekly all-cause deaths as the dependent variable for the period 2008–2015. Adjusting for extreme temperature, a total of 26 542 (95% CI 25 301–27 804) deaths in those aged 65+ years and 1942 (95% CI 1834–2052) in 15–64-year-olds were associated with influenza from week 40, 2014 to week 20, 2015. This is compatible with the circulation of influenza A(H3N2). It is the largest estimated number of influenza-related deaths in England since prior to 2008/09. The findings highlight the potential health impact of influenza and the important role of the annual influenza vaccination programme in protecting the population, including the elderly, who are vulnerable to a severe outcome.
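The attribution logic of a model like this is to fit weekly deaths on an influenza activity proxy plus temperature terms, then compare fitted deaths with and without the influenza term. A minimal sketch on simulated series; the covariates and coefficients are invented, not the model used in the paper:

```python
# Poisson regression of weekly all-cause deaths on an influenza activity
# proxy and an extreme-temperature indicator; influenza-attributable deaths
# are fitted deaths minus fitted deaths with the influenza term set to zero.
# All series are simulated, not English mortality data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
weeks = 7 * 52
flu = np.clip(rng.normal(0, 1, weeks), 0, None)   # influenza proxy (e.g. ILI rate)
cold = rng.random(weeks) < 0.1                    # extreme-cold week indicator
mu = np.exp(6.9 + 0.08 * flu + 0.05 * cold)       # true weekly death rate
deaths = rng.poisson(mu)

X = sm.add_constant(np.column_stack([flu, cold.astype(float)]))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

X0 = X.copy()
X0[:, 1] = 0.0                                    # counterfactual: no influenza
attributable = fit.predict(X) - fit.predict(X0)
print(f"influenza-attributable deaths = {attributable.sum():.0f}")
```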
The intergenerational risk for mental illness is well established within diagnostic categories, but the risk is unlikely to respect diagnostic boundaries and may be reflected more broadly in early life vulnerabilities. We aimed to establish patterns of association between externalising and internalising vulnerabilities in early childhood and parental mental disorder across the full spectrum of diagnoses.
A cohort of Australian children (n = 69 116) entering their first year of school in 2009 was assessed using the Australian Early Development Census, providing measures of externalising and internalising vulnerability. Parental psychiatric diagnostic status was determined utilising record linkage to administrative health datasets.
Parental mental illness, across diagnostic categories, was associated with all child externalising and internalising domains of vulnerability. There was little evidence to support interaction by parental or offspring sex.
These findings have important implications for informing early identification and intervention strategies in high-risk offspring and for research into the causes of mental illness. In both cases, there may be benefits to focusing less on diagnostic categories.