Lithium is viewed as the first-line long-term treatment for prevention of relapse in people with bipolar disorder.
This study examined factors associated with the likelihood of maintaining serum lithium levels within the recommended range and explored whether the monitoring interval could be extended in some cases.
We included 46 555 lithium test requests in 3371 individuals over 7 years from three UK centres. Using lithium results in four categories (<0.4 mmol/L; 0.40–0.79 mmol/L; 0.80–0.99 mmol/L; ≥1.0 mmol/L), we determined the proportion of instances where lithium results remained stable or switched category on subsequent testing, considering the effects of age, duration of lithium therapy and testing history.
For tests within the recommended range (0.40–0.99 mmol/L categories), 84.5% of subsequent tests remained within this range. Overall, 3-monthly testing was associated with 90% of lithium results remaining within range, compared with 85% at 6-monthly intervals. In cases where the lithium level in the previous 12 months was on target (0.40–0.79 mmol/L; British National Formulary/National Institute for Health and Care Excellence criteria), 90% remained within the target range at 6 months. Neither age nor duration of lithium therapy had any significant effect on lithium level stability. Levels within the 0.80–0.99 mmol/L category were linked to a higher probability of moving to the ≥1.0 mmol/L category (10%) compared with those in the 0.40–0.79 mmol/L group (2%), irrespective of testing frequency.
We propose that for those who achieve 12 months of lithium tests within the 0.40–0.79 mmol/L range, the interval between tests could increase to 6 months, irrespective of age. Where lithium levels are 0.80–0.99 mmol/L, the test interval should remain at 3 months. This could reduce lithium test numbers by 15% and costs by ~$0.4 m p.a.
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated the seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Weather conditions had no consistent impact on weed seed shatter; however, greater individual weed plant biomass was associated with delayed seed shatter at harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs from plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals within the same population that shatter seed before harvest pose a risk of escaping both early-season management and HWSC.
To assess how well MHAS meets the service specification
To ascertain areas of good practice
To examine whether the referral form is being used in an appropriate manner
To elucidate areas of good communication and whether any improvement can be made
Launched in 2012, MHAS is the single point of access service for mental health services for patients aged 16–65 years who are registered with a general practitioner (GP) in Dudley and are not currently open to secondary care. Assessments are completed by a medic, by a community psychiatric nurse, or jointly. It aims to identify the most appropriate care pathway for patients. This audit was a comprehensive assessment of how effective MHAS is at ensuring patients are adequately triaged.
Ten cases from each month between April 2018 and March 2019 were randomly selected from all 980 anonymised MHAS referrals. A proforma was developed based on current practice, previous audits and the service specification. A team of four doctors assisted in data collection, and only electronic health records (EHR) were reviewed.
Overall, 88.3% of referrals were recorded on the EHR. Only 61.7% of referrals used the proforma, with most of the remainder arriving as letters, which often omitted information vital to the triaging process. Only 4.2% of referrals came from Primary Care Mental Health Nurses (PCMHN), with 85.8% arising from GPs. Urgent referrals were not discussed with MHAS by telephone in about 60% of cases. The majority of patients had telephone screening completed the same day and were then discussed the next working day at the daily referral meeting. Although a brief summary for the GP was sent the same day in all cases, over half of the comprehensive assessments were not sent within the five-day timeframe.
All referrals must be uploaded to the EHR and completed using the service's proforma. PCMHNs may currently be under-utilised, or may be managing mental health patients effectively in primary care. GPs who regularly refer by letter require further training and support to use the proforma, and the proforma may require simplification to make it easier to complete. The service specification requires review, as it makes unrealistic demands of the service. All referrals must be discussed at the daily referral meeting. Further investigation is required to understand why MHAS is struggling to meet timeframes for appointments and letters.
Lithium was first found to have an acute antimanic effect in 1948, with further corroboration in the early 1950s. It took some time for lithium to become the standard treatment for relapse prevention in bipolar affective disorder. In this study, our aims were to examine the factors associated with the likelihood of maintaining lithium levels within the recommended therapeutic range and to look at the stability of lithium levels between blood tests. We examined this relation using clinical laboratory serum lithium test requesting data collected from three large UK centres, where the approach to managing patients with bipolar disorder and ordering lithium testing varied.
A total of 46,555 lithium test requests in 3,371 individuals over 7 years were included from three UK centres. Using lithium results in four categories (<0.4 mmol/L; 0.40–0.79 mmol/L; 0.80–0.99 mmol/L; ≥1.0 mmol/L), we determined the proportion of instances where, on subsequent testing, lithium results remained in the same category or switched category. We then examined the association between testing interval and proportion remaining within target, and the effect of age, duration of lithium therapy and testing history.
For tests within the recommended range (0.40–0.99 mmol/L categories), 84.5% of subsequent tests remained within this range. Overall, 3-monthly testing was associated with 90% of lithium results remaining within range, compared with 85% at 6-monthly intervals. At all test intervals, lithium test result history in the previous 12 months was associated with the proportion of next test results on target (BNF/NICE criteria), with 90% remaining within the target range after 6 months if all tests in the previous 12 months were on target. Neither age nor duration of lithium therapy had a significant effect on lithium level stability. Levels within the 0.80–0.99 mmol/L category were linked to a higher probability of moving to the ≥1.0 mmol/L category (10%) than those in the 0.40–0.79 mmol/L group (2%), irrespective of testing frequency. Thus, stability of lithium level in the previous 12 months is a predictor of future stability.
We propose that, for those who achieve 12 months of lithium tests within the 0.40–0.79 mmol/L range, it would be reasonable to increase the interval between tests to 6 months, irrespective of age, freeing up resources to focus on those less concordant with their lithium monitoring. Where the lithium level is 0.80–0.99 mmol/L, the test interval should remain at 3 months. This could reduce lithium test numbers by 15% and costs by ~$0.4 m p.a.
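To make the proposed monitoring rule concrete, here is a minimal Python sketch. The category boundaries and the 12-month/6-month logic come from the results above; the function names, data structure and 365-day window are illustrative assumptions, not the authors' implementation.

```python
from datetime import date, timedelta

ON_TARGET = (0.40, 0.79)  # BNF/NICE target range, mmol/L

def categorise(level_mmol_per_l: float) -> str:
    """Place a serum lithium result into one of the four study categories."""
    if level_mmol_per_l < 0.40:
        return "<0.40"
    if level_mmol_per_l < 0.80:
        return "0.40-0.79"
    if level_mmol_per_l < 1.00:
        return "0.80-0.99"
    return ">=1.00"

def next_test_interval_months(history: list[tuple[date, float]]) -> int:
    """Proposed rule: extend to 6-monthly testing only if every result in
    the previous 12 months sat in the 0.40-0.79 mmol/L band (irrespective
    of age); otherwise keep 3-monthly testing, including all results in
    the 0.80-0.99 mmol/L band."""
    cutoff = max(d for d, _ in history) - timedelta(days=365)
    recent = [lvl for d, lvl in history if d >= cutoff]
    if recent and all(ON_TARGET[0] <= lvl <= ON_TARGET[1] for lvl in recent):
        return 6
    return 3

# Example: four quarterly results, all on target, qualify for a 6-month interval.
history = [(date(2024, 3, 1), 0.55), (date(2024, 6, 3), 0.61),
           (date(2024, 9, 2), 0.58), (date(2024, 12, 2), 0.66)]
print(next_test_interval_months(history))  # 6
```

A single 0.85 mmol/L result in the trailing window would keep the same patient on 3-monthly testing, matching the higher observed risk of crossing into the ≥1.0 mmol/L category from the 0.80–0.99 mmol/L band.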
To examine the factors associated with antipsychotic prescribing in general practices across England and how these relate to cost changes in recent years.
Antipsychotic medications are the first-line pharmacological intervention for severe mental illnesses (SMI) such as schizophrenia and other psychoses, while also being used to relieve distress and treat neuropsychiatric symptoms in dementia.
Since 2014, many antipsychotic agents have moved to generic provision. In 2017/18, supplies of certain generic agents were affected by substantial price increases.
The study examined the prescribing volume of, and prices paid for, antipsychotic medication by agent in primary care over time, and considered whether price changes affected agent selection by prescribers.
The NHS in England and Wales publishes monthly general practice prescribing data by BNF code. These data were aggregated for the year 2018/19 using Defined Daily Doses (DDDs) as published in the World Health Organisation Anatomical Therapeutic Chemical classification (WHO/ATC) and analysed by delivery method and dose level. The cost of each agent year-on-year was determined.
Monthly prescribing in primary care was consolidated over 5 years (2013–2018), and the WHO/ATC DDD value for each agent was used to convert quantities to total DDDs per practice.
In 2018/19 there were 10,360,865 prescriptions, containing 136 million DDDs, issued in primary care at a cost of £110 million (an average of £0.81/DDD). We included 5,750 GP practices with a practice population >3,000 and with >30 people on their SMI register.
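As a rough illustration of this DDD conversion (a sketch only: the column names and prescription figures are invented, while the DDD values for the two example agents are the published WHO ATC ones), prescribed quantities can be mapped to DDDs per agent and aggregated per practice:

```python
import pandas as pd

# Invented rows standing in for the NHS monthly prescribing file.
rx = pd.DataFrame({
    "practice": ["A", "A", "B"],
    "agent": ["olanzapine", "risperidone", "olanzapine"],
    "quantity_mg": [5_600, 1_200, 2_800],
    "cost_gbp": [45.0, 12.0, 23.0],
})

# WHO ATC defined daily doses (mg) for the two example agents.
ddd_mg = {"olanzapine": 10, "risperidone": 5}

rx["ddd"] = rx["quantity_mg"] / rx["agent"].map(ddd_mg)

# Total DDDs, spend and cost per DDD by practice, analogous to the
# national summary above (£110m / 136m DDDs ≈ £0.81/DDD).
per_practice = rx.groupby("practice")[["ddd", "cost_gbp"]].sum()
per_practice["cost_per_ddd"] = per_practice["cost_gbp"] / per_practice["ddd"]
print(per_practice)
```

Run over the full national file, the same grouping yields the DDD/practice measure used in the analysis below.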
Effect of price
In 2017/18 there was a sharp increase in overall prices, which had not reduced to expected levels by the end of the 2018/19 evaluation year. There was a gradual increase in antipsychotic prescribing over 2013–2019 that was not perturbed by the increase in drug prices in 2017/18.
The strongest positive associations with increased antipsychotic prescribing were higher social disadvantage, higher population density (urban) and comorbidities, e.g. chronic obstructive pulmonary disease (COPD). Higher proportions of younger and older people, northerliness and non-white (Black and Minority Ethnic, BME) ethnicity were all independently associated with less antipsychotic prescribing.
Higher DDDs per general practice population were linked with a higher percentage of injectables, a higher percentage of liquids, higher doses per prescription and a higher percentage of zuclopenthixol. Lower DDDs per population were linked with general practices using a higher percentage of risperidone and higher spending per dose of antipsychotic.
Higher levels of antipsychotic prescribing are driven by social factors and comorbidities. The link with depot medication prescriptions may reflect the capacity of antipsychotics to induce receptor supersensitivity, with consequent dose escalation.
The Subglacial Antarctic Lakes Scientific Access (SALSA) Project accessed Mercer Subglacial Lake using environmentally clean hot-water drilling to examine interactions among ice, water, sediment, rock, microbes and carbon reservoirs within the lake water column and underlying sediments. A ~0.4 m diameter borehole was melted through 1087 m of ice and maintained over ~10 days, allowing observation of ice properties and collection of water and sediment with various tools. Over this period, SALSA collected: 60 L of lake water and 10 L of deep borehole water; microbes >0.2 μm in diameter from in situ filtration of ~100 L of lake water; 10 multicores 0.32–0.49 m long; 1.0 and 1.76 m long gravity cores; three conductivity–temperature–depth profiles of borehole and lake water; five discrete depth current meter measurements in the lake; and images of ice, the lake water–ice interface and lake sediments. Temperature and conductivity data showed the hydrodynamic character of water mixing between the borehole and lake after entry. Models simulating melting of the ~6 m thick basal accreted ice layer imply that debris fall-out through the ~15 m water column to the lake sediments from borehole melting had little effect on the stratigraphy of surficial sediment cores.
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory coronavirus virus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
The 2 arms of the study included (1) PPE inoculation with coronaviruses followed by MB with light (MBL) decontamination treatment and (2) PPE treatment with MBL for 5 cycles of decontamination to determine maintenance of PPE performance.
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, and we treated them with 10 µM MB and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective, is low cost, and does not require specialized equipment, making it applicable in low- to high-resource settings.
Intrauterine preeclampsia exposure affects the lifelong cardiometabolic health of the child. Our study aimed to compare the growth (from birth to 6 months) of infants exposed to either a normotensive pregnancy or preeclampsia and explore the influence of being born small for gestational age (SGA). Participants were children of women participating in the Post-partum, Physiology, Psychology and Paediatric follow-up cohort study. Birth and 6-month weight and length z-scores were calculated for term and preterm (<37 weeks) babies, and change in weight z-score, rapid weight gain (≥0.67 increase in weight z-score) and conditional weight gain z-score were calculated. Compared with normotensive exposed infants (n = 298), preeclampsia exposed infants (n = 84) were more likely to be born SGA (7% versus 23%; P < 0.001), but weight gain from birth to 6 months, by any measure, did not differ between groups. Infants born SGA, irrespective of pregnancy exposure, were more likely to have rapid weight gain and had greater increases in weight z-score compared with those not born SGA. Preeclampsia exposed infants born SGA may benefit from interventions designed to prevent future cardiometabolic disease.
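As a minimal sketch of the growth measures used above (the ≥0.67 z-score threshold is the study's definition of rapid weight gain; the function names and example values are ours):

```python
def change_in_weight_z(birth_z: float, six_month_z: float) -> float:
    """Change in weight z-score from birth to 6 months."""
    return six_month_z - birth_z

def rapid_weight_gain(birth_z: float, six_month_z: float) -> bool:
    """Rapid weight gain as defined in the study: an increase of at
    least 0.67 in weight z-score between birth and 6 months."""
    return change_in_weight_z(birth_z, six_month_z) >= 0.67

# Example: an infant moving from z = -1.2 at birth to z = -0.3 at 6 months
# gains 0.9 z-score units and is flagged as rapid weight gain.
print(rapid_weight_gain(-1.2, -0.3))  # True
```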
In the First-HD pivotal trial, the maximum deutetrabenazine dose evaluated to treat chorea associated with Huntington’s disease (HD chorea) was 48 mg/d, which is the approved maximum dose for this population. In ARC-HD, an open-label extension study evaluating the long-term efficacy and safety of deutetrabenazine to treat HD chorea, dosage ranged from 6 mg/d to 72 mg/d, with doses ≥12 mg/d administered twice daily. Doses in ARC-HD were increased by 6 mg/d per week in a response-driven manner based on efficacy and tolerability until 48 mg/d (Week 8). At the investigator’s discretion, further increases were permitted by 12 mg/d per week to a maximum of 72 mg/d. This post-hoc analysis evaluates the safety and tolerability of deutetrabenazine >48 mg/d compared to ≤48 mg/d to treat HD chorea in ARC-HD.
Patient counts and safety assessments were attributed to patients when they received a dose of either ≤48 mg/d or >48 mg/d. For 9 selected adverse events (AEs), we compared AE rates adjusted for duration of drug exposure (as number of AEs/year) at ≤48 mg/d or >48 mg/d. The AE rates were determined after titration when participants were on stable doses of deutetrabenazine.
All 113 patients were exposed to doses ≤48 mg/d (177.1 patient-years) and 49 patients were ever exposed to doses >48 mg/d (74.1 patient-years). In patients taking deutetrabenazine >48 mg/d compared to ≤48 mg/d after the titration period, there were no apparent differences in exposure-adjusted AE rates.
Based on clinical experience, some patients with HD may benefit from doses higher than 48 mg/d to adequately control chorea. These doses were tolerated without apparent increase in the exposure-adjusted rates of selected AEs after titration. This analysis does not address the occurrence of other AEs or whether adequate efficacy was achieved at lower doses, factors that may have influenced dose increases.
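For illustration, exposure adjustment normalises event counts by time on drug. A minimal sketch (the patient-year denominators are those reported above; the event counts are invented for the example):

```python
def exposure_adjusted_rate(n_events: int, patient_years: float) -> float:
    """Exposure-adjusted AE rate: number of AEs per patient-year on drug."""
    return n_events / patient_years

# Hypothetical event counts over the reported exposure times:
rate_low = exposure_adjusted_rate(12, 177.1)  # <=48 mg/d: ~0.068 AEs/patient-year
rate_high = exposure_adjusted_rate(5, 74.1)   # >48 mg/d:  ~0.067 AEs/patient-year
```

Similar adjusted rates at the two dose levels, as in this hypothetical, are what the analysis above reports as "no apparent differences".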
Teva Pharmaceutical Industries Ltd., Petach Tikva, Israel
Chorea is a prominent motor dysfunction in Huntington’s disease (HD). Deutetrabenazine, a vesicular monoamine transporter 2 (VMAT2) inhibitor, is FDA-approved for the treatment of chorea in HD. In the pivotal, 12-week First-HD trial, deutetrabenazine treatment reduced the Unified Huntington’s Disease Rating Scale (UHDRS) total maximal chorea (TMC) score versus placebo. ARC-HD, an open-label extension study, evaluated long-term safety and efficacy of deutetrabenazine dosed in a response-driven manner for treatment of HD chorea.
Patients who completed First-HD (Rollover) and patients who converted overnight from a stable dose of tetrabenazine (Switch) were included. Safety was assessed over the entire treatment period; exposure-adjusted incidence rates (EAIRs; adverse events [AEs] per person-year) were calculated. A stable, post-titration time point of 8 weeks was chosen for efficacy analyses.
Of 119 patients enrolled (Rollover, n=82; Switch, n=37), 100 (84%) completed ≥1 year of treatment (mean [SD] follow-up, 119 weeks). End-of-study EAIRs for patients in the Rollover and Switch cohorts, respectively, were: any AE, 2.6 and 4.3; serious AEs, 0.13 and 0.14; AEs leading to dose suspension, 0.05 and 0.04. Overall, 68% and 73% of patients in Rollover and Switch, respectively, experienced a study drug–related AE. The most common AEs possibly related to study drug were somnolence (17% Rollover; 27% Switch), depression (23%; 19%), anxiety (9%; 11%), insomnia (10%; 8%), and akathisia (9%; 14%). Rates of AEs of interest included suicidality (9%; 3%) and parkinsonism (6%; 11%). In both cohorts, mean UHDRS TMC score and total motor score (TMS) decreased from baseline to Week 8; mean (SD) change in TMC score (units) was –4.4 (3.1) and –2.1 (3.3), and change in TMS was –7.1 (7.3) and –2.4 (8.7), in Rollover and Switch, respectively. While receiving stable dosing from Week 8 to Week 132 (or end of treatment), patients showed minimal change in TMC score (0.9 [5.0]), but TMS increased compared with Week 8 (9.0 [11.3]). Upon drug withdrawal, there were no remarkable AEs, and TMC scores increased by 4.4 (3.7) units compared with end of treatment.
The type and severity of AEs observed in long-term deutetrabenazine exposure are consistent with the previous study. Efficacy in reducing chorea persisted over time. There was no unexpected worsening of HD or chorea associated with HD upon deutetrabenazine withdrawal.
Teva Pharmaceutical Industries Ltd., Petach Tikva, Israel
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds at southern latitudes, and shatter rates increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Background: The burden of C. difficile infection (CDI) on healthcare facilities is well recognized. However, studies focusing on inpatient settings, together with ascertainment bias in general, have led to a paucity of data on the true burden of CDI across whole healthcare economies. Methods: Sites testing both inpatient and community samples were recruited from 12 European countries (1 site per 3 million population). On 2 selected days, all diarrheal fecal samples (regardless of the tests requested) were sent to the European Coordinating Laboratory (ECL) for C. difficile toxin testing and culture. The CDI results and tests not requested at each submitting site were compared with the ECL results to determine the number of missed CDI cases. Contemporaneous C. difficile isolates from food and animal sources were collected. All isolates underwent PCR ribotyping and toxinotyping; prevalences of ribotypes were compared among regions of Europe and reservoir settings. Results: Overall, 3,163 diarrheal fecal samples were received from 119 sites. The burden of CDI varied by country (positivity rates, 0–15.8%) and by European region; the highest regional positivity rate was in Eastern Europe (13.1%). The testing and positivity rates were 29.6% and 1.4% in community samples vs 74.9% and 5.0% in hospital samples; 16% and 55% of samples positive for CDI at the ECL had not been diagnosed in hospitals and the community, respectively. The most common C. difficile ribotypes from hospital samples were 027 (11%), 181 (12%), and 014 (8%), although prevalence varied by country. The highest prevalence of toxinotype IIIb (ribotypes 027, 181, and 176) was seen in Eastern Europe (55% of all isolates), which also had the lowest testing rate. For hospital samples, the proportion of toxinotype IIIb was inversely related to the testing rate (r = −0.79) (Fig. 1). The most common ribotypes from food sources were 078 (23%) and 126 (13%) (toxinotype V), and the most common ribotypes from community samples were 078 (9%) and 039 (9%). Overall, 106 different ribotypes were identified: 25 in both the hospital and community settings and 16 across the hospital, community, and food chain. Conclusions: The diagnosed burden of CDI varies markedly among countries in both hospital and community settings. Reduced sampling/testing in Eastern Europe is inversely related to the proportion of toxinotype IIIb strains identified, suggesting that lack of suspicion leads to underdiagnosis and outbreaks of infection. The proportion of missed CDI cases in the community was ~3.5× higher than in hospitals, indicating major underrecognition in the former setting. There were marked differences in ribotypes among reservoir settings, emphasizing the complex epidemiology of C. difficile.
Funding: COMBACTE-CDI is an EU-funded (Horizon 2020) consortium of academic and EFPIA partners (bioMerieux, GSK, Sanofi Pasteur, AstraZeneca, Pfizer, Da Volterra), with additional funding from the EFPIA partners.
Disclosures: Kerrie Davies; the work presented is funded by the EU and EFPIA (commercial) partners in a consortium.
The inclusion of students with autism spectrum disorder (ASD) is increasing, but there have been no longitudinal studies of included students in Australia. Interview data reported in this study concern primary school children with ASD enrolled in mainstream classes in South Australia and New South Wales, Australia. To examine perceived facilitators and barriers to inclusion, parents, teachers, and principals were asked to comment on the facilitators and barriers relevant to each child. Data are reported for 60 students, comprising a total of 305 parent interviews, 208 teacher interviews, and 227 principal interviews collected at 6-monthly intervals over 3.5 years. The most commonly mentioned facilitator was teacher practices. The most commonly mentioned barrier was intrinsic student factors. Other factors not directly controllable by school staff, such as resource limitations, were also commonly identified by principals and teachers. Parents were more likely to mention school- or teacher-related barriers. Many of the current findings were consistent with previous studies, but some differences were noted, including limited reporting of sensory issues and bullying as barriers. There was little change in the pattern of facilitators and barriers identified by respondents over time. A number of implications for practice and directions for future research are discussed.
Life course research embraces the complexity of health and disease development, tackling the extensive interactions between genetics and environment. This interdisciplinary blueprint, or theoretical framework, offers a structure for research ideas and specifies relationships between related factors. Traditionally, methodological approaches attempt to reduce the complexity of these dynamic interactions and decompose health into component parts, ignoring the complex reciprocal interaction of factors that shape health over time. New methods that match the epistemological foundation of the life course framework are needed to fully explore adaptive, multilevel, and reciprocal interactions between individuals and their environment. The focus of this article is to (1) delineate the differences between lifespan and life course research, (2) articulate the importance of complex systems science as a methodological framework in the life course research toolbox to guide our research questions, (3) raise key questions that can be asked within the clinical and translational science domain utilizing this framework, and (4) provide recommendations for life course research implementation, charting the way forward. Recent advances in computational analytics, computer science, and data collection could be used to approximate, measure, and analyze the intertwining and dynamic nature of genetic and environmental factors involved in health development.
UK Biobank is a well-characterised cohort of over 500 000 participants that includes genetic, environmental and imaging data. An online mental health questionnaire was designed for UK Biobank participants to expand the cohort's potential.
To describe the development, implementation and results of this questionnaire.
An expert working group designed the questionnaire, using established measures where possible, and consulting a patient group. Operational criteria were agreed for defining likely disorder and risk states, including lifetime depression, mania/hypomania, generalised anxiety disorder, unusual experiences and self-harm, and current post-traumatic stress and hazardous/harmful alcohol use.
A total of 157 366 completed online questionnaires were available by August 2017. Participants were aged 45–82 years (53% were ≥65 years) and 57% were women. Comparison of self-reported diagnosed mental disorder with a contemporary study shows a similar prevalence, despite respondents being of higher average socioeconomic status. Lifetime depression was a common finding, with 24% (37 434) of participants meeting criteria; current hazardous/harmful alcohol use criteria were met by 21% (32 602), whereas the other criteria were each met by less than 8% of participants. There was extensive comorbidity among the syndromes. Mental disorders were associated with a high neuroticism score, adverse life events and long-term illness; addiction and bipolar affective disorder in particular were associated with measures of deprivation.
The UK Biobank questionnaire represents a very large mental health survey in itself, and the results presented here show high face validity, although caution is needed because of selection bias. Built into UK Biobank, these data intersect with other health data to offer unparalleled potential for cross-cutting biomedical research involving mental health.
Healthcare personnel (HCP) were recruited to provide serum samples, which were tested for antibodies against Ebola or Lassa virus to evaluate for asymptomatic seroconversion.
From 2014 to 2016, 4 patients with Ebola virus disease (EVD) and 1 patient with Lassa fever (LF) were treated in the Serious Communicable Diseases Unit (SCDU) at Emory University Hospital. Strict infection control and clinical biosafety practices were implemented to prevent nosocomial transmission of EVD or LF to HCP.
All personnel who entered the SCDU and were required to measure their temperatures and complete a symptom questionnaire twice daily were eligible.
No employee developed symptomatic EVD or LF. EVD and LF antibody studies were performed on serum samples from 42 HCP. The 6 participants who had received investigational vaccination with a chimpanzee adenovirus type 3-vectored Ebola glycoprotein vaccine had high antibody titers to Ebola glycoprotein, but none had a response to Ebola nucleoprotein or VP40, or a response to LF antigens.
Patients infected with filoviruses and arenaviruses can be managed successfully without causing occupation-related symptomatic or asymptomatic infections. Meticulous attention to infection control and clinical biosafety practices by highly motivated, trained staff is critical to the safe care of patients with an infection from a special pathogen.
We apply deep kernel learning (DKL), which can be viewed as a combination of a Gaussian process (GP) and a deep neural network (DNN), to compression ignition engine emissions and compare its performance to a selection of other surrogate models on the same dataset. Surrogate models are a class of computationally cheaper alternatives to physics-based models. High-dimensional model representation (HDMR) is also briefly discussed and acts as a benchmark model for comparison. We apply the considered methods to a dataset, which was obtained from a compression ignition engine and includes as outputs soot and NOx emissions as functions of 14 engine operating condition variables. We combine a quasi-random global search with a conventional grid-optimization method in order to identify suitable values for several DKL hyperparameters, which include network architecture, kernel, and learning parameters. The performance of DKL, HDMR, plain GPs, and plain DNNs is compared in terms of the root mean squared error (RMSE) of the predictions as well as computational expense of training and evaluation. It is shown that DKL performs best in terms of RMSE in the predictions whilst maintaining the computational cost at a reasonable level, and DKL predictions are in good agreement with the experimental emissions data.
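A minimal sketch of the DKL construction described above, using PyTorch and GPyTorch (the library choice, the 14→64→2 architecture, and the training settings are our illustrative assumptions, not the tuned configuration found by the quasi-random/grid search in the paper): a small DNN maps the 14 operating-condition inputs to a low-dimensional feature space, an exact GP with an RBF kernel is placed on those features, and the network weights and GP hyperparameters are optimised jointly against the marginal likelihood.

```python
import torch
import gpytorch

class DKLRegressor(gpytorch.models.ExactGP):
    """Deep kernel learning: a DNN feature extractor feeding an exact GP."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Linear(14, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 2),  # learned 2-D feature space
        )
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=2))

    def forward(self, x):
        z = self.feature_extractor(x)  # kernel acts on DNN features, not raw inputs
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

# Toy data standing in for the engine dataset: 14 inputs, one emission output.
train_x = torch.randn(200, 14)
train_y = torch.randn(200)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLRegressor(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train(); likelihood.train()
for _ in range(100):  # joint optimisation of DNN weights and GP hyperparameters
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```

Because the GP sees only the learned features, the kernel hyperparameters and the network are fitted against the same marginal-likelihood objective; this joint training is what distinguishes DKL from simply fitting a GP on the outputs of a pre-trained DNN.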