To study 2D and 3D dosimetric values for bladder and rectum, and the influence of bladder volume on bladder dose, in high dose rate (HDR) intracavitary brachytherapy (ICBT). The large patient dataset incorporated in this study should better represent the inherent variations in the many parameters affecting dosimetry in HDR-ICBT.
Material and Methods:
We prospectively collected data for 103 consecutive cervical cancer patients (over 310 HDR fractions) undergoing CT-based HDR-ICBT at our centre. Correlations between bladder and rectum maximum volume doses and the corresponding International Commission on Radiation Units and Measurements (ICRU) point doses were estimated and analysed. The impact of bladder volume on bladder maximum dose was also assessed.
The ICRU point doses to the bladder and rectum differed from the volumetric doses to these organs. Further, bladder volume correlated poorly with bladder maximum dose over the range of volume variations encountered in clinical practice at our centre.
ICRU point doses to the bladder and rectum are therefore less likely to correlate with long-term toxicities to these organs. Further, in clinical practice, where inter-fraction bladder volume does not vary widely, there is no correlation between bladder volume and bladder dose.
Cognitive deficits in depressed adults may reflect impaired decision-making. To investigate this possibility, we analyzed data from unmedicated adults with Major Depressive Disorder (MDD) and healthy controls as they performed a probabilistic reward task. The Hierarchical Drift Diffusion Model (HDDM) was used to quantify decision-making mechanisms recruited by the task, to determine if any such mechanism was disrupted by depression.
Data came from two samples (Study 1: 258 MDD, 36 controls; Study 2: 23 MDD, 25 controls). On each trial, participants indicated which of two similar stimuli was presented; correct identifications were rewarded. Quantile-probability plots and the HDDM quantified the impact of MDD on response times (RT), speed of evidence accumulation (drift rate), and the width of decision thresholds, among other parameters.
RTs were more positively skewed in depressed v. healthy adults, and the HDDM revealed that drift rates were reduced—and decision thresholds were wider—in the MDD groups. This pattern suggests that depressed adults accumulated the evidence needed to make decisions more slowly than controls did.
Depressed adults responded slower than controls in both studies, and poorer performance led the MDD group to receive fewer rewards than controls in Study 1. These results did not reflect a sensorimotor deficit but were instead due to sluggish evidence accumulation. Thus, slowed decision-making—not slowed perception or response execution—caused the performance deficit in MDD. If these results generalize to other tasks, they may help explain the broad cognitive deficits seen in depression.
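The drift-diffusion mechanism described above can be illustrated with a small simulation. This is a minimal sketch with illustrative parameter values (not the HDDM fit or the study's data): lowering the drift rate and widening the decision threshold produces the slower, more skewed response times seen in the MDD groups.

```python
import math
import random

def simulate_ddm(drift, threshold, n_trials=500, dt=0.002, noise=1.0, seed=0):
    """Simulate a basic two-boundary drift-diffusion process.

    Evidence starts at 0 and accumulates at rate `drift` plus Gaussian
    noise until it crosses +threshold (correct) or -threshold (error).
    Returns (mean response time in seconds, accuracy). All parameter
    values here are illustrative only, not fitted to the study's data.
    """
    rng = random.Random(seed)
    total_rt, n_correct = 0.0, 0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        total_rt += t
        n_correct += x > 0
    return total_rt / n_trials, n_correct / n_trials

# Control-like: faster evidence accumulation, narrower threshold.
rt_fast, acc_fast = simulate_ddm(drift=2.0, threshold=1.0)
# MDD-like: slower accumulation, wider threshold -> longer mean RT.
rt_slow, acc_slow = simulate_ddm(drift=1.0, threshold=1.3, seed=1)
print(round(rt_fast, 3), round(rt_slow, 3))
```

In this toy setup the "MDD-like" parameters lengthen mean RT without requiring any sensorimotor deficit, which is the pattern the HDDM analysis attributes to sluggish evidence accumulation.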
Quality Improvement and Patient Safety (QIPS) plays an important role in addressing shortcomings in optimal healthcare delivery. However, there is little published guidance available for emergency department (ED) teams with respect to developing their own QIPS programs. We sought to create recommendations for established and aspiring ED leaders to use as a pathway to better patient care through programmatic QIPS activities, starting internally and working towards interdepartmental collaboration.
An expert panel comprising ten ED clinicians with QIPS and leadership expertise was established. A scoping review was conducted to identify published literature on establishing QIPS programs and frameworks in healthcare. Stakeholder consultations were conducted among Canadian healthcare leaders, and recommendations were drafted by the expert panel based on all the accumulated information. These were reviewed and refined at the 2018 CAEP Academic Symposium in Calgary using in-person and technologically-supported feedback.
Recommendations include: creating a sense of urgency for improvement; engaging relevant stakeholders and leaders; creating a formal local QIPS Committee; securing funding and resources; obtaining local data to guide the work; supporting QIPS training for team members; encouraging interprofessional, cross-departmental, and patient collaborations; using an established QIPS framework to guide the work; developing reward mechanisms and incentive structures; and considering starting small by focusing on a project rather than a program.
A list of 10 recommendations is presented as guiding principles for the establishment and sustainable deployment of QIPS activities in EDs throughout Canada and abroad. ED leaders are encouraged to implement our recommendations in an effort to improve patient care.
The difference in the defect structures produced by ions of different masses in a tungsten lattice is investigated using 80 MeV Au7+ ions and 10 MeV B3+ ions. The defects produced by the ions in recrystallized tungsten foil samples are studied using transmission electron microscopy. Dislocations of type b = 1/2⟨111⟩ and b = ⟨100⟩ were observed in the analysis. While the highly energetic gold ions produced small clusters of defects with very few dislocation lines, the boron ions produced large, sparse clusters with numerous dislocation lines. The difference in the defect structures could be due to the difference in separation between primary knock-on atoms produced by gold and boron ions.
To investigate a Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak event involving multiple healthcare facilities in Riyadh, Saudi Arabia; to characterize transmission; and to explore infection control implications.
Cases presented in 4 healthcare facilities in Riyadh, Saudi Arabia: a tertiary-care hospital, a specialty pulmonary hospital, an outpatient clinic, and an outpatient dialysis unit.
Contact tracing and testing were performed following reports of cases at 2 hospitals. Laboratory results were confirmed by real-time reverse transcription polymerase chain reaction (rRT-PCR) and/or genome sequencing. We assessed exposures and determined seropositivity among available healthcare personnel (HCP) cases and HCP contacts of cases.
In total, 48 cases were identified, involving patients, HCP, and family members across 2 hospitals, an outpatient clinic, and a dialysis clinic. At each hospital, transmission was linked to a unique index case. Moreover, 4 cases were associated with superspreading events (any interaction where a case patient transmitted to ≥5 subsequent case patients). All 4 of these patients were severely ill, were initially not recognized as MERS-CoV cases, and subsequently died. Genomic sequences clustered separately, suggesting 2 distinct outbreaks. Overall, 4 (24%) of 17 HCP cases and 3 (3%) of 114 HCP contacts of cases were seropositive.
We describe 2 distinct healthcare-associated outbreaks, each initiated by a unique index case and characterized by multiple superspreading events. Delays in recognition and in subsequent implementation of control measures contributed to secondary transmission. Prompt contact tracing, repeated testing, HCP furloughing, and implementation of recommended transmission-based precautions for suspected cases ultimately halted transmission.
Major depressive disorder (MDD) is a highly heterogeneous condition in terms of symptom presentation and, likely, underlying pathophysiology. Accordingly, it is possible that only certain individuals with MDD are well-suited to antidepressants. A potentially fruitful approach to parsing this heterogeneity is to focus on promising endophenotypes of depression, such as neuroticism, anhedonia, and cognitive control deficits.
Within an 8-week multisite trial of sertraline v. placebo for depressed adults (n = 216), we examined whether the combination of machine learning with a Personalized Advantage Index (PAI) can generate individualized treatment recommendations on the basis of endophenotype profiles coupled with clinical and demographic characteristics.
Five pre-treatment variables moderated treatment response. Higher depression severity and neuroticism, older age, less impairment in cognitive control, and being employed were each associated with better outcomes with sertraline than with placebo. Across 1000 iterations of a 10-fold cross-validation, the PAI model predicted that 31% of the sample would exhibit a clinically meaningful advantage [post-treatment Hamilton Rating Scale for Depression (HRSD) difference ⩾3] with sertraline relative to placebo. Although there were no overall outcome differences between treatment groups (d = 0.15), those identified as optimally suited to sertraline at pre-treatment had better week 8 HRSD scores if randomized to sertraline (10.7) than placebo (14.7) (d = 0.58).
A subset of MDD patients optimally suited to sertraline can be identified on the basis of pre-treatment characteristics. This model must be tested prospectively before it can be used to inform treatment selection. However, findings demonstrate the potential to improve individual outcomes through algorithm-guided treatment recommendations.
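The core PAI logic can be sketched as follows: fit an outcome model separately within each treatment arm, predict each patient's outcome under both treatments, and take the difference. This is a minimal single-moderator sketch on simulated data; the function names, toy coefficients, and data are hypothetical, and the study additionally used machine-learning variable selection and 10-fold cross-validation repeated 1000 times, which are omitted here for brevity.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def simulate_arm(n, b0, b1, rng):
    """Toy data: x is a moderator (e.g. baseline severity), y is a
    post-treatment HRSD score (lower is better). Coefficients invented."""
    data = []
    for _ in range(n):
        x = rng.uniform(0, 10)
        data.append((x, b0 + b1 * x + rng.gauss(0.0, 1.0)))
    return data

rng = random.Random(1)
drug = simulate_arm(200, 20.0, -1.2, rng)     # drug helps more at high x
placebo = simulate_arm(200, 18.0, -0.3, rng)

b0_d, b1_d = fit_line([x for x, _ in drug], [y for _, y in drug])
b0_p, b1_p = fit_line([x for x, _ in placebo], [y for _, y in placebo])

def pai(x):
    """PAI = predicted placebo outcome - predicted drug outcome.
    Positive values favour the drug for a patient with moderator x."""
    return (b0_p + b1_p * x) - (b0_d + b1_d * x)

print(round(pai(9.0), 2), round(pai(0.0), 2))
```

In this toy example a patient with high moderator values gets a positive PAI (drug recommended), while a patient with low values gets a negative PAI, mirroring how the study stratified patients as optimally suited to sertraline or not.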
Practical Implementation of an Antibiotic Stewardship Program provides an essential resource for healthcare providers in acute care, long-term care, and ambulatory care settings looking either to begin or to strengthen existing antibiotic stewardship programs. Each chapter is written by both physician and pharmacist leaders in the stewardship field and incorporates both practical knowledge and evidence-based guidance. This book will also serve as a useful resource for medical students, pharmacy students, residents, and infectious diseases fellows looking to learn more about the field of antibiotic stewardship.
Agriculture in the Central Himalayan Region depends on the availability of suitable germplasm as well as natural conditions. Due to extreme weather conditions, food and nutrition security is a major issue for communities inhabiting these remote and inaccessible areas. Millets are common crops grown in these areas. Foxtail millet (Setaria italica (L.) P. Beauv) is an important crop and forms a considerable part of the diet in this region. The aim of the present study was to explore, collect, conserve and evaluate the untapped genetic diversity of foxtail millet at the molecular level and discover variability in their nutritional traits. A total of 30 accessions having unique traits of agronomic importance were collected and molecular profiling was performed. A total of 63 alleles were generated with an average of 2.52 alleles per locus and average expected heterozygosity of 0.37 ± 0.231. Significant genetic variability was revealed through the genetic differentiation (Fst) and gene flow (Nm) values. Structure-based analysis divided whole germplasm into three sub-groups. Rich variability was found in nutritional traits such as dietary fibre in husked grains, carbohydrate, protein, lysine and thiamine content. The collected germplasm may be useful for developing nutritionally rich and agronomically beneficial varieties of foxtail millet and also designing strategies for utilization of unexploited genetic diversity for food and nutrition security in this and other similar agro-ecological regions.
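For readers unfamiliar with the diversity statistic reported above, expected heterozygosity at a locus is computed from allele frequencies as He = 1 - sum(p_i^2). A minimal sketch follows; the allele counts are hypothetical, not the study's data.

```python
def expected_heterozygosity(allele_counts):
    """He = 1 - sum(p_i ** 2), where p_i are the allele frequencies at
    one locus. Higher He indicates more allelic diversity at that locus."""
    total = sum(allele_counts)
    return 1.0 - sum((c / total) ** 2 for c in allele_counts)

# Hypothetical locus with three alleles observed 30, 20, and 10 times.
he = expected_heterozygosity([30, 20, 10])
print(round(he, 3))
```

Averaging this quantity over all loci gives the mean expected heterozygosity reported in studies such as this one (0.37 here).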
We compared the impact of a commercial chlorination product (brand name Air RahMat) in stored drinking water to traditional boiling practices in Indonesia. We conducted a baseline survey of all households with children <5 years in four communities, made 11 subsequent weekly home visits to assess acceptability and use of water treatment methods, measured Escherichia coli concentration in stored water, and determined diarrhoea prevalence among children <5 years. Of 281 households surveyed, boiling (83%) and Air RahMat (7%) were the principal water treatment methods. Multivariable log-binomial regression analyses showed lower risk of E. coli in stored water treated with Air RahMat than boiling (risk ratio (RR) 0·75, 95% confidence interval (CI) 0·56–1·00). The risk of diarrhoea in children <5 years was lower among households using Air RahMat (RR 0·43, 95% CI 0·19–0·97) than boiling, and higher in households with E. coli concentrations of 1–1000 MPN/100 ml (RR 1·54, 95% CI 1·04–2·28) or >1000 MPN/100 ml (RR 1·86, 95% CI 1·09–3·19) in stored water than in households without detectable E. coli. Although results suggested that Air RahMat water treatment was associated with lower E. coli contamination and diarrhoeal rates among children <5 years than water treatment by boiling, Air RahMat use remained low.
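The risk ratios above come from multivariable log-binomial regression; the unadjusted analogue can be computed directly from a 2x2 table. A minimal sketch with hypothetical counts (not the study's data), using the standard Wald interval on the log scale:

```python
import math

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted risk ratio with a 95% Wald CI computed on the log scale.

    events/n in the exposed group v. events/n in the unexposed group.
    Counts below are illustrative, not the study's actual data.
    """
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical: 10/100 children with diarrhoea in treated households
# v. 20/100 in untreated households.
rr, lo, hi = risk_ratio(10, 100, 20, 100)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes 1 (as with the diarrhoea RR of 0.43 above) indicates a statistically significant difference at the 5% level, whereas the E. coli RR of 0.75 with CI 0.56-1.00 sits at the boundary.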
Purpose: We measured anterior cerebral artery (ACA)-middle cerebral artery (MCA) and posterior cerebral artery (PCA)-MCA pial filling on single-phase computed tomography angiograms (sCTAs) in acute ischemic stroke and correlated these measures with the CTA-based Massachusetts General Hospital (MGH) score and the digital subtraction angiography (DSA)-based American Society of Interventional and Therapeutic Neuroradiology (ASITN) score. Methods: Patients with acute stroke and M1 MCA ± intracranial internal carotid artery occlusion on baseline CTA were included. Baseline sCTA was assessed for phase of image acquisition. An evaluator assessed collaterals using the Calgary Collateral (CC) Score (which measures pial arterial filling in the ACA-MCA and PCA-MCA regions separately), the CTA-based MGH score, and, on DSA, the ASITN score. Infarct volumes were measured on 24- to 48-hour magnetic resonance imaging/computed tomography. Results: Of 106 patients, baseline sCTA was acquired in the early arterial phase in 9.9%, peak arterial in 50.7%, equilibrium in 32.4%, early venous in 5.6%, and late venous in 1.4%. Variance in ACA-MCA collaterals explained only 32% of the variance in PCA-MCA collaterals on the CC score (Spearman's rho = 0.56). Correlation between ACA-MCA collaterals and the MGH score was strong (rho = 0.8); correlation between PCA-MCA collaterals and this score was modest (rho = 0.54). Correlation between ACA-MCA collaterals and the ASITN score was modest (n = 53, rho = 0.43), and correlation between PCA-MCA collaterals and the ASITN score was poor (rho = 0.33). Of the CTA-based scores, the CC Score (Akaike information criterion [AIC] 1022) was better at predicting follow-up infarct volumes than the MGH score (AIC 1029). Conclusion: Collateral assessments in acute ischemic stroke are best done using CTA with temporal resolution and by assessing regional variability. ACA-MCA and PCA-MCA collaterals should be evaluated separately.
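The "variance explained" figure above follows from squaring the rank correlation (0.56² ≈ 0.31, consistent with the roughly 32% reported). A minimal Spearman's rho sketch for tie-free data; the scores below are hypothetical, not the study's.

```python
def ranks(values):
    """Rank positions 1..n (assumes no tied values)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(xs, ys):
    """Spearman's rho = Pearson correlation computed on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    m = (n + 1) / 2  # mean of ranks 1..n
    num = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    den = (sum((a - m) ** 2 for a in rx)
           * sum((b - m) ** 2 for b in ry)) ** 0.5
    return num / den

# Perfectly monotone hypothetical collateral scores give rho = 1,
# so rho**2 (shared variance) = 1; rho = 0.56 shares only ~31%.
rho = spearman_rho([1, 3, 2, 5, 4], [2, 6, 4, 10, 8])
print(rho)
```

Note that production statistics libraries additionally average ranks over ties, which this sketch omits.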
Introduction: The process of triage is used to prioritize the care of patients arriving in the emergency department (ED). To our knowledge, self-triage has not been previously studied in the general ED setting. In an attempt to test the feasibility of implementing this in the ED, we sought to assess the ability of ED patients to triage themselves using an electronic questionnaire. Methods: This was a prospective observational study. An iPad-based questionnaire was designed with a series of ‘yes’ or ‘no’ answers related to common chief complaints. A score corresponding to a Canadian Triage and Acuity Scale (CTAS) category was assigned based on patients’ answers, without the knowledge of patients or ED staff. These scores were subsequently compared to the official CTAS scores assigned by triage nurses. A convenience sample of ambulatory patients arriving at the ED was enrolled over a four-week period. Patients arriving by ambulance were excluded. We also sought to assess patients’ ability to predict their ultimate disposition. Results: A total of 492 patients were enrolled. The mean age of enrolled patients was 43.9 years. Of enrolled patients, 56 (11.4%) were under 20 years old, 168 (34.1%) were between ages 20-39, 116 (23.6%) between ages 40-59, and 152 (30.9%) older than 60 years. A total of 245 patients (49.8%) identified as male. Patient-determined CTAS scores were as follows: 146 CTAS 1 (26.7%), 66 CTAS 2 (13.4%), 176 CTAS 3 (35.8%), and 104 CTAS 4 and 5 (21.1%). Formal triage CTAS scores were: 47 CTAS 2 (9.6%), 155 CTAS 3 (31.5%), and 290 CTAS 4 and 5 (59%). With our survey tool, 22.2% of patients matched their official triage scores. We found that 69.9% of participants over-estimated their CTAS score while 7.9% underestimated it. Two hundred and three patients (41.3%) felt that they needed to be admitted. In fact, 73 patients (17.3%) were admitted to hospital.
Conclusion: Using an electronic questionnaire, ambulatory patients frequently overestimated the acuity of their presenting complaint. Patients were also unable to accurately predict their disposition. Further study of different approaches to self-triage is needed before possible implementation in EDs.
Depression and metabolic syndrome (MetS) are frequently comorbid disorders that are independently associated with premature mortality. Conversely, cardiorespiratory fitness (CRF) is associated with reduced mortality risk. These factors may interact to impact mortality; however, their effects have not been assessed concurrently. This analysis assessed the mortality risk of comorbid depression/MetS and the effect of CRF on mortality in those with depression/MetS.
Prospective study of 47 702 adults in the Cooper Center Longitudinal Study. Mortality status was ascertained from the National Death Index. History of depression was determined by patient response (yes or no) to a standardized medical history questionnaire. MetS was categorized using the American Heart Association/National Heart, Lung, and Blood Institute criteria. CRF was estimated from the final speed/grade of a treadmill graded exercise test.
Overall, 13.9% reported a history of depression, 21.4% met criteria for MetS, and 3.0% met criteria for both MetS and history of depression. History of depression (HR = 1.24, p = 0.003) and MetS (HR = 1.28, p < 0.001) were independently associated with an increased mortality risk, with the greatest mortality risk among individuals with both a history of depression and MetS (HR = 1.59, p < 0.001). Higher CRF was associated with a significantly lower risk of mortality (p < 0.001) in all individuals, including those with MetS and/or a history of depression.
Those with higher levels of CRF had reduced mortality risk in the context of depression/MetS. Interventions that improve CRF could have a substantial impact on the health of persons with depression/MetS.
In 2013, a before-and-after intervention study was conducted to evaluate the effect of 24-hour intensivist coverage on length of stay and rates of catheter-associated urinary tract infection, central line-associated bloodstream infection, and ventilator-associated events. Intensivist coverage for 24 hours did not decrease length of stay or result in a decrease in any specific infection rate.
Infect. Control Hosp. Epidemiol. 2016;37(3):352–354