To assess the impact of a newly developed Central-Line Insertion Site Assessment (CLISA) score on the incidence of local inflammation or infection for CLABSI prevention.
A pre- and postintervention, quasi-experimental quality improvement study.
Setting and participants:
Adult inpatients with central venous catheters (CVCs) hospitalized in an intensive care unit or oncology ward at a large academic medical center.
We evaluated the impact of the CLISA score on the incidence of insertion site inflammation and infection (CLISA score of 2 or 3) in the baseline period (June 2014–January 2015) and the intervention period (April 2015–October 2017) using interrupted time series and generalized linear mixed-effects multivariable analyses. Separate analyses were run for days to line removal after identification of a CLISA score of 2 or 3. CLISA score interrater reliability and photo quiz results were evaluated.
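As a hedged illustration of the interrupted time series approach named above, the sketch below fits a segmented regression to a synthetic monthly series. All numbers (24 months, an intervention at month 10, a 17-percentage-point level drop) are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical monthly proportion of assessed lines scoring CLISA 2-3,
# with an intervention at month 10 producing an immediate level drop.
rng = np.random.default_rng(0)
t = np.arange(24.0)
post = (t >= 10).astype(float)           # 1 after the intervention starts
t_post = np.where(t >= 10, t - 10, 0.0)  # time elapsed since intervention
y = 0.22 - 0.001 * t - 0.17 * post + rng.normal(0, 0.005, 24)

# Segmented (interrupted time series) regression:
#   y = b0 + b1*t + b2*post + b3*t_post
# b2 estimates the immediate level change at the intervention;
# b3 estimates the change in slope.
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"immediate level change ≈ {beta[2]:.3f}")
```

The study's analysis additionally used generalized linear mixed-effects models to account for repeated assessments of the same line; this sketch shows only the segmented-regression core of the method.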
Among 6,957 CVCs assessed 40,846 times, the percentage of lines with a CLISA score of 2 or 3 decreased by 78.2% between the baseline and intervention periods (from 22.0% to 4.7%), with a significant immediate decrease in the time-series analysis (P < .001). In the multivariable regression, the intervention was associated with a lower percentage of lines with a CLISA score of 2 or 3 after adjusting for age, gender, CVC body location, and hospital unit (odds ratio, 0.15; 95% confidence interval, 0.06–0.34; P < .001). In the same multivariable framework, lines with a CLISA score of 2 or 3 were removed 3.19 days faster after the intervention (P < .001). Also, line dwell time decreased 37.1%, from a mean of 14 days (standard deviation [SD], 10.6) to 8.8 days (SD, 9.0) (P < .001), and device utilization ratios decreased 9%, from 0.64 (SD, 0.08) to 0.58 (SD, 0.06) (P = .039).
The CLISA score creates a common language for assessing line infection risk and successfully promotes high compliance with best practices in timely line removal.
Meal timing may influence food choices, neurobiology and psychological states. Our exploratory study examined whether time-of-day eating patterns were associated with mood disorders among adults.
During 2004–2006 (age 26–36 years) and 2009–2011 (follow-up, age 31–41 years), N = 1304 participants reported 24-h food and beverage intake. Time-of-day eating patterns were derived by principal components analysis. At follow-up, the Composite International Diagnostic Interview measured lifetime mood disorder. Log binomial and adjacent categories log-link regression were used to examine bidirectional associations between eating patterns and mood disorder. Covariates included sex, age, marital status, social support, education, work schedule, body mass index and smoking.
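To illustrate how time-of-day eating patterns can be derived by principal components analysis, here is a minimal numpy sketch on hypothetical data. The matrix shape (participants × eight 3-hour intake windows), the random intakes and the component interpretation are all assumptions for illustration, not the cohort's records.

```python
import numpy as np

# Hypothetical intake matrix: rows = participants, columns = energy
# intake in eight 3-hour windows across the day (illustrative only).
rng = np.random.default_rng(1)
intake = rng.gamma(2.0, 500.0, size=(300, 8))

# Standardise columns, then PCA via SVD. Rows of vt are component
# loadings on the time windows (e.g. high evening loadings would
# characterise a "Late" pattern); u*s gives per-participant scores.
z = (intake - intake.mean(0)) / intake.std(0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = u[:, :3] * s[:3]            # first three pattern scores
loadings = vt[:3]                    # 3 x 8 loading matrix
explained = s[:3] ** 2 / (s ** 2).sum()
print(scores.shape, loadings.shape)
```

Participants can then be split into tertiles of each pattern score, as in the analysis described above.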
Three patterns were derived at each time-point: Grazing (intake spread across the day), Traditional (highest intakes reflected breakfast, lunch and dinner), and Late (skipped/delayed breakfast with higher evening intakes). Compared to those in the lowest third of the respective pattern at baseline and follow-up, during the 5-year follow-up, those in the highest third of the Late pattern at both time-points had a higher prevalence of mood disorder [prevalence ratio (PR) = 2.04; 95% confidence interval (CI) 1.20–3.48], and those in the highest third of the Traditional pattern at both time-points had a lower prevalence of first-onset mood disorder (PR = 0.31; 95% CI 0.11–0.87). Participants who experienced a mood disorder during follow-up had a 1.07 times higher relative risk of being in a higher Late-pattern score category at follow-up than those without a mood disorder (95% CI 1.00–1.14).
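The prevalence ratios above come from log-binomial regression; as a hedged sketch of the underlying quantity, the code below computes a crude prevalence ratio between extreme tertiles on simulated data. The tertile-specific prevalences are invented, not the study's.

```python
import numpy as np

# Hypothetical cohort: binary lifetime mood-disorder outcome by tertile
# of a "Late" eating-pattern score (illustrative numbers only).
rng = np.random.default_rng(2)
tertile = rng.integers(0, 3, 3000)
prevalence_true = np.array([0.10, 0.14, 0.20])   # rises with tertile
mood = rng.random(3000) < prevalence_true[tertile]

# Crude prevalence ratio, highest vs. lowest tertile. A log-binomial
# regression generalises this ratio while adjusting for covariates
# (sex, age, smoking, etc.), as in the analysis described above.
pr = mood[tertile == 2].mean() / mood[tertile == 0].mean()
print(f"PR (T3 vs T1) ≈ {pr:.2f}")
```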
Non-traditional eating patterns, particularly skipped or delayed breakfast, may be associated with mood disorders.
Several research teams have previously traced patterns of emerging conduct problems (CP) from early or middle childhood. The current study expands on this previous literature by using a genetically-informed, experimental, and long-term longitudinal design to examine trajectories of early-emerging conduct problems and early childhood discriminators of such patterns from the toddler period to adolescence. The sample represents a cohort of 731 toddlers and diverse families recruited based on socioeconomic, child, and family risk, varying in urbanicity and assessed on nine occasions between ages 2 and 14. In addition to examining child, family, and community level discriminators of patterns of emerging conduct problems, we were able to account for genetic susceptibility using polygenic scores and the study's experimental design to determine whether random assignment to the Family Check-Up (FCU) discriminated trajectory groups. In addition, in accord with differential susceptibility theory, we tested whether the effects of the FCU were stronger for those children with higher genetic susceptibility. Results augmented previous findings documenting the influence of child (inhibitory control [IC], gender) and family (harsh parenting, parental depression, and educational attainment) risk. In addition, children in the FCU were overrepresented in the persistent low versus persistent high CP group, but such direct effects were qualified by an interaction between the intervention and genetic susceptibility that was consistent with differential susceptibility. Implications are discussed for early identification and specifically, prevention efforts addressing early child and family risk.
Background: Cervical spondylotic myelopathy (CSM) may present with neck and arm pain. This study investigates the change in neck/arm pain post-operatively in CSM. Methods: This ambispective study identified 402 patients through the Canadian Spine Outcomes and Research Network. Outcome measures were the visual analogue scales for neck and arm pain (VAS-NP and VAS-AP) and the neck disability index (NDI). The thresholds for minimum clinically important differences (MCIDs) for VAS-NP and VAS-AP were determined to be 2.6 and 4.1, respectively. Results: VAS-NP improved from a mean of 5.6±2.9 to 3.8±2.7 at 12 months (P<0.001). VAS-AP improved from 5.8±2.9 to 3.5±3.0 at 12 months (P<0.001). The MCIDs for VAS-NP and VAS-AP were also reached at 12 months. Based on the NDI, patients were grouped into those with mild/no pain (33%) versus moderate/severe pain (67%). At 3 months, a significantly higher proportion of patients with moderate/severe pain (45.8%) improved to mild/no pain, whereas 27.2% with mild/no pain worsened to moderate/severe pain (P<0.001). At 12 months, 17.4% with mild/no pain experienced worsening of their NDI (P<0.001). Conclusions: This study suggests that neck and arm pain responds to surgical decompression in patients with CSM, reaching the MCIDs for VAS-AP and VAS-NP at 12 months.
Introduction: Simulation is becoming widely adopted across medical disciplines and by different medical professionals. For medical students, emergency medicine simulation has been shown to increase knowledge, confidence and satisfaction. At the University of Ottawa Skills and Simulation Centre, third-year medical students participate in simulated scenarios common to Emergency Medicine (EM) as part of their mandatory EM clerkship rotation. This study aims to evaluate simulation as part of the EM clerkship rotation by assessing changes in student confidence following a simulation session. Methods: In groups of seven, third-year medical students at the University of Ottawa completed simulation sessions of the following: Status Asthmaticus, Status Epilepticus, Urosepsis and Breaking Bad News. Student confidence with each topic was assessed before and after simulation with a written survey. Confidence scores pre- and post-simulation were compared with the Wilcoxon signed rank test. Results: Forty-eight third-year medical students in their core EM clerkship rotation participated in this study between September 2017 and August 2018. Medical student confidence with diagnosis of status asthmaticus (N = 44, p = 0.0449) and status epilepticus (N = 45, p = 0.0011) increased significantly following simulation, whereas confidence with diagnosis of urosepsis was unchanged (N = 45, p = 0.0871). Treatment confidence increased significantly for status asthmaticus (N = 47, p = 0.0009), status epilepticus (N = 48, p = 0.0005) and urosepsis (N = 48, p < 0.0001). Confidence for breaking bad news was not significantly changed after simulation (N = 47, p = 0.0689). Conclusion: Simulation training in our EM clerkship rotation significantly increased the confidence of medical students for certain common EM presentations, but not for all. Further work will aim to understand why some simulation scenarios did not improve confidence, and will look to improve existing scenarios.
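A minimal sketch of the pre/post comparison named above (Wilcoxon signed-rank test on paired confidence ratings), using invented Likert-scale data rather than the study's surveys:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired confidence ratings (1-5 Likert) before and after
# a simulation session for one scenario; illustrative data only.
rng = np.random.default_rng(2)
pre = rng.integers(1, 4, size=48)                        # mostly low
post = np.clip(pre + rng.integers(0, 3, size=48), 1, 5)  # tends to rise

# Wilcoxon signed-rank test on the paired differences; zero
# differences (unchanged ratings) are dropped by the default method.
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4g}")
```

The signed-rank test is the natural choice here because Likert confidence scores are ordinal and paired within students, so a paired t-test's normality assumption would be hard to justify.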
Introduction: Acute aortic syndrome (AAS) is a time-sensitive aortic catastrophe that is often misdiagnosed. There are currently no Canadian guidelines to aid in diagnosis. Our goal was to adapt the existing American Heart Association (AHA) and European Society of Cardiology (ESC) diagnostic algorithms for AAS into a Canadian evidence-based best-practices algorithm targeted to emergency medicine physicians. Methods: We chose to adapt existing high-quality clinical practice guidelines (CPGs) previously developed by the AHA/ESC using the GRADE ADOLOPMENT approach. We created a National Advisory Committee consisting of 21 members from across Canada, including academic, community and remote/rural emergency physicians/nurses, cardiothoracic and cardiovascular surgeons, cardiac anesthesiologists, critical care physicians, cardiologists, radiologists and patient representatives. The Advisory Committee communicated through multiple teleconference meetings, emails and a one-day in-person meeting. The panel prioritized questions and outcomes, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess evidence and make recommendations. The algorithm was prepared and revised through feedback and discussion in an iterative process until consensus was achieved. Results: The diagnostic algorithm comprises an updated pretest probability assessment tool with further testing recommendations based on risk level. The updated tool incorporates the likelihood of an alternative diagnosis and point-of-care ultrasound. The final best-practice diagnostic algorithm defined risk levels as Low (≤0.5%: no further testing), Moderate (0.6%–5%: further testing required) and High (>5%: computed tomography, magnetic resonance imaging or transesophageal echocardiography). During the consensus and feedback processes, we addressed a number of issues and concerns.
D-dimer can be used to reduce the probability of AAS in the moderate-risk group, but should not be used in the low- or high-risk groups. Ultrasound was incorporated as a bedside clinical examination option in pretest probability assessment for aortic insufficiency and abdominal/thoracic aortic aneurysms. Conclusion: We have created the first Canadian best-practice diagnostic algorithm for AAS. We hope this diagnostic algorithm will standardize and improve the diagnosis of AAS in emergency departments across Canada.
The majority of paediatric Clostridioides difficile infections (CDI) are community-associated (CA), but few data exist regarding associated risk factors. We conducted a case–control study to evaluate CA-CDI risk factors in young children. Participants were enrolled from eight US sites during October 2014–February 2016. Case-patients were defined as children aged 1–5 years with a positive C. difficile specimen collected as an outpatient or within 3 days of hospital admission, who had no healthcare facility admission in the prior 12 weeks and no history of CDI. Each case-patient was matched to one control. Caregivers were interviewed regarding relevant exposures. Multivariable conditional logistic regression was performed. Of 68 pairs, 44.1% were female. More case-patients than controls had a comorbidity (33.3% vs. 12.1%; P = 0.01); recent higher-risk outpatient exposures (34.9% vs. 17.7%; P = 0.03); recent antibiotic use (54.4% vs. 19.4%; P < 0.0001); or recent exposure to a household member with diarrhoea (41.3% vs. 21.5%; P = 0.04). In multivariable analysis, antibiotic exposure in the preceding 12 weeks was significantly associated with CA-CDI (adjusted matched odds ratio, 6.25; 95% CI 2.18–17.96). Improved antibiotic prescribing might reduce CA-CDI in this population. Further evaluation of the potential role of outpatient healthcare and household exposures in C. difficile transmission is needed.
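For a single binary exposure in a 1:1 matched design, conditional logistic regression reduces to the ratio of discordant pairs (the McNemar odds ratio). The sketch below illustrates this on simulated data; the exposure probabilities are assumptions, and the pair count is inflated for numerical stability rather than matching the study's 68 pairs.

```python
import numpy as np

# Hypothetical 1:1 matched case-control data (illustrative only):
# the exposure (e.g. recent antibiotic use) is more common among
# cases than among their matched controls.
rng = np.random.default_rng(3)
n_pairs = 680
case_exposed = rng.random(n_pairs) < 0.5
control_exposed = rng.random(n_pairs) < 0.2

# Only discordant pairs carry information in matched analysis:
b = np.sum(case_exposed & ~control_exposed)   # case exposed, control not
c = np.sum(~case_exposed & control_exposed)   # control exposed, case not
matched_or = b / c
print(f"matched OR ≈ {matched_or:.2f}")
```

The full multivariable conditional logistic model used in the study extends this idea to adjust for several exposures simultaneously within each matched pair.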
Dementia is a leading cause of morbidity and mortality without pharmacologic prevention or cure. Mounting evidence suggests that adherence to a Mediterranean dietary pattern may slow cognitive decline, and such adherence is important to characterise in at-risk cohorts. Thus, we determined the reliability and validity of the Mediterranean Diet and Culinary Index (MediCul), a new tool, among community-dwelling individuals with mild cognitive impairment (MCI). A total of sixty-eight participants (66 % female) aged 75·9 (sd 6·6) years, from the Study of Mental and Resistance Training MCI cohort, completed the fifty-item MediCul at two time points, followed by a 3-d food record (FR). MediCul test–retest reliability was assessed using intra-class correlation coefficients (ICC), Bland–Altman plots and κ agreement within seventeen dietary element categories. Validity was assessed against the FR using the Bland–Altman method and nutrient trends across MediCul score tertiles. The mean MediCul score was 54·6/100·0, with few participants reaching thresholds for key Mediterranean foods. MediCul had very good test–retest reliability (ICC=0·93, 95 % CI 0·884, 0·954, P<0·0001) with fair-to-almost-perfect agreement for classifying elements within the same category. Validity was moderate with no systematic bias between methods of measurement, according to the regression coefficient (y=−2·30+0·17x) (95 % CI −0·027, 0·358; P=0·091). MediCul over-estimated the mean FR score by 6 %, with limits of agreement being under- and over-estimated by 11 and 23 %, respectively. Nutrient trends were significantly associated with increased MediCul scoring, consistent with a Mediterranean pattern. MediCul provides reliable and moderately valid information about Mediterranean diet adherence among older individuals with MCI, with potential application in future studies assessing relationships between diet and cognitive function.
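A minimal sketch of the Bland–Altman agreement calculation used above, on invented paired scores; the bias and spread are assumptions for illustration, not the MediCul validation data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two measures."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired scores: a questionnaire-based score vs. a
# food-record-derived score for 68 participants (illustrative only;
# the tool is simulated to over-estimate by ~3 points).
rng = np.random.default_rng(4)
fr = rng.normal(55, 8, 68)
tool = fr + rng.normal(3, 4, 68)

bias, lo, hi = bland_altman(tool, fr)
print(f"bias={bias:.1f}, LoA=({lo:.1f}, {hi:.1f})")
```

Plotting `diff` against the pairwise means, with horizontal lines at the bias and the limits of agreement, gives the standard Bland–Altman plot referred to in the abstract.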
The evaporation of sessile droplets is analysed when the influence of the thermal properties of the system is strong. We obtain asymptotic solutions for the evolution, and hence explicit expressions for the lifetimes, of droplets when the substrate has a high thermal resistance relative to the droplet and when the saturation concentration of the vapour depends strongly on temperature. In both situations we find that the lifetimes of the droplets are significantly extended relative to those when thermal effects are weak.
BACKGROUND: Intracranial growing teratoma syndrome (IGTS) is a rare phenomenon of paradoxical germ cell tumour (GCT) growth during or following treatment despite normalization of tumour markers. We sought to evaluate the frequency, clinical characteristics and outcome of IGTS in patients in 21 North American and Australian institutions. METHODS: Patients with IGTS diagnosed from 2000-2017 were retrospectively evaluated. RESULTS: Out of 739 GCT diagnoses, IGTS was identified in 33 patients (4.5%). IGTS occurred in 9/191 (4.7%) mixed-malignant GCTs, 4/22 (18.2%) immature teratomas (ITs), 3/472 (0.6%) germinomas/germinomas with mature teratoma, and in 17 secreting non-biopsied tumours. Median age at GCT diagnosis was 10.9 years (range 1.8-19.4). Male gender (84%) and pineal location (88%) predominated. Of 27 patients with elevated markers, median serum AFP and Beta-HCG were 70 ng/mL (range 9.2-932) and 44 IU/L (range 4.2-493), respectively. IGTS occurred at a median time of 2 months (range 0.5-32) from diagnosis: during chemotherapy in 85%, during radiation in 3%, and after treatment completion in 12%. Surgical resection was attempted in all patients, leading to gross total resection in 76%. Most patients (79%) resumed GCT chemotherapy/radiation after surgery. At a median follow-up of 5.3 years (range 0.3-12), all but 2 patients are alive (1 succumbed to progressive disease, 1 to malignant transformation of GCT). CONCLUSION: IGTS occurred in less than 5% of patients with GCT, most commonly after initiation of chemotherapy. IGTS was more common in patients with IT only on biopsy than in those with mixed-malignant GCT. Surgical resection is a principal treatment modality. Survival outcomes for patients who developed IGTS are favourable.
To achieve their conservation goals, individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, it is recognized that these activities will need to be significantly scaled up in sub-Saharan Africa, because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented. These were derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African nations and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches.
A number of studies report reduced hippocampal volume in individuals who engage in problematic alcohol use. However, the magnitude of the difference in hippocampal volume between individuals with v. without problematic alcohol use has varied widely, and there have been null findings. Moreover, these studies comprise diverse alcohol use constructs and samples, including clinically significant alcohol use disorders and subclinical but problematic alcohol use (e.g. binge drinking), adults and adolescents, and males and females.
We conducted the first quantitative synthesis of the published empirical research on associations between problematic alcohol use and hippocampal volume. In total, 23 studies were identified and selected for inclusion in the meta-analysis; effects sizes were aggregated using a random-effects model.
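As a hedged sketch of the random-effects aggregation described above, here is a self-contained DerSimonian–Laird estimator; the study-level effect sizes and variances below are invented for illustration and are not the 23 included studies.

```python
import numpy as np

def dersimonian_laird(d, var):
    """Random-effects pooled effect via the DerSimonian-Laird estimator."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)                # fixed-effect pooled d
    q = np.sum(w * (d - d_fe) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)         # between-study variance
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    return np.sum(w_re * d) / np.sum(w_re)

# Hypothetical study-level effects (Cohen's d) and sampling variances.
d = np.array([-0.4, -0.6, -0.5, -0.7, -0.3])
v = np.array([0.02, 0.05, 0.03, 0.04, 0.06])
pooled = dersimonian_laird(d, v)
print(f"pooled d ≈ {pooled:.3f}")
```

When the between-study heterogeneity estimate `tau2` is zero, the random-effects estimate collapses to the fixed-effect (inverse-variance-weighted) mean, as happens with these illustrative inputs.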
Problematic alcohol use was associated with significantly smaller hippocampal volume (d = −0.53). Moderator analyses indicated that effects were stronger for clinically significant v. subclinical alcohol use and among adults relative to adolescents; effects did not differ among males and females.
Problematic alcohol use is associated with reduced hippocampal volume. The moderate overall effect size suggests the need for larger samples than are typically included in studies of alcohol use and hippocampal volume. Because the existing literature is almost entirely cross-sectional, future research using causally informative study designs is needed to determine whether this association reflects premorbid risk for the development of problematic alcohol use and/or whether alcohol has a neurotoxic effect on the hippocampus.
The Antarctic Roadmap Challenges (ARC) project identified critical requirements to deliver high priority Antarctic research in the 21st century. The ARC project addressed the challenges of enabling technologies, facilitating access, providing logistics and infrastructure, and capitalizing on international co-operation. Technological requirements include: i) innovative automated in situ observing systems, sensors and interoperable platforms (including power demands), ii) realistic and holistic numerical models, iii) enhanced remote sensing and sensors, iv) expanded sample collection and retrieval technologies, and v) greater cyber-infrastructure to process ‘big data’ collection, transmission and analyses while promoting data accessibility. These technologies must be widely available, performance and reliability must be improved and technologies used elsewhere must be applied to the Antarctic. Considerable Antarctic research is field-based, making access to vital geographical targets essential. Future research will require continent- and ocean-wide environmentally responsible access to coastal and interior Antarctica and the Southern Ocean. Year-round access is indispensable. The cost of future Antarctic science is great but there are opportunities for all to participate commensurate with national resources, expertise and interests. The scope of future Antarctic research will necessitate enhanced and inventive interdisciplinary and international collaborations. The full promise of Antarctic science will only be realized if nations act together.
Our ongoing work has demonstrated that hexokinase 2 (HK2), but not HK1 or HK3, is a critical mediator of tumour glycolysis and mitochondrial metabolism in glioblastoma (GB). Furthermore, HK2 is highly expressed in GB but not in normal brain, making it an attractive therapeutic target. Our current findings support that loss of HK2 alters tumour vasculature, increases sensitivity to radiation, and confers a significant survival benefit in several GB xenograft mouse models. Using a genome-wide transcript analysis, we identified that loss of HK2 attenuates several pro-growth signaling pathways in GB, including ERK signaling. Mechanistically, ERK rescue experiments in HK2-depleted cells reverse the sensitivity to radiation and reduce DNA damage. Furthermore, using a systems biology approach and a rational drug screen, we identified several antifungal agents of the azole class that inhibit tumour metabolism and growth in HK2-expressing GB cells. Loss of HK2 in GB cells dampened the effect of several azoles, suggesting that their mechanism of action is mediated in part through HK2. We also tested several azole compounds known to cross the blood-brain barrier in vivo. Clinically achievable doses of azoles as single agents increased survival in several orthotopic xenograft GB mouse models. In summary, HK2 drives several oncogenic pathways associated with GB, including ERK signaling, and sensitizes tumour cells to the azole class of antifungals. Future work will determine whether azoles work synergistically with radiation and temozolomide and will elucidate the mechanisms by which they inhibit GB growth in HK2-expressing cells.
We describe the performance of the Boolardy Engineering Test Array (BETA), the prototype for the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. BETA is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations made to traditional calibration and imaging procedures to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of BETA’s performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science it has produced and draw lessons from operating BETA that will be relevant to the commissioning and operation of the final ASKAP telescope.