Evidence about the impact of the COVID-19 pandemic on the mental health of specific subpopulations, such as university students, is needed as communities prepare for future waves.
To study the association of proximity of COVID-19 with symptoms of anxiety and depression in university students.
This trend study analysed weekly cross-sectional surveys of probabilistic samples of students from the University of British Columbia for 13 weeks, through the first wave of COVID-19. The main variable assessed was propinquity of COVID-19, defined as ‘knowing someone who tested positive for COVID-19’, which was specified at different levels: knowing someone anywhere globally, in Canada, in Vancouver, in their course or at home. Proximity was included in multivariable linear regressions to assess its association with primary outcomes, including 30-day symptoms of anxiety and/or depression.
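As an illustration of the regression framework described above, the sketch below fits a linear probability model with an exposure-by-gender interaction on toy data; the variable names (knows_vancouver, female, anxiety_30d) are hypothetical placeholders and the study's other covariates are omitted.

```python
# A minimal sketch, on invented data, of a linear probability model with
# an exposure-by-gender interaction, in the spirit of the moderation
# analysis described in the abstract. Not the study's actual model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "anxiety_30d":     [1, 0, 1, 0, 1, 0, 0, 1],  # 30-day anxiety symptoms
    "knows_vancouver": [1, 0, 1, 0, 0, 1, 0, 1],  # proximity exposure
    "female":          [0, 1, 1, 0, 1, 0, 1, 0],
})

# "a * b" expands to a + b + a:b, giving main effects plus the interaction.
model = smf.ols("anxiety_30d ~ knows_vancouver * female", data=df).fit()
print(model.summary())
```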
Of 1388 respondents (adjusted response rate of 50%), 5.6% knew someone with COVID-19 in Vancouver, 0.8% in their course and 0.3% at home. Ten percent were overwhelmed and unable to access help. Knowing someone in Vancouver was associated with an 11-percentage-point increase in the probability of 30-day anxiety symptoms (s.e. 0.05, P ≤ 0.05), moderated by gender, with a significant interaction between the exposure and being female (coefficient −0.20, s.e. 0.09, P ≤ 0.05). No association was found with depressive symptoms.
Propinquity of COVID-19 cases may increase the likelihood of anxiety symptoms in students, particularly among men. Most students reported coping well, but additional support is needed for an emotionally overwhelmed minority who report being unable to access help.
The coronavirus disease 2019 (COVID-19) pandemic forced American medical systems to adapt to high patient loads of respiratory disease. Its disruption of normal routines also brought opportunities for broader reform. The purpose of this article is to describe how the Carl R. Darnall Army Medical Center (CRDAMC), a medium-sized Army hospital, capitalized on opportunities to advance its strategic aims during the pandemic. Specifically, the hospital sequentially adopted virtual video visits, surged preventative screenings, and made over its image to appeal to patients seeking urgent care. These campaigns supported COVID-19 efforts and larger strategic goals simultaneously, and they will endure for years to come. Predictably, CRDAMC encountered obstacles in the course of its transformation. These obstacles and the lessons that followed are presented to assist future medical leaders seeking quantum change through the opportunities made available by health crises.
Background: Infection prevention surveillance for cross-transmission is often performed by manual review of microbiologic culture results to identify geotemporally related clusters. However, the sensitivity and specificity of this approach remain uncertain. Whole-genome sequencing (WGS) analysis can provide a gold standard for identifying cross-transmission events. Objective: We employed a published WGS program, the Philips IntelliSpace Epidemiology platform, to compare the accuracy of two surveillance methods: (i) a virtual infection practitioner (VIP) with perfect recall and automated analysis of antibiotic susceptibility testing (AST) data, sample collection timing, and patient location data; and (ii) a novel clinical matching (CM) algorithm that provides cluster suggestions based on a weighted analysis of AST data, timing of sample collection, and shared location stays between patients. Methods: WGS was performed routinely on inpatient and emergency department isolates of Enterobacter cloacae, Enterococcus faecium, Klebsiella pneumoniae, and Pseudomonas aeruginosa at an academic medical center. Single-nucleotide variants (SNVs) were compared within core genome regions on a per-species basis to determine cross-transmission clusters. One unique strain per patient was included in each analysis, and duplicates were excluded from the final results. Results: Between May 2018 and April 2019, clinical data from 121 patients were paired with WGS data from 28 E. cloacae, 21 E. faecium, 61 K. pneumoniae, and 46 P. aeruginosa isolates. Previously published SNV relatedness thresholds were applied to define genomically related isolates. Mapping of genomic relatedness defined the following clusters: 4 patients in 2 E. faecium clusters and 2 patients in 1 P. aeruginosa cluster. The VIP method identified 12 potential clusters involving 28 patients, all of which were pseudoclusters. The CM method identified 7 clusters consisting of 27 patients, including 1 true E. faecium cluster of 2 patients with genomically related isolates. Conclusions: In light of the WGS data, all of the potential clusters identified by the VIP were pseudoclusters, lacking sufficient genomic relatedness. In contrast, the CM method showed increased sensitivity and specificity: it decreased the proportion of pseudoclusters by 14 percentage points, and it identified a genomically related cluster of E. faecium. These findings suggest that integrating clinical data analytics with WGS is likely to benefit institutions by limiting the expenditure of resources on pseudoclusters. WGS combined with more sophisticated surveillance approaches, rather than standard methods as modeled by the VIP, is needed to better identify and address true cross-transmission events.
Funding: This study was supported by Philips Healthcare.
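The CM algorithm itself is proprietary, but the abstract's description (a weighted analysis of AST data, collection timing, and shared location stays) suggests a pairwise scoring of patient isolates. The sketch below is a hypothetical illustration of such a score; the weights, time-decay window, and example values are invented for the example, not taken from the platform.

```python
# Illustrative-only sketch of a weighted pairwise "clinical match" score.
# All weights and the 30-day window are hypothetical assumptions.
from datetime import date

def match_score(ast_a, ast_b, day_a, day_b, wards_a, wards_b,
                w_ast=0.5, w_time=0.3, w_loc=0.2, window=30):
    # Fraction of shared antibiotics with identical susceptibility calls.
    shared = set(ast_a) & set(ast_b)
    ast_sim = sum(ast_a[d] == ast_b[d] for d in shared) / max(len(shared), 1)
    # Time proximity decays linearly to zero outside the window (days).
    gap = abs((day_a - day_b).days)
    time_sim = max(0.0, 1.0 - gap / window)
    # Any overlap in ward stays counts as a location link.
    loc_sim = 1.0 if set(wards_a) & set(wards_b) else 0.0
    return w_ast * ast_sim + w_time * time_sim + w_loc * loc_sim

score = match_score({"cipro": "R", "ceftriaxone": "S"},
                    {"cipro": "R", "ceftriaxone": "R"},
                    date(2018, 6, 1), date(2018, 6, 10),
                    ["ICU-2"], ["ICU-2", "MED-5"])
print(round(score, 2))  # pairs above a tuned threshold become cluster suggestions
```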
Decontamination of N95 respirators is being used by clinicians in the face of a global shortage of these devices. Some decontamination treatments, such as certain vaporized hydrogen peroxide or ultraviolet methods, had no impact on respirator performance, whereas other treatments resulted in substantial damage to masks.
Vascular cognitive impairment (VCI) post-stroke is frequent but may go undetected, which highlights the need to better screen cognitive functioning following a stroke.
We examined the clinical utility of the Montreal Cognitive Assessment (MoCA) in detecting cognitive impairment against a gold-standard neuropsychological battery.
We assessed cognitive status with a comprehensive battery of neuropsychological tests in 161 individuals who were at least 3-months post-stroke. We used receiver operating characteristic (ROC) curves to identify two cut points for the MoCA to maximize sensitivity and specificity at a minimum 90% threshold. We examined the utility of the Symbol Digit Modalities Test, a processing speed measure, to determine whether this additional metric would improve classification relative to the MoCA total score alone.
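As an illustration of the two-cut-point procedure, the sketch below selects cut points on simulated MoCA scores so that each classification rule meets a 90% floor; the score distributions, and therefore the resulting cuts, are synthetic rather than study values.

```python
# A minimal sketch of choosing two MoCA cut points on simulated data.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic MoCA scores (0-30); impaired patients tend to score lower.
impaired = np.clip(rng.normal(19, 3.0, 400).round(), 0, 30)
intact   = np.clip(rng.normal(28, 1.5, 400).round(), 0, 30)

cuts = np.arange(0, 31)
# Rule 1: score <= c -> "high probability of impairment"; its sensitivity.
sens = np.array([(impaired <= c).mean() for c in cuts])
# Rule 2: score >= c -> "low probability of impairment"; its specificity.
spec = np.array([(intact >= c).mean() for c in cuts])

lower = cuts[sens >= 0.90].min()  # smallest cut keeping sensitivity >= 90%
upper = cuts[spec >= 0.90].max()  # largest cut keeping specificity >= 90%
print(lower, upper)  # scores strictly between the cuts are "indeterminate"
```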
Using two cut points, 27% of participants scored ≤ 23 and were classified as high probability of cognitive impairment (sensitivity 92%), and 24% of participants scored ≥ 28 and were classified as low probability of cognitive impairment (specificity 91%). The remaining 48% of participants scored from 24 to 27 and were classified as indeterminate probability of cognitive impairment. The addition of a processing speed measure improved classification for the indeterminate group by correctly identifying 65% of these individuals, for an overall classification accuracy of 79%.
The utility of the MoCA in detecting cognitive impairment post-stroke is improved when using a three-category approach. The addition of a processing speed measure provides a practical and efficient method to increase confidence in the determined outcome while minimally extending the screening routine for VCI.
The Academic Development Study of Australian Twins was established in 2012 with the purpose of investigating the relative influence of genes and environments in literacy and numeracy capabilities across two primary and two secondary school grades in Australia. It is the first longitudinal twin project of its kind in Australia and comprises a sample of 2762 twin pairs, 40 triplet sets and 1485 nontwin siblings. Measures include standardized literacy and numeracy test data collected at Grades 3, 5, 7 and 9 as part of the National Assessment Program: Literacy and Numeracy. A range of demographic and behavioral data was also collected, some at multiple longitudinal time points. This article outlines the background and rationale for the study and provides an overview of the research design, sample and measures collected. Findings emerging from the project and future directions are discussed.
African White-backed Vultures were recently uplisted to ‘Critically Endangered’ by IUCN due to declines across their range. Poisoning is widely accepted as the major reason for these declines. Botswana supports a high number of this species (> c. 1,200 breeding pairs), but as yet no published information exists on their breeding success in the country. However, mass poisonings within Botswana and neighbouring countries have killed thousands of White-backed Vultures in recent years. We therefore expected that nesting numbers may have declined in this region if these poisoning events killed local breeding birds. We used information from aerial surveys conducted between 2006 and 2017 in Khwai and Linyanti, two important breeding areas for this species in north-central Botswana, to determine whether there was any change in nesting numbers and breeding success of White-backed Vultures. Results showed an overall 53.5% decline in nesting numbers, with a greater decline in Linyanti than in Khwai. In both areas, breeding success was significantly lower in 2017 than it was ten years earlier. We recommend that similar repeat surveys be continued to provide greater confidence in the trends of both nesting numbers and breeding performance. Population viability analysis suggested that if the productivity levels detected in 2017 were a true indication of current productivity for this population, and if recent high poisoning rates continue, the population could be extirpated from the area within the next 13 years.
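The population viability analysis itself is not described here; as a purely illustrative back-of-envelope analogue, a geometric decline projection looks like the sketch below, where the starting count and annual decline rate are invented rather than taken from the study.

```python
# Illustrative-only geometric projection of nesting numbers; not the PVA
# used in the study. Starting count and decline rate are assumptions.
n, year, rate = 200, 0, 0.20  # 200 nests, 20% annual decline (assumed)
while n >= 1:
    year += 1
    n *= (1 - rate)
print(f"Falls below one nest after ~{year} years")
```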
This research was carried out to quantify the effects of a range of variables on milk fat globule (MFG) size for a herd of Holstein-Friesian cows managed through an automatic milking system with year-round calving. We hypothesised that the overall variation in average MFG size observed between individual animals of the same herd cannot sufficiently be explained by the magnitude of the effects of variables that could be manipulated on-farm. Hence, we aimed to conduct an extensive analysis of possible determinants of MFG size, including physiological characteristics (parity, days in milk, days pregnant, weight, age, rumination minutes, somatic cell count) and milk production traits (number of milkings, milk yield, fat yield, protein and fat content, fat-protein ratio) at the individual animal level, and environmental conditions (diet, weather, season) for the whole herd. Our results show that, when analysed in isolation, many of the studied variables have a detectable effect on MFG size. However, analysis of their additive effects identified days in milk, parity and milk yield as the most important variables. In accordance with our hypothesis, the estimated effects of these variables, calculated using a multivariable linear mixed model, do not sufficiently explain the overall variation between cows, whose average MFG size ranged from 2.70 to 5.69 µm. We further show that environmental variables, such as sampling day (across seasons) or the proportion of pasture and silage in the diet, have limited effects on MFG size, and that physiological differences outweigh the effects of milk production traits and environmental conditions. This presents further evidence that the selection of individual animals is more important than the adjustment of on-farm variables in controlling MFG size.
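As an illustration of the kind of linear mixed model described above, the sketch below fits a model with a random intercept per cow on simulated data; the variable names, effect sizes, and units are hypothetical placeholders, not estimates from the study.

```python
# A minimal sketch of a linear mixed model with cow as a random effect,
# analogous in spirit to the analysis described; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "cow":          rng.integers(0, 30, n),   # 30 cows, repeated samples
    "days_in_milk": rng.uniform(5, 305, n),
    "parity":       rng.integers(1, 6, n),
    "milk_yield":   rng.normal(30, 5, n),
})
# Simulated MFG size (micrometres) with a cow-level random intercept.
cow_effect = rng.normal(0, 0.5, 30)
df["mfg_size"] = (4.0 - 0.002 * df["days_in_milk"] + 0.1 * df["parity"]
                  - 0.01 * df["milk_yield"] + cow_effect[df["cow"]]
                  + rng.normal(0, 0.2, n))

model = smf.mixedlm("mfg_size ~ days_in_milk + parity + milk_yield",
                    df, groups=df["cow"]).fit()
print(model.summary())
```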
Identical and fraternal twin pairs reared together have been key to understanding the genetic and environmental etiology of dyslexia and of individual differences in reading. In this chapter, we begin with a brief overview of the methods of twin research, and the historical development and application of these methods to understanding the etiology of individual differences and deficits in reading and related skills. Then we examine results from predominantly English-language twin research on dyslexia. The next section on twin studies of individual differences in reading ability introduces a broader cross-language perspective that includes comparisons of findings from studies in the United States, the United Kingdom, Australia, Norway, Sweden, the Netherlands, and China. Then we expand the reading phenotype beyond word recognition to reading comprehension, the ultimate goal of reading.
Carbonate glasses can be formed routinely in the system K2CO3–MgCO3. The enthalpy of drop solution for one such 0.55K2CO3–0.45MgCO3 glass, referenced to 298 K, was determined to be 115.00 ± 1.21 kJ/mol by drop solution calorimetry in molten sodium molybdate (3Na2O·4MoO3) at 975 K. The corresponding heat of formation from oxides at 298 K was −261.12 ± 3.02 kJ/mol. This ternary glass is shown to be slightly metastable with respect to the binary crystalline components (K2CO3 and MgCO3) and may be further stabilized by entropy terms arising from cation disorder and carbonate group distortions. This high degree of disorder is confirmed by 13C MAS NMR measurement of the average chemical shift tensor values, which show the asymmetry of the carbonate anion to be significantly larger than previously reported values. Molecular dynamics simulations show that the structure of this carbonate glass reflects the strong interaction between the oxygen atoms in distorted carbonate anions and potassium cations.
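The abstract does not spell out the thermochemical cycle; a schematic Hess cycle of the form commonly used in drop solution calorimetry (an assumed reconstruction, not the paper's stated equations) would be:

```latex
% Assumed Hess cycle relating drop solution enthalpies (\Delta H_{ds},
% measured into the 975 K solvent and referenced to 298 K) to the heat
% of formation of the glass from the oxides plus CO2.
\Delta H_{f,\mathrm{ox}}^{298}(\text{glass})
  = 0.55\,\Delta H_{ds}(\mathrm{K_2CO_3})
  + 0.45\,\Delta H_{ds}(\mathrm{MgCO_3})
  - \Delta H_{ds}(\text{glass})
  + 0.55\,\Delta H_{f,\mathrm{ox}}^{298}(\mathrm{K_2CO_3})
  + 0.45\,\Delta H_{f,\mathrm{ox}}^{298}(\mathrm{MgCO_3})
```

The first three terms give the enthalpy of formation of the glass from the crystalline binary carbonates (the quantity against which the glass is "slightly metastable"); the final two terms carry the reference state back to the oxides and CO2.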
Poor response to dopaminergic antipsychotics constitutes a major challenge in the treatment of psychotic disorders, and markers of non-response during the first episode are warranted. Previous studies have found increased levels of glutamate and γ-aminobutyric acid (GABA) in non-responding first-episode patients compared with responders, but it is unknown whether non-responders can be identified using reference levels from healthy controls (HCs).
Thirty-nine antipsychotic-naïve patients with first-episode psychosis and 36 matched HCs underwent repeated assessments with the Positive and Negative Syndrome Scale and 3T magnetic resonance spectroscopy. Glutamate scaled to total creatine (/Cr) was measured in the anterior cingulate cortex (ACC) and left thalamus, and levels of GABA/Cr were measured in ACC. After 6 weeks, we re-examined 32 patients on aripiprazole monotherapy and 35 HCs, and after 26 weeks we re-examined 30 patients on naturalistic antipsychotic treatment and 32 HCs. The Andreasen criteria defined non-response.
Before treatment, thalamic glutamate/Cr was higher in the whole group of patients but levels normalized after treatment. ACC levels of glutamate/Cr and GABA/Cr were lower at all assessments and unaffected by treatment. When compared with HCs, non-responders at week 6 (19 patients) and week 26 (16 patients) had higher baseline glutamate/Cr in the thalamus. Moreover, non-responders at 26 weeks had lower baseline GABA/Cr in ACC. Baseline levels in responders and HCs did not differ.
Glutamatergic and GABAergic abnormalities in antipsychotic-naïve patients appear driven by non-responders to antipsychotic treatment. If replicated, normative reference levels for glutamate and GABA may aid estimation of clinical prognosis in first-episode psychosis patients.
Weed management is a major challenge in organic crop production, and organic farms generally harbor larger weed populations and more diverse communities compared with conventional farms. However, little research has been conducted on the effects of different organic management practices on weed communities and crop yields. In 2014 and 2015, we measured weed community structure and soybean [Glycine max (L.) Merr.] yield in a long-term experiment that compared four organic cropping systems that differed in nutrient inputs, tillage, and weed management intensity: (1) high fertility (HF), (2) low fertility (LF), (3) enhanced weed management (EWM), and (4) reduced tillage (RT). In addition, we created weed-free subplots within each system to assess the impact of weeds on soybean yield. Weed density was greater in the LF and RT systems compared with the EWM system, but weed biomass did not differ among systems. Weed species richness was greater in the RT system compared with the EWM system, and weed community composition differed between RT and other systems. Our results show that differences in weed community structure were primarily related to differences in tillage intensity, rather than nutrient inputs. Soybean yield was lower in the EWM system compared with the HF and RT systems. When averaged across all four cropping systems and both years, soybean yield in weed-free subplots was 10% greater than soybean yield in the ambient weed subplots that received standard management practices for the systems in which they were located. Although weed competition limited soybean yield across all systems, the EWM system, which had the lowest weed density, also had the lowest soybean yield. Future research should aim to overcome such trade-offs between weed control and yield potential, while conserving weed species richness and the ecosystem services associated with increased weed diversity.
There is a long history of exploitation of the South American river turtle Podocnemis expansa. Conservation efforts for this species started in the 1960s but best practices were not established, and population trends and the number of nesting females protected remained unknown. In 2014 we formed a working group to discuss conservation strategies and to compile population data across the species’ range. We analysed the spatial pattern of its abundance in relation to human and natural factors using multiple regression analyses. We found that > 85 conservation programmes are protecting 147,000 nesting females, primarily in Brazil. The top six sites harbour > 100,000 females and should be prioritized for conservation action. Abundance declines with latitude and we found no evidence of human pressure on current turtle abundance patterns. It is presently not possible to estimate the global population trend because the species is not monitored continuously across the Amazon basin. The number of females is increasing at some localities and decreasing at others. However, the current size of the protected population is well below the historical population size estimated from past levels of human consumption, which demonstrates the need for concerted global conservation action. The data and management recommendations compiled here provide the basis for a regional monitoring programme among South American countries.
Determining infectious cross-transmission events in healthcare settings involves manual surveillance of case clusters by infection control personnel, followed by strain typing of the clinical and environmental isolates suspected in those clusters. Recent advances in genomic sequencing and cloud computing now allow rapid molecular typing of infecting isolates.
To facilitate rapid recognition of transmission clusters, we aimed to assess infection control surveillance using whole-genome sequencing (WGS) of microbial pathogens to identify cross-transmission events for epidemiologic review.
Clinical isolates of Staphylococcus aureus, Enterococcus faecium, Pseudomonas aeruginosa, and Klebsiella pneumoniae were obtained prospectively at an academic medical center, from September 1, 2016, to September 30, 2017. Isolate genomes were sequenced, followed by single-nucleotide variant analysis; a cloud-computing platform was used for whole-genome sequence analysis and cluster identification.
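As a rough sketch of the cluster identification step, the following applies single-linkage clustering with an SNV-distance cutoff to a hypothetical pairwise distance matrix; the distances and the 20-SNV threshold are illustrative, and published species-specific thresholds would be used in practice.

```python
# A minimal sketch of threshold-based clustering on a pairwise SNV
# distance matrix; the matrix and threshold are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise core-genome SNV distances for five isolates.
snv = np.array([
    [  0,   5, 180, 210,   8],
    [  5,   0, 175, 205,  10],
    [180, 175,   0,  30, 178],
    [210, 205,  30,   0, 208],
    [  8,  10, 178, 208,   0],
])

# Single-linkage clustering; isolates within 20 SNVs join a cluster.
labels = fcluster(linkage(squareform(snv), method="single"),
                  t=20, criterion="distance")
print(labels)  # e.g., isolates 0, 1, 4 form one putative transmission cluster
```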
Most strains of the 4 studied pathogens were unrelated, and 34 potential transmission clusters were present. The characteristics of the potential clusters were complex and likely not identifiable by traditional surveillance alone. Notably, only 1 cluster had been suspected by routine manual surveillance.
Our work supports the assertion that integrating genomic and clinical epidemiologic data can augment infection control surveillance, both for identifying cross-transmission events and for including missed and excluding misidentified outbreaks (ie, false alarms). The integration of clinical data is essential to prioritize suspect clusters for investigation, and, for existing infections, a timely review of both the clinical and WGS results holds promise for reducing healthcare-associated infections (HAIs). A richer understanding of cross-transmission events within healthcare settings will require the expansion of current surveillance approaches.
Syndromic surveillance is a form of surveillance that generates information for public health action by collecting, analysing and interpreting routine health-related data on symptoms and clinical signs reported by patients and clinicians, rather than relying on microbiologically or clinically confirmed cases. In England, a suite of national real-time syndromic surveillance systems (SSS) has been developed over the last 20 years, utilising data from a variety of health care settings (a telehealth triage system, general practice and emergency departments). The real-time systems in England have been used for early detection (e.g. of seasonal influenza), for situational awareness (e.g. describing the size and demographics of the impact of a heatwave) and for reassurance about the lack of impact on population health of mass gatherings (e.g. the London 2012 Olympic and Paralympic Games). We highlight the lessons learnt from running SSS for nearly two decades and propose questions and issues still to be addressed. We feel that syndromic surveillance is an example of the use of ‘big data’, but contend that the focus for sustainable and useful systems should be on the added value of such systems and the importance of people working together to maximise the public health value of syndromic surveillance services.
Informal (unpaid) care-givers of older people with dementia experience stress and isolation, causing physical and psychiatric morbidity. Comprehensive geriatric assessment clinics represent an important geriatrician-led model of dementia care. Our qualitative study examined the educational and support needs of care-givers of people diagnosed with dementia at a geriatric assessment clinic, the resources used to address those needs and the challenges experienced in doing so. We conducted structured thematic analysis of interviews with 18 informal care-givers. Participants’ narratives reflected four themes. First, care-givers sought information from varied sources, including the Alzheimer Society, the internet and clinic staff. Responsive behaviours, the expected progression of dementia and system navigation were topics of particular interest. Second, care-givers obtained assistance from public, for-profit and voluntary sources. Third, care-givers received little assistance overall: two-thirds received fewer than four hours of help weekly from all sources combined, none received more than 15 hours, and several received no assistance whatsoever. Publicly funded support workers’ tasks, and their timing, were often unhelpful. Finally, while numerous care-givers felt physical and emotional strain, and worried about how poor health impaired their care-giving, many hesitated to seek help. The needs of this unique population of informal care-givers can be met by improved home-care service flexibility and access to trustworthy information about the expected progression of dementia and about skills for managing behavioural and psychological symptoms.
OBJECTIVES/SPECIFIC AIMS: A brain-machine interface (BMI) is a device implanted into the brain of a paralyzed or injured patient to control an external assistive device, such as a cursor on a computer screen, a motorized wheelchair, or a robotic limb. We hypothesize that electrical stimulation through subdural electrocorticography (ECoG) electrodes can generate percepts of somatosensation such as vibration, temperature, or proprioception. METHODS/STUDY POPULATION: The study will include 10 subjects: informed, willing, and consented epilepsy patients undergoing initial surgery for placement of subdural ECoG electrodes for seizure monitoring. ECoG will be used as a platform for recording high-resolution local field potentials during real-touch behavioral tasks. ECoG electrodes will also be used to electrically stimulate the human cerebral cortex in order to map and understand how varying stimulation parameters produce percepts of sensation. RESULTS/ANTICIPATED RESULTS: To determine how tactile and proprioceptive signals are integrated in S1, we will perform spectral analysis of the broadband local field potentials, looking for increased power in specific frequency bands in the ECoG recordings while the hand is touched or moved. To explore generating artificial sensation, the subject will be asked to perform a variety of tasks with and without the aid of stimulation. We anticipate that performance will be enhanced with the addition of artificial sensation. DISCUSSION/SIGNIFICANCE OF IMPACT: Many patients might benefit from a BMI, including those with stroke, amputation, spinal cord injury, or brain trauma. The current generation of BMI devices is guided by visual feedback alone. However, without somatosensory feedback, even the most basic limb movements are difficult to perform in a fluid and natural manner. The results from this project will be crucial to developing a closed-loop motor/sensory BMI.
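A minimal sketch of the planned band-power comparison, using simulated signals rather than ECoG recordings; the sampling rate, band limits, and burst frequency are assumed for illustration only.

```python
# Simulated band-power comparison between "rest" and "touch" conditions.
import numpy as np
from scipy.signal import welch

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
# Simulated LFP: baseline noise plus a high-gamma burst "during touch".
rest  = rng.normal(0, 1, t.size)
touch = rest + 0.8 * np.sin(2 * np.pi * 90 * t)

for label, sig in [("rest", rest), ("touch", touch)]:
    f, pxx = welch(sig, fs=fs, nperseg=1024)
    band = (f >= 70) & (f <= 110)           # assumed high-gamma band
    print(label, pxx[band].mean())          # expect higher power during touch
```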
OBJECTIVES/SPECIFIC AIMS: (1) To evaluate clinical outcomes in mechanically ventilated patients with and without fever. We hypothesize that, after adjusting for confounding factors such as age and severity of illness: (a) in septic patients, fever will be associated with improved clinical outcomes; (b) in nonseptic patients, fever will be associated with worse clinical outcomes. (2) To examine the relationship between antipyretics and mortality in mechanically ventilated patients at risk for acute lung injury. We hypothesize that antipyretics will have no effect on clinical outcomes in mechanically ventilated patients with and without sepsis. METHODS/STUDY POPULATION: This is a retrospective study of a “before and after” observational cohort of 1705 patients with acute initiation of mechanical ventilation in the Emergency Department from September 2009 to March 2016. Data on temperature and antipyretic medication during the first 72 hours were collected retrospectively from the EHR. Temperature measurements were adjusted based on route of measurement. Patients intubated for cardiac arrest or brain injury were excluded from the primary analysis because of the known harm of hyperthermia in these subgroups. Cox proportional hazards models and multivariable linear regression were used to analyze time-to-event and continuous outcomes, respectively. Predetermined patient demographics were entered into each multivariable model using backward and forward stepwise regression. Models were assessed for collinearity, and residual plots were used to confirm that each model met its assumptions. RESULTS/ANTICIPATED RESULTS: Antipyretic administration is still being analyzed; initial temperature results are reported here. In the overall group, hypothermia or fever within 72 hours of intubation, compared with normothermia, conferred hazard ratios (HRs) of 1.95 (95% CI: 1.48–2.56) and 1.31 (95% CI: 0.97–1.78), respectively. Hypothermia and fever reduced hospital-free days by 3.29 (95% CI: 2.15–4.42) and 2.34 (95% CI: 1.21–3.46) days, respectively. In the subgroup analysis of patients with sepsis, the HR for 28-day mortality was 2.57 (95% CI: 1.68–3.93) for hypothermia; fever had no effect on mortality (HR 1.11, 95% CI: 0.694–1.76). Both hypothermia and fever reduced hospital-free days, by 5.39 (95% CI: 4.33–7.54) and 3.98 (95% CI: 2.46–5.32) days, respectively. DISCUSSION/SIGNIFICANCE OF IMPACT: As expected, both hypothermia and fever increased 28-day mortality and decreased hospital-free days. In the sepsis subgroup, hypothermia again resulted in higher mortality and fewer hospital-free days, while fever conferred neither a survival benefit nor a survival cost but did reduce hospital-free days. Antipyretic administration complicates these findings, as medication may mask fever or exert its own effect on survival. Fever may also affect mechanically ventilated septic patients differently than septic patients not on mechanical ventilation. Continued analysis of these data, including antipyretic administration, ventilator-free days, and progression to ARDS, will address these questions.
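For readers unfamiliar with the time-to-event modelling named above, the sketch below fits a Cox proportional hazards model on a toy dataset; the column names and values are invented, and the study's covariates and stepwise selection are omitted.

```python
# A minimal sketch of a Cox proportional hazards fit on invented data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_death": [5, 28, 12, 14, 9, 20, 28, 28],
    "died":          [1,  0,  1,  1, 1,  1,  0,  0],  # 0 = censored at day 28
    "hypothermia":   [1,  0,  1,  0, 1,  0,  0,  1],
    "age":           [71, 54, 66, 48, 80, 59, 45, 75],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_death", event_col="died")
cph.print_summary()  # hazard ratios for hypothermia and age
```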
To explore the prevalence and drivers of hospital-level variability in antibiotic utilization among hematopoietic cell transplant (HCT) recipients to inform antimicrobial stewardship initiatives.
Retrospective cohort study using data merged from the Pediatric Health Information System and the Center for International Blood and Marrow Transplant Research.
The study included 27 transplant centers in freestanding children’s hospitals.
The primary outcome was days of broad-spectrum antibiotic use in the interval from day of HCT through neutrophil engraftment. Hospital antibiotic utilization rates were reported as days of therapy (DOTs) per 1,000 neutropenic days. Negative binomial regression was used to estimate hospital utilization rates, adjusting for patient covariates including demographics, transplant characteristics, and severity of illness. To better quantify the magnitude of hospital variation and to explore hospital-level drivers in addition to patient-level drivers of variation, mixed-effects negative binomial models were also constructed.
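As an illustration of this modelling approach, the sketch below fits a negative binomial model of days of therapy with neutropenic days as the exposure term; the data, covariates, and the rate used to simulate counts are hypothetical placeholders.

```python
# A minimal sketch of negative binomial regression with an exposure
# offset, echoing the modelling described; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "hospital":         rng.integers(0, 27, n),
    "age":              rng.uniform(1, 18, n),
    "neutropenic_days": rng.integers(10, 40, n),
})
df["dot"] = rng.poisson(0.6 * df["neutropenic_days"])  # days of therapy

model = smf.glm("dot ~ age + C(hospital)", data=df,
                family=sm.families.NegativeBinomial(),
                exposure=df["neutropenic_days"]).fit()
# Fitted rates are DOTs per neutropenic day; multiply by 1,000 to express
# them per 1,000 neutropenic days, as in the abstract.
print(model.params.head())
```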
Adjusted hospital rates of antipseudomonal antibiotic use varied from 436 to 1,121 DOTs per 1,000 neutropenic days, and rates of broad-spectrum, gram-positive antibiotic use varied from 153 to 728 DOTs per 1,000 neutropenic days. We detected variability by hospital in choice of antipseudomonal agent (ie, cephalosporins, penicillins, and carbapenems), but gram-positive coverage was primarily driven by vancomycin use. Considerable center-level variability remained even after controlling for additional hospital-level factors. Antibiotic use was not strongly associated with days of significant illness or mortality.
Among a homogeneous population of children undergoing HCT for acute leukemia, both the quantity and spectrum of antibiotic exposure in the immediate posttransplant period varied widely. Antimicrobial stewardship initiatives can apply these data to optimize the use of antibiotics in transplant patients.