Recently, defaults have become celebrated as a low-cost, easy-to-implement nudge for promoting positive outcomes at both the individual and societal levels. In the present research, we conducted a large-scale field experiment (N = 32,508) in an educational context to test the effectiveness of a default intervention in promoting participation in a potentially beneficial achievement test. We found that a default manipulation increased the rate at which high school students registered to take the test but failed to produce a significant change in students’ actual rate of test-taking. These results join past literature documenting robust effects of default framings on initial choice but marked variability in the extent to which those choices ultimately translate to real-world outcomes. We suggest that this variability is attributable to differences in choice-to-outcome pathways – the extent to which the initial choice is causally determinative of the outcome.
Hoarding disorder (HD) can be understood through the cognitive behavioural model in the context of vulnerability factors (for example, personality traits, co-morbidities, traumatic life events) and beliefs about possessions (for example, identity, emotional attachment, memory, utility). Less is known about the strength of these hypothesised beliefs, or how they interact within the hoarding population, with researchers suggesting that specifying beliefs would improve treatment outcomes.
Aim:
The current study explored beliefs in HD, utilising Q-methodology to examine both the categories of beliefs and the interactions between them. Moreover, Q-methodology allowed for comparison of the individuals endorsing specific categories of beliefs.
Method:
A comprehensive list of beliefs about possessions was developed. Thirty-two adults with clinically significant levels of HD completed a Q-sort task, alongside measures of proposed vulnerabilities, including co-morbidity, trauma and attachment style.
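Q-factor analysis inverts conventional factor analysis: it correlates and factors participants rather than items, so each extracted factor is a group of people who sorted the statements similarly. Below is a minimal sketch of that by-person extraction in Python, using a placeholder Q-sort matrix and an unrotated principal-components step; the study's actual software, extraction method, and rotation settings are not reported here, so everything in the sketch is an assumption.

```python
# A minimal sketch of Q-factor analysis: factor the person-by-person
# correlation matrix rather than the item-by-item one. The Q-sort grid
# values and the 50-statement set size are placeholders.
import numpy as np

rng = np.random.default_rng(0)
q_sorts = rng.integers(-4, 5, size=(32, 50)).astype(float)  # 32 participants x 50 statements

# Correlate participants with each other across statements.
person_corr = np.corrcoef(q_sorts)                # 32 x 32

# Principal-components extraction: top eigenvectors of the correlation
# matrix, scaled to factor loadings (rotation omitted for brevity).
eigvals, eigvecs = np.linalg.eigh(person_corr)
top = np.argsort(eigvals)[::-1][:4]               # four profiles, as reported
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

# Each participant is conventionally assigned to the factor on which
# they load most strongly, yielding the belief profiles.
profiles = np.abs(loadings).argmax(axis=1)
print(profiles)
```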
Results:
Q-factor analysis produced four profiles consisting of groups of participants who endorsed the same beliefs and had shared characteristics: (1) ‘Expression of identity’, (2) ‘Responsibility and morality’, (3) ‘Stability and predictability’, and (4) ‘Objects as emotional and meaningful beings’.
Discussion:
The profiles were distinguished by different categories of beliefs and co-morbid symptoms, suggesting that more targeted assessment tools and interventions would be beneficial to account for this heterogeneity within the clinical population. In particular, beliefs about identity and self-concept formed the largest profile, and beliefs about stability and predictability introduced a novel category of beliefs.
Extreme heat and wildfires have health implications for everyone; however, minority and low-income populations are disproportionately negatively affected due to generations of social inequities and discriminatory practices. Indigenous people in Canada are at higher risk of many chronic respiratory diseases, as well as other non-communicable diseases and hospitalization, compared to the general population. Wildfires occurring during the COVID-19 pandemic have demonstrated how disruptive compounding disasters can be, putting minority populations such as First Nations, Métis, and Inuit peoples at increased risk and decreased priority. Going forward, if the necessary proactive mitigation and preparedness steps are not undertaken, the ability to attenuate health inequity in the Indigenous community by building resiliency to wildfire disasters will be significantly hampered.
Over the past five decades, Eastern Europe has seen relatively little in terms of terrorist attacks. The recent escalation of the Russo-Ukrainian conflict has, however, placed a new spotlight on the region, and new questions and concerns around war, conflict, insurgency, and terrorism are being posed. The Russian invasion and extensive combat operations, the largest in Europe since World War II, are occurring across Ukraine, where there are 15 active nuclear reactors, not including the Chernobyl site, that are vulnerable to attack or sabotage. In addition, Eastern Europe has been heavily affected by COVID-19, exposing broad vulnerabilities in already fragile health care systems. This raises concerns over the ability of Eastern European health care institutions to absorb surge and manage terrorist attacks or acts of violent extremism. This study provides an epidemiological description of all terrorism-related fatalities and injuries in Eastern Europe sustained from 1970 to 2019.
Method:
Data collection was performed using a retrospective database search of the Global Terrorism Database (GTD). The GTD was searched using its internal database functions for all terrorism events that occurred in Eastern Europe from January 1, 1970 through December 31, 2019. Data for 2020 and 2021 were not yet available at the time of this study. Primary weapon type, country where the incident occurred, and numbers of deaths and injuries were collated. Results were exported into an Excel spreadsheet (Microsoft Corp.; Redmond, Washington USA) for analysis.
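As an illustration of the collation step, the publicly distributed GTD export can be filtered and tallied in a few lines of pandas. The column names below follow the GTD codebook (iyear, region_txt, country_txt, weaptype1_txt, nkill, nwound); the file path is hypothetical, and the authors used the GTD's own search interface rather than a script like this.

```python
# A minimal sketch, assuming the GTD export is available locally as a CSV.
import pandas as pd

gtd = pd.read_csv("globalterrorismdb.csv", low_memory=False)  # hypothetical path

# Restrict to Eastern Europe, January 1, 1970 through December 31, 2019.
ee = gtd[(gtd["region_txt"] == "Eastern Europe") & gtd["iyear"].between(1970, 2019)]

print(len(ee))                                    # total terrorism-related events
print(ee["nkill"].sum(), ee["nwound"].sum())      # total deaths and injuries
print(ee["weaptype1_txt"].value_counts(normalize=True).head())  # weapon-type shares
print(ee["country_txt"].value_counts(normalize=True).head())    # country shares
```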
Results:
There were 3,901 terrorism-related events in Eastern Europe between 1970 and 2019, inclusive. In total, the attacks resulted in 5,391 deaths and 9,538 persons injured. Explosives were the most commonly used weapon type, accounting for 59.2% of all attacks in the region, followed by firearms at 27.6%.
Conclusion:
From 1970 through 2019, a total of 3,901 terrorist attacks occurred in Eastern Europe, inflicting 5,391 deaths and 9,538 injuries. Of these, 72.3% occurred in Russia and Ukraine. Terrorist attacks have declined sharply since their peak in 2014, but the overall trend in attacks since the 1970s is upward.
The purpose of this document is to highlight practical recommendations to assist acute care hospitals to prioritize and implement strategies to prevent ventilator-associated pneumonia (VAP), ventilator-associated events (VAE), and non-ventilator hospital-acquired pneumonia (NV-HAP) in adults, children, and neonates. This document updates the Strategies to Prevent Ventilator-Associated Pneumonia in Acute Care Hospitals published in 2014. This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA) and is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America, the American Hospital Association, the Association for Professionals in Infection Control and Epidemiology, and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise.
OBJECTIVES/GOALS: Our overall goal is to identify the processes used by the human visual system to encode visual stimuli into perceptual representations. In this project, our objective is (i) to collect a dataset of human neural activity in response to 1000 naturalistic color images and (ii) to determine how image parameters drive different parts of the human brain. METHODS/STUDY POPULATION: We recorded iEEG data in 4 human subjects who had been implanted for epilepsy monitoring. Each subject was presented with 10 sets of 100 naturalistic stimuli, taken from the Natural Scenes Dataset (Allen et al., 2021), on a screen for 1 second each with 1-second rest intervals between stimuli. The subjects were instructed to fixate on a red dot at the center of the screen and, to encourage attentiveness, were prompted at the end of each set to recall whether they had seen 3 additional test stimuli. We identified significant neural responses at each electrode by comparing evoked potentials and high-frequency power changes during each stimulus vs. rest. Electrodes with significant responses were then mapped to anatomic locations in each subject's brain and then collectively to a standard brain. RESULTS/ANTICIPATED RESULTS: The natural image set elicited significant evoked potentials and high-frequency responses at electrodes in each subject. Response latencies, from 80 to 300 ms after stimulus onset, traced the evolution of visual processing along the visual pathways, through key sites such as the early visual cortex, ventral temporal cortex, intraparietal sulcus, and frontal eye field. These responses differed significantly from those elicited by simple patterns, which drove early visual cortex but engaged later regions far less. DISCUSSION/SIGNIFICANCE: These data show that the human brain responds differently to more complex images. Determining the human brain's response to naturalistic images is essential for encoding models that describe the processing in the human visual system. These models may advance future efforts toward electrical neurostimulation therapies, such as those for restoring vision.
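The stimulus-versus-rest comparison described above can be sketched for a single electrode as follows. This is a hedged illustration, not the authors' pipeline: the sampling rate, the 70–170 Hz band, the Hilbert-envelope power estimate, and the paired Wilcoxon test are all assumptions, and the data array is a random placeholder.

```python
# A minimal sketch: compare high-frequency power during stimulus vs. rest
# at one electrode, given trials with 1 s stimulus followed by 1 s rest.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import wilcoxon

fs = 1000                                    # sampling rate in Hz (assumed)
trials = np.random.randn(100, 2 * fs)        # placeholder: trials x samples

# Band-pass in a broadband high-frequency range, then take analytic-amplitude power.
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
power = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1)) ** 2

stim = power[:, :fs].mean(axis=1)            # mean power in the stimulus window
rest = power[:, fs:].mean(axis=1)            # mean power in the rest window
stat, p = wilcoxon(stim, rest)               # paired nonparametric comparison
print(p)                                     # small p suggests a responsive electrode
```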
OBJECTIVES/GOALS: We aim to determine whether non-neuronal, non-synaptic glutamate signaling mechanisms can be targeted to produce highly specific, narrow changes in brain function that would benefit CNS disorders. To do this, we investigated cognitive changes produced by manipulating the activity of the astrocytic glutamate release mechanism system xc-. METHODS/STUDY POPULATION: System xc- (Sxc) activity was eliminated by mutating the gene Slc7a11 through pronuclear injection of zinc-finger nucleases into Sprague Dawley rat embryos to create a line of rats lacking Sxc (MSxc rats). To confirm the lack of Sxc activity, we verified that tissue from MSxc rats completely lacked xCT, the regulatory subunit of Sxc encoded by Slc7a11. We also verified that astrocyte cultures generated from MSxc tissue lacked cystine-evoked glutamate release. Next, we measured development (body weight), CNS regulation of metabolism, and other indicators of generalized, non-specific brain function, as well as behaviors reliant on executive function, such as cognitive flexibility, impulse control, decision-making, and response inhibition. RESULTS/ANTICIPATED RESULTS: Eliminating Sxc was not lethal and did not impair development or produce the widespread changes in brain function commonly observed when other glutamate mechanisms are deleted. MSxc rats did not differ from wildtype in growth rate; central regulation of metabolism, as reflected by absolute or diurnal changes in core body temperature; locomotor activity in a familiar or novel environment; or simple forms of cognition such as novel object recognition and operant responding (food- and cocaine-reinforced). In contrast, behaviors that rely on executive function were impaired: MSxc rats displayed deficits in cocaine reinstatement and attentional set-shifting. We anticipate that MSxc rats will also show impairments in decision-making in the rat gambling task and in response inhibition in the stop-signal reaction time task. DISCUSSION/SIGNIFICANCE: Eliminating Sxc activity in rats produced deficits in behaviors reliant on executive function without impacting development or simple brain function. These results highlight the potential of targeting Sxc to enhance cognition without generating the therapeutically limiting adverse effects that result from non-specific changes in brain function.
Attention-deficit/hyperactivity disorder (ADHD) is a risk factor for concussion that affects concussion diagnosis and recovery. Less is known about how ADHD and repetitive subconcussive head impacts jointly relate to neurocognitive and behavioral outcomes. This study evaluated the role of ADHD as a moderator of the association of repetitive head impacts with neurocognitive test performance and behavioral concussion symptoms over the course of an athletic season.
Method:
Study participants included 284 male athletes aged 13–18 years who participated in high school football. Parents completed the Strengths and Weaknesses of ADHD Symptoms and Normal Behavior (SWAN) ratings about their teen athlete before the season began. Head impacts were measured using an accelerometer worn during all practices and games. Athletes and parents completed behavioral ratings of concussion symptoms and the Attention Network Task (ANT), Digital Trail Making Task (dTMT), and Cued Task Switching Task at pre- and post-season.
Results:
Mixed model analyses indicated that neither head impacts nor ADHD symptoms were associated with post-season athlete- or parent-reported concussion symptom ratings or neurocognitive task performance. Moreover, no relationships between head impact exposure and neurocognitive or behavioral outcomes emerged when severity of pre-season ADHD symptoms was included as a moderator.
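A moderation analysis of this general shape can be sketched with statsmodels; the variable names, data layout, and random-effects structure below are hypothetical, since the abstract does not report the exact model specification.

```python
# A minimal sketch of testing pre-season ADHD symptoms (SWAN) as a moderator
# of the head-impact/outcome association in a linear mixed model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("football_season.csv")  # hypothetical long-format data

# Fixed effects: impact exposure, SWAN ADHD ratings, their interaction (the
# moderation term), and timepoint; random intercepts per athlete handle the
# repeated pre-/post-season measurements.
model = smf.mixedlm("symptoms ~ impacts * adhd_swan + timepoint",
                    data=df, groups=df["athlete_id"])
result = model.fit()
print(result.summary())  # a non-significant impacts:adhd_swan term implies no moderation
```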
Conclusion:
Athletes’ pre-season ADHD symptoms do not appear to influence behavioral or neurocognitive outcomes following a single season of competitive football. Results are interpreted in light of several study limitations (e.g., single season, assessment of constructs) that may have impacted this study’s pattern of largely null results.
To assess preventability of hospital-onset bacteremia and fungemia (HOB), we developed and evaluated a structured rating guide accounting for intrinsic patient and extrinsic healthcare-related risks.
Design:
The HOB preventability rating guide was compared against a reference standard expert panel.
Participants:
A 10-member panel of clinical experts was assembled as the reference standard for preventability assessment, and 2 physician reviewers applied the rating guide for comparison.
Methods:
The expert panel independently rated 82 hypothetical HOB scenarios using a 6-point Likert scale collapsed into 3 categories: preventable, uncertain, or not preventable. Consensus was defined as concurrence on the same category among ≥70% of experts. Scenarios without consensus were deliberated and then rated in a second round.
Two reviewers independently applied the rating guide to adjudicate the same 82 scenarios in 2 rounds, with interim revisions. Interrater reliability was evaluated using the κ (kappa) statistic.
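For reference, the agreement statistics reported below can be reproduced in form with a few lines of Python; the rating vectors here are illustrative placeholders rather than the study's adjudications, and sklearn's cohen_kappa_score is one standard implementation of the unweighted κ.

```python
# A minimal sketch: raw percent agreement and Cohen's kappa for two reviewers
# rating the same 82 scenarios into three categories.
from sklearn.metrics import cohen_kappa_score

reviewer1 = ["preventable", "uncertain", "not preventable"] * 27 + ["uncertain"]
reviewer2 = ["preventable", "uncertain", "uncertain"] * 27 + ["uncertain"]

agreement = sum(a == b for a, b in zip(reviewer1, reviewer2)) / len(reviewer1)
kappa = cohen_kappa_score(reviewer1, reviewer2)  # chance-corrected agreement
print(round(agreement, 2), round(kappa, 2))
```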
Results:
Expert panel consensus criteria were met for 52 scenarios (63%) after 2 rounds.
After 2 rounds, guide-based rating matched expert panel consensus in 40 of 52 (77%) and 39 of 52 (75%) cases for reviewers 1 and 2, respectively. Agreement rates between the 2 reviewers were 84% overall (κ, 0.76; 95% confidence interval [CI], 0.64–0.88) and 87% (κ, 0.79; 95% CI, 0.65–0.94) for the 52 scenarios with expert consensus.
Conclusions:
Preventability ratings of HOB scenarios by 2 reviewers using a rating guide matched expert consensus in most cases with moderately high interrater reliability. Although diversity of expert opinions and uncertainty of preventability merit further exploration, this is a step toward standardized assessment of HOB preventability.
The effect of sample preparation on a pre-aged Al–Mg–Si–Cu alloy has been evaluated using atom probe tomography. Three methods of preparation were investigated: electropolishing (control), Ga⁺ focused ion beam (FIB) milling, and Xe⁺ plasma FIB (PFIB) milling. Ga⁺-based FIB preparation was shown to introduce a significant amount of Ga contamination throughout the reconstructed sample (≈1.3 at%), while no Xe contamination was detected in the PFIB-prepared sample. Nevertheless, a significantly higher cluster density was observed in the Xe⁺ PFIB-prepared sample (≈25.0 × 10²³ m⁻³) as compared to the traditionally prepared electropolished sample (≈3.2 × 10²³ m⁻³) and the Ga⁺ FIB sample (≈5.6 × 10²³ m⁻³). Hence, the absence of the ion-milling species does not necessarily mean an absence of specimen-preparation defects. Specifically, the FIB- and PFIB-prepared samples had more Si-rich clusters than the electropolished samples, which is indicative of vacancy stabilization via solute clustering.
A crucial factor in how we perceive social groups involves the signals and cues they emit. Groups signal various properties of their constitution through coordinated behaviors across sensory modalities, influencing receivers’ judgments of the group and subsequent interactions. We argue that group communication is a necessary component of a comprehensive computational theory of social groups.
The COVID-19 pandemic exacerbated gender disparities in some academic disciplines. This study examined the association of the pandemic with gender authorship disparities in clinical neuropsychology (CN) journals.
Method:
Author bylines of 1,018 initial manuscript submissions to four major CN journals from March 15 through September 15 of both 2019 and 2020 were coded for binary gender. Additionally, authorship of 40 articles published on pandemic-related topics (COVID-19, teleneuropsychology) across nine CN journals was coded for binary gender.
Results:
Initial submissions to these four CN journals increased during the pandemic (+27.2%), with comparable increases in total number of authors coded as either women (+23.0%) or men (+25.4%). Neither the average percentage of women on manuscript bylines nor the proportion of women who were lead and/or corresponding authors differed significantly across time. Moreover, the representation of women as authors of pandemic-related articles did not differ from expected frequencies in the field.
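The comparison against expected frequencies admits a simple goodness-of-fit sketch; both counts and the base rate below are placeholders, since the abstract does not report them.

```python
# A minimal sketch: test whether the observed share of women authors on
# pandemic-related articles departs from an assumed field-wide base rate.
from scipy.stats import binomtest

women_authors, total_authors = 90, 180   # hypothetical byline counts
field_base_rate = 0.50                   # hypothetical expected proportion

result = binomtest(women_authors, total_authors, field_base_rate)
print(result.pvalue)  # p > .05: no detectable departure from expectation
```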
Conclusions:
Findings suggest that representation of women as authors of peer-reviewed manuscript submissions to some CN journals did not change during the initial months of the COVID-19 pandemic. Future studies might examine how risk and protective factors may have influenced individual differences in scientific productivity during the pandemic.
To characterize and compare SARS-CoV-2–specific immune responses in plasma and gingival crevicular fluid (GCF) from nursing home residents during and after natural infection.
Design:
Prospective cohort
Setting:
Nursing home
Participants:
SARS-CoV-2–infected nursing home residents
Methods:
A convenience sample of 14 SARS-CoV-2–infected nursing home residents, enrolled 4–13 days after real-time reverse transcription polymerase chain reaction diagnosis, was followed for 42 days. After diagnosis, plasma SARS-CoV-2–specific pan-immunoglobulin (Ig), IgG, IgA, IgM, and neutralizing antibodies were measured at 5 timepoints, and GCF SARS-CoV-2–specific IgG and IgA were measured at 4 timepoints.
Results:
All participants demonstrated immune responses to SARS-CoV-2 infection. Among 12 phlebotomized participants, plasma was positive for pan-Ig and IgG in all 12, neutralizing antibodies in 11, IgM in 10, and IgA in 9. Among 14 participants with GCF specimens, GCF was positive for IgG in 13 and IgA in 12. Immunoglobulin responses in plasma and GCF had similar kinetics; the median time to peak antibody response was similar across specimen types (4 weeks for IgG; 3 weeks for IgA). Participants with pan-Ig, IgG, and IgA detected in plasma, and IgG detected in GCF, remained positive through the end of this evaluation, 46–55 days post-diagnosis. All participants were viral culture negative by the first detection of antibodies.
Conclusions:
Nursing home residents had detectable SARS-CoV-2 antibodies in plasma and GCF after infection. Kinetics of antibodies detected in GCF mirrored those from plasma. Non-invasive GCF may be useful for detecting and monitoring immunologic responses in populations unable or unwilling to be phlebotomized.
Several social determinants of health (SDoH) have been associated with the onset of major depressive disorder (MDD). However, prior studies largely focused on individual SDoH and thus less is known about the relative importance (RI) of SDoH variables, especially in older adults. Given that risk factors for MDD may differ across the lifespan, we aimed to identify the SDoH that was most strongly related to newly diagnosed MDD in a cohort of older adults.
Methods
We used self-reported health-related survey data from 41 174 older adults (50–89 years, median age = 67 years) who participated in the Mayo Clinic Biobank, and linked ICD codes for MDD in the participants' electronic health records. Participants with a history of clinically documented or self-reported MDD prior to survey completion were excluded from analysis (N = 10 938, 27%). We used Cox proportional hazards models with a gradient boosting machine approach to quantify the RI of 30 pre-selected SDoH variables on the risk of future MDD diagnosis.
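One way to realize a gradient-boosted Cox analysis of this shape is with scikit-survival, sketched below. The package choice, column names, and data file are assumptions; the RI values in the Results come from the authors' own pipeline, not from this sketch.

```python
# A minimal sketch: rank SDoH predictors of time to MDD diagnosis by their
# importances in a gradient-boosted Cox proportional hazards model.
import pandas as pd
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.util import Surv

df = pd.read_csv("biobank_sdoh.csv")  # hypothetical: 30 SDoH columns + outcome
sdoh_cols = [c for c in df.columns if c not in ("mdd_event", "followup_years")]

X = df[sdoh_cols]
y = Surv.from_arrays(event=df["mdd_event"].astype(bool),
                     time=df["followup_years"])

model = GradientBoostingSurvivalAnalysis(loss="coxph", random_state=0).fit(X, y)

# Feature importances act as relative-importance scores across the predictors.
for name, imp in sorted(zip(sdoh_cols, model.feature_importances_),
                        key=lambda t: -t[1])[:5]:
    print(name, round(imp, 3))
```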
Results
Following biobank enrollment, 2073 older participants were diagnosed with MDD during the follow-up period (median duration = 6.7 years). The most influential SDoH was perceived level of social activity (RI = 0.17). Lower level of social activity was associated with a higher risk of MDD [hazard ratio = 2.27 (95% CI 2.00–2.50) for highest v. lowest level].
Conclusion
Across a range of SDoH variables, perceived level of social activity is most strongly related to MDD in older adults. Monitoring changes in the level of social activity may help identify older adults at an increased risk of MDD.
From the very outset, Darwin’s extensive use of metaphor in the Origin has proved controversial, with some people thinking Darwin was thereby committed to ascribing intentions or even consciousness to nature, and others fearing that readers would be misled into thinking that he was. Some have also argued (e.g. Gillian Beer) that Darwin should be regarded as much a poet as a scientist. We argue that, on the contrary, his metaphors have a substantively scientific role and do real work in the development of his argument. Firstly, as Darwin himself stresses, ‘such metaphorical expressions… are almost necessary for brevity’. Secondly, they provide a method for forming new concepts (as in the case of ‘struggle’). Thirdly, and most significantly, the use of metaphor enables Darwin to explore further the analogy between natural selection (NS) and artificial selection (AS) and directly compare the achievements of human breeding with those of the struggle for existence.