Research has demonstrated that chronic stress exposure early in development can lead to detrimental alterations in the orbitofrontal cortex (OFC)–amygdala circuit. However, the majority of this research uses functional neuroimaging methods, and thus the extent to which childhood trauma corresponds to morphometric alterations in this limbic-cortical network has not yet been investigated. This study had two primary objectives: (i) to test whether the anatomical association between OFC and amygdala volumes differed among adults as a function of exposure to chronic childhood assaultive trauma and (ii) to test how these environment-by-neurobiology effects relate to pathological personality traits.
Participants were 137 ethnically diverse adults (48.1% female) recruited from the community who completed a clinical diagnostic interview, a self-report measure of pathological personality traits, and anatomical MRI scans.
Findings revealed that childhood trauma moderated bilateral OFC–amygdala volumetric associations. Specifically, adults with childhood trauma exposure showed a positive association between medial OFC volume and amygdalar volume, whereas adults with no childhood trauma exposure showed the negative OFC–amygdala structural association observed in prior research with healthy samples. Examination of the translational relevance of trauma-related alterations in OFC–amygdala volumetric associations for disordered personality traits revealed that trauma exposure moderated the association of OFC volume with antagonistic and disinhibited phenotypes, traits characteristic of Cluster B personality disorders.
The OFC–amygdala circuit is a potential anatomical pathway through which early traumatic experiences perpetuate emotional dysregulation into adulthood and confer risk for personality pathology. Results provide novel evidence of divergent neuroanatomical pathways to similar personality phenotypes depending on early trauma exposure.
To conduct a pilot study implementing combined genomic and epidemiologic surveillance for hospital-acquired multidrug-resistant organisms (MDROs) to predict transmission between patients and to estimate the local burden of MDRO transmission.
Pilot prospective multicenter surveillance study.
The study was conducted in 8 university hospitals (2,800 beds total) in Melbourne, Australia (population 4.8 million), including 4 acute-care, 1 specialist cancer care, and 3 subacute-care hospitals.
All clinical and screening isolates from hospital inpatients (April 24 to June 18, 2017) were collected for 6 MDROs: vanA VRE, MRSA, ESBL Escherichia coli (ESBL-Ec) and Klebsiella pneumoniae (ESBL-Kp), and carbapenem-resistant Pseudomonas aeruginosa (CRPa) and Acinetobacter baumannii (CRAb). Isolates were analyzed and reported as routine by hospital laboratories, underwent whole-genome sequencing at the central laboratory, and were analyzed using open-source bioinformatic tools. MDRO burden and transmission were assessed using combined genomic and epidemiologic data.
In total, 408 isolates were collected from 358 patients; 47.5% were screening isolates. ESBL-Ec was most common (52.5%), then MRSA (21.6%), vanA VRE (15.7%), and ESBL-Kp (7.6%). Most MDROs (88.3%) were isolated from patients with recent healthcare exposure.
Combining genomics and epidemiology identified that at least 27.1% of MDROs were likely acquired in a hospital; most of these transmission events would not have been detected without genomics. The highest proportion of transmission occurred with vanA VRE (88.4% of patients).
Genomic and epidemiologic data from multiple institutions can feasibly be combined prospectively, providing substantial insights into the burden and distribution of MDROs, including in-hospital transmission. This analysis enables infection control teams to target interventions more effectively.
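The core logic of combining genomic relatedness with epidemiologic overlap can be sketched in a few lines of Python. This is an illustrative toy, not the study's actual pipeline: the 25-SNP cutoff, the data structures, and the use of overlapping admission dates as the epidemiologic link are all assumptions made for the example; real analyses calibrate thresholds per organism and use richer ward-level movement data.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Isolate:
    patient: str
    organism: str
    admit: date
    discharge: date

def snp_close(dist: int, threshold: int = 25) -> bool:
    # Hypothetical cutoff; real studies calibrate this per organism.
    return dist <= threshold

def stays_overlap(a: Isolate, b: Isolate) -> bool:
    # Admissions overlap if each starts before the other ends.
    return a.admit <= b.discharge and b.admit <= a.discharge

def putative_transmissions(isolates, snp_dist):
    """Flag patient pairs that are genomically close AND epidemiologically linked."""
    links = []
    for i in range(len(isolates)):
        for j in range(i + 1, len(isolates)):
            a, b = isolates[i], isolates[j]
            if a.organism != b.organism:
                continue
            if snp_close(snp_dist[(i, j)]) and stays_overlap(a, b):
                links.append((a.patient, b.patient))
    return links
```

A pair is flagged only when both signals agree, which mirrors why genomics alone (or epidemiology alone) would over-call or miss the in-hospital transmission events described above.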
Commonsense morality seems to feature both agent-neutral and agent-relative elements. For a long time, the core debate between consequentialists and deontologists was which of these features should take center stage. With the introduction of the consequentializing project and agent-relative value, however, agent-neutrality has been left behind. While I likewise favor an agent-relative view, agent-neutral views capture important features of commonsense morality.
This article investigates whether an agent-relative view can maintain what is attractive about typical agent-neutral views. In particular, I argue that the agent-relative reasons-wielding deontologist is ultimately able to capture those features ordinarily associated with agent-neutral views, while the agent-relative value-wielding consequentialist is left with a dilemma. The consequentializer either succumbs to the concerns of her agent-neutral opponents or else abandons the distinctive and attractive features of her view. Either way, I conclude that agent-relative value is best left behind.
Telemedicine visits are an increasingly popular method of care for mild infectious complaints, including uncomplicated urinary tract infections (UTIs), and they are an important target for antimicrobial stewardship programs (ASPs) to evaluate quality of prescribing. In this study, we compared antimicrobial prescribing in a primary care network for uncomplicated UTIs treated through virtual visits and at in-office visits.
Retrospective cohort study comparing guideline-concordant antibiotic prescribing for uncomplicated UTI between virtual visits and office visits.
Primary care network composed of 44 outpatient sites and a single virtual visit platform.
Adult female patients diagnosed with a UTI between January 1 and December 31, 2018.
Virtual visit prescribing was compared to office visit prescribing, including agent, duration, and patient outcomes. The health system ASP provides annual education to all outpatient providers regarding local antibiogram trends and prescribing guidelines. Guideline-concordant therapy was assessed based on the network’s ASP guidelines.
In total, 350 patients were included, with 175 per group. Patients treated for a UTI through a virtual visit were more likely to receive a first-line antibiotic agent (74.9% vs 59.4%; P = .002) and guideline-concordant duration (100% vs 53.1%; P < .001). Patients treated through virtual visits were also less likely to have a urinalysis (0% vs 97.1%; P < .001) or urine culture (0% vs 73.1%; P < .001) ordered and were less likely to revisit within 7 days (5.1% vs 18.9%; P < .001).
UTI care through a virtual visit was associated with more appropriate antimicrobial prescribing compared to office visits and decreased utilization of diagnostic and follow-up resources.
Alzheimer’s disease (AD) studies are increasingly targeting earlier (pre)clinical populations, in which the expected degree of observable cognitive decline over a certain time interval is reduced as compared to the dementia stage. Consequently, endpoints to capture early cognitive changes require refinement. We aimed to determine the sensitivity to decline of widely applied neuropsychological tests at different clinical stages of AD as outlined in the National Institute on Aging – Alzheimer’s Association (NIA-AA) research framework.
Amyloid-positive individuals (as determined by positron emission tomography or cerebrospinal fluid) with longitudinal neuropsychological assessments available were included from four well-defined study cohorts and subsequently classified among the NIA-AA stages. For each stage, we investigated the sensitivity to decline of 17 individual neuropsychological tests using linear mixed models.
1103 participants (age = 70.5 ± 8.7, 47% female) were included: n = 120 Stage 1, n = 206 Stage 2, n = 467 Stage 3, and n = 309 Stage 4. Neuropsychological tests were differentially sensitive to decline across stages. For example, Category Fluency captured significant 1-year decline as early as Stage 1 (β = −.58, p < .001). Word List Delayed Recall (β = −.22, p < .05) and the Trail Making Test (β = 6.2, p < .05) became sensitive to 1-year decline in Stage 2, whereas the Mini-Mental State Examination did not capture 1-year decline until Stages 3 (β = −1.13, p < .001) and 4 (β = −2.23, p < .001).
We demonstrated that commonly used neuropsychological tests differ in their ability to capture decline depending on clinical stage within the AD continuum (preclinical to dementia). This implies that stage-specific cognitive endpoints are needed to accurately assess disease progression and increase the chance of successful treatment evaluation in AD.
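The study quantifies sensitivity to decline as the estimated annual change (β) from linear mixed models. As a simplified stand-in, the sketch below computes an ordinary least-squares slope of score on time for a single participant; a real analysis would pool participants and model within-person correlation (e.g. random intercepts and slopes), which plain OLS ignores. The numbers in the usage example are invented for illustration.

```python
def ols_slope(times, scores):
    """Least-squares slope of score on time.

    If `times` is in years, the slope is the estimated annual change,
    analogous in spirit (but not method) to the mixed-model betas above.
    """
    n = len(times)
    mean_t = sum(times) / n
    mean_s = sum(scores) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in zip(times, scores))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Hypothetical participant: a test score falling by one point per year.
annual_change = ols_slope([0, 1, 2], [30, 29, 28])  # negative slope = decline
```

A per-participant slope like this is the quantity a mixed model estimates jointly across the whole cohort while borrowing strength between individuals.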
This study investigated the latent factor structure of the NIH Toolbox Cognition Battery (NIHTB-CB) and its measurement invariance across clinical diagnosis and key demographic variables including sex, race/ethnicity, age, and education for a typical Alzheimer’s disease (AD) research sample.
The NIHTB-CB iPad English version, consisting of 7 tests, was administered to 411 participants aged 45–94 years with a clinical diagnosis of cognitively unimpaired, dementia, mild cognitive impairment (MCI), or impaired not MCI. The factor structure of the whole sample was first examined with exploratory factor analysis (EFA) and further refined using confirmatory factor analysis (CFA). Two groups were classified for each variable (diagnosis or demographic factors). The confirmed factor model was next tested for each group with CFA. If the factor structure was the same between the groups, measurement invariance was then tested using a hierarchical series of nested two-group CFA models.
A two-factor model capturing fluid cognition (executive function, processing speed, and memory) versus crystallized cognition (language) fit well for the whole sample and for each group except those aged < 65. This model generally had measurement invariance across sex, race/ethnicity, and education, and partial invariance across diagnosis. For individuals aged < 65, the language factor remained intact, while fluid cognition separated into two factors: (1) executive function/processing speed and (2) memory.
The findings mostly supported the utility of the battery in AD research, yet revealed challenges in measuring memory for AD participants and longitudinal change in fluid cognition.
Background: A prolonged outbreak of carbapenemase-producing Serratia marcescens (CPSM) was identified in our quaternary healthcare center over a 2-year period from 2015 through 2017. A reservoir of IMP-4–producing S. marcescens in sink drains of clinical hand basins (CHB) was implicated in propagating transmission, supported by evidence from whole-genome sequencing (WGS). We assessed the impact of a manual bioburden reduction intervention on further transmission of CPSM.
Methods: Environmental sampling of frequently touched wet and dry areas around CPSM clinical cases was undertaken to identify potential reservoirs and transmission pathways. After identifying CHB as a source of CPSM, a widespread annual CHB cleaning intervention involving manual scrubbing of sink drains and the proximal pipes was implemented. Pre- and postintervention point prevalence surveys (PPS) of CHB drains were performed to assess for CPSM colonization. Surveillance for subsequent transmission was conducted through weekly screening of patients, annual screening of CHB in transmission areas, and 6-monthly whole-hospital PPS of patients. All CPSM isolates were assessed by WGS.
Results: In total, 6 patients were newly identified with CPSM from 2015 to 2017 (4.3 transmission events per 100,000 surveillance bed days [SBD]; 95% CI, 1.6–9.4). All clinical CPSM isolates were linked to CHB isolates by WGS. The CHB cleaning intervention reduced CHB colonization with CPSM in transmission areas from 72% to 28% (ARR, 0.44; 95% CI, 0.25–0.63). A single further clinical case of CPSM linked to the CHB isolates was detected over 2 years of surveillance from 2017 to 2019 following implementation of the annual CHB cleaning program (0.7 transmissions per 100,000 SBD; 95% CI, 0.0–3.9). No transmissions were linked to undertaking the cleaning intervention.
Conclusions: A simple intervention targeted at reducing the biological burden of CPSM in CHB drains at regular intervals was effective in preventing transmission of carbapenemase-producing Enterobacterales from the hospital environment to patients over a prolonged period of intensive surveillance. These findings highlight the importance of detailed cleaning for controlling the spread of multidrug-resistant organisms from healthcare environments.
Cognitive behavior therapy (CBT) is effective for most patients with social anxiety disorder (SAD), but a substantial proportion fail to remit. Experimental and clinical research suggests that enhancing CBT using imagery-based techniques could improve outcomes. It was hypothesized that imagery-enhanced CBT (IE-CBT) would be superior to verbally based CBT (VB-CBT) on pre-registered outcomes.
A randomized controlled trial of IE-CBT v. VB-CBT for social anxiety was completed in a community mental health clinic setting. Participants were randomized to IE (n = 53) or VB (n = 54) CBT and completed twelve 2-hour weekly group sessions, with 1-month (primary end point) and 6-month follow-up assessments.
Intention-to-treat analyses showed very large within-treatment effect sizes on social interaction anxiety at all time points (ds = 2.09–2.62), with no between-treatment differences on this outcome or on clinician-rated severity [1-month OR = 1.45 (0.45, 4.62), p = 0.53; 6-month OR = 1.31 (0.42, 4.08), p = 0.65], SAD remission (1-month: IE = 61.04%, VB = 55.09%, p = 0.59; 6-month: IE = 58.73%, VB = 61.89%, p = 0.77), or secondary outcomes. Three adverse events were noted (substance abuse, n = 1 in IE-CBT; temporary increase in suicide risk, n = 1 in each condition, with one participant withdrawn at 1-month follow-up).
Group IE-CBT and VB-CBT were safe and there were no significant differences in outcomes. Both treatments were associated with very large within-group effect sizes and the majority of patients remitted following treatment.
Over the course of the 2014/15 and 2015/16 austral summer seasons, the South Pole Ice Core project recovered a 1751 m deep ice core at the South Pole. This core provided a high-resolution record of paleoclimate conditions in East Antarctica during the Holocene and late Pleistocene. The drilling and core processing were completed using the new US Intermediate Depth Drill system, which was designed and built by the US Ice Drilling Program at the University of Wisconsin–Madison. In this paper, we present and discuss the setup, operation, and performance of the drill system.
To evaluate the effect of the burden of Staphylococcus aureus colonization of nursing home residents on the risk of S. aureus transmission to healthcare worker (HCW) gowns and gloves.
Multicenter prospective cohort study.
Setting and participants:
Residents and HCWs from 13 community-based nursing homes in Maryland and Michigan.
Residents were cultured for S. aureus at the anterior nares and perianal skin. The S. aureus burden was estimated by quantitative polymerase chain reaction detecting the nuc gene. HCWs wore gowns and gloves during usual care activities; gowns and gloves were swabbed and then cultured for the presence of S. aureus.
In total, 403 residents were enrolled; 169 were colonized with methicillin-resistant S. aureus (MRSA) or methicillin-sensitive S. aureus (MSSA) and comprised the study population; 232 were not colonized and thus were excluded from this analysis; and 2 were withdrawn prior to being swabbed. After multivariable analysis, perianal colonization with S. aureus conferred the greatest odds for transmission to HCW gowns and gloves, and the odds increased with increasing burden of colonization: adjusted odds ratio (aOR), 2.1 (95% CI, 1.3–3.5) for low-level colonization and aOR, 5.2 (95% CI, 3.1–8.7) for high-level colonization.
Among nursing home patients colonized with S. aureus, the risk of transmission to HCW gowns and gloves was greater from those colonized with greater quantities of S. aureus on the perianal skin. Our findings inform future infection control practices for both MRSA and MSSA in nursing homes.
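The adjusted odds ratios above come from a multivariable model, but the underlying building block is the odds ratio for a 2×2 exposure-by-outcome table with a Wald confidence interval on the log scale. The sketch below is a generic, unadjusted version with made-up counts; it does not reproduce the study's aORs, which control for covariates.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:

                   outcome+   outcome-
        exposed       a          b
        unexposed     c          d

    Returns (OR, lower, upper) with a 95% Wald CI computed on the log scale.
    Assumes all four cells are nonzero.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - 1.96 * se_log)
    upper = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lower, upper

# Hypothetical counts for illustration only (not the study's data):
or_, lo, hi = odds_ratio_ci(20, 30, 10, 40)
```

An adjusted OR replaces this closed-form calculation with the exponentiated coefficient from a multivariable logistic regression, so confounders shift the estimate away from the raw table value.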
This article accomplishes two things. First, it explores and defends Kierkegaard's distinctive solution to the Problem of Total Devotion, a problem which has been helpfully identified by Robert Adams. Second, it extends that solution by advancing an interpretation of the command to do all things to the glory of God (1 Corinthians 10:31) according to which we are being commanded to intentionally make every one of our actions such that it simultaneously counts as a divine action: in other words, to act intentionally in all things such that it is God who acts through us.
Systematic reviews and meta-analyses suggest that behaviour change interventions have modest effect sizes, struggle to demonstrate effect in the long term and that there is high heterogeneity between studies. Such interventions take huge effort to design and run for relatively small returns in terms of changes to behaviour.
So why do behaviour change interventions not work, and how can we make them more effective? This article offers some ideas about what may underpin the failure of behaviour change interventions. We propose three main reasons why our current methods of conducting behaviour change interventions struggle to achieve the changes we expect: 1) our current model for testing the efficacy or effectiveness of interventions focuses on a mean effect size, which ignores individual differences in response to interventions; 2) our interventions tend to assume that everyone values health in the way we do as health professionals; and 3) the great majority of our interventions focus on addressing cognitions as mechanisms of change. We appeal to people's logic and rationality rather than recognising that much of what we do and how we behave, including our health behaviours, is governed as much by how we feel and how emotionally engaged we are as by what we plan and intend to do.
Drawing on our team’s experience of developing multiple interventions to promote and support health behaviour change with a variety of populations in different global contexts, this article explores strategies with potential to address these issues.
Taxonomic identification of archaeofauna relies on techniques and anatomical traits that should be valid, reliable, and usable, but which are rarely tested. Identification protocols (techniques and anatomical traits), particularly those used to distinguish taxa of similar size and morphology, should be rigorously tested to ensure a solid interpretive foundation. Blind testing of a protocol for identifying stylohyoid bones of North American artiodactyls was performed by three analysts who independently employed the protocol to identify 77 anatomically complete specimens of known taxonomic identity, representing 54 individuals and 11 species. Identifications were identical in 89% of cases and in conflict in 3% of cases. The remainder involved differences in resolution; two analysts identified specimens to species, whereas the third identified specimens to more general taxonomic groups. Inter-analyst variability in identification was a result of differences in protocol application. Identifications were consistent with known taxon in 92%–96% of cases. Results indicate that the protocol is valid, reliable, and usable, and it can be applied to archaeological specimens with confidence. Testing of other identification criteria employed by zooarchaeologists is encouraged.
Passive acoustic monitoring is rapidly gaining recognition as a practical, affordable and robust tool for measuring gun hunting levels within protected areas, and consequently for its potential to evaluate anti-poaching patrols’ effectiveness based on outcome (i.e., change in hunting pressure) rather than effort (e.g., kilometres patrolled) or output (e.g., arrests). However, there has been no report to date of a protected area successfully using an acoustic grid to explore baseline levels of gun hunting activity, adapting its patrols in response to the evidence extracted from the acoustic data and then evaluating the effectiveness of the new patrol strategy. We report here such a case in Cameroon’s Korup National Park, where anti-poaching patrol effort was markedly increased in the 2015–2016 Christmas/New Year holiday season to curb the annual peak in gunshots recorded by a 12-sensor acoustic grid in the same period during the previous 2 years. Despite a three- to five-fold increase in patrol days, distance and area covered, the desired outcome – lower gun hunting activity – was not achieved under the new patrol scheme. The findings emphasize the need for adaptive wildlife law enforcement and how passive acoustic monitoring can help attain this goal, and they warn about the risks of using effort-based metrics of anti-poaching strategies as a surrogate for desired outcomes. We propose ways of increasing protected areas’ capacity to adopt acoustic grids as a law enforcement monitoring tool.
Most clinical microbiology laboratories have replaced toxin immunoassay (EIA) alone with multistep testing (MST) protocols or nucleic acid amplification testing (NAAT) alone for the detection of C. difficile.
To study the effect of changing testing strategies on C. difficile detection and strain diversity.
A Veterans’ Affairs hospital.
Initially, toxin EIA testing was replaced by an MST approach utilizing a glutamate dehydrogenase (GDH) and toxin EIA followed by tcdB NAAT for discordant results. After 18 months, MST was replaced by a NAAT-only strategy. Available patient stool specimens were cultured for C. difficile. Restriction endonuclease analysis (REA) strain typing and quantitative in vitro toxin testing were performed on recovered isolates.
Before MST (toxin EIA), 79 of 708 specimens (11%) were positive, and after MST (MST-A), 121 of 517 specimens (23%) were positive (P < .0001). Prior to NAAT-only testing (MST-B), 80 of the 490 specimens (16%) were positive by MST, and after NAAT-only testing was implemented, 67 of the 368 specimens (18%) were positive (P = nonsignificant). After replacing toxin EIA testing, REA strain group diversity increased (8, 13, 13, and 10 REA groups in the toxin EIA, MST-A, MST-B, and NAAT-only periods, respectively) and in vitro toxin concentration decreased. The average log10 toxin concentration of the isolates was 2.08, 1.88, 1.20, and 1.55 ng/mL for the same periods, respectively.
MST and NAAT had similar detection rates for C. difficile. Compared to toxin testing alone, they detected increased diversity of C. difficile strains, many of which were low toxin producing.
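The comparisons of detection rates across testing periods are differences of two proportions. A minimal two-proportion z-test (pooled standard error, two-sided p-value via the normal approximation), applied to the reported counts for the toxin-EIA and MST-A periods (79/708 vs 121/517), yields a p-value far below .0001, consistent with the reported result; note the abstract does not state which test the authors actually used, so this is an illustrative reconstruction.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation
    return z, p_value

# Toxin EIA period (79/708 positive) vs MST-A period (121/517 positive)
z, p = two_proportion_z(79, 708, 121, 517)
```

Applying the same function to the MST-B vs NAAT-only counts (80/490 vs 67/368) gives a p-value well above 0.05, matching the "nonsignificant" comparison in the abstract.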
The continental shelf edge of the NW Gulf of Mexico supports dozens of reefs and banks, including the West and East Flower Garden Banks (FGB) and Stetson Bank that comprise the Flower Garden Banks National Marine Sanctuary (FGBNMS). Discovered by fishermen in the early 1900s, the FGBs are named after the colourful corals, sponges and algae that dominate the region. The reefs and banks are the surface expression of underlying salt domes and provide important habitat for mesophotic coral ecosystems (MCE) and deep coral communities to 300 m depth. Since 2001, FGBNMS research teams have utilized remotely operated vehicles (e.g. ‘Phantom S2’, ‘Mohawk’, ‘Yogi’) to survey and characterize benthic habitats of this region. In 2016, a Draft Environmental Impact Statement proposed the expansion of the current sanctuary boundaries to incorporate an additional 15 reefs and banks, including Elvers Bank. Antipatharians (black corals) were collected within the proposed expansion sites and analysed using morphological and molecular methods. A new species, Distichopathes hickersonae, collected at 172 m depth on Elvers Bank, is described within the family Aphanipathidae. This brings the total number of black coral species in and around the sanctuary to 14.
Seismic-reflection surveys of the Isle Royale sub-basin, central Lake Superior, reveal two large end moraines and associated glacial sediments deposited during the last cycle of the Laurentide Ice Sheet in the basin. The Isle Royale moraines directly overlie bedrock and are cored with dense, acoustically massive till intercalated down-ice with acoustically stratified outwash. Till and outwash are overlain by glacial varves, a lower red unit and an upper gray unit.
The maximum extent of late Younger Dryas-age readvance into the western Lake Superior basin is uncertain, but it was probably controlled by both ice dynamics and climate. Our data indicate that during retreat from the maximum, the ice paused just long enough to construct the outer of the two moraines, >100 m high, and then retreated to the inner moraine, during which time most of the lower glacial-lacustrine sequence (red varves) was deposited. Retreat from the inner moraine coincided with a marked flux of icebergs at the calving margin and a change to gray varves. Rapid retreat may be related to both an influx of meltwater from Glacial Lake Agassiz about 10,500 cal yr BP and retreat of the calving margin down an adverse slope into the Isle Royale sub-basin.
OBJECTIVES/GOALS: 1. Understand the association between patient perceptions of care, measured by the Interpersonal Processes of Care (IPC) Survey, and glycemic control, appointment no-shows/cancellations, and medication adherence in patients with type 2 diabetes. 2. Determine how these relationships differ by race for non-Hispanic White and Black patients.
METHODS/STUDY POPULATION: This is a cross-sectional study of a random sample of 100 White and 100 Black patients with type 2 diabetes followed in Duke primary care clinics and prescribed antihyperglycemic medication. We will recruit through email and phone calls. Enrolled patients will complete the Interpersonal Processes of Care Short Form and the Extent of Medication Adherence survey to measure patient perceptions of care (predictor) and medication adherence (secondary outcome). No-show appointments and cancellations (secondary outcomes) and most recent hemoglobin A1c (primary outcome) will be collected from the electronic medical record. We will also collect basic demographic information, insurance status, financial security, significant comorbidities, and number and type (subcutaneous vs oral) of antihyperglycemic medications.
RESULTS/ANTICIPATED RESULTS: The study is powered to detect a 0.6% difference in HbA1c, our primary outcome, between high and low scorers on the Interpersonal Processes of Care subdomains. We expect that higher patient scores in the positive domains of the IPC survey and lower
DISCUSSION/SIGNIFICANCE OF IMPACT: This study will provide information to develop and implement targeted interventions to reduce racial and ethnic disparities in patients with type 2 diabetes. We hope to gain information on potentially modifiable factors in patient–provider interactions that can be intervened upon to improve prevention and long-term outcomes in these populations.
We examined whether change in added sugar intake is associated with change in δ13C, a novel sugar biomarker, in thirty-nine children aged 5–10 years selected from a Colorado (USA) prospective cohort of children at increased risk for type 1 diabetes. Reported added sugar intake via FFQ and δ13C in erythrocytes were measured at two time points a median of 2 years apart. Change in added sugar intake was associated with change in the δ13C biomarker, where for every 1-g increase in added sugar intake between the two time points, there was an increase in δ13C of 0.0082 (P = 0.0053), independent of change in HbA1c and δ15N. The δ13C biomarker may be used as a measure of compliance in an intervention study of children under the age of 10 years who are at increased risk for type 1 diabetes, in which the goal is to reduce dietary sugar intake.