Objective:
To assess the validity of antigen rapid diagnostic tests (Ag-RDTs) for SARS-CoV-2 as a decision support tool in various hospital-based clinical settings.
Design:
Retrospective cohort study among symptomatic and asymptomatic patients and healthcare workers (HCWs).
Setting:
A large tertiary teaching medical center serving as a major COVID-19 hospitalizing facility.
Participants and Methods:
Ag-RDTs’ performance was assessed in three clinical settings: (1) symptomatic patients and HCWs presenting at the emergency departments; (2) asymptomatic patients screened upon hospitalization; and (3) HCWs of all sectors tested at the HCW clinic following exposure.
Results:
We obtained 5172 samples from 4595 individuals, who had both Ag-RDT and quantitative real-time PCR (qRT-PCR) results available. Of these, 485 samples were positive by qRT-PCR. The positive percent agreement (PPA) of Ag-RDT was greater for lower cycle threshold (Ct) values, reaching 93% in cases where Ct-value was <25 and 85% where Ct-value was <30. PPA was similar between symptomatic and asymptomatic individuals. We observed a significant correlation between Ct-value and time from infection onset (p<0.001).
Conclusions:
Ag-RDTs are highly sensitive during the infectious stage of COVID-19, as manifested by either high viral load (lower Ct-value) or proximity to infection onset, whether the patient is symptomatic or asymptomatic. Thus, this simple-to-use and inexpensive detection method can be used as a decision support tool in various in-hospital clinical settings, assisting patient flow and maintaining sufficient hospital staffing.
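The positive percent agreement figures above can be reproduced from a line-level dataset in a few lines of code. The following is an illustrative sketch only, using hypothetical column names and toy data rather than the study's actual analysis pipeline:

```python
# Illustrative sketch: PPA of Ag-RDT vs. qRT-PCR, stratified by Ct-value cutoff.
# Column names and example rows are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "qrt_pcr_positive": [True, True, True, True, False],
    "ag_rdt_positive":  [True, True, False, True, False],
    "ct_value":         [18.2, 24.7, 31.5, 28.9, None],
})

def ppa(frame: pd.DataFrame, ct_max=None) -> float:
    """PPA = Ag-RDT positives among qRT-PCR positives, optionally below a Ct cutoff."""
    pos = frame[frame["qrt_pcr_positive"]]
    if ct_max is not None:
        pos = pos[pos["ct_value"] < ct_max]
    return pos["ag_rdt_positive"].mean()

for cutoff in (25, 30, None):
    label = f"Ct < {cutoff}" if cutoff else "all qRT-PCR positives"
    print(f"PPA ({label}): {ppa(df, cutoff):.0%}")
```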
Evidence suggests a link between smaller hippocampal volume (HV) and post-traumatic stress disorder (PTSD). However, there has been little prospective research testing this question directly and it remains unclear whether smaller HV confers risk or is a consequence of traumatization and PTSD.
Methods
U.S. soldiers (N = 107) completed a battery of clinical assessments, including structural magnetic resonance imaging, before deployment. Once deployed, they completed monthly assessments of traumatic stressors and symptoms. We hypothesized that smaller HV would potentiate the effects of traumatic stressors on PTSD symptoms in theater. Analyses evaluated whether total HV, lateral (right v. left) HV, or HV asymmetry (right – left) moderated the effects of stressor exposure during deployment on PTSD symptoms.
Results
Findings revealed no interaction between total HV and average monthly traumatic stressors on PTSD symptoms (b = −0.028, p = 0.681; 95% confidence interval (CI) −0.167 to 0.100). However, in the context of greater exposure to average monthly traumatic stressors, greater right HV was associated with fewer PTSD symptoms (b = −0.467, p = 0.023; 95% CI −0.786 to −0.013), whereas greater left HV was unexpectedly associated with greater PTSD symptoms (b = 0.435, p = 0.024; 95% CI 0.028–0.715).
Conclusions
Our findings highlight the importance of considering the complex role of HV, in particular HV asymmetry, in predicting the emergence of PTSD symptoms in response to war-zone trauma.
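The moderation analyses described above amount to testing a stressor × HV interaction term in a regression on PTSD symptoms. Below is a hedged sketch of that kind of model on simulated data; the variable names, sample values, and use of statsmodels are assumptions for illustration, not the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 107  # sample size quoted in the abstract

# Hypothetical data: monthly traumatic-stressor exposure and standardized right HV
df = pd.DataFrame({
    "stressors": rng.poisson(3, n),
    "right_hv": rng.normal(0, 1, n),
})
# Simulated outcome with a built-in stressor x HV interaction, purely for illustration
df["ptsd"] = (10 + 1.5 * df["stressors"]
              - 0.5 * df["stressors"] * df["right_hv"]
              + rng.normal(0, 2, n))

# The moderation test is the coefficient on the stressors:right_hv interaction term
model = smf.ols("ptsd ~ stressors * right_hv", data=df).fit()
print(model.summary().tables[1])
```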
Antibiotic prescribing practices across the Veterans’ Health Administration (VA) experienced significant shifts during the coronavirus disease 2019 (COVID-19) pandemic. From 2015 to 2019, antibiotic use between January and May decreased from 638 to 602 days of therapy (DOT) per 1,000 days present (DP), while the corresponding months in 2020 saw antibiotic utilization rise to 628 DOT per 1,000 DP.
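For readers unfamiliar with the metric, the rates quoted above are simple arithmetic: days of therapy divided by days present, scaled to 1,000. A minimal sketch with hypothetical counts:

```python
def dot_per_1000_dp(days_of_therapy: float, days_present: float) -> float:
    """Antibiotic use rate: days of therapy (DOT) per 1,000 days present (DP)."""
    return 1000 * days_of_therapy / days_present

# Hypothetical counts: 63,800 DOT over 100,000 days present -> 638 DOT per 1,000 DP
print(dot_per_1000_dp(63_800, 100_000))
```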
A survey of Veterans’ Affairs Medical Centers on control of carbapenem-resistant Enterobacteriaceae (CRE) and carbapenemase-producing CRE (CP-CRE) demonstrated that most facilities use VA guidelines, but few screen for CRE/CP-CRE colonization regularly or routinely communicate CRE/CP-CRE status at patient transfer. Most respondents were knowledgeable about CRE guidelines but cited a lack of adequate resources.
The following position statement from the Union of the European Phoniatricians, updated on 25th May 2020 (superseding the previous statement issued on 21st April 2020), contains a series of recommendations for phoniatricians and ENT surgeons who provide and/or run voice, swallowing, speech and language, or paediatric audiology services.
Objectives
This material specifically aims to inform clinical practices in countries where clinics and operating theatres are reopening for elective work. It endeavours to present a current European view in relation to common procedures, many of which fall under the aegis of aerosol generating procedures.
Conclusion
As evidence continues to build, some of the recommended practices will undoubtedly evolve, but it is hoped that the updated position statement will offer clinicians precepts on safe clinical practice.
There is significant interest in the use of angiotensin converting enzyme inhibitors (ACE-I) and angiotensin II receptor blockers (ARB) in coronavirus disease 2019 (COVID-19) and concern over potential adverse effects since these medications upregulate the severe acute respiratory syndrome coronavirus 2 host cell entry receptor ACE2. Recent studies on ACE-I and ARB in COVID-19 were limited by excluding outpatients, excluding patients by age, analyzing ACE-I and ARB together, imputing missing data, and/or diagnosing COVID-19 by chest computed tomography without definitive reverse transcription polymerase chain reaction (RT-PCR), all of which are addressed here.
Methods:
We performed a retrospective cohort study of 1023 COVID-19 patients diagnosed by RT-PCR at Stanford Hospital through April 8, 2020, with a minimum follow-up of 14 days, to investigate the association of ACE-I or ARB use with outcomes.
Results:
Use of ACE-I or ARB medications was not associated with increased risk of hospitalization, intensive care unit admission, or death. Compared to patients with charted past medical history, there was a lower risk of hospitalization for patients on ACE-I (odds ratio (OR) 0.43; 95% confidence interval (CI) 0.19–0.97; P = 0.0426) and ARB (OR 0.39; 95% CI 0.17–0.90; P = 0.0270). Compared to patients with hypertension not on ACE-I or ARB, patients on ARB medications had a lower risk of hospitalization (OR 0.09; 95% CI 0.01–0.88; P = 0.0381).
Conclusions:
These findings suggest that the use of ACE-I and ARB is not associated with adverse outcomes and may be associated with improved outcomes in COVID-19, which is immediately relevant to care of the many patients on these medications.
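The odds ratios reported above come from the study's own models; as a rough illustration of how an odds ratio and its Wald 95% confidence interval are derived from a 2×2 table, here is a small sketch with invented counts (not the study's data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed with / without the outcome; c, d = unexposed with / without."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Invented counts, purely to show the arithmetic
print(odds_ratio_ci(8, 92, 45, 250))
```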
Given the rapidly progressing coronavirus disease 2019 (COVID-19) pandemic, this report on a US cohort of 54 COVID-19 patients from Stanford Hospital, and the data on risk factors for severe disease obtained at initial clinical presentation, is highly important and immediately clinically relevant. We identified low presenting oxygen saturation as predictive of severe disease outcomes, such as diagnosis of pneumonia, acute respiratory distress syndrome, and admission to the intensive care unit, and also replicated data from China suggesting an association between hypertension and disease severity. Clinicians will benefit from tools to rapidly risk-stratify patients at presentation by their likelihood of progression to severe disease.
Aberrant activity of the subcallosal cingulate (SCC) is a common theme across pharmacologic treatment efficacy prediction studies. The functioning of the SCC in psychotherapeutic interventions is relatively understudied, as are functional differences among SCC subdivisions. We conducted resting-state functional connectivity (rsFC) analyses on resting-state functional magnetic resonance imaging (fMRI) data, collected before and after a course of cognitive behavioral therapy (CBT) in patients with major depressive disorder (MDD), using seeds from three SCC subdivisions.
Methods.
Resting-state data were collected from unmedicated patients with current MDD (Hamilton Depression Rating Scale-17 > 16) before and after 14 sessions of CBT monotherapy. Treatment outcome was assessed using the Beck Depression Inventory (BDI). Rostral anterior cingulate (rACC), anterior subcallosal cingulate (aSCC), and Brodmann’s area 25 (BA25) masks were used as seeds in connectivity analyses that assessed baseline rsFC and symptom severity, changes in connectivity related to symptom improvement after CBT, and prediction of treatment outcomes using whole-brain baseline connectivity.
Results.
Pretreatment BDI negatively correlated with pretreatment rACC ~ dorsolateral prefrontal cortex and aSCC ~ lateral prefrontal cortex rsFC. In a region-of-interest longitudinal analysis, rsFC between these regions increased post-treatment (p < 0.05, FDR-corrected). In whole-brain analyses, BA25 ~ paracentral lobule and rACC ~ paracentral lobule connectivities decreased post-treatment. Whole-brain baseline rsFC with SCC did not predict clinical improvement.
Conclusions.
rsFC features of rACC and aSCC, but not BA25, correlated inversely with baseline depression severity, and increased following CBT. Subdivisions of SCC involved in top-down emotion regulation may be more involved in cognitive interventions, while BA25 may be more informative for interventions targeting bottom-up processing. Results emphasize the importance of subdividing the SCC in connectivity analyses.
To evaluate the National Health Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Design:
Retrospective cohort study.
Setting:
Eight tertiary-care referral general hospitals in California.
Methods:
We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment.
Results:
For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
Conclusions:
For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
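The SIR behaviour described above follows directly from its definition as observed over predicted events. A minimal sketch, with hypothetical counts, shows how removing ICU data can lower the event count yet raise the SIR when the predicted count falls even further:

```python
def sir(observed_events: float, predicted_events: float) -> float:
    """Standardized infection ratio: observed / NHSN-predicted HO-CDI events."""
    return observed_events / predicted_events

# Hypothetical counts: removing ICU data (and the ICU bed adjustment) cuts the
# observed events, but the predicted count falls further, so the SIR rises.
print(round(sir(93, 75), 2))   # facility-wide, e.g. ~1.24
print(round(sir(70, 28), 2))   # ICU data and bed adjustment removed
```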
Although death by neurologic criteria (brain death) is legally recognized throughout the United States, state laws and clinical practice vary concerning three key issues: (1) the medical standards used to determine death by neurologic criteria, (2) management of family objections before determination of death by neurologic criteria, and (3) management of religious objections to declaration of death by neurologic criteria. The American Academy of Neurology and other medical stakeholder organizations involved in the determination of death by neurologic criteria have undertaken concerted action to address variation in clinical practice in order to ensure the integrity of brain death determination. To complement this effort, state policymakers must revise legislation on the use of neurologic criteria to declare death. We review the legal history and current laws regarding neurologic criteria to declare death and offer proposed revisions to the Uniform Determination of Death Act (UDDA) and the rationale for these recommendations.
Laboratory identification of carbapenem-resistant Enterobacteriaceae (CRE) is a key step in controlling its spread. Our survey showed that most Veterans Affairs laboratories follow VA guidelines for initial CRE identification, whereas 55.0% use PCR to confirm carbapenemase production. Most respondents were knowledgeable about CRE guidelines. Barriers included staffing, training, and financial resources.
Cognitive behavioral therapy (CBT) is an effective treatment for many patients suffering from major depressive disorder (MDD), but predictors of treatment outcome are lacking, and little is known about its neural mechanisms. We recently identified longitudinal changes in neural correlates of conscious emotion regulation that scaled with clinical responses to CBT for MDD, using a negative autobiographical memory-based task.
Methods
We now examine the neural correlates of emotional reactivity and emotion regulation during viewing of emotionally salient images as predictors of treatment outcome with CBT for MDD, and the relationship between longitudinal change in functional magnetic resonance imaging (fMRI) responses and clinical outcomes. Thirty-two participants with current MDD underwent baseline MRI scanning followed by 14 sessions of CBT. The fMRI task measured emotional reactivity and emotion regulation on separate trials using standardized images from the International Affective Pictures System. Twenty-one participants completed post-treatment scanning. Last observation carried forward was used to estimate clinical outcome for non-completers.
Results
Pre-treatment emotional reactivity Blood Oxygen Level-Dependent (BOLD) signal within the hippocampus, including CA1, predicted worse treatment outcome. In contrast, better treatment outcome was associated with increased down-regulation of BOLD activity during emotion regulation from time 1 to time 2 in the precuneus, occipital cortex, and middle frontal gyrus.
Conclusions
CBT may modulate the neural circuitry of emotion regulation. The neural correlates of emotional reactivity may be more strongly predictive of CBT outcome. The finding that treatment outcome was predicted by BOLD signal in CA1 may suggest overgeneralized memory as a negative prognostic factor in CBT outcome.
Cyber Operational Risk: Cyber risk is routinely cited in various publications and surveys as one of the most important sources of operational risk facing organisations today. Further, in recent years, cyber risk has entered the public consciousness through highly publicised events involving affected UK organisations such as TalkTalk, Morrisons and the NHS. Regulators and legislators are increasing their focus on this topic, with the General Data Protection Regulation (“GDPR”) a notable example. Risk actuaries and other risk management professionals at insurance companies therefore need a robust assessment of the potential losses stemming from cyber risk that their organisations may face. They should be able to do this as part of an overall risk management framework and be able to demonstrate it to stakeholders such as regulators and shareholders. Given that cyber risks are still very much new territory for insurers and there is no commonly accepted practice, this paper describes a proposed framework in which to perform such an assessment. As part of this, we leverage two existing frameworks – the Chief Risk Officer (“CRO”) Forum cyber incident taxonomy, and the National Institute of Standards and Technology (“NIST”) framework – to describe the taxonomy of a cyber incident, and the relevant cyber security and risk mitigation items for the incident in question, respectively.
Summary of Results: Three detailed scenarios have been investigated by the working party:
∙ Employee leaks data at a general (non-life) insurer: Internal attack through social engineering, causing large compensation costs and regulatory fines, driving a 1 in 200 loss of £210.5m (c. 2% of annual revenue).
∙ Cyber extortion at a life insurer: External attack through social engineering, causing large business interruption and reputational damage, driving a 1 in 200 loss of £179.5m (c. 6% of annual revenue).
∙ Motor insurer telematics device hack: External attack through software vulnerabilities, causing large remediation / device replacement costs, driving a 1 in 200 loss of £70.0m (c. 18% of annual revenue).
Limitations: The following sets out key limitations of the work set out in this paper:
∙ While the presented scenarios are deemed material at this point in time, the threat landscape moves fast and could render specific narratives and calibrations obsolete within a short-time frame.
∙ There is a lack of historical data to base certain scenarios on and therefore a high level of subjectivity is used to calibrate them.
∙ No attempt has been made to make an allowance for seasonality of renewals (a cyber event coinciding with peak renewal season could exacerbate cost impacts).
∙ No consideration has been given to the impact of the event on the share price of the company.
∙ Correlation with other risk types has not been explicitly considered.
Conclusions: Cyber risk is a very real threat and should not be ignored or treated lightly in operational risk frameworks, as it has the potential to threaten the ongoing viability of an organisation. Risk managers and capital actuaries should be aware of the various sources of cyber risk and their potential impacts to ensure that the business is sufficiently prepared for such an event. When it comes to quantifying the impact of cyber risk on the operations of an insurer, there are significant challenges, not least because the threat landscape is ever changing and there is a lack of historical experience on which to base assumptions. Given this uncertainty, this paper sets out a framework with which readers can bring consistency to the way scenarios are developed over time. It provides a common taxonomy to ensure that key aspects of cyber risk are considered, and sets out examples of how to implement the framework. It is critical that insurers endeavour to understand cyber risk better and look to refine assumptions over time as new information is received. In addition to ensuring that sufficient capital is held for key operational risks, the investment in understanding cyber risk now will help to educate senior management and could have benefits through influencing internal cyber security capabilities.
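The "1 in 200" figures quoted in the scenarios correspond to the 99.5th percentile of an aggregate annual loss distribution. As a hedged illustration only, the sketch below simulates such a distribution under an assumed Poisson frequency / lognormal severity model; all parameters are invented and are not the working party's calibrations:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000  # number of simulated years

# Assumed (invented) model: Poisson incident count per year, lognormal severity in GBP m
incident_counts = rng.poisson(lam=0.8, size=n_years)
annual_losses = np.array([
    rng.lognormal(mean=2.5, sigma=1.2, size=k).sum() for k in incident_counts
])

# The 1-in-200 loss is the 99.5th percentile of the aggregate annual loss distribution
one_in_200 = np.percentile(annual_losses, 99.5)
print(f"Simulated 1-in-200 annual loss: £{one_in_200:.1f}m")
```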
We evaluated the utility of vancomycin-resistant Enterococcus (VRE) surveillance by varying 2 parameters: admission versus weekly surveillance and perirectal swabbing versus stool sampling.
Design
Prospective, patient-level surveillance program of incident VRE colonization.
Setting
Liver transplant surgical intensive care unit (SICU) of a tertiary-care referral medical center with a high prevalence of VRE.
Patients
All patients admitted to the SICU from June to August 2015.
Methods
We conducted a point-prevalence estimate followed by admission and weekly surveillance by perirectal swabbing and/or stool sampling. Incident colonization was defined as a negative screen followed by positive surveillance. VRE was detected by culture on Remel Spectra VRE chromogenic agar. Microbiologically confirmed VRE bloodstream infections (BSIs) were tracked for 2 months. Statistical analyses were performed using the McNemar test, the Fisher exact test, the t test, and the χ2 test.
Results
In total, 91 patients underwent VRE surveillance testing. The point prevalence of VRE colonization was 60.9%; VRE prevalence on admission was 30.1%. Weekly surveillance identified an additional 7 of 28 patients (25.0%) with incident colonization. VRE BSIs were more common in VRE-colonized patients than in noncolonized patients (8 of 43 vs 2 of 48; P=.028). In a direct comparison, perirectal swabs were more sensitive than stool samples in detecting VRE (64 of 67 vs 56 of 67; P=.023). Compliance with perirectal swabbing was 89% (201 of 226) compared to 56% (127 of 226) for stool collection (P≤.001).
Conclusions
We recommend weekly VRE surveillance over admission-only screening in high-burden units such as liver transplant SICUs. Perirectal swabs had greater collection compliance and sensitivity than stool samples, making them the preferred methodology. Further work may have implications for antimicrobial stewardship and infection control.
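The swab-versus-stool comparison above is a paired one, hence the McNemar test. The sketch below illustrates that test; only the marginal counts (64 of 67 vs 56 of 67) come from the abstract, while the split of concordant and discordant pairs is a hypothetical reconstruction:

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical 2x2 table of paired results on the 67 VRE-positive sample pairs.
# Rows: perirectal swab (+, -); columns: stool sample (+, -).
# Only the marginals (64/67 by swab, 56/67 by stool) come from the abstract.
table = [[55, 9],
         [1, 2]]

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar p-value: {result.pvalue:.3f}")
```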
SCALA is a physical calibration device for the SuperNova Integral Field Spectrograph (SNIFS), mounted on the University of Hawaii 2.2 m telescope on Mauna Kea. For type Ia supernova (SN Ia) cosmology programs, an improved fundamental calibration directly translates into improved cosmological constraints. The aim of SCALA is to perform a fundamental calibration of the CALSPEC (Bohlin 2014) standard stars, which are currently calibrated relative to white dwarf model atmospheres.
The goal of this project was to document the current state of a community-academic partnership, identifying early successes and lessons learned.
Methods
We employed qualitative methods (semi-structured interviews and document analysis) across 2 data sources to (1) show how the principles of community-based participatory research are enacted through the activities of Addressing Disparities in Asian Populations through Translational Research (ADAPT) and (2) elucidate the barriers and facilitators to adhering to those principles from the perspectives of the members themselves.
Results
In addition to established community-based participatory research values, understanding individuals’ motivations for participation, the challenges of aligning the priorities of community organizations and academic partners, and definitions of success emerged as themes key to the process of maintaining this partnership.
Conclusion
As the emphasis on community-academic partnerships grows, there is potential for clinical and translational science awards to use community engagement to facilitate translational research beyond the traditional medical spheres of influence and to forge relationships with affected communities.