What explains right-wing radicalization in the United States? Existing research emphasizes demographic changes, economic insecurity, and elite polarization. This paper highlights an additional factor: the impact of foreign wars on society at home. We argue that communities that bear the greatest costs of foreign wars are prone to higher rates of right-wing radicalization. To support this claim, we present robust correlations between activity on Parler, a predominantly right-wing social media platform, and fatalities among residents who served in U.S. wars in Iraq and Afghanistan, at both the county and census-tract levels. The findings contribute to understanding right-wing radicalization in the U.S. in two key respects. First, the study examines widespread, nonviolent radical-right activity that, because it is less provocative than protest and violence, has eluded systematic measurement. Second, it highlights that U.S. foreign wars have important implications for domestic politics beyond partisanship and voting, potentially including radicalization.
Background: Statistically significant decreases in methicillin-resistant Staphylococcus aureus (MRSA) healthcare-associated infections (HAIs) occurred in Veterans Health Administration (VA) facilities from 2007 to 2019 using active surveillance for facility admissions and contact precautions for patients colonized (CPC) or infected (CPI) with MRSA, but the value of these interventions is controversial. Objective: To determine the impact of active surveillance, CPC, and CPI on the prevention of MRSA HAIs, we conducted a prospective cohort study between July 2020 and June 2022 in all 123 acute-care VA medical facilities. In April 2020, all facilities were given the option to suspend any combination of active surveillance, CPC, or CPI to free up laboratory resources for COVID-19 testing and conserve personal protective equipment. We measured MRSA HAIs (cases per 1,000 patient days) in intensive care units (ICUs) and non-ICUs by infection control policy. Results: During the analysis period, there were 917,591 admissions, 5,225,174 patient days, and 568 MRSA HAIs. Only 20% of facilities continued all 3 MRSA infection control measures in July 2020, but this rate increased to 57% by June 2022. The MRSA HAI rate for all infection sites in non-ICUs was 0.07 (95% CI, 0.05–0.08) for facilities practicing active surveillance plus CPC plus CPI compared to 0.12 (95% CI, 0.08–0.19; P = .01) for those not practicing any of these strategies; in ICUs, the MRSA HAI rates were 0.20 (95% CI, 0.15–0.26) and 0.65 (95% CI, 0.41–0.98; P < .001) for the respective policies. Similar differences were seen when the analyses were restricted to MRSA bloodstream HAIs. Accounting for monthly COVID-19 admissions to facilities over the analysis period using a negative binomial regression model did not change the relationships between facility policy and MRSA HAI rates in ICUs or non-ICUs. There was no statistically significant difference in monthly facility urinary catheter-associated infection rates, a nonequivalent dependent variable, across the policy categories during the analysis period in either ICUs or non-ICUs. Conclusions: In VA medical centers, there were fewer MRSA HAIs when facilities practiced active surveillance and contact precautions for colonized or infected patients during the COVID-19 pandemic. The effect was greater in ICUs than in non-ICUs.
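For readers who want to see the adjustment step concretely, below is a minimal sketch of a negative binomial rate model of this general shape, assuming a hypothetical facility-month data set; the file and column names (facility_months.csv, hai_count, patient_days, policy, covid_admissions) are illustrative stand-ins, not the study's actual variables.

```python
# Minimal sketch: negative binomial model of monthly MRSA HAI counts,
# offset by patient-days, adjusting for monthly COVID-19 admissions.
# All file and column names are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("facility_months.csv")

model = smf.glm(
    "hai_count ~ C(policy) + covid_admissions",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["patient_days"]),  # turns counts into rates per patient-day
).fit()

print(np.exp(model.params))  # rate ratios vs. the reference policy category
```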
We explored experiences and perceptions surrounding the Self-Stewardship Time-Out Program (SSTOP) intervention to improve antimicrobial use across implementation sites. Semistructured qualitative interviews were conducted with antibiotic stewardship physicians and pharmacists, from which 5 key themes emerged. SSTOP may help achieve sustainable improvements in antibiotic use.
To describe national trends in testing and detection of carbapenemases produced by carbapenem-resistant Enterobacterales (CRE) and associate testing with culture and facility characteristics.
Retrospective cohort study.
Department of Veterans’ Affairs medical centers (VAMCs).
Patients seen at VAMCs between 2013 and 2018 with cultures positive for CRE, defined by national VA guidelines.
Microbiology and clinical data were extracted from national VA data sets. Carbapenemase testing was summarized using descriptive statistics. Characteristics associated with carbapenemase testing were also assessed.
Of 5,778 standard cultures that grew CRE, 1,905 (33.0%) had evidence of molecular or phenotypic carbapenemase testing, and 1,603 (84.1%) of these had carbapenemases detected. Among these cultures confirmed as carbapenemase-producing CRE, 1,053 (65.7%) had molecular testing for ≥1 gene. Almost all testing included KPC (n = 1,047, 99.4%), with KPC detected in 914 of 1,047 (87.3%) cultures. Testing and detection of other enzymes were less frequent. Carbapenemase testing increased over the study period, from 23.5% of CRE cultures in 2013 to 58.9% in 2018. The South (38.6%) and Northeast (37.2%) US Census regions had the highest proportions of CRE cultures with carbapenemase testing. High-complexity (vs low) and urban (vs rural) facilities were significantly associated with carbapenemase testing (P < .0001).
Between 2013 and 2018, carbapenemase testing and detection increased in the VA, largely reflecting increased testing and detection of KPC. Surveillance of other carbapenemases is important due to global spread and increasing antibiotic resistance. Efforts supporting the expansion of carbapenemase testing to low-complexity, rural healthcare facilities and standardization of reporting of carbapenemase testing are needed.
To assess the validity of antigen rapid diagnostic tests (Ag-RDT) for SARS-CoV-2 as a decision-support tool in various hospital-based clinical settings.
Retrospective cohort study among symptomatic and asymptomatic patients and healthcare workers (HCWs).
A large tertiary teaching medical center serving as a major COVID-19 hospitalizing facility.
Participants and Methods:
Ag-RDT performance was assessed in three clinical settings: (1) symptomatic patients and HCWs presenting at the emergency department; (2) asymptomatic patients screened upon hospitalization; and (3) HCWs of all sectors tested at the HCW clinic following exposure.
We obtained 5,172 samples from 4,595 individuals who had both Ag-RDT and quantitative real-time PCR (qRT-PCR) results available. Of these, 485 samples were positive by qRT-PCR. The positive percent agreement (PPA) of Ag-RDT was greater at lower cycle threshold (Ct) values, reaching 93% where the Ct value was <25 and 85% where the Ct value was <30. PPA was similar between symptomatic and asymptomatic individuals. We observed a significant correlation between Ct value and time from infection onset (P < .001).
Ag-RDTs are highly sensitive during the infectious stage of COVID-19, manifested by either a high viral load (lower Ct) or proximity to infection onset, whether the patient is symptomatic or asymptomatic. Thus, this simple-to-use and inexpensive detection method can be used as a decision-support tool in various in-hospital clinical settings, assisting patient flow and maintaining sufficient hospital staffing.
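As a concrete illustration of the agreement metric above: PPA is computed only among qRT-PCR-positive samples, as TP / (TP + FN). A minimal sketch, with hypothetical file and column names:

```python
# Minimal sketch: positive percent agreement (PPA) of Ag-RDT vs. qRT-PCR,
# stratified by Ct cutoff. File and column names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("samples.csv")  # columns: ag_rdt_pos (bool), pcr_pos (bool), ct_value

def ppa(frame: pd.DataFrame) -> float:
    """PPA = TP / (TP + FN), computed among qRT-PCR-positive samples."""
    pcr_positive = frame[frame["pcr_pos"]]
    return pcr_positive["ag_rdt_pos"].mean()

for cutoff in (25, 30):
    print(f"PPA where Ct < {cutoff}: {ppa(df[df['ct_value'] < cutoff]):.0%}")
print(f"PPA overall: {ppa(df):.0%}")
```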
In the early 1920s, two American companies battled for the rights to develop an oil concession in northern Persia: Standard Oil Company of New Jersey (later to become Exxon) and the smaller, but nonetheless formidable, Sinclair Consolidated Oil Corporation (later to merge into ARCO). American industry was a new variable in the equation, since Russia and Britain had been the primary beneficiaries of concessions granted by the Persian government to outsiders. Ultimately, neither company won out, for several reasons: the British did not welcome the new presence and sought to stymie it; the Persian press misunderstood the relationship between business and government in the U.S., leading some Persians into unrealistic expectations; and one of the competitors, Sinclair, became entangled in scandal.
Evidence suggests a link between smaller hippocampal volume (HV) and post-traumatic stress disorder (PTSD). However, there has been little prospective research testing this question directly, and it remains unclear whether smaller HV confers risk or is a consequence of traumatization and PTSD.
U.S. soldiers (N = 107) completed a battery of clinical assessments, including pre-deployment structural magnetic resonance imaging. Once deployed, they completed monthly assessments of traumatic stressors and symptoms. We hypothesized that smaller HV would potentiate the effects of traumatic stressors on PTSD symptoms in theater. Analyses evaluated whether total HV, lateral (right v. left) HV, or HV asymmetry (right − left) moderated the effects of stressor exposure during deployment on PTSD symptoms.
Findings revealed no interaction between total HV and average monthly traumatic stressors on PTSD symptoms (b = −0.028, p = 0.681; 95% confidence interval [CI] −0.167 to 0.100). However, in the context of greater exposure to average monthly traumatic stressors, greater right HV was associated with fewer PTSD symptoms (b = −0.467, p = 0.023; 95% CI −0.786 to −0.013), whereas greater left HV was unexpectedly associated with greater PTSD symptoms (b = 0.435, p = 0.024; 95% CI 0.028–0.715).
Our findings highlight the importance of considering the complex role of HV, in particular HV asymmetry, in predicting the emergence of PTSD symptoms in response to war-zone trauma.
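To make the moderation analysis concrete: the HV × stressor effect reported above is the coefficient on an interaction term. Below is a minimal sketch assuming a long-format table of monthly in-theater assessments; a mixed model with a per-soldier random intercept is one reasonable way to handle the repeated measures, and all file and column names are hypothetical stand-ins.

```python
# Minimal sketch: right hippocampal volume (HV) moderating the effect of
# average monthly traumatic stressors on PTSD symptoms.
# File and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_assessments.csv")  # soldier_id, ptsd_symptoms, stressors, right_hv

# Random intercept per soldier handles repeated monthly measures;
# the stressors:right_hv coefficient is the moderation term (b).
model = smf.mixedlm(
    "ptsd_symptoms ~ stressors * right_hv",
    data=df,
    groups=df["soldier_id"],
).fit()
print(model.summary())
```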
Effective stewardship strategies, such as an “antibiotic timeout” to encourage prescriber reflection on the use of broad-spectrum antibiotics, are critical to reduce the threat of multidrug-resistant organisms. We sought to understand the facilitators of and barriers to the implementation of the Antibiotic Self-Stewardship Timeout Program (SSTOP), which used a template note integrated into the electronic health record system to guide decision making regarding anti-methicillin-resistant S. aureus (MRSA) therapy after 3 days of hospitalization. We conducted interviews at 10 Veterans’ Affairs medical centers (VAMCs) during the preimplementation period (N = 16 antibiotic stewards) and postimplementation (N = 13 antibiotic stewards) ~12 months after program initiation. Preimplementation interviews focused on current stewardship programs, whereas postimplementation interviews addressed the implementation process and corresponding challenges. We also directly asked about the impact of COVID-19 on stewardship activities at each facility. Interviews were transcribed and analyzed using consensus-based inductive and deductive coding. Codes were iteratively combined into barrier and facilitator groupings. Barriers identified in the preimplementation interviews included challenges with staffing, the difficulties of changing prescribing culture, and academic affiliates (eg, rotating physician trainees). Facilitators included intellectual support (eg, providers who understand the concept of stewardship), facility support, individual strengths of antibiotic stewards (eg, diplomacy, strong relationships with surgeons), and resources such as VA policies mandating stewardship. By the postimplementation phase, all sites reported a high volume of COVID-19 cases. Additional demands were placed on the infectious disease providers who comprise the antibiotic stewardship teams, which complicated the implementation of SSTOP. Many of the barriers and facilitators mentioned were similar to those identified during preimplementation interviews. Staffing problems and specific providers not “getting it [stewardship activities]” continued, whereas facilitators centered on strong institutional support. Pandemic-specific barriers included the slowdown or stoppage of stewardship activities, including curbing of regular MRSA screening practices, halting of weekly stewardship rounds, and delays in stewardship committee planning. Pandemic-specific staffing problems occurred due to the need for “all hands on deck” and challenges with staff working from home, as well as being pulled in multiple directions (eg, writing COVID-19 policies). Furthermore, an increase in antibiotic use was reported at sites during COVID-19 surges. Our findings indicate that SSTOP implementation met with barriers throughout; however, pandemic-specific barriers were particularly powerful. Sites with strong staffing resources were better equipped to deal with these challenges. Understanding how the program evolves with subsequent COVID-19 surges will be important to support the broad implementation of SSTOP.
As advancements in medical therapy improve survival, we are confronted with more patients who either cannot communicate or lack decisional capacity, leaving us increasingly dependent on collaboration with surrogate decision-makers. These advancements also bring radically more complex scenarios that occasionally require compromise between quality of life and longevity of life. For instance, modalities such as continuous renal replacement therapy, extracorporeal membrane oxygenation, left ventricular assist devices, and organ transplantation can certainly extend the lives of their recipients, but at the cost of potential complications, time in the hospital, and variable success. Further, the balance between physician-directed decision-making and medical consumerism is tipping toward giving patients a wider breadth of decisional authority in their health care, and the intensive care unit (ICU) is no exception. The recognition of the importance of autonomous decision-making in the latter half of the twentieth century created a need to establish the shared medical decision-making model, which incorporates the values and choices of patients with the medical expertise of the physician, as discussed elsewhere in this text (see Chapter 1, When Does Shared Decision-Making Apply in Adult Critical Care?). A natural extension of the increasingly used shared decision-making model requires that we make reasonable efforts to seek collaboration with surrogate decision-makers when the patient is unable to represent themselves.1
Antibiotic prescribing practices across the Veterans’ Health Administration (VA) experienced significant shifts during the coronavirus disease 2019 (COVID-19) pandemic. From 2015 to 2019, antibiotic use during January through May decreased from 638 to 602 days of therapy (DOT) per 1,000 days present (DP), whereas in the corresponding months of 2020, antibiotic utilization rose to 628 DOT per 1,000 DP.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are gram-negative bacteria resistant to at least 1 carbapenem and are associated with high mortality (50%). Carbapenemase-producing CRE (CP-CRE) are particularly serious because they are more likely to transmit carbapenem resistance genes to other gram-negative bacteria and they are resistant to all carbapenem antibiotics. Few studies have evaluated risk factors associated with CP-CRE colonization. The goal of this study was to determine the risk factors associated with CP-CRE colonization in a cohort of US veterans. Methods: We conducted a retrospective cohort study of patients seen at VA medical centers between 2013 and 2018 who had positive cultures for CRE from any site, defined by resistance to at least 1 of the following carbapenems: imipenem, meropenem, doripenem, or ertapenem. CP-CRE was defined via antibiotic sensitivity data that coded the culture as being ‘carbapenemase producing,’ being ‘Hodge test positive,’ or ‘KPC producing.’ Only the first positive culture for CRE was included. Patient demographics (year of culture, age, sex, race, major comorbidities, infectious organism, culture site, inpatient status, and CP-CRE status) and facility demographics (rurality, geographic region, and facility complexity) were collected. Bivariate analysis and multiple logistic regression were performed to determine variables associated with CP-CRE versus non–CP-CRE. Results: In total, 3,322 patients were identified with a positive CRE culture: 546 (16.4%) with CP-CRE and 2,776 (83.6%) with non–CP-CRE. Most patients were men (95%), older (mean age, 71 years; SD, 12.5), and diagnosed at a high-complexity VA medical center (65%). Most cultures were urine (63%), followed by sputum (13%) and blood (7%). The largest share were from inpatients (46%), followed by outpatients (42%) and long-term care facilities (12%). Multivariable analysis showed the following variables to be associated with CP-CRE–positive cultures: congestive heart failure (P = .0136), African American race (P = .0760), Klebsiella spp (P < .0001), GI cancers (P = .0087), culture collected in 2017 (P = .0004), and culture collected in 2018 (P < .0001). There were also significant differences in CP-CRE frequencies by geographic region (P < .001). Discussion: CP-CRE diagnoses are relatively rare; however, their serious associated complications make them important infections to investigate. In our analysis, we found that congestive heart failure and gastric cancer were comorbidities strongly associated with CP-CRE. In 2017, the VA formalized its CP-CRE definition, which led to more accurate reporting. Conclusions: After the guideline was implemented, CP-CRE detection dramatically increased in noncontinental US facilities. More work should be done in the future to determine the different risk factors between non–CP-CRE and CP-CRE infections.
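As a sketch of the multivariable step described above, assuming a one-row-per-patient data set (first CRE culture only) with hypothetical column names:

```python
# Minimal sketch: multivariable logistic regression of CP-CRE status.
# File and column names (cp_cre, chf, organism, region, culture_year)
# are hypothetical stand-ins for the study variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cre_cultures.csv")  # one row per patient's first CRE-positive culture

model = smf.logit(
    "cp_cre ~ chf + C(organism) + C(region) + C(culture_year)",
    data=df,
).fit()

print(np.exp(model.params))  # odds ratios for CP-CRE vs. non-CP-CRE
```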
Background: Contamination of healthcare workers and patient environments likely plays a role in the spread of antibiotic-resistant organisms. The mechanisms that contribute to the distribution of organisms within and between patient rooms are not well understood, but they may include the movement patterns and patient interactions of healthcare workers. We used an innovative technology for tracking healthcare worker movement and patient interactions in ICUs. Methods: The Kinect system, a device developed by Microsoft, was used to detect the location of a person’s hands and head over time, each represented with 3-dimensional coordinates. The Kinects were deployed in 2 intensive care units (ICUs) at 2 different hospitals, collecting data from 5 rooms in a high-acuity 20-bed cardiovascular ICU (unit 1) and 3 rooms in a 10-bed medical-surgical ICU (unit 2). The length of the Kinect deployment varied by room (range, 15–48 days). The Kinect data were processed to include date, time, and location of head and hands for all individuals. Based on the coordinates of the bed, we defined events indicating bed touch, presence within 30 cm (~1 ft) of the bed, and presence within 1 m (~3 ft) of the bed. The processed Kinect data were then used to generate heat maps showing the density of person locations within a room and to summarize bed touches and time spent in different locations within the room. Results: In total, the Kinect systems captured 2,090 hours of room occupancy by at least 1 person within ~1 m of the bed (Table 1). Approximately half of the time spent within ~1 m of the bed was at the bedside (within ~30 cm). The estimated number of bed touches per hour when within ~1 m was 13–23. More person time was spent on one side of the bed, which varied by room and facility (Fig. 1A, 1B). Additionally, we observed temporal variation in intensity, measured by person time in the room (Fig. 1C, 1D). Conclusions: High occupancy tends to be on the far side of the patient bed (away from the door), where the computers are, and the bed-touch rate is relatively high. These results can be used to help us understand the potential for room contamination, which can contribute to both transmission and infection, and they highlight critical times and locations in the room with potential for focused deep cleaning.
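To make the event definitions concrete, here is a minimal sketch of deriving bed-touch and proximity events from tracked 3-D coordinates. The frame layout and the single bed reference point are hypothetical simplifications; a real pipeline would measure distance to the bed's full bounding box rather than one point.

```python
# Minimal sketch: proximity events from tracked hand coordinates.
# File layout and bed reference point are hypothetical; the thresholds
# follow the abstract (~30 cm and ~1 m bands around the bed).
import numpy as np
import pandas as pd

frames = pd.read_csv("kinect_frames.csv")  # timestamp, person_id, hand_x/y/z (meters)
BED_POINT = np.array([2.0, 1.5, 0.8])      # hypothetical bed reference coordinates

coords = frames[["hand_x", "hand_y", "hand_z"]].to_numpy()
dist = np.linalg.norm(coords - BED_POINT, axis=1)

frames["bed_touch"] = dist < 0.05    # hand effectively at the bed surface
frames["within_30cm"] = dist < 0.30  # "bedside" band
frames["within_1m"] = dist < 1.00    # outer proximity band

print(frames[["bed_touch", "within_30cm", "within_1m"]].mean())  # share of frames per band
```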
Background: Between 2007 and 2015, inpatient fluoroquinolone use declined in US Veterans’ Affairs (VA) hospitals. Whether fluoroquinolone use at discharge has also declined, in particular since antibiotic stewardship programs became mandated at VA hospitals in 2014, is unknown. Methods: In this retrospective cohort study of hospitalizations with infection between January 1, 2014, and December 31, 2017, at 125 VA hospitals, we assessed inpatient and discharge fluoroquinolone (ciprofloxacin, levofloxacin, and moxifloxacin) use as (1) the proportion of hospitalizations with a fluoroquinolone prescribed and (2) fluoroquinolone days per 1,000 hospitalizations. After adjusting for illness severity, comorbidities, and age, we used multilevel logit and negative binomial models to assess hospital-level variation and longitudinal prescribing trends. Results: Of 560,219 hospitalizations meeting inclusion criteria as hospitalizations with infection, 209,602 (37.4%) had a fluoroquinolone prescribed either during hospitalization (182,337; 32.5%) or at discharge (110,003; 19.6%) (Fig. 1). Hospitals varied appreciably in inpatient, discharge, and total fluoroquinolone use, with 71% of hospitals in the highest prescribing quartile located in the southern United States. Nearly all measures of fluoroquinolone use decreased between 2014 and 2017, with the largest decreases found in inpatient fluoroquinolone and ciprofloxacin use (Fig. 2). In contrast, there was minimal decline in fluoroquinolone use at discharge (Fig. 2), which accounted for 1,433 of 2,339 (61.3%) hospitalization-related fluoroquinolone days by 2017. Conclusions: Between 2014 and 2017, fluoroquinolone use decreased in VA hospitals, largely driven by a decrease in inpatient fluoroquinolone (especially ciprofloxacin) use. Fluoroquinolone prescribing at discharge, and levofloxacin prescribing overall, remain prime targets for stewardship.
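For clarity, the two use measures defined in the methods reduce to simple per-hospitalization arithmetic; a minimal sketch with hypothetical file and column names:

```python
# Minimal sketch: the two fluoroquinolone (FQ) use measures defined above.
# File and column names (fq_any, fq_days) are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("hospitalizations.csv")  # one row per hospitalization with infection

prop_with_fq = df["fq_any"].mean()                       # (1) proportion with an FQ prescribed
fq_days_per_1000 = 1000 * df["fq_days"].sum() / len(df)  # (2) FQ days per 1,000 hospitalizations

print(f"{prop_with_fq:.1%} of hospitalizations; {fq_days_per_1000:.0f} FQ days per 1,000")
```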
Funding: This work was funded by a locally initiated project grant from the Center for Clinical Management Research at the VA Ann Arbor Healthcare System.
Disclosures: Valerie M. Vaughn reports contract research for Blue Cross and Blue Shield of Michigan, the Department of Veterans’ Affairs, the NIH, SHEA, and APIC. She also reports fees from the Gordon and Betty Moore Foundation Speaker’s Bureau, the CDC, the Pew Research Trust, Sepsis Alliance, and The Hospital and Health System Association of Pennsylvania.
Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the US government.
A survey of Veterans’ Affairs medical centers on control of carbapenem-resistant Enterobacteriaceae (CRE) and carbapenemase-producing CRE (CP-CRE) demonstrated that most facilities use VA guidelines, but few regularly screen for CRE/CP-CRE colonization or routinely communicate CRE/CP-CRE status at patient transfer. Most respondents were knowledgeable about CRE guidelines but cited a lack of adequate resources.
To evaluate the National Healthcare Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Retrospective cohort study.
Eight tertiary-care referral general hospitals in California.
We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment.
For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
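To see why dropping ICU data can raise the SIR even as events fall, recall that SIR = observed / predicted, and the ICU bed adjustment inflates the predicted count. A minimal worked sketch with illustrative numbers (not the study's data):

```python
# Minimal sketch: SIR = observed / predicted HO-CDI events.
# All numbers below are illustrative, not the study's data.
def sir(observed: int, predicted: float) -> float:
    return observed / predicted

# Facility-wide SIR with ICU data and the ICU bed adjustment included.
full = sir(observed=62, predicted=50.0)         # -> 1.24

# Removing ICU data shrinks the numerator, but losing the ICU bed
# adjustment shrinks the predicted denominator even more, so SIR rises.
without_icu = sir(observed=46, predicted=18.5)  # -> ~2.49

print(f"with ICU: {full:.2f}; without ICU: {without_icu:.2f}")
```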
To determine whether the Society for Healthcare Epidemiology of America (SHEA) and the Infectious Diseases Society of America (IDSA) Clostridioides difficile infection (CDI) severity criteria adequately predict poor outcomes.
Retrospective validation study.
Setting and participants:
Patients with CDI in the Veterans’ Affairs Health System from January 1, 2006, to December 31, 2016.
For the 2010 criteria, patients with leukocytosis or a serum creatinine (SCr) value ≥1.5 times the baseline were classified as severe. For the 2018 criteria, patients with leukocytosis or a SCr value ≥1.5 mg/dL were classified as severe. Poor outcomes were defined as hospital or intensive care admission within 7 days of diagnosis, colectomy within 14 days, or 30-day all-cause mortality; they were modeled as a function of the 2010 and 2018 criteria separately using logistic regression.
We analyzed data from 86,112 episodes of CDI. Severity was unclassifiable in a large proportion of episodes diagnosed in subacute care (2010, 58.8%; 2018, 49.2%). Sensitivity ranged from 0.48 for subacute care using 2010 criteria to 0.73 for acute care using 2018 criteria. Areas under the curve were poor and similar (0.60 for subacute care and 0.57 for acute care) for both versions, but negative predictive values were >0.80.
Model performances across care settings and criteria versions were generally poor but had reasonably high negative predictive value. Many patients in the subacute-care setting, an increasing fraction of CDI cases, could not be classified. More work is needed to develop criteria to identify patients at risk of poor outcomes.
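For concreteness, below is a minimal sketch of the two severity definitions and the reported validation metrics. Column names are hypothetical, and the leukocytosis threshold (WBC ≥ 15 × 10⁹ cells/L) is our reading of the SHEA/IDSA criteria rather than something stated in this abstract.

```python
# Minimal sketch: SHEA/IDSA CDI severity classification plus sensitivity and
# negative predictive value (NPV). File and column names are hypothetical;
# the WBC >= 15 threshold is an assumption taken from the guidelines.
import pandas as pd

df = pd.read_csv("cdi_episodes.csv")  # wbc, scr, scr_baseline, poor_outcome (0/1)

leukocytosis = df["wbc"] >= 15                                        # x10^9 cells/L
severe_2010 = leukocytosis | (df["scr"] >= 1.5 * df["scr_baseline"])  # relative to baseline
severe_2018 = leukocytosis | (df["scr"] >= 1.5)                       # absolute, mg/dL

truth = df["poor_outcome"].astype(bool)

def sensitivity(pred: pd.Series) -> float:
    return (pred & truth).sum() / truth.sum()

def npv(pred: pd.Series) -> float:
    return (~pred & ~truth).sum() / (~pred).sum()

for name, pred in [("2010", severe_2010), ("2018", severe_2018)]:
    print(f"{name}: sensitivity={sensitivity(pred):.2f}, NPV={npv(pred):.2f}")
```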
Although death by neurologic criteria (brain death) is legally recognized throughout the United States, state laws and clinical practice vary concerning three key issues: (1) the medical standards used to determine death by neurologic criteria, (2) management of family objections before determination of death by neurologic criteria, and (3) management of religious objections to declaration of death by neurologic criteria. The American Academy of Neurology and other medical stakeholder organizations involved in the determination of death by neurologic criteria have undertaken concerted action to address variation in clinical practice in order to ensure the integrity of brain death determination. To complement this effort, state policymakers must revise legislation on the use of neurologic criteria to declare death. We review the legal history and current laws regarding neurologic criteria to declare death and offer proposed revisions to the Uniform Determination of Death Act (UDDA) and the rationale for these recommendations.
Laboratory identification of carbapenem-resistant Enterobacteriaceae (CRE) is a key step in controlling its spread. Our survey showed that most Veterans Affairs laboratories follow VA guidelines for initial CRE identification, whereas 55.0% use PCR to confirm carbapenemase production. Most respondents were knowledgeable about CRE guidelines. Cited barriers included inadequate staffing, training, and financial resources.