Strain, temperature and strain rate are crucial factors governing the development of crystallographic preferred orientations (CPO) in ice. To better understand how CPO patterns change in response to these variables, we performed quantitative analyses on neutron diffraction data collected in situ during uniaxial compression experiments on deuterium ice between 2010 and 2019. At strains >10% and temperatures <−10°C, the c-axis pattern switches from a single maximum (‘cluster’) to a small circle (‘cone’), both oriented parallel to shortening. The diameter and mean width of the cone pattern decrease as strain and/or strain rate increases. Prismatic axis (a and m) patterns are characterised by great circles parallel to the pole figure margin and may be distinguishable from the patterns in ice deformed under simple shear. While strain has the main influence on the degree of preferred orientation (or CPO ‘strength’), both temperature and strain rate have minor influences, which limits the extent to which CPOs can be used to measure strain. As cluster patterns can be observed in the c-axes of ice deformed under both pure and simple shear settings, this may complicate interpretations of flow geometry in terrestrial ice unless the prismatic axis patterns are also considered.
Objective: To describe national trends in testing and detection of carbapenemases produced by carbapenem-resistant Enterobacterales (CRE) and to associate testing with culture and facility characteristics.
Design: Retrospective cohort study.
Setting: Department of Veterans’ Affairs medical centers (VAMCs).
Patients: Patients seen at VAMCs between 2013 and 2018 with cultures positive for CRE, defined by national VA guidelines.
Methods: Microbiology and clinical data were extracted from national VA data sets. Carbapenemase testing was summarized using descriptive statistics. Characteristics associated with carbapenemase testing were also assessed.
Results: Of 5,778 standard cultures that grew CRE, 1,905 (33.0%) had evidence of
molecular or phenotypic carbapenemase testing and 1,603 (84.1%) of these had
carbapenemases detected. Among these cultures confirmed as
carbapenemase-producing CRE, 1,053 (65.7%) had molecular testing for
≥1 gene. Almost all testing included KPC (n = 1,047, 99.4%), with KPC
detected in 914 of 1,047 (87.3%) cultures. Testing and detection of other
enzymes was less frequent. Carbapenemase testing increased over the study
period from 23.5% of CRE cultures in 2013 to 58.9% in 2018. The South (38.6%) and Northeast (37.2%) US Census regions had the highest proportions of CRE cultures with carbapenemase testing. High complexity (vs
low) and urban (vs rural) facilities were significantly associated with
carbapenemase testing (P < .0001).
Conclusions: Between 2013 and 2018, carbapenemase testing and detection increased in the
VA, largely reflecting increased testing and detection of KPC. Surveillance
of other carbapenemases is important due to global spread and increasing
antibiotic resistance. Efforts supporting the expansion of carbapenemase
testing to low-complexity, rural healthcare facilities and standardization
of reporting of carbapenemase testing are needed.
The Hierarchical Taxonomy of Psychopathology (HiTOP) has emerged out of the quantitative approach to psychiatric nosology. This approach identifies psychopathology constructs based on patterns of co-variation among signs and symptoms. The initial HiTOP model, which was published in 2017, is based on a large literature that spans decades of research. HiTOP is a living model that undergoes revision as new data become available. Here we discuss advantages and practical considerations of using this system in psychiatric practice and research. We especially highlight limitations of HiTOP and ongoing efforts to address them. We describe differences and similarities between HiTOP and existing diagnostic systems. Next, we review the types of evidence that informed development of HiTOP, including populations in which it has been studied and data on its validity. The paper also describes how HiTOP can facilitate research on genetic and environmental causes of psychopathology as well as the search for neurobiologic mechanisms and novel treatments. Furthermore, we consider implications for public health programs and prevention of mental disorders. We also review data on clinical utility and illustrate clinical application of HiTOP. Importantly, the model is based on measures and practices that are already used widely in clinical settings. HiTOP offers a way to organize and formalize these techniques. This model already can contribute to progress in psychiatry and complement traditional nosologies. Moreover, HiTOP seeks to facilitate research on linkages between phenotypes and biological processes, which may enable construction of a system that encompasses both biomarkers and precise clinical description.
Background: The NHSN Antibiotic Resistance (AR) Option can serve as a useful tool for tracking antibiotic-resistant infections and can aid in the development of inpatient antibiograms. We recently described the frequency of antibiotic suppression in NHSN AR Option data. In this analysis, we describe the effects of suppression on practical uses of the NHSN AR Option, specifically selected agent antibiogram development and detection of reportable conditions. Methods: Antibiotic susceptibility data were collected from the NHSN AR Option and commercial automated antimicrobial susceptibility testing instruments (cASTI) from 3 hospital networks. Data were obtained from January 1, 2017, to December 31, 2018. The clinical susceptibility data for third-generation cephalosporins and carbapenems against carbapenem-resistant Enterobacterales (CRE), Pseudomonas aeruginosa, and Acinetobacter baumannii were included. Susceptibility results were defined as suppressed when they were observed from the laboratory instrument but not in the NHSN data. For the overall percentage susceptibility estimation, isolates with <30 susceptibility results were excluded. Percentage susceptibility of NHSN results was compared to their counterparts from cASTI. Results: Of the 852 matched isolates in the primary analysis, 804 had at least 1 suppressed result. Of the 804 isolates, 16.9% were P. aeruginosa, 67.3% were E. coli, and 11.1% were Klebsiella spp. The following pathogen–drug combinations had no difference observed in the percentage susceptible between the 2 systems: ceftazidime tested against P. aeruginosa, ceftriaxone tested against Klebsiella spp, ertapenem tested against Klebsiella spp, imipenem tested against E. coli and P. aeruginosa, and meropenem tested against P. aeruginosa. Significant differences were observed for the following drugs tested against E. coli: ceftazidime (11.1%), cefotaxime (8.6%), and ceftriaxone (8.3%). In the NHSN AR Option, the following isolates showed suppressed results related to their phenotypic case definition: 17 (3%) CRE isolates, 7 (28%) carbapenem-resistant Acinetobacter baumannii (CRAB) isolates, 511 (93.2%) extended-spectrum β-lactamase (ESBL) isolates, and 94 (66.7%) carbapenem-resistant Pseudomonas aeruginosa (CRPA) isolates. Conclusions: For select isolates, notably E. coli, we observed a large difference in the percentage of susceptible isolates reported into the NHSN AR Option compared to the cASTI data. This difference significantly limits the ability of the AR Option to create valid antibiograms for select pathogen–drug combinations. Moreover, significant numbers of CRAB, ESBL, and CRPA isolates would not be identified from the NHSN AR Option because of suppression. This finding highlights the need for antimicrobial stewardship teams to regularly assess the impact of selective reporting in identifying pathogens of public health importance.
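As an illustration of the percent-susceptible comparison described above (including the exclusion of combinations with <30 results), here is a minimal Python sketch; the column names and helper functions are hypothetical and do not reflect the study's actual code or data layout.

```python
# Illustrative sketch, not the study's code: percent susceptible per
# pathogen-drug combination, then a comparison of two reporting sources.
import pandas as pd

def percent_susceptible(df: pd.DataFrame, min_results: int = 30) -> pd.DataFrame:
    """df columns (hypothetical): organism, drug, result ('S', 'I', or 'R')."""
    grouped = df.groupby(["organism", "drug"])["result"]
    summary = grouped.agg(n="size", pct_s=lambda r: 100 * (r == "S").mean())
    # Mirror the exclusion described above: drop combinations with <30 results.
    return summary[summary["n"] >= min_results]

def compare_sources(nhsn: pd.DataFrame, casti: pd.DataFrame) -> pd.Series:
    """Percentage-point difference in percent susceptible (cASTI minus NHSN)."""
    diff = percent_susceptible(casti)["pct_s"] - percent_susceptible(nhsn)["pct_s"]
    return diff.dropna().rename("pct_point_difference")
```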
Background: Carbapenem-resistant Enterobacterales (CRE) are an urgent public health threat, particularly those that produce carbapenemase (CP-CRE). Certain risk factors associated with CRE acquisition have been well described, such as older age, indwelling devices, prior hospitalizations, and underlying conditions. However, data are limited regarding the association of CRE and health disparities, such as race and ethnicity. Published literature has consistently shown that minority groups, including but not limited to non-Hispanic Black persons, have higher risks of developing adverse health outcomes. To better understand the impact of race and ethnicity in CP-CRE cases, we compared 1-year mortality rates among non-Hispanic Blacks and non-Hispanic Whites. Methods: CRE are reportable in Tennessee; isolates must be sent to the State Public Health Laboratory for carbapenemase detection and resistance mechanism testing. We linked 2015–2019 CP-CRE surveillance cases and laboratory data from our statewide surveillance system, the National Electronic Disease Surveillance System (NEDSS) Base System, with the Tennessee Hospital Discharge Data System (HDDS) and vital records databases. Database linkage and data analyses were performed using SAS version 9.4 software. Results: Among 615 CP-CRE cases, the mean age was lower among non-Hispanic Blacks (59 years; SD, 16.6) than among non-Hispanic Whites (65 years; SD, 15.7). Among 156 non-Hispanic Blacks with CP-CRE, 101 (64.7%) were nursing home residents, whereas 281 of 395 (71.1%) non-Hispanic Whites were nursing home residents. Also, 64 non-Hispanic Blacks (41%) died within 1 year of their first specimen collection date compared to 92 non-Hispanic Whites (23.3%). The 1-year mortality rate among non-Hispanic Blacks with CP-CRE was 5.6 per 100,000 Black population (95% CI, 4.21–6.94), which was 1.6 times higher than the rate among non-Hispanic White persons of 3.5 per 100,000 White population (95% CI, 2.94–3.95; χ2 P < .001). Conclusions: Despite a lower mean age, non-Hispanic Black CP-CRE cases had a higher 1-year mortality rate than non-Hispanic Whites. Race and ethnicity data are often missing or incomplete in surveillance data. Data linkages can be a valuable tool to gather additional clinical and demographic data that may be missing from public health surveillance data to improve our understanding of health disparities. Recognition of these health disparities among CRE cases can provide an opportunity for public health to create more targeted interventions and educational outreach.
Background: On March 5, 2020, the Tennessee Department of Health (TDH) announced the first case of COVID-19 in the state. Since then, hospitals have been overwhelmed by the spike in respiratory infections. Several studies have attempted to describe the impact of the pandemic on antibiotic prescriptions. The NHSN Antimicrobial Use Option offers a platform for hospitals to report their antibiotic usage. The TDH has established access to hospital antibiotic usage data statewide through an existing NHSN user group. We compared the volume of inpatient antibiotic prescriptions before and during the pandemic. Methods: An ecological study was conducted from January 2019 to December 2021. Aggregated facility-level data from the NHSN Antimicrobial Use Option were used to describe antibacterial use among Tennessee hospitals. Data from facilities that had reported at least 1 month of data during the study period were included. The antimicrobial use rate was calculated as antimicrobial days of therapy (DOT) per 1,000 days present. Overall antimicrobial use rates as well as specific antimicrobial use rates for azithromycin, ceftriaxone, and piperacillin–tazobactam were compared across years. Results: In total, 55 hospitals reported at least 1 month of data into the NHSN Antimicrobial Use Option during the study period. These hospitals had a median bed size of 140 (range, 12–689). Conclusions: We observed a modest increase in overall antibiotic use during the COVID-19 pandemic in Tennessee facilities. This trend appeared to be primarily attributable to agents used for community-acquired respiratory infections, such as azithromycin and ceftriaxone, earlier in the pandemic. However, use of both of these agents had returned to prepandemic levels during 2021. The fact that overall use increased in 2021 suggests that other agents not analyzed may have contributed to this effect. Further analysis may help determine which agents are responsible for this increase in 2021.
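For clarity, the use-rate measure stated in the Methods works out as below; this is an illustrative Python sketch with made-up facility numbers, not TDH's analysis code.

```python
# Minimal sketch of the rate described above: antimicrobial days of therapy
# (DOT) per 1,000 days present. The numbers are made up for illustration.
def use_rate(dot: float, days_present: float) -> float:
    """Antimicrobial use rate = DOT / days present x 1,000."""
    return dot / days_present * 1000

# e.g., a facility-month with 2,450 DOT over 38,000 days present
print(round(use_rate(2450, 38000), 1))  # ~64.5 DOT per 1,000 days present
```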
Background: Healthcare facilities have experienced many challenges during the COVID-19 pandemic, including limited personal protective equipment (PPE) supplies. Healthcare personnel (HCP) rely on PPE, vaccines, and other infection control measures to prevent SARS-CoV-2 infections. We describe PPE concerns reported by HCP who had close contact with COVID-19 patients in the workplace and tested positive for SARS-CoV-2. Methods: The CDC collaborated with Emerging Infections Program (EIP) sites in 10 states to conduct surveillance for SARS-CoV-2 infections in HCP. EIP staff interviewed HCP with positive SARS-CoV-2 viral tests (ie, cases) to collect data on demographics, healthcare roles, exposures, PPE use, and concerns about their PPE use during COVID-19 patient care in the 14 days before the HCP’s SARS-CoV-2 positive test. PPE concerns were qualitatively coded as being related to supply (eg, low quality, shortages); use (eg, extended use, reuse, lack of fit test); or facility policy (eg, lack of guidance). We calculated and compared the percentages of cases reporting each concern type during the initial phase of the pandemic (April–May 2020), during the first US peak of daily COVID-19 cases (June–August 2020), and during the second US peak (September 2020–January 2021). We compared percentages using mid-P or Fisher exact tests (α = 0.05). Results: Among 1,998 HCP cases occurring during April 2020–January 2021 who had close contact with COVID-19 patients, 613 (30.7%) reported ≥1 PPE concern (Table 1). The percentage of cases reporting supply or use concerns was higher during the first peak period than during the second peak period (supply concerns: 12.5% vs 7.5%; use concerns: 25.5% vs 18.2%; both P < .05). Conclusions: Although lower percentages of HCP cases overall reported PPE concerns after the first US peak, our results highlight the importance of developing capacity to produce and distribute PPE during times of increased demand. The difference we observed among selected groups of cases may indicate that PPE access and use were more challenging for some, such as nonphysicians and nursing home HCP. These findings underscore the need to ensure that PPE is accessible and used correctly by HCP for whom use is recommended.
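As a sketch of the period comparison described in the Methods, the snippet below applies a Fisher exact test to hypothetical counts of cases with and without a given PPE concern in two periods; the real denominators are not given in the abstract.

```python
# Illustrative only: the counts below are hypothetical, not the study's data.
from scipy.stats import fisher_exact

first_peak = {"with_concern": 120, "without_concern": 840}    # hypothetical
second_peak = {"with_concern": 60, "without_concern": 740}    # hypothetical

table = [
    [first_peak["with_concern"], first_peak["without_concern"]],
    [second_peak["with_concern"], second_peak["without_concern"]],
]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.4f}")
```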
Background: Multidrug-resistant organisms (MDROs) are a global threat. To track and contain the spread, the Tennessee Department of Health (TDH) performs targeted surveillance of carbapenemase-producing and pan-nonsusceptible organisms. When these MDROs are identified, TDH conducts a containment response and collects epidemiological data, which include risk factors such as indwelling devices and previous hospitalizations. The impact of the COVID-19 pandemic on these MDROs is not well understood. Therefore, we describe the characteristics of cases positive for both COVID-19 and select MDROs. Methods: MDRO investigation data from January 1, 2020–September 30, 2021, were matched with all COVID-19 case data from the TDH statewide surveillance system, the National Electronic Disease Surveillance System Base System. The MDRO-positive date was defined as the specimen collection date; the COVID-19 case date was defined as the date of symptom onset or, if missing, the diagnosis date or the investigation creation date, in that order. Descriptive statistics and Fisher exact tests were calculated using SAS version 9.4 software. Results: Among 336 MDRO cases, 50 had a reported SARS-CoV-2–positive result. MDRO types were carbapenem-resistant Enterobacterales (CRE) (n = 31), carbapenem-resistant Acinetobacter spp (CRA) (n = 18), and Pseudomonas aeruginosa (n = 1). Of these 50 cases, 20 were MDRO-positive before and 30 after the COVID-19 case date. Of the 18 CRA cases, 16 (89%) were positive after the COVID-19 case date, compared to 13 (42%) among 31 CRE cases (P < .01). Also, 35 patients (70%) had a record of hospitalization, and 22 (63%) had their MDRO specimen collected after the COVID-19 case date (P = .37). Of these 22 patients, 4 had their MDRO specimen collected during their COVID-19 hospitalization, with an average duration from admission to MDRO collection date of 17 days (range, 4–36). Among the 50 coinfected cases, 8 died, 7 (88%) of whom were MDRO-positive after their COVID-19 case date. Data on indwelling devices at the time of MDRO positivity were complete for 17 cases; 14 had an indwelling device and, among these, 13 (93%) were MDRO-positive after their COVID-19 case date. Conclusions: MDRO cases with specimens collected after COVID-19 comprised the majority of hospitalized patients, patients who died, and patients with indwelling devices compared to those with MDROs collected before their COVID-19 case date. These results show a stark difference, with CRA the most common MDRO among post–COVID-19 cases. Our data were limited by reporting gaps. We recognize that patients can remain colonized with MDROs for lengthy durations, which could have resulted in undetected MDRO cases prior to the COVID-19 case date. More data and analyses are needed to make targeted public health recommendations. However, these findings highlight the burden of MDROs among COVID-19 cases, including adverse health outcomes.
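The date hierarchy and matching logic described in the Methods could look roughly like the following Python sketch; the column names and merge key are hypothetical, and the actual linkage was performed in SAS.

```python
# Illustrative sketch of the linkage: coalesce the COVID-19 case date from
# symptom onset, then diagnosis date, then investigation creation date, and
# flag whether the MDRO specimen was collected after that date.
import pandas as pd

def link_mdro_covid(mdro: pd.DataFrame, covid: pd.DataFrame) -> pd.DataFrame:
    covid = covid.copy()
    covid["covid_case_date"] = (
        covid["symptom_onset_date"]
        .fillna(covid["diagnosis_date"])
        .fillna(covid["investigation_created_date"])
    )
    merged = mdro.merge(
        covid[["patient_id", "covid_case_date"]], on="patient_id", how="inner"
    )
    merged["mdro_after_covid"] = (
        merged["specimen_collection_date"] > merged["covid_case_date"]
    )
    return merged
```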
Background: Nationally, a decrease in total antibiotic use in nursing homes during the COVID-19 pandemic was observed, along with an increase in select agents used for respiratory infections. Currently, there are minimal data on antibiotic use in long-term care facilities (LTCFs) in Tennessee. To address this issue, the Tennessee Department of Health (TDH) developed a monthly point-prevalence survey of antibiotic use. Utilizing this tool, we sought to determine the effect the pandemic had on antibiotic use in Tennessee LTCFs. Methods: We developed a REDCap questionnaire to collect information on selected antibiotics administered in Tennessee LTCFs. Antibiotic use percentage was determined by dividing the number of residents who received an antibiotic on the day of the survey by the facility's average census. Data were divided into a prepandemic period (January 2019–February 2020) and a period during the pandemic (March 2020–December 2021). Antibiotic prescriptions were grouped into 4 classes according to their most common uses: Clostridium difficile infections, urinary tract infections, skin and soft-tissue infections (SSTIs), and respiratory infections. The average percentage of residents on antibiotics was compared between study periods. Results: In total, 37 facilities participated in the survey during the prepandemic period and 32 facilities participated during the pandemic period; 14 participated during both periods. The average percentage of residents on antimicrobials before the pandemic was 16.3%, which decreased to 11.5% during the pandemic period (P = .04). During the prepandemic period, 40.2% of antibiotics prescribed were in the category commonly used for SSTIs and 38.3% were in the category commonly used for respiratory infections (P = .01); during the pandemic period, 64.3% of antibiotics prescribed were in the category commonly used for SSTIs and 45.8% were in the category commonly used for respiratory infections (P = .01). The 3 most prescribed antibiotics in the prepandemic period were amoxicillin (148 prescriptions), doxycycline (140 prescriptions), and levofloxacin (135 prescriptions). The 3 most prescribed antibiotics during the pandemic were doxycycline (141 prescriptions), levofloxacin (125 prescriptions), and trimethoprim–sulfamethoxazole (115 prescriptions). Conclusions: Survey results revealed that antibiotic prescriptions commonly used for respiratory infections increased by 7.5 percentage points during the pandemic study period. Additionally, the average percentage of residents on antimicrobials fell by 4.8 percentage points during this period. Both statistics reflect national trends: an overall decrease in antibiotic use alongside an increase in respiratory agents. This could be due to multiple factors, including decreased reporting, changes in healthcare delivery during the pandemic, and an increase in respiratory tract infections at facilities. These data will be used to guide future TDH antibiotic stewardship efforts in the long-term care setting.
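The point-prevalence measure described in the Methods is simple arithmetic; a minimal Python sketch with made-up values follows.

```python
# Minimal sketch of the calculation described above: residents on an antibiotic
# on the survey day divided by the facility's average census. Values are made up.
def antibiotic_use_pct(residents_on_abx: int, avg_census: float) -> float:
    return 100 * residents_on_abx / avg_census

# e.g., 13 residents on antibiotics in a facility averaging 95 residents
print(round(antibiotic_use_pct(13, 95), 1))  # 13.7%
```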
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
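As a rough illustration of the approach (not the authors' implementation), a multistate model starts from daily patient states; the sketch below estimates an empirical one-day transition matrix from hypothetical patient-day records.

```python
# Illustrative sketch: count consecutive-day state transitions per patient and
# row-normalize to transition probabilities. States and columns are hypothetical.
import pandas as pd

STATES = ["ward", "icu", "discharged", "dead"]

def transition_matrix(daily_states: pd.DataFrame) -> pd.DataFrame:
    """daily_states columns (hypothetical): patient_id, day, state (one of STATES)."""
    counts = pd.DataFrame(0.0, index=STATES, columns=STATES)
    for _, grp in daily_states.sort_values("day").groupby("patient_id"):
        seq = grp["state"].tolist()
        for a, b in zip(seq, seq[1:]):          # consecutive-day transitions
            counts.loc[a, b] += 1
    return counts.div(counts.sum(axis=1), axis=0)  # rows sum to 1 where observed
```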
OBJECTIVES/GOALS: Diffusion basis spectrum imaging (DBSI) allows for detailed evaluation of white matter microstructural changes present in cervical spondylotic myelopathy (CSM). Our goal is to utilize multidimensional clinical and quantitative imaging data to characterize disease severity and predict long-term outcomes in CSM patients undergoing surgery. METHODS/STUDY POPULATION: A single-center prospective cohort study enrolled 50 CSM patients who underwent surgical decompression and 20 healthy controls from 2018 to 2021. All patients underwent diffusion tensor imaging (DTI), DBSI, and complete clinical evaluations at baseline and 2-year follow-up. Primary outcome measures were the modified Japanese Orthopedic Association score (mild [mJOA 15-17], moderate [mJOA 12-14], severe [mJOA 0-11]) and SF-36 Physical and Mental Component Summaries (PCS and MCS). At 2-year follow-up, improvement was assessed via established MCID thresholds. A supervised machine learning classification model was used to predict treatment outcomes. The highest-performing algorithm was a linear support vector machine. Leave-one-out cross-validation was utilized to test model performance. RESULTS/ANTICIPATED RESULTS: A total of 70 participants (20 controls, 25 mild, and 25 moderate/severe CSM patients) were enrolled. Baseline clinical and DTI/DBSI measures were significantly different between groups. DBSI axial and radial diffusivity were significantly correlated with baseline mJOA and mJOA recovery, respectively (r = -0.33, p < 0.01; r = -0.36, p = 0.02). When predicting baseline disease severity (mJOA classification), DTI metrics alone performed with 38.7% accuracy (AUC: 0.722), compared to 95.2% accuracy (AUC: 0.989) with DBSI metrics alone. When predicting improvement after surgery (change in mJOA), clinical variables alone performed with 33.3% accuracy (AUC: 0.40). When combining DTI or DBSI parameters with key clinical covariates, model accuracy improved to 66.7% (AUC: 0.65) and 88.1% (AUC: 0.95), respectively. DISCUSSION/SIGNIFICANCE: DBSI metrics correlate with baseline disease severity and outcome measures at 2-year follow-up. Our results suggest that DBSI may serve as a valid non-invasive imaging biomarker for CSM disease severity and potential for postoperative improvement.
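A minimal sketch of the classification setup described above (a linear support vector machine evaluated with leave-one-out cross-validation), using scikit-learn with placeholder data rather than the study's DBSI and clinical features:

```python
# Placeholder features and labels stand in for the imaging/clinical data;
# the pipeline and cross-validation scheme mirror the described setup.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 12))      # placeholder imaging/clinical features
y = rng.integers(0, 3, size=70)    # placeholder severity classes

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=LeaveOneOut())  # accuracy per left-out case
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```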
Social media platforms allow users to share news, ideas, thoughts, and opinions on a global scale. Data processing methods allow researchers to automate the collection and interpretation of social media posts for efficient and valuable disease surveillance. Data derived from social media and internet search trends have been used successfully for monitoring and forecasting disease outbreaks such as Zika, Dengue, MERS, and Ebola viruses. More recently, data derived from social media have been used to monitor and model disease incidence during the coronavirus disease 2019 (COVID-19) pandemic. We discuss the use of social media for disease surveillance.
Healthcare personnel with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection were interviewed to describe activities and practices in and outside the workplace. Among 2,625 healthcare personnel, workplace-related factors that may increase infection risk were more common among nursing-home personnel than hospital personnel, whereas selected factors outside the workplace were more common among hospital personnel.
This essay examines the narrative and representational tactics of Matthew Desmond's Evicted: Poverty and Profit in the American City (2016). Rather than read this book solely in terms of its findings, this essay argues that Desmond attempts to stylistically embody the relationship between market culture, eviction, and the political delegitimation of the poor. Evicted also reworks the sociological “community study” by refashioning literary templates from writers such as Jacob Riis, Charles Dickens, Jane Jacobs, and Hannah Arendt. By fusing such debts together, Evicted powerfully connects its account of eviction's toll to the broader but too often overlooked relationship between poverty and citizenship.
Chronic psychotic disorders (CPDs) occur worldwide and cause significant burden. Poor medication adherence is pervasive, but has not been well studied in sub-Saharan Africa.
This cross-sectional survey of 100 poorly adherent Tanzanian patients with CPD characterised clinical features associated with poor adherence.
Descriptive statistics characterised demographic and clinical variables, including barriers to adherence, adherence behaviours and attitudes, and psychiatric symptoms. Measures included the Tablets Routine Questionnaire, the Drug Attitudes Inventory, the Brief Psychiatric Rating Scale, the Clinical Global Impressions scale, the Alcohol Use Disorders Identification Test, and the Alcohol, Smoking and Substance Involvement Screening Test. The relationship between adherence and other clinical variables was evaluated.
Mean age was 35.7 years (s.d. 8.8), 61% were male and 80% had schizophrenia, with a mean age at onset of 22.4 (s.d. 7.6) years. Mean proportion of missed CPD medication was 64%. One in ten had alcohol dependence. Most individuals had multiple adherence barriers. Most clinical variables were not significantly associated with the Tablets Routine Questionnaire; however, in-patients with CPD were more likely to have worse adherence (P ≤ 0.01), as were individuals with worse medication attitudes (Drug Attitudes Inventory, P < 0.01), higher CPD symptom severity levels (Brief Psychiatric Rating Scale, P < 0.001) and higher-risk use of alcohol (Alcohol Use Disorders Identification Test, P < 0.001).
Poorly adherent patients had multiple barriers to adherence, including poor attitudes toward medication and treatment, high illness acuity and substance use comorbidity. Treatments need to address adherence barriers, and consider family supports and challenges from an intergenerational perspective.
This is an epidemiological study of carbapenem-resistant Enterobacteriaceae (CRE) in Veterans’ Affairs medical centers (VAMCs). In 2017, almost 75% of VAMCs had at least 1 CRE case. We observed substantial geographic variability, with more cases in urban, complex facilities. This supports the benefit of tailoring infection control strategies to facility characteristics.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are gram-negative bacteria resistant to at least 1 carbapenem and are associated with high mortality (50%). Carbapenemase-producing CRE (CP-CRE) are particularly serious because they are more likely to transmit carbapenem resistance genes to other gram-negative bacteria and they are resistant to all carbapenem antibiotics. Few studies have evaluated risk factors associated with CP-CRE colonization. The goal of this study was to determine the risk factors associated with CP-CRE colonization in a cohort of US veterans. Methods: We conducted a retrospective cohort study of patients seen at VA medical centers between 2013 and 2018 who had positive cultures for CRE from any site, defined by resistance to at least 1 of the following carbapenems: imipenem, meropenem, doripenem, or ertapenem. CP-CRE was defined via antibiotic sensitivity data that coded the culture as being ‘carbapenemase producing,’ being ‘Hodge test positive,’ or ‘KPC producing.’ Only the first positive culture for CRE was included. Patient demographics (year of culture, age, sex, race, major comorbidities, infectious organism, culture site, inpatient status, and CP-CRE status) and facility demographics (rurality, geographic region, and facility complexity) were collected. Bivariate analysis and multiple logistic regression were performed to determine variables associated with CP-CRE versus non–CP-CRE. Results: In total, 3,322 patients were identified with a positive CRE culture: 546 (16.4%) with CP-CRE and 2,776 (83.6%) with non–CP-CRE. Most patients were men (95%), were older (mean age, 71 years; SD, 12.5), and were diagnosed at a high-complexity VA medical center (65%). Most of the cultures were urine (63%), followed by sputum (13%) and blood (7%). Cultures most often came from inpatients (46%), followed by outpatients (42%) and long-term care facilities (12%). Multivariable analysis showed the following variables to be associated with CP-CRE–positive cultures: congestive heart failure (P = .0136), African American race (P = .0760), Klebsiella spp (P < .0001), GI cancers (P = .0087), culture collected in 2017 (P = .0004), and culture collected in 2018 (P < .0001). There were also significant differences in CP-CRE frequencies by geographic region (P < .001). Discussion: CP-CRE diagnoses are relatively rare; however, their serious associated complications make them important infections to investigate. In our analysis, we found that congestive heart failure and gastric cancer were comorbidities strongly associated with CP-CRE. In 2017, the VA formalized its CP-CRE definition, which led to more accurate reporting. Conclusions: After the guideline was implemented, CP-CRE detection dramatically increased in noncontinental US facilities. More work should be done in the future to determine the different risk factors between non–CP-CRE and CP-CRE infections.
OBJECTIVES/GOALS: African Americans have a 3-fold higher risk of end-stage kidney disease (ESKD) compared to Whites, due in part to APOL1 risk alleles. Whether resistant hypertension (RH) magnifies the risk of ESKD among African Americans beyond APOL1 is not known. We examined the interaction between RH and race on ESKD risk and the independent effect of RH beyond APOL1. METHODS/STUDY POPULATION: We designed a retrospective cohort of 240,038 veterans with HTN, enrolled in the Million Veteran Program, with an estimated glomerular filtration rate (eGFR) >30 mL/min/1.73 m2. The primary exposure was incident RH (time-varying). The primary outcome was incident ESKD during 13.5 years of follow-up (2004-2017). Secondary outcomes were myocardial infarction (MI), stroke, and death. Incident RH was defined as failure to achieve outpatient blood pressure (BP) <140/90 mmHg with 3 antihypertensive drugs, including a thiazide, or use of 4 or more drugs. Poisson models were used to estimate incidence rates and test additive interaction with race and APOL1 genotype. Multivariable Cox models (with Fine-Gray competing-risks models as sensitivity analyses) were used to examine independent effects. RESULTS/ANTICIPATED RESULTS: The cohort comprised 235,046 veterans; median age was 60 years; 21% were African American and 6% were women, with 23,010 incident RH cases observed over a median follow-up time of 10.2 years [interquartile range, 5.6-12.6]. Patients with RH had higher incidence rates [per 1,000 person-years] of ESKD (4.5 vs. 1.3), myocardial infarction (6.5 vs. 3.0), stroke (16.4 vs. 7.6) and death (12.0 vs. 6.9) than patients with non-resistant hypertension (NRH). African Americans with RH had a 2.6-fold higher risk of ESKD compared to African Americans with NRH, 3-fold the risk of Whites with RH, and 9.6-fold the risk of Whites with NRH [p-interaction < .001]. Among African Americans, RH was associated with a 2.2-fold (95% CI, 1.86-2.58) higher risk of incident ESKD in models adjusted for APOL1 genotype. In the subset of African Americans with no APOL1 risk alleles, RH was associated with an adjusted 2.75-fold (95% CI, 2.00-3.50) higher risk of incident ESKD. DISCUSSION/SIGNIFICANCE OF IMPACT: RH was independently associated with a higher risk of ESKD and cardiovascular outcomes, especially among African Americans. This elevated risk is independent of APOL1 genotype. Interventions that achieve BP targets among patients with RH could curtail the incidence of ESKD and cardiovascular outcomes in this high-risk population. CONFLICT OF INTEREST DESCRIPTION: None.
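For reference, the incidence rates quoted above are events per 1,000 person-years; a minimal Python sketch of that arithmetic with hypothetical person-time follows.

```python
# Illustrative only: the event and person-time totals below are hypothetical.
def incidence_rate(events: int, person_years: float) -> float:
    """Events per 1,000 person-years."""
    return events / person_years * 1000

# e.g., 450 ESKD events over 100,000 person-years of follow-up
print(round(incidence_rate(450, 100_000), 1))  # 4.5 per 1,000 person-years
```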
Dialysis patients may not have access to conventional renal replacement therapy (RRT) following disasters. We hypothesized that improvised renal replacement therapy (ImpRRT) would be comparable to continuous renal replacement therapy (CRRT) in a porcine acute kidney injury model.
Following bilateral nephrectomies and 2 hours of caudal aortic occlusion, 12 pigs were randomized to 4 hours of ImpRRT or CRRT. In the ImpRRT group, blood was circulated through a dialysis filter using a rapid infuser to collect the ultrafiltrate. Improvised replacement fluid, made with stock solutions, was infused pre-pump. In the CRRT group, commercial replacement fluid was used. During RRT, animals received isotonic crystalloids and norepinephrine.
There were no differences in serum creatinine, calcium, magnesium, or phosphorus concentrations. While there was a difference between groups in serum potassium concentration over time (P < 0.001), significance was lost in pairwise comparisons at specific time points. Neither replacement fluid nor ultrafiltrate flows differed between groups. There were no differences in lactate concentration, isotonic crystalloid requirement, or norepinephrine doses. No difference was found in electrolyte concentrations between the commercial and improvised replacement solutions.
The ImpRRT system achieved similar performance to CRRT and may represent a potential option for temporary RRT following disasters.
We systematically reviewed implementation research targeting depression interventions in low- and middle-income countries (LMICs) to assess gaps in methodological coverage.
PubMed, CINAHL, PsycINFO, and EMBASE were searched for evaluations of depression interventions in LMICs reporting at least one implementation outcome published through March 2019.
A total of 8714 studies were screened, 759 were assessed for eligibility, and 79 studies met inclusion criteria. Common implementation outcomes reported were acceptability (n = 50; 63.3%), feasibility (n = 28; 35.4%), and fidelity (n = 18; 22.8%). Only four studies (5.1%) reported adoption or penetration, and three (3.8%) reported sustainability. The Sub-Saharan Africa region (n = 29; 36.7%) had the most studies. The majority of studies (n = 59; 74.7%) reported outcomes for a depression intervention implemented in pilot researcher-controlled settings. Studies commonly focused on Hybrid Type-1 effectiveness-implementation designs (n = 53; 67.1%), followed by Hybrid Type-3 (n = 16; 20.3%). Only 21 studies (26.6%) tested an implementation strategy, with the most common being revising professional roles (n = 10; 47.6%). The most common intervention modality was individual psychotherapy (n = 30; 38.0%). Common study designs were mixed methods (n = 27; 34.2%), quasi-experimental uncontrolled pre-post (n = 17; 21.5%), and individual randomized trials (n = 16; 20.3%).
Existing research has focused on early-stage implementation outcomes. Most studies have utilized Hybrid Type-1 designs, with the primary aim to test intervention effectiveness delivered in researcher-controlled settings. Future research should focus on testing and optimizing implementation strategies to promote scale-up of evidence-based depression interventions in routine care. These studies should use high-quality pragmatic designs and focus on later-stage implementation outcomes such as cost, penetration, and sustainability.