Background: Blood cultures are commonly ordered for patients with a low risk of bacteremia. Liberal blood-culture ordering increases the risk of false-positive results, which can lead to increased length of stay, excess antibiotics, and unnecessary diagnostic procedures. We implemented a blood-culture indication algorithm with data feedback and assessed the impact on ordering volume and percent positivity. Methods: We performed a prospective cohort study from February 2022 to November 2022 using historical controls from February 2020 to January 2022. We introduced the blood-culture algorithm (Fig. 1) in 2 adult surgical intensive care units (ICUs). Clinicians reviewed the charts of eligible patients with blood cultures weekly to determine whether the blood-culture algorithm was followed and provided weekly feedback to the unit medical directors. We defined a blood-culture event as ≥1 blood culture within 24 hours. We excluded patients aged <18 years, patients with an absolute neutrophil count <500, and heart and lung transplant recipients at the time of blood-culture review. Results: In total, 7,315 blood-culture events in the preintervention group and 2,506 blood-culture events in the postintervention group met eligibility criteria. After the algorithm was implemented, the average monthly blood-culture rate decreased from 190 to 142 blood cultures per 1,000 patient days (P < .01) (Fig. 2). The average monthly blood-culture positivity increased from 11.7% to 14.2% (P = .13). Average monthly days of antibiotic therapy (DOT) were lower in the postintervention period than in the preintervention period (preintervention 2,200 vs postintervention 1,940; P < .01) (Fig. 3). The ICU length of stay did not change from before to after the intervention: 10 days (IQR, 5–18) versus 10 days (IQR, 5–17; P = .63). The in-hospital mortality rate was lower during the postintervention period, but the difference was not statistically significant: 9.24% versus 8.34% (P = .17).
The all-cause 30-day mortality was significantly lower during the intervention period: 11.9% versus 9.7% (P < .01). The unplanned 30-day readmission percentage was also significantly lower during the intervention period (10.6% vs 7.6%; P < .01). Over the 9-month intervention, we reviewed 916 blood-culture events in 452 unique patients. Overall, 74.6% of blood cultures followed the algorithm. The most common reasons for ordering blood cultures were severe sepsis or septic shock (37%), isolated fever and/or leukocytosis (19%), and documenting clearance of bacteremia (15%) (Table 1). The most common indication for inappropriate blood cultures was isolated fever and/or leukocytosis (53%). Conclusions: We introduced a blood-culture algorithm with data feedback in 2 surgical ICUs and observed a decrease in blood-culture volume without a negative impact on ICU length of stay or mortality.
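As a quick check on the rate change reported in this abstract, the relative reduction in the monthly blood-culture rate follows directly from the two per-1,000-patient-day rates. This is a minimal sketch; the underlying patient-day denominators are not given in the abstract, so only the relative change is reproduced:

```python
def rate_per_1000(events: int, patient_days: int) -> float:
    """Blood-culture events per 1,000 patient days."""
    return events / patient_days * 1000

def relative_reduction(pre_rate: float, post_rate: float) -> float:
    """Percent reduction relative to the preintervention rate."""
    return (pre_rate - post_rate) / pre_rate * 100

# Rates reported in the abstract: 190 -> 142 per 1,000 patient days.
print(round(relative_reduction(190, 142), 1))  # 25.3 (% relative reduction)
```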
Neurodevelopmental challenges are the most prevalent comorbidity associated with a diagnosis of critical CHD, and there is a high incidence of gross and fine motor delays noted in early infancy. The frequency of motor delays in hospitalised infants with critical CHD requires close monitoring from developmental therapies (physical therapists, occupational therapists, and speech-language pathologists) to optimise motor development. Currently, minimal literature defines developmental therapists’ role in caring for infants with critical CHD in intensive or acute care hospital units.
This article describes typical infant motor skill development, how the hospital environment and events surrounding early cardiac surgical interventions impact those skills, and how developmental therapists support motor skill acquisition in infants with critical CHD. Recommendations for healthcare professionals and those who provide medical or developmental support in promotion of optimal motor skill development in hospitalised infants with critical CHD are discussed.
Infants with critical CHD requiring neonatal surgical intervention experience interrupted motor skill interactions and developmental trajectories. As part of the interdisciplinary team working in intensive and acute care settings, developmental therapists assess motor skills, guide motor intervention, promote optimal motor skill acquisition, and support the infant’s overall development.
We assessed Oxivir Tb wipe disinfectant residue in a controlled laboratory setting to evaluate whether it explained the low environmental contamination of SARS-CoV-2. The frequency of viral RNA detection was not statistically different between the intervention and control arms on day 3 (P = .14). Viable environmental contamination is low, and residual disinfectant did not significantly contribute to this low contamination.
Crinoids were major constituents of late Carboniferous (Pennsylvanian) marine ecosystems, but their rapid disarticulation rates after death result in few well-preserved specimens, limiting the study of their growth. This is amplified for cladids, which had among the highest disarticulation rates of all Paleozoic crinoids due to the relatively loose suturing of the calyx plates. However, Erisocrinus typus Meek and Worthen, 1865 has been found in unusually large numbers, most preserved as cups but some as nearly complete crowns, in the Barnsdall Formation of Oklahoma. The Barnsdall Formation, a Konzentrat-Lagerstätte, is composed predominantly of fine- to medium-grained sandstone overlain by mudstone and shale; severe compaction of the fossils in the mudstone and shale layer of this formation allowed for exceptional preservation of the plates. Herein, we summarize a growth study based on 10 crowns of E. typus, showcasing a well-defined growth series of this species from the Barnsdall Formation, including fossils from juvenile stages of development, which are rarely preserved. We imported high-resolution photographs into ImageJ and recorded measurements of the cup and arms for all plates that were neither distorted nor disarticulated. Results show that the plates of the cup grew anisometrically, with both positive and negative allometry. The primibrachial plates of E. typus grew with positive allometry. The brachial plates started as uniserial (i.e., cuneiform) in juveniles but shifted to biserial. Erisocrinus typus broadly shares similar growth trajectories with other cladids. These growth patterns provide insight into feeding strategies and can aid in understanding crinoid evolutionary and paleoecological trends.
In sub-Saharan Africa, there are no validated screening tools for delirium in older adults, despite the known vulnerability of older people to delirium and the associated adverse outcomes. This study aimed to assess the effectiveness of a brief smartphone-based assessment of arousal and attention (DelApp) in the identification of delirium amongst older adults admitted to the medical department of a tertiary referral hospital in Northern Tanzania.
Consecutive admissions were screened using the DelApp during a larger study of delirium prevalence and risk factors. All participants subsequently underwent detailed clinical assessment for delirium by a research doctor. Delirium and dementia were identified against DSM-5 criteria by consensus.
Complete data were collected for 66 individuals, of whom 15 (22.7%) had delirium, 24.5% had dementia without delirium, and 10.6% had delirium superimposed on dementia. Sensitivity and specificity of the DelApp for delirium were 0.87 and 0.62, respectively (AUROC 0.77), and 0.88 and 0.73 (AUROC 0.85) for major cognitive impairment (dementia and delirium combined). Lower DelApp scores were associated with age, significant visual impairment (<6/60 acuity), illness severity, reduced arousal, and DSM-5 delirium on univariable analysis, but on multivariable logistic regression only arousal remained significant.
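For readers unfamiliar with these operating characteristics, sensitivity and specificity come straight from the 2×2 table of screener result against reference diagnosis. The counts below are hypothetical, chosen only to be roughly consistent with the reported 0.87/0.62 figures; they are not taken from the study data:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 13 of 15 delirium cases screened positive;
# 32 of 51 non-delirium cases screened negative.
sens, spec = sensitivity_specificity(tp=13, fn=2, tn=32, fp=19)
print(round(sens, 2), round(spec, 2))  # 0.87 0.63
```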
In this setting, the DelApp performed well in identifying delirium and major cognitive impairment but did not differentiate delirium and dementia. Performance is likely to have been affected by confounders including uncorrected visual impairment and reduced level of arousal without delirium. Negative predictive value was nevertheless high, indicating excellent ‘rule out’ value in this setting.
In 2017, the Michigan Institute for Clinical and Health Research (MICHR) and community partners in Flint, Michigan collaborated to launch a research funding program and evaluate the dynamics of those research partnerships receiving funding. While validated assessments for community-engaged research (CEnR) partnerships were available, the study team found none sufficiently relevant to conducting CEnR in the context of the work. MICHR faculty and staff along with community partners living and working in Flint used a community-based participatory research (CBPR) approach to develop and administer a locally relevant assessment of CEnR partnerships that were active in Flint in 2019 and 2021.
Surveys were administered each year to over a dozen partnerships funded by MICHR to evaluate how community and academic partners assessed the dynamics and impact of their study teams over time.
The results suggest that partners believed that their partnerships were engaging and highly impactful. Although many substantive differences between community and academic partners’ perceptions over time were identified, the most notable regarded the financial management of the partnerships.
This work contributes to the field of translational science by evaluating how the financial management of community-engaged health research partnerships, in the locally relevant context of Flint, can be associated with these teams’ scientific productivity and impact, with national implications for CEnR. It also presents evaluation methods that can be used by clinical and translational research centers striving to implement and measure their use of CBPR approaches.
People with neuropsychiatric symptoms often experience delay in accurate diagnosis. Although cerebrospinal fluid neurofilament light (CSF NfL) shows promise in distinguishing neurodegenerative disorders (ND) from psychiatric disorders (PSY), its accuracy in a diagnostically challenging cohort longitudinally is unknown.
We collected longitudinal diagnostic information (mean = 36 months) from patients assessed at a neuropsychiatry service, categorising diagnoses as ND/mild cognitive impairment/other neurological disorders (ND/MCI/other) and PSY. We pre-specified NfL > 582 pg/mL as indicative of ND/MCI/other.
Diagnostic category changed from initial to final diagnosis for 23% (49/212) of patients. NfL predicted the final diagnostic category for 92% (22/24) of these and predicted final diagnostic category overall (ND/MCI/other vs. PSY) in 88% (187/212), compared to 77% (163/212) with clinical assessment alone.
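The headline percentages in this abstract are simple classification accuracies, i.e., the fraction of patients whose final diagnostic category (ND/MCI/other vs. PSY) matched the prediction. A minimal sketch of that arithmetic, using the counts reported above:

```python
def accuracy_pct(correct: int, total: int) -> int:
    """Percent of cases classified correctly, rounded to the nearest whole percent."""
    return round(correct / total * 100)

print(accuracy_pct(22, 24))    # NfL among patients whose diagnosis changed: 92
print(accuracy_pct(187, 212))  # NfL overall: 88
print(accuracy_pct(163, 212))  # clinical assessment alone: 77
```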
CSF NfL improved diagnostic accuracy, with potential to have led to earlier, accurate diagnosis in a real-world setting using a pre-specified cut-off, adding weight to translation of NfL into clinical practice.
This retrospective review of 4-year surveillance data revealed a higher central line-associated bloodstream infection (CLABSI) rate in non-Hispanic Black patients and higher catheter-associated urinary tract infection (CAUTI) rates in Asian and non-Hispanic Black patients compared with White patients despite similar catheter utilization between the groups.
Urine cultures collected from catheterized patients have a high likelihood of false-positive results due to colonization. We examined the impact of a clinical decision support (CDS) tool that includes catheter information on test utilization and patient-level outcomes.
This before-and-after intervention study was conducted at 3 hospitals in North Carolina. In March 2021, a CDS tool was incorporated into urine-culture order entry in the electronic health record, providing education about indications for culture and suggesting catheter removal or exchange prior to specimen collection for catheters present >7 days. We used an interrupted time-series analysis with Poisson regression to evaluate the impact of CDS implementation on utilization of urinalyses and urine cultures, antibiotic use, and other outcomes during the pre- and postintervention periods.
The CDS tool was triggered for 38,361 urine-culture orders across all patients, including 2,133 catheterized patients, during the postintervention study period. There was a significant decrease in urine-culture orders (1.4% decrease per month; P < .001) and in antibiotic use for UTI indications (2.3% decrease per month; P = .006), but there was no significant decline in CAUTI rates in the postintervention period. Clinicians opted for urinary catheter removal in 183 (8.5%) instances. Evaluation of the safety reporting system revealed no apparent increase in safety events related to catheter removal or reinsertion.
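A "percent decrease per month" from a Poisson interrupted time-series model compounds multiplicatively, so the implied cumulative change over a period is obtained by exponentiation rather than by summing the monthly percentages. A quick sketch of that arithmetic for the 1.4%-per-month decline in urine-culture orders reported above:

```python
def cumulative_change_pct(monthly_decline_pct: float, months: int) -> float:
    """Cumulative percent reduction implied by a constant monthly multiplicative decline."""
    remaining = (1 - monthly_decline_pct / 100) ** months
    return (1 - remaining) * 100

# A 1.4% decline per month sustained over 12 months compounds to
# less than the naive 12 * 1.4 = 16.8%.
print(round(cumulative_change_pct(1.4, 12), 1))  # 15.6
```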
CDS tools can aid in optimizing urine culture collection practices and can serve as a reminder for removal or exchange of long-term indwelling urinary catheters at the time of urine-culture collection.
Clozapine dose assessment in treatment-refractory schizophrenia is complicated. There is a narrow margin between an effective and a potentially toxic dose, and wide inter-individual variation in clozapine metabolic capacity. Moreover, factors such as changes in smoking habit, infection/inflammation, co-prescription of certain drugs (notably fluvoxamine), and age alter the dose requirement within individuals. Therapeutic drug monitoring (TDM) of plasma clozapine and N-desmethylclozapine (norclozapine) can help assess adherence, guide dosage, and guard against toxicity. This article gives an overview of clozapine pharmacokinetics and the factors affecting clozapine dose requirements. It then outlines the procedures and processes of clozapine TDM, from taking the blood sample for laboratory assay or point-of-care (finger-prick) testing (POCT) to interpreting and acting on the results.
To describe the epidemiology of complex surgical-site infections (SSIs) following colon surgical procedures (COLO), stratified by whether the SSI was present at the time of surgery (PATOS) or not (non-PATOS), and the impact of PATOS SSIs on the epidemiology of colon-surgery SSIs.
Retrospective cohort study.
SSI data were prospectively collected from patients undergoing colon surgical procedures (COLOs) as defined by the National Healthcare Safety Network (NHSN) at 34 community hospitals in the southeastern United States from January 2015 to June 2019. Logistic regression models identified specific characteristics of complex COLO SSIs, complex non-PATOS COLO SSIs, and complex PATOS COLO SSIs.
Over the 4.5-year study period, we identified 720 complex COLO SSIs following 28,188 COLO surgeries (prevalence rate, 2.55 per 100 procedures). Overall, 544 complex COLO SSIs (76%) were complex non-PATOS COLO SSIs (prevalence rate [PR], 1.93 per 100 procedures) and 176 (24%) were complex PATOS COLO SSIs (PR, 0.62 per 100 procedures). Age >75 years and operative duration above the 75th percentile were independently associated with non-PATOS SSIs but not PATOS SSIs. Conversely, emergency surgery and hospital volume of COLO procedures were independently associated with PATOS SSIs but not non-PATOS SSIs. The proportion of polymicrobial SSIs was significantly higher for non-PATOS SSIs than for PATOS SSIs.
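The prevalence rates quoted above are straightforward per-100-procedure proportions; a minimal sketch reproducing them from the reported counts:

```python
def prevalence_per_100(infections: int, procedures: int) -> float:
    """SSI prevalence rate per 100 procedures, rounded to 2 decimals."""
    return round(infections / procedures * 100, 2)

print(prevalence_per_100(720, 28188))  # all complex COLO SSIs: 2.55
print(prevalence_per_100(544, 28188))  # non-PATOS: 1.93
print(prevalence_per_100(176, 28188))  # PATOS: 0.62
```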
Complex PATOS COLO SSIs have distinct features from complex non-PATOS COLO SSIs. Removal of PATOS COLO SSIs from public reporting allows more accurate comparisons among hospitals that perform different case mixes of colon surgeries.
Sparse recent data are available on the epidemiology of surgical site infections (SSIs) in community hospitals. Our objective was to provide updated epidemiology data on complex SSIs in community hospitals and to characterize trends of SSI prevalence rates over time.
Retrospective cohort study.
SSI data were collected from patients undergoing 26 commonly performed surgical procedures at 32 community hospitals in the southeastern United States from 2013 to 2018. SSI prevalence rates were calculated for each year and were stratified by procedure and causative pathogen.
Over the 6-year study period, 3,561 complex (deep incisional or organ-space) SSIs occurred following 669,467 total surgeries (prevalence rate, 0.53 infections per 100 procedures). The overall complex SSI prevalence rate did not change significantly during the study period: 0.58 of 100 procedures in 2013 versus 0.53 of 100 procedures in 2018 (prevalence rate ratio [PRR], 0.84; 95% CI, 0.66–1.08; P = .16). Methicillin-sensitive Staphylococcus aureus (MSSA) complex SSIs (n = 480, 13.5%) were more common than complex SSIs caused by methicillin-resistant S. aureus (MRSA; n = 363, 10.2%).
The complex SSI rate did not decrease in our cohort of community hospitals from 2013 to 2018, which is a change from prior comparisons. The reason for this stagnation is unclear. Additional research is needed to determine the proportion of remaining SSIs that are preventable and which measures would be effective in further reducing SSI rates.
After implementing a coronavirus disease 2019 (COVID-19) infection prevention bundle, the incidence rate ratio (IRR) of non–severe acute respiratory syndrome coronavirus 2 (non–SARS-CoV-2) hospital-acquired respiratory viral infection (HA-RVI) was significantly lower than the IRR from the pre–COVID-19 period (IRR, 0.322; 95% CI, 0.266–0.393; P < .01). However, HA-RVI incidence rates mirrored community RVI trends, suggesting that hospital interventions alone did not significantly affect HA-RVI incidence.
Background: Central-line–associated bloodstream infections (CLABSIs) arise from bacteria migrating from the skin along the catheter, from direct inoculation, or from pathogens that form biofilms on the interior surface of the catheter. However, given the oxygen-poor environments that obligate anaerobes require, these organisms are unlikely to survive long enough on the skin or on the catheter after direct inoculation to be the true cause of a CLABSI. Although some anaerobic CLABSIs may meet the definition for a mucosal-barrier-injury, laboratory-confirmed bloodstream infection (MBI-LCBI), some may not. We sought to determine the proportion of CLABSIs attributed to obligate anaerobic bacteria and the pathophysiologic source of these infections. Methods: We performed a retrospective analysis of prospectively collected CLABSI data at 54 hospitals (academic and community) in the southeastern United States from January 2015 to December 2020. We performed chart reviews on a convenience sample for which medical records were available. We calculated the proportion of CLABSIs due to obligate anaerobes, and we describe a subset of anaerobic CLABSI cases. Results: We identified 60 anaerobic CLABSIs among 2,430 CLABSIs (2.5%). Of the 60 anaerobic CLABSIs, 7 were polymicrobial with nonanaerobic bacteria. The most common organisms identified were Bacteroides, Clostridium, and Lactobacillus (Table 1). The proportion of anaerobic CLABSIs per year varied from 1.2% to 3.7% (Fig. 1). Of the 60 anaerobic CLABSIs, 29 (48%) occurred in the only quaternary-care academic medical center in the database. In contrast, an average of 0.6 (SD, 0.6) anaerobic CLABSIs per hospital occurred across the 53 community hospitals over the 6-year study period.
Of these 29 anaerobic CLABSIs, 23 (79%) were clinically consistent with secondary bloodstream infections (BSIs) from a gastrointestinal or genitourinary source but lacked the documentation required to meet NHSN criteria for secondary BSI or MBI-LCBI based on case reviews by infection prevention physicians. The other 6 anaerobic CLABSIs had no clear clinical etiology and did not meet MBI-LCBI criteria. In addition, 27 (93%) of the 29 anaerobic CLABSIs occurred in patients who were solid-organ transplant recipients, stem-cell transplant recipients, or receiving chemotherapy. Lastly, 27 (93%) of the 29 anaerobic CLABSIs were treated with antibiotics. Conclusions: Anaerobic CLABSIs are uncommon events but may disproportionately affect large academic hospitals caring for a high proportion of medically complex patients. Additional criteria could be added to the MBI-LCBI definition to better classify anaerobic BSIs.
Background: Racial and ethnic disparities in healthcare access, medical treatment, and outcomes have been extensively reported. However, the impact of racial and ethnic differences on patient safety, including healthcare-associated infections, has not been well described. Methods: We performed a retrospective review analyzing prospectively collected data on central-line–associated bloodstream infection (CLABSI) and catheter-associated urinary tract infection (CAUTI) rates per 1,000 device days. Data for adult patients admitted to an academic medical center between 2018 and 2021 were stratified by 7 racial and ethnic groups: non-Hispanic White, non-Hispanic Black, Hispanic/Latino, Asian, American Indian/Alaska Native, Native Hawaiian/Pacific Islander, and other. The “other” group comprised bi- or multiracial patients and those for whom no data were reported. We compared CLABSI and CAUTI rates between the racial and ethnic groups using Poisson regression. Results: Compared to non-Hispanic White patients, the CLABSI rate was significantly higher in non-Hispanic Black patients (1.27; 95% CI, 1.02–1.58; P < .03) and in those in the “other” race category (1.79; 95% CI, 1.39–2.30; P < .001), with a similar trend in Hispanic/Latino patients (Table 1). Similarly, non-Hispanic Black patients had higher rates of CAUTI (1.42; 95% CI, 1.05–1.92; P < .02), as did Asian patients (2.49; 95% CI, 1.16–5.36; P < .02) and patients in the “other” category (1.52; 95% CI, 1.06–2.18; P < .02) (Table 2). Conclusions: Racial and ethnic minorities may be vulnerable to higher rates of patient safety events, including CLABSIs and CAUTIs. Additional analyses controlling for potential confounding factors are needed to better understand the relationship between race or ethnicity, clinical management, and healthcare-associated infections. This evaluation is essential to inform mitigation strategies and to provide optimal, equitable care for all.
Background: SARS-CoV-2 contamination of N95 masks in healthcare providers (HCPs) treating patients with COVID-19 is poorly understood. Methods: We performed a prospective observational study of HCP N95 respirator SARS-CoV-2 contamination during aerosol-generating procedures (AGPs) on SARS-CoV-2–positive patients housed in a COVID-19–specific unit at an academic medical center. Medical masks were used as surrogates for N95 respirators to avoid waste and were worn on top of HCP N95 respirators during study AGPs. Study masks were provided to HCPs while donning PPE and were retrieved during doffing. Additionally, during doffing, face shields were swabbed with Floq swabs premoistened with viral transport media (VTM) prior to disinfection. Medical masks were cut into 9 position-based pieces, placed in VTM, vortexed, and centrifuged (Fig. 1). RNA extraction and RT-PCR were completed on all samples. RT-PCR–positive samples underwent cell culture infection to detect cytopathic effects (CPE). Contamination was characterized by mask location and by the front and back of the face shields. Patient COVID-19 symptoms were collected from routine clinical documentation. Study HCPs completed role-specific routine care (eg, assessing, administering medications, and maintaining oxygen supplementation) while in patient rooms and were observed by study team members. Results: We enrolled 31 HCPs between September and December 2021. HCP and patient characteristics are presented in Table 1. In total, 330 individual samples were obtained from 31 masks and 26 face shields among 12 patient rooms. Of the 330 samples, none were positive for SARS-CoV-2 via RT-PCR. Positive controls were successfully performed in the laboratory setting to confirm that the virus was recoverable using these methods. Notably, all samples were collected from HCPs caring for COVID-19 patients on high-flow, high-humidity Optiflow (an AGP), with a median of 960 seconds (IQR, 525–1,680) spent in each room.
In addition to Optiflow and routine care, study speech pathologists completed an additional AGP, fiberoptic endoscopic evaluation of swallowing. Notably, 29 (94%) of 31 study HCPs had physical contact with their patient. Conclusions: Overall, mask contamination was not detectable in HCPs who wore face shields while treating patients with COVID-19 undergoing AGPs, despite patient contact and the performance of AGPs.
Relapse and recurrence of depression are common, contributing to the overall burden of depression globally. Accurate prediction of relapse or recurrence while patients are well would allow the identification of high-risk individuals and may effectively guide the allocation of interventions to prevent relapse and recurrence.
To review prognostic models developed to predict the risk of relapse, recurrence, sustained remission, or recovery in adults with remitted major depressive disorder.
We searched the Cochrane Library (current issue); Ovid MEDLINE (1946 onwards); Ovid Embase (1980 onwards); Ovid PsycINFO (1806 onwards); and Web of Science (1900 onwards) up to May 2021. We included development and external validation studies of multivariable prognostic models. We assessed risk of bias of included studies using the Prediction model risk of bias assessment tool (PROBAST).
We identified 12 eligible prognostic model studies (11 unique prognostic models): 8 model development-only studies, 3 model development and external validation studies, and 1 external validation-only study. Multiple estimates of performance measures were not available, and meta-analysis was therefore not possible. Eleven of the 12 included studies were assessed as being at high overall risk of bias, and none examined clinical utility.
Due to high risk of bias of the included studies, poor predictive performance and limited external validation of the models identified, presently available clinical prediction models for relapse and recurrence of depression are not yet sufficiently developed for deploying in clinical settings. There is a need for improved prognosis research in this clinical area and future studies should conform to best practice methodological and reporting guidelines.
Psychotic experiences are reported by 5–10% of young people, although only a minority persist and develop into psychotic disorders. It is unclear what characteristics differentiate those with transient psychotic experiences from those with persistent psychotic experiences that are more likely to be of clinical relevance.
To investigate how longitudinal profiles of psychotic experiences, created from assessments at three different time points, are influenced by early life and co-occurring factors.
Using data from 8045 individuals from a birth cohort study, longitudinal profiles of psychotic experiences based on semi-structured interviews conducted at 12, 18 and 24 years were defined. Environmental, cognitive, psychopathological and genetic determinants of these profiles were investigated, along with concurrent changes in psychopathology and cognition.
Following multiple imputation, the distribution of longitudinal profiles of psychotic experiences was: none (65.7%), transient (24.1%), low-frequency persistent (8.4%), and high-frequency persistent (1.7%). Individuals with high-frequency persistent psychotic experiences were more likely to report traumatic experiences, other psychopathology, a more externalised locus of control, and reduced emotional stability and conscientiousness personality traits in childhood, compared with those with transient psychotic experiences. These characteristics also differed between those who had any psychotic experiences and those who did not.
These findings indicate that the same risk factors are associated with incidence as with persistence of psychotic experiences. Thus, it might be that the severity of exposure, rather than the presence of specific disease-modifying factors, is most likely to determine whether psychotic experiences are transient or persist, and potentially develop into a clinical disorder over time.
We performed surveillance for hospital-acquired COVID-19 (HA-COVID-19) and compared time-based, electronic definitions to real-time adjudication of the most likely source of acquisition. Without real-time adjudication, nearly 50% of HA-COVID-19 cases identified using electronic definitions were misclassified. Both electronic and traditional contact tracing methods likely underestimated the incidence of HA-COVID-19.