Background: Clostridioides difficile infection (CDI) is a serious healthcare-associated infection responsible for >12,000 US deaths annually. Overtesting can lead to antibiotic overuse and potential harm when patients who are colonized with C. difficile but not infected are nonetheless treated. National guidelines specify when testing is appropriate; occasionally, guideline-noncompliant testing (GNCT) may be warranted. A multidisciplinary group at UNC Medical Center (UNCMC) including the antimicrobial stewardship program (ASP) used a best-practice alert in 2020 to improve diagnostic stewardship, to no effect. Evidence supports the use of hard stops for this purpose, though less is known about provider acceptance. Methods: Beginning in May 2022, UNCMC implemented a hard stop in its electronic medical record system (EMR) for C. difficile GNCT orders, with exceptions to be approved by an ASP attending physician. Requests were retrospectively reviewed May–November 2022 to monitor for adverse patient outcomes and provider hard-stop compliance. The team exported data from the EMR (Epic Systems) and generated descriptive statistics in Microsoft Excel. Results: There were 85 GNCT orders during the study period. Most tests (62%) were reviewed by the ASP; 38% sought non-ASP or no approval. Of the tests reviewed by the ASP, 33 (62%) were approved and 20 (38%) were not. Among tests not approved by the ASP, no patients subsequently received CDI-directed antibiotics, and 1 patient (5%) warranted same-admission CDI testing (negative). For 18 (56%) of the tests that circumvented ASP review, the ordering provider received a follow-up email from an associate chief medical officer to determine the rationale. No single response type dominated: 3 (17%) were unaware of the ASP review requirement, 2 (11%) indicated their patient’s uncharted refusal of laxatives, and 2 (11%) indicated another patient-specific reason. Provider avoidance of the ASP approval mechanism decreased 38%, from 53% of noncompliant tests in month 1 to 33% of tests in month 6. Total test orders dropped 15.5%, from 1,129 during the same period in 2021 to 954 during the study period (95% CI, 13.4%–17.7%). Compliance with the guideline component requiring at least a 48-hour laxative-free interval prior to CDI testing increased from 85% (95% CI, 83%–87%) to 95% (95% CI, 93%–96%). CDI incidence rates decreased from 0.52 per 1,000 patient days (95% CI, 0.41–0.65) to 0.41 (95% CI, 0.32–0.53), though the change was neither significant at P = .05 nor attributable to any 1 intervention. Conclusions: Over time and with feedback to providers circumventing the exception process, providers accepted and used the hard stop, improving diagnostic stewardship and avoiding unneeded treatment.
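As an illustration of the arithmetic behind the reported drop in test orders, the following minimal Python sketch recomputes the 15.5% reduction and shows one common way to put a 95% CI on a ratio of counts. The abstract does not state its CI method, so the Poisson-ratio approach here is an assumption and need not reproduce the reported interval.

```python
import math

# Counts from the abstract: C. difficile test orders in the 2021
# comparison period vs. the 2022 study period.
pre, post = 1129, 954

reduction = (pre - post) / pre
print(f"relative reduction = {reduction:.1%}")  # -> 15.5%

# Assumed approach: treat the counts as Poisson and put a
# normal-approximation CI on the log rate ratio. The abstract's CI
# method is not stated, so this need not match the reported interval.
log_rr = math.log(post / pre)
se = math.sqrt(1 / post + 1 / pre)
lo, hi = math.exp(log_rr - 1.96 * se), math.exp(log_rr + 1.96 * se)
print(f"rate ratio = {post / pre:.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```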
Background: Central-line–associated bloodstream infections (CLABSIs) are linked to increased morbidity and mortality, longer hospital stays, and significantly higher healthcare costs. Infection prevention guidelines recommend line placement in specific insertion locations over others because of the relative risk of infection. The purpose of this study was to assess CLABSI rates by line type to determine whether some central lines had a lower risk of infection and should be recommended over others given similar clinical indications. Methods: At UNC Hospitals, data were obtained on central lines across a 3-year period (FY20–FY22) from the EMR (Epic Systems). Central lines were categorized as apheresis catheters, CVC lines (single, double, or triple lumen), hemodialysis catheters, introducer lines, pulmonary artery (PA) catheters, PICC lines (single, double, or triple lumen), port-a-catheters, trialysis catheters, or umbilical lines. The line type(s) associated with each CLABSI during the same period were recorded, and CLABSI rates by line type per 1,000 central-line days were calculated using SAS software. If an infection was associated with >1 central-line device type, it was counted once for each associated line type when calculating the CLABSI rate by line type. We calculated 95% CIs for each point estimate to assess for statistically significant differences in rates by line type. Results: During FY20–FY22, there were 264,425 central-line days and 458 CLABSIs, for an overall CLABSI rate of 1.73 CLABSIs per 1,000 central-line days. Also, 16% of patients with a CLABSI had >1 type of central line in place. Stratified data on CLABSI rates by central-line type are presented in the Figure. CLABSI rates were highest in patients with apheresis lines (6.22; 95% CI, 3.96–9.35) and PA catheters (6.22; 95% CI, 3.54–10.20), and the lowest CLABSI rates occurred in patients with PICC lines (1.44; 95% CI, 1.19–1.73) and port-a-catheters (1.14; 95% CI, 0.89–1.45). For both CVC and PICC lines, CLABSI rates increased as the number of lumens increased from single to triple, from 0.91 to 2.63 and from 0.57 to 1.20, respectively. Conclusions: At our hospital, different types of central lines were associated with statistically significantly higher CLABSI rates. Additionally, a higher number of lumens (triple vs single) in CVC and PICC lines was also associated with statistically significantly higher CLABSI rates. These findings reinforce the importance of considering central-line type and number of lumens to minimize the risk of CLABSI while ensuring that patients have the best line type based on their clinical needs.
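The overall rate calculation is straightforward to reproduce. The sketch below (in Python rather than the SAS used by the authors) recomputes the 1.73 per 1,000 central-line-day figure and a 95% CI under an assumed normal approximation to the Poisson count; the authors' exact CI method is not stated.

```python
import math

# Overall figures from the abstract.
clabsis, line_days = 458, 264_425

rate = clabsis / line_days * 1_000              # CLABSIs per 1,000 line days
se = math.sqrt(clabsis) / line_days * 1_000     # assumed Poisson approximation

print(f"CLABSI rate = {rate:.2f} per 1,000 central-line days "
      f"(95% CI ~ {rate - 1.96 * se:.2f}-{rate + 1.96 * se:.2f})")
```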
The Virtual Interprofessional Education (VIPE) program is a multi-institutional collaborative originally formed by five universities across the United States; as of January 2022, it includes over 60 universities in 30 countries. The consortium brings healthcare students together for a short-term immersive team experience that mimics the healthcare setting and has hosted over 5,000 students in healthcare training programs. The program expanded to a VIPE Security model that hosts students across multiple disciplines outside the field of healthcare, creating a transdisciplinary approach to managing complex wicked problems.
Students receive asynchronous materials ahead of a synchronous virtual experience. VIPE uses the Interprofessional Education Collaborative (IPEC) competencies (IPEC, 2016) and aligns with the Health Professions Accreditors Collaborative (HPAC) 2019 guidelines. VIPE uses an active teaching strategy, problem- or case-based learning (PBL/CBL), which emphasizes creating an environment of psychological safety and its antecedents (Frazier et al., 2017; Salas, 2019; Wiss, 2020). Following this model, VIPE Security explores whether the VIPE model can be tailored to work across multiple sectors to discuss the management of complex wicked problems, including climate change, disasters, cyberattacks, terrorism, pandemics, conflict, forced migration, food/water insecurity, and human/narco trafficking. VIPE Security has hosted two events bringing together professionals in the health and security sectors to work through complex wicked problems and to further understand their roles, ethical and responsible information sharing, and policy implications.
VIPE demonstrates statistically significant gains in participants’ knowledge of interprofessional collaborative practice as a result of participation. VIPE Security results are currently being analyzed.
This transdisciplinary approach to IPE allows for an all-hands-on-deck approach to security, fostering early education and communication among students across multiple sectors. The VIPE Security model could in the future be used by practitioners within multidisciplinary organizations, governmental agencies, and the military.
Digital Livestock Technologies (DLTs) can assist farmer decision-making and promise benefits to animal health and welfare. However, the extent to which they can help improve animal welfare is unclear. This study explores how DLTs may impact farm management and animal welfare by promoting learning, using the concept of boundary objects. Boundary objects may be interpreted differently by different social worlds but are robust enough to share a common identity across them. They facilitate communication around a common issue, allowing stakeholders to collaborate and co-learn. The type of learning generated may impact management and welfare differently. For example, it may help improve existing strategies (single-loop learning), or initiate reflection on how these strategies were framed initially (double-loop learning). This study focuses on two case studies, during which two DLTs were developed and tested on farms. In-depth, semi-structured interviews were conducted with stakeholders involved in the case studies (n = 31), and the results of a separate survey were used to complement our findings. Findings support the important potential of DLTs to help enhance animal welfare, although the impacts vary between technologies. In both case studies, DLTs facilitated discussions between stakeholders, and whilst both promoted improved management strategies, one also promoted deeper reflection on the importance of animal emotional well-being and on providing opportunities for positive animal welfare. If DLTs are to make significant improvements to animal welfare, greater priority should be given to DLTs that promote a greater understanding of the dimensions of animal welfare and a reframing of values and beliefs with respect to the importance of animals’ well-being.
The term “blue justice” was coined in 2018 during the 3rd World Small-Scale Fisheries Congress. Since then, academic engagement with the concept has grown rapidly. This article reviews 5 years of blue justice scholarship and synthesizes some of the key perspectives, developments, and gaps. We then connect this literature to wider relevant debates by reviewing two key areas of research – first on blue injustices and second on grassroots resistance to these injustices. Much of the early scholarship on blue justice focused on injustices experienced by small-scale fishers in the context of the blue economy. In contrast, more recent writing and the empirical cases reviewed here suggest that intersecting forms of oppression render certain coastal individuals and groups vulnerable to blue injustices. These developments signal an expansion of the blue justice literature to a broader set of affected groups and underlying causes of injustice. Our review also suggests that while grassroots resistance efforts led by coastal communities have successfully stopped unfair exposure to environmental harms, preserved their livelihoods and ways of life, defended their culture and customary rights, renegotiated power distributions, and proposed alternative futures, these efforts have been underemphasized in the blue justice scholarship and in the marine and coastal literature more broadly. We conclude with some suggestions for understanding and supporting blue justice now and into the future.
We compared the effectiveness of 4 sampling methods to recover Staphylococcus aureus, Klebsiella pneumoniae, and Clostridioides difficile from contaminated environmental surfaces: cotton swabs, RODAC culture plates, sponge sticks with manual agitation, and sponge sticks with a stomacher. Organism type was the most important factor in bacterial recovery.
Serial position scores on verbal memory tests are sensitive to early Alzheimer’s disease (AD)-related neuropathological changes that occur in the entorhinal cortex and hippocampus. The current study examines longitudinal change in serial position scores as markers of subtle cognitive decline in older adults who may be in preclinical or at-risk states for AD.
This study uses longitudinal data from the Religious Orders Study and the Rush Memory and Aging Project. Participants (n = 141) were included if they did not have dementia at enrollment, completed follow-up assessments, and died and were classified as Braak stage I or II. Memory tests were used to calculate serial position (primacy, recency), total recall, and episodic memory composite scores. A neuropathological evaluation quantified AD, vascular, and Lewy body pathologies. Mixed effects models were used to examine change in memory scores. Neuropathologies and covariates (age, sex, education, APOE e4) were examined as moderators.
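For readers unfamiliar with serial position scoring, a minimal Python sketch follows; it assumes primacy and recency are scored as the proportion of items recalled from fixed-width regions at the start and end of the study list. The region width (region=4) and word labels are illustrative; the exact scoring regions used in this study are not stated here.

```python
# A minimal sketch of serial position scoring, assuming primacy/recency
# are the proportion of items recalled from the first and last regions
# of the study list; the region width is illustrative.
def serial_position_scores(list_order, recalled, region=4):
    primacy_items = set(list_order[:region])   # first `region` list items
    recency_items = set(list_order[-region:])  # last `region` list items
    recalled_set = set(recalled)
    primacy = len(primacy_items & recalled_set) / region
    recency = len(recency_items & recalled_set) / region
    return primacy, recency

# Example: a 10-word list where the first two and last three words were recalled.
words = [f"w{i}" for i in range(10)]
print(serial_position_scores(words, ["w0", "w1", "w7", "w8", "w9"]))
# -> (0.5, 0.75)
```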
Primacy scores declined (β = −.032, p < .001), whereas recency scores increased (β = .021, p = .012). No change was observed in standard memory measures. Greater neurofibrillary tangle density and atherosclerosis explained 10.4% of the variance in primacy decline. Neuropathologies were not associated with recency change.
In older adults with hippocampal neuropathologies, primacy score decline may be a sensitive marker of early AD-related changes. Tangle density and atherosclerosis had additive effects on decline. Recency improvement may reflect a compensatory mechanism. Monitoring for changes in serial position scores may be a useful in vivo method of tracking incipient AD.
The coronavirus disease 2019 (COVID-19) pandemic has seen health systems adapt and change in response to local and international experiences. This study describes the experiences of and lessons learned by the Central Adelaide Local Health Network (CALHN) in managing a campaign-style, novel public health disaster response.
Disaster preparedness has focused on acute-impact, mass-casualty incidents. In early 2020, CALHN’s largest hospital, the Royal Adelaide Hospital (RAH), was appointed as the state’s primary COVID-19 adult receiving hospital. Between February 1, 2020, when the first COVID-19-positive patient was admitted, and December 31, 2020, the RAH admitted 146 inpatients with COVID-19; 118 were admitted to our hospital-in-the-home service, 18 were admitted to intensive care, and 4 died while inpatients. During this time, CALHN sustained an active (physical and virtual) Network Incident Command Centre (NICC) supported by a Network Incident Management Team (NIMT).
This study describes our key lessons learned in managing a campaign-style disaster response, including the importance of disaster preparedness, fatigue management, and communication. Also described are the challenges of operating in a command model, the role of exercising and education, and an overview of our operating rhythm, how we built capability, and lessons management.
Undertaking a longer-duration disaster response to the COVID-19 pandemic has shown that, although traditional disaster principles remain important, many nuances must be considered to retain a proportionate response. Our key lessons revolved around the key tenets of disaster management: communication, capability, and governance.
To review the experiences of healthcare professionals (HCPs) in providing, and of service users in receiving, home enteral nutrition (HEN) in primary care settings.
HEN supports the nutritional needs of service users in primary care settings who are unable to meet their nutritional requirements through oral intake alone. While HEN enables service users to remain in their own homes, the provision of HEN services can be variable. The prevalence of HEN is increasing as health systems shift the delivery of care from acute to primary care settings; therefore, the evolving needs of HCPs and service users in relation to HEN deserve exploration.
Quantitative and qualitative studies were included if they described (1) practices that support best outcomes in adults on HEN and residing in their own homes and/or (2) service user and HCP experiences of HEN. Studies on the economics of HEN were included. Databases searched included MEDLINE/PubMed, EMBASE, Web of Science, and CINAHL. Publications up to March 2021 were included. A descriptive analytical approach was used to summarise the findings.
Key themes included the importance of initial education to enable service users to adapt to HEN and the need for support from knowledgeable HCPs. Access to support from HCPs in primary care was limited, and some HCPs felt their knowledge of HEN was inadequate. Service users highlighted the significant impact of HEN on daily living and emphasised the need for support from a HEN team. HEN services were also associated with reduced hospital admissions, lengths of stay in hospital, and costs of hospitalisation.
A specialist HEN service can manage enteral nutrition-related complications, reduce unnecessary hospital admissions, and improve quality of care and patient satisfaction. Further education of HCPs is needed on the provision of HEN.
Child protection systems monitoring is key to ensuring children’s wellbeing. In England, monitoring is rooted in onsite inspection, culminating in judgements ranging from ‘outstanding’ to ‘inadequate’. But inspection may carry unintended consequences where child protection systems are weak. One potential consequence is increased child welfare intervention rates. In this longitudinal ecological study of local authorities in England, we used Poisson mixed-effects regression models to assess whether child welfare intervention rates are higher in an inspection year, whether this is driven by inspection judgement, and whether more deprived areas experience different rates for a given inspection judgement. We investigated the impact of inspection on care entry, Child Protection Plan-initiation, and child-in-need status. We found that inspection was associated with a rise in rates across the spectrum of interventions. Worse judgements yielded higher rates. Inspection may also exacerbate existing inequalities. Unlike less deprived areas, more deprived areas judged inadequate did not experience an increase in the less intrusive ‘child-in-need’ interventions. Our findings suggest that a narrow focus on social work practice is unlikely to address weaknesses in the child protection system. Child protection systems monitoring should be guided by a holistic model of systems improvement, encompassing the socioeconomic determinants of quality.
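As a rough sketch of the modeling approach described above, the following Python snippet fits a Poisson regression of intervention counts on an inspection-year indicator with a log-population offset, using simulated data. All variable names and numbers are illustrative, and the per-local-authority random effects of the study’s mixed-effects models are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated data; variable names (inspection_year, child_population,
# entries) are illustrative, not the study's.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "inspection_year": rng.integers(0, 2, n),            # 1 = inspected that year
    "child_population": rng.integers(5_000, 50_000, n),  # exposure denominator
})
base_rate = 0.004 * np.exp(0.2 * df["inspection_year"])  # higher rate in inspection years
df["entries"] = rng.poisson(base_rate * df["child_population"])

# Poisson regression with a log-exposure offset; the study's models also
# included random effects for each local authority, omitted here.
model = smf.glm(
    "entries ~ inspection_year",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["child_population"]),
).fit()

print(np.exp(model.params["inspection_year"]))  # rate ratio for inspection years
```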
The rapid spread of coronavirus disease 2019 (COVID-19) required swift preparation to protect healthcare personnel (HCP) and patients, especially considering shortages of personal protective equipment (PPE). Due to the lack of a pre-existing biocontainment unit, we needed to develop a novel approach to placing patients in isolation cohorts while working with the pre-existing physical space.
To prevent disease transmission to non–COVID-19 patients and HCP caring for COVID-19 patients, to optimize PPE usage, and to provide a comfortable and safe working environment.
An interdisciplinary workgroup developed a combination of approaches to convert existing spaces into COVID-19 containment units with high-risk zones (HRZs). We developed standard workflows and visual management in conjunction with updated staff training. The infection prevention team created PPE standard practices for ease of use, conservation, and staff safety.
The interventions resulted in 1 possible case of patient-to-HCP transmission and 0 cases of patient-to-patient transmission. PPE usage decreased with the HRZ model while maintaining a safe environment of care. Staff on the COVID-19 units were extremely satisfied with PPE availability (76.7%) and with efforts to protect them from COVID-19 (72.7%). Moreover, 54.8% of HCP working in the COVID-19 unit agreed that PPE monitors played an essential role in staff safety.
The HRZ containment-unit model is an effective method of preventing the spread of COVID-19, with several benefits. It is easily implemented and scaled to accommodate census changes. Our experience suggests that other institutions need not modify existing physical structures to create similarly protective spaces.
After implementing a coronavirus disease 2019 (COVID-19) infection prevention bundle, the incidence of non–severe acute respiratory syndrome coronavirus 2 (non–SARS-CoV-2) hospital-acquired respiratory viral infection (HA-RVI) was significantly lower than in the pre–COVID-19 period (incidence rate ratio [IRR], 0.322; 95% CI, 0.266–0.393; P < .01). However, HA-RVI incidence rates mirrored community RVI trends, suggesting that hospital interventions alone did not significantly affect HA-RVI incidence.
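For readers who want the underlying arithmetic, a minimal Python sketch of an incidence rate ratio with a log-normal 95% CI follows. The counts and patient-day denominators are invented for illustration; the abstract reports only the resulting IRR and CI.

```python
import math

# A minimal sketch of an incidence rate ratio (IRR) with a log-normal
# 95% CI. The counts and patient-day denominators below are illustrative,
# not the study's.
def irr_ci(cases_post, days_post, cases_pre, days_pre, z=1.96):
    irr = (cases_post / days_post) / (cases_pre / days_pre)
    se = math.sqrt(1 / cases_post + 1 / cases_pre)  # SE of log(IRR)
    return irr, irr * math.exp(-z * se), irr * math.exp(z * se)

print(irr_ci(cases_post=60, days_post=500_000, cases_pre=180, days_pre=480_000))
```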
Background: SARS-CoV-2 N95 mask contamination in healthcare providers (HCPs) treating patients with COVID-19 is poorly understood. Methods: We performed a prospective observational study of HCP N95 respirator SARS-CoV-2 contamination during aerosol-generating procedures (AGPs) on SARS-CoV-2–positive patients housed in a COVID-19–specific unit at an academic medical center. Medical masks were used as surrogates for N95 respirators to avoid waste and were worn on top of HCP N95 respirators during study AGPs. Study masks were provided to HCPs during donning of PPE and were retrieved during doffing. Additionally, during doffing, face shields were swabbed with Floq swabs premoistened with viral transport media (VTM) prior to disinfection. Medical masks were cut into 9 position-based pieces, placed in VTM, vortexed, and centrifuged (Fig. 1). RNA extraction and RT-PCR were completed on all samples. RT-PCR–positive samples underwent cell culture infection to detect cytopathic effects (CPE). Contamination was characterized by mask location and by the front and back of face shields. Patient COVID-19 symptoms were collected from routine clinical documentation. Study HCPs completed role-specific routine care (eg, assessing, administering medications, and maintaining oxygen supplementation) while in patient rooms and were observed by study team members. Results: We enrolled 31 HCPs between September and December 2021. HCP and patient characteristics are presented in Table 1. In total, 330 individual samples were obtained from 31 masks and 26 face shields among 12 patient rooms. Of the 330 samples, 0 were positive for SARS-CoV-2 via RT-PCR. Positive controls were successfully performed in the laboratory setting to confirm that the virus was recoverable using these methods. Notably, all samples were collected from HCPs caring for COVID-19 patients on high-flow, high-humidity Optiflow (an AGP), with an average of 960 seconds (IQR, 525–1,680) spent in each room. In addition to Optiflow and routine care, study speech pathologists completed an additional AGP, fiberoptic endoscopic evaluation of swallowing. Notably, 29 (94%) of 31 study HCPs had physical contact with their patient. Conclusions: Overall, SARS-CoV-2 contamination was not detectable on the masks of HCPs who wore face shields while treating patients with COVID-19 undergoing AGPs, despite patient contact.
Background: Working while ill, or presenteeism, has been documented at substantial levels among healthcare personnel (HCP), along with its consequences for both patient and HCP safety. Limited literature has been published on HCP presenteeism during the COVID-19 pandemic, and specific motivations for this behavior are not well described. Understanding both the individual and systemic factors that contribute to presenteeism is key to reducing respiratory illness transmission in the healthcare setting. We characterized the frequency of and motivations for presenteeism in the workforce of a large academic medical center during the COVID-19 pandemic. Methods: We deployed a voluntary, anonymous electronic survey to HCP at University of North Carolina (UNC) Medical Center in December 2021, which was approved by the UNC Institutional Review Board. We received 591 responses recruited through employee newsletters. Respondents recounted their frequency of presenteeism since March 2020, defined as coming to work feeling feverish plus cough and/or sore throat. Results: In total, 24.6% reported presenteeism at least once, with 8.1% reporting it twice and 5.3% reporting it 3 or more times. Asked more generally about any symptoms while working, respondents most commonly reported headache (26%), sinus congestion (20%), sore throat (13%), cough (13%), and muscle aches (9.3%). Motivations for presenteeism fell broadly into 4 categories: (1) perception of low risk for COVID-19 infection, (2) concerns about workplace culture and operations, (3) issues with sick leave, and (4) concerns about employment record and status. Among HCP reporting at least 1 instance, the most common motivations for presenteeism included feeling at low risk for COVID-19 infection due to mild symptoms (59.9%), being vaccinated (50.6%), avoiding increasing colleagues’ workload (48.3%), avoiding an impact on their employment record (39.6%), and saving sick days for other purposes (37.9%). Asked to identify a primary motivation, 40.3% reported feeling at low risk for COVID-19 infection due to mild symptoms or vaccination; 21.2% reported a workplace culture issue (ie, increasing colleague workload, perception of weakness, responsibility for patients); 20.6% reported sick-leave availability and use (including difficulty finding coverage); and 17.8% reported employment-record ramifications, including termination. Conclusions: This survey coincided with the local onset of the SARS-CoV-2 omicron variant, and as such, risk perceptions and motivations for presenteeism may have changed. Responses were self-reported, and generalizability is limited. Still, these results highlight the importance of risk messaging and demonstrate the many factors to be considered as potential presenteeism motivators. Mitigating these drivers is particularly critical during high-risk times such as pandemics or seasonal peaks of respiratory illness.
Diamondback moth, Plutella xylostella (Linnaeus) (Lepidoptera: Plutellidae), a globally important pest of Brassicaceae crops, migrates into all provinces of Canada annually. Life tables were used to determine the mortality levels contributed by the parasitoid complexes associated with diamondback moth in British Columbia, Ontario, Prince Edward Island, and insular Newfoundland. Overall, diamondback moth populations showed high generational mortality (> 90%) in all provinces, although parasitism levels were generally low. The net reproductive rate of increase in diamondback moth was less than 1.0 (populations declined) in both years in British Columbia and in each of two years in Newfoundland and Ontario, but it was greater than 1.0 in all three years in Prince Edward Island. Lower parasitism levels were found in Prince Edward Island (3.0–6.3%) compared with other provinces (8.4–17.6%, except one year in British Columbia). Diadegma insulare was the main larval parasitoid found; it was present in all provinces. Microplitis plutellae was present in all provinces except British Columbia. Oomyzus sokolowskii was found in British Columbia and Ontario. The parasitoid community documented from sentinel sampling was less diverse than that found through destructive sampling. Hypotheses are provided to explain the presence of major parasitoids. Increasing larval parasitism would have the largest effect on diamondback moth population growth in Canada.
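As a simplified illustration of how a generational life table yields a net reproductive rate, the following Python sketch combines assumed stage-specific mortalities with an assumed fecundity and sex ratio; the stage structure and all numbers are hypothetical, not the study’s.

```python
# Illustrative stage-specific apparent mortalities (not the study's values).
stage_mortality = {
    "egg": 0.50,
    "larva (parasitism + predation + disease)": 0.90,
    "pupa": 0.80,
}
eggs_per_female = 150    # assumed fecundity
proportion_female = 0.5  # assumed sex ratio

survival = 1.0
for mortality in stage_mortality.values():
    survival *= 1 - mortality

# High generational mortality (>90%) is consistent with the abstract;
# a net reproductive rate R0 < 1 means the population declines.
R0 = survival * eggs_per_female * proportion_female
print(f"generational mortality = {1 - survival:.1%}, R0 = {R0:.2f}")
```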
Initial assessments of coronavirus disease 2019 (COVID-19) preparedness revealed resource shortages and variations in infection prevention policies across US hospitals. Our follow-up survey revealed improvement in resource availability, increase in testing capacity, and uniformity in infection prevention policies. Most importantly, the survey highlighted an increase in staffing shortages and use of travel nursing.