Background: Central-line–associated bloodstream infections (CLABSIs) are linked with significant morbidity and mortality. An NHSN laboratory-confirmed bloodstream infection (LCBSI) must meet specific criteria to be attributed to the central line or to another source, and the criteria used to attribute the pathogen to another site are restrictive. Our objective was to better classify CLABSIs using enhanced criteria to gain a more comprehensive understanding of attribution errors so that appropriate reduction efforts can be directed.
Methods: We conducted a retrospective review of medical records with NHSN-identified CLABSI from July 2017 to December 2018 at 2 geographically proximate hospitals. Trained infectious diseases personnel reviewed charts at 2 tertiary-care academic medical centers: the University of Virginia Health System, a 600-bed medical center in Charlottesville, Virginia, and the Virginia Commonwealth University Health System, an 865-bed medical center in Richmond, Virginia. We defined “overcaptured” CLABSIs (O-CLABSIs) in different categories: O-CLABSI-1 is bacteremia attributable to a primary infectious source; O-CLABSI-2 is bacteremia attributable to neutropenia with gastrointestinal translocation not meeting mucosal barrier injury criteria; O-CLABSI-3 is a positive blood culture attributable to a contaminant; and O-CLABSI-4 is a patient injecting the line, although this is not officially documented. Descriptive analyses were performed using the χ2 test and the Fisher exact test.
Results: We found a large number of O-CLABSIs on chart review (79 of 192, 41%). Overall, 56 of 192 (29%) LCBSIs were attributable to a primary infectious source not meeting the NHSN definition. O-CLABSI proportions between the 2 hospitals were statistically different; hospital A identified 34 of 59 (58%) of their NHSN-identified CLABSIs as O-CLABSIs, and hospital B identified 45 of 133 (34%) as O-CLABSIs (P = .0020) (Table 1). When comparing O-CLABSI types, hospital B had a higher percentage of O-CLABSI-1 than hospital A: 76% versus 64%. Hospital A had a higher proportion of O-CLABSI-2: 21% versus 7%. Hospitals A and B had similar proportions of O-CLABSI-3: 15% versus 18%. These differences were statistically significant (P < .0001).
Discussion: The results from these 2 geographically proximate systems indicate that O-CLABSIs are common. Attribution can vary significantly between institutions, likely depending on differences in incidence of true CLABSI, patient populations, protocols, and protocol compliance. These findings have implications for interfacility comparisons of publicly reported data. Most importantly, erroneous attribution can result in missed opportunities to direct patient safety efforts to the root cause of the bacteremia and could lead to inappropriate treatment.
Disclosures: Michelle Doll, Research Grant from Molnlycke Healthcare
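As a rough illustration of the hospital comparison reported above, the following Python sketch applies a χ2 test and a Fisher exact test to the 2×2 table implied by the abstract (34 of 59 O-CLABSIs at hospital A versus 45 of 133 at hospital B). The use of scipy here is an assumption for illustration; it is not stated as the authors' software.

# O-CLABSI counts per hospital, taken from the abstract:
# hospital A: 34 of 59 NHSN-identified CLABSIs; hospital B: 45 of 133.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: hospital A, hospital B; columns: O-CLABSI, remaining NHSN CLABSI.
table = np.array([[34, 59 - 34],
                  [45, 133 - 45]])

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square = {chi2:.2f}, p = {p_chi2:.4f}")  # roughly p = .002, matching the abstract
print(f"Fisher exact p = {p_fisher:.4f}")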
An experiment was conducted to test the hypothesis that meat products have digestible indispensable amino acid scores (DIAAS) >100 and that various processing methods would increase standardised ileal digestibility (SID) of amino acids (AA) and DIAAS. Nine ileal-cannulated gilts were randomly allotted to a 9 × 8 Youden square design with nine diets and eight 7-d periods. Values for SID of AA and DIAAS for two reference patterns were calculated for salami, bologna, beef jerky, raw ground beef, cooked ground beef and ribeye roast heated to 56, 64 or 72°C. The SID of most AA was not different among salami, bologna, beef jerky and cooked ground beef, but was less (P < 0·05) than the values for raw ground beef. The SID of AA for 56°C ribeye roast was not different from the values for raw ground beef and 72°C ribeye roast, but greater (P < 0·05) than those for 64°C ribeye roast. For older children, adolescents and adults, the DIAAS for all proteins, except cooked ground beef, were >100, and bologna and 64°C ribeye roast had the greatest (P < 0·05) DIAAS. The limiting AA for this age group were sulphur AA (beef jerky), leucine (bologna, raw ground beef and cooked ground beef) and valine (salami and the three ribeye roasts). In conclusion, meat products generally provide high-quality protein with DIAAS >100 regardless of processing. However, overcooking meat may reduce AA digestibility and DIAAS.
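For readers unfamiliar with the score, a minimal sketch of the DIAAS calculation follows: the digestible content of each indispensable amino acid (content × SID) is expressed as a percentage of a reference pattern, and the lowest ratio is the DIAAS, with that amino acid being first-limiting. The amino acid contents, SID values, and reference pattern below are hypothetical placeholders, not values from this experiment.

# Hypothetical values for one meat product (mg of amino acid per g of crude protein).
indispensable_aa = ["leucine", "valine", "sulphur_aa"]
aa_mg_per_g_protein = {"leucine": 78.0, "valine": 49.0, "sulphur_aa": 36.0}   # hypothetical
sid = {"leucine": 0.95, "valine": 0.92, "sulphur_aa": 0.90}                   # hypothetical SID
reference = {"leucine": 61.0, "valine": 40.0, "sulphur_aa": 23.0}             # hypothetical reference pattern

# Digestible indispensable amino acid ratio for each amino acid, as a percentage.
ratios = {
    aa: 100.0 * aa_mg_per_g_protein[aa] * sid[aa] / reference[aa]
    for aa in indispensable_aa
}
limiting_aa = min(ratios, key=ratios.get)
diaas = ratios[limiting_aa]
print(f"DIAAS = {diaas:.0f} (first-limiting amino acid: {limiting_aa})")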
Demoralization is prevalent in patients with life-limiting chronic illnesses, many of whom reside in rural areas. These patients also have an increased risk of disease-related psychosocial burden due to the unique health barriers in this population. However, the factors affecting demoralization in this cohort are currently unknown. This study aimed to examine demoralization amongst the chronically ill in Lithgow, a town in rural New South Wales, Australia, and identify any correlated demographic, physical, and psychosocial factors in this population.
A cross-sectional survey was conducted of 73 participants drawn from Lithgow Hospital and the adjoining retirement village and nursing home, assessing demographic, physical, psychiatric, and psychosocial factors for correlation with demoralization.
The total mean score of the DS-II was 7.8 (SD 26.4), and high demoralization scores were associated with the level of education (p = 0.01), comorbid condition (p = 0.04), severity of symptom burden (p < 0.001), depression (p < 0.001), and psychological distress (p < 0.001). Prevalence of serious demoralization in this population was 27.4%, according to a cutoff DS-II score of ≥11. Of those, 11 (15%) met the criteria for clinical depression, leaving 9 (12.3%) of the cohort demoralized but not depressed.
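The cutoff-based classification reported above can be made concrete with a short sketch: participants with a DS-II total of 11 or more are counted as seriously demoralized, and that group is cross-tabulated against depression status. The per-participant values generated below are hypothetical placeholders, not study data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ds_ii": rng.integers(0, 33, size=73),      # hypothetical DS-II totals (0-32)
    "depressed": rng.random(73) < 0.15,         # hypothetical clinical depression status
})

df["demoralized"] = df["ds_ii"] >= 11           # cutoff used in the study
prevalence = 100 * df["demoralized"].mean()
demoralized_not_depressed = int((df["demoralized"] & ~df["depressed"]).sum())

print(f"Serious demoralization: {prevalence:.1f}%")
print(f"Demoralized but not depressed: n = {demoralized_not_depressed}")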
Significance of results
Prevalence of demoralization was high in this population. In line with the existing literature, demoralization was associated with the level of education, symptom burden, and psychological distress, demonstrating that demoralization is a relevant psychometric factor in rural populations. Further stratification of the unique biopsychosocial factors at play in this population would contribute to better understanding the burdens experienced by people with chronic illness in this population and the nature of demoralization.
Reforestation in the Inland Northwest, including northeastern Oregon, USA, is often limited by a dry climate and soil moisture availability during the summer months. Reduction of competing vegetative cover in forest plantations is a common method for retaining available soil moisture. Several spring and summer site preparation (applied prior to planting) herbicide treatments were evaluated to determine their efficacy in reducing competing cover, and thus retaining soil moisture, on three sites in northeastern Oregon. Results varied by site, year, and season of application. In general, sulfometuron (0.14 kg ai ha⁻¹, alone and in various mixtures), imazapyr (0.42 kg ae ha⁻¹), and hexazinone (1.68 kg ai ha⁻¹) resulted in 3 to 17% cover of forbs and grasses in the first year when applied in spring. Sulfometuron+glyphosate (2.2 kg ha⁻¹) consistently reduced grasses and forbs for the first year when applied in summer, but forbs recovered in the second year on two of three sites. Aminopyralid (0.12 kg ae ha⁻¹)+sulfometuron applied in summer also led to comparable control of forb cover. In the second year after treatment, forb cover in treated plots was similar to levels in nontreated plots, and some species of forbs had increased relative to nontreated plots. Imazapyr (0.21 and 0.42 kg ha⁻¹) applied at either rate in spring or summer 2007, or at a lower rate (0.14 kg ha⁻¹) with glyphosate in summer, provided the best control of shrubs, of which snowberry was the dominant species. Total vegetative cover was similar across all treatments seven and eight years after application, and differences in vegetation were related to site rather than treatment. In the first year after treatment, rates of soil moisture depletion in the 0- to 23-cm depth were correlated with vegetative cover, particularly late-season soil moisture, suggesting increased water availability for tree seedling growth.
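The cover–moisture relationship noted in the final sentence can be illustrated with a brief correlation sketch; the plot-level values below are hypothetical placeholders, and the abstract does not specify which correlation statistic was used.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical first-year plot data: % vegetative cover and late-season soil
# moisture depletion (cm of water) in the 0- to 23-cm depth.
cover_pct = np.array([3, 8, 12, 17, 35, 48, 62, 75])
moisture_depletion_cm = np.array([2.1, 3.0, 3.4, 4.2, 6.0, 7.1, 8.5, 9.8])

r, p = pearsonr(cover_pct, moisture_depletion_cm)
print(f"r = {r:.2f}, p = {p:.4f}")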
The Atypical Maternal Behavior Instrument for Assessment and Classification (AMBIANCE; Bronfman, Madigan, & Lyons-Ruth, 2009–2014; Bronfman, Parsons, & Lyons-Ruth, 1992–2004) is a widely used and well-validated measure for assessing disrupted forms of caregiver responsiveness within parent–child interactions. However, it requires evaluating approximately 150 behavioral items from videotape and extensive training to code, making its use impractical in most clinical contexts. Accordingly, the primary aim of the current study was to identify a reduced set of behavioral indicators most central to the AMBIANCE coding system using latent-trait item response theory (IRT) models. Observed mother–infant interaction data previously coded with the AMBIANCE were pooled from laboratories in both North America and Europe (N = 343). Using 2-parameter logistic IRT models, a reduced set of 45 AMBIANCE items was identified. Preliminary convergent and discriminant validity was evaluated in relation to classifications of maternal disrupted communication assigned using the full set of AMBIANCE indicators, to infant attachment disorganization, and to maternal sensitivity. The results supported the construct validity of the refined item set, opening the way for development of a brief screening measure for disrupted maternal communication. The use of IRT models in clinical scale refinement and their potential for bridging clinical and research objectives in developmental psychopathology are discussed.
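For context, the 2-parameter logistic (2PL) model referred to above gives the probability that a caregiver displays a behavioral indicator as a function of a latent disrupted-communication trait θ, an item discrimination a, and an item difficulty b: P = 1 / (1 + exp(-a(θ - b))). The sketch below implements that response function and the corresponding item information, which is one common basis for retaining the most informative items; the parameter values are hypothetical.

import numpy as np

def two_pl(theta, discrimination, difficulty):
    # Probability of endorsing an item given the latent trait under the 2PL model.
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

def item_information(theta, discrimination, difficulty):
    # Fisher information of a 2PL item at a given trait level.
    p = two_pl(theta, discrimination, difficulty)
    return discrimination ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)                       # grid of latent trait values
print(two_pl(theta, discrimination=1.8, difficulty=0.5))
print(item_information(theta, discrimination=1.8, difficulty=0.5))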
Rural communities face barriers to disaster preparedness and considerable risk of disasters. Emergency preparedness among rural communities has improved with funding from federal programs and implementation of the National Incident Management System. The objective of this project was to design and implement disaster exercises to test decision making by rural response partners and to improve regional planning, collaboration, and readiness. Six functional exercises were developed and conducted among three rural Nebraska (USA) regions by the Center for Preparedness Education (CPE) at the University of Nebraska Medical Center (Omaha, Nebraska USA). A total of 83 command centers participated. The six functional exercises were designed to test regional response and command-level decision making, and each 3-hour exercise was followed by a 3-hour regional after-action conference. Participant feedback, single-agency debriefing feedback, and regional After Action Reports were analyzed. Functional exercises were able to test command-level decision making and operations at multiple agencies simultaneously with limited funding. Observations included emergency management jurisdiction barriers to utilization of unified command and establishment of joint information centers, limited utilization of documentation necessary for reimbursement, and the need to develop coordinated public messaging. Functional exercises are a key tool for testing command-level decision making and response at a higher level than what is typically achieved in tabletop or short, full-scale exercises. Functional exercises enable evaluation of command staff, identification of areas for improvement, and advancement of regional collaboration among diverse response partners.
Obaid JM, Bailey G, Wheeler H, Meyers L, Medcalf SJ, Hansen KF, Sanger KK, Lowe JJ. Utilization of Functional Exercises to Build Regional Emergency Preparedness among Rural Health Organizations in the US. Prehosp Disaster Med. 2017;32(2):224–230.
New approaches are needed to safely reduce emergency admissions to hospital by targeting interventions effectively in primary care. A predictive risk stratification tool (PRISM) identifies each registered patient's risk of an emergency admission in the following year, allowing practitioners to identify and manage those at higher risk. We evaluated the introduction of PRISM in primary care in one area of the United Kingdom, assessing its impact on emergency admissions and other service use.
We conducted a randomized stepped wedge trial with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. PRISM was implemented in eleven primary care practice clusters (total thirty-two practices) over a year from March 2013. We analyzed routine linked data outcomes for 18 months.
We included outcomes for 230,099 registered patients, assigned to ranked risk groups.
Overall, the rate of emergency admissions was higher in the intervention phase than in the control phase: adjusted difference in number of emergency admissions per participant per year at risk, delta = .011 (95 percent Confidence Interval, CI .010, .013). Patients in the intervention phase spent more days in hospital per year: adjusted delta = .029 (95 percent CI .026, .031). Both effects were consistent across risk groups.
Primary care activity increased in the intervention phase overall, delta = .011 (95 percent CI .007, .014), except for the two highest-risk groups, which showed a decrease in the number of days with recorded activity.
Introduction of a predictive risk model in primary care was associated with increased emergency episodes across the general practice population and at each risk level, in contrast to the intended purpose of the model. Future evaluation work could assess the impact of targeting of different services to patients across different levels of risk, rather than the current policy focus on those at highest risk.
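One plausible way to produce the adjusted rate comparisons reported above is a cluster-adjusted Poisson regression with person-time as an offset; the sketch below is an illustration only, with assumed column names and synthetic data, and is not presented as the trial's actual analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "cluster": rng.integers(1, 12, size=n),        # 11 practice clusters
    "phase": rng.integers(0, 2, size=n),           # 0 = control phase, 1 = intervention phase
    "risk_group": rng.integers(1, 5, size=n),      # ranked risk groups
    "years_at_risk": rng.uniform(0.5, 1.5, size=n),
})
df["admissions"] = rng.poisson(0.2 * df["years_at_risk"] * (1 + 0.05 * df["phase"]))

# GEE Poisson model: phase effect on emergency admissions, adjusted for risk
# group, with exchangeable correlation within practice clusters and log
# person-time as an offset.
model = smf.gee(
    "admissions ~ phase + C(risk_group)",
    groups="cluster",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["years_at_risk"]),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(model.summary())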
Emergency admissions to hospital are a major financial burden on health services. In one area of the United Kingdom (UK), we evaluated a predictive risk stratification tool (PRISM) designed to support primary care practitioners to identify and manage patients at high risk of admission. We assessed the costs of implementing PRISM and its impact on health services costs. At the same time as the study, but independent of it, an incentive payment (‘QOF’) was introduced to encourage primary care practitioners to identify high risk patients and manage their care.
We conducted a randomized stepped wedge trial in thirty-two practices, with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. We analysed routine linked data on patient outcomes for 18 months (February 2013 – September 2014). We assigned standard unit costs in pound sterling to the resources utilized by each patient. Cost differences between the two study phases were used in conjunction with differences in the primary outcome (emergency admissions) to undertake a cost-effectiveness analysis.
We included outcomes for 230,099 registered patients. We estimated a PRISM implementation cost of GBP0.12 per patient per year.
Costs of emergency department attendances, outpatient visits, emergency and elective admissions to hospital, and general practice activity were higher per patient per year in the intervention phase than in the control phase (adjusted δ = GBP76, 95 percent Confidence Interval, CI GBP46, GBP106), an effect that was consistent and generally increased with risk level.
Despite low reported use of PRISM, it was associated with increased healthcare expenditure. This effect was unexpected and in the opposite direction to that intended. We cannot disentangle the effects of introducing the PRISM tool from those of imposing the QOF targets; however, since across the UK predictive risk stratification tools for emergency admissions have been introduced alongside incentives to focus on patients at risk, we believe that our findings are generalizable.
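The costing step described in the methods (standard unit costs applied to each patient's resource use) can be sketched as follows; the unit costs and resource counts are hypothetical placeholders, not the values used in the study.

import pandas as pd

unit_costs_gbp = {                    # hypothetical standard unit costs (GBP)
    "ed_attendance": 124.0,
    "outpatient_visit": 117.0,
    "emergency_admission": 1863.0,
    "elective_admission": 3375.0,
    "gp_contact": 37.0,
}

# Hypothetical resource use per patient per year.
use = pd.DataFrame(
    {"ed_attendance": [1, 0], "outpatient_visit": [2, 4],
     "emergency_admission": [0, 1], "elective_admission": [0, 0], "gp_contact": [6, 11]},
    index=["patient_A", "patient_B"],
)

cost_per_patient_year = (use * pd.Series(unit_costs_gbp)).sum(axis=1)
print(cost_per_patient_year)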
A predictive risk stratification tool (PRISM) to estimate a patient's risk of an emergency hospital admission in the following year was trialled in general practice in an area of the United Kingdom. PRISM's introduction coincided with a new incentive payment (‘QOF’) in the regional contract for family doctors to identify and manage the care of people at high risk of emergency hospital admission.
Alongside the trial, we carried out a complementary qualitative study of processes of change associated with PRISM's implementation. We aimed to describe how PRISM was understood, communicated, adopted, and used by practitioners, managers, local commissioners and policy makers. We gathered data through focus groups, interviews and questionnaires at three time points (baseline, mid-trial and end-trial). We analyzed data thematically, informed by Normalisation Process Theory (1).
All groups showed high awareness of PRISM, but raised concerns about whether it could identify patients not yet known, and about whether there were sufficient community-based services to respond to care needs identified. All practices reported using PRISM to fulfil their QOF targets, but after the QOF reporting period ended, only two practices continued to use it. Family doctors said PRISM changed their awareness of patients and focused them on targeting the highest-risk patients, though they were uncertain about the potential for positive impact on this group.
Though external factors supported its uptake in the short term, with a focus on the highest risk patients, PRISM did not become a sustained part of normal practice for primary care practitioners.
Introduction: Smoking prevalence remains high among people with a mental illness, contributing to higher levels of morbidity and mortality. Health and community services are an opportune setting for the provision of smoking cessation care. Although family carers are acknowledged to play a critical role in supporting the care and assistance provided by such services to people with a mental illness, their expectations regarding the delivery of smoking cessation care have not been examined.
Aims: To explore family carer expectations of the provision of smoking cessation care by four types of health services to clients with a mental illness, and to identify factors associated with these expectations.
Methods: A cross-sectional survey was conducted with carers of a person with a mental illness residing in New South Wales, Australia. Carers were surveyed regarding their expectations of smoking cessation care provision from four types of health services. Possible associations between carer expectation of smoking cessation care provision and socio-demographic and attitudinal variables were explored.
Results: Of 144 carers, the majority considered that smoking cessation care should be provided by mental health hospitals (71.4%), community mental health services (78.0%), general practice (82.7%), and non-government organisations (56.6%). The factor most consistently related to expectation of care was a belief that smoking cessation could positively impact mental health.
Conclusions: The majority of carers expected smoking cessation treatment to be provided by all services catering for people with a mental illness, reinforcing the appropriateness of such services providing smoking cessation care to clients in an effective and systematic manner.
To aid in preparation of military medic trainers for a possible new curriculum in teaching junctional tourniquet use, the investigators studied the time to control hemorrhage and blood volume lost in order to provide evidence for ease of use.
Models of junctional tourniquet could perform differentially by blood loss, time to hemostasis, and user preference.
In a laboratory experiment, 30 users controlled simulated hemorrhage from a manikin (Combat Ready Clamp [CRoC] Trainer) with three iterations each of three junctional tourniquets. There were 270 tests which included hemorrhage control (yes/no), time to hemostasis, and blood volume lost. Users also subjectively ranked tourniquet performance. Models included CRoC, Junctional Emergency Treatment Tool (JETT), and SAM Junctional Tourniquet (SJT). Time to hemostasis and total blood loss were log-transformed and analyzed using a mixed model analysis of variance (ANOVA) with the users represented as random effects and the tourniquet model used as the treatment effect. Preference scores were analyzed with ANOVA, and Tukey’s honest significant difference test was used for all post-hoc pairwise comparisons.
All tourniquet uses were 100% effective for hemorrhage control. For blood loss, CRoC and SJT performed best with least blood loss and were significantly better than JETT; in pairwise comparison, CRoC-JETT (P < .0001) and SJT-JETT (P = .0085) were statistically significant in their mean difference, while CRoC-SJT (P = .35) was not. For time to hemostasis in pairwise comparison, the CRoC had a significantly shorter time compared to JETT and SJT (P < .0001, both comparisons); SJT-JETT was also significant (P = .0087). In responding to the directive, “Rank the performance of the models from best to worst,” users did not prefer junctional tourniquet models differently (P > .5, all models).
The CRoC and SJT performed best in having least blood loss, CRoC performed best in having least time to hemostasis, and users did not differ in preference of model. Models of junctional tourniquet performed differentially by blood loss and time to hemostasis.
Kragh JF Jr, Lunati MP, Kharod CU, Cunningham CW, Bailey JA, Stockinger ZT, Cap AP, Chen J, Aden JK 3rd, Cancio LC. Assessment of Groin Application of Junctional Tourniquets in a Manikin Model. Prehosp Disaster Med. 2016;31(4):358–363.
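As a sketch of the analysis described above (and not the investigators' actual code), the snippet below fits a linear mixed model to log-transformed blood loss with the tourniquet model as the treatment effect and the user as a random effect, then runs Tukey honest significant difference pairwise comparisons. Column names and the synthetic data are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
models = ["CRoC", "JETT", "SJT"]
rows = [
    {"user": u, "tourniquet": m,
     "blood_loss_ml": rng.lognormal(mean=5.0 + 0.3 * models.index(m), sigma=0.4)}
    for u in range(30) for m in models for _ in range(3)   # 30 users x 3 models x 3 iterations = 270 tests
]
df = pd.DataFrame(rows)
df["log_blood_loss"] = np.log(df["blood_loss_ml"])

# Mixed model: tourniquet model as the fixed (treatment) effect, user as a random effect.
mixed = smf.mixedlm("log_blood_loss ~ C(tourniquet)", data=df, groups=df["user"]).fit()
print(mixed.summary())

# Post-hoc pairwise comparisons with Tukey's honest significant difference test.
print(pairwise_tukeyhsd(endog=df["log_blood_loss"], groups=df["tourniquet"]))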
To describe the symptoms and functional changes in patients with high levels of somatization who were referred to an outpatient, multidisciplinary, shared mental healthcare (SMHC) service that primarily offered cognitive behavioural therapy. Second, we wished to compare the levels of somatization in this outpatient clinical sample with previously published community norms.
Somatization is common in primary care; it can lead to significant impairment and disproportionate resource use, and it poses a challenge for management.
All patients (18+ years, n=508) who attended three or more treatment sessions in SMHC primary care over a seven-year period were eligible for inclusion in this pre–post study. Self-report measures included the Patient Health Questionnaire’s somatic symptom severity scale (PHQ-15) and the World Health Organization Disability Assessment Schedule (WHODAS II). Normative comparisons were used to assess the degree of symptom and functional change.
Clinically significant levels of somatization before treatment were common (n=138, 27.2%); in these patients, treatment was associated with a significant reduction in somatic symptom severity (41.3% reduction; P<0.001) and in disability (44% reduction; P<0.001). Patients’ levels of somatic symptom severity and disability approached but did not quite reach the community sample norms following treatment. Multidisciplinary short-term SMHC was associated with significant improvement in patient symptoms and disability, and it shows promise as an effective treatment for patients with high levels of somatization. Inclusion of a control group would allow greater confidence in conclusions about the effectiveness of SMHC for patients impaired by somatization.
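To make the pre–post comparison concrete, the sketch below computes a percent reduction in PHQ-15 somatic symptom severity and a paired test on hypothetical before/after scores; the abstract reports normative comparisons rather than a specific paired test, so this is one plausible way to quantify within-patient change, not the study's actual analysis.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
pre = rng.normal(loc=14.0, scale=4.0, size=138)                          # hypothetical PHQ-15 before treatment
post = np.clip(pre - rng.normal(loc=5.5, scale=3.0, size=138), 0, None)  # hypothetical PHQ-15 after treatment

t, p = ttest_rel(pre, post)
pct_reduction = 100 * (pre.mean() - post.mean()) / pre.mean()
print(f"Mean reduction in somatic symptom severity: {pct_reduction:.1f}% (paired t = {t:.2f}, p = {p:.2e})")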