During March 27–July 14, 2020, the Centers for Disease Control and Prevention’s National Healthcare Safety Network extended its surveillance to hospital capacity during the COVID-19 pandemic. The data showed wide variation across hospitals in case burden, bed occupancy, ventilator usage, and healthcare personnel and supply status. These data were used to inform emergency responses.
The rapid spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) throughout key regions of the United States in early 2020 placed a premium on timely, national surveillance of hospital patient censuses. To meet that need, the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), the nation’s largest hospital surveillance system, launched a module for collecting hospital coronavirus disease 2019 (COVID-19) data. We present time-series estimates of the critical hospital capacity indicators from April 1 to July 14, 2020.
From March 27 to July 14, 2020, the NHSN collected daily data on hospital bed occupancy, number of hospitalized patients with COVID-19, and the availability and/or use of mechanical ventilators. Time series were constructed using multiple imputation and survey weighting to allow near–real-time daily national and state estimates to be computed.
During the pandemic’s April peak in the United States, among an estimated 431,000 total inpatients, 84,000 (19%) had COVID-19. Although the number of inpatients with COVID-19 decreased from April to July, the proportion of occupied inpatient beds increased steadily. COVID-19 hospitalizations increased from mid-June in the South and Southwest regions after stay-at-home restrictions were eased. The proportion of inpatients with COVID-19 on ventilators decreased from April to July.
The NHSN hospital capacity estimates served as important, near–real-time indicators of the pandemic’s magnitude, spread, and impact, providing quantitative guidance for the public health response. Use of the estimates detected the rise of hospitalizations in specific geographic regions in June after they declined from a peak in April. Patient outcomes appeared to improve from early April to mid-July.
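The estimation approach described in the methods (survey weighting combined with multiple imputation) can be sketched in miniature. The hospitals, weights, and counts below are illustrative placeholders, not NHSN data, and the function names are hypothetical; this shows only the pooling of weighted totals across imputed datasets, not the full variance estimation.

```python
# Sketch: survey-weighted national estimate pooled across multiple
# imputed datasets (point estimate only, per Rubin's rules).
# All values below are illustrative, not NHSN data.

def weighted_total(values, weights):
    """Survey-weighted total: each reporting hospital's count is
    scaled by its sampling weight."""
    return sum(v * w for v, w in zip(values, weights))

def pooled_estimate(imputed_datasets, weights):
    """Average the weighted totals across M imputed datasets."""
    totals = [weighted_total(vals, weights) for vals in imputed_datasets]
    return sum(totals) / len(totals)

# Three imputed versions of daily COVID-19 inpatient counts for four
# hospitals; missing values were filled differently in each draw.
imputations = [
    [12, 30, 7, 45],
    [14, 30, 7, 45],
    [12, 30, 9, 45],
]
weights = [2.0, 1.5, 3.0, 1.0]  # inverse-probability sampling weights

print(round(pooled_estimate(imputations, weights), 1))
```

In a full implementation, between- and within-imputation variance would also be combined to produce confidence intervals for the daily estimates.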
Despite the substantial investment by Australian health authorities to improve the health of rural and remote communities, rural residents continue to experience health care access challenges and poorer health outcomes. Health literacy and community engagement are both considered critical in addressing these health inequities. However, the current focus on health literacy can place undue burdens of responsibility for healthcare on individuals from disadvantaged communities whilst not taking due account of broader community needs and healthcare expectations. This can also marginalize the influence of community solidarity and mobilization in effecting healthcare improvements.
The objective is to present a conceptual framework that describes community literacy, its alignment with health literacy, and its relationship to concepts of community-engaged healthcare.
Community literacy aims to integrate community knowledge, skills and resources into the design, delivery and adaptation of healthcare policies and services at regional and local levels, with the provision of primary, secondary, and tertiary healthcare that aligns with individual community contexts. A set of principles is proposed to support the development of community literacy. Three levels of community literacy education for health personnel are described that align with those applied to health literacy for consumers. It is proposed that community literacy education can facilitate transformational community engagement. Skills acquired by health personnel, from senior executives to frontline clinical staff, can also lead to enhanced opportunities to promote health literacy for individuals.
The integration of health and community literacy provides a holistic framework that has the potential to effectively respond to the diversity of rural and remote Australian communities and their healthcare needs and expectations. Further research is required to develop, validate, and evaluate the three levels of community literacy education and alignment to health policy, prior to promoting its uptake more widely.
Background: The National Healthcare Safety Network (NHSN) has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted, standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Despite including test type in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017, through June 30, 2018. Methods: Calendar quarters for which CDI test type was reported as NAAT (includes NAAT, glutamate dehydrogenase (GDH)+NAAT and GDH+EIA followed by NAAT if discrepant) or EIA (includes EIA and GDH+EIA) were selected. HO CDI SIRs were calculated for facility-wide inpatient locations. We conducted the following analyses: (1) Among hospitals that did not switch their test type, we compared the distribution of HO incidence rates and SIRs between those reporting NAAT versus EIA. (2) Among hospitals that switched their test type, we selected quarters with a stable switch pattern of 2 consecutive quarters of each of EIA and NAAT (categorized as pattern EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern. Results: Most hospitals did not switch test types (3,242, 89%), and 2,872 (89%) reported sufficient data to calculate SIRs, with 2,444 (85%) using NAAT. The crude pooled HO CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram versus rates for NAAT (Fig. 1). The SIR distributions of both NAAT and EIA overlapped substantially and covered a similar range of SIR values (Fig. 1).
Among hospitals with a switch pattern, hospitals were equally likely to have an increase or decrease in their SIR (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124). Conclusions: The pattern of SIR distributions for both NAAT and EIA substantiates the soundness of the NHSN’s risk adjustment for CDI test types. Switching test type did not produce a consistent directional pattern in SIR that was statistically significant.
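The paired t test used for the switch-pattern comparison can be sketched as follows. The SIR values below are illustrative, not the study’s data, and the helper function is hypothetical; the p-value step (comparing t against a t distribution with n − 1 degrees of freedom) is omitted for brevity.

```python
import math

# Sketch: paired t test on SIRs for hospitals that switched CDI test
# type. The SIR values below are illustrative, not the study's data.

def paired_t(before, after):
    """Return (mean difference, t statistic) for paired samples.
    Significance comes from a t distribution with n - 1 df."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return mean, t

sir_eia = [0.80, 1.10, 0.95, 1.30, 0.70]    # SIRs while reporting EIA
sir_naat = [0.90, 1.05, 1.10, 1.25, 0.85]   # SIRs after switching to NAAT

mean_diff, t_stat = paired_t(sir_eia, sir_naat)
print(round(mean_diff, 3), round(t_stat, 3))
```

A mean difference near zero with a small t statistic, as in the study’s switch-pattern groups, indicates no consistent directional change in the SIR.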
Background: Antimicrobial resistance (AMR) is an increasingly critical global public health challenge. An initial step in prevention is understanding resistance patterns through accurate surveillance. To support accurate surveillance and good clinical care, we developed training materials to improve the appropriate collection of clinical culture samples in Ethiopia. Methods: Specimen-collection training materials were initially developed by a team of infectious diseases physicians, a clinical microbiologist, and a monitoring and evaluation specialist using a training of trainers (ToT) platform. Revisions after each training session were provided by Ethiopian attendees, including the addition of regionally and culturally relevant material. The training format involved didactic presentations, interactive practice sessions in which participants provided feedback and training to one another and to the entire group, and assessments of all training activities. Results: Overall, 4 rounds of training were conducted from August 2017 to September 2019. The first 2 rounds of training were conducted by The Ohio State University (OSU) staff, and Ethiopian trainers conducted the last 2 rounds. Initial training was primarily in lecture format, outlining the use of microbiology laboratory findings in clinical practice and the steps for collecting specimens correctly. Appropriate specimen collection was demonstrated and practiced. Essential feedback from this early audience provided input for the final development of the training manual and visual aids. The ToT for master trainers took place in July 2018 and was conducted by OSU staff. In sessions held in February and August 2019, these master trainers provided training to facility trainers, who in turn train the personnel directly responsible for specimen collection.
In total, 144 healthcare personnel (including physicians, nurses, and laboratory staff) from 12 representative Ethiopian public and academic hospitals participated in the trainings. Participants were satisfied with the quality of the training (typically ranked >4.5 of 5.0) and strongly agreed that the objectives were clearly defined and that the information was relevant to their work. Posttraining scores increased by 23%. Conclusions: Training materials for clinical specimen collection have been developed for use in low- and middle-resource settings, with initial pilot testing and adoption in Ethiopia. The trainings were well accepted, and Ethiopian personnel were able to successfully lead the trainings and improve their knowledge and skills regarding specimen collection. The materials are being finalized in an online format for easier open-access dissemination. Further studies are planned to determine the effectiveness of the trainings in improving the quality of clinical specimen submissions to the microbiology laboratory.
Shared patient–clinician decision-making is central to choosing between medical treatments. Decision support tools can have an important role to play in these decisions. We developed a decision support tool for deciding between nonsurgical treatment and surgical total knee replacement for patients with severe knee osteoarthritis. The tool aims to provide likely outcomes of alternative treatments based on predictive models using patient-specific characteristics. To make those models relevant to patients with knee osteoarthritis and their clinicians, we involved patients, family members, patient advocates, clinicians, and researchers as stakeholders in creating the models.
Stakeholders were recruited through local arthritis research, advocacy, and clinical organizations. After brief methodological education sessions, stakeholders’ views were solicited through quarterly patient or clinician stakeholder panel meetings and incorporated into all aspects of the project.
Stakeholders participated in every aspect of the research, from determining the outcomes of interest to providing input on the design of the user interface displaying outcome predictions, and 86% (12/14) remained engaged throughout the project. Stakeholder engagement ensured that the prediction models that form the basis of the Knee Osteoarthritis Mathematical Equipoise Tool, and its user interface, were relevant for patient–clinician shared decision-making.
Methodological research has the opportunity to benefit from stakeholder engagement by ensuring that the perspectives of those most impacted by the results are involved in study design and conduct. While additional planning and investments in maintaining stakeholder knowledge and trust may be needed, they are offset by the valuable insights gained.
There is no suitable vaccine against human visceral leishmaniasis (VL), and available drugs are toxic and/or costly. In this context, diagnostic tools should be improved for clinical management and epidemiological evaluation of the disease. However, the variable sensitivity and/or specificity of currently used antigens are limitations, underscoring the need to identify new molecules for more sensitive and specific serology. In the present study, an immunoproteomics approach was applied to Leishmania infantum promastigotes and amastigotes using serum samples from VL patients. To avoid undesired cross-reactivity in the serological assays, sera from Chagas disease patients and from healthy subjects living in the endemic region were also used in immunoblotting. The most reactive spots for VL samples were selected, and 29 and 21 proteins were identified in the promastigote and amastigote extracts, respectively. Two of them, endonuclease III and a GTP-binding protein, were cloned, expressed, purified, and tested by ELISA against a large serological panel, and the results showed high sensitivity and specificity for the diagnosis of the disease. In conclusion, the identified proteins could be considered in future studies as candidate antigens for the serodiagnosis of human VL.
To enhance enrollment into randomized clinical trials (RCTs), we proposed electronic health record-based clinical decision support for patient–clinician shared decision-making about care and RCT enrollment, based on “mathematical equipoise.”
As an example, we created the Knee Osteoarthritis Mathematical Equipoise Tool (KOMET) to determine the presence of patient-specific equipoise between treatments for the choice between total knee replacement (TKR) and nonsurgical treatment of advanced knee osteoarthritis.
With input from patients and clinicians about important pain and physical function treatment outcomes, we created a database of knee osteoarthritis outcomes from non-RCT sources. We then developed multivariable linear regression models that predict 1-year individual-patient knee pain and physical function outcomes for TKR and for nonsurgical treatment. These predictions allowed us to detect mathematical equipoise between the two options for patients eligible for TKR. Decision support software was developed to graphically illustrate, for a given patient, the degree of overlap of pain and functional outcomes between the treatments, and was pilot tested for usability, responsiveness, and its support of shared decision-making.
The KOMET predictive regression model for knee pain had four patient-specific variables and an r² of 0.32; the model for physical functioning included six patient-specific variables and an r² of 0.34. These models were incorporated into prototype KOMET decision support software, pilot tested in clinics, and generally well received.
Use of predictive models and mathematical equipoise may help discern patient-specific equipoise to support shared decision-making for selecting between alternative treatments and considering enrollment into an RCT.
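The r² values reported for the KOMET models are the standard coefficient of determination comparing observed outcomes with model predictions, which can be computed directly. The pain scores below are illustrative, not KOMET data, and the helper function is hypothetical.

```python
# Sketch: coefficient of determination (r^2) for a predictive model,
# of the kind reported for the KOMET pain and function models.
# Observed vs. predicted 1-year pain scores below are illustrative.

def r_squared(observed, predicted):
    """r^2 = 1 - SS_residual / SS_total."""
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((y - mean_obs) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1 - ss_res / ss_tot

observed = [40, 55, 62, 48, 70, 58]
predicted = [45, 50, 60, 52, 64, 60]
print(round(r_squared(observed, predicted), 2))
```

Moderate r² values such as the reported 0.32 and 0.34 imply wide prediction intervals, which is precisely why the tool displays the degree of overlap between treatments rather than a single point prediction.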
Farmers’ market interventions are a popular strategy for addressing chronic disease disparities in low-income neighbourhoods. With limited resources, strategic targeting of interventions is critical. The present study used spatial analysis to identify where market interventions have the greatest impact on healthy food access within a geographic region.
All farmers’ markets in a mixed urban/rural county were mapped and those that accepted Supplemental Nutrition Assistance Program (SNAP) electronic benefit transfer (EBT) cards identified. Households were grouped into small neighbourhoods and mapped. The area of ‘reasonable access’ around each market (walking distance (0·8 km; 0·5 mile) in urban areas, driving distance (15 min) in rural areas) was calculated using spatial analysis. The percentage of county low-income households within a market’s access area, and the percentage of county SNAP-participating households within an EBT-accepting market’s access area, were calculated. The ten neighbourhoods with the most low-income households and with the most SNAP-participating households were then identified, their access areas calculated and mapped, and those lacking access identified. County-level gains resulting from improving market accessibility in these areas were calculated.
Honolulu County, Hawaii, USA.
Only 44 % of SNAP-participating households had EBT-market access. Six of the ten highest SNAP-participant neighbourhoods lacked access. Improving access for these neighbourhoods increased county-level access by 23 %. Market access for low-income households was 74 %. Adding markets to these low-income neighbourhoods without market access increased county-level access by 4 %.
Geographic identification of market access demographics, and strategic targeting of EBT interventions, could improve regional access to healthy foods.
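The walking-distance access areas described in the methods can be approximated with great-circle distances. The coordinates below are illustrative, not Honolulu County data, and the helper functions are hypothetical; a production analysis would use a GIS with road-network distances rather than straight-line buffers.

```python
import math

# Sketch: flag households within walking distance (0.8 km) of a
# farmers' market using great-circle (haversine) distance.
# Coordinates below are illustrative, not Honolulu County data.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def share_with_access(households, market, radius_km=0.8):
    """Fraction of households inside the market's access area."""
    inside = sum(
        1 for lat, lon in households
        if haversine_km(lat, lon, market[0], market[1]) <= radius_km
    )
    return inside / len(households)

market = (21.307, -157.858)  # hypothetical market location
households = [(21.307, -157.856), (21.310, -157.860), (21.350, -157.900)]
print(share_with_access(households, market))
```

Repeating this calculation with and without a candidate market yields the kind of county-level access gain reported in the results.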
Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll call votes, these data suffer from two problems: selection bias due to unrecorded votes, and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet the differences between roll call estimations and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using these data, we compare text-based scaling of ideal points to vote-based scaling from a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll call and text scalings by attributing them to constituency-level preferences for energy policy.
Protein supplementation in combination with resistance training may increase muscle mass and muscle strength in elderly subjects. The objective of this study was to assess the influence of post-exercise protein supplementation with collagen peptides v. placebo on muscle mass and muscle function following resistance training in elderly subjects with sarcopenia. A total of fifty-three male subjects (72·2 (sd 4·68) years) with sarcopenia (class I or II) completed this randomised double-blind placebo-controlled study. All the participants underwent a 12-week guided resistance training programme (three sessions per week) and were supplemented with either collagen peptides (treatment group (TG)) (15 g/d) or silica as placebo (placebo group (PG)). Fat-free mass (FFM), fat mass (FM) and bone mass (BM) were measured before and after the intervention using dual-energy X-ray absorptiometry. Isokinetic quadriceps strength (IQS) of the right leg was determined, and sensory motor control (SMC) was investigated by a standardised one-leg stabilisation test. Following the training programme, all the subjects showed significantly higher (P<0·01) levels for FFM, BM, IQS and SMC and significantly lower (P<0·01) levels for FM. The effect was significantly more pronounced in subjects receiving collagen peptides: FFM (TG +4·2 (sd 2·31) kg/PG +2·9 (sd 1·84) kg; P<0·05); IQS (TG +16·5 (sd 12·9) Nm/PG +7·3 (sd 13·2) Nm; P<0·05); and FM (TG –5·4 (sd 3·17) kg/PG –3·5 (sd 2·16) kg; P<0·05). Our data demonstrate that, compared with placebo, collagen peptide supplementation in combination with resistance training further improved body composition by increasing FFM and reducing FM, and further increased muscle strength.
The central nervous system (CNS) integrates information from multiple sensory modalities, including visual and proprioceptive information, when planning a reaching movement (Jeannerod, 1988). Although visual and proprioceptive information regarding hand (or end point effector) position are not always consistent, performance is typically better under reaching conditions in which both sources of information are available. Under certain task conditions, visual signals tend to dominate such that one relies more on visual information than proprioception to guide movement. For example, individuals reaching to a target with misaligned visual feedback of the hand, as experienced when reaching in a virtual reality environment or while wearing prism displacement goggles, adjust their movements in order for the visual representation of the hand to achieve the desired end point even when their actual hand is elsewhere in the workspace (Krakauer et al., 1999, 2000; Redding and Wallace, 1996; Simani et al., 2007). This motor adaptation typically occurs rapidly, reaching baseline levels within twenty trials per target, and without participants' awareness (Krakauer et al., 2000). Furthermore, participants reach with these adapted movement patterns following removal of the distortion, and hence show aftereffects (Baraduc and Wolpert, 2002; Buch et al., 2003; Krakauer et al., 1999, 2000; Martin et al., 1996). These aftereffects provide a measure of motor learning referred to as visuomotor adaptation and result from the CNS learning a new visuomotor mapping to guide movement.
Synthetic κ-opioid receptor (KOR) agonists induce dysphoric and pro-depressive effects and variations in the KOR (OPRK1) and prodynorphin (PDYN) genes have been shown to be associated with alcohol dependence. We genotyped 23 single nucleotide polymorphisms (SNPs) in the PDYN and OPRK1 genes in 816 alcohol-dependent subjects and investigated their association with: (1) negative craving measured by a subscale of the Inventory of Drug Taking Situations; (2) a self-reported history of depression; (3) the intensity of depressive symptoms measured by the Beck Depression Inventory-II. In addition, 13 of the 23 PDYN and OPRK1 SNPs, which were previously genotyped in a set of 1248 controls, were used to evaluate association with alcohol dependence. SNP and haplotype tests of association were performed. Analysis of a haplotype spanning the PDYN gene (rs6045784, rs910080, rs2235751, rs2281285) revealed significant association with alcohol dependence (p = 0.00079) and with negative craving (p = 0.0499). A candidate haplotype containing the PDYN rs2281285-rs1997794 SNPs that was previously associated with alcohol dependence was also associated with negative craving (p = 0.024) and alcohol dependence (p = 0.0008) in this study. A trend for association between depression severity and PDYN variation was detected. No associations of OPRK1 gene variation with alcohol dependence or other studied phenotypes were found. These findings support the hypothesis that sequence variation in the PDYN gene contributes to both alcohol dependence and the induction of negative craving in alcohol-dependent subjects.
As a matter of respect for the person, it is considered an ethical duty to offer to return research results to participants where appropriate. Nevertheless, the return of individual research results to participants raises many socio-ethical issues, and the challenges are greater when the participant is a child. This complexity arises partly because the return of individual pediatric research results entails a tripartite relationship between researcher, child, and parent(s) and involves numerous considerations (e.g., acting in the best interest of the child, respect for the person, and respect for the autonomy of the parents/child).
Extra caution is required in the pediatric research context because children cannot generally decide (consent) whether they want to be informed of their own research results or whether the results should be disclosed to parents. Children have long been considered a special and vulnerable group, and their parents, as guardians, play a critical role in the consent process. However, with regard to the return of individual research results, a potential conflict of interest may arise between the current or future desires of the child and those of the parents.