We recently reported on the radio-frequency attenuation length of cold polar ice at Summit Station, Greenland, based on bi-static radar measurements of radio-frequency bedrock echo strengths taken during the summer of 2021. Those data also allow studies of (a) the relative contributions of coherent (such as discrete internal conducting layers with sub-centimeter transverse scale) vs incoherent (e.g. bulk volumetric) scattering, (b) the magnitude of internal layer reflection coefficients, (c) limits on signal propagation velocity asymmetries (‘birefringence’) and (d) limits on signal dispersion in-ice over a bandwidth of ~100 MHz. We find that (1) attenuation lengths approach 1 km in our band, (2) after averaging 10 000 echo triggers, reflected signals observable over the thermal floor (to depths of ~1500 m) are consistent with being entirely coherent, (3) internal layer reflectivities are ≈ –60 to –70 dB, (4) birefringent effects for vertically propagating signals are smaller by an order of magnitude relative to South Pole and (5) within our experimental limits, glacial ice is non-dispersive over the frequency band relevant for neutrino detection experiments.
Background: Previously, our hospital manually built a static antibiogram from a surveillance system (VigiLanz) culture report. In 2019, a collaboration between the antimicrobial stewardship team (AST) and the infection control (IC) team set out to leverage data automation to create a dynamic antibiogram. The goals for the antibiogram were easy distribution to and updating for hospital staff, with the added ability for AST and IC to perform advanced tracking and surveillance of organism and drug susceptibilities. By having a readily available, accurate, and Clinical and Laboratory Standards Institute (CLSI)–compliant antibiogram, clinicians have the best available data on which to base their empiric antibiotic decisions. Methods: First, assessment of required access to hospital databases and selection of visualization software (MS Power BI) were performed. Connecting SQL database feeds to Power BI enabled creation of a data model using DAX and M code to comply with CLSI guidelines, selecting the first isolate per patient per year. Once a visual antibiogram was created, it was validated against compiled antibiograms using data from the microbiology laboratory middleware (bioMerieux, Observa Integrated Data Management Software). This validation process uncovered some discrepancies between the 2 reference reports due to cascade reporting of susceptibilities. The Observa-derived data were used as the source of truth. The antibiogram prototype was presented to AST/IC members, microbiology laboratory leadership, and other stakeholders to assess functionality. Results: Following feedback and revisions by stakeholders, the new antibiogram was published on a hospital-wide digital platform (Fig. 1). Clinicians may view the antibiogram at any time on desktops from a firewall (or password)–protected intranet. The antibiogram view defaults to the current calendar year, and users may interact with the antibiogram rows and columns without disrupting the integrity of the background databases or code. Each year, simply refreshing the Power BI antibiogram and changing the calendar year allow us to easily and accurately update the antibiogram on the hospital-wide digital platform. Conclusions: This interdisciplinary collaboration resulted in a new dynamic, CLSI-compliant antibiogram with improved usability, increased visibility, and straightforward updating. In the future, a mobile version of the antibiogram may further enhance accessibility, bring more useful information to providers, and optimize AST/IC guidelines and education.
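As an aside for readers building something similar: the core CLSI step named above (keeping only the first isolate per patient per year) is essentially a deduplication step. The sketch below is illustrative Python/pandas, not the authors' DAX/M data model; the column names (patient_id, organism, collected_date) and the susceptibility coding ("S") are assumptions about the source data, not the hospital's actual schema.

import pandas as pd

def first_isolates(cultures: pd.DataFrame) -> pd.DataFrame:
    # Keep only the first isolate of each organism per patient per calendar year (CLSI rule).
    df = cultures.copy()
    df["collected_date"] = pd.to_datetime(df["collected_date"])
    df["year"] = df["collected_date"].dt.year
    df = df.sort_values("collected_date")
    return df.drop_duplicates(subset=["patient_id", "organism", "year"], keep="first")

def antibiogram(first: pd.DataFrame, antibiotics: list) -> pd.DataFrame:
    # Percent susceptible ("S") per organism and antibiotic, as displayed in an antibiogram.
    pct = {abx: first.groupby("organism")[abx].apply(lambda s: 100 * (s.dropna() == "S").mean())
           for abx in antibiotics}
    return pd.DataFrame(pct).round(0)

Isolate counts per organism are worth carrying alongside the percentages, since CLSI also recommends reporting susceptibility percentages only for organisms with at least 30 isolates.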
Background: The off-target effects of linezolid have the potential to cause serotonin syndrome when it is given in conjunction with serotonergic agents. Despite package insert labeling as a contraindication, several postmarketing studies have demonstrated a low incidence of serotonin syndrome with the concomitant use of linezolid and other serotonergic agents. Linezolid provides a convenient oral option for gram-positive infections. However, due to concerns for serotonin syndrome, the use of linezolid is sometimes avoided. Methods: We performed a single-center, retrospective, medical record review of all adult inpatients from September 2021 to September 2022. Patients included had 1 administration of linezolid and 1 inpatient administration of a selective serotonin reuptake inhibitor (SSRI) or serotonin and norepinephrine reuptake inhibitor (SNRI) within 14 days. The primary outcome was the incidence of serotonin syndrome as defined by the Hunter serotonin toxicity criteria, which were retrospectively applied to each patient based on medical-record documentation. We compared patients receiving 1 versus multiple serotonergic agents. Secondary outcomes included duration of hospitalization and adverse outcomes based on concerns for serotonin syndrome, such as need for rescue, ICU admission, or change in medication. Results: Of the 50 included patients from a convenience sample, 27 (54%) were on linezolid and >1 serotonergic agent. Patients had similar baseline characteristics (Table 1). The most common concomitant agent used was an SSRI. Other agents that predispose patients to serotonin syndrome included opioid analgesics and other classes of antidepressants (Fig. 1). Serotonin syndrome occurred within 48 hours in 1 patient on an SNRI and a continuous fentanyl drip. There was no need for rescue or ICU admission due to serotonin syndrome. No patients were readmitted due to serotonin syndrome, and no differences were observed in hospital lengths of stay. Conclusions: Exposure to a single serotonergic agent combined with receipt of linezolid was not associated with any cases of serotonin syndrome. Exposure to multiple serotonergic agents was not associated with a high incidence of serotonin syndrome. This small series supports previous reports demonstrating the relative safety of linezolid given with serotonergic agents and encourages review of interruptive drug–drug interaction alerts for linezolid within the electronic ordering system.
Background: Nirmatrelvir-ritonavir received emergency use authorization (EUA) for the prevention of progression of COVID-19 in December 2021. Most data supporting this authorization are limited to the outpatient setting in unvaccinated patients, and high-quality head-to-head comparisons to other antivirals such as remdesivir are lacking. Patients at high risk of disease progression (those of advanced age, smokers, and those with cardiovascular disease, diabetes, obesity, or cancer) continue to be admitted to acute-care settings for various indications, and some are incidentally found to have mild COVID-19. The objective of this project was to compare rates of progression of mild-to-moderate COVID-19 for inpatients treated with remdesivir versus nirmatrelvir-ritonavir. Methods: This was a single-center, retrospective cohort study that included patients aged ≥18 years with PCR-confirmed SARS-CoV-2 infection who were initiated on nirmatrelvir-ritonavir within 5 days or remdesivir within 7 days of symptom onset between June 2022 and August 2022. The primary outcome was the worsening of symptoms via the WHO ordinal clinical severity scale for COVID-19. Secondary outcomes included escalation of care or readmission due to COVID-19, discharge prior to treatment completion, and any adverse drug reactions (ADRs). Under our institutional guidelines, prior approval is needed for COVID-19 treatment through collaboration between the primary team and antimicrobial stewards. Nirmatrelvir-ritonavir is the preferred agent for both inpatients and outpatients unless the patient has drug interactions or lacks enteral access, in which case remdesivir is considered. Results: In total, 58 patients were screened and 50 patients were included, 25 patients in each arm. Most were non-Hispanic white males with at least 1 comorbidity. Compared to the remdesivir arm, the nirmatrelvir-ritonavir arm had more patients with at least a primary COVID-19 vaccine (44% vs 34%). Also, 88% of patients in each arm had a baseline ordinal score of 4, and 12% had a score of 5. Ordinal score changes between the start and end of therapy were similar between groups, and neither group had an increase in oxygen requirements (Fig. 1). No readmissions were due to COVID-19, and both medications were well tolerated. Refer to Fig. 2 for secondary outcomes. Conclusions: Nirmatrelvir-ritonavir and remdesivir showed similar safety and efficacy in the treatment of hospitalized patients with mild-to-moderate COVID-19. Current evidence-based guidelines and treatment costs favor nirmatrelvir-ritonavir for patients who can receive this drug.
The aim of this study was to identify and prioritize strategies for strengthening public health system resilience for pandemics, disasters, and other emergencies using a scorecard approach.
The United Nations Public Health System Resilience Scorecard (Scorecard) was applied across 5 workshops in Slovenia, Turkey, and the United States of America. The workshops focused on participants reviewing and discussing 23 questions/indicators. A Likert-type scale was used for scoring, with 0 being the lowest and 5 the highest. The workshop scores were analyzed and discussed by participants to prioritize areas of need and develop resilience strategies. Data from all workshops were aggregated, analyzed, and interpreted to develop priorities representative of participating locations.
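A minimal sketch of the aggregation step just described, assuming a simple long-format score table (workshop, indicator, score): average each indicator's 0–5 score across workshops and sort so the lowest-scoring indicators surface as candidate priorities. This is an illustration of the general approach in Python, not the study's analysis code, and the data shown are placeholders.

import pandas as pd

scores = pd.DataFrame({          # placeholder data: one row per workshop per indicator
    "workshop":  ["W1", "W2", "W3", "W1", "W2", "W3"],
    "indicator": ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "score":     [3, 2, 4, 1, 2, 1],
})

priorities = (scores.groupby("indicator")["score"]
              .agg(["mean", "min", "count"])
              .sort_values("mean"))      # lowest mean score = highest-priority indicator
print(priorities)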
Eight themes emerged representing the need for better integration of public health and disaster management systems. These include: assessing community disease burden; embedding long-term recovery groups in emergency systems; exploring mental health care needs; examining ecosystem risks; evaluating reserve funds; identifying what crisis communication strategies worked well; providing non-medical services; and reviewing resilience of existing facilities, alternate care sites, and institutions.
The Scorecard is an effective tool for establishing baseline resilience and prioritizing actions. The strategies identified reflect areas in most need for investment to improve public health system resilience.
No-till planting organic soybean [Glycine max (L.) Merr.] into roller-crimped cereal rye (Secale cereale L.) can have several advantages over traditional tillage-based organic production. However, suboptimal cereal rye growth in fields with large populations of weeds may result in reduced weed suppression, weed–crop competition, and soybean yield loss. Ecological weed management theory suggests that integrating multiple management practices that may be weakly effective on their own can collectively provide high levels of weed suppression. In 2021 and 2022, a field experiment was conducted in central New York to evaluate the performance of three weed management tactics implemented alone and in combination in organic no-till soybean planted into both cereal rye mulch and no mulch: (1) increasing crop seeding rate, (2) interrow mowing, and (3) weed electrocution. A nontreated control treatment that did not receive any weed management and a weed-free control treatment were also included. Cereal rye was absent from two of the five fields where the experiment was repeated; however, the presence of cereal rye did not differentially affect results, and thus data were pooled across fields. All treatments that included interrow mowing reduced weed biomass by at least 60% and increased soybean yield by 14% compared with the nontreated control. The use of a high seeding rate or weed electrocution, alone or in combination, did not improve weed suppression or soybean yield relative to the nontreated control. Soybean yield across all treatments was at least 22% lower than in the weed-free control plot. Future research should explore the effects of the tactics tested on weed population and community dynamics over an extended period. Indirect effects from interrow mowing and weed electrocution should also be studied, such as the potential for improved harvestability, decreased weed seed production and viability, and the impacts on soil organisms and agroecosystem biodiversity.
We present the third data release from the Parkes Pulsar Timing Array (PPTA) project. The release contains observations of 32 pulsars obtained using the 64-m Parkes ‘Murriyang’ radio telescope. The data span is up to 18 yr with a typical cadence of 3 weeks. This data release is formed by combining an updated version of our second data release with ~3 yr of more recent data primarily obtained using an ultra-wide-bandwidth receiver system that operates between 704 and 4032 MHz. We provide calibrated pulse profiles, flux density dynamic spectra, pulse times of arrival, and initial pulsar timing models. We describe methods for processing such wide-bandwidth observations and compare this data release with our previous release.
Emergency departments are key settings for suicide prevention. Most people are deemed to be at no or low risk in final contacts before death.
To micro-analyse how clinicians ask about suicidal ideation and/or self-harm in emergency department psychosocial assessments and how patients respond.
Forty-six psychosocial assessments between mental health clinicians and people with suicidal ideation and/or self-harm were video-recorded. Verbal and non-verbal features of 55 question–answer sequences about self-harm thoughts and/or actions were micro-analysed using conversation analysis. Fisher's exact test was used to test the hypothesis that question type was associated with patient disclosure.
(a) Eighty-four per cent of initial questions (N = 46/55) were closed yes/no questions about self-harm thoughts and/or feelings, plans to self-harm, potential for future self-harm, predicting risk of future self-harm and being okay or keeping safe. Patients disclosed minimal information in response to closed questions, whereas open questions elicited ambivalent and information-rich responses. (b) All closed questions were leading, with 54% inviting no and 46% inviting yes. When patients were asked no-inviting questions, the disclosure rate was 8%, compared with 65% when asked yes-inviting questions (P < 0.05, Fisher's exact test). (c) Patients struggled to respond when asked to predict future self-harm or guarantee safety. (d) Half of closed questions had a narrow timeframe (e.g. at the moment, overnight) or were tied to possible discharge.
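For readers wanting to reproduce this kind of comparison, a Fisher's exact test on a 2 × 2 table of question type (yes-inviting vs no-inviting) by disclosure can be run as in the Python sketch below; the counts shown are made-up placeholders, not the study's data.

from scipy.stats import fisher_exact

#            disclosed  not disclosed
table = [[13,  8],   # yes-inviting questions (hypothetical counts)
         [ 2, 23]]   # no-inviting questions  (hypothetical counts)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")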
Across assessments, there is a bias towards not uncovering thoughts and plans of self-harm, arising from the cumulative effect of leading questions that invite a no response, narrow timeframes and questions tied to possible discharge. Open questions, yes-inviting questions and asking how people feel about the future facilitate disclosure.
OBJECTIVES/GOALS: Physical therapy (PT) is key for treating functional decline that inpatients experience but is a constrained resource in hospital settings. The Activity Measure Post-Acute Care (AM-PAC) score is a mobility measurement tool that has been used to define misallocation of PT. We aim to optimize PT referrals using AM-PAC–based clinical decision support. METHODS/STUDY POPULATION: We conducted a prospective study of patients admitted to University of Chicago Medical Center. AM-PAC scores were assessed by nursing staff every 12 hours. Clinical decision support was designed using a validated AM-PAC cutoff (>18, a predictor of discharge to home). The tool was embedded in hospital medicine note templates, requiring providers to indicate PT referral status based on current AM-PAC scores. The primary outcome, unskilled consult, was defined as PT referral for patients with AM-PAC >18. Data were collected for one year prior to implementation and one year after implementation for intervention (hospital medicine) and control (general internal medicine) services. Difference-in-differences analysis was used to assess the association between the intervention and unskilled consults. RESULTS/ANTICIPATED RESULTS: Between October 2018 and March 2021, 18,241 admissions were eligible for the study. Compared to preintervention, there was a lower rate of referral to PT for patients with high AM-PAC mobility scores in the post-intervention period [18.5% vs 16.6%; χ2(1) = 7.02; p < 0.01]. In the post-intervention period, the control group experienced a 2.6% increase in unskilled consults while the intervention group experienced a 2.3% decrease, a difference in differences of 4.9% (95% CI, −0.07 to −0.03), controlling for age, sex, race, length of stay, and change in mobility. Compared to preintervention, there was no statistically significant difference in mean change in mobility score post-intervention for either group. DISCUSSION/SIGNIFICANCE: Our results suggest that clinical decision support can decrease unskilled PT consults. Many functionally independent patients can mobilize with nursing or other mobilization staff. Hospitals should consider mobility score–based decision support to prioritize PT for impaired and at-risk patients.
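A hedged sketch of the difference-in-differences setup described above (not the study's code): the coefficient on the group-by-period interaction is the difference-in-differences estimate, with the listed covariates entered additively. The variable names (unskilled, intervention_group, post, los, mobility_change) are assumptions.

import statsmodels.formula.api as smf

def did_model(df):
    # Linear probability model with robust standard errors; the coefficient on
    # intervention_group:post is the difference-in-differences estimate for unskilled consults.
    return smf.ols(
        "unskilled ~ intervention_group * post + age + sex + race + los + mobility_change",
        data=df,
    ).fit(cov_type="HC1")

# Usage with a real admissions dataframe:
# print(did_model(admissions).summary())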
Healthcare workers (HCWs) in long-term care facilities (LTCFs) are disproportionately affected by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes coronavirus disease 2019 (COVID-19). To characterize factors associated with SARS-CoV-2 positivity among LTCF HCWs, we performed a retrospective cohort study among HCWs in 32 LTCFs in the Minneapolis–St Paul region.
We analyzed the outcome of SARS-CoV-2 polymerase chain reaction (PCR) positivity among LTCF HCWs during weeks 34–52 of 2020. LTCF- and HCW-level characteristics, including facility size, facility risk score for resident-HCW contact, and resident-facing job role, were modeled in univariable and multivariable generalized linear regressions to determine their association with SARS-CoV-2 positivity.
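As an illustration of the modelling step just described (assumed variable names, not the study's code), univariable and multivariable logistic regressions can be fitted as binomial GLMs and the coefficients exponentiated to obtain odds ratios with confidence intervals:

import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

def odds_ratios(formula, df):
    # Fit a logistic (binomial GLM) model and return odds ratios with 95% CIs.
    fit = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()
    ci = fit.conf_int().set_axis(["2.5%", "97.5%"], axis=1)
    return np.exp(fit.params.to_frame("OR").join(ci))

# Univariable:   odds_ratios("positive ~ resident_facing", hcw_df)
# Multivariable: odds_ratios("positive ~ resident_facing + facility_risk + facility_size", hcw_df)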
Between weeks 34 and 52, 440 (20.7%) of 2,130 unique HCWs tested positive for SARS-CoV-2 at least once. In the univariable model, non–resident-facing HCWs had lower odds of infection (odds ratio [OR], 0.50; 95% confidence interval [CI], 0.36–0.70). In the multivariable model, the odds remained lower for non–resident-facing HCWs (OR, 0.50; 95% CI, 0.36–0.71), and those in medium- versus low-risk facilities experienced higher odds of testing positive for SARS-CoV-2 (OR, 1.47; 95% CI, 1.08–2.02).
Our findings suggest that COVID-19 cases are related to contact between HCWs and residents in LTCFs. This association should be considered when formulating infection prevention and control policies to mitigate the spread of SARS-CoV-2 in LTCFs.
The uptake of electric vehicles (EVs) and renewable energy technologies is changing the magnitude, variability, and direction of power flows in electricity networks. To ensure a successful transition to a net zero energy system, it will be necessary for a wide range of stakeholders to understand the impacts of these changing flows on networks. However, there is a gap between those with the data and capabilities to understand electricity networks, such as network operators, and those working on adjacent parts of the energy transition jigsaw, such as electricity suppliers and EV charging infrastructure operators. This paper describes the electric vehicle network analysis tool (EVENT), developed to help make network analysis accessible to a wider range of stakeholders in the energy ecosystem who might not have the bandwidth to curate and integrate disparate datasets and carry out electricity network simulations. EVENT analyses the potential impacts of low-carbon technologies on congestion in electricity networks, helping to inform the design of products and services. To demonstrate EVENT’s potential, we use an extensive smart meter dataset provided by an energy supplier to assess the impacts of electricity smart tariffs on networks. Results suggest both network operators and energy suppliers will have to work much more closely together to ensure that the flexibility of customers to support the energy system can be maximized, while respecting safety and security constraints within networks. EVENT’s modular and open-source approach enables integration of new methods and data, future-proofing the tool for long-term impact.
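To make the kind of analysis concrete, here is a deliberately simplified, hypothetical illustration (not EVENT's actual implementation) of a congestion check: aggregate half-hourly smart-meter demand for the customers on one feeder, add a modelled EV-charging profile, and flag periods where loading exceeds the asset's thermal rating.

import pandas as pd

def feeder_congestion(meter_kw: pd.DataFrame, ev_kw: pd.Series, rating_kw: float) -> pd.DataFrame:
    # meter_kw:  half-hourly demand, rows = timestamps, columns = customers on the feeder
    # ev_kw:     additional modelled EV charging demand per timestamp
    # rating_kw: feeder thermal rating
    total = meter_kw.sum(axis=1) + ev_kw
    out = pd.DataFrame({"total_kw": total, "utilisation": total / rating_kw})
    out["congested"] = out["utilisation"] > 1.0
    return out

# e.g. feeder_congestion(smart_meter_data, ev_profile, rating_kw=500).query("congested")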
On continuous recognition tasks, changing the context in which objects are embedded impairs memory. Compared with younger adults, older adults perform worse on pattern separation tasks that require identifying similar objects. However, how contexts impact pattern separation in aging is unclear. The apolipoprotein E (APOE) ϵ4 allele may exacerbate possible age-related changes due to early, elevated neuropathology. The goal of this study is to determine how context and APOE status affect pattern separation among younger and older adults.
Older and younger ϵ4 carriers and noncarriers were given a continuous object recognition task. Participants indicated if objects on a Repeated White background, Repeated Scene, or a Novel Scene were old, similar, or new. The proportions of correct responses and the types of errors made were calculated.
Novel scenes lowered recognition scores compared to all other contexts for everyone. Younger adults outperformed older adults on identifying similar objects. Older adults misidentified similar objects as old more than new, and the repeated scene exacerbated this error. APOE status interacted with scene and age such that, in repeated scenes, younger carriers produced fewer false alarms, whereas among older adults, carriers made more false alarms.
Context impacted recognition memory in the same way for both age groups. Older adults underutilized details and over-relied on holistic information during pattern separation compared to younger adults. The three-way interaction in false alarms may indicate an even greater reliance on holistic information among older adults with increased risk for Alzheimer’s disease.
The legal brief is a primary vehicle by which lawyers seek to persuade appellate judges. Despite wide acceptance that briefs are important, empirical scholarship has yet to establish their influence on the Supreme Court or fully explore justices’ preferences regarding them. We argue that emotional language conveys a lack of credibility to justices and thereby diminishes the party’s likelihood of garnering justices’ votes. The data concur. Using an automated textual analysis program, we find that parties who employ less emotional language in their briefs are more likely to win a justice’s vote, a result that holds even after controlling for other features correlated with success, such as case quality. These findings suggest that advocates seeking to influence judges can enhance their credibility and attract justices’ votes by employing measured, objective language.
We provide evidence of a network of information flow between activists and other investors prior to 13D filings. We match EDGAR search activity to investor IP addresses, identifying specific investors who persistently download information on an individual activist’s campaign targets in the days prior to that activist’s 13D disclosures. This outside investor’s knowledge of pending activist campaign plans seems to benefit both parties: the informed investor, unnamed in the 13D, increases its holdings in the targeted stock prior to the price surge upon 13D disclosure, while the activist earns voting support that increases their likelihood of pursuing and winning a proxy fight.
Does interpersonal political communication improve the quality of individual decision making? While deliberative theorists offer reasons for hope, experimental researchers have demonstrated that biased messages can travel via interpersonal social networks. We argue that the value of interpersonal political communication depends on the motivations of the people involved, which can be shifted by different contexts. Using small-group experiments that randomly assign participants' motivations to seek or share information with others as well as their motivations for evaluating the information they receive, we demonstrate the importance of accounting for motivations in communication. We find that when individuals with more extreme preferences are motivated to acquire and share information, collective civic capacity is diminished. But if we can stimulate the exchange of information among individuals with stronger prosocial motivations, such communication can enhance collective civic capacity. We also provide advice for other researchers about conducting similar group-based experiments to study political communication.
Our understanding of homogeneous vapour bubble growth is currently restricted to asymptotic descriptions of its limiting behaviour. While attempts have been made to incorporate both the inertial and thermal limits into bubble growth models, the early stages of bubble growth have not been captured. By accounting for both the changing inertial driving force and the thermal restriction to growth, we present an inertio-thermal model of homogeneous vapour bubble growth, capable of accurately capturing the evolution of a bubble from the nano- to the macro-scale. We compare our model predictions with: (a) published experimental and numerical data, and (b) our own molecular simulations, showing significant improvement over previous models. This has potential application in improving the performance of engineering processes such as ultrasonic cleaning and microprocessor cooling, as well as in understanding natural phenomena involving vapour bubble growth.
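As background, a minimal sketch of the two classical asymptotic limits that any inertio-thermal description must bridge; these are standard textbook results, not the authors' unified model, and the symbols (p_v vapour pressure, p_∞ ambient pressure, ρ_l and ρ_v liquid and vapour densities, c_{p,l} liquid specific heat, ΔT liquid superheat, h_{fg} latent heat, α_l liquid thermal diffusivity) follow common usage. In the inertial (Rayleigh) limit, growth is limited by liquid inertia and proceeds at a constant rate, \dot{R} \simeq \sqrt{(2/3)\,(p_v - p_\infty)/\rho_l}, so the radius grows linearly in time. In the thermal (Plesset–Zwick) limit, growth is limited by heat conduction to the interface, R(t) \simeq 2\sqrt{3/\pi}\,\mathrm{Ja}\,\sqrt{\alpha_l t} with Jakob number \mathrm{Ja} = \rho_l c_{p,l}\,\Delta T/(\rho_v h_{fg}), so the radius grows as the square root of time. A model that bridges the two regimes must recover the linear-in-time behaviour at early times and the diffusive square-root behaviour at late times.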
To examine the impact of SARS-CoV-2 infection on central-line–associated bloodstream infection (CLABSI) rates and to characterize the patients who developed a CLABSI. We also examined the impact of a CLABSI-reduction quality-improvement project in patients with and without COVID-19.
Retrospective cohort analysis.
Academic 889-bed tertiary-care teaching hospital in urban Los Angeles.
Patients or participants: Inpatients 18 years and older with CLABSI as defined by the National Healthcare Safety Network (NHSN).
CLABSI rate and patient characteristics were analyzed for 2 cohorts during the pandemic era (March 2020–August 2021): COVID-19 CLABSI patients and non–COVID-19 CLABSI patients, based on diagnosis of COVID-19 during admission. Secondary analyses were non–COVID-19 CLABSI rate versus a historical control period (2019), ICU CLABSI rate in COVID-19 versus non–COVID-19 patients, and CLABSI rates before and after a quality-improvement initiative.
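For illustration only (the abstract does not state the statistical method used), two CLABSI rates expressed per 1,000 central-line days can be compared with an exact conditional test: given the combined count, one cohort's count is binomial with success probability equal to its share of line-days. The numbers in the usage comment are placeholders.

from scipy.stats import binomtest

def compare_rates(count_a, line_days_a, count_b, line_days_b):
    # Rates per 1,000 central-line days and an exact conditional (binomial) p-value.
    rate_a = 1000 * count_a / line_days_a
    rate_b = 1000 * count_b / line_days_b
    test = binomtest(count_a, n=count_a + count_b,
                     p=line_days_a / (line_days_a + line_days_b))
    return rate_a, rate_b, test.pvalue

# e.g. compare_rates(count_a=30, line_days_a=4000, count_b=25, line_days_b=12000)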
The rate of COVID-19 CLABSI was significantly higher than non–COVID-19 CLABSI. We did not detect a difference between the non–COVID-19 CLABSI rate and the historical control. COVID-19 CLABSIs occurred predominantly in the ICU, and the ICU COVID-19 CLABSI rate was significantly higher than the ICU non–COVID-19 CLABSI rate. A hospital-wide quality-improvement initiative reduced the rate of non–COVID-19 CLABSI but not COVID-19 CLABSI.
Patients hospitalized for COVID-19 have a significantly higher CLABSI rate, particularly in the ICU setting. Reasons for this increase are likely multifactorial, including both patient-specific and process-related issues. Focused quality-improvement efforts were effective in reducing CLABSI rates in non–COVID-19 patients but were less effective in COVID-19 patients.