Over the last 25 years, radiowave detection of neutrino-generated signals, using cold polar ice as the neutrino target, has emerged as perhaps the most promising technique for detection of extragalactic ultra-high energy neutrinos (corresponding to neutrino energies in excess of 0.01 Joules, or $10^{17}$ electron volts). During the summer of 2021 and in tandem with the initial deployment of the Radio Neutrino Observatory in Greenland (RNO-G), we conducted radioglaciological measurements at Summit Station, Greenland, to refine our understanding of the ice target. We report the result of one such measurement, the radio-frequency electric field attenuation length $L_\alpha$. We find an approximately linear dependence of $L_\alpha$ on frequency with the best fit of the average field attenuation for the upper 1500 m of ice: $\langle L_\alpha \rangle = ( ( 1154 \pm 121) - ( 0.81 \pm 0.14) \, ( \nu /{\rm MHz}) ) \,{\rm m}$ for frequencies $\nu \in [145, 350]$ MHz.
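As a quick illustrative check (based only on the central values of the fit quoted above, not part of the reported measurement), the best-fit average attenuation length can be evaluated at a few in-band frequencies:

```python
# Evaluate the best-fit average field attenuation length <L_alpha> for the
# upper 1500 m of ice, using the central values of the fit quoted above.
# Valid only for frequencies in the measured band, 145-350 MHz.

def mean_attenuation_length_m(freq_mhz: float) -> float:
    """Best-fit <L_alpha> in metres at frequency freq_mhz (145-350 MHz)."""
    return 1154.0 - 0.81 * freq_mhz

for nu in (145.0, 250.0, 350.0):
    print(f"nu = {nu:5.1f} MHz -> <L_alpha> ~ {mean_attenuation_length_m(nu):.0f} m")
```

At 145 MHz this gives roughly 1037 m, falling to roughly 870 m at 350 MHz, consistent with the approximately linear decrease described above.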
Among outpatients with coronavirus disease 2019 (COVID-19) due to the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) δ (delta) variant who did and did not receive 2 vaccine doses, there was no difference in viral shedding at 7 days after symptom onset (cycle threshold difference, 0.59; 95% CI, −4.68 to 3.50; P = .77), with SARS-CoV-2 cultured from 2 (7%) of 28 and 1 (4%) of 26 outpatients, respectively.
Italian ryegrass is a major weed in winter cereals in the south-central United States. Harvest weed seed control (HWSC) tactics that aim to remove weed seed from crop fields are a potential avenue to reduce Italian ryegrass seedbank inputs. To this end, a 4-yr, large-plot field study was conducted in College Station, Texas, and Newport, Arkansas, from 2016 to 2019. The treatments were arranged in a split-plot design. The main-plot treatments were (1) no narrow-windrow burning (an HWSC strategy) + disk tillage immediately after harvest, (2) HWSC + disk tillage immediately after harvest, and (3) HWSC + disk tillage 1 mo after harvest. The subplot treatments were (1) pendimethalin (1,065 g ai ha−1; Prowl H2O®) as a delayed preemergence application (herbicide program #1), and (2) a premix of flufenacet (305 g ai ha−1) + metribuzin (76 g ai ha−1; Axiom®) mixed with pyroxasulfone (89 g ai ha−1; Zidua® WG) as an early postemergence application followed by pinoxaden (59 g ai ha−1; Axial® XL) in spring (herbicide program #2). After 4 yr, HWSC alone was significantly more effective than no HWSC at reducing Italian ryegrass densities. Herbicide program #2 was superior to herbicide program #1. Herbicide program #2 combined with HWSC was the most effective treatment. The combination of herbicide program #1 and standard harvest practice (no HWSC; check) led to an increase in fall Italian ryegrass densities from 4 plants m−2 in 2017 to 58 plants m−2 in 2019 at College Station. At wheat harvest, Italian ryegrass densities were 58 and 59 shoots m−2 in check plots at College Station and Newport, respectively, whereas the densities were near zero in plots with herbicide program #2 and HWSC at both locations. These results will be useful for developing an improved Italian ryegrass management strategy in this region.
Antibiotics are widely prescribed in the neonatal intensive care unit (NICU) and duration of prescription is varied. We sought to decrease unnecessary antibiotic days for the most common indications in our outborn level IV NICU by 20% within 1 year.
Design and interventions:
A retrospective chart review was completed to determine the most common indications and treatment duration for antibiotic therapy in our 39-bed level IV NICU. A multidisciplinary team was convened to develop an antibiotic stewardship quality improvement initiative with new consensus guidelines for antibiotic duration for these common indications. To optimize compliance, prospective audit was completed to ensure antibiotic stop dates were utilized and provider justification for treatment duration was documented. Multiple rounds of educational sessions were conducted with neonatology providers.
Results:
In total, 262 patients were prescribed antibiotics (139 in the baseline period and 123 after the intervention). Unnecessary antibiotic days (UAD) were defined as antibiotic days beyond the consensus guideline durations. As a balancing measure, reinitiation of antibiotics within 2 weeks was tracked. After sequential interventions, the percentage of UAD decreased from 42% to 12%, exceeding our goal of a 20% decrease. Compliance with antibiotic stop dates increased from 32% to 76%, and no antibiotics were reinitiated within 2 weeks.
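As a hypothetical sketch of how the UAD metric can be tallied (the indications, guideline durations and courses below are invented for illustration and are not values from this initiative):

```python
# Sketch: percentage of unnecessary antibiotic days (UAD), i.e. antibiotic
# days beyond a consensus guideline duration for the treated indication.
# All indications, durations and courses here are illustrative assumptions.

GUIDELINE_DAYS = {"rule-out sepsis": 2, "pneumonia": 5}  # assumed guideline durations

def unnecessary_days(indication: str, days_prescribed: int) -> int:
    limit = GUIDELINE_DAYS.get(indication)
    if limit is None:
        return 0  # no consensus guideline defined for this indication
    return max(0, days_prescribed - limit)

courses = [("rule-out sepsis", 4), ("pneumonia", 7), ("rule-out sepsis", 2)]
total_days = sum(days for _, days in courses)
uad = sum(unnecessary_days(ind, days) for ind, days in courses)
print(f"UAD: {uad} of {total_days} antibiotic days ({100 * uad / total_days:.0f}%)")
```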
Conclusions:
A multidisciplinary antibiotic stewardship team, coupled with consensus guidelines for antibiotic therapy duration, prescriber justification of antibiotic necessity, and use of antibiotic stop dates, can effectively reduce unnecessary antibiotic exposure in the NICU.
We investigated the effects of transcranial alternating current stimulation (tACS) in patients with insomnia. Nine patients with chronic insomnia underwent two in-laboratory polysomnography studies, 2 weeks apart, and were randomized to receive tACS during either the first or the second study. The stimulation was applied simultaneously and bilaterally at the F3/M1 and F4/M2 electrodes (0.75 mA, 0.75 Hz, for 5 minutes). Sleep onset latency and wake after sleep onset decreased on the stimulation night, but these changes did not reach statistical significance; however, there were significant improvements in spontaneous and total arousals, sleep quality, quality of life, recall memory, sleep duration, sleep efficiency, and daytime sleepiness.
A supply disruption alert in 2020, now rescinded, notified UK prescribers of the planned discontinuation of Priadel® (lithium carbonate) tablets. This service evaluation explored lithium dose and plasma levels before and after the switching of lithium brands, in order to determine the interchangeability of different brands of lithium from a pharmacokinetic perspective.
Results
Data on the treatment of 37 patients switched from Priadel® tablets were analysed. Switching to Camcolit® controlled-release tablets at the same dose did not result in meaningful differences in plasma lithium levels. Dose adjustment and known or suspected poor medication adherence were associated with greater variability in plasma lithium levels on switching brands.
Clinical implications
For comparable pre- and post-switch doses in adherent patients, the most common brands of lithium carbonate appear to produce similar plasma lithium levels. British National Formulary guidance relating to switching lithium brands may be unnecessarily complex.
Multiple micronutrient deficiencies are widespread in Ethiopia. However, the distribution of Se and Zn deficiency risks has previously shown evidence of spatially dependent variability, warranting a similar exploration for a wider set of micronutrients. Here, blood serum concentrations of Ca, Mg, Co, Cu and Mo were measured (n 3102) on samples from the Ethiopian National Micronutrient Survey. Geostatistical modelling was used to test spatial variation of these micronutrients for women of reproductive age, who represent the largest demographic group surveyed (n 1290). Median serum concentrations were 8·6 mg dl−1 for Ca, 1·9 mg dl−1 for Mg, 0·4 µg l−1 for Co, 98·8 µg dl−1 for Cu and 0·2 µg dl−1 for Mo. The prevalence of Ca, Mg and Co deficiency was 41·6 %, 29·2 % and 15·9 %, respectively; Cu and Mo deficiency prevalence was 7·6 % and 0·3 %, respectively. A higher prevalence of Ca, Cu and Mo deficiency was observed in north-western Ethiopia, of Co deficiency in central Ethiopia and of Mg deficiency in north-eastern Ethiopia. Serum Ca, Mg and Mo concentrations showed spatial dependence up to 140–500 km; however, there was no evidence of spatial correlation for serum Co and Cu concentrations. These new data indicate the scale of multiple mineral micronutrient deficiencies in Ethiopia, and the geographical differences in the prevalence of deficiencies suggest the need to consider targeted responses during the planning of nutrition intervention programmes.
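The spatial-dependence ranges quoted above come from geostatistical (variogram-based) modelling. A minimal sketch of the underlying empirical semivariogram calculation is shown below; the coordinates and serum values are simulated and carry no real spatial structure, so this illustrates the computation only, not the authors' model or the survey data.

```python
import numpy as np

# Empirical semivariogram sketch: average squared half-differences of a
# measured value, binned by pairwise distance. Synthetic data for illustration.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(200, 2))   # synthetic locations, km
values = rng.normal(8.6, 1.0, size=200)        # synthetic serum Ca, mg/dl

def empirical_semivariogram(coords, values, bin_edges_km):
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gammas = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)     # each pair counted once
    dists, gammas = dists[iu], gammas[iu]
    which_bin = np.digitize(dists, bin_edges_km)
    return [float(gammas[which_bin == b].mean()) for b in range(1, len(bin_edges_km))]

print(empirical_semivariogram(coords, values, bin_edges_km=np.arange(0, 600, 100)))
```

The lag distance at which such a semivariogram levels off (the range) is what the 140–500 km figures above describe; for the synthetic noise used here it is essentially flat.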
Approximately one-third of individuals in a major depressive episode will not achieve sustained remission despite multiple, well-delivered treatments. These patients experience prolonged suffering and disproportionately utilize mental and general health care resources. The recently proposed clinical heuristic of ‘difficult-to-treat depression’ (DTD) aims to broaden our understanding and focus attention on the identification, clinical management, treatment selection, and outcomes of such individuals. Clinical trial methodologies developed to detect short-term therapeutic effects in treatment-responsive populations may not be appropriate in DTD. This report reviews three essential challenges for clinical intervention research in DTD: (1) how to define and subtype this heterogeneous group of patients; (2) how, when, and by what methods to select, acquire, compile, and interpret clinically meaningful outcome metrics; and (3) how to choose among alternative clinical trial design options to promote causal inference and generalizability. The boundaries of DTD are uncertain, and an evidence-based taxonomy and reliable assessment tools are preconditions for clinical research and subtyping. Traditional outcome metrics in treatment-responsive depression may not apply to DTD, as they largely reflect only short-term symptomatic change and do not incorporate durability of benefit, side effect burden, or sustained impact on quality of life or daily function. Trial methodology will also require modification, as trials will likely be of longer duration to examine sustained impact, raising complex issues regarding control group selection, blinding and its integrity, and concomitant treatments.
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, there was a positive correlation between individual weed plant biomass and delayed weed seed–shattering rates during harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs of plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals of plants within the same population that shatter seed before harvest pose a risk of escaping early-season management and HWSC.
In May 2021, the Scientific Advisory Committee on Nutrition (SACN) published a risk assessment on lower carbohydrate diets for adults with type 2 diabetes (T2D)(1). The purpose of the report was to review the evidence on ‘low’-carbohydrate diets compared with the current UK government advice on carbohydrate intake for adults with T2D. However, since there is no agreed and widely utilised definition of a ‘low’-carbohydrate diet, comparisons in the report were between lower and higher carbohydrate diets. SACN’s remit is to assess the risks and benefits of nutrients, dietary patterns, food or food components for health by evaluating scientific evidence and to make dietary recommendations for the UK based on its assessment(2). SACN has a public health focus and only considers evidence in healthy populations unless specifically requested to do otherwise. Since the Committee does not usually make recommendations relating to clinical conditions, a joint working group (WG) was established in 2017 to consider this issue. The WG comprised members of SACN and members nominated by Diabetes UK, the British Dietetic Association, Royal College of Physicians and Royal College of General Practitioners. Representatives from NHS England and NHS Health Improvement, the National Institute for Health and Care Excellence and devolved health departments were also invited to observe the WG. The WG was jointly chaired by SACN and Diabetes UK.
This study aimed to evaluate the feasibility of a peer support intervention to encourage adoption and maintenance of a Mediterranean diet (MD) in established community groups, where existing social support may assist the behaviour change process. Four established community groups with members at increased cardiovascular disease (CVD) risk and homogeneous in gender were recruited and randomised to receive either a 12-month Peer Support (PS) intervention (PSG) (n 2) or a Minimal Support intervention (educational materials only) (MSG) (n 2). The feasibility of the intervention was assessed using recruitment and retention rates, the variability of outcome measures (primary outcome: adoption of an MD at 6 months, using a Mediterranean Diet Score (MDS)) and process evaluation measures including qualitative interviews. Recruitment rates for community groups (n 4/8), participants (n 31/51) and peer supporters (n 6/14) were 50 %, 61 % and 43 %, respectively. The study faced several challenges with recruitment and retention of participants, leading to a smaller sample than intended. At 12 months, retention rates of 65 % and 76·5 % were observed for PSG and MSG participants, respectively. A > 2-point increase in MDS was observed in both the PSG and the MSG at 6 months and was maintained at 12 months. An increase in MD adherence was evident in both groups during follow-up; however, the challenges faced in recruitment and retention suggest that a definitive study of the peer support intervention using the current methods is not feasible and that refinements based on this feasibility study should be incorporated. Lessons learned during the implementation of this intervention will help inform future interventions in this area.
Anxiety disorders are highly prevalent with an early age of onset. Understanding the aetiology of disorder emergence and recovery is important for establishing preventative measures and optimising treatment. Experimental approaches can serve as a useful model for disorder- and recovery-relevant processes. One such model is fear conditioning. We conducted a remote fear conditioning paradigm in monozygotic and dizygotic twins to determine the degree and extent of overlap between genetic and environmental influences on fear acquisition and extinction.
Methods
In total, 1937 twins aged 22–25 years, including 538 complete pairs from the Twins Early Development Study took part in a fear conditioning experiment delivered remotely via the Fear Learning and Anxiety Response (FLARe) smartphone app. In the fear acquisition phase, participants were exposed to two neutral shape stimuli, one of which was repeatedly paired with a loud aversive noise, while the other was never paired with anything aversive. In the extinction phase, the shapes were repeatedly presented again, this time without the aversive noise. Outcomes were participant ratings of how much they expected the aversive noise to occur when they saw either shape, throughout each phase.
Results
Twin analyses indicated a significant contribution of genetic effects to the initial acquisition of fear, the consolidation of fear and the extinction of fear (15%, 30% and 15%, respectively), with the remainder of the variance due to the non-shared environment. Multivariate analyses revealed that the development of fear and fear extinction show moderate genetic overlap (genetic correlations 0.4–0.5).
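The logic of these twin analyses can be illustrated with a simplified Falconer-style decomposition (the full analysis would use structural equation modelling; the twin correlations below are invented so that the output mimics the roughly 30% genetic / 70% non-shared-environment split reported for consolidation):

```python
# Falconer-style ACE decomposition from monozygotic (MZ) and dizygotic (DZ)
# twin correlations. Illustrative only; the correlations are assumed values.

def ace_from_twin_correlations(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz     # shared-environment variance
    e2 = 1 - r_mz            # non-shared environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = ace_from_twin_correlations(r_mz=0.30, r_dz=0.15)  # assumed correlations
print(f"A = {a2:.0%}, C = {c2:.0%}, E = {e2:.0%}")  # A = 30%, C = 0%, E = 70%
```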
Conclusions
Fear acquisition and extinction are heritable, and share some, but not all, of the same genetic influences.
There has been a proliferation of research with human participants in violent contexts over the past ten years. Adhering to commonly held ethical principles such as beneficence, justice, and respect for persons is particularly important and challenging in research on violence. This letter argues that practices around research ethics in violent contexts should be reported more transparently in research outputs, and should be seen as a subset of research methods. We offer practical suggestions and empirical evidence from both within and outside of political science around risk assessments, mitigating the risk of distress and negative psychological outcomes, informed consent, and monitoring the incidence of potential harms. An analysis of published research on violence involving human participants from 2008 to 2019 shows that only a small proportion of current publications include any mention of these important dimensions of research ethics.
This essay will analyze the Jesus tradition in the Apostolic Fathers in light of recent debates on the relationship between orality and textuality in antiquity. Specifically, it will analyze the Jesus tradition in the Apostolic Fathers as oral tradition, given that it almost certainly derived from an oral-traditional source. This approach reflects a scholarly paradigm shift that has been gaining momentum over the last three decades in studying the interplay of orality and textuality in early Christian circles. Prior to this paradigm shift, one could say with Werner H. Kelber that historical biblical scholarship was “empowered by an inadequate theory of the art of communication in the ancient world.” The paradigm shift involves taking seriously that early Christianity arose and spread within societies that were predominantly oral. Not that attention to oral tradition is something new; New Testament scholars have appealed to it for centuries, for example, in debating the sources and historical reliability of the canonical Gospels. Relatively recent, however, are the many insights into the inner workings of oral tradition in antiquity provided by a newer generation of scholars, many of whom built upon the pioneering work of Milman Parry and Albert Lord. These new insights are reshaping our understanding of the role of oral Jesus tradition in the early Christian community, and causing us to rethink the impact of orality and textuality upon early Christian writings and their sources.
Existing theories of democratic reversals emphasize that elites mount actions like coups when democracy is particularly threatening to their interests. However, existing theory has been largely silent on the role of elite social networks, which interact with economic incentives and may facilitate antidemocratic collective action. We develop a model where coups generate rents for elites and show that the effort an elite puts into a coup is increasing in their network centrality. We empirically explore the model using an original dataset of Haitian elite networks that we linked to firm-level data. We show that central families were more likely to be accused of participating in the 1991 coup against the democratic Aristide government. We then find that the retail prices of staple goods that are imported by such elites differentially increase during subsequent periods of nondemocracy. Our results suggest that elite social structure is an important factor in democratic reversals.
Of 10 surgeons interviewed in a descriptive qualitative study, 6 believed that surgical site infections are inevitable. Bundle adherence was felt to be more likely with strong evidence-based measures developed by surgical leaders. The intrinsic desire to excel was viewed as the main adherence motivator, rather than “pay-for-performance” models.
Objective:
To characterize postextraction antibiotic prescribing patterns, predictors for antibiotic prescribing, and the incidence of and risk factors for postextraction oral infection.
Design:
Retrospective analysis of a random sample of veterans who received tooth extractions from January 1, 2017 through December 31, 2017.
Setting:
VA dental clinics.
Patients:
Overall, 69,610 patients met inclusion criteria, of whom 404 were randomly selected for analysis. Adjunctive antibiotics were prescribed to 154 patients (38.1%).
Intervention:
Patients who received or did not receive an antibiotic were compared for the occurrence of postextraction infection as documented in the electronic health record. Multivariable logistic regression was performed to identify factors associated with antibiotic receipt.
Results:
There was no difference in the frequency of postextraction oral infection identified among patients who did and did not receive antibiotics (4.5% vs 3.2%; P = .59). Risk factors for postextraction infection could not be identified due to the low frequency of this outcome. Patients who received antibiotics were more likely to have a greater number of teeth extracted (aOR, 1.10; 95% CI, 1.03–1.18), documentation of acute infection at the time of extraction (aOR, 3.02; 95% CI, 1.57–5.82), molar extraction (aOR, 1.78; 95% CI, 1.10–2.86), and extraction performed by an oral and maxillofacial surgeon (aOR, 2.29; 95% CI, 1.44–3.58) or specialty dentist (aOR, 5.77; 95% CI, 2.05–16.19).
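For context, the adjusted odds ratios above are the exponentiated coefficients of the multivariable logistic regression. A generic sketch follows; the coefficient and standard error are assumptions chosen so the output resembles the first aOR reported above, not values taken from the study:

```python
import math

# Generic sketch: convert a logistic-regression coefficient and its standard
# error into an adjusted odds ratio (aOR) with a 95% confidence interval.
# The beta and se below are illustrative assumptions, not study estimates.

def aor_with_ci(beta: float, se: float) -> tuple[float, float, float]:
    """Return (aOR, lower 95% CI bound, upper 95% CI bound)."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

aor, lower, upper = aor_with_ci(beta=0.095, se=0.035)  # e.g. per additional tooth extracted
print(f"aOR, {aor:.2f}; 95% CI, {lower:.2f}-{upper:.2f}")
```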
Conclusion:
Infectious complications occurred at a low incidence among veterans undergoing tooth extraction who did and did not receive postextraction antibiotics. These results suggest that antibiotics have a limited role in preventing postprocedural infection; however, future studies are necessary to more clearly define the role of antibiotics for this indication.
Clinicians and parents are encouraged to have open and honest communication about end of life with children with cancer, yet there remains limited research in this area. We examined family communication and preferred forms of support among bereaved caregivers of children with cancer.
Methods
Bereaved caregivers were recruited through a closed social media group to complete an online survey providing retrospective reports of end-of-life communication with their child and preferences for communication support from health-care providers. The sample of 131 participants was mostly female (77.9%; n = 102), with an average age of 49.15 (SD = 8.03) years. The deceased children had an average age of 12.42 (SD = 6.01) years, and nearly 90% died within 5 years of diagnosis.
Results
Most caregivers spoke with their child about their prognosis (61.8%; n = 131) and death (66.7%; n = 99). About half of the children (48%; n = 125) asked about death, particularly older children (51.9% ≥12 years; p = 0.03). Asking about dying was related to having conversations about prognosis (p ≤ 0.001) and death (p ≤ 0.001). Most caregivers (71.8%; n = 94) wanted support to talk to their children. Fewer wanted providers to speak to children directly (12.2%; n = 16) or to be present while caregivers spoke to the child (19.8%; n = 26). Several themes emerged from a content analysis of open-ended responses regarding preferences for provider support.
Significance of results
Most caregivers discussed issues pertaining to end of life irrespective of demographic or medical factors. Qualitative themes provide insight into support desired by families to help with these difficult conversations.
Widespread testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is necessary to curb the spread of coronavirus disease 2019 (COVID-19), but testing is undermined when the only option is a nasopharyngeal swab. Self-collected swab techniques can overcome many of the disadvantages of a nasopharyngeal swab, but they require evaluation.
Methods:
Three self-collected non-nasopharyngeal swab techniques (saline gargle, oral swab and combined oral-anterior nasal swab) were compared to a nasopharyngeal swab for SARS-CoV-2 detection at multiple COVID-19 assessment centers in Toronto, Canada. The performance characteristics of each test were assessed.
Results:
The adjusted sensitivity of the saline gargle was 0.90 (95% CI, 0.86–0.94), that of the oral swab was 0.82 (95% CI, 0.72–0.89) and that of the combined oral–anterior nasal swab was 0.87 (95% CI, 0.77–0.93), compared to a nasopharyngeal swab, which demonstrated a sensitivity of approximately 90% when all positive tests were used as the reference standard. The median cycle threshold values for the SARS-CoV-2 E-gene for concordant and discordant saline gargle specimens were 17 and 31 (P < .001); for the oral swabs these values were 17 and 28 (P < .001), and for the oral–anterior nasal swabs they were 18 and 31 (P = .007).
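As a minimal sketch of the sensitivity calculation when any positive test serves as the composite reference standard (the counts below are invented for illustration; the study reports adjusted estimates from its own data):

```python
# Sensitivity = true positives / (true positives + false negatives), computed
# against a composite reference standard ("any positive test"). Counts are
# hypothetical and chosen only to illustrate the arithmetic.

def sensitivity(true_pos: int, false_neg: int) -> float:
    return true_pos / (true_pos + false_neg)

print(f"saline gargle:       {sensitivity(true_pos=90, false_neg=10):.2f}")
print(f"nasopharyngeal swab: {sensitivity(true_pos=89, false_neg=11):.2f}")
```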
Conclusions:
Self-collected saline gargle and oral–anterior nasal swab specimens have a sensitivity similar to that of a nasopharyngeal swab for the detection of SARS-CoV-2. These alternative collection techniques are inexpensive and can eliminate barriers to testing, particularly in underserved populations.
According to the US Supreme Court, all individuals charged with a crime must be competent to stand trial (CST). As defined in Dusky v. US, competency requires that defendants have the ability to consult with their attorney with a reasonable degree of rationality and possess a rational as well as factual understanding of the legal proceedings. The precise number of CST evaluations conducted each year is unknown. The oft-reported figure of 60,000 provided by Bonnie and Grisso is an estimate based on the number of felony indictments coupled with the estimated percentage of referrals for competency evaluations made by the courts in the 1990s. Later work has suggested a much higher number.