Objective:
To describe variation in blood culture practices in the neonatal intensive care unit (NICU).
Design:
Survey of neonatal practitioners involved with blood culturing and NICU-level policy development.
Participants:
We included 28 NICUs in a large antimicrobial stewardship quality improvement program through the California Perinatal Quality Care Collaborative.
Methods:
Web-based survey of bedside blood culture practices and NICU- and laboratory-level practices. We evaluated adherence to recommended practices.
Results:
Most NICUs did not have a procedural competency (54%), did not document the sample volume (75%), did not receive a culture contamination report (57%), and/or did not require reporting to the provider if <1 mL blood was obtained (64%). The skin asepsis procedure varied across NICUs. Only 71% had a written procedure, but ≥86% changed the needle and disinfected the bottle top prior to inoculation. More than one-fifth of NICUs drew a culture only from an intravascular device (when present). Of 13 modifiable practices related to culture and contamination, NICUs with nurse practitioners more frequently adopted >50% of practices, compared to units without them (92% vs 50% of units; P < .02).
Conclusions:
In the NICU setting, recommended practices for blood culturing were not routinely performed.
Obesity is highly prevalent and disabling, especially in individuals with severe mental illness including bipolar disorders (BD). The brain is a target organ for both obesity and BD. Yet, we do not understand how cortical brain alterations in BD and obesity interact.
Methods:
We obtained body mass index (BMI) and MRI-derived regional cortical thickness and surface area from 1231 individuals with BD and 1601 control individuals from 13 countries within the ENIGMA-BD Working Group. We jointly modeled the statistical effects of BD and BMI on brain structure using mixed-effects models and tested for interaction and mediation. We also investigated the impact of medications on the BMI-related associations.
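As a rough illustration of this modeling strategy (not the ENIGMA-BD pipeline itself), the sketch below fits a linear mixed-effects model with diagnosis, BMI, and their interaction as fixed effects and scanning site as a random intercept. The file and column names (thickness, dx, bmi, age, sex, site) are hypothetical stand-ins, and the real analysis spans many regions and additional covariates.

```python
# Minimal sketch of a joint BD x BMI mixed-effects model for one
# cortical region (assumed column names; covariates simplified).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cortical_thickness.csv")  # hypothetical input file

# Fixed effects: diagnosis (dx: 0 = control, 1 = BD), BMI, and their
# interaction; a random intercept per scanning site absorbs site effects.
model = smf.mixedlm("thickness ~ dx * bmi + age + sex",
                    data=df, groups=df["site"])
result = model.fit()
print(result.summary())

# Significant dx and bmi main effects with a negligible dx:bmi term
# would match the additive pattern reported in the Results.
```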
Results:
BMI and BD additively impacted the structure of many of the same brain regions. Both BMI and BD were negatively associated with cortical thickness, but not surface area. In most regions, the number of jointly used psychiatric medication classes remained associated with lower cortical thickness when controlling for BMI. In a single region, the fusiform gyrus, about a third of the negative association between the number of jointly used psychiatric medications and cortical thickness was mediated by the association between the number of medications and higher BMI.
Conclusions:
We confirmed consistent associations between higher BMI and lower cortical thickness, but not surface area, across the cerebral mantle, in regions that were also associated with BD. Higher BMI in people with BD was associated with more pronounced brain alterations. BMI is important for understanding the neuroanatomical changes in BD and the effects of psychiatric medications on the brain.
Glyphosate-resistant (GR) biotypes of horseweed were first confirmed in southern Ontario in 2010 and have since spread across the region. A total of four field experiments were conducted between 2021 and 2022 to determine GR horseweed control with one- and two-pass herbicide programs in glyphosate/glufosinate/2,4-D-resistant (GG2R) soybean. 2,4-D choline/glyphosate DMA, halauxifen-methyl, and saflufenacil applied preplant (PP) controlled GR horseweed by 59%, 72%, and 78%, respectively, at 8 wk after postemergence (POST) application (WAA-POST); there was no improvement in GR horseweed control when 2,4-D choline/glyphosate DMA was added to saflufenacil; in contrast, control improved when saflufenacil was added to 2,4-D choline/glyphosate DMA. Glufosinate and 2,4-D choline/glyphosate DMA applied POST controlled GR horseweed by 71% and 86%, respectively, at 8 WAA-POST. Two-pass herbicide programs of a PP application followed by a POST application provided greater GR horseweed control than a PP or POST herbicide applied alone. Glufosinate or 2,4-D choline/glyphosate DMA applied POST following 2,4-D choline/glyphosate DMA or halauxifen-methyl applied PP improved GR horseweed control by 29% to 38% and 24%, respectively, at 8 WAA-POST. 2,4-D choline/glyphosate DMA applied POST following saflufenacil applied PP improved control by 20% at 8 WAA-POST; there was no improvement in GR horseweed control when glufosinate was applied POST following saflufenacil applied PP, or when either POST herbicide was applied following saflufenacil + 2,4-D choline/glyphosate DMA applied PP. When used in a two-pass program, 2,4-D choline/glyphosate DMA POST provided 2% to 3% greater control of GR horseweed than glufosinate.
Waterhemp control in Ontario has increased in complexity due to the evolution of biotypes that are resistant to five herbicide modes of action (Groups 2, 5, 9, 14, and 27 as categorized by the Weed Science Society of America). Four field trials were carried out over a 2-yr period in 2021 and 2022 to assess the control of multiple-herbicide-resistant (MHR) waterhemp biotypes in glyphosate/glufosinate/2,4-D-resistant (GG2R) soybean using one- and two-pass herbicide programs. S-metolachlor/metribuzin, pyroxasulfone/sulfentrazone, pyroxasulfone/flumioxazin, and pyroxasulfone + metribuzin applied preemergence (PRE) controlled MHR waterhemp similarly by 46%, 63%, 60%, and 69%, respectively, at 8 wk after postemergence (POST) application (WAA-B). A one-pass application of 2,4-D choline/glyphosate DMA POST provided greater control of MHR waterhemp than glufosinate. Two-pass herbicide programs of a PRE herbicide followed by (fb) a POST-applied herbicide resulted in greater MHR waterhemp control compared to a single PRE or POST herbicide application. PRE herbicides fb glufosinate or 2,4-D choline/glyphosate DMA POST controlled MHR waterhemp by 74% to 91% and by 84% to 96%, respectively, at 8 WAA-B. Two-pass herbicide applications of an effective PRE residual herbicide fb 2,4-D choline/glyphosate DMA POST in GG2R soybean can effectively manage waterhemp that is resistant to herbicides in Groups 2, 5, 9, 14, and 27.
Geographically explicit, taxonomically resolved fossil occurrences are necessary for reconstructing macroevolutionary patterns and for testing a wide range of hypotheses in the Earth and life sciences. Heterogeneity in the spatial and temporal distribution of fossil occurrences in the Paleobiology Database (PBDB) is attributable to several different factors, including turnover among biological communities, socioeconomic disparities in the intensity of paleontological research, and geological controls on the distribution and fossil yield of sedimentary deposits. Here we use the intersection of global geological map data from Macrostrat and fossil collections in the PBDB to assess the extent to which the potentially fossil-bearing, surface-expressed sedimentary record has yielded fossil occurrences. We find a significant and moderately strong positive correlation between geological map area and the number of fossil occurrences. This correlation is consistent regardless of map unit age and binning protocol, except at period level; the Neogene and Quaternary have non-marine map units covering large areas and yielding fewer occurrences than expected. The sedimentary record of North America and Europe yields significantly more fossil occurrences per sedimentary area than similarly aged deposits in most of the rest of the world. However, geographic differences in area and age of sedimentary deposits lead to regionally different expectations for fossil occurrences. Using the sampling of surface-expressed sedimentary units in North America and Europe as a predictor for what might be recoverable from the surface-expressed sedimentary deposits of other regions, we find that the rest of the globe is approximately 45% as well sampled in the PBDB. Using age and area of bedrock and sampling in North America and Europe as a basis for prediction, we estimate that more than 639,000 occurrences from outside these regions would need to be added to the PBDB to achieve global geological parity in sampling. In general, new terrestrial fossil occurrences are expected to have the greatest impact on our understanding of macroevolutionary patterns.
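One way to picture the parity estimate described above is as a regression transfer. The hedged sketch below fits a log-log relationship between sedimentary map area and fossil occurrences in North America and Europe and applies it to other regions; the file and column names are hypothetical, and the published analysis additionally bins map units by age.

```python
# Hedged sketch: predict expected occurrences elsewhere from the
# area-occurrence relationship in well-sampled regions (assumed data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

units = pd.read_csv("map_units.csv")  # hypothetical: region, area_km2, n_occurrences

ref = units[units["region"].isin(["North America", "Europe"])]
X = sm.add_constant(np.log10(ref["area_km2"]))
fit = sm.OLS(np.log10(ref["n_occurrences"] + 1), X).fit()

other = units[~units["region"].isin(["North America", "Europe"])]
pred = 10 ** fit.predict(sm.add_constant(np.log10(other["area_km2"]))) - 1
shortfall = (pred - other["n_occurrences"]).clip(lower=0).sum()
print(f"occurrences needed for sampling parity: {shortfall:,.0f}")
```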
Long-chain omega-3 polyunsaturated fatty acid (LC n-3 PUFA) supplements, rich in eicosapentaenoic acid and/or docosahexaenoic acid, are increasingly being recommended within athletic institutions. However, the wide range of doses, durations and study designs implemented across trials makes it difficult to provide clear recommendations. The importance of study design characteristics in LC n-3 PUFA trials has been detailed in cardiovascular disease research, and these considerations may guide LC n-3 PUFA study design in healthy cohorts. This systematic review examined the quality of studies and study design considerations used in evaluating the evidence for LC n-3 PUFA improving performance in physically trained adults. SCOPUS, PubMed and Web of Science electronic databases were searched to identify studies that supplemented LC n-3 PUFA in physically trained participants. Forty-six (n = 46) studies met the inclusion criteria. Most studies used a randomised controlled design. Risk of bias, assessed using the design-appropriate Cochrane Collaboration tool, revealed that studies had a predominant judgment of ‘some concerns’, ‘high risk’ or ‘moderate risk’ in randomised controlled, randomised crossover or non-randomised studies, respectively. A custom five-point quality assessment scale demonstrated that no study satisfied all recommendations for LC n-3 PUFA study design. This review has highlighted that the disparate range of study designs is likely contributing to the inconclusive state of outcomes pertaining to LC n-3 PUFA as a potential ergogenic aid. Further research must adequately account for the specific LC n-3 PUFA study design considerations, underpinned by a clear hypothesis, to achieve evidence-based dose, duration and composition recommendations for physically trained individuals.
If people with episodic mental-health conditions lose their job due to an episode of their mental illness, they often experience negative personal consequences. Therefore, reintegration after sick leave is critical to avoid unfavorable courses of disease, longer inability to work, prolonged payment of sickness benefits, and unemployment. Existing return-to-work (RTW) programs have mainly focused on “common mental disorders” and have often used very elaborate and costly interventions without yielding convincing effects. The aim of the RETURN study was to evaluate an easy-to-implement RTW intervention specifically addressing persons with mental illnesses severe enough to require inpatient treatment.
Methods:
The RETURN study was a multi-center, cluster-randomized controlled trial in acute psychiatric wards addressing inpatients suffering from a psychiatric disorder. In intervention wards, case managers (RTW experts) were introduced who supported patients in their RTW process, while in control wards treatment as usual was continued.
Results:
A total of 268 patients were recruited for the trial. Patients in the intervention group had returned to their workplace more often at 6 and 12 months, which was also reflected in more days at work. These group differences were statistically significant at 6 months. However, for the main outcome (days at work at 12 months), the differences were no longer statistically significant (p = 0.14). Intervention patients returned to their workplace earlier than patients in the control group (p = 0.040).
Conclusions:
The RETURN intervention has shown the potential of case-management interventions when addressing RTW. Further analyses, especially the qualitative ones, may help to better understand limitations and potential areas for improvement.
Neurological involvement associated with SARS-CoV-2 infection is increasingly recognized. However, the specific characteristics and prevalence in pediatric patients remain unclear. The objective of this study was to describe the neurological involvement in a multinational cohort of hospitalized pediatric patients with SARS-CoV-2.
Methods:
This was a multicenter observational study of children <18 years of age with confirmed SARS-CoV-2 infection, or with multisystem inflammatory syndrome in children (MIS-C) and laboratory evidence of SARS-CoV-2 infection, admitted to 15 tertiary hospitals/healthcare centers in Canada, Costa Rica, and Iran between February 2020 and May 2021. Descriptive statistical analyses were performed, and logistic regression was used to identify factors associated with neurological involvement.
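As a minimal illustration of the logistic-regression step described here (not the study's actual code), the sketch below models neurological involvement from binary clinical covariates and reports odds ratios; the file and column names are hypothetical.

```python
# Hedged sketch of the factor analysis (assumed 0/1 indicator columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort.csv")  # hypothetical patient-level dataset

# Exponentiated logit coefficients are odds ratios like those in the
# Results (e.g. the OR for ICU admission).
fit = smf.logit("neuro_involvement ~ icu_admission + mis_c"
                " + fever_in_hospital + gi_involvement", data=df).fit()
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```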
Results:
One hundred forty-seven (21%) of 697 hospitalized children with SARS-CoV-2 infection had neurological signs/symptoms. Headache (n = 103), encephalopathy (n = 28), and seizures (n = 30) were the most frequently reported. Neurological signs/symptoms were significantly associated with ICU admission (OR: 1.71, 95% CI: 1.15–2.55; p = 0.008), satisfaction of MIS-C criteria (OR: 3.71, 95% CI: 2.46–5.59; p < 0.001), fever during hospitalization (OR: 2.15, 95% CI: 1.46–3.15; p < 0.001), and gastrointestinal involvement (OR: 2.31, 95% CI: 1.58–3.40; p < 0.001). Non-headache neurological manifestations were significantly associated with ICU admission (OR: 1.92, 95% CI: 1.08–3.42; p = 0.026), underlying neurological disorders (OR: 2.98, 95% CI: 1.49–5.97; p = 0.002), and a history of fever prior to hospital admission (OR: 2.76, 95% CI: 1.58–4.82; p < 0.001).
Discussion:
In this study, approximately 21% of hospitalized children with SARS-CoV-2 infection had neurological signs/symptoms. Future studies should focus on pathogenesis and long-term outcomes in these children.
This study investigates the amount and valence of information selected during single-item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after making a preliminary rating exhibited valence bias in their selection of customer reports. In contrast, the participants who did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is used to develop cognitive coherence.
We examine financial challenges of purchasing items that are readily available yet symbolic of loving relationships. Using weddings and funerals as case studies, we find that people indirectly pay to avoid taboo monetary trade-offs. When purchasing items symbolic of love, respondents chose higher-priced, higher-quality items over equally appealing lower-priced, lower-quality items (Study 1), searched less for lower-priced items (Study 2), and were less willing to negotiate prices (Study 3). The effect was present for experienced consumers (Study 1), affectively positive and negative events (Study 2), and more routine purchase events (Study 3). Trade-off avoidance, however, was limited to monetary trade-offs associated with loved ones. When either money or love was omitted from the decision context, people were more likely to engage in trade-off reasoning. By abandoning cost-benefit reasoning in order to avoid painful monetary trade-offs, people spend more money than if they engaged in trade-off-based behaviors, such as seeking lower-cost options or requesting lower prices.
Assistive forces transmitted from wearable robots to the robot’s users are often defined by controllers that rely on the accurate estimation of the human posture. The compliant nature of the human–robot interface can negatively affect the robot’s ability to estimate the posture. In this article, we present a novel algorithm that uses machine learning to correct these errors in posture estimation. For that, we recorded motion capture data and robot performance data from a group of participants (n = 8; 4 females) who walked on a treadmill while wearing a wearable robot, the Myosuit. Participants walked on level ground at various gait speeds and levels of support from the Myosuit. We used optical motion capture data to measure the relative displacement between the person and the Myosuit. We then combined these data with data derived from the robot to train a model, using a gradient boosting algorithm (XGBoost), that corrected for the mechanical compliance errors in posture estimation. For the Myosuit controller, we were particularly interested in the angle of the thigh segment. Using our algorithm, the estimated thigh segment’s angle RMS error was reduced from 6.3° (2.3°) to 2.5° (1.0°), mean (standard deviation). The average maximum error was reduced from 13.1° (4.9°) to 5.9° (2.1°). These improvements in posture estimation were observed for all of the considered assistance force levels and walking speeds. This suggests that ML-based algorithms provide a promising opportunity to be used in combination with wearable-robot sensors for accurate user posture estimation.
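A stripped-down version of such a correction model might look like the following sketch, which trains an XGBoost regressor to predict the compliance-induced thigh-angle error from robot-derived features and subtracts the prediction from the raw estimate. The feature names and data file are hypothetical; the Myosuit's actual sensor set and pipeline are not specified in this abstract.

```python
# Hedged sketch: learn to correct compliance-induced posture errors
# (assumed feature columns; ground truth comes from motion capture).
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("myosuit_gait.csv")  # hypothetical recorded dataset
features = ["thigh_angle_raw", "gait_speed", "support_level", "cable_tension"]

# Target: estimation error relative to motion-capture ground truth.
y = df["thigh_angle_raw"] - df["thigh_angle_mocap"]

X_train, X_test, y_train, y_test = train_test_split(df[features], y, test_size=0.2)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Corrected estimate = raw estimate minus the predicted error.
corrected = X_test["thigh_angle_raw"] - model.predict(X_test)
truth = X_test["thigh_angle_raw"] - y_test
print(f"corrected RMSE: {np.sqrt(mean_squared_error(truth, corrected)):.2f} deg")
```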
Advances in genomic science are providing high-resolution insights into the diversity of species and populations, and increased understanding of how they function and interact. The application of genomic data to conservation translocations is now widespread, with many examples of genomic data being used to guide the implementation of translocations, ranging from selection of donor individuals/populations, understanding the dynamics of inter-specific interactions, and the design and monitoring of population reinforcements to achieve genetic and/or evolutionary rescue. The rapidly accelerating generation of genomic data from the world’s species will lead to further major advances in understanding biodiversity at the genomic level, with associated benefits for translocation management and monitoring. However, genomic data and genomic technologies are not a panacea, and despite the power of the approaches, uncertainties can remain in data interpretation and translation into practical management actions. As the science at the interface of genomics and conservation translocations continues to develop, there is a pressing need to focus continually on translating data to support practical decision-making and, at least in the short term, to develop further guidance and thinking that allows extrapolation from well-resourced studies with extensive genomic data to guide actions and decisions in translocations where generating genomic data is not yet feasible. As genetic/genomic technologies enable greater technological interventions for conservation translocations, the need to extend multi-stakeholder dialogue will continue and grow; this ranges from promoting informed dialogue between geneticists and conservationists to ensure effective deployment of approaches and resources, to wider societal engagement in setting the agenda for if, when, and how approaches involving genetic modification should be deployed.
Pre-eclampsia is a serious complication of pregnancy, and maternal nutritional factors may play protective roles or exacerbate risk. The tendency to focus on single nutrients as a risk factor obscures the complexity of possible interactions, which may be important given the complex nature of pre-eclampsia. An evidence review was conducted to compile definite, probable, possible and indirect nutritional determinants of pre-eclampsia to map a nutritional conceptual framework for pre-eclampsia prevention. Determinants of pre-eclampsia were first compiled through an initial consultation with experts. Second, an expanded literature review was conducted to confirm associations, elicit additional indicators and evaluate evidence. The strength of association was evaluated as definite (relative risk (RR) < 0·40 or ≥ 3·00), probable (RR 0·40–0·69 or 1·50–2·99), possible (RR 0·70–0·89 or 1·10–1·49) or not discernible (RR 0·90–1·09). The quality of evidence was evaluated using Grading of Recommendations, Assessment, Development and Evaluation (GRADE). Twenty-five nutritional factors were reported in two umbrella reviews and twenty-two meta-analyses. Of these, fourteen were significantly associated with pre-eclampsia incidence. Higher serum Fe emerged as a definite nutritional risk factor for pre-eclampsia incidence across populations, while low serum Zn was a risk factor in Asia and Africa. Maternal vitamin D deficiency was a probable risk factor, and Ca and/or vitamin D supplementation were probable protective nutritional factors. Healthy maternal dietary patterns were possibly associated with lower risk of developing pre-eclampsia. Potential indirect pathways of maternal nutritional factors and pre-eclampsia may exist through obesity, maternal anaemia and gestational diabetes mellitus. Research gaps remain on the influence of household capacities and socio-cultural, economic and political contexts, as well as interactions with medical conditions.
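The RR banding above is mechanical enough to state as code; the short function below is a hypothetical encoding of those cut-offs (treating the bands as contiguous), not code from the review itself.

```python
# Hedged sketch of the review's RR strength-of-association bands.
def classify_rr(rr: float) -> str:
    """Map a relative risk to the review's evidence-strength band."""
    if rr < 0.40 or rr >= 3.00:
        return "definite"
    if rr < 0.70 or rr >= 1.50:
        return "probable"
    if rr < 0.90 or rr >= 1.10:
        return "possible"
    return "not discernible"  # RR 0.90-1.09

assert classify_rr(3.20) == "definite"   # e.g. a strong risk factor
assert classify_rr(1.20) == "possible"
assert classify_rr(0.55) == "probable"   # e.g. a protective factor
```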
Exclusion of special populations (older adults; pregnant women, children, and adolescents; individuals of lower socioeconomic status and/or who live in rural communities; people from racial and ethnic minority groups; individuals from sexual or gender minority groups; and individuals with disabilities) in research is a pervasive problem, despite efforts and policy changes by the National Institutes of Health and other organizations. These populations are adversely impacted by social determinants of health (SDOH) that reduce access and ability to participate in biomedical research. In March 2020, the Northwestern University Clinical and Translational Sciences Institute hosted the “Lifespan and Life Course Research: Integrating Strategies” Un-Meeting to discuss barriers and solutions to underrepresentation of special populations in biomedical research. The COVID-19 pandemic highlighted how exclusion of representative populations in research can increase health inequities. We applied findings of this meeting to perform a literature review of barriers and solutions to recruitment and retention of representative populations in research and to discuss how findings are important to research conducted during the ongoing COVID-19 pandemic. We highlight the role of SDOH, review barriers and solutions to underrepresentation, and discuss the importance of a structural competency framework to improve research participation and retention among special populations.
Research on prehistoric mainland Southeast Asia is dominated by mortuary contexts, leaving processes such as the transition to sedentism relatively understudied. Recent excavations in southern Vietnam, however, have recovered new evidence for settlement. The authors report on investigations at the Neolithic site of Loc Giang (3980–3270 cal BP) in southern Vietnam, where excavation revealed a vertical sequence of more than 30 surfaces. Microarchaeological analyses indicate that these features are carefully prepared lime-mortar floors; the lime was probably produced from burnt shell. The floors date to between 3510 and 3150 cal BP, providing the earliest known evidence for the use of lime mortar, and for durable settlement construction, in this region.
Despite the importance of secondary dormancy for plant life cycle timing and survival, there is insufficient knowledge about the (epigenetic) regulation of this trait at the molecular level. Our aim was to determine the role of (epi)genetic processes in the regulation of secondary seed dormancy using natural genotypes of the widely distributed Capsella bursa-pastoris. Seeds of nine ecotypes were exposed to control conditions or histone deacetylase inhibitors [trichostatin A (TSA), valproic acid] during imbibition to study the effects of hyper-acetylation on secondary seed dormancy induction and germination. Valproic acid increased secondary dormancy, and both compounds caused a delay of t50 for germination (radicle emergence) but not of t50 for testa rupture, demonstrating that they reduced the speed of germination. Transcriptome analysis of one accession exposed to valproic acid versus water showed mixed regulation of ABA, negative regulation of GAs, BRs and auxins, as well as up-regulation of SNL genes, which might explain the observed delay in germination and increase in secondary dormancy. In addition, two accessions differing in secondary dormancy depth (deep vs non-deep) were studied using RNA-seq to reveal the potential regulatory processes underlying this trait. Phytohormone synthesis or signalling was generally up-regulated for ABA (e.g. NCED6, NCED2, ABCG40, ABI3) and down-regulated for GAs (GA20ox1, GA20ox2, bHLH93), ethylene (ACO1, ERF4-LIKE, ERF105, ERF109-LIKE), BRs (BIA1, CYP708A2-LIKE, probable WRKY46, BAK1, BEN1, BES1, BRI1) and auxin (GH3.3, GH3.6, ABCB19, TGG4, AUX1, PIN6, WAT1). Epigenetic candidates for variation in secondary dormancy depth include SNL genes, histone deacetylases and associated genes (HDA14, HDA6-LIKE, HDA-LIKE, ING2, JMJ30), as well as sequences linked to histone acetyltransferases (bZIP11, ARID1A-LIKE), or to gene silencing through histone methylation (SUVH7, SUVH9, CLF). Together, these results show that phytohormones and epigenetic regulation play an important role in controlling differences in secondary dormancy depth between accessions.
Observational studies suggest that 25-hydroxyvitamin D (25(OH)D) concentration is inversely associated with pain. However, findings from intervention trials are inconsistent. We assessed the effect of vitamin D supplementation on pain using data from a large, double-blind, population-based, placebo-controlled trial (the D-Health Trial). In total, 21 315 participants (aged 60–84 years) were randomly assigned to a monthly dose of 60 000 IU vitamin D3 or matching placebo. Pain was measured using the six-item Pain Impact Questionnaire (PIQ-6), administered 1, 2 and 5 years after enrolment. We used regression models (linear for continuous PIQ-6 score and log-binomial for binary categorisations of the score, namely ‘some or more pain impact’ and ‘presence of any bodily pain’) to estimate the effect of vitamin D on pain. We included 20 423 participants who completed ≥1 PIQ-6. In blood samples collected from 3943 randomly selected participants (∼800 per year), the mean 25(OH)D concentrations were 77 (sd 25) and 115 (sd 30) nmol/l in the placebo and vitamin D groups, respectively. Most (76 %) participants were predicted to have a 25(OH)D concentration >50 nmol/l at baseline. The mean PIQ-6 score was similar in all surveys (∼50·4). The adjusted mean difference in PIQ-6 score (vitamin D cf placebo) was 0·02 (95 % CI −0·20, 0·25). The proportion of participants with some or more pain impact and with the presence of bodily pain was also similar between groups (both prevalence ratios 1·01, 95 % CI 0·99, 1·03). In conclusion, supplementation with 60 000 IU of vitamin D3/month had a negligible effect on bodily pain.
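As an illustration of the stated analysis strategy (not the trial's code), the sketch below estimates the treatment effect on the continuous PIQ-6 score with a linear model and a prevalence ratio for 'any bodily pain' with a log-binomial GLM; all file and column names are hypothetical.

```python
# Hedged sketch of the two regression models described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("dhealth_piq6.csv")  # hypothetical trial data

# Linear model for the continuous PIQ-6 score (adjusted mean difference).
linear = smf.ols("piq6 ~ vitamin_d", data=df).fit()

# Log-binomial model: the exponentiated coefficient is a prevalence ratio.
logbin = smf.glm("any_pain ~ vitamin_d", data=df,
                 family=sm.families.Binomial(link=sm.families.links.Log())).fit()

print(linear.params["vitamin_d"])          # mean difference in PIQ-6
print(np.exp(logbin.params["vitamin_d"]))  # prevalence ratio
```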
Bacterial survival on, and interactions with, human skin may explain the epidemiological success of MRSA strains. We evaluated the bacterial counts for 27 epidemic and 31 sporadic MRSA strains on 3D epidermal models based on N/TERT cells (NEMs) after 1, 2 and 8 days. In addition, the expression of antimicrobial peptides (hBD-2, RNase 7), inflammatory cytokines (IL-1β, IL-6) and the chemokine IL-8 by NEMs was assessed using immunoassays, and the expression of 43 S. aureus virulence factors was determined by a multiplex competitive Luminex assay. To explore donor variation, bacterial counts for five epidemic and seven sporadic MRSA strains were determined on 3D primary keratinocyte models (LEMs) from three human donors. Bacterial survival was comparable on NEMs between the two groups, but on LEMs, sporadic strains showed significantly lower survival numbers compared to epidemic strains. Both groups triggered the expression of immune factors. Upon interaction with NEMs, only the epidemic MRSA strains expressed pore-forming toxins, including alpha-hemolysin (Hla), gamma-hemolysin (HlgB), Panton-Valentine leucocidin (LukS) and LukED. Together, these data indicate that the outcome of the interaction between MRSA and human skin mimics depends on the unique combination of bacterial strain and host factors.