The Subglacial Antarctic Lakes Scientific Access (SALSA) Project accessed Mercer Subglacial Lake using environmentally clean hot-water drilling to examine interactions among ice, water, sediment, rock, microbes and carbon reservoirs within the lake water column and underlying sediments. A ~0.4 m diameter borehole was melted through 1087 m of ice and maintained over ~10 days, allowing observation of ice properties and collection of water and sediment with various tools. Over this period, SALSA collected: 60 L of lake water and 10 L of deep borehole water; microbes >0.2 μm in diameter from in situ filtration of ~100 L of lake water; 10 multicores 0.32–0.49 m long; two gravity cores (1.0 and 1.76 m long); three conductivity–temperature–depth profiles of borehole and lake water; five discrete-depth current meter measurements in the lake; and images of ice, the lake water–ice interface and lake sediments. Temperature and conductivity data showed the hydrodynamic character of water mixing between the borehole and lake after entry. Models simulating melting of the ~6 m thick basal accreted ice layer imply that debris fall-out through the ~15 m water column to the lake sediments from borehole melting had little effect on the stratigraphy of surficial sediment cores.
Recent aerial photographs of Pen y Gaer have revealed significant new details about the fort and its extramural adjuncts. These provide a springboard for examining the wider implications for extramural activity outside other forts across Wales and the Marches, and for exploring the function and chronology of potentially official buildings within the wider landscape of communications and control. Such an approach invites comparison with other frontier regions.
Healthcare personnel with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection were interviewed to describe activities and practices in and outside the workplace. Among 2,625 healthcare personnel, workplace-related factors that may increase infection risk were more common among nursing-home personnel than hospital personnel, whereas selected factors outside the workplace were more common among hospital personnel.
This study aimed to identify a well-fitting and theoretically justified item-level latent factor structure for the Wechsler Memory Scales (WMS)-IV verbal paired associates (VerbalPA) subtest to facilitate the ease and accuracy of score interpretations for patients with lateralized temporal lobe epilepsy (TLE).
Archival data were used from 250 heterogeneous neurosciences patients who were administered the WMS-IV as part of a standard neuropsychological assessment. Three theoretically motivated models for the latent structure of VerbalPA were tested using confirmatory factor analysis. The first model, based on cognitive principles of semantic processing from hub-and-spoke theory, tested whether performance is related to specific semantic features of target words. The second, motivated by the Cattell–Horn–Carroll (CHC) model of cognitive abilities, investigated whether the associative properties of items influence performance. A third, Hybrid model tested whether performance is related to both semantic and associative properties of items. The best-fitting model was tested for diagnostic group effects contrasting the heterogeneous neuroscience patients with subsets of left and right TLE (n = 51, n = 26, respectively) patients.
The Hybrid model was found to have the best fit. Patients with left TLE scored significantly less well than the heterogeneous neurosciences sample on selected semantic factor scores, although the effect size was small.
Future editions of the WMS may consider implementing a semantic scoring structure for the VerbalPA to facilitate test score interpretation. Additionally, these results suggest that principles of hub-and-spoke theory may be integrated into CHC cognitive ability taxonomy.
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
The 2 arms of the study included (1) PPE inoculation with coronaviruses followed by MB with light (MBL) decontamination treatment and (2) PPE treatment with MBL for 5 cycles of decontamination to determine maintenance of PPE performance.
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, and we treated them with 10 µM MB and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective, is low cost, and does not require specialized equipment, making it applicable in low- to high-resource settings.
Maternal nutrition is critical in mammalian development, influencing the epigenetic reprogramming of gametes, embryos, and fetal programming. We evaluated the effects of different levels of sulfur (S) and cobalt (Co) in the maternal diet throughout the pre- and periconceptional periods on the biochemical and reproductive parameters of the donors and the DNA methylome of the progeny in Bos indicus cattle. The low-S/Co group differed from the control with respect to homocysteine, folic acid, vitamin B12, insulin-like growth factor 1, and glucose. The oocyte yield was lower in heifers from the low-S/Co group than that in the control heifers. Embryos from the low-S/Co group exhibited 2320 differentially methylated regions (DMRs) across the genome compared with the control embryos. We also characterized candidate DMRs linked to the DNMT1 and DNMT3B genes in the blood and sperm cells of the adult progeny. A DMR located in DNMT1 that was identified in embryos remained differentially methylated in the sperm of the progeny from the low-S/Co group. Therefore, we associated changes in specific compounds in the maternal diet with DNA methylation modifications in the progeny. Our results help to elucidate the impact of maternal nutrition on epigenetic reprogramming in livestock, opening new avenues of research to study the effect of disturbed epigenetic patterns in early life on health and fertility in adulthood. Considering that cattle are physiologically similar to humans with respect to gestational length, our study may serve as a model for studies related to the developmental origin of health and disease in humans.
People with congenital heart disease (CHD) are at increased risk for executive functioning deficits. Meta-analyses of executive function measures in CHD patients compared to healthy controls have not been reported.
To examine differences in executive functions in individuals with CHD compared to healthy controls.
We performed a systematic review of publications from 1 January 1986 to 15 June 2020 indexed in PubMed, CINAHL, EMBASE, PsycInfo, Web of Science, and the Cochrane Library.
Inclusion criteria were (1) studies containing at least one executive function measure; (2) participants were over the age of three.
Data extraction and quality assessment were performed independently by two authors. We used a shifting unit-of-analysis approach and pooled data using a random effects model.
The search yielded 61,217 results. Twenty-eight studies met criteria. A total of 7789 people with CHD were compared with 8187 healthy controls. We found the following standardised mean differences: −0.628 (−0.726, −0.531) for cognitive flexibility and set shifting, −0.469 (−0.606, −0.333) for inhibition, −0.369 (−0.466, −0.273) for working memory, −0.334 (−0.546, −0.121) for planning/problem solving, −0.361 (−0.576, −0.147) for summary measures, and −0.444 (−0.614, −0.274) for reporter-based measures (p < 0.001).
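The pooled standardised mean differences above come from a random-effects model. As a hedged illustration (this is not the review's actual code or data), the DerSimonian–Laird estimator commonly used for such pooling can be sketched as:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) from study-level
    effect sizes (e.g. standardised mean differences) and their variances."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Illustrative study values only (not the review's data):
pooled, se = dersimonian_laird([-0.7, -0.5, -0.6], [0.08, 0.10, 0.12])
```

The negative pooled value corresponds to worse performance in the CHD group, matching the sign convention of the standardised mean differences reported above.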
Our analysis consisted of cross-sectional and observational studies. We could not quantify the effect of collinearity.
Individuals with CHD appear to have at least moderate deficits in executive functions. Given the growing population of people with CHD, more attention should be devoted to identifying executive dysfunction in this vulnerable group.
To investigate an outbreak of coronavirus disease 2019 (COVID-19) among operating room staff utilizing contact tracing, mass testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and environmental sampling.
Operating room staff with positive SARS-CoV-2 molecular testing.
Epidemiologic and environmental investigations were conducted, including contact tracing, environmental surveys and sampling, and review of the operating room schedule for staff-to-staff, staff-to-patient, and patient-to-staff SARS-CoV-2 transmission.
In total, 24 healthcare personnel (HCP) tested positive for SARS-CoV-2, including nurses (29%), surgical technologists (25%), and surgical residents (16%). Moreover, 19 HCP (79%) reported having used a communal area, most commonly break rooms (75%). Overall, 20 HCP (83%) reported symptomatic disease. In total, 72 environmental samples were collected from communal areas for SARS-CoV-2 genomic testing; none was positive. Furthermore, 236 surgical cases were reviewed for transmission: 213 (90%) had negative preoperative SARS-CoV-2 testing, 21 (9%) had a positive test on or before the date of surgery, and 2 (<1%) did not have a preoperative test performed. In addition, 40 patients underwent postoperative testing (mean, 13 days to postoperative testing), and 2 returned positive results. Neither of these 2 cases was linked to our outbreak.
Complacency in infection control practices among staff during peak community transmission of SARS-CoV-2 is believed to have driven staff-to-staff transmission. Prompt identification of the outbreak led to rapid interventions, ultimately allowing for uninterrupted surgical service.
To determine whether age, gender and marital status are associated with prognosis for adults with depression who sought treatment in primary care.
Medline, Embase, PsycINFO and Cochrane Central were searched from inception to 1st December 2020 for randomised controlled trials (RCTs) of adults seeking treatment for depression from their general practitioners, that used the Revised Clinical Interview Schedule so that there was uniformity in the measurement of clinical prognostic factors, and that reported on age, gender and marital status. Individual participant data were gathered from all nine eligible RCTs (N = 4864). Two-stage random-effects meta-analyses were conducted to ascertain the independent association between: (i) age, (ii) gender and (iii) marital status, and depressive symptoms at 3–4, 6–8, and 9–12 months post-baseline and remission at 3–4 months. Risk of bias was evaluated using QUIPS and quality was assessed using GRADE. PROSPERO registration: CRD42019129512. Pre-registered protocol https://osf.io/e5zup/.
There was no evidence of an association between age and prognosis before or after adjusting for depressive ‘disorder characteristics’ that are associated with prognosis (symptom severity, durations of depression and anxiety, comorbid panic disorder and a history of antidepressant treatment). Difference in mean depressive symptom score at 3–4 months post-baseline per 5-year increase in age = 0 (95% CI: −0.02 to 0.02). There was no evidence for a difference in prognoses for men and women at 3–4 months or 9–12 months post-baseline, but men had worse prognoses at 6–8 months (percentage difference in depressive symptoms for men compared to women: 15.08% (95% CI: 4.82 to 26.35)). However, this was largely driven by a single study that contributed data at 6–8 months and not the other time points. Further, there was little evidence for an association after adjusting for depressive ‘disorder characteristics’ and employment status (12.23% (−1.69 to 28.12)). Participants who were either single (percentage difference in depressive symptoms for single participants: 9.25% (95% CI: 2.78 to 16.13)) or no longer married (8.02% (95% CI: 1.31 to 15.18)) had worse prognoses than those who were married, even after adjusting for depressive ‘disorder characteristics’ and all available confounders.
Clinicians and researchers will continue to routinely record age and gender, but despite their importance for incidence and prevalence of depression, they appear to offer little information regarding prognosis. Patients who are single or no longer married may be expected to have slightly worse prognoses than those who are married. Ensuring marital status is recorded routinely alongside depressive ‘disorder characteristics’ in clinic may be important.
Mental disorders are common in people living with HIV (PLWH) but often remain untreated. This study aimed to explore the treatment gap for mental disorders in adults followed up in antiretroviral therapy (ART) programmes in South Africa and disparities between ART programmes regarding the provision of mental health services.
We conducted a cohort study using ART programme data and linked pharmacy and hospitalisation data to examine the 12-month prevalence of treatment for mental disorders and factors associated with the rate of treatment for mental disorders among adults, aged 15–49 years, followed up from 1 January 2012 to 31 December 2017 at one private care, one public tertiary care and two public primary care ART programmes in South Africa. We calculated the treatment gap for mental disorders as the discrepancy between the 12-month prevalence of mental disorders in PLWH (aged 15–49 years) in South Africa (estimated based on data from the Global Burden of Disease study) and the 12-month prevalence of treatment for mental disorders in ART programmes. We calculated adjusted rate ratios (aRRs) for factors associated with the treatment rate of mental disorders using Poisson regression.
In total, 182 285 ART patients were followed up over 405 153 person-years. In 2017, the estimated treatment gap for mental disorders was 40.5% (95% confidence interval [CI] 19.5–52.9) for patients followed up in private care, 96.5% (95% CI 95.0–97.5) for patients followed up in public primary care and 65.0% (95% CI 36.5–85.1) for patients followed up in public tertiary care ART programmes. Rates of treatment with antidepressants, anxiolytics and antipsychotics were 17 (aRR 0.06, 95% CI 0.06–0.07), 50 (aRR 0.02, 95% CI 0.01–0.03) and 2.6 (aRR 0.39, 95% CI 0.35–0.43) times lower in public primary care programmes than in the private sector programmes.
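The treatment-gap definition above reduces to simple arithmetic on the two prevalences. A minimal sketch, using illustrative numbers rather than the cohort's actual estimates:

```python
def treatment_gap(disorder_prev: float, treatment_prev: float) -> float:
    """Treatment gap as defined above: the share of the 12-month prevalence
    of mental disorders not matched by treatment, expressed as a percentage.
    Inputs are prevalences in percent."""
    return 100.0 * (1.0 - treatment_prev / disorder_prev)

# Illustrative only: if 20% of PLWH have a 12-month mental disorder but
# only 0.7% receive treatment, the gap is 96.5% -- the same order as the
# public primary care estimate reported above.
gap = treatment_gap(20.0, 0.7)
```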
There is a large treatment gap for mental disorders in PLWH in South Africa and substantial disparities in access to mental health services between patients receiving ART in the public vs the private sector. In the public sector and especially in public primary care, PLWH with common mental disorders remain mostly untreated.
Type 2 diabetes results mainly from weight gain in adult life and affects one in twelve people worldwide. In the Diabetes REmission Clinical Trial (DiRECT), the primary care-led Counterweight-Plus weight management program achieved remission of type 2 diabetes (for up to six years) for forty-six percent of patients after one year and thirty-six percent after two years. The objective of this study was to estimate the implementation costs of the program, as well as its two-year within-trial cost effectiveness and lifetime cost effectiveness.
Within-trial cost effectiveness included the Counterweight-Plus costs (including training, practitioner appointments, and low-energy diet), medications, and all routine healthcare contacts, combined with achieved remission rates. Lifetime cost per quality-adjusted life-year (QALY) was estimated according to projected durations of remissions, assuming continued relapse rates as seen in year two of DiRECT and the consequent life expectancy, quality of life and healthcare costs.
The two-year intervention cost was EUR 1,580 per participant, with over eighty percent of the costs incurred in year one. Compared with the control group, medication savings were EUR 259 (95% confidence interval [CI]: 166–352) for anti-diabetes drugs and EUR 29 (95% CI: 12–47) for anti-hypertensive medications. The intervention was modeled with a lifetime horizon to achieve a mean 0.06 (95% CI: 0.04–0.09) gain in QALYs for the DiRECT population and a mean total lifetime cost saving per participant of EUR 1,497 (95% CI: 755–2,331), with the intervention becoming cost-saving within six years.
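Results like a lifetime cost saving alongside a QALY gain are conventionally summarised as an incremental cost-effectiveness ratio (ICER). A hedged sketch, plugging in the abstract's point estimates purely for illustration:

```python
def icer(delta_cost: float, delta_qaly: float) -> float:
    """Incremental cost-effectiveness ratio: incremental cost per QALY gained.
    A negative cost with a positive QALY gain means the intervention is
    dominant -- it saves money and improves health."""
    return delta_cost / delta_qaly

# Using the point estimates above (a mean saving of EUR 1,497 and a mean
# gain of 0.06 QALYs), the intervention dominates usual care.
ratio = icer(-1497.0, 0.06)
```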
The intensive weight loss and maintenance program reduced the cost of anti-diabetes drugs through improved metabolic control, achieved diabetes remission in over one-third of participants, and reduced total healthcare contacts and costs over two years. A substantial lifetime healthcare cost saving is anticipated from periods of diabetes remission and delaying complications. Healthcare resources could be shifted cost effectively to establish diabetes remission services, using the existing DiRECT intervention, even if remissions are only maintained for limited durations. However, more research investment is needed to further improve weight-loss maintenance and extend remissions.
Susceptibility to infection such as SARS-CoV-2 may be influenced by host genotype. TwinsUK volunteers (n = 3261) completing the C-19 COVID-19 symptom tracker app allowed classical twin studies of COVID-19 symptoms, including predicted COVID-19, a symptom-based algorithm to predict true infection, derived from app users tested for SARS-CoV-2. We found heritability of 49% (32–64%) for delirium; 34% (20–47%) for diarrhea; 31% (8–52%) for fatigue; 19% (0–38%) for anosmia; 46% (31–60%) for skipped meals and 31% (11–48%) for predicted COVID-19. Heritability estimates were not affected by cohabiting or by social deprivation. The results suggest the importance of host genetics in the risk of clinical manifestations of COVID-19 and provide grounds for planning genome-wide association studies to establish specific genes involved in viral infectivity and the host immune response.
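Twin-study heritability estimates like those above are typically obtained from ACE structural equation models; Falconer's classical approximation from twin correlations conveys the underlying logic, though it is not necessarily the method this study used:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's approximation for heritability in a classical twin design:
    h^2 ~= 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait correlations
    within monozygotic and dizygotic twin pairs."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations for illustration: r_MZ = 0.55, r_DZ = 0.30
# gives an estimated heritability of 0.50.
h2 = falconer_h2(0.55, 0.30)
```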
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed retention by the target weed species at crop maturity, which enables seed collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes, and shatter rates increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Social cognitive deficits can have many negative consequences, spanning social withdrawal to psychopathology. Prior work has shown that child maltreatment may associate with poorer social cognitive skills in later life. However, no studies have examined this association from early childhood into adolescence. Using data from the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4,438), we examined the association between maltreatment (caregiver physical or emotional abuse; sexual or physical abuse), assessed repeatedly (every 1–3 years) from birth to age 9, and social cognitive skills at ages 7.5, 10.5, and 14 years. We evaluated the role of both the developmental timing (defined by age at exposure) and accumulation of maltreatment (defined as the number of occasions exposed) using a least angle regression variable selection procedure, followed by structural equation modeling. Among females, accumulation of maltreatment explained the most variation in social cognitive skills. For males, no significant associations were found. These findings underscore the importance of early intervention to minimize the accumulation of maltreatment and showcase the importance of prospective studies to understand the development of social cognition over time.
Ventenata [Ventenata dubia (Leers) Coss.], an invasive winter annual grass, negatively impacts grassland community composition and function in the Pacific Northwest. Ventenata dubia established in Palouse prairie (PP) and canyon grasslands (CG) of northern Idaho/eastern Washington in the mid-1980s to early 1990s. Understanding and comparing patterns of invasion can elucidate future trends as its range expands. We performed surveys in PP (2012 and 2013) and CG (2018) to assess V. dubia abundance. Specifically, we correlated species richness, Shannon diversity, rank abundance, and indicator species with no, low (<12.5%), and high (>12.5%) V. dubia cover. We used nonmetric multidimensional scaling analysis (NMDS) to visualize species similarities and associations with abiotic variables. In both ecoregions, V. dubia was very common, appearing in nearly 60% of 450 plots. When present, V. dubia cover averaged 26% (±2.3 SE) in PP and 19% (±1.8 SE) in CG. Indigenous plant species richness and diversity were lowest in plots with high V. dubia cover. In CG, this relationship held for nonindigenous species; in PP, nonindigenous plant richness and diversity were higher with high V. dubia cover. Ventenata dubia and other winter annual grasses (Bromus spp., medusahead [Taeniatherum caput-medusae (L.) Nevski]) were moderately associated according to the NMDS analysis. Indicator species analysis showed V. dubia was positively associated with nonindigenous winter annual grasses and negatively associated with indigenous low shrub species. Abiotic factors that explained V. dubia abundance included shallow soils and a south to west aspect. Overall, these findings indicate V. dubia can successfully invade both dry and relatively wet plant communities and is more abundant than other invasive annual grasses. We suggest these findings foreshadow what will happen in sagebrush steppe and Great Plains grasslands, regions where V. dubia recently became established.
The Eating Assessment in Toddlers food frequency questionnaire (EAT FFQ) has been shown to have good reliability and comparative validity for ranking nutrient intakes in young children. With the addition of food items (n 4), we aimed to re-assess the validity of the EAT FFQ and estimate calibration factors in a sub-sample of children (n 97) participating in the Growing Up Milk – Lite (GUMLi) randomised controlled trial (2015–2017). Participants completed the ninety-nine-item GUMLi EAT FFQ and record-assisted 24-h recalls (24HR) on two occasions. Energy and nutrient intakes were assessed at months 9 and 12 post-randomisation and calibration factors calculated to determine predicted estimates from the GUMLi EAT FFQ. Validity was assessed using Pearson correlation coefficients, weighted kappa (κ) and exact quartile categorisation. Calibration was calculated using linear regression models on 24HR, adjusted for sex and treatment group. Nutrient intakes were significantly correlated between the GUMLi EAT FFQ and 24HR at both time points. Energy-adjusted, de-attenuated Pearson correlations ranged from 0·3 (fibre) to 0·8 (Fe) at 9 months and from 0·3 (Ca) to 0·7 (Fe) at 12 months. Weighted κ for the quartiles ranged from 0·2 (Zn) to 0·6 (Fe) at 9 months and from 0·1 (total fat) to 0·5 (Fe) at 12 months. Exact agreement ranged from 30 to 74 %. Calibration factors predicted up to 56 % of the variation in the 24HR at 9 months and 44 % at 12 months. The GUMLi EAT FFQ remained a useful tool for ranking nutrient intakes with similar estimated validity compared with other FFQ used in children under 2 years.
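The calibration described above regresses 24HR intakes on FFQ intakes. A bare least-squares sketch with hypothetical data (the study's models additionally adjusted for sex and treatment group, which are omitted here for brevity):

```python
def calibrate(ffq, recall):
    """Ordinary least-squares slope and intercept for predicting 24-h recall
    intake from FFQ intake -- the simplest form of the calibration model
    described above (sex and treatment-group covariates omitted)."""
    n = len(ffq)
    mean_x = sum(ffq) / n
    mean_y = sum(recall) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ffq, recall))
    sxx = sum((x - mean_x) ** 2 for x in ffq)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical intakes: a slope of 0.5 would mean the FFQ overestimates
# recall-measured intake roughly twofold.
slope, intercept = calibrate([10.0, 20.0, 30.0], [5.0, 10.0, 15.0])
```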
From an evolutionary perspective, aggression is viewed as a flexible context-specific adaption that was selected for because it enhanced the survival and reproductive success of ancestral humans. Evolutionary pressures have impinged differentially on the sexes, leading to the hypothesis that sex differences should be manifest in aggressive behavior. Evidence to date supports key predictions made from sexual selection theory that women direct their aggression primarily toward same-sex competitors, which peaks as mate competition intensifies. Women demonstrate a notable preference across cultures for more indirect, as opposed to direct, forms of intrasexual rivalry as a likely consequence of heightened obligatory parental investment, lower lifetime reproductive potential, and the greater importance of maternal survival for the health and longevity of offspring. An evolutionary approach can yield unique insights into the sex-differentiated functions, development, and outcomes of aggressive behavior.