Research into the relationship between ecosystem services and human well-being, including poverty alleviation, has blossomed. However, little is known about who has produced this knowledge, what collaborative patterns and institutional and funding conditions have underpinned it, or what implications these matters may have. To investigate the potential implications of such knowledge production for conservation science and practice, we developed a social network analysis of the most prolific authors of research on ecosystem services and poverty alleviation. We show that 70% of these authors are men, most are trained in either the biological sciences or economics and almost none in the humanities. Eighty per cent of authors obtained their PhD from universities in the EU or the USA, and they are currently employed in these regions. The co-authorship network is strongly collaborative, without dominant authors, and with the top 30 most cited scholars being based in the USA and co-authoring frequently. These findings suggest, firstly, that the production of knowledge on ecosystem services and poverty alleviation has the same geographical and gender biases that characterize knowledge production in other scientific areas and, secondly, that there is an expertise bias that also characterizes other environmental matters. This is despite the fact that the research field of ecosystem services and poverty alleviation, by its nature, requires a multidisciplinary lens. These biases could be overcome by promoting more extensive collaboration and knowledge co-production.
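As a rough illustration of the co-authorship analysis described above, the sketch below (with placeholder author labels, not the study's data) builds an undirected co-authorship graph from paper author lists and computes degree and density, two standard social network measures:

```python
from itertools import combinations
from collections import Counter

# Toy co-authorship data: each inner list is one paper's author list.
# Labels A-E are placeholders, not authors from the study.
papers = [
    ["A", "B", "C"],
    ["A", "B"],
    ["C", "D"],
    ["B", "D", "E"],
]

# One undirected edge per unique co-author pair, deduplicated across papers.
edges = set()
for authors in papers:
    for u, v in combinations(sorted(authors), 2):
        edges.add((u, v))

# Degree centrality: how many distinct co-authors each author has.
nodes = {a for p in papers for a in p}
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Density: the share of all possible author pairs that actually co-author.
n = len(nodes)
density = 2 * len(edges) / (n * (n - 1))
```

A strongly collaborative network of the kind the abstract describes shows high density and a relatively flat degree distribution, rather than a few dominant hubs.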
The impacts of the COVID-19 pandemic extend to global biodiversity and its conservation. Although short-term beneficial or adverse impacts on biodiversity have been widely discussed, there is less attention to the likely political and economic responses to the crisis and their implications for conservation. Here we describe four possible alternative future policy responses: (1) restoration of the previous economy, (2) removal of obstacles to economic growth, (3) green recovery and (4) transformative economic reconstruction. Each alternative offers opportunities and risks for conservation. They differ in the agents they emphasize to mobilize change (e.g. markets or states) and in the extent to which they prioritize or downplay the protection of nature. We analyse the advantages and disadvantages of these four options from a conservation perspective. We argue that the choice of post-COVID-19 recovery strategy has huge significance for the future of biodiversity, and that conservationists of all persuasions must not shrink from engagement in the debates to come.
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed retention by the target weed species at crop maturity, enabling collection and processing of the seed at crop harvest. However, seed retention is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes, whereas shatter rates increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
Radiocarbon (14C) ages cannot provide absolutely dated chronologies for archaeological or paleoenvironmental studies directly but must be converted to calendar age equivalents using a calibration curve compensating for fluctuations in atmospheric 14C concentration. Although calibration curves are constructed from independently dated archives, they invariably require revision as new data become available and our understanding of the Earth system improves. In this volume, the international 14C calibration curves for both the Northern and Southern Hemispheres, as well as for the ocean surface layer, have been updated to include a wealth of new data and extended to 55,000 cal BP. Based on tree rings, IntCal20 now extends as a fully atmospheric record to ca. 13,900 cal BP. For the older part of the timescale, IntCal20 comprises statistically integrated evidence from floating tree-ring chronologies, lacustrine and marine sediments, speleothems, and corals. We utilized improved evaluation of the timescales and location-variable 14C offsets from the atmosphere (reservoir age, dead carbon fraction) for each dataset. New statistical methods have refined the structure of the calibration curves while maintaining a robust treatment of uncertainties in the 14C ages, the calendar ages and other corrections. The inclusion of modeled marine reservoir ages derived from a three-dimensional ocean circulation model has allowed us to apply more appropriate reservoir corrections to the marine 14C data rather than the previous use of constant regional offsets from the atmosphere. Here we provide an overview of the new and revised datasets and the associated methods used for the construction of the IntCal20 curve and explore potential regional offsets for tree-ring data.
We discuss the main differences with respect to the previous calibration curve, IntCal13, and some of the implications for archaeology and geosciences ranging from the recent past to the time of the extinction of the Neanderthals.
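Calibrating a single 14C determination against a curve of this kind can be sketched as a Gaussian likelihood evaluated at each tabulated calendar age. The sketch below uses a toy, linear curve segment rather than real IntCal20 values; the numbers are illustrative only:

```python
import numpy as np

# Hypothetical excerpt of a calibration curve: calendar age (cal BP),
# corresponding atmospheric 14C age (BP), and curve uncertainty (1-sigma).
# Real curves such as IntCal20 tabulate thousands of such points.
cal_bp = np.arange(3000, 3201)                 # calendar ages, 1-yr steps
c14_age = 2900 + 0.8 * (cal_bp - 3000)         # toy, monotonic curve segment
c14_sig = np.full(cal_bp.shape, 15.0)          # toy curve uncertainty

def calibrate(measured_age, measured_sig):
    """Return a normalized probability over calendar ages for one 14C date."""
    # Combine measurement and curve uncertainties in quadrature.
    total_sig = np.sqrt(measured_sig**2 + c14_sig**2)
    # Gaussian likelihood of the measurement at each calendar age.
    like = np.exp(-0.5 * ((measured_age - c14_age) / total_sig) ** 2) / total_sig
    return like / like.sum()

post = calibrate(2980, 25)                     # a 2980 +/- 25 BP determination
mode = cal_bp[np.argmax(post)]                 # most probable calendar age
```

Real calibration curves wiggle rather than run linearly, so a single 14C age often maps to a multimodal calendar-age distribution; the normalization step is what allows probability ranges to be read off directly.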
UK Biobank is a well-characterised cohort of over 500 000 participants including genetics, environmental data and imaging. An online mental health questionnaire was designed for UK Biobank participants to expand its potential.
To describe the development, implementation and results of this questionnaire.
An expert working group designed the questionnaire, using established measures where possible, and consulting a patient group. Operational criteria were agreed for defining likely disorder and risk states, including lifetime depression, mania/hypomania, generalised anxiety disorder, unusual experiences and self-harm, and current post-traumatic stress and hazardous/harmful alcohol use.
A total of 157 366 completed online questionnaires were available by August 2017. Participants were aged 45–82 (53% were ≥65 years) and 57% were women. Comparison of self-reported diagnosed mental disorder with a contemporary study shows a similar prevalence, despite respondents being of higher average socioeconomic status. Lifetime depression was a common finding, with 24% (37 434) of participants meeting criteria; current hazardous/harmful alcohol use criteria were met by 21% (32 602), whereas all other criteria were met by less than 8% of participants. There was extensive comorbidity among the syndromes. Mental disorders were associated with a high neuroticism score, adverse life events and long-term illness; addiction and bipolar affective disorder in particular were associated with measures of deprivation.
The UK Biobank questionnaire represents a very large mental health survey in itself, and the results presented here show high face validity, although caution is needed because of selection bias. Built into UK Biobank, these data intersect with other health data to offer unparalleled potential for crosscutting biomedical research involving mental health.
When Hurricane Harvey struck the Texas coastline on August 25, 2017, it resulted in 88 fatalities and more than US $125 billion in damage to infrastructure. The floods associated with the storm created a toxic mix of chemicals, sewage and other biohazards, and over 6 million cubic meters of garbage in Houston alone. The level of biohazard exposure and injuries from trauma among persons residing in affected areas was widespread and likely contributed to increases in emergency department (ED) visits in Houston and cities receiving hurricane evacuees. We investigated medical surge resulting from these evacuations in Dallas–Fort Worth (DFW) metroplex EDs.
We used data sourced from the North Texas Syndromic Surveillance Region 2/3 in ESSENCE to investigate ED visit surge following the storm in DFW hospitals because this area received evacuees from the 60 counties with disaster declarations due to the storm. We used interrupted time series (ITS) analysis to estimate the magnitude and duration of the ED surge. ITS was applied to all ED visits in DFW and to visits made by patients residing in any of the 60 counties with disaster declarations due to the storm. The DFW metropolitan statistical area included 55 hospitals. Time series analyses examined data from March 1, 2017–January 6, 2018, with a focus on the storm impact period, August 14–September 15, 2017. Data from before, during, and after the storm were visualized spatially and temporally to characterize magnitude, duration, and spatial variation of medical surge attributable to Hurricane Harvey.
During the study period overall, ED visits in the DFW area rose immediately by about 11% (95% CI: 9%, 13%), amounting to ~16 500 excess total visits before returning to the baseline on September 21, 2017. Visits by patients identified as residing in disaster declaration counties to DFW hospitals rose immediately by 127% (95% CI: 125%, 129%), amounting to 654 excess visits by September 29, 2017, when visits returned to the baseline. A spatial analysis revealed that evacuated patients were strongly clustered (Moran’s I = 0.35, P < 0.0001) among 5 of the counties with disaster declarations in the 11-day window during the storm surge.
The observed increase in ED visits in DFW due to Hurricane Harvey and ensuing evacuation was significant. Anticipating medical surge following large-scale hurricanes is critical for community preparedness planning. Coordinated planning across stakeholders is necessary to safeguard the population and for a skillful response to medical surge needs. Plans that address hurricane response, in particular, should have contingencies for support beyond the expected disaster areas.
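The segmented-regression form of ITS used in studies like this one can be sketched on synthetic counts. Everything below (dates, surge window, effect sizes) is illustrative, not the study's data:

```python
import numpy as np

# Synthetic daily ED visit counts: a baseline level, a mild secular trend,
# and a hypothetical 30-day surge of about +55 visits/day starting at day 60.
rng = np.random.default_rng(0)
days = np.arange(120)
impact = (days >= 60) & (days < 90)            # hypothetical surge window
counts = 500 + 0.2 * days + 55 * impact + rng.normal(0, 10, days.size)

# Segmented regression: intercept, secular trend, and a level-change term
# that switches on during the impact window.
X = np.column_stack([np.ones(days.size), days.astype(float), impact.astype(float)])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)

level_change = beta[2]                          # estimated surge in daily visits
baseline_mid = beta[0] + beta[1] * 75           # fitted baseline mid-window
pct_increase = 100 * level_change / baseline_mid
```

The level-change coefficient, divided by the fitted baseline, gives the kind of percentage surge estimate reported in the abstract; a fuller model would also allow a slope change and autocorrelated errors.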
Hurricane Maria caused catastrophic damage in Puerto Rico, increasing the risk for morbidity and mortality in the post-impact period. We aimed to establish a syndromic surveillance system to describe the number and type of visits at 2 emergency health-care settings in the same hospital system in Ponce, Puerto Rico.
We implemented a hurricane surveillance system by interviewing patients with a short questionnaire about the reason for visit at a hospital emergency department and associated urgent care clinic in the 6 mo after Hurricane Maria. We then evaluated the system by comparing findings with data from the electronic medical record (EMR) system for the same time period.
The hurricane surveillance system captured information from 5116 participants across the 2 sites, representing 17% of all visits captured in the EMR for the same period. Most visits were associated with acute illness/symptoms (79%), followed by injury (11%). The hurricane surveillance and EMR data were similar, proportionally, by sex, age, and visit category.
The hurricane surveillance system provided timely and representative data about the number and type of visits at 2 sites. This system, or an adapted version using available electronic data, should be considered in future disaster settings.
As demonstrated by neuroimaging data, the human brain contains systems that control responses to threat. The revised Reinforcement Sensitivity Theory of personality predicts that individual differences in the reactivity of these brain systems produce anxiety and fear-related personality traits. Here we discuss some of the challenges in testing this theory and, as an example, present a pilot study that aimed to dissociate brain activity during pursuit by threat and goal conflict. We did this by translating the Mouse Defense Test Battery for human fMRI use. In this version, dubbed the Joystick Operated Runway Task (JORT), we repeatedly exposed 24 participants to pursuit and goal conflict, with and without threat of electric shock. The runway design of JORT allowed the effect of threat distance on brain activation to be evaluated independently of context. Goal conflict plus threat of electric shock caused deactivation in a network of brain areas that included the fusiform and middle temporal gyri, as well as the default mode network core, including medial frontal regions, precuneus and posterior cingulate gyrus, and laterally the inferior parietal and angular gyri. Consistent with earlier research, we also found that imminent threat activated the midbrain and that this effect was significantly stronger during the simple pursuit condition than during goal conflict. Also consistent with earlier research, we found significantly greater hippocampal activation during goal conflict than pursuit by imminent threat. In conclusion, our results contribute knowledge to theories linking anxiety disorders to altered functioning in defensive brain systems and also highlight challenges in this research domain.
As the IAU heads towards its second century, many changes have simultaneously transformed Astronomy and the human condition world-wide. Amid the amazing recent discoveries of exoplanets, primeval galaxies, and gravitational radiation, the human condition on Earth has become blazingly interconnected, yet beset with ever-increasing problems of over-population, pollution, and never-ending wars. Fossil-fueled global climate change has begun to yield perilous consequences. And the displacement of people from war-torn nations has reached levels not seen since World War II.
The combination of sensitivity and large sky coverage of the ALFALFA HI survey has enabled the detection of difficult-to-observe low-mass galaxies in large numbers, including dwarf galaxies overlooked in optical surveys. Three different, but connected, studies of dwarf galaxies from the ALFALFA survey are of particular interest: SHIELD (Survey of HI in Extremely Low-mass Dwarfs), candidate gas-rich ultra-faint dwarf galaxies, and the (Almost) Dark population. SHIELD is a systematic multiwavelength study of all dwarf galaxies from ALFALFA with MHI < 10^7.2 M⊙ and clear optical counterparts. Candidate gas-rich ultra-faint dwarf galaxies extend the dwarf galaxy population to even lower masses. These galaxies are identified as isolated HI clouds with no discernible optical counterpart, but subsequent observations reveal that some are extremely faint, gas-dominated galaxies. Leo P, discovered first as an HI detection and then found to be an actively star-forming galaxy, bridges the gap between these candidate galaxies and the SHIELD sample. The (Almost) Dark sample consists of galaxies that are clear detections in ALFALFA but whose optical counterparts are overlooked in current optical surveys. This sample includes field gas-rich ultra-diffuse galaxies. Coma P, with a peak surface brightness of only ∼26.4 mag arcsec^−2 in g′, demonstrates the sort of extreme low-surface-brightness galaxy that can be discovered in an HI survey.
As the concept of ecosystem services is applied more widely in conservation, its users will encounter the issue of poverty alleviation. Policy initiatives involving ecosystem services are often marked by their use of win-win narratives that conceal the trade-offs they must entail. Modelling this paper on an earlier essay about conservation and poverty, we explore the different views that underlie apparent agreement. We identify five positions that reflect different mixes of concern for ecosystem condition, poverty and economic growth, and we suggest that acknowledging these helps to uncover the underlying goals of policy interventions and the trade-offs they involve in practice. Recognizing their existence and foundations can ultimately support the emergence of more legitimate and robust policies.
UK Biobank is a well-characterised cohort of over 500 000 participants that offers unique opportunities to investigate multiple diseases and risk factors.
An online mental health questionnaire completed by UK Biobank participants was expected to expand the potential for research into mental disorders.
An expert working group designed the questionnaire, using established measures where possible, and consulting with a patient group regarding acceptability. Operational criteria were used to define cases of lifetime depression, mania, anxiety disorder, psychotic-like experiences and self-harm, as well as current post-traumatic stress and alcohol use disorders.
A total of 157 366 completed online questionnaires were available by August 2017. Comparison of self-reported diagnosed mental disorder with a contemporary study shows a similar prevalence, despite respondents being of higher average socioeconomic status than the general population across a range of indicators. Thirty-five per cent (55 750) of participants had at least one defined syndrome, of which lifetime depression was the most common at 24% (37 434). There was extensive comorbidity among the syndromes. Mental disorders were associated with a high neuroticism score, adverse life events and long-term illness; addiction and bipolar affective disorder in particular were associated with measures of deprivation.
The questionnaire represents a very large mental health survey in itself, and the results presented here show high face validity, although caution is needed owing to selection bias. Built into UK Biobank, these data intersect with other health data to offer unparalleled potential for crosscutting biomedical research involving mental health.
Declaration of interest
G.B. received grants from the National Institute for Health Research during the study; and support from Illumina Ltd. and the European Commission outside the submitted work. B.C. received grants from the Scottish Executive Chief Scientist Office and from The Dr Mortimer and Theresa Sackler Foundation during the study. C.S. received grants from the Medical Research Council and Wellcome Trust during the study, and is the Chief Scientist for UK Biobank. M.H. received grants from the Innovative Medicines Initiative via the RADAR-CNS programme and personal fees as an expert witness outside the submitted work.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins, respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
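The zygosity comparison described above reduces to a linear regression of education years on a zygosity indicator. The sketch below uses synthetic data with an effect size chosen near the reported 0.26 years; the sample size, proportions and noise level are illustrative assumptions:

```python
import numpy as np

# Synthetic cohort: ~39% MZ twins, as in the pooled individual-education
# sample; a true MZ advantage of 0.26 years is built in for illustration.
rng = np.random.default_rng(1)
n = 20000
is_mz = rng.random(n) < 0.39
edu_years = 12.0 + 0.26 * is_mz + rng.normal(0, 3.0, n)

# Linear regression of education years on a zygosity dummy:
# the slope estimates the MZ-DZ difference in years of education.
X = np.column_stack([np.ones(n), is_mz.astype(float)])
beta, *_ = np.linalg.lstsq(X, edu_years, rcond=None)
mz_advantage = beta[1]
```

The pooled analyses in the study additionally stratify by sex and birth cohort, which is why the reported effects shrink in more recent cohorts; the core estimator is the same slope coefficient.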
To determine the impact of recurrent Clostridium difficile infection (RCDI) on patient behaviors following illness.
Using a computer algorithm, we searched the electronic medical records of 7 Chicago-area hospitals to identify patients with RCDI (2 episodes of CDI within 15 to 56 days of each other). RCDI was validated by medical record review. Patients were asked to complete a telephone survey. The survey included questions regarding general health, social isolation, symptom severity, emotional distress, and prevention behaviors.
In total, 119 patients completed the survey (32%). On average, respondents were 57.4 years old (standard deviation, 16.8); 57% were white, and ~50% reported hospitalization for CDI. At the time of their most recent illness, patients rated their diarrhea as high severity (58.5%) and their exhaustion as extreme (30.7%). Respondents indicated that they were very worried about getting sick again (41.5%) and about infecting others (31%). Almost 50% said that they have washed their hands more frequently (47%) and have increased their use of soap and water (45%) since their illness. Some of these patients (22%–32%) reported eating out less, avoiding certain medications and public areas, and increasing probiotic use. Most behavioral changes were unrelated to disease severity.
Having had RCDI appears to increase prevention-related behaviors in some patients. While some behaviors are appropriate (eg, handwashing), others are not supported by evidence of decreased risk and may negatively impact patient quality of life. Providers should discuss appropriate prevention behaviors with their patients and should clarify that other behaviors (eg, eating out less) will not affect their risk of future illness.