Humans have engineered their environments throughout the Holocene, especially in the construction of hydraulic infrastructure. In many regions, however, this infrastructure is difficult to date, including the vestiges of water-management systems in the Andean highlands. Focusing on silt reservoirs in the upper Ica drainage, Peru, the authors use cores and radiocarbon dates to demonstrate the pre-Hispanic construction of walls to enhance and expand wetlands for camelid pasture. Interventions dated to the Inca period (AD 1400–1532) indicate an intensification of investment in hydraulic infrastructure to expand production capacity in support of the state. The results are discussed in the context of the hydraulic strategies of other states and empires.
Schizophrenia (SZ), bipolar disorder (BD) and depression (D) run in families. This susceptibility is partly due to hundreds or thousands of common genetic variants, each conferring a fractional risk. The cumulative effects of the associated variants can be summarised as a polygenic risk score (PRS). Using data from the EUropean Network of national schizophrenia networks studying Gene-Environment Interactions (EU-GEI) first-episode case–control study, we aimed to test whether PRSs for three major psychiatric disorders (SZ, BD, D) and for intelligence quotient (IQ), as a neurodevelopmental proxy, can discriminate affective psychosis (AP) from schizophrenia-spectrum disorder (SSD).
Participants (842 cases, 1284 controls) from 16 European EU-GEI sites were successfully genotyped following standard quality control procedures. The sample was stratified based on genomic ancestry and analyses were done only on the subsample representing the European population (573 cases, 1005 controls). Using PRS for SZ, BD, D, and IQ built from the latest available summary statistics, we performed simple or multinomial logistic regression models adjusted for 10 principal components for the different clinical comparisons.
In case–control comparisons PRS-SZ, PRS-BD and PRS-D distributed differentially across psychotic subcategories. In case–case comparisons, both PRS-SZ [odds ratio (OR) = 0.7, 95% confidence interval (CI) 0.54–0.92] and PRS-D (OR = 1.31, 95% CI 1.06–1.61) differentiated AP from SSD; and within AP categories, only PRS-SZ differentiated BD from psychotic depression (OR = 2.14, 95% CI 1.23–3.74).
Combining PRS for severe psychiatric disorders in prediction models for psychosis phenotypes can increase discriminative ability and improve our understanding of these phenotypes. Our results point towards the potential usefulness of PRSs in specific populations such as high-risk or early psychosis phases.
The Patagonian longfin squid Doryteuthis gahi has an annual life cycle with two seasonal cohorts (autumn and spring spawners). Earlier studies on the Patagonian shelf found a predominance of Euphausiacea in the D. gahi diet, but no studies to date have investigated differences between the feeding spectra of the two cohorts or decadal diet shifts. The present study investigated differences in the diet of D. gahi on the Patagonian shelf sampled two decades apart, and differences between seasonal cohorts. Classical stomach content analysis and generalized additive models were used to investigate and model the influence of mantle length, sampling period and spawning cohort on the diet. Results revealed an ontogenetic diet change from ~70% frequency of occurrence (FO) of Euphausiacea in small squid to more than 60% FO of fish and Cephalopoda at larger sizes. Cannibalism was also frequently observed. Euphausiacea were ingested more frequently and in higher amounts during the austral summer and therefore were consumed more by the autumn spawning cohort, whereas fish were preyed upon more frequently during austral winter and by the spring spawning cohort. Cannibalism was also recorded more often in austral winter months but, contrary to feeding on fish, was more prevalent in the autumn spawning cohort. Increased predation on Munida gregaria was observed in 2020 compared with 2001. This study is an important step towards improving the knowledge of D. gahi's two seasonal cohorts, providing data that can be used for future ecosystem modelling.
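The frequency-of-occurrence metric used in classical stomach content analysis can be tabulated as below. The data frame contents are invented placeholders, not the study's samples; the size-class boundary is likewise illustrative.

```python
# Frequency of occurrence (FO): the percentage of stomachs in which a
# prey category appears, tabulated here by squid size class.
import pandas as pd

stomachs = pd.DataFrame({
    "mantle_length_cm": [8, 9, 10, 18, 20, 22, 9, 19],
    "euphausiacea":     [1, 1, 0, 0, 1, 0, 1, 0],   # prey presence (1/0)
    "fish":             [0, 0, 1, 1, 1, 1, 0, 1],
})
stomachs["size_class"] = pd.cut(stomachs["mantle_length_cm"],
                                bins=[0, 12, 30], labels=["small", "large"])
fo = stomachs.groupby("size_class", observed=True)[["euphausiacea", "fish"]]
fo = fo.mean() * 100   # %FO of each prey category per size class
print(fo.round(1))
```

The study's generalized additive models would then smooth such presence/absence responses over continuous mantle length rather than discrete size classes.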
In recent years, plant biologists interested in quantifying molecules and molecular events in vivo have started to complement reporter systems with genetically encoded fluorescent biosensors (GEFBs) that directly sense an analyte. Such biosensors can allow measurements at the level of individual cells and over time. This information is proving valuable to mathematical modellers interested in representing biological phenomena in silico, because improved measurements can guide improved model construction and model parametrisation. Advances in synthetic biology have accelerated the pace of biosensor development, and the simultaneous expression of spectrally compatible biosensors now allows quantification of multiple nodes in signalling networks. For biosensors that directly respond to stimuli, targeting to specific cellular compartments allows the observation of differential accumulation of analytes in distinct organelles, bringing insights to reactive oxygen species/calcium signalling and photosynthesis research. In conjunction with improved image analysis methods, advances in biosensor imaging can help close the loop between experimentation and mathematical modelling.
Background: During the COVID-19 pandemic, public-health decision makers have increasingly relied on hospitalization forecasts that are routinely provided, accurate, and based on timely input data to inform pandemic planning. In North Carolina, we adapted an existing agent-based model (ABM) to produce 30-day hospitalization forecasts of COVID-19 and non–COVID-19 hospitalizations for use by public-health decision makers. We sought to continually improve model speed and accuracy during forecasting. Methods: The geospatially explicit ABM included movement of agents (ie, patients) among 104 short-term acute-care hospitals, 10 long-term acute-care hospitals, 421 licensed nursing homes, and the community in North Carolina. Agents were based on a synthetic population of North Carolina residents (ie, >10.4 million agents). We assigned SARS-CoV-2 infections to agents according to county-level susceptible, exposed, infectious, recovered (SEIR) models informed by reported COVID-19 cases by county. Agents’ COVID-19 severity and probability of hospitalization were determined using agent-specific characteristics (eg, age, comorbidities). During May 2020–December 2020, we produced weekly 30-day forecasts of intensive care unit (ICU) and non-ICU bed occupancy for COVID-19 agents and non–COVID-19 agents statewide and by region under a range of SARS-CoV-2 effective reproduction numbers. During the reporting period, we identified optimizations for faster results turnaround. We evaluated the incorporation of real-time hospital-level occupancy data at model initialization on forecast accuracy using mean absolute percent error (MAPE). Results: During May 2020–December 2020, we provided 31 weekly reports of 30-day hospitalization forecasts with a 1-day turnaround time. Reports included (1) raw and smoothed 7-day average values for 42 model output variables; (2) static visuals of ICU and non-ICU bed demand and capacity; and (3) an interactive Tableau workbook of hospital demand variables. 
Identifying code efficiencies reduced a single model runtime from ~100 seconds to 28 seconds. The use of cloud computing reduced simulation runtime from ~20 hours to 15 minutes. Across forecasts, the average MAPEs were 21.6% and 7.1% for ICU and non-ICU bed demand, respectively. By incorporating hospital-level occupancy data, we reduced the average MAPE to 6.5% for ICU bed demand and 3.9% for non-ICU bed demand, indicating improved accuracy. Conclusions: We adapted an ABM and continually improved it during COVID-19 forecasting by optimizing code and computing resources and including real-time hospital-level occupancy data. Planned SEIR model updates for enhanced forecasts include adding compartments for undocumented infections and recoveries, as well as allowing reinfection from the recovered compartment.
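Two quantitative pieces of the pipeline above, the county-level SEIR dynamics that seed infections and the MAPE used to score forecasts, can be sketched in a few lines. The parameter values here are invented for illustration, not the calibrated North Carolina values.

```python
# Discrete-time SEIR model and mean absolute percent error (MAPE),
# with illustrative (not calibrated) parameters.
import numpy as np

def seir(s, e, i, r, beta, sigma=1/5.2, gamma=1/10, days=30):
    """Step an SEIR model forward `days` days; compartments are population fractions."""
    traj = []
    for _ in range(days):
        n = s + e + i + r
        new_e = beta * s * i / n      # new exposures
        new_i = sigma * e             # exposed becoming infectious
        new_r = gamma * i             # infectious recovering
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        traj.append(i)
    return np.array(traj)             # infectious fraction over time

def mape(observed, forecast):
    """Mean absolute percent error between observed and forecast series."""
    observed, forecast = np.asarray(observed), np.asarray(forecast)
    return 100 * np.mean(np.abs((observed - forecast) / observed))

# Scanning a range of effective reproduction numbers, as in the weekly
# forecasts, amounts to varying beta (R_e is roughly beta/gamma early on).
forecast = seir(s=0.99, e=0.005, i=0.005, r=0.0, beta=0.25)
observed = forecast * (1 + 0.05 * np.sin(np.arange(30)))  # synthetic "truth"
print(f"30-day MAPE: {mape(observed, forecast):.1f}%")
```

In the actual system these dynamics run per county and feed agent-level infection assignment, with hospitalization then resolved by agent characteristics.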
Group Name: VHA Center for Antimicrobial Stewardship and Prevention of Antimicrobial Resistance (CASPAR) Background: Antimicrobial stewardship programs (ASPs) are advised to measure antimicrobial consumption as a metric for audit and feedback. However, most ASPs lack the tools necessary for appropriate risk adjustment and standardized data collection, which are critical for peer-program benchmarking. We created a system that automatically extracts antimicrobial use data and patient-level factors for risk adjustment, and a dashboard to present risk-adjusted benchmarking metrics for ASPs within the Veterans’ Health Administration (VHA). Methods: We built a system to extract patient-level data for antimicrobial use, procedures, demographics, and comorbidities for acute inpatient and long-term care units at all VHA hospitals utilizing the VHA’s Corporate Data Warehouse (CDW). We built baseline negative binomial regression models to perform risk adjustment based on patient- and unit-level factors using records dated between October 2016 and September 2018. These models were then leveraged both retrospectively and prospectively to calculate observed-to-expected ratios of antimicrobial use for each hospital and for specific units within each hospital. Data transformation and applications of risk-adjustment models were automatically performed within the CDW database server, followed by monthly scheduled data transfer from the CDW to the Microsoft Power BI server for interactive data visualization. Frontline antimicrobial stewards at 10 VHA hospitals participated in the project as pilot users. Results: Separate baseline risk-adjustment models to predict days of therapy (DOT) for all antibacterial agents were created for acute-care and long-term care units based on 15,941,972 patient days and 3,011,788 DOT between October 2016 and September 2018 at 134 VHA hospitals. 
Risk adjustment models include month, unit types (eg, intensive care unit [ICU] vs non-ICU for acute care), specialty, age, gender, comorbidities (50 and 30 factors for acute care and long-term care, respectively), and preceding procedures (45 and 24 procedures for acute care and long-term care, respectively). We created additional models for each antimicrobial category based on National Healthcare Safety Network definitions. For each hospital, risk-adjusted benchmarking metrics and a monthly ranking within the VHA system were visualized and presented to end users through the dashboard (an example screenshot in Figure 1). Conclusions: Developing an automated surveillance system for antimicrobial consumption and risk-adjustment benchmarking using an electronic medical record data warehouse is feasible and can potentially provide valuable tools for ASPs, especially at hospitals with no or limited local informatics expertise. Future efforts will evaluate the effectiveness of dashboards in these settings.
Comparative transcriptomics can be used to translate an understanding of gene regulatory networks from model systems to less studied species. Here, we use RNA-Seq to determine and compare gene expression dynamics through the floral transition in the model species Arabidopsis thaliana and the closely related crop Brassica rapa. We find that different curve registration functions are required for different genes, indicating that there is no single common ‘developmental time’ between Arabidopsis and B. rapa. A detailed comparison between Arabidopsis and B. rapa and between two B. rapa accessions reveals different modes of regulation of the key floral integrator SOC1, and that the floral transition in the B. rapa accessions is triggered by different pathways. Our study adds to the mechanistic understanding of the regulatory network of flowering time in rapid cycling B. rapa and highlights the importance of registration methods for the comparison of developmental gene expression data.
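The curve registration idea above, mapping one species' expression time course onto another's clock, can be illustrated with a simple grid search over a time shift and stretch. The profiles and the search itself are toy stand-ins; the study fit dedicated registration functions per gene.

```python
# Toy curve registration: find the time shift and stretch that best align
# a query expression profile to a reference profile.
import numpy as np

def register(t_ref, y_ref, t_query, y_query, shifts, scales):
    """Grid-search the (shift, scale) minimising squared error after mapping."""
    best = (np.inf, 0.0, 1.0)              # (mse, shift, scale)
    for a in scales:
        for b in shifts:
            # Re-express the query profile on transformed time a*t + b,
            # then sample it at the reference time points.
            y_mapped = np.interp(t_ref, a * t_query + b, y_query)
            mse = float(np.mean((y_mapped - y_ref) ** 2))
            if mse < best[0]:
                best = (mse, b, a)
    return best

t_ref = np.linspace(0, 10, 50)
y_ref = np.sin(t_ref)                      # reference "developmental" profile
t_query = np.linspace(-5, 15, 200)
y_query = np.sin(0.8 * t_query - 1.0)      # same shape on a different clock

mse, shift, scale = register(t_ref, y_ref, t_query, y_query,
                             shifts=np.linspace(-2, 2, 41),
                             scales=np.linspace(0.5, 1.5, 21))
print(f"best shift {shift:.2f}, best stretch {scale:.2f}")
```

That different genes need different registration functions is exactly the study's point: no single (shift, stretch) maps Arabidopsis developmental time onto B. rapa for all genes at once.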
Circular features made from mammoth bone are known from across Upper Palaeolithic Eastern Europe, and are widely identified as dwellings. The first systematic flotation programme of samples from a recently discovered feature at Kostenki 11 in Russia has yielded assemblages of charcoal, burnt bone and microlithic debitage. New radiocarbon dates provide the first coherent chronology for the site, revealing it to be one of the oldest such features on the Russian Plain. The authors discuss the implications for understanding the function of circular mammoth-bone features during the onset of the Last Glacial Maximum.