Few technologies or devices have had the same type of impact as the internal combustion (IC) engine. Its ubiquitous nature pervades our everyday life, often without our even realizing it. Whether it is the spark-ignited engine driving our vehicle, the compression-ignition engine hauling food to our local grocery store, the jet engine we hear flying 38,000 feet overhead, or the gas turbine powering the laptop screen from which we read this article, internal combustion engines are quite literally woven intricately and irreplaceably into our daily lives. The internal combustion engine has taken on many different forms throughout its more than 150-year history, but combustion has always been one of its few constants. Indeed, combustion is in its very name and helps differentiate it from other thermodynamic work devices such as heat engines and fuel cells.
Rush skeletonweed is an aggressive perennial weed that establishes itself on land in the Conservation Reserve Program (CRP), and persists during cropping following contract expiration. It depletes critical soil moisture required for yield potential of winter wheat. In a winter wheat/fallow cropping system, weed control is maintained with glyphosate and tillage during conventional fallow, and with herbicides only in no-till fallow. Research was conducted for control of rush skeletonweed at two sites in eastern Washington, Lacrosse and Hay, to compare the effectiveness of a weed-sensing sprayer and broadcast applications of four herbicides (aminopyralid, chlorsulfuron + metsulfuron, clopyralid, and glyphosate). Experimental design was a split-plot with herbicide and application type as main and subplot factors, respectively. Herbicides were applied in the fall at either broadcast or spot-spraying rates depending on sprayer type. Aminopyralid (1.1 plants m−2), glyphosate (1.4 plants m−2), clopyralid (1.7 plants m−2), and chlorsulfuron + metsulfuron (1.8 plants m−2) reduced rush skeletonweed density in May compared to the nontreated check (2.6 plants m−2). No treatment differences were observed after May 2019. There was no interaction between herbicide and application system. Area covered using the weed-sensing sprayer was, on average, 52% (p < 0.001) less than the broadcast application at Lacrosse but only 20% (p = 0.01) at Hay. Spray reduction is dependent on foliar cover in relation to weed density and size. At Lacrosse, the weed-sensing sprayer reduced costs for all herbicide treatments except aminopyralid, with savings up to US$6.8 ha−1. At Hay, the weed-sensing sprayer resulted in economic loss for all products because of higher rush skeletonweed density. The weed-sensing sprayer is a viable fallow weed control tool when weed densities are low or patchy.
Suppliers of system components face the challenge of customer requirements influencing functionally integrated product architectures at the property level. Here, solution approaches that focus on the re-use of pre-engineered part variants are not applicable. To generate a valid product structure, however, customer-specific properties have to fit the modelled product knowledge. The approach therefore models a reference class structure and analyses compatibilities at the property level for customer-specific inputs, drawing on explicit product knowledge and constraints.
Facing rising competitive pressure, manufacturers gain an advantage when they are able to offer customer-specific products under the conditions of mass production. Traditional configurators support the creation of personalized products from the elements of a modular product system, but are based on a pre-defined set of rules. The model-based approach changes the environment of configuration from static configuration rules to the dependencies defined within the product's system model. By taking the user's target quantities into account, the configurator identifies the optimal variant.
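As a rough illustration of the idea, checking customer-specific property values against dependencies from a system model and optimising a target quantity might be sketched as follows. This is a minimal, hypothetical example: the property names, constraints, and weight formula are invented for demonstration and are not taken from the approach itself.

```python
# Minimal sketch of model-based configuration (illustrative; all
# property names, constraints, and weights are hypothetical).

from itertools import product

# Property domains of a hypothetical modular product system
domains = {
    "motor_kw": [1.5, 3.0, 5.5],
    "housing": ["steel", "aluminium"],
    "cooling": ["passive", "active"],
}

# Dependencies taken from the product's system model rather than
# from static configuration rules
def is_valid(variant):
    if variant["motor_kw"] >= 5.5 and variant["cooling"] == "passive":
        return False  # high-power motors require active cooling
    if variant["housing"] == "aluminium" and variant["motor_kw"] > 3.0:
        return False  # aluminium housing limited to smaller motors
    return True

def weight(variant):
    # Target quantity the user wants minimised (here: mass in kg)
    base = {"steel": 12.0, "aluminium": 7.5}[variant["housing"]]
    extra = 1.8 if variant["cooling"] == "active" else 0.0
    return base + 2.1 * variant["motor_kw"] + extra

# Enumerate valid variants and pick the optimum for the user's target
variants = [dict(zip(domains, vals)) for vals in product(*domains.values())]
valid = [v for v in variants if is_valid(v)]
best = min(valid, key=weight)
print(best)
```

In a real system the dependencies would come from the product's system model rather than being hand-coded, but the control flow (validate against modelled knowledge, then optimise on the user's target quantity) is the same.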
In recent years, scholars have made major progress in understanding the dynamics of “policy drift”—the transformation of a policy's outcomes due to the failure to update its rules or structures to reflect changing circumstances. Drift is a ubiquitous mode of policy change in America's gridlock-prone polity, and its causes are now well understood. Yet surprisingly little attention has been paid to the political consequences of drift—to the ways in which drift, like the adoption of new policies, may generate its own feedback effects. In this article, we seek to fill this gap. We first outline a set of theoretical expectations about how drift should affect downstream politics. We then examine these dynamics in the context of four policy domains: labor law, health care, welfare, and disability insurance. In each, drift is revealed to be both mobilizing and constraining: While it increases demands for policy innovation, group adaptation, and new group formation, it also delimits the range of possible paths forward. These reactions to drift, in turn, generate new problems, cleavages, and interest alignments that alter subsequent political trajectories. Whether formal policy revision or further stalemate results, these processes reveal key mechanisms through which American politics and policy develop.
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
We built simulation Markov models to compare unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
Registry-based trials were more cost-effective than standard clinical trials 98.6% of the time. Data-related cost savings ranged from $4300 to $600,000 with variation in study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential costs savings.
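The threshold logic described above can be pictured with a simple simulation. The sketch below is a toy Monte Carlo comparison of data-related costs under wholly assumed figures: the fixed `linkage_overhead`, coordinator rates, and input ranges are hypothetical stand-ins, not parameters from the study's Markov models.

```python
# Illustrative Monte Carlo sketch of a registry-vs-standard trial
# data-cost comparison. All cost figures and distributions below are
# assumptions for demonstration, not the study's model parameters.

import random

def trial_data_costs(n_patients, elements_per_patient,
                     abstraction_sec_per_field, coordinator_rate_per_hr,
                     registry_share):
    # Standard trial: every data element is abstracted manually
    manual_fields = n_patients * elements_per_patient
    standard = (manual_fields * abstraction_sec_per_field / 3600
                * coordinator_rate_per_hr)
    # Registry-based trial: the registry supplies a share of elements;
    # the remainder is abstracted manually, plus a fixed (assumed)
    # linkage-and-cleaning overhead
    linkage_overhead = 15000.0
    registry = ((1 - registry_share) * manual_fields
                * abstraction_sec_per_field / 3600 * coordinator_rate_per_hr
                + linkage_overhead)
    return standard, registry

random.seed(1)
savings = []
for _ in range(5000):
    s, r = trial_data_costs(
        n_patients=random.randint(200, 5000),
        elements_per_patient=random.randint(50, 400),
        abstraction_sec_per_field=random.uniform(3, 30),
        coordinator_rate_per_hr=random.uniform(25, 60),
        registry_share=random.uniform(0.3, 0.9),
    )
    savings.append(s - r)

frac_registry_cheaper = sum(x > 0 for x in savings) / len(savings)
print(f"registry cheaper in {frac_registry_cheaper:.1%} of simulations")
```

Varying one input while holding the others fixed reproduces the kind of one-way threshold analysis described in the methods: the point at which `savings` crosses zero is the threshold at which one design becomes preferable.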
Metabolic resistances to atrazine (atz-R) and mesotrione (meso-R) occur in several waterhemp [Amaranthus tuberculatus (Moq.) Sauer] populations in the United States. Interestingly, although metabolic atz-R but mesotrione-sensitive A. tuberculatus populations have been reported, an Amaranthus population has not been confirmed as meso-R but atrazine-sensitive, implying an association between these traits. Experiments were designed to investigate whether the single gene conferring metabolic atz-R plays a role in meso-R. An F2 population was generated from a multiple herbicide–resistant A. tuberculatus population from McLean County, IL (MCR). A cross was made between a known meso-R male clone (MCR-6) and a herbicide-sensitive female clone from Wayne County, IL (WCS-2) to develop an F1 population. Survival of MCR-6 plants following atrazine POST treatment (14.4 kg ha−1) indicated the male parent was homozygous atz-R. F1 plants were intermated to obtain a segregating pseudo-F2 population. Dose–response and metabolic studies conducted with mesotrione using F1 plants indicated intermediate biomass reductions and metabolic rates compared with MCR-6 and WCS. F2 plants were initially treated with either mesotrione (260 g ha−1) or atrazine (2 kg ha−1) POST, and after 21 d of recovery, vegetative clones from surviving resistant plants were subsequently treated with the other herbicide. When mesotrione was applied first, the meso-R frequency was 8.2%, and when atrazine was applied first, the atz-R frequency was 75%. However, the meso-R frequency increased to 16.5% following preselection for atz-R, and 100% of surviving meso-R plants were atz-R. Our findings indicate that the gene conferring metabolic atz-R is also involved with the meso-R trait within the population tested.
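The reported 75% atz-R frequency in the F2 matches the 3:1 segregation expected for a single dominant resistance gene, which can be checked with a standard goodness-of-fit test. The sketch below uses hypothetical round counts scaled from the reported frequency, not the study's raw data.

```python
# Quick illustrative check that an observed F2 atrazine-resistance
# frequency is consistent with a single dominant gene (expected 3:1
# resistant:sensitive). Counts are hypothetical round numbers scaled
# from the reported 75% frequency, not the study's raw data.

n_total = 200          # assumed number of F2 plants screened
n_resistant = 150      # 75% observed atz-R frequency, as reported

expected_r = n_total * 3 / 4
expected_s = n_total * 1 / 4
observed_s = n_total - n_resistant

# Pearson chi-square goodness-of-fit, 1 degree of freedom
chi_sq = ((n_resistant - expected_r) ** 2 / expected_r
          + (observed_s - expected_s) ** 2 / expected_s)
print(f"chi-square = {chi_sq:.2f} "
      "(values below 3.84 are consistent with 3:1 at alpha = 0.05)")
```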
Staff members of psychiatric facilities are at high risk of secondhand smoking. Smoking exposure was assessed in 41 nonsmoking employees of a psychiatry department before and after a ban. Subjective exposure measures decreased in 76% of the subjects. Salivary cotinine decreased in the subsample of seven subjects with high pre-ban levels (32 ± 8 vs. 40 ± 17 ng/ml, p = .045).
To evaluate patient and physician satisfaction with risperidone long-acting injection (RLAI) in patients with schizophrenia enrolled in the electronic Schizophrenia Treatment Adherence Registry (e-STAR) in Belgium.
e-STAR is an ongoing, international, prospective, observational study of patients with schizophrenia who start RLAI during their routine clinical management. Treatment satisfaction was assessed by the patient and physician on a 5-point scale from ‘very good’ to ‘very bad’.
135 patients (mean age 40.9 ± 14 years; duration of illness 9.5 ± 9.2 years) who initiated treatment with RLAI and were followed up for at least 18 months were included in this analysis. At baseline, only 29.2% of patients expressed 'good' or 'very good' satisfaction with their previous treatment, while 21.1% rated it 'bad' or 'very bad'. Similarly, at baseline 38.2% of physicians reported a 'good' or 'very good' level of satisfaction and 14.6% rated their satisfaction as 'bad' or 'very bad'. After initiation of RLAI, both patient and physician satisfaction with treatment improved markedly. At 18 months, 76.5% of patients were satisfied ('good' or 'very good') with RLAI treatment, only 2.4% felt 'bad', and none reported 'very bad'. Physicians also expressed satisfaction with RLAI, with 82.1% rating it 'good' or 'very good'. Only one physician reported satisfaction below 'moderate'.
The low levels of patient and physician satisfaction with treatment prior to RLAI are likely to be a key decision driver to change therapy. After starting treatment with RLAI, both patient and physician satisfaction with the treatment substantially improved.
The electronic Schizophrenia Treatment Adherence Registry (e-STAR) is a prospective, observational study of patients with schizophrenia designed to evaluate long-term treatment outcomes in routine clinical practice.
Parameters were assessed at baseline and at 3 month intervals for 2 years in patients initiated on risperidone long-acting injection (RLAI) (n = 1345) or a new oral antipsychotic (AP) (n = 277; 35.7% and 36.5% on risperidone and olanzapine, respectively) in Spain. Hospitalization prior to therapy was assessed by a retrospective chart review.
At 24 months, treatment retention (81.8% for RLAI versus 63.4% for oral APs, p < 0.0001) and reduction in Clinical Global Impression Severity scores (−1.14 for RLAI versus −0.94 for APs, p = 0.0165) were significantly higher with RLAI. Compared to the pre-switch period, RLAI patients had greater reductions in the number (reduction of 0.37 stays per patient versus 0.2, p < 0.05) and days (18.74 versus 13.02, p < 0.01) of hospitalizations at 24 months than oral AP patients.
This 2 year, prospective, observational study showed that, compared to oral antipsychotics, RLAI was associated with better treatment retention, greater improvement in clinical symptoms and functioning, and greater reduction in hospital stays and days in hospital in patients with schizophrenia. Improved treatment adherence, increased efficacy and reduced hospitalization with RLAI offer the opportunity of substantial therapeutic improvement in schizophrenia.
We investigated the contribution of polymorphisms shown to moderate transcription of the serotonin transporter (5HTT) and monoamine oxidase A (MAOA) to the development of violence, and furthermore tested for gene × environment interactions. To do so, a cohort of 184 adult male volunteers referred for forensic assessment was assigned to a violent or a non-violent group. 45% of violent, but only 30% of non-violent, individuals carried the low-activity, short MAOA allele. Low-function variants of 5HTT were found in 77% of the violent group, as compared to 59% of the non-violent group. Logistic regression was performed, and the best-fitting model revealed a significant, independent effect of childhood environment and MAOA genotype. A significant influence of an interaction between childhood environment and 5HTT genotype was also found (Fig. 1). MAOA thus appears to be independently associated with violent crime, while there is a relevant 5HTT × environment interaction.
To evaluate changes in the use of non-antipsychotic concomitant medication related to schizophrenia in patients enrolled in e-STAR in Belgium (B), Spain (S) and Australia (A) who were initiated on RLAI.
e-STAR is a secure, web-based, international, long-term (1-year retrospective and 2-year prospective) ongoing observational study of schizophrenia patients who initiate a new antipsychotic drug during their routine clinical management. Data reported here are for patients enrolled to date in B, S and A who had information available about the use of concomitant medication at baseline and at 6 months after the start of RLAI.
Of 1,605 evaluable patients (B, n=180; S, n=919; A, n=506), 73.7% received concomitant non-antipsychotic medication at baseline. This proportion had fallen to 60.3% at 6 months after the start of RLAI (82.2% to 71.7% for B, p<0.001; 72.8% to 54.8% for S, p<0.001; 72.3% to 66.2% for A, p=0.01). Overall reductions between baseline and 6 months were: anticholinergics, 29.4% to 17.0%, and antidepressants, 22.9% to 19.3% (each p<0.05 for B; p<0.001 for S); mood stabilisers, 17.6% to 15.8% (p=0.01 for S); benzodiazepines, 48.9% to 39.0% (p<0.001 for S; p=0.002 for A); somatic medication, 16.9% to 16.0%. Conclusions: Following the start of RLAI, the use of concomitant non-antipsychotic medication for the management of symptoms associated with schizophrenia or its treatment declined significantly at 6 months compared to baseline.
Hypoxic ischemic encephalopathy (HIE) is a condition that occurs when the entire brain is deprived of an adequate oxygen supply, and is often a complication of cardiac arrest or profound hypotension. This can result in poor outcomes including significant impairments in memory, cognition, and attention.
Literature reports on chronic delirium following cardiac arrest-related HIE are sparse. We report the case of a 59-year-old male patient with normal premorbid functioning who developed a chronic confusional state following a hypoxic insult to the brain subsequent to cardiac arrest, and we highlight the challenges encountered during his clinical course and management.
This case highlights the presence of chronic delirium following hypoxic ischaemic encephalopathy, an unfortunate consequence of cardiac arrest. It also highlights the problems encountered in managing such patients.
The German version of the Conners Adult ADHD Rating Scales (CAARS) has shown very high model fit in confirmatory factor analyses, with the established factors inattention/memory problems, hyperactivity/restlessness, impulsivity/emotional lability, and problems with self-concept, in both large healthy control and ADHD patient samples. This study presents data on the psychometric properties of the German CAARS self-report (CAARS-S) and observer-report (CAARS-O) questionnaires.
The CAARS-S/O and questions on sociodemographic variables were completed by 466 patients with ADHD and 847 healthy control subjects who had already participated in two prior studies; a total of 896 observer data sets were available. Cronbach's alpha was calculated to obtain internal reliability coefficients. Pearson correlations were performed to assess test-retest reliability and concurrent, criterion, and discriminant validity. Receiver operating characteristic (ROC) analyses were used to establish sensitivity and specificity for all subscales.
Coefficient alphas ranged from .74 to .95; test-retest reliability ranged from .85 to .92 for the CAARS-S and from .65 to .85 for the CAARS-O. All CAARS subscales except problems with self-concept correlated significantly with the Barratt Impulsiveness Scale (BIS), but not with the Wender Utah Rating Scale (WURS). Criterion validity was established with ADHD subtype and diagnosis based on DSM-IV criteria. Sensitivity and specificity were high for all four subscales.
The reported results confirm our previous study and show that the German CAARS-S/O do indeed represent a reliable and cross-culturally valid measure of current ADHD symptoms in adults.
Online learning has become an increasingly expected and popular component of education for the modern-day adult learner, including the medical provider. In light of the recent coronavirus pandemic, there has never been more urgency to establish opportunities for supplemental online learning. Heart University aims to be "the go-to online resource" for e-learning in CHD and paediatric-acquired heart disease. It is a carefully curated open-access library of pedagogical material for all providers of care to children and adults with CHD or children with acquired heart disease, whether a trainee or a practising provider. In this manuscript, we review the aims, development, current offerings and standing, and future goals of Heart University.
To determine how well machine learning algorithms can classify mild cognitive impairment (MCI) subtypes and Alzheimer’s disease (AD) using features obtained from the digital Clock Drawing Test (dCDT).
dCDT protocols were administered to 163 patients diagnosed with AD (n = 59), amnestic MCI (aMCI; n = 26), combined mixed/dysexecutive MCI (mixed/dys MCI; n = 43), and patients without MCI (non-MCI; n = 35), using standard clock drawing command and copy procedures (i.e., draw the face of the clock, put in all of the numbers, and set the hands for "10 after 11"). A digital pen and custom software recorded patients' drawings. Three hundred and fifty features were evaluated for maximum information/minimum redundancy. The best subset of features was used to train classification models to determine diagnostic accuracy.
A neural network employing information-theoretic feature selection achieved the best two-group classification results, with 10-fold cross-validation accuracies at or above 83%: AD versus non-MCI = 91.42%; AD versus aMCI = 91.49%; AD versus mixed/dys MCI = 84.05%; aMCI versus mixed/dys MCI = 84.11%; aMCI versus non-MCI = 83.44%; and mixed/dys MCI versus non-MCI = 85.42%. A follow-up two-group analysis of non-MCI versus all MCI patients yielded comparable results (83.69%). Two-group classifications were achieved with 25–125 dCDT features, depending on the group comparison. Three- and four-group analyses yielded lower but still promising levels of classification accuracy.
Early identification of emergent neurodegenerative illness is critical for better disease management. Applying machine learning to standard neuropsychological tests promises to be an effective first-line screening method for classification of non-MCI and MCI subtypes.
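The select-then-classify pipeline described above can be sketched end to end on synthetic data. The stdlib-only example below is a stand-in, not the study's method: it ranks features by a simple separability score (in place of mutual-information/minimum-redundancy selection) and classifies with a nearest-centroid model (in place of a neural network), under 10-fold cross-validation. The synthetic data and scoring rule are illustrative assumptions.

```python
# Stdlib-only sketch of a feature-selection + classification pipeline:
# rank features by a separability score, keep a small subset, classify
# with nearest centroid under cross-validation. Data are synthetic.

import random
import statistics as stats

random.seed(0)

# Synthetic two-group data: 40 "patients" x 20 features, where only
# the first 5 features actually separate the groups
def make_patient(label):
    feats = [random.gauss(0, 1) for _ in range(20)]
    if label == 1:
        for i in range(5):
            feats[i] += 1.5  # informative shift
    return feats, label

data = [make_patient(i % 2) for i in range(40)]

def separability(idx, rows):
    # Standardized mean difference between the two groups for one feature
    a = [f[idx] for f, y in rows if y == 0]
    b = [f[idx] for f, y in rows if y == 1]
    pooled = stats.pstdev(a + b) or 1.0
    return abs(stats.mean(a) - stats.mean(b)) / pooled

def nearest_centroid_accuracy(rows, feat_idx, k_folds=10):
    correct = 0
    for fold in range(k_folds):
        test = rows[fold::k_folds]
        train = [r for i, r in enumerate(rows) if i % k_folds != fold]
        cents = {}
        for y in (0, 1):
            grp = [f for f, lab in train if lab == y]
            cents[y] = [stats.mean(f[i] for f in grp) for i in feat_idx]
        for f, y in test:
            d = {y2: sum((f[i] - c) ** 2 for i, c in zip(feat_idx, cents[y2]))
                 for y2 in (0, 1)}
            correct += (min(d, key=d.get) == y)
    return correct / len(rows)

ranked = sorted(range(20), key=lambda i: separability(i, data), reverse=True)
top5 = ranked[:5]
acc = nearest_centroid_accuracy(data, top5)
print(f"selected features: {sorted(top5)}, CV accuracy: {acc:.2f}")
```

The point of the sketch is the structure, not the particular models: features are ranked for informativeness, a compact subset is retained, and held-out accuracy is estimated by cross-validation, exactly as in the two-group analyses reported above.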
The aim of the current study was to replicate, in a sample of adolescents and young adults, findings in adults indicating that higher sensitivity to stressful events is predictive of both the onset and the persistence of psychopathological symptoms. In addition, we tested the hypothesis that sensitivity to mild stressors in particular is predictive of the developmental course of psychopathology.
We analyzed experience sampling and questionnaire data collected at baseline and one-year follow-up of 445 adolescent and young adult twins and non-twin siblings (age range: 15–34). Linear multilevel regression was used for the replication analyses. To test if affective sensitivity to mild stressors in particular was associated with follow-up symptoms, we used a categorical approach adding variables on affective sensitivity to mild, moderate and severe daily stressors to the model.
Linear analyses showed that emotional stress reactivity was not associated with onset (β = .02; P = .56) or persistence (β = −.01; P = .78) of symptoms. There was a significant effect of baseline symptom score (β = .53; P < .001) and average negative affect (NA: β = .19; P < .001) on follow-up symptoms. Using the categorical approach, we found that affective sensitivity to mild (β = .25; P < .001), but not moderate (β = −.03; P = .65) or severe (β = −.06; P = .42), stressors was associated with symptom persistence one year later.
We were unable to replicate previous findings relating stress sensitivity linearly to symptom onset or persistence in a younger sample. Whereas sensitivity to more severe stressors may reflect adaptive coping, high sensitivity to the mildest of daily stressors may indicate an increased risk for psychopathology.
Neurocognitive impairments robustly predict functional outcome. However, heterogeneity in neurocognition is common within diagnostic groups, and data-driven analyses reveal homogeneous neurocognitive subgroups cutting across diagnostic boundaries.
To determine whether data-driven neurocognitive subgroups of young people with emerging mental disorders are associated with 3-year functional course.
Model-based cluster analysis was applied to neurocognitive test scores across nine domains from 629 young people accessing mental health clinics. Cluster groups were compared on demographic, clinical and substance-use measures. Mixed-effects models explored associations between cluster-group membership and socio-occupational functioning (using the Social and Occupational Functioning Assessment Scale) over 3 years, adjusted for gender, premorbid IQ, level of education, depressive, positive, negative and manic symptoms, and diagnosis of a primary psychotic disorder.
Cluster analysis of neurocognitive test scores derived three subgroups described as 'normal range' (n = 243, 38.6%), 'intermediate impairment' (n = 252, 40.1%), and 'global impairment' (n = 134, 21.3%). The major mental disorder categories (depressive, anxiety, bipolar, psychotic and other) were represented in each neurocognitive subgroup. The global impairment subgroup had lower functioning for 3 years of follow-up; however, neither the global impairment (B = 0.26, 95% CI −0.67 to 1.20; P = 0.581) nor the intermediate impairment (B = 0.46, 95% CI −0.26 to 1.19; P = 0.211) subgroup differed from the normal range subgroup in their rate of change in functioning over time.
Neurocognitive impairment may follow a continuum of severity across the major syndrome-based mental disorders, with data-driven neurocognitive subgroups predictive of functional course. Of note, the global impairment subgroup had longstanding functional impairment despite continuing engagement with clinical services.
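The clustering step can be pictured with a toy example. The stdlib sketch below uses a simple k-means as a stand-in for the model-based cluster analysis the study used, applied to synthetic neurocognitive z-scores across nine domains; the group sizes, mean shifts, and parameters are illustrative assumptions.

```python
# Illustrative stdlib sketch: k-means (a stand-in for model-based
# clustering) applied to synthetic nine-domain neurocognitive z-scores
# drawn from three latent severity groups. All numbers are assumptions.

import random

random.seed(2)
N_DOMAINS = 9

def make_profile(mean_shift):
    # One person's z-scores across nine cognitive domains
    return [random.gauss(mean_shift, 0.5) for _ in range(N_DOMAINS)]

# Three latent severity groups: normal range, intermediate, global
people = ([make_profile(0.0) for _ in range(40)] +
          [make_profile(-1.0) for _ in range(40)] +
          [make_profile(-2.2) for _ in range(25)])

def kmeans(points, k, iters=50):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each profile to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute centroids (keep the old one if a cluster empties)
        centroids = [[sum(vals) / len(cl) for vals in zip(*cl)] if cl else c
                     for cl, c in zip(clusters, centroids)]
    return centroids, clusters

centroids, clusters = kmeans(people, k=3)
# Order recovered subgroups by mean z-score: most impaired first
means = sorted(sum(c) / len(c) for c in centroids)
print([round(m, 2) for m in means], [len(c) for c in clusters])
```

Model-based clustering additionally fits a full mixture model (covariances and mixing weights) and selects the number of clusters by a criterion such as BIC, but the core idea (letting the data, not the diagnoses, define the subgroups) is the same.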
Mechanistic models (MMs) have served as causal pathway analysis and 'decision-support' tools within animal production systems for decades. Such models quantitatively define how a biological system works based on causal relationships and use that cumulative biological knowledge to generate predictions and recommendations (in practice) and to generate and evaluate hypotheses (in research). Their limitations revolve around obtaining sufficiently accurate inputs, user training, and the accuracy/precision of predictions on-farm. The new wave of digitalization technologies may negate some of these challenges. New data-driven (DD) modelling methods such as machine learning (ML) and deep learning (DL) examine patterns in data to produce accurate predictions (forecasting, classification of animals, etc.). The deluge of sensor data and new self-learning modelling techniques may address some of the limitations of traditional MM approaches, namely access to input data (e.g. sensors) and on-farm calibration. However, most of these new methods lack transparency in the reasoning behind predictions, in contrast to MMs, which have historically been used to translate knowledge into wisdom. The objective of this paper is to propose means to hybridize these two seemingly divergent methodologies to advance the models we use in animal production systems and to support movement towards truly knowledge-based precision agriculture. To identify potential niches for models in animal production of the future, a cross-species (dairy, swine and poultry) examination of the current state of the art in MMs and new DD methodologies (ML, DL analytics) is undertaken. We hypothesize that synergy may be achieved in several ways, advancing both our predictive capabilities and our system understanding: (1) building and utilizing data streams (e.g. intake, rumination behaviour, rumen sensors, activity sensors, environmental sensors, cameras and near IR) to apply MMs in real time and/or with new resolution and capabilities; (2) hybridization of MM and DD approaches where, for example, an ML framework is augmented by MM-generated parameters or predicted outcomes; and (3) hybridization of MM and DD approaches where biological bounds are placed on parameters within an MM framework and the DD system parameterizes the MM for individual animals, farms or other such clusters of data. As animal systems modellers, we should expand our toolbox to explore new DD approaches and big data, to find opportunities to increase understanding of biological systems, find new patterns in data and move the field towards intelligent, knowledge-based precision agriculture systems.
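Hybridization option (3) can be sketched in a few lines: a data-driven routine calibrates a mechanistic model's parameter for an individual animal from sensor data, with biological bounds constraining the search. The model form (a logistic growth curve), the bounds, and the simulated data below are illustrative assumptions, not drawn from any particular production system.

```python
# Hedged sketch of MM/DD hybridization option (3): a data-driven
# routine parameterizes a mechanistic growth model per animal, with
# biological bounds on the parameter. All values are illustrative.

import math
import random

def mechanistic_weight(t_days, growth_rate, mature_weight=650.0, w0=40.0):
    # Logistic growth curve as a stand-in mechanistic model
    return mature_weight / (
        1 + (mature_weight / w0 - 1) * math.exp(-growth_rate * t_days))

def fit_growth_rate(observations, lo=0.001, hi=0.02, steps=200):
    # Data-driven calibration: least-squares grid search constrained
    # to a biologically plausible range [lo, hi]
    best_k, best_err = lo, float("inf")
    for i in range(steps + 1):
        k = lo + (hi - lo) * i / steps
        err = sum((w - mechanistic_weight(t, k)) ** 2
                  for t, w in observations)
        if err < best_err:
            best_k, best_err = k, err
    return best_k

# Simulated sensor weights for one animal with an assumed true rate
random.seed(3)
true_k = 0.008
obs = [(t, mechanistic_weight(t, true_k) + random.gauss(0, 5))
       for t in range(0, 400, 20)]

k_hat = fit_growth_rate(obs)
print(f"calibrated growth rate: {k_hat:.4f}")
```

In the hybrid scheme, the same calibration would run continuously against each animal's sensor stream, so the mechanistic model stays interpretable while the data-driven layer keeps it individually tuned.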