A literature review was carried out to identify pre- and perinatal characteristics associated with variation in Apgar scores in population-based studies. The parameters identified in the literature search were included in a classical twin design study to estimate the effects of pre- and perinatal factors shared and nonshared by twins, and to test for a contribution of genetic factors to 1- and 5-min Apgar scores in a large sample of Dutch monozygotic (MZ) and dizygotic (DZ) twins. The sample included MZ and DZ twins (N = 5181 pairs) recruited by the Netherlands Twin Register shortly after birth, with data on prenatal characteristics and Apgar scores at the first and/or fifth minute. Ordinal regression and structural equation modeling were used to analyze the effects of the characteristics identified in the literature review and to estimate genetic and nongenetic variance components. The literature review identified 63 papers. Consistent with the review, we observed statistically significant effects of birth order, zygosity and gestational age (GA) on 1- and 5-min Apgar scores of both twins. Apgar scores were higher in first-born than in second-born twins, and in DZ first-borns than in MZ first-borns. Birth weight had an effect on the 5-min Apgar score of the first-born. Fetal presentation and mode of delivery had different effects on the Apgar scores of first- and second-born twins. Parental characteristics and chorionicity did not have significant main effects on Apgar scores. The MZ twins’ Apgar correlations equaled the DZ Apgar correlations. Our analyses suggest that individual differences in 1- and 5-min Apgar scores are attributable to shared and nonshared pre- and perinatal factors, but not to genotypic factors of the newborns. The main predictors of Apgar scores are birth order, zygosity, GA, birth weight, mode of delivery and fetal presentation.
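The logic of the classical twin design used here can be sketched with Falconer's variance decomposition, which partitions trait variance by comparing MZ and DZ twin correlations. This is a minimal illustration; the correlations below are hypothetical, not the study's estimates, and the actual analysis used full structural equation modeling.

```python
# Illustrative sketch of the classical twin design (ACE model) logic.
# Falconer's formulas recover additive genetic (a2), shared environmental
# (c2) and nonshared environmental (e2) variance components from the
# MZ and DZ twin correlations. Input correlations are hypothetical.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Crude ACE variance components from MZ/DZ twin correlations."""
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2.0 * r_dz - r_mz     # shared environmental variance
    e2 = 1.0 - r_mz            # nonshared environment (plus error)
    return {"a2": a2, "c2": c2, "e2": e2}

# Equal MZ and DZ correlations (as reported for Apgar scores) imply
# a2 = 0: all twin resemblance is attributed to shared environment.
print(falconer_ace(r_mz=0.5, r_dz=0.5))  # a2 = 0.0, c2 = 0.5, e2 = 0.5
```

This is why the finding that MZ correlations equaled DZ correlations supports shared pre- and perinatal factors, rather than newborn genotype, as the source of twin resemblance in Apgar scores.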
We compared antibiotic prescribing to older people in different settings to inform antibiotic stewardship interventions. We used data linkage to stratify individuals aged 65 years and over in Northern Ireland, 1st January 2012–31st December 2013, by residence: community dwelling, care home dwelling or ‘transitioned’ if admitted to a care home. The odds of being prescribed an antibiotic by residence were analysed using logistic regression, adjusting for patient demographics and selected medication use (a proxy for co-morbidities). Trends in monthly antibiotic prescribing were examined in the 6 months pre- and post-admission to the care home. The odds of being prescribed at least one antibiotic were twofold higher in care homes compared with community dwellers (adjusted odds ratio 2.05, 95% CI 1.93–2.17). There was a proportionate increase of 51.5% in the percentage prescribed an antibiotic on admission, with a monthly average of 23% receiving an antibiotic in the 6 months post admission. While clinical need likely accounts for some of the observed antibiotic prescribing in care homes, we cannot rule out more liberal prescribing, given the twofold difference between care home residents and their community-dwelling peers after accounting for co-morbidities. The appropriateness of antibiotic prescribing in the care home setting should be examined.
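As a minimal illustration of the odds-ratio comparison reported above, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2×2 table. The counts here are hypothetical, and the study's estimate (2.05) was additionally adjusted for demographics and medication use via logistic regression, which this sketch does not attempt.

```python
import math

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table.

    'Exposed' here would be care home residence, 'case' would be
    receiving at least one antibiotic prescription (hypothetical counts).
    """
    a, b = exposed_cases, exposed_noncases
    c, d = unexposed_cases, unexposed_noncases
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

The adjusted analysis in the paper additionally conditions on covariates, so its interval (1.93–2.17) is not reproducible from marginal counts alone.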
Objective: The human gut microbiota has been demonstrated to be associated with a number of host phenotypes, including obesity and a number of obesity-associated phenotypes. This study is aimed at further understanding and describing the relationship between the gut microbiota and obesity-associated measurements obtained from human participants. Subjects/Methods: Here, we utilize genetically informative study designs, including a four-corners design (extremes of genetic risk for BMI and of observed BMI; N = 50) and the BMI monozygotic (MZ) discordant twin pair design (N = 30), in order to help delineate the role of host genetics and the gut microbiota in the development of obesity. Results: Our results highlight a negative association between BMI and alpha diversity of the gut microbiota. The low genetic risk/high BMI group of individuals had a lower gut microbiota alpha diversity when compared to the other three groups. Although the difference in alpha diversity between the lean and heavy groups of the BMI-discordant MZ twin design did not achieve significance, this difference was observed to be in the expected direction, with the heavier participants having a lower average alpha diversity. We have also identified nine OTUs observed to be associated with either a leaner or heavier phenotype, with enrichment for OTUs classified to the Ruminococcaceae and Oxalobacteraceae taxonomic families. Conclusion: Our study presents evidence of a relationship between BMI and alpha diversity of the gut microbiota. In addition to these findings, a number of OTUs were found to be significantly associated with host BMI. These findings may highlight separate subtypes of obesity, one driven by genetic factors, the other more heavily influenced by environmental factors.
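The alpha diversity measure central to these results can be illustrated with the Shannon index computed over OTU counts. This is a generic sketch of one common alpha diversity metric, not necessarily the exact index or pipeline the study used.

```python
import math

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) over OTU counts.

    Higher values indicate a community that is richer and more evenly
    distributed across taxa; counts here are hypothetical.
    """
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# A community dominated by one OTU is less diverse than an even one,
# mirroring the lower alpha diversity reported for heavier participants.
even = shannon_index([25, 25, 25, 25])
skewed = shannon_index([97, 1, 1, 1])
```

In practice such indices are usually computed on rarefied counts with dedicated tools; this sketch only conveys the quantity being compared across BMI groups.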
Disturbances in Pavlovian valuation systems are reported to follow traumatic stress exposure. Motivated decisions, however, are also guided by instrumental mechanisms, and to date the effect of traumatic stress on these instrumental systems remains poorly investigated. Here, we examine whether a single episode of severe traumatic stress influences flexible instrumental decisions through an impact on a Pavlovian system.
Twenty-six survivors of the 2011 Norwegian terror attack and 30 matched control subjects performed an instrumental learning task in which Pavlovian and instrumental associations promoted congruent or conflicting responses. We used reinforcement learning models to infer how traumatic stress affected learning and decision-making. Based on the importance of dorsal anterior cingulate cortex (dACC) for cognitive control, we also investigated if individual concentrations of Glx (=glutamate + glutamine) in dACC predicted the Pavlovian bias of choice.
Survivors of traumatic stress expressed a greater Pavlovian interference with instrumental action selection and had significantly lower levels of Glx in the dACC. Across subjects, the degree of Pavlovian interference was negatively associated with dACC Glx concentrations.
Experiencing traumatic stress appears to render instrumental decisions less flexible by increasing the susceptibility to Pavlovian influences. An observed association between prefrontal glutamatergic levels and this Pavlovian bias provides novel insight into the neurochemical basis of decision-making, and suggests a mechanism by which traumatic stress can impair flexible instrumental behaviours.
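One common way the literature formalizes Pavlovian interference of the kind measured here is to let the Pavlovian value of a stimulus additively bias action weights before a softmax choice. The sketch below illustrates that idea under assumptions; it is not the authors' fitted model, and the parameter names are illustrative.

```python
import math

def choice_prob_go(q_go, q_nogo, state_value, pav_bias, beta=1.0):
    """P(choose 'go') under a softmax with an additive Pavlovian bias.

    pav_bias (often written pi) scales how strongly the Pavlovian value
    of the stimulus pushes toward or away from acting, independently of
    the instrumental Q-values. A larger pav_bias means greater Pavlovian
    interference with instrumental action selection.
    """
    w_go = q_go + pav_bias * state_value  # Pavlovian value biases action
    w_nogo = q_nogo
    return 1.0 / (1.0 + math.exp(-beta * (w_go - w_nogo)))
```

With equal instrumental values, an aversive stimulus (negative state value) suppresses action more strongly as the bias parameter grows; in this framework, trauma survivors' greater interference would correspond to a larger fitted bias.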
Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to best capture the allele frequency–functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. The two tests have equal power if the weight set includes weights resembling the correct ones. However, if the weights are badly specified, the LRT shows superior power due to its robustness to misspecification. With this data-driven weighting procedure, the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia, PRRC2A (p = 1.020e-06) and VARS2 (p = 2.383e-06), in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power; indeed, assuming correct specification, in some circumstances the score test is the most powerful test. However, the LRT has the advantage of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches.
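The weighting schemes at issue are conventionally parameterized as a Beta density evaluated at each variant's minor allele frequency (MAF). The sketch below shows that form and the data-driven "take the largest statistic over a set of defensible weights" idea, using a toy weighted burden statistic rather than the paper's actual SKAT/LRT machinery.

```python
import math

def beta_weight(maf, a, b):
    """Beta(a, b) density at the minor allele frequency: the standard
    SKAT weighting form. (a, b) = (1, 25) strongly upweights rare
    variants; (1, 1) weights all variants equally."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * maf ** (a - 1) * (1 - maf) ** (b - 1)

def max_weighted_stat(mafs, variant_scores,
                      schemes=((1, 25), (0.5, 0.5), (1, 1))):
    """Data-driven weighting sketch: evaluate a toy weighted burden
    statistic under several defensible schemes and keep the largest.
    The statistic and scores here are hypothetical placeholders."""
    best = None
    for a, b in schemes:
        w = [beta_weight(m, a, b) for m in mafs]
        stat = (sum(wi * si for wi, si in zip(w, variant_scores)) ** 2
                / sum(wi * wi for wi in w))
        if best is None or stat > best[0]:
            best = (stat, (a, b))
    return best
```

Because the set spans weights from rare-variant-focused to flat, no separate MAF threshold is needed: a scheme that concentrates weight on rare variants is simply one of the candidates.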
Deliberately lit vegetation fires have the greatest destructive potential of any intentionally lit blaze. The ‘Black Saturday’ bushfires of 7 February 2009 in Victoria, Australia, killed 173 people, injured 414 and destroyed 3500 buildings, including two entire towns (Teague et al, 2010). Even before the fires had abated, police and firefighters revealed that several had been deliberately lit (Silvester, 2009). The subsequent Royal Commission attributed four of the large fires to arson. These four fires caused 52 deaths and burnt approximately 2000 km² of land, an area slightly larger than that of Greater London (Teague et al, 2010). The community was united in its outrage that anyone would intentionally set a bushfire, particularly on a day with the most severe fire danger rating in over 20 years. The question of why anyone would set such fires is, unfortunately, not easy to answer, as there has been little investigation of those who deliberately light bushfires, especially in Australia. The lack of research in this area is somewhat surprising, given that events like Black Saturday are not uncommon in Australia and other fire-prone regions. The Australian Institute of Criminology estimates that 25 000–30 000 bushfires are deliberately lit in Australia each year (Bryant, 2008). Disaster-level bushfires (those resulting in more than $10 million in damage) cost the Australian economy an annual average of $77 million, even before the associated costs of police and courts and the intangible human and social costs wrought by large fires are considered (Department of Infrastructure, Transport, Regional Development and Local Government, 2001). Estimates from the USA suggest that between 20% and 25% of wildfires are deliberately lit, with rates dependent on location (Federal Emergency Management Agency, 1994; Hall, 1998).
Even in the UK, where vegetation fires rarely result in the type of widespread destruction seen elsewhere, it is thought that some 20% of fires in open countryside are the result of arson (Lewis, 1999).
In areas such as south-eastern Australia, where bushfire is a seasonal hazard responsible for significant property loss, there is little tolerance for deliberate firesetting. Public pressure on law-enforcement agencies and government to prevent bushfire arson is becoming increasingly intense.
Obsessive–compulsive disorder (OCD) has been linked to functional abnormalities in fronto-striatal networks as well as impairments in decision making and learning. Little is known about the neurocognitive mechanisms causing these decision-making and learning deficits in OCD, and how they relate to dysfunction in fronto-striatal networks.
We investigated neural mechanisms of decision making in OCD patients, including early and late onset of disorder, in terms of reward prediction errors (RPEs) using functional magnetic resonance imaging. RPEs index a mismatch between expected and received outcomes, encoded by the dopaminergic system, and are known to drive learning and decision making in humans and animals. We used reinforcement learning models and RPE signals to infer the learning mechanisms and to compare behavioural parameters and neural RPE responses of the OCD patients with those of healthy matched controls.
Patients with OCD showed significantly increased RPE responses in the anterior cingulate cortex (ACC) and the putamen compared with controls. OCD patients also had a significantly lower perseveration parameter than controls.
Enhanced RPE signals in the ACC and putamen extend previous findings of fronto-striatal deficits in OCD. These abnormally strong RPEs suggest a hyper-responsive learning network in patients with OCD, which might explain their indecisiveness and intolerance of uncertainty.
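The reward prediction error construct underlying these findings can be made concrete with a single Rescorla-Wagner-style update, the standard building block of the reinforcement learning models the study fitted. This is a generic sketch, not the authors' full model.

```python
def rescorla_wagner_update(q, reward, learning_rate=0.2):
    """One Rescorla-Wagner update of a value estimate.

    The reward prediction error (RPE) is the mismatch between the
    received and expected outcome; it drives learning, and its neural
    correlate is what the fMRI analysis localized to ACC and putamen.
    Returns the updated value and the RPE itself.
    """
    rpe = reward - q
    return q + learning_rate * rpe, rpe
```

Early in learning the RPE is large and value estimates change quickly; as expectations converge on outcomes, the RPE shrinks toward zero. An abnormally strong neural response to this signal, as reported here for OCD, would amount to a hyper-responsive learning network.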
Genetic–epidemiological studies that estimate the contributions of genetic factors to variation in tic symptoms are scarce. We estimated the extent to which genetic and environmental influences contribute to tics, employing various phenotypic definitions ranging from mild to severe symptomatology, in a large population-based adult twin-family sample.
In an extended twin-family design, we analysed lifetime tic data reported by adult mono- and dizygotic twins (n = 8323) and their family members (n = 7164; parents and siblings) from 7311 families in the Netherlands Twin Register. We measured tics by the abbreviated version of the Schedule for Tourette and Other Behavioral Syndromes. Heritability was estimated by genetic structural equation modeling for four tic disorder definitions: three dichotomous and one trichotomous phenotype, characterized by increasingly strictly defined criteria.
Prevalence rates of the different tic disorders in our sample varied between 0.3% and 4.5%, depending on tic disorder definition. Tic frequencies decreased with increasing age. Heritability estimates varied between 0.25 and 0.37, depending on phenotypic definitions. None of the phenotypes showed evidence of assortative mating, effects of shared environment or non-additive genetic effects.
Heritabilities of mild and severe tic phenotypes were estimated to be moderate. Overlapping confidence intervals of the heritability estimates suggest overlapping genetic liabilities between the various tic phenotypes. The most lenient phenotype (defined only by tic characteristics, excluding criteria B, C and D of DSM-IV) yielded sufficiently reliable heritability estimates. These findings have implications for phenotypic definitions in future genetic studies.
This study sought to identify trajectories of DSM-IV based internalizing (INT) and externalizing (EXT) problem scores across childhood and adolescence and to provide insight into the comorbidity by modeling the co-occurrence of INT and EXT trajectories. INT and EXT were measured repeatedly between age 7 and age 15 years in over 7,000 children and analyzed using growth mixture models. Five trajectories were identified for both INT and EXT, including very low, low, decreasing, and increasing trajectories. In addition, an adolescent onset trajectory was identified for INT and a stable high trajectory was identified for EXT. Multinomial regression showed that similar EXT and INT trajectories were associated. However, the adolescent onset INT trajectory was independent of high EXT trajectories, and persisting EXT was mainly associated with decreasing INT. Sex and early life environmental risk factors predicted EXT and, to a lesser extent, INT trajectories. The association between trajectories indicates the need to consider comorbidity when a child presents with INT or EXT disorders, particularly when symptoms start early. This is less necessary when INT symptoms start at adolescence. Future studies should investigate the etiology of co-occurring INT and EXT and the specific treatment needs of these severely affected children.
One of the first stellar photometry programs completed with the High Speed Photometer (HSP) on the Hubble Space Telescope (HST) was visual and ultraviolet observations of the Crab pulsar. We obtained continuous observations on four consecutive days using a visual filter (4000 - 7000 Å) and an additional observation, approximately two months later, using an ultraviolet filter (1600 - 3000 Å). Each observation has a time resolution of 10.7 μsec and spans approximately 30 minutes in duration. In addition to the observations made with the HSP, contemporaneous UBVR observations were also made at Jodrell Bank and McDonald Observatory. Some of the more prominent results include the following: 1) the main pulse arrival time is the same in the UV as it is in the optical and the radio regions of the spectrum, 2) there is essentially no difference in the shape of the optical pulse from one observation to the next, 3) the “flatness” of the peak of the main pulse suggests that the main pulse has been resolved in time, and 4) in accordance with the trend of observations from the radio to infrared wavelengths, the main pulse is slightly narrower in the UV than in the optical.
A second HSP science observing program was a long-term program to monitor the eclipsing dwarf nova, Z Chamaeleontis (Porb = 107 minutes). We obtained a total of 42 observations of Z Cha in the UV (1120 - 1580 Å), each with a duration of approximately 45 minutes and separated by approximately three days. Although the majority of the observations cover the eclipse of the white dwarf and hot spot, a few observations were obtained outside of eclipse in order to obtain the complete light curve. During the course of this program, Z Cha underwent two “normal” outbursts in which the shape of the light curve changed dramatically. We will present a comparison of the light curve in quiescence with that during a “normal” outburst and quantify such geometrical and physical parameters as temperature and size of the white dwarf, hot spot, and accretion disk.
The question of whether psychopathology constructs are discrete kinds or continuous dimensions represents an important issue in clinical psychology and psychiatry. The present paper reviews psychometric modelling approaches that can be used to investigate this question through the application of statistical models. The relation between constructs and indicator variables in models with categorical and continuous latent variables is discussed, as are techniques specifically designed to address the distinction between latent categories as opposed to continua (taxometrics). In addition, we examine latent variable models that allow latent structures to have both continuous and categorical characteristics, such as factor mixture models and grade-of-membership models. Finally, we discuss recent alternative approaches based on network analysis and dynamical systems theory, which entail that the structure of constructs may be continuous for some individuals but categorical for others. Our evaluation of the psychometric literature shows that the kinds–continua distinction is considerably more subtle than is often presupposed in research; in particular, the hypotheses of kinds and continua are not mutually exclusive or exhaustive. We discuss opportunities to go beyond current research on the issue by using dynamical systems models, intra-individual time series and experimental manipulations.
Changes in reflexive emotional responses are hallmarks of depression, but how emotional reflexes make an impact on adaptive decision-making in depression has not been examined formally. Using a Pavlovian-instrumental transfer (PIT) task, we compared the influence of affectively valenced stimuli on decision-making in depression and generalized anxiety disorder compared with healthy controls; and related this to the longitudinal course of the illness.
A total of 40 subjects with a current DSM-IV-TR diagnosis of major depressive disorder, dysthymia, generalized anxiety disorder, or a combination thereof, and 40 matched healthy controls performed a PIT task that assesses how instrumental approach and withdrawal behaviours are influenced by appetitive and aversive Pavlovian conditioned stimuli (CSs). Patients were followed up after 4–6 months. Analyses focused on patients with depression alone (n = 25).
In healthy controls, Pavlovian CSs exerted action-specific effects, with appetitive CSs boosting active approach and aversive CSs active withdrawal. This action-specificity was absent in currently depressed subjects. Greater action-specificity in patients was associated with better recovery over the follow-up period.
Depression is associated with an abnormal influence of emotional reactions on decision-making in a way that may predict recovery.
Longitudinal studies of neuroticism have shown that, on average, neuroticism scores decrease from adolescence to adulthood. The heritability of neuroticism is estimated between 0.30 and 0.60 and does not seem to vary greatly as a function of age. Shared environmental effects are rarely reported. Less is known about the role of genetic and environmental influences on the rank order stability of neuroticism in the period from adolescence to adulthood. We studied the stability of neuroticism in a cohort sequential (classical) twin design, from adolescence (age 14 years) to young adulthood (age 32 years). A genetic simplex model that was fitted to the longitudinal neuroticism data showed that the genetic stability of neuroticism was relatively high (genetic correlations between adjacent age bins >0.9), and increased from adolescence to adulthood. Environmental stability was appreciably lower (environmental correlations between adjacent age bins were between 0.3 and 0.6). This low stability was largely due to age-specific environmental variance, which was dominated by measurement error. This attenuated the age-to-age environmental correlations. We constructed an environmental covariance matrix corrected for this error, under the strong assumption that all age-specific environmental variance is error variance. The environmental (co)variance matrix corrected for attenuation revealed highly stable environmental influences on neuroticism (correlations between adjacent age bins were between 0.7 and 0.9). Our results indicate that both genetic and environmental influences have enduring effects on individual differences in neuroticism.
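The correction applied above follows the classic disattenuation formula: an observed correlation is divided by the square root of the product of the two measures' reliabilities. This is a minimal sketch of that formula under the abstract's own strong assumption that all age-specific environmental variance is measurement error; the function name and inputs are illustrative.

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Classic correction for attenuation.

    Measurement error dilutes observed correlations; dividing by the
    square root of the product of the reliabilities estimates the
    correlation between the error-free (true) scores.
    """
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Hypothetical numbers: a modest observed environmental correlation
# becomes substantially larger once unreliability is accounted for.
corrected = disattenuate(r_observed=0.45, reliability_x=0.7, reliability_y=0.7)
```

This mirrors the abstract's result: environmental correlations of 0.3–0.6 between adjacent age bins rose to 0.7–0.9 after removing age-specific error variance.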
The influence of genetic factors on major depressive disorder is lower than on other psychiatric disorders. Heritability estimates mainly derive from cross-sectional studies, and knowledge on the longitudinal aetiology of symptoms of anxiety and depression (SxAnxDep) across the lifespan is limited. We aimed to assess phenotypic, genetic and environmental stability in SxAnxDep between ages 3 and 63 years.
We used a cohort-sequential design combining data from 49 524 twins followed from birth to age ⩾20 years, and from adolescence into adulthood. SxAnxDep were assessed repeatedly with a maximum of eight assessments over a 25-year period. Data were ordered in 30 age groups and analysed with longitudinal genetic models.
Mean scores increased significantly during adolescence, with sex differences (women > men) emerging. Heritability was high in childhood and decreased to 30–40% during adulthood. This decrease in heritability was due to an increase in environmental variance. Phenotypic stability was moderate in children (correlations across ages ~0.5) and high in adolescents (r = 0.6), young adults (r = 0.7), and adults (r = 0.8). Longitudinal stability was mostly attributable to genetic factors. During childhood and adolescence there was also significant genetic innovation, which was absent in adults. Environmental effects contributed to short-term stability.
The substantial stability in SxAnxDep is mainly due to genetic effects. The importance of environmental effects increases with age and explains the relatively low heritability of depression in adults. The environmental effects are transient, but the contribution to stability increases with age.
An understanding of the exact nature of executive function (EF) deficits in conduct disorder (CD) remains elusive because of issues of co-morbidity with attention deficit hyperactivity disorder (ADHD).
Seventy-two adolescents with CD, 35 with CD + ADHD and 20 healthy controls (HCs) were assessed on a computerized battery of putative ‘cool’ and ‘hot’ EFs. Participants also completed the Child Behaviour Checklist (CBCL).
In cool EF tasks such as planning, the CD + ADHD group in particular showed the most notable impairments compared to HCs. This pattern was less evident for set shifting and behavioural inhibition, but there were significant correlations between error scores on these tasks and indices of externalizing behaviours on the CBCL across the sample. For hot EF tasks, all clinical groups performed worse than HCs on delay of gratification, and poor performance was correlated with externalizing scores. Although there were no notable group differences on the punishment-based card-playing task, there were significant correlations between ultimate payout and externalizing behaviour across groups.
Overall, our findings highlight the fact that there may be more common than distinguishing neuropsychological underpinnings to these co-morbid disorders and that a dimensional symptom-based approach may be the way forward.