Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables might help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated using ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
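For reference, a polygenic risk score in the standard additive sense is a weighted sum of risk-allele dosages. The generic formulation below (notation ours; the specific variant sets and weights used by ConLi+Gen are not reproduced here) shows what is being combined with the clinical variables:

\[ \mathrm{PRS}_i \;=\; \sum_{j=1}^{m} \hat{\beta}_j \, G_{ij} \]

where G_ij ∈ {0, 1, 2} is the risk-allele dosage of variant j in patient i and β̂_j is the effect size estimated in the discovery genome-wide association study (here, for schizophrenia or major depressive disorder).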
Results
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then train machine-learning models with clinical predictors led to large improvements in lithium response prediction. When used with other PRS and biological markers in the future, this approach may help inform which patients are most likely to respond to lithium treatment.
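To make the cross-validation scheme described above concrete, the sketch below shows a leave-site-out loop that combines clinical predictors with PRS in a ridge regression. It is an illustration only: the file name, column names, feature set and model settings are assumptions, not the authors' pipeline.

# Illustrative sketch of leave-site-out cross-validation combining clinical
# variables and polygenic risk scores (PRS). The input file and column names
# are hypothetical; the original ConLi+Gen pipeline is not reproduced here.
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

df = pd.read_csv("conligen_example.csv")            # hypothetical input table
clinical = ["age_at_onset", "episodes_before_lithium", "family_history"]
genetic = ["prs_scz", "prs_mdd"]                     # PRS for schizophrenia / MDD
features = clinical + genetic
target = "lithium_response_score"                    # hypothetical outcome column

scores = []
for site in df["site"].unique():                     # hold out one study site at a time
    train, test = df[df["site"] != site], df[df["site"] == site]
    model = Ridge(alpha=1.0).fit(train[features], train[target])
    pred = model.predict(test[features])
    scores.append(r2_score(test[target], pred))

print(f"mean leave-site-out R^2: {np.mean(scores):.3f}")

The genomically stratified variant described in the Results would first split patients by their combined PRS loadings and then fit a separate model (for example, a random forest) on the clinical variables within each stratum.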
We use a simplified version of the framework of resource monoids, introduced by Dal Lago and Hofmann, to interpret simply typed λ-calculus with constants zero and successor. We then use this model to prove a simple quantitative result about bounding the size of the normal form of λ-terms. While the bound itself is already known, this is to our knowledge the first semantic proof of this fact. Our use of resource monoids differs from the other instances found in the literature, in that it measures the size of λ-terms rather than time complexity.
We introduce a novel amortised resource analysis couched in a type-and-effect system. Our analysis is formulated in terms of the physicist’s method of amortised analysis and is potential-based. The type system makes use of logarithmic potential functions and is the first such system to exhibit logarithmic amortised complexity. With our approach, we target the automated analysis of self-adjusting data structures, like splay trees, which so far have only been analysed manually in the literature. In particular, we have implemented a semi-automated prototype, which successfully analyses the zig-zig case of splaying, once the type annotations are fixed.
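For orientation, the classical logarithmic potential used in the manual amortised analysis of splay trees, which is the kind of bookkeeping such a type system aims to capture, can be stated as follows (a standard textbook formulation, not quoted from the paper):

\[ \Phi(t) \;=\; \sum_{v \in t} \log_2 \lvert t_v \rvert, \qquad a(\mathit{op}) \;=\; c(\mathit{op}) + \Phi(t') - \Phi(t) \]

where |t_v| is the number of nodes in the subtree rooted at v, c(op) is the actual cost of an operation transforming t into t', and a(op) is its amortised cost.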
Pathological gambling is a behavioural addiction with negative economic, social, and psychological consequences. Identification of contributing genes and pathways may improve understanding of aetiology and facilitate therapy and prevention. Here, we report the first genome-wide association study of pathological gambling. Our aims were to identify pathways involved in pathological gambling, and examine whether there is a genetic overlap between pathological gambling and alcohol dependence.
Methods
Four hundred and forty-five individuals with a diagnosis of pathological gambling according to the Diagnostic and Statistical Manual of Mental Disorders were recruited in Germany, and 986 controls were drawn from a German general population sample. A genome-wide association study of pathological gambling comprising single marker, gene-based, and pathway analyses, was performed. Polygenic risk scores were generated using data from a German genome-wide association study of alcohol dependence.
Results
No genome-wide significant association with pathological gambling was found for single markers or genes. Pathways for Huntington's disease (P-value = 6.63 × 10⁻³); 5′-adenosine monophosphate-activated protein kinase signalling (P-value = 9.57 × 10⁻³); and apoptosis (P-value = 1.75 × 10⁻²) were significant. Polygenic risk score analysis of the alcohol dependence dataset yielded a nominally significant one-sided P-value in subjects with pathological gambling, irrespective of comorbid alcohol dependence status.
Conclusions
The present results accord with previous quantitative formal genetic studies which showed genetic overlap between non-substance- and substance-related addictions. Furthermore, pathway analysis suggests shared pathology between Huntington's disease and pathological gambling. This finding is consistent with previous imaging studies.
This article describes an atomic force microscope (AFM) that can operate in any scanning electron microscope (SEM) or SEM combined with a focused ion-beam (FIB) column. The combination of AFM, SEM imaging, energy-dispersive X-ray spectrometry (EDX), FIB milling, and nanofabrication methods (field-emission scanning probe lithography, tip-based electron beam induced deposition, and nanomachining) provides a new tool for correlative nanofabrication and microscopy. Piezoresistive, thermo-mechanically actuated cantilevers (active cantilevers) are used for fast imaging and nanofabrication. Thus, the AFM with active cantilevers integrated into an SEM (AFMinSEM) can generate and characterize nanostructures in situ without breaking vacuum or contaminating the sample.
Elemental, chemical, and structural analysis of polycrystalline materials at the micron scale is frequently carried out using microfocused synchrotron X-ray beams, sometimes on multiple instruments. The Maia pixelated energy-dispersive X-ray area detector enables the simultaneous collection of X-ray fluorescence (XRF) and diffraction because of the relatively large solid angle and number of pixels when compared with other systems. The large solid angle also permits extraction of surface topography because of changes in self-absorption. This work demonstrates the capability of the Maia detector for simultaneous measurement of XRF and diffraction for mapping the short- and long-range order across the grain structure in a Ni polycrystalline foil.
We propose a method to generate a max-stable process in C[0, 1] from a max-stable random vector in R^d by generalizing the max-linear model established by Wang and Stoev (2011). For this purpose, an interpolation technique that preserves max-stability is proposed. It turns out that if the random vector follows a finite-dimensional distribution of some initial max-stable process, the approximating processes converge uniformly to the original process and the pointwise mean-squared error can be represented in closed form. The obtained results carry over to the case of generalized Pareto processes. The introduced method enables the reconstruction of the initial process from only a finite set of observation points and thus makes a reasonable prediction of max-stable processes in space possible. A possible extension to arbitrary dimensions is outlined.
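As a rough illustration of the max-linear construction underlying this approach, the sketch below simulates a discretised max-linear process X(t) = max_j a_j(t) Z_j on [0, 1]. The coefficient functions and the use of unit-Fréchet factors are illustrative assumptions; the paper's interpolation scheme is not reproduced.

# Discretised max-linear model X(t) = max_j a_j(t) * Z_j on [0, 1],
# in the spirit of Wang and Stoev (2011); coefficients a_j are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)                       # grid on [0, 1]
d = 5                                                # number of factors
Z = 1.0 / (-np.log(rng.uniform(size=d)))             # i.i.d. standard Frechet variables

# Non-negative coefficient functions a_j(t): simple triangular bumps.
centres = np.linspace(0.0, 1.0, d)
A = np.maximum(0.0, 1.0 - 4.0 * np.abs(t[:, None] - centres[None, :]))

X = np.max(A * Z[None, :], axis=1)                   # max-linear combination
print(X[:5])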
The presence of multiple fields during inflation might seed a detectable amount of non-Gaussianity in the curvature perturbations, which in turn becomes observable in present data sets such as the cosmic microwave background (CMB) or the large-scale structure (LSS). In this proceeding we present a fully analytic method to infer inflationary parameters from observations by exploiting higher-order statistics of the curvature perturbations. To keep this analyticity, and thereby to dispense with numerically expensive sampling techniques, a saddle-point approximation is introduced whose precision has been validated for a numerical toy example. Applied to real data, this approach might make it possible to discriminate among the still viable models of inflation.
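For context, the one-dimensional saddle-point (Laplace) approximation on which this kind of analytic treatment rests replaces an intractable integral by an expansion around the stationary point of the exponent (a generic textbook form; the multidimensional expansion actually used for the curvature-perturbation statistics is not reproduced here):

\[ \int e^{-N f(x)}\, dx \;\approx\; e^{-N f(x_0)} \sqrt{\frac{2\pi}{N f''(x_0)}}, \qquad f'(x_0) = 0, \; f''(x_0) > 0, \]

valid for large N when f has an isolated minimum at x_0.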
Linear typing schemes can be used to guarantee non-interference and so the soundness of in-place update with respect to a functional semantics. But linear schemes are restrictive in practice, and more restrictive than necessary to guarantee soundness of in-place update. This limitation has prompted research into static analysis and more sophisticated typing disciplines to determine when in-place update may be safely used, or to combine linear and non-linear schemes. Here we contribute to this direction by defining a new typing scheme that better approximates the semantic property of soundness of in-place update for a functional semantics. We begin from the observation that some data are used only in a “read-only” context, after which they may be safely re-used before being destroyed. Formalising the in-place update interpretation in a machine model semantics allows us to refine this observation, motivating three usage aspects apparent from the semantics that are used to annotate function argument types. The aspects are (1) used destructively, (2) used read-only but shared with the result, and (3) used read-only and not shared with the result. The main novelty is aspect (2), which allows a linear value to be safely read and even aliased with a result of a function without being consumed. This novelty makes our type system more expressive than previous systems for functional languages in the literature. The system remains simple and intuitive, but it enjoys a strong soundness property whose proof is non-trivial. Moreover, our analysis features principal types and feasible type reconstruction, as shown in M. Konečný (In TYPES 2002 Workshop, Nijmegen, Proceedings, Springer-Verlag, 2003).
We present a method for calculating gas sorption in complex polymers in which slow processes such as gas-induced plasticization and volume dilation are important factors. Since the relaxational swelling of the polymer matrix that is observed at elevated gas concentrations takes hours or days, the swelling process is orders of magnitude too slow to simulate the respective molecular dynamics with reasonable time and effort. To address this apparent incompatibility of experiment and simulation, we use single representative reference states from experiment and construct atomistic packing models according to these specifications. Gas sorption of CO2 and CH4 was successfully calculated for polysulfone, a 6FDA-polyimide, and a polymer of intrinsic microporosity, PIM-1, at 308 K and pressures up to 50 bar.
At the centre of this chapter are the process of migration, its structural trends, geographical patterns, conceptual delineation and statistical measurement. In describing and analysing these, we do not follow traditional theoretical concepts that interpret migration as a ‘natural’ function and only as a consequence of economic or political disparities. This perception of migration as an automatic flow in an uneven world does not do justice to the complexity of this phenomenon. Migration is regulated and defined by various forces, two of which will be the focus of attention in this chapter: the economy and society. The economy and its specific demand for qualified and unqualified labour are of critical importance because they have the societal power to define the size and the structure of the labour markets to which migrants have to adapt. The institutional approach, by contrast, is central to explaining why migration takes place and which forms it takes. It underlines the significance of policy and administrative procedures for canalising migration flows. Of course, these two forces interact. Enterprises and their political representatives formulate their needs and economic interests and influence the institutional rules. The institutional rules, in turn, delimit the scope and options of entrepreneurial action.
The economy and the societal institutions open and close gates for migrants; they also define and differentiate between spatial mobility and migration. Usually, only some forms of spatial mobility are perceived as migration – a fact not reflected in the general and rather technical definition of migration given by the United Nations recommendation dating back to 1998: ‘a long-term migrant should be defined as a person who moves to a country other than that of his or her usual residence for a period of at least a year (12 months), so that the country of destination effectively becomes his or her new country of usual residence.’ According to these guidelines, EU citizens moving within the EU are migrants, while in reality they may not be perceived as such. On the other hand, in some countries labour migrants are categorised as guest workers and not as migrants. And it is also a matter of public perception whether asylum seekers, who are obviously mobile, are migrants.
We show that real-number computations in the interval-domain environment are ‘inherently parallel’ in a precise mathematical sense. We do this by reducing computations of the weak parallel-or operation on the Sierpinski domain to computations of the addition operation on the interval domain.
A classical quantified modal logic is used to define a “feasible” arithmetic whose provably total functions are exactly the polynomial-time computable functions. Informally, one understands □α as “α is feasibly demonstrable”.
It differs from a system that is as powerful as Peano Arithmetic only by the restriction of induction to ontic (i.e., □-free) formulas. Thus, it is defined without any reference to bounding terms, and it admits induction over formulas having arbitrarily many alternations of unbounded quantifiers. The system also uses only a very small set of initial functions.
To obtain the characterization, one extends the Curry–Howard isomorphism to include modal operations. This leads to a realizability translation based on recent results in higher-type ramified recursion. The fact that induction formulas are not restricted in their logical complexity allows one to use the Friedman A-translation directly.
The development also leads us to propose a new Frege rule, the “Modal Extension” rule: if ⊢ α then ⊢ A ↔ α for a new symbol A.
We use a syntactical notion of Kripke models to obtain interpretations of subsystems of arithmetic in their intuitionistic counterparts. This yields, in particular, a new proof of Buss' result that the Skolem functions of Bounded Arithmetic are polynomial-time computable.
Intensive care patients with organ failure often suffer an acute catabolic state. Leptin is a 16-kDa hormone which is produced by mature adipocytes and correlates with human energy expenditure. We investigated whether continuous venovenous haemofiltration, which may eliminate molecules up to 20–30 kDa, is capable of removing human leptin. Leptin measurements were made in the plasma of 15 patients with sepsis before continuous venovenous haemofiltration (T0) and during the procedure at 24 h (T1), 48 h (T2), and 72 h (T3), using samples taken before and after haemofiltration. In addition, measurements were made in the ultrafiltrate at T1–T3. The plasma leptin level at T0 was 17.6 ng mL⁻¹. The concentration at T1 was 17.5 ng mL⁻¹ pre-filter and 26.5 ng mL⁻¹ post-filter (T2: 14.2/23.2 ng mL⁻¹; T3: 12.4/16.3 ng mL⁻¹). This concentration effect after haemofiltration was also seen with albumin. The values measured at T3 tended to be lower than those recorded at T1. The mean leptin levels in the ultrafiltrate were 0.15–0.18 ng mL⁻¹. The range of leptin levels in the ultrafiltrate was thus only 0.5–3% of that measured in plasma. We conclude that human leptin is only minimally eliminated into the ultrafiltrate by continuous venovenous haemofiltration and that plasma leptin levels may decrease during sepsis.
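As a check on the reported figures, the sieving coefficient implied by these measurements is simply the ratio of ultrafiltrate to plasma concentration (notation ours):

\[ S \;=\; \frac{C_{\mathrm{UF}}}{C_{\mathrm{plasma}}}, \qquad \text{e.g. } \frac{0.17\ \mathrm{ng\,mL^{-1}}}{17.6\ \mathrm{ng\,mL^{-1}}} \;\approx\; 0.01, \]

i.e. on the order of 1%, consistent with the reported 0.5–3% range across samples.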
This paper was guest-edited by Harry Mairson and Bruce Kapron, for our intended Special Issue on Functional Programming and Computational Complexity. Other papers submitted for the special issue were either out-of-scope or otherwise unsuitable for JFP. Even though only one paper met their high standards, this did not make Harry and Bruce's job any easier, and we thank them for their efforts.
In previous work the author has introduced a lambda calculus SLR with modal and linear types which serves as an extension of Bellantoni–Cook's function algebra BC to higher types. It is a step towards a functional programming language in which all programs run in polynomial time. While this previous work was concerned with the syntactic metatheory of SLR, in this paper we develop a semantics of SLR in terms of Chu spaces over a certain category of sheaves, from which it follows that all expressible functions are indeed in PTIME. We notice a similarity between the Chu space interpretation and CPS translation which, we hope, will have further applications in functional programming.