Systems to measure gas production to study digestion kinetics have been developed at several locations. The system developed at Cornell University and the rationale behind its evolution are described, with an emphasis on whether venting after each observation is necessary and on the choice of sensors. Different non-linear models used to fit gas production data are discussed, with an emphasis on the dual-pool logistic model. The third section of the paper includes a theoretical discussion of how gas data can be integrated with data on passage to predict ruminal digestibility. The final section addresses the practical applications of these gas data and ways in which they can be used in models such as the Cornell Net Carbohydrate and Protein System. Also included are evaluations of ensiled and freeze-dried samples from the same source as an indication of how gas systems can be used to evaluate the soluble fractions of forages.
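As a concrete illustration, a dual-pool logistic model can be fitted to cumulative gas-production data by non-linear least squares. The sketch below uses a common Schofield-type parameterisation (two pools with asymptotes a and b, fractional rates k1 and k2, and a shared lag); the parameterisation, the synthetic data and the starting values are illustrative assumptions, not the Cornell system's exact implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_pool_logistic(t, a, b, k1, k2, lag):
    """Dual-pool logistic gas production (Schofield-type form): two pools
    with asymptotic volumes a and b, fractional rates k1 and k2 (1/h),
    and a shared lag (h)."""
    return (a / (1.0 + np.exp(2.0 + 4.0 * k1 * (lag - t)))
            + b / (1.0 + np.exp(2.0 + 4.0 * k2 * (lag - t))))

# Hypothetical gas-production observations (mL) over incubation time (h),
# generated here from known parameters so the fit can be checked.
t_obs = np.array([0, 3, 6, 9, 12, 18, 24, 36, 48, 72], dtype=float)
v_obs = dual_pool_logistic(t_obs, 30.0, 40.0, 0.12, 0.03, 2.0)

params, _ = curve_fit(dual_pool_logistic, t_obs, v_obs,
                      p0=[25.0, 35.0, 0.1, 0.05, 1.0], maxfev=10000)
a, b, k1, k2, lag = params
```

With noise-free synthetic data the fit recovers the generating curve; real gas curves would of course carry measurement error, and the choice of starting values matters more.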
GLOBALink, a large online network of tobacco control professionals, was active in promoting the World Health Organization's Framework Convention on Tobacco Control (FCTC), an international treaty aimed at reducing the global burden of tobacco-related death and disease. We examined and compared the roles that different countries served in the GLOBALink community during FCTC negotiation and ratification. Previous studies of FCTC ratification found the process adhered to a diffusion of innovation model (Valente et al., 2015). We followed that work by conducting content analyses of discussion messages posted by GLOBALink members representing different countries. Based on the time at which they ratified the FCTC, each country was labeled with one of the four adoption stages of the diffusion model, and we investigated the amount of shared word use between the different stages. A goodness-of-fit chi-squared test indicated that content was not shared in the expected manner between stages (χ² = 11,856.45, N = 51,447, p < 0.001). A deeper look at the specific words shared between countries within and between adoption stages provided insight into how interactions between certain countries might have served to support the ratification process.
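For readers unfamiliar with the test, the shape of the goodness-of-fit calculation can be sketched with scipy. The per-stage counts below are invented placeholders (only χ² = 11,856.45 and N = 51,447 come from the study), and the uniform expected distribution merely stands in for whatever expectation the authors actually derived:

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts of shared-word occurrences across the four
# diffusion-of-innovation adoption stages (values are illustrative only;
# they are chosen to sum to the study's N = 51,447).
observed = np.array([18000, 15000, 12000, 6447])

# Expected counts if content were shared uniformly across stages.
expected = np.full(4, observed.sum() / 4)

# Chi-squared goodness-of-fit statistic and p-value.
chi2, p = chisquare(observed, expected)
```

A small p-value, as in the study, indicates that the observed sharing pattern departs from the expected distribution.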
We apply cross-correlation to Pléiades satellite images to generate accurate, high-resolution monthly surface velocity maps of the Monte Tronador glaciers between March and June 2012. Measured surface displacements cover periods as short as 19 days, with a precision of ∼0.58 m (11 m a⁻¹). These glaciers follow a radial flow pattern, with maximum surface speeds of ∼390 m a⁻¹ associated with steep icefalls. The lower reaches of the debris-covered tongues of the Verde and Casa Pangue glaciers are almost stagnant, whereas Ventisquero Negro, another debris-covered glacier, shows acceleration at the front due to calving into a proglacial lake. Low-elevation debris-covered glacier tongues show increasing velocities at the beginning of the accumulation season, whereas higher-elevation, clean-ice tongues slow down during this period. This contrasting behavior is probably a response to an increase in water input to the subglacial system from winter rainfall events at low elevations and a decrease in meltwater production at higher elevations. These sequential velocity maps can help to identify the controls on glacier surface velocity, aid in the delimitation of ice divides and could also contribute to a more realistic calibration of ice-flux and mass-balance models in this glacierized area.
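The image-matching step rests on locating the cross-correlation peak between patches of the repeat images. A minimal sketch, using FFT-based phase correlation to recover an integer-pixel shift from synthetic data (operational processing of the Pléiades scenes additionally involves orthorectification and sub-pixel peak fitting):

```python
import numpy as np

def cross_correlation_shift(ref, target):
    """Estimate the integer-pixel displacement of `target` relative to
    `ref` via FFT-based phase correlation, a simplified stand-in for the
    sub-pixel image matching applied to the satellite scenes."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(target)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real  # phase correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peak coordinates into signed shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic test: a random texture displaced by (-3, +5) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
target = np.roll(ref, shift=(-3, 5), axis=(0, 1))
dy, dx = cross_correlation_shift(ref, target)
```

Dividing the recovered displacement (in metres, after scaling by pixel size) by the time separation of the image pair yields the surface velocity.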
Observations show that glaciers around the world are in retreat and losing mass. Internationally coordinated for over a century, glacier monitoring activities provide an unprecedented dataset of glacier observations from ground, air and space. Glacier studies generally select specific parts of these datasets to obtain optimal assessments of the mass-balance data relating to the impact that glaciers exert on global sea-level fluctuations or on regional runoff. In this study we provide an overview and analysis of the main observational datasets compiled by the World Glacier Monitoring Service (WGMS). The dataset on glacier front variations (∼42 000 since 1600) delivers clear evidence that centennial glacier retreat is a global phenomenon. Intermittent readvance periods at regional and decadal scale are normally restricted to a subsample of glaciers and have not come close to reaching the maximum positions of the Little Ice Age (or Holocene). Glaciological and geodetic observations (∼5200 since 1850) show that the rates of early 21st-century mass loss are without precedent on a global scale, at least for the time period observed and probably also for recorded history, as indicated also in reconstructions from written and illustrated documents. This strong imbalance implies that glaciers in many regions will very likely suffer further ice loss, even if climate remains stable.
Most glaciological studies in Argentina have focused on the large outlet glaciers of the Southern Patagonia Icefield (SPI); the numerous smaller neighboring glaciers have received significantly less attention. We present an inventory of 248 medium- to small-size glaciers (0.01–25 km²) adjacent to the northeast margin of the SPI, describe their change over the period 1979–2005 and assess local and regional climatic variations in an attempt to explain the observed glacier changes. Based on an ASTER mosaic from 20 February 2005 and the ASTER Global Digital Elevation Model, we identified a total glacier area of 187.2 ± 7.4 km² between 600 and 2870 m a.s.l. Glaciers are largely debris-free and are concentrated in the western, more humid sector adjacent to the SPI. Using a 20 March 1979 US military intelligence Hexagon KH-9 satellite photograph, we measured a total areal reduction of ∼33.7 km² (15.2%) between 1979 and 2005. Ablation season temperatures from the study area have followed a regional warming trend that could partly explain the observed glacier shrinkage. Annual precipitation estimates show a gradual decrease between 1979 and 2002 that may also have contributed to the ice mass loss.
The ready availability of full-field velocity measurements in present-day experiments has kindled interest in using such data for force estimation, especially in situations where direct measurements are difficult. Among the methods proposed, a formulation based on impulse is attractive, for both practical and physical reasons. However, evaluation of the impulse requires a complete description of the vorticity field, and this is particularly hard to achieve in the important region close to a body surface. This paper presents a solution to the problem. The incomplete experimental vorticity field is augmented by a vortex sheet on the body, with strength determined by the no-slip boundary condition. The impulse is then found from the sum of vortex-sheet and experimental contributions. Components of physical interest can straightforwardly be recognised; for example, the classical ‘added mass’ associated with fluid inertia is represented by an explicit term in the formulation for the vortex sheet. The method is implemented in the context of two-dimensional flat-plate flow, and tested on velocity-field data from a translating wing experiment. The results show that the vortex-sheet contribution is significant for the test data set. Furthermore, when it is included, good agreement with force-balance measurements is found. It is thus recommended that any impulse-based force calculation should correct for (likely) data incompleteness in this way.
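For context, the impulse-based force relation that this class of methods builds on takes a standard form in two dimensions. The expression below is the generic textbook version (symbols and sign conventions in the paper itself may differ), with the measured vorticity completed by the body vortex sheet:

```latex
% 2-D impulse of the vorticity field, with the experimentally measured
% vorticity \omega_{\mathrm{exp}} completed by a vortex sheet of strength
% \gamma on the body surface \partial B (\rho: fluid density):
\mathbf{I}(t) = \int_{A} \mathbf{x}\times\omega_{\mathrm{exp}}\hat{\mathbf{z}}\,\mathrm{d}A
  \;+\; \oint_{\partial B} \mathbf{x}\times\gamma\hat{\mathbf{z}}\,\mathrm{d}s,
\qquad
\mathbf{F}(t) = -\rho\,\frac{\mathrm{d}\mathbf{I}}{\mathrm{d}t}.
```

In this picture, the added-mass contribution emerges when the time derivative acts on the part of the sheet strength γ that depends on the body motion.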
Jay Myung, Ohio State University (USA),
Daniel R. Cavagnaro, Mihaylo College of Business and Economics, California State University at Fullerton (USA),
Mark A. Pitt, Department of Psychology, Ohio State University (USA)
The study of cognition is challenged by the difficulty of inferring representations and processes in a system as complex as the brain. The field of cognitive science has met this challenge by borrowing and developing research tools with which to study the brain. Tools here are meant broadly, to include not just the hardware (e.g., computers, eye trackers, imaging equipment) used for data collection, but also the quantitative tools used to guide inference, including statistical methods (frequentist and Bayesian) and cognitive modeling.
Cognitive modeling assists in scientific inference by, among other things, assessing the plausibility of an explanation (e.g., theory, process). It achieves this by instantiating a version of the explanation in some quantitative form (i.e., the mathematical model), and thereby demonstrating its plausibility (Polk and Seifert, 2002; Busemeyer and Diederich, 2010; Lewandowsky and Farrell, 2011; Lee and Wagenmakers, 2014). However, as with theories, not all quantitative explanations are equally good or convincing, so what criteria should be used to evaluate models? What makes a model a good explanation from which it is reasonable to draw inferences, and what signs indicate that a model is poor? These questions are the focus of this chapter. Like modeling itself, the field of model evaluation is still very much in its infancy. Progress has been made, but many challenges remain. Before reviewing the state of the art, we first provide a broader context in which to situate the enterprise of model evaluation.
Although cognitive modeling has been around since the 1950s, its popularity increased once computers became cheap and fast. User-friendly software has also accelerated its adoption, to the point where more and more researchers recognize the value of models and their usefulness for knowledge discovery (Shiffrin and Nobel, 1997; Fum et al., 2007; McClelland, 2009). Theories in much of the field tend to be broad claims about foundational issues in cognition (e.g., representations are distributed rather than local; grammar acquisition is probabilistic rather than rule-based; category learning is Bayesian). Instantiating these claims in a model makes the theory more viable, and as a consequence more persuasive, especially when the model's performance is shown to mimic that of individuals. In addition, it can be difficult to develop a theory with much depth without formalizing it quantitatively in some way.
In the 1970s, Feldman and Moore classified separably acting von Neumann algebras containing Cartan maximal abelian self-adjoint subalgebras (MASAs) using measured equivalence relations and 2-cocycles on such equivalence relations. In this paper we give a new classification in terms of extensions of inverse semigroups. Our approach is more algebraic in character and less point-based than that of Feldman and Moore. As an application, we give a restatement of the spectral theorem for bimodules in terms of subsets of inverse semigroups. We also show how our viewpoint leads naturally to a description of maximal subdiagonal algebras.
The IUE spacecraft was launched with prime and redundant mechanical Panoramic Attitude Sensors (PAS) to determine coarse spacecraft pointing. Attitude determination typically took at least 24 hours. After launch both systems failed. A new method was developed which required pointing the spacecraft at the antisolar position. After the failure of the 4th IUE gyro, it was no longer possible to point in the antisolar direction. A second method was developed which utilizes IUE’s ability to track the sun with a solid state two-dimensional sun sensor. Attitude determination can now be completed in several hours. An hour is required for coarse position measurement and several more hours are needed, using a small 15 arc minute square finder camera, for final attitude confirmation. These methods should be of use for other spacecraft where weight is critical or there is a desire to avoid mechanical devices.
The International Ultraviolet Explorer (IUE) is a geosynchronous orbiting telescope launched by the National Aeronautics and Space Administration (NASA) on January 26, 1978, and operated jointly by NASA and the European Space Agency. The science instrument consists of two spectrographs which span the wavelength range of 1150 to 3200 Å and offer two dispersions with resolutions of 6 Å and 0.2 Å. The spacecraft’s attitude control system originally included an inertial reference package containing 6 gyroscopes for 3-axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, 4 of the 6 gyroscopes have failed. The current attitude control system utilizes the remaining 2 gyros and a Fine Sun Sensor (FSS) for 3-axis stabilization. When the next gyro fails, a new attitude control system will be uplinked which will rely on the remaining gyro and the FSS for general 3-axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.
It has been postulated that aging is the consequence of an accelerated accumulation of somatic DNA mutations and that subsequent errors in the primary structure of proteins ultimately reach levels sufficient to affect organismal functions. The technical limitations of detecting somatic changes and the lack of insight into the minimum level of erroneous proteins required to cause an error catastrophe have hampered any firm conclusions on these theories. In this study, we sequenced whole genomes from the whole blood of two pairs of monozygotic (MZ) twins, 40 and 100 years old, on two independent next-generation sequencing (NGS) platforms (Illumina and Complete Genomics). Potentially discordant single-base substitutions supported by both platforms were validated extensively by Sanger, Roche 454, and Ion Torrent sequencing. We demonstrate that the genomes of the two twin pairs are germ-line identical between co-twins, and that the genomes of the 100-year-old MZ twins are distinguished by eight confirmed somatic single-base substitutions, five of which are within introns. Putative somatic variation between the 40-year-old twins was not confirmed in the validation phase. We conclude from this systematic effort that by using two independent NGS platforms, somatic single nucleotide substitutions can be detected, and that a century of life did not result in a large number of detectable somatic mutations in blood. The low number of somatic variants observed by using two NGS platforms might provide a framework for detecting disease-related somatic variants in phenotypically discordant MZ twins.
Correct handling of names and binders is an important issue in meta-programming. This paper presents an embedding of constraint logic programming into the αML functional programming language, which provides a provably correct means of implementing proof search computations over inductive definitions involving names and binders modulo α-equivalence. We show that the execution of proof search in the αML operational semantics is sound and complete with regard to the model-theoretic semantics of formulae, and develop a theory of contextual equivalence for the subclass of αML expressions which correspond to inductive definitions and formulae. In particular, we prove that αML expressions, which denote inductive definitions, are contextually equivalent precisely when those inductive definitions have the same model-theoretic semantics. This paper is a revised and extended version of the conference paper (Lakin, M. R. & Pitts, A. M. (2009) Resolving inductive definitions with binders in higher-order typed functional programming. In Proceedings of the 18th European Symposium on Programming (ESOP 2009), Castagna, G. (ed), Lecture Notes in Computer Science, vol. 5502. Berlin, Germany: Springer-Verlag, pp. 47–61) and draws on material from the first author's PhD thesis (Lakin, M. R. (2010) An Executable Meta-Language for Inductive Definitions with Binders. University of Cambridge, UK).
Drug discovery has classically targeted the active sites of enzymes or ligand-binding sites of receptors and ion channels. In an attempt to improve selectivity of drug candidates, modulation of protein–protein interfaces (PPIs) of multiprotein complexes that mediate conformation or colocation of components of cell-regulatory pathways has become a focus of interest. However, PPIs in multiprotein systems continue to pose significant challenges, as they are generally large, flat and poor in distinguishing features, making the design of small molecule antagonists a difficult task. Nevertheless, encouragement has come from the recognition that a few amino acids – so-called hotspots – may contribute the majority of interaction-free energy. The challenges posed by protein–protein interactions have led to a wellspring of creative approaches, including proteomimetics, stapled α-helical peptides and a plethora of antibody inspired molecular designs. Here, we review a more generic approach: fragment-based drug discovery. Fragments allow novel areas of chemical space to be explored more efficiently, but the initial hits have low affinity. This means that they will not normally disrupt PPIs, unless they are tethered, an approach that has been pioneered by Wells and co-workers. An alternative fragment-based approach is to stabilise the uncomplexed components of the multiprotein system in solution and employ conventional fragment-based screening. Here, we describe the current knowledge of the structures and properties of protein–protein interactions and the small molecules that can modulate them. We then describe the use of sensitive biophysical methods – nuclear magnetic resonance, X-ray crystallography, surface plasmon resonance, differential scanning fluorimetry or isothermal calorimetry – to screen and validate fragment binding. Fragment hits can subsequently be evolved into larger molecules with higher affinity and potency. 
These may provide new leads for drug candidates that target protein–protein interactions and have therapeutic value.
Vanadium dioxide exhibits a semiconductor-to-metal phase transition at a temperature of about 68 °C. This can be exploited in the form of optical thin-film structures which exhibit non-linear behaviour when exposed to pulsed infra-red radiation. Since the phase transition is structural in nature, it is of interest to explore its temporal and spatial properties under irradiation with pulsed laser sources. Fast CMT detectors have been used to resolve nanosecond temporal detail experimentally, and the spatial characteristics have been explored using a simple adiabatic heating model. The dynamic transmission values measured for VO₂ devices are complex combinations of the temporally and spatially varying characteristics of the film.
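The adiabatic heating picture invoked above can be made concrete with a back-of-envelope estimate: if conduction is neglected over the pulse duration, the absorbed areal energy divided by the film's volumetric heat capacity gives the temperature rise. Every number below (density, specific heat, thickness, absorbed fluence) is an illustrative assumption, not a value from the experiments:

```python
# Minimal adiabatic-heating estimate for a VO2 thin film under a short
# laser pulse: all absorbed heat stays in the film (no conduction losses).
# Material and pulse parameters are illustrative assumptions only.

RHO = 4.6e3         # VO2 density, kg m^-3 (approximate literature value)
C_P = 700.0         # specific heat, J kg^-1 K^-1 (approximate)
THICKNESS = 200e-9  # film thickness, m (assumed)
ABSORBED_FLUENCE = 20.0  # absorbed pulse fluence, J m^-2 (assumed)

def adiabatic_rise(fluence, thickness, rho=RHO, c_p=C_P):
    """Temperature rise dT = Q / (rho * c_p * d) for absorbed areal energy Q."""
    return fluence / (rho * c_p * thickness)

delta_t = adiabatic_rise(ABSORBED_FLUENCE, THICKNESS)
# Starting from room temperature, does the film cross the ~68 C transition?
crosses_transition = 20.0 + delta_t >= 68.0
```

Whether a given pulse switches the film thus depends directly on the absorbed fluence and the film thickness; the spatial beam profile then maps into a spatial switching pattern.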
The development of effective protocols for the control of crystal structure, size and morphology attracts considerable interest, given the requirement for particles of modal size and shape in many areas of particle processing and the importance of crystallochemical selectivity in determining the exploitable properties of crystalline solids. In biological systems there are many examples of advanced “crystal engineering” in which materials are deposited in a highly controlled manner to produce crystal phases that are unique with respect to their structure, habit, uniformity of size and texture. A review of biomineralisation will show that while a complex array of strategies has evolved for regulating crystal growth, one feature is common to the biological paradigm. Interactions between supramolecular organic structures and the nascent inorganic solids play a fundamental role in controlling the deposition of the biominerals and ordering the assembly of these units into hierarchical structures. In order to gain a better understanding of the molecular recognition events that take place at the organic-inorganic interface, a bio-inspired crystal chemical approach has been adopted. For this work, organised organic assemblies (e.g. surfactant aggregates, peptide mimics, dendrimers) of precise molecular design (head group identity, packing conformation, primary sequence etc.) are being assayed for their effectiveness in controlling the nucleation and growth of crystals. It is evident from these studies that the chemical organisation of the polymeric microenvironment operates at the molecular level to control certain aspects of the nucleation, growth and stabilisation of inorganic particles. By systematically changing the molecular motif of the organic template, we have established that the size, crystallographic orientation, growth and assembly of the mineral phase can be tailored to function.
These results have relevance not only to our understanding of biomineralisation but also suggest a multiplicity of exploitable opportunities for the engineering of crystals.
Laser-induced selected-area epitaxy of CdTe thin films on GaAs substrates has been investigated, and the roles of vapour-phase and surface reactions considered. Photo-enhanced growth rates of CdTe have been measured as a function of UV laser intensity and of the Cd to Te alkyl ratio. The growth rates are not determined simply by vapour-phase photo-dissociation but also by a photolytic reaction on the surface. The latter enables good pattern definition, as the growth rate is enhanced where the UV radiation is incident. The factors that determine the photo-enhancement are considered in the light of the Langmuir–Hinshelwood model.
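For reference, the Langmuir–Hinshelwood picture of a surface-mediated reaction gives a rate with the familiar saturating form below (generic textbook expression; the form actually fitted to the growth-rate data may differ):

```latex
% Langmuir–Hinshelwood rate for a surface reaction fed by a species with
% partial pressure p, adsorption equilibrium constant K and surface rate
% constant k_s; the rate saturates as adsorption sites fill (Kp \gg 1):
r \;=\; \frac{k_{s}\,K\,p}{1 + K\,p}
```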
This paper presents the results of our investigation into the possibility of increasing both the radiative cross-section and the electrical activation efficiency in erbium (Er³⁺) doped silicon (Si). The energy levels of the isolated Er³⁺ ion have been predicted theoretically using the Thomas-Fermi method. The behaviour of these levels in Si was then investigated using a Kronig-Penney approach. Initial theoretical results imply that adding fluorine (F) alongside Er³⁺ in Si increases the radiative cross-section of Er³⁺ by at least an order of magnitude, and that co-doping appears to enhance the mixing of the 4f and 5d levels and causes the Er³⁺ energy levels to overlap with those of the host. Photoluminescence spectra of Er³⁺ in Si co-doped with F also indicate an interaction with the host lattice which appears to depend on its electrical characteristics.
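The Kronig-Penney approach mentioned above reduces, in the delta-function-barrier limit, to a well-known allowed-band condition that is easy to evaluate numerically. The barrier strength and energy grid below are generic illustrative choices, unrelated to the actual Si:Er³⁺ calculation:

```python
import numpy as np

# Kronig-Penney model in the delta-potential limit: an energy E enters
# through alpha = sqrt(2mE)/hbar, and (in the dimensionless variable
# alpha*a, with a the lattice period) lies in an allowed band when
#     |cos(alpha*a) + P * sin(alpha*a) / (alpha*a)| <= 1.
P = 3.0 * np.pi / 2.0  # barrier strength; a classic textbook value

def in_allowed_band(alpha_a, p=P):
    """Boolean mask: which values of alpha*a fall inside allowed bands."""
    rhs = np.cos(alpha_a) + p * np.sin(alpha_a) / alpha_a
    return np.abs(rhs) <= 1.0

# Scan a range of alpha*a to map out alternating allowed/forbidden bands.
alpha_a = np.linspace(0.1, 12.0, 2000)
allowed = in_allowed_band(alpha_a)
band_fraction = allowed.mean()  # fraction of sampled energies in bands
```

Scanning `alpha_a` in this way maps out the model's alternating allowed and forbidden bands, which is the sense in which the host band structure can come to overlap impurity levels.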
In this study we have investigated the effectiveness of semimetallic amorphous carbon films as a liftoff mask for the selective growth, by OMVPE, of GaInP epitaxial layers on (001)-oriented GaAs substrates. Scanning electron microscopy, cathodoluminescence spectroscopy and EDX analysis have been employed to characterize the epitaxial material. Our results show excellent selectivity, with little nucleation taking place on the amorphous carbon mask in the region of the patterned openings. Liftoff of the carbon mask was very easily achieved, leaving no unintentional nucleation on the substrate below. Investigations in the AlGaAs/GaAs system did not yield the same degree of selectivity with this mask, but the polycrystalline film that deposited on the mask was cleanly removed from the substrate along with the mask.