We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers 270 deg² of an area covered by the Dark Energy Survey, reaching a depth of 25–30 μJy/beam rms at a spatial resolution of 11–18 arcsec, resulting in a catalogue of about 220 000 sources, of which about 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.
We present a detailed overview of the cosmological surveys that we aim to carry out with Phase 1 of the Square Kilometre Array (SKA1) and the science that they will enable. We highlight three main surveys: a medium-deep continuum weak lensing and low-redshift spectroscopic HI galaxy survey over 5 000 deg²; a wide and deep continuum galaxy and HI intensity mapping (IM) survey over 20 000 deg² from $z = 0.35$ to 3; and a deep, high-redshift HI IM survey over 100 deg² from $z = 3$ to 6. Taken together, these surveys will achieve an array of important scientific goals: measuring the equation of state of dark energy out to $z \sim 3$ with percent-level precision measurements of the cosmic expansion rate; constraining possible deviations from General Relativity on cosmological scales by measuring the growth rate of structure through multiple independent methods; mapping the structure of the Universe on the largest accessible scales, thus constraining fundamental properties such as isotropy, homogeneity, and non-Gaussianity; and measuring the HI density and bias out to $z = 6$. These surveys will also provide highly complementary clustering and weak lensing measurements, with systematic uncertainties independent of those of optical and near-infrared (NIR) surveys like Euclid, LSST, and WFIRST, leading to a multitude of synergies that can improve constraints significantly beyond what optical or radio surveys can achieve on their own. This document, the 2018 Red Book, provides reference technical specifications, cosmological parameter forecasts, and an overview of relevant systematic effects for the three key surveys, and will be regularly updated by the Cosmology Science Working Group in the run-up to the start of operations and the Key Science Programme of SKA1.
We comment on the conjecture by Parker et al. (2016) that Antarctic toothfish recently returned to McMurdo Sound, arguing that this species never departed. Instead, as deduced from a 40-year fishing effort, toothfish prevalence in the water column became markedly reduced where bottom depths are <500 m, with research continuing to show their presence on or above the bottom where depths are greater. We also counter arguments that toothfish departed, and remained absent, during and following the five-year presence of mega-icebergs near the opposite coast of Ross Island, the icebergs purportedly fomenting conditions that discouraged toothfish presence in the Sound. Available analyses reveal that toothfish movement into the Sound was probably not significantly affected, and additionally that neither changes in hydrography nor in primary productivity in the Sound would have been sufficient to impact toothfish presence through food web alteration. We hypothesize that the local effect of predation by seals and whales and the regional effect of a fishery targeting the largest toothfish (those neutrally buoyant and thus capable of occupying upper levels of the water column) have resulted in the remaining toothfish now being found predominantly closer to the bottom at greater depths.
Andrew R. Liddle, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
Pia Mukherjee, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
David Parkinson, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK
One of the principal aims of cosmology is to identify the correct cosmological model, able to explain the available high-quality data. Determining the best model is a two-stage process. First, we must identify the set of parameters that we will allow to vary in seeking to fit the observations. As part of this process we need also to fix the allowable (prior) ranges that these parameters might take, most generally by providing a probability density function in the N-dimensional parameter space. This combination of parameter set and prior distribution is what we will call a model, and it should make calculable predictions for the quantities we are going to measure. Having chosen the model, the second stage is to determine, from the observations, the ranges of values of the parameters which are compatible with the data. This second step, parameter estimation, is described in the cosmological context by Lewis and Bridle in Chapter 3 of this volume. In this article, we shall concentrate on the choice of model.
Typically, there is not a single model that we wish to fit to the data. Rather, the aim of obtaining the data is to choose between competing models, where different physical processes may be responsible for the observed outcome. This is the statistical problem of model comparison, or model selection. This is readily carried out by extending the Bayesian parameter estimation framework so that we assign probabilities to models, as well as to parameter values within those models.
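The extension described above can be made concrete with a toy calculation. The sketch below is illustrative only (all numbers and the Gaussian toy models are assumed, not taken from the chapter): it compares a model with a fixed parameter against a model with a free parameter under a Gaussian prior, using the fact that for Gaussians the evidence integral is analytic.

```python
import math

# Toy Bayesian model comparison (illustrative; x, sigma, tau are assumed).
# Datum x measured with known Gaussian noise sigma.
# Model M0: mean fixed at 0 (no free parameters).
# Model M1: mean mu free, with Gaussian prior mu ~ N(0, tau^2).

def gauss(x, mean, var):
    """Gaussian probability density with the given mean and variance."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

x, sigma, tau = 1.0, 0.5, 2.0

# Evidence of M0: just the likelihood at the fixed parameter value.
Z0 = gauss(x, 0.0, sigma**2)

# Evidence of M1: the likelihood marginalized over the prior; for Gaussians
# this integral is analytic and equals N(x; 0, sigma^2 + tau^2).
Z1 = gauss(x, 0.0, sigma**2 + tau**2)

# Bayes factor: B01 > 1 favours the simpler model M0, B01 < 1 favours M1.
B01 = Z0 / Z1
print(B01)
```

With these assumed numbers the datum lies two noise standard deviations from the fixed value, so the Bayes factor comes out below one, mildly favouring the more flexible model; the wide prior on mu partially offsets this through the usual Occam penalty.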
In recent years cosmologists have advanced from largely qualitative models of the Universe to precision modelling using Bayesian methods, in order to determine the properties of the Universe to high accuracy. This timely book is the only comprehensive introduction to the use of Bayesian methods in cosmological studies, and is an essential reference for graduate students and researchers in cosmology, astrophysics and applied statistics. The first part of the book focuses on methodology, setting the basic foundations and giving a detailed description of techniques. It covers topics including the estimation of parameters, Bayesian model comparison, and separation of signals. The second part explores a diverse range of applications, from the detection of astronomical sources (including through gravitational waves), to cosmic microwave background analysis and the quantification and classification of galaxy properties. Contributions from 24 highly regarded cosmologists and statisticians make this an authoritative guide to the subject.
A revolution is underway in cosmology, with largely qualitative models of the Universe being replaced by precision modelling and the determination of the Universe's properties to high accuracy. The revolution is driven by three distinct elements – the development of sophisticated cosmological models and the ability to extract accurate predictions from them, the acquisition of large and precise observational datasets constraining those models, and the deployment of advanced statistical techniques to extract the best possible constraints from those data.
This book focuses on the last of these. In their approach to analyzing datasets, cosmologists for the most part lie resolutely within the Bayesian methodology for scientific inference. This approach is characterized by the assignment of probabilities to all quantities of interest, which are then manipulated by a set of rules, amongst which Bayes' theorem plays a central role. Those probabilities are constantly updated in response to new observational data, and at any given instant provide a snapshot of the best current understanding. Full deployment of Bayesian inference has only recently come within the abilities of high-performance computing.
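The updating cycle described above can be illustrated with a deliberately simple example (a toy conjugate model, not a cosmological analysis; all counts are assumed): a Beta prior on a success probability is updated with binomial data via Bayes' theorem, and conjugacy makes the posterior analytic.

```python
# Toy illustration of Bayesian updating (assumed numbers, not real data):
# a Beta prior on a success probability p, updated with binomial counts.

alpha, beta = 1.0, 1.0        # flat Beta(1, 1) prior: all p equally likely
successes, failures = 7, 3    # assumed new observations

# For the conjugate Beta-binomial pair, Bayes' theorem reduces to
# adding the observed counts to the prior parameters.
alpha_post = alpha + successes
beta_post = beta + failures

# The posterior mean is the current best estimate of p, and the posterior
# would serve as the prior for the next batch of data.
posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)  # 8/12 ≈ 0.667
```

Each new observation moves the probability assignment, exactly the "snapshot of the best current understanding" picture; cosmological applications replace this one-parameter toy with high-dimensional posteriors explored numerically.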
Despite the prevalence of Bayesian methods in the cosmology literature, there is no single source which collects together both a description of the main Bayesian methods and a range of illustrative applications to cosmological problems. That, of course, is the aim of this volume. Its seeds grew from a small conference ‘Bayesian Methods in Cosmology’, held at the University of Sussex in June 2006 and attended by around 60 people, at which many cosmological applications of Bayesian methods were discussed.
Roberto Trotta, Astrophysics, Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK; and Astrophysics Group, Imperial College London, Blackett Laboratory, London SW7 2AZ, UK,
Martin Kunz, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
Pia Mukherjee, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
David Parkinson, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK
Common applications of Bayesian methods in cosmology involve the computation of model probabilities and of posterior probability distributions for the parameters of those models. However, Bayesian statistics is not limited to applications based on existing data, but can equally well handle questions about expectations for the future performance of planned experiments, based on our current knowledge.
This is an important topic, especially with a number of future cosmology experiments and surveys currently being planned. To give a taste, they include: large-scale optical surveys such as Pan-STARRS (Panoramic Survey Telescope and Rapid Response System), DES (the Dark Energy Survey) and LSST (Large Synoptic Survey Telescope), massive spectroscopic surveys such as WFMOS (Wide-Field Fibre-fed Multi-Object Spectrograph), satellite missions such as JDEM (the Joint Dark Energy Mission) and EUCLID, continental-sized radio telescopes such as the SKA (the Square Kilometre Array) and future cosmic microwave background experiments such as B-Pol searching for primordial gravitational waves. As the amount of available resources is limited, the question of how to optimize them in order to obtain the greatest possible science return, given present knowledge, will be of increasing importance.
In this chapter we address the issue of experimental forecasting and optimization, starting with the general aspects and a simple example. We then discuss the so-called Fisher Matrix approach, which allows one to compute forecasts rapidly, before looking at a real-world application. Finally, we cover forecasts of model comparison outcomes and model selection Figures of Merit.
GAP-43 levels have been determined by immunoassay in cat visual cortex during postnatal development to test the idea that GAP-43 expression could be related to the duration of the critical period for plasticity. For comparison, GAP-43 levels have also been assayed in primary motor cortex, primary somatosensory cortex, and cerebellum at each age. GAP-43 levels were high in all regions at 5 d (with concentrations ranging from 7–10 ng/μg protein) and then declined 60–80% by 60 d of age. After 60 d of age, GAP-43 concentrations in each region continued a slow decline to adult values, which ranged from 0.5–2 ng/μg protein. To test for the involvement of GAP-43 in ocular dominance plasticity during the critical period, the effect of visual deprivation on GAP-43 levels was investigated. Monocular deprivation for 2–7 d, ending at either 27 or 35 d of age, had no effect on total membrane levels of GAP-43. The concentrations of membrane-associated GAP-43 prior to 40 d of age correlate with events that occur during postnatal development of the cat visual cortex. However, the slow decline in membrane-associated GAP-43 levels after 40 d of age may be an index of relative plasticity remaining after the peak of the critical period.
The two main receptor subtypes for 5-hydroxytryptamine (5HT) were measured and localized in visual cortical areas of macaque monkey. [3H]5HT was used to label all 5HT-1 receptor subtypes and [3H]ketanserin was used to label 5HT-2 receptors. Both receptor types could be demonstrated in membranes prepared from macaque primary visual cortex. The specificity of these ligands for 5HT-1 or 5HT-2 receptors was demonstrated by the pharmacological profile of inhibitors of the specific binding. 5HT-1A receptor sites were detected by displacement experiments and by direct labeling with [3H]8-hydroxy-2-(di-n-propylamino)tetralin (8-OH-DPAT). Receptor autoradiography showed that the distribution of these receptor subtypes varied from one part of visual cortex to another. 5HT-1 receptors, labeled with [3H]5HT, were present in several bands through layer IV of primary visual cortex, with the densest band seen in and above layer IVA; another band was in lower layer VI. The band in layer VI was predominantly 5HT-1A sites. There were two main bands of 5HT-2 receptor sites, the most prominent around the IV/V boundary, and the other extending from layer IVA upwards. Adjacent areas showed 5HT receptors in a broad band corresponding to layer IV. 5HT-1A sites were found in superficial layers of adjacent areas, except V2. These layering patterns did not correspond precisely with cytoarchitectonic layering, nor with the pattern of 5HT-containing presynaptic fibres in published reports. It is important, therefore, in considering the role of the 5HT-containing neurons in cortical function to take account not only of the anatomy of the presynaptic terminals, but also of the postsynaptic receptors upon which the released transmitter will act, and their location within the cortex.
We have purified a protein that changes in relative concentration during the development of the kitten visual cortex. It resembles GAP-43 (a neuronal protein that is expressed at elevated levels during periods of development and regenerative axon growth) in the following respects: (1) it is an acidic protein (pI = 4.7) whose electrophoretic mobility on SDS-PAGE is similar to, but lower than, that of rat GAP-43, suggesting that the cat protein is larger; (2) its electrophoretic mobility varies with the acrylamide concentration in a manner that is characteristic of GAP-43; (3) its concentration in kitten forebrain is elevated during early postnatal development; (4) the sequence of ten consecutive amino acids from a chemically generated fragment matches the expected sequence from GAP-43; and (5) its amino-acid content also matches GAP-43. We conclude that our purified protein is cat GAP-43. Immunoblots with an antibody prepared against rat GAP-43 suggested that the concentration of GAP-43 in the visual cortex declines with age.