The Space Infrared Telescope for Cosmology and Astrophysics (SPICA), the cryogenic infrared space telescope recently pre-selected for a ‘Phase A’ concept study as one of the three remaining candidates for the European Space Agency's (ESA) fifth medium class (M5) mission, is foreseen to include a far-infrared polarimetric imager [SPICA-POL, now called B-fields with BOlometers and Polarizers (B-BOP)], which would offer a unique opportunity to resolve major issues in our understanding of the nearby, cold magnetised Universe. This paper presents an overview of the main science drivers for B-BOP, including high dynamic range polarimetric imaging of the cold interstellar medium (ISM) in both our Milky Way and nearby galaxies. Thanks to a cooled telescope, B-BOP will deliver wide-field 100–350 μm images of linearly polarised dust emission in Stokes Q and U with a resolution, signal-to-noise ratio, and both intensity and spatial dynamic ranges comparable to those achieved by Herschel images of the cold ISM in total intensity (Stokes I). The B-BOP 200 μm images will also have a factor ~30 higher resolution than Planck polarisation data. This will make B-BOP a unique tool for characterising the statistical properties of the magnetised ISM and probing the role of magnetic fields in the formation and evolution of the interstellar web of dusty molecular filaments giving birth to most stars in our Galaxy. B-BOP will also be a powerful instrument for studying the magnetism of nearby galaxies and testing galactic dynamo models, constraining the physics of dust grain alignment, probing the interaction of cosmic rays with molecular clouds, tracing magnetic fields in the inner layers of protoplanetary disks, and monitoring accretion bursts in embedded protostars.
The SPICA mid- and far-infrared telescope will address fundamental issues in our understanding of star formation and ISM physics in galaxies. A particular hallmark of SPICA is the outstanding sensitivity enabled by the cold telescope, optimised detectors, and wide instantaneous bandwidth throughout the mid- and far-infrared. The spectroscopic, imaging, and polarimetric observations that SPICA will be able to collect will help in clarifying the complex physical mechanisms which underlie the baryon cycle of galaxies. In particular, (i) the access to a large suite of atomic and ionic fine-structure lines for large samples of galaxies will shed light on the origin of the observed spread in star-formation rates within and between galaxies, (ii) observations of HD rotational lines (out to ~10 Mpc) and fine structure lines such as [C ii] 158 μm (out to ~100 Mpc) will clarify the main reservoirs of interstellar matter in galaxies, including phases where CO does not emit, (iii) far-infrared spectroscopy of dust and ice features will address uncertainties in the mass and composition of dust in galaxies, and the contributions of supernovae to the interstellar dust budget will be quantified by photometry and monitoring of supernova remnants in nearby galaxies, (iv) observations of far-infrared cooling lines such as [O i] 63 μm from star-forming molecular clouds in our Galaxy will evaluate the importance of shocks to dissipate turbulent energy. The paper concludes with requirements for the telescope and instruments, and recommendations for the observing strategy.
The CMB data comes from many different kinds of experiment, and in the competition between small balloon-based missions and huge satellite-borne programmes it is occasionally the former that win. But in the end, the high-quality, sky-wide data comes from space experiments. During this century, two have so far dominated cosmology: WMAP and Planck.
In this chapter we describe how the data is acquired, how it is stored in special format maps, and how it is treated to remove all the non-cosmological contributions. After this has been done we have our data; it is the removal of noise and foreground contamination that is the most challenging step, and the most demanding of computing resources.
We treat the problem of noise and foreground removal in as generic a way as possible; there is no single best way of doing this. The mathematical formalism is quite complex, as is the statistical data analysis that will follow.
Some of the history of the observations of the Cosmic Microwave Background radiation (CMB) has been recounted in the first part of this book (see Chapter 3). The culmination of the early efforts, the ‘first 30 years’, was perhaps the COBE DMR/FIRAS mission. COBE DMR produced the first all-sky maps of the CMB (Smoot et al., 1991), while the FIRAS experiment on the same satellite established the remarkable accuracy of the Planckian form of the radiation spectrum (Mather and the COBE collaboration, 1990). Importantly, COBE had delivered a low-resolution, albeit noisy, picture of the microwave sky that not only made substantial scientific advances, but also attracted widespread public attention and provided a vital scientific stimulus.
The manifest success of the COBE mission inspired a series of ground-based, balloon-based and space observatories to look at the fluctuations with more sensitivity and at higher angular resolution. Satellite experiments are expensive and take a long time to build, and so the competition to get the key results from lower-cost ground-based experiments before the space experiments could deliver was fierce, and in large part successful.
In this chapter we cover one particular aspect of frequentist statistics in order to be able to compare and contrast the approach with the Bayesian approach that we shall be discussing shortly. In much of the world that uses statistics, the principal objective appears to be to test hypotheses about given data and come to some conclusion. The conclusion might be the answer to either of the questions: ‘Is this supported by the data or not?’, or ‘Which is the better descriptor of the data?’. Control samples are commonly used as an alternative against which the data will be evaluated. Whatever the question, the questioner expects a quantitative answer.
In cosmology we only have the one Universe as a source of data, and we have no other Universe that can act as a control. Of course we do have numerical simulations that can play that role. Our goal is often to fit a parameterised model to data and we can reasonably ask how confident we should be that this is the ‘best’ answer.
In generic terms, the goal of statistical analysis is to discover how some factor Y responds to changes in some input, or set of inputs, X. The observations (x, y) are paired, and while the values of x are generally under the control of the experimentalist, the response y is subject to measurement errors ∊. The analysis may take the form of establishing a model for the dependence of Y on X, or it might aim to discover which of the factors X, or which combination of the elements of X, are the main contributors to the response Y. Here we will consider only the first of these: fitting parameterised models.
The simplest data model, linear regression, is commonplace and involves determining the parameters α and β in a relationship of the form
Y = α + βX + ∊,    (24.1)
where the experiment consists of observing the values of Y, the response, resulting from treatments X. The observations Y are subject to errors ∊.
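As a concrete illustration, here is a minimal sketch of fitting this model by ordinary least squares; the data are invented for the example, and the estimators are the standard normal-equation ones rather than anything specific to this book.

    import numpy as np

    # Invented example data: a linear trend Y = alpha + beta*X with Gaussian errors.
    rng = np.random.default_rng(42)
    x = np.linspace(0.0, 10.0, 50)
    y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=x.size)

    # Ordinary least-squares estimates of alpha (intercept) and beta (slope),
    # obtained from the standard normal equations.
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()

    print(f"alpha = {alpha_hat:.3f}, beta = {beta_hat:.3f}")   # close to (1.5, 0.8)

The point estimates alone say nothing about how confident we should be in them; that is where the frequentist and Bayesian treatments discussed in this chapter part company.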
Unlike Newton's theory of gravitation, Einstein's theory of general relativity views the gravitational force as a manifestation of the geometry of the underlying space-time. The idea sounds good: it evokes images of billiard balls rolling around on tables with hills and valleys, where the balls’ otherwise rectilinear motion is disturbed by the geometry of their environment. The difficulty is how to achieve the parallel goal of expressing the force of gravitation geometrically, and to do so without destroying all that we have learned about physics in our local environment.
As we saw in the previous chapter, Einstein saw the principles of covariance and equivalence as a way of formalising that. The theory of gravitation should always admit local inertial frames in which our known laws of physics would hold. Moreover, the mathematical expression of the laws of physics would be the same in all inertial frames. A key step at this point was to argue that physical entities are described by mathematical objects that transform correctly under local Lorentz transformations. This brings us to Minkowski's use of 4-vectors and tensors as the mathematical embodiment of physical quantities.
However, these are local statements, not global ones. They do not tell us how the geometry would affect two widely separated inertial observers in the presence of a gravitational field. The clue was given to Einstein by Marcel Grossmann who suggested that this link would be provided by insisting that the underlying geometry was the geometry of a Riemannian space. This provides the structure to address global issues and to connect different parts of the space.
In this chapter we describe this process and provide the mathematical structure that arises when we follow up on Grossmann's plan. We learn about connecting parts of the space-time and we establish notions of derivatives, geodesics and measures of the curvature.
A Geometric Perspective
The space-time of Einstein's theory is specified by its geometry. A gravitational field is thought of as distorting the space-time away from the no-gravity space-time of Minkowski. So, just as the Minkowski space of special relativity is entirely specified by a metric tensor, or line element, telling us what the space-time separation of neighbouring points is, the space-time of the general theory is specified by a more general metric.
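To make this concrete, the Minkowski line element referred to above can be written (in one common signature convention; conventions differ between texts) as

\[ ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2, \]

while the space-time of the general theory is described by the more general quadratic form

\[ ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu, \]

where the metric tensor g_{\mu\nu} encodes the geometry, and hence the gravitational field.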
It could be said that the discovery and subsequent exploration of the cosmic microwave background radiation marked a transition from cosmology as a branch of astronomy that was largely a philosophical endeavour to cosmology as an astrophysical discipline that is now a fully fledged branch of physics. The CMB established a secure physical framework within which we could undertake sophisticated experiments that would define the parameters of that framework with ever greater precision. As understanding has grown, more and more of traditional astronomy has been embraced to provide evidence in support of this new paradigm: we have garnered evidence from the study of stars, such as supernovae, of galaxies and their velocity fields, and from observations of galaxy clusters and clustering. These studies have involved ground-based and space-based experiments at all wavelengths from radio to gamma-rays.
‘Precision Cosmology’ was born and we now have the recognised disciplines of ‘astro-particle physics’, ‘astro-statistics’ and ‘numerical cosmology’, to name but a few.
Fifty years on from the discovery of the CMB a number of issues have been clarified, but many more remain. Among the numerous areas of active research in cosmology today, there are three which have particular bearing on the first half million years: dark matter, dark energy and gravitational waves.
In the Aftermath of the CMB
The discovery of the CMB not only served to establish the Hot Big Bang theory as our cosmological paradigm, it also stimulated the growth of cosmology as a branch of physics. Fifty years later cosmology has reached a depth and precision of understanding that would have been inconceivable prior to 1965. The 50 years from 1965 to 2015 saw remarkable advances on both the theoretical and the data-acquisition and analysis fronts, which are described briefly in this chapter. One important consequence is the blurring of the boundary between theory and observation. Theoretical advances have driven experiments to gather and analyse data through which the theory could be exploited, while ever more sophisticated experiments demand improved methods of data analysis and impose constraints on the values of the parameters of the theories.
It has previously been said that the essence of physics, and science in general, is that it should be possible to perform experiments and to have other groups repeat those experiments, thus providing verification of the results and the consequent conclusions.
We will use Newtonian versions of solutions to the Einstein field equations to describe a series of model universes that provide a framework within which we can better understand how our Universe works. Newton's theory of gravitation has been replaced by Einstein's, but in many respects Newton's theory is a pretty good approximation: good enough that we depend on it in our everyday lives. The Newtonian view is certainly easier for us to relate to and exploit, but we must understand the inherent limitations.
Here, we highlight the fundamental differences between the Newtonian and Einsteinian theories: Newton with his absolute space and universal time, and Einstein with his geometrisation of gravity. Fortunately, there are some relevant solutions of the Einstein equations that have direct Newtonian analogues. Those Newtonian analogues lack some important features, notably a description of how light propagates. Fortunately, we can graft the information from some of the Einstein models onto the Newtonian models to produce what we might call Newtonian surrogates of the Einsteinian models.
In this chapter we introduce the simplest of a series of homogeneous and isotropic cosmological models formulated within the limited framework of Newtonian gravity. These models contain only ‘dust’: pressure-free matter made up of particles that are neither created nor destroyed, and that do not interact with one another. The Universe evolves under the mutual gravitational interaction of those particles.
While this model is not realistic it nevertheless allows us to develop a full cosmological model that is the template for the more complex models that follow. We develop these models in considerable detail since much of what is done will be repeated for the other, more realistic, models.
It is worth remarking that these models are fundamental to numerical N-body simulations of the Universe in which the constituent particles are ‘dust’. Such dust models can also be used to study the growth of the structure in the Universe.
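To anticipate where this leads, the key equation of these Newtonian dust models can be sketched as follows (the scale-factor notation a(t) and the density ρ are assumed here; the derivation is the standard energy-conservation argument for an expanding sphere of dust):

\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}, \]

where the constant k arises as a total-energy term. This is the Newtonian analogue of the Friedmann equation of the relativistic models.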
Why Bother with Newton?
Two Views of Gravity
The expansion of the Universe is dominated by the gravitational force. The best theory we have for the gravitational force is Einstein's Theory of General Relativity (Einstein, 1916a), which relates the geometry of space-time to its material content.
The transition the Universe makes from a fully ionised plasma to a neutral gas happens very quickly when the age of the Universe is around 380 000 years, i.e. at a redshift of z ~ 1100. From our point of view, looking out into the distant Universe, and hence into the past, we see this as a cosmic photosphere. It is referred to as the surface of last scattering, since this corresponds to the time after which most of the CMB photons travel freely towards us. The event of the Universe becoming neutral is referred to as the epoch of recombination.
The image of this surface we create with our radiometers is a snapshot of the Universe as it was when it was still only slightly inhomogeneous: we see the initial conditions for the birth of cosmic structure.
The photons of the CMB that we observe come from the epoch at a redshift z ~ 1090 when the cosmic plasma became almost neutral and the timescale for electron–photon collisions became longer than the cosmic expansion timescale. The Universe was then some ~ 380 000 years old. Prior to that time, electrons and photons had been held in thermodynamic equilibrium by the Thomson scattering process; after that time the baryonic component of the cosmic plasma could evolve independently of the radiation.
This was the time of the decoupling of matter from the radiation field. Shortly after that time most of the CMB photons were able to travel directly to us without being scattered by free electrons. This gave us a direct view of the early conditions that led to the formation of the cosmic structures we see today: the galaxies, clusters of galaxies and their organisation into what is now known as the ‘cosmic web’.
The Universe did not suddenly become neutral as it emerged from the fully ionised fire-ball phase of the cosmic expansion. The neutralisation, or ‘recombination’, of the cosmic plasma took place over several tens of thousands of years. That is enough to slightly blur the details of the structure that might be observed on the last scattering surface: we are looking into the cosmic photosphere.
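As a rough worked example of the numbers involved, a short sketch (the scaling T ∝ (1 + z) is standard; the redshift is the value quoted above):

    # Temperature of the radiation at last scattering, using T(z) = T0 * (1 + z).
    T0 = 2.725           # present-day CMB temperature in kelvin
    z_dec = 1090         # approximate redshift of decoupling
    T_dec = T0 * (1 + z_dec)
    print(f"T at decoupling ~ {T_dec:.0f} K")   # ~ 2973 K

This is close to the ~3000 K below which hydrogen can remain neutral against the sea of CMB photons, which is why recombination happens when it does.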
The study of the very early Universe has given birth to a discipline that is broadly referred to as astro-particle physics, a subject that sits on the border of high energy particle physics, nuclear physics and astrophysics. Astro-particle physics has proven to be a wonderful symbiosis of experimental physics and experimental astrophysics in which the early Universe is, in effect, a high energy physics laboratory. Arguably, it was Gamow and his collaborators, in the immediate post World War II decade, who took the first tentative steps in this direction by arguing that the early Universe was the site of synthesis of the chemical elements.
The physics domain of cosmic nucleosynthesis is the first few minutes of the Big Bang. Hayashi (1950) took the first step further back into the past, towards the Big Bang itself, by exploiting the weak interactions to describe what had happened just prior to the period of nucleosynthesis. Since then, particle experiments have pushed back our understanding of the physics of matter to the point where we can now discuss the period before the first microseconds of the cosmic origin within the context of known high energy physics.
Early Thermal History
Astro-particle physics inevitably brings in an even wider domain of physics than the classical cosmology of general relativistic models. Some fine texts have been written on this subject from a variety of points of view and at different levels. The approach used here is to explain the physics of the early Universe with a view to understanding what precision cosmology has to say about the particle and nuclear physics aspects of the earliest moments after the Big Bang.
The first steps in this direction were taken by Gamow and his co-workers during the first decade after the Second World War. Gamow had realised that the wartime research into nuclear physics could answer some important questions about his concept of the early Universe that he had started working on prior to 1939. We can trace back Gamow's interest in a Big Bang Universe to Gamow and Teller (1939a,b) in which he had described a simple idea for the formation of galaxies, and no doubt his interest extended back to his brief association with Friedmann.
Almost every civilisation throughout history has had a cosmology of some kind. By this we mean a description of the Universe in which they live based on their state of knowledge. The Vikings, for example, had a complex cosmology in which the world and its inhabitants were controlled by a set of Gods, both good and bad. Nature was ruled not by the laws of physics, but by the forces of nature controlled by the whims of these Gods. However bizarre that may seem to us now, at the time this belief-set dominated human behaviour: its social mores and values.
Today we live in a Universe that is described by physical laws. What is remarkable is that these laws have more often than not been discovered on the basis of laboratory experiments, and subsequently found to work on the vastest scales imaginable. That fact leads us to believe that our explanations of the Universe are a valid description of what is actually happening. We do not need to invoke special laws just to explain the cosmos and our position within it.
The physical laws governing the Universe and its constituents were discovered over a period of several centuries. Some might say this path to realisation started with Copernicus putting the Sun at the centre of everything rather than the Earth. Others might argue that this was merely descriptive and that knowledge of the laws started to emerge following on from the work of Kepler, Galileo and Newton. However one sees it, by the beginning of the 20th century, with Einstein's Theory of Relativity, the scene was set to embark on a journey of observational cosmology which 100 years later would lead to most scientists agreeing that we have a self-consistent theory of the Universe based on known laws of physics. Some, no doubt, would go as far as to say that the current view was incontrovertible.
Just as the early map makers measured and marked out our planet, the cosmographers of the 20th century marked out and mapped the Universe.
The inter-war years, 1918–1939, were a period of coming to terms (a) with Einstein's General Relativity and (b) with Hubble's discovery of the redshift–distance relationship. By the end of the period our cosmological framework was understood well enough in terms of an expanding homogeneous and isotropic solution of the Einstein equations, and it probably seemed a matter of acquiring redshifts in order to settle the parameters of the model. Two parameters would do the job.
It could not have been imagined that by 1955 there would be a heated argument between two camps: Gamow, who said there was a Hot Big Bang, and Hoyle, Bondi and Gold, who said there was not. Added to that was another heated, even acrimonious, argument about the interpretation of the counts of recently discovered radio sources made by Ryle in Cambridge, England and Mills in Sydney, Australia. Ten years after that we had the quasars and the Cosmic Microwave Background Radiation (CMB) that, at the time, not everyone believed was cosmic in origin.
This chapter relates some of that story. It is an essential part of explaining how we have come to be where we are.
Models of the Cosmic Expansion
Several spatially homogeneous solutions of the equations of the General Theory of Relativity were discovered within the first decade following their publication (Einstein, 1916a). At that time relatively little was known about the Universe: it was still uncertain whether or not the nebulae were merely parts of our own Galaxy although, through the pioneering work of Slipher, it was known that most of them were rushing away from us. Little or nothing was known about the homogeneity or isotropy of the Universe, but the assumption of homogeneity and isotropy would simplify the largely intractable Einstein equations. These important cosmological solutions are discussed next.
It is interesting in this context to read and compare the texts of Eddington (1923), published before Edwin Hubble's study of the nebulae and his consequent discovery of the cosmic expansion, and the text of Tolman (1934b) which was published shortly thereafter. The search for understanding the Universe in terms of models is beautifully described in Michael Heller's Ultimate Explanations of the Universe (Heller, 2010).
In all cases the full title of the article and other bibliographic data are available from the corresponding entry in the References section. ‘CCC’ refers to permissions gained through the Copyright Clearance Center and the number following is the granted licence number. These cover situations where the authors’ permission was required but was not available.
NASA copyright policy states that ‘NASA material is not protected by copyright unless noted’. Thus Figure 3.1 is in the public domain.
Unless otherwise noted, images and video on Laser Interferometer Gravitational-wave Observatory (LIGO) public web sites (public sites ending with a ligo.caltech.edu or ligo.mit.edu address) may be used for any purpose without prior permission. See Figure 4.6.
Figure 3.10: The Particle Data Group publishes the annual Review of Particle Physics (RPP) in the journal Chinese Physics C. Since 2014, the figures from the RPP are in the public domain (Olive and Particle Data Group, 2014) and author permission is automatically granted. This concerns Figure 27.1 of Scott and Smoot (2014).
The distance to an astronomical source is a fundamental datum about that source. While astronomers measure the sizes of objects in centimetres, metres or kilometres, distances to distant objects are measured in light years or parsecs; the sizes of large objects such as galaxies and clusters of galaxies are also quoted in these units.
The light year is simply the distance travelled by light in one year and it is a fairly convenient unit of measure for the distances to stars. However, distances to stars are not measured using the travel time of light rays: nearby stellar distances are measured using parallax. The parallax is half the maximum apparent angular shift in the position of a star on the sky as the Earth goes from one side of its orbit to the other; equivalently, it is the angle subtended by the radius of the Earth's orbit (1 AU) as viewed from the star.
The parsec is the distance at which the parallax of an object would be 1 arc second. The parallax of the nearest star, 4.2 light years away, is 0.772 arcsec. The relationship between light years and parsecs is 1 parsec = 3.26 light years. Distances to cosmological objects are measured in megaparsecs (Mpc).
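These conversions are easily sketched in code (the constants are the standard ones; the example star is the one quoted above):

    # Convert a measured parallax (in arcseconds) into a distance.
    LY_PER_PARSEC = 3.26            # 1 parsec = 3.26 light years

    def distance_from_parallax(parallax_arcsec):
        """Distance in parsecs: d [pc] = 1 / parallax [arcsec]."""
        return 1.0 / parallax_arcsec

    d_pc = distance_from_parallax(0.772)   # the nearest star
    print(f"{d_pc:.2f} pc = {d_pc * LY_PER_PARSEC:.2f} light years")   # ~1.30 pc ~ 4.2 ly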
Because of the curvature of space the distance as measured by parallax is not the same as the distances measured by brightness, or by diameter.
Magnitudes and all that
It is difficult to discuss the apparent brightness of objects such as stars and galaxies without encountering one of astronomy's major idiosyncrasies – the magnitude scale. Ever since the time of Hipparchus of Nicaea (c. 190 BC–120 BC), the brightnesses of stars have been measured relative to one another on a logarithmic scale, reflecting the fact that the brain perceives increments in brightness logarithmically. Around 150 BC, Hipparchus classified the brightest stars he could see as being of ‘magnitude 1’ and the faintest as being of ‘magnitude 6’. On the modern scale a change of 5 magnitudes represents a factor of 100 in brightness, so each increment of 1 magnitude corresponds to a change in brightness by a factor of 100^{1/5} ≈ 2.512.
The magnitude scale, as we know it today, was introduced by Pogson (1856). Nowadays we might decide to base a logarithmic brightness scale on powers of two, so that each ‘magnitude’ was twice as bright as its predecessor.
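Pogson's relation between magnitudes and brightness ratios can be sketched as follows (the example numbers are invented):

    import math

    def magnitude_difference(flux_ratio):
        """Magnitude difference for a brightness ratio: m1 - m2 = -2.5 log10(f1/f2)."""
        return -2.5 * math.log10(flux_ratio)

    # A source 100 times brighter than another is 5 magnitudes brighter
    # (brighter objects have numerically smaller magnitudes).
    print(magnitude_difference(100.0))   # -5.0
    # One magnitude is a factor of 100**(1/5) ~ 2.512 in brightness.
    print(100.0 ** (1.0 / 5.0))          # 2.5118...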
Systems of units are always a problem, despite international agreement to standardise on the SI system. The old cgs system and its multitude of mutually inconsistent variants (Gauss, Lorentz–Heaviside, esu, emu, etc.) is still in use. This is particularly an issue when it comes to using the Maxwell equations. In astrophysics we generally see a mixture of cgs units and subject-specific units, like the magnitude scale for brightness, parsecs for distance, and flux units in the older radio astronomy literature. This is what we might call the Astrophysical System of Units and, like many contemporary authors, I have adhered to that in this text. However, certain issues must be clarified, and in particular the conventions used in the Maxwell equations.
This appendix provides a translation between Gaussian cgs units and the SI system for several of the quantities that are used in the text, and notes the changes in the values of the fundamental physical constants that have occurred since the decision to fix the speed of light at a given numerical value. The Maxwell equations have a small section to themselves.
SI, MKS and cgs
There has been a move in astronomy away from the traditional centimetre-gram-second (cgs) based system of units that evolved in the 19th century towards the metric metre-kilogram-second (MKS) based system of units and the subsequent International System (SI), which is also known as the MKSA system. However, the change has been slow or, as in the case of astronomy, only partial. By the 1950s many fields had adopted their own subject-specific units, chosen largely because they were particularly convenient or simply entrenched in the culture of the discipline. There are perhaps just two reasons for the slowness of this change: teachers today were often taught in cgs or hybrid systems, and they used textbooks with which they were familiar. Moving from cgs to SI units takes people out of their comfort zone.
The full details of the system of physical units are maintained and documented by the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce. The information presented here is for convenience and is largely abstracted from the NIST web site.
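A few of the conversions discussed in this appendix, in code form (these particular factors are exact by definition):

    # Selected Gaussian-cgs (and astronomical) to SI conversion factors.
    CONVERSIONS = {
        "erg -> joule":              1e-7,
        "dyne -> newton":            1e-5,
        "gauss -> tesla":            1e-4,
        "jansky -> W m^-2 Hz^-1":    1e-26,
    }

    energy_erg = 3.0e12                                   # an energy in erg
    energy_J = energy_erg * CONVERSIONS["erg -> joule"]   # the same energy in joules
    print(f"{energy_erg:.1e} erg = {energy_J:.1e} J")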
In the Newtonian perspective, light simply moves in straight lines with constant velocity. The propagation of light through space was not an issue that Newtonian theory could address without making additional assumptions. In the curved geometry of Einstein's space-times, light responds to the ever-changing curvature of the space-time.
In special relativity the light rays are defined by the fact that the proper distance moved by light is zero. This is enshrined in saying that light rays are the curves on which the line element is ds² = 0. Given a coordinate system and a metric expressed in those coordinates, we can, in principle, solve the resulting equations. There is no reason to believe that our Minkowskian intuition about the propagation of light would be of help, and some of the results can at first be quite surprising – like the result that the angular diameter subtended by an object of fixed size can increase with distance!
Aside from this counter-intuitive behaviour there is another problem: the problem of understanding and interpreting the coordinates in terms of what a person sitting in an inertial frame observes. This is fundamental to the astronomer observing the Universe, and so we develop the relevant equations for an FLRW Universe.
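As a sketch of the equations in question (written here for the spatially flat FLRW case, with scale factor a(t) and comoving radial coordinate r; conventions vary between texts): setting ds² = 0 for a radial light ray gives c dt = a(t) dr, so that

\[ \int_{t_e}^{t_0} \frac{c\,dt}{a(t)} = r, \qquad 1 + z = \frac{a(t_0)}{a(t_e)}, \]

where t_e and t_0 are the times of emission and observation; the second relation is the cosmological redshift that the following sections set out to interpret.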
Light Propagation in Curved Space-Time
Understanding the physics of light propagation is a vital step in interpreting the observed cosmological redshift. Getting to understand the redshift was not at all straightforward even though it is a consequence of the cosmological models constructed within the context of general relativity. Some of this early story with regard to de Sitter's Universe was retold in Section 15.6.4.
As late as the 1960s and 1970s there was still an active, and often heated, debate as to whether the observed redshifts of galaxies and quasars were of cosmological origin. That was not simply a question of whether or not general relativity could provide the explanation for the phenomenon. The debate was more about whether there was an additional contribution from phenomena intrinsic to the source or from something that might happen along the path of the light rays (as in ‘tired light’). Some of the debate even suggested that the constants of nature might be a function of time, an idea that originated with Dirac (1937, 1938) following up on earlier ideas of Milne and Eddington.