Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation and Conventions
- Part I 100 Years of Cosmology
- Part II Newtonian Cosmology
- Part III Relativistic Cosmology
- Part IV The Physics of Matter and Radiation
- Part V Precision Tools for Precision Cosmology
- 23 Likelihood
- 24 Frequentist Hypothesis Testing
- 25 Statistical Inference: Bayesian
- 26 CMB Data Processing
- 27 Parameterising the Universe
- 28 Precision Cosmology
- 29 Epilogue
- Appendix A SI, CGS and Planck Units
- Appendix B Magnitudes and Distances
- Appendix C Representing Vectors and Tensors
- Appendix D The Electromagnetic Field
- Appendix E Statistical Distributions
- Appendix F Functions on a Sphere
- Appendix G Acknowledgements
- References
- Index
23 - Likelihood
from Part V - Precision Tools for Precision Cosmology
Published online by Cambridge University Press: 04 May 2017
Summary
The concept of likelihood is fundamental in the statistical estimation of the parameters that enter into a parameterised model of a dataset. We write down an expression telling us how well the model represents the data, and choose the parameter values that do the best job. Despite the apparent simplicity of this statement, it hides a wealth of issues that have caused a great schism in thinking about how we should draw conclusions from data.
This is the great divide between the frequentist and the Bayesian schools of thought concerning the way likelihood should be implemented in practice. Should we simply take a mechanistic approach and find the most likely parameter set, with an estimate of how confident we are in asserting the answer? Should we somehow fold in our prior prejudices to take account of our past experience? How should we interpret the result of any process that purports to assign a confidence to an answer? What, indeed, would confidence mean in this sense, especially if we cannot analyse any further samples to confirm our result?
The Great Schism
In this chapter we develop the theory of likelihood which is widely used in deriving parameters that fit models to data. Here, we regard the parameters of the model simply as numbers that are to be determined by some optimisation process. We shall generalise this in subsequent chapters when we come to discuss Bayesian inference, but along the way we shall point out salient differences between the two ways of thinking.
We are interested in the selection of parameters that provide the best fit to the given data. What we mean by best fit is generally described by a cost function that provides a quantitative measure of what we mean by goodness of fit. The usual criterion is to ask for the smallest set of parameters that provides the least total deviation of the model from the given data. That too involves an assumption regarding precisely what we mean by ‘smallest total deviation’, or indeed by a measure of the deviation.
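As a minimal sketch of this idea (not taken from the book): if we model data as Gaussian with known unit variance, the cost function is the negative log-likelihood, and minimising it over the single parameter μ recovers the sample mean. The grid search below is purely illustrative; all names are hypothetical.

```python
import math
import random

# Hypothetical illustration: maximum-likelihood fit of a Gaussian mean
# to simulated data, assuming known variance sigma^2 = 1. The negative
# log-likelihood is
#     -ln L(mu) = (n/2) ln(2*pi) + (1/2) * sum_i (x_i - mu)^2,
# and it is minimised by the sample mean.

def neg_log_likelihood(mu, data):
    """Negative log-likelihood of a unit-variance Gaussian with mean mu."""
    n = len(data)
    return 0.5 * n * math.log(2 * math.pi) + 0.5 * sum((x - mu) ** 2 for x in data)

random.seed(42)
data = [random.gauss(3.0, 1.0) for _ in range(1000)]

# Scan a coarse grid of candidate means and keep the one with the
# smallest cost; a real analysis would use a proper optimiser.
candidates = [i * 0.01 for i in range(200, 400)]
mu_hat = min(candidates, key=lambda mu: neg_log_likelihood(mu, data))

sample_mean = sum(data) / len(data)
print(mu_hat, sample_mean)  # the grid minimum tracks the sample mean
```

Because the cost is convex in μ, the grid minimum lands on the candidate nearest the sample mean; in practice one would use a numerical optimiser rather than a grid.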
Once we have made that fit we may be confronted by a new data set, or an augmentation of the first. In either case, we may repeat the fitting procedure only to find a different set of parameters from the one we found in the first place.
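The point can be seen in a toy refit (an illustrative sketch, not from the book): the maximum-likelihood estimate of a Gaussian mean is the sample mean, so augmenting the data set generally shifts the fitted parameter.

```python
import random

# Hypothetical illustration: refitting after augmenting the data.
# For a unit-variance Gaussian the ML estimate of the mean is the
# sample mean, so adding a second batch of draws from the same
# underlying distribution generally changes the fitted value.

random.seed(1)
first_batch = [random.gauss(3.0, 1.0) for _ in range(50)]
second_batch = [random.gauss(3.0, 1.0) for _ in range(50)]

mu_first = sum(first_batch) / len(first_batch)       # fit to original data
combined = first_batch + second_batch
mu_combined = sum(combined) / len(combined)          # fit to augmented data

print(mu_first, mu_combined)  # the two fits generally differ
```

Both estimates are consistent with the true mean, yet they disagree with each other; how to interpret (and quantify) that disagreement is exactly where the frequentist and Bayesian schools part ways.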
- Type: Chapter
- Information: Precision Cosmology: The First Half Million Years, pp. 541–566. Publisher: Cambridge University Press. Print publication year: 2017.