- Print publication year: 1984
- Online publication date: June 2013

- Publisher: Cambridge University Press
- DOI: https://doi.org/10.1017/CBO9781107340718.005
- pp 1-50

In this preliminary chapter we shall give an exposition of certain topics in probability theory which are necessary to understand and interpret the definition and properties of entropy. We have tried to write the chapter in such a way that a reader with a knowledge of measure theory as given in Ash [15], Halmos [55], or any other basic measure theory text can follow the arguments and understand the examples. We introduce just those parts of probability theory which are necessary for the subsequent chapters and attempt to make them meaningful by use of very simple examples. We also restrict the discussion to “nice” probability spaces, so that conditional expectation and conditional probability are more intuitive and hopefully easier to understand. These “nice” spaces also make it possible to use partitions as models for random experiments, even those experiments which are limits of sequences of experiments.

Probability Spaces

Entropy is a quantitative measurement of uncertainty associated with random phenomena. In order to define this quantity precisely, it is necessary to have a mathematical model for random phenomena which is general enough to include many different physical situations and which has enough structure to allow us to use mathematical reasoning to answer questions about the phenomena.

Such a model is given by a mathematical structure called a probability space, which is nothing more than a measure space in which the measure of the universe set is 1.
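This definition can be recorded in standard symbols (the notation below is conventional, not drawn from the text itself):

```latex
A probability space is a triple $(\Omega, \mathcal{F}, P)$, where
$\Omega$ is a set (the sample space), $\mathcal{F}$ is a
$\sigma$-algebra of subsets of $\Omega$, and $P$ is a measure on
$\mathcal{F}$ normalized so that
\[
  P(\Omega) = 1 .
\]
For instance, a single roll of a fair die is modeled by
$\Omega = \{1, 2, \dots, 6\}$, $\mathcal{F} = 2^{\Omega}$ (all
subsets of $\Omega$), and $P(A) = |A| / 6$ for $A \in \mathcal{F}$.
```

The die example is of the simple, concrete kind the chapter promises to use: the event "the roll is even" is the set $A = \{2, 4, 6\}$, with $P(A) = 1/2$.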