Book contents
- Frontmatter
- Contents
- Preface
- 1 Chernoff–Hoeffding Bounds
- 2 Applications of the Chernoff–Hoeffding Bounds
- 3 Chernoff–Hoeffding Bounds in Dependent Settings
- 4 Interlude: Probabilistic Recurrences
- 5 Martingales and the Method of Bounded Differences
- 6 The Simple Method of Bounded Differences in Action
- 7 The Method of Averaged Bounded Differences
- 8 The Method of Bounded Variances
- 9 Interlude: The Infamous Upper Tail
- 10 Isoperimetric Inequalities and Concentration
- 11 Talagrand's Isoperimetric Inequality
- 12 Isoperimetric Inequalities and Concentration via Transportation Cost Inequalities
- 13 Quadratic Transportation Cost and Talagrand's Inequality
- 14 Log-Sobolev Inequalities and Concentration
- Appendix A Summary of the Most Useful Bounds
- Bibliography
- Index
Preface
Published online by Cambridge University Press: 19 October 2009
Summary
The aim of this book is to provide a body of tools for establishing concentration of measure that is accessible to researchers working in the design and analysis of randomized algorithms.
Concentration of measure refers to the phenomenon that a function of a large number of random variables tends to concentrate its values in a relatively narrow range (under certain smoothness conditions on the function and certain conditions on the dependence amongst the random variables). Such a result is of obvious importance to the analysis of randomized algorithms: for instance, the running time of such an algorithm can then be guaranteed to be concentrated around a pre-computed value. More generally, various other parameters measuring the performance of randomized algorithms can be given tight guarantees via such an analysis.
In a sense, the subject of concentration of measure lies at the core of modern probability theory, as embodied in the laws of large numbers, the central limit theorem and, in particular, the theory of large deviations [26]. However, these results are asymptotic: they refer to the limit as the number of variables n goes to infinity, for example. In the analysis of algorithms, we typically require quantitative estimates that are valid for finite (though large) values of n. The earliest such results can be traced back to the work of Azuma, Chernoff and Hoeffding in the 1950s. Subsequently, there have been steady advances, particularly in the classical setting of martingales. In the last couple of decades, these methods have taken on renewed interest, driven by applications in algorithms and optimisation, and several new techniques have been developed.
- Type: Chapter
- Publisher: Cambridge University Press
- Print publication year: 2009