Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- Appendix A High-Dimensional Geometry
- Appendix B Probability Theory
- Appendix C Functional Analysis
- Appendix D Matrix Analysis
- Appendix E Approximation Theory
- References
- Index
Appendix B - Probability Theory
from Appendices
Published online by Cambridge University Press: 21 April 2022
Summary
This appendix recalls some key notions of probability theory, such as tails and moment generating functions. These notions are essential in the proofs of several concentration inequalities, notably McDiarmid's inequality. In turn, these inequalities are used to establish the restricted isometry properties for sparse vectors and for low-rank matrices required earlier in the book.
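As a small illustration of the kind of concentration inequality discussed in the appendix, the sketch below compares a Monte Carlo estimate of a tail probability with the corresponding McDiarmid bound. The setup (empirical mean of uniform variables, the sample size `n`, and the deviation `t`) is chosen here for illustration and is not taken from the book.

```python
import math
import random

def mcdiarmid_bound(n, t):
    # McDiarmid bound for the empirical mean of n i.i.d. variables in [0, 1]:
    # changing one coordinate moves the mean by at most c_i = 1/n, so
    # sum_i c_i^2 = 1/n and P(|mean - E[mean]| >= t) <= 2 exp(-2 n t^2).
    return 2.0 * math.exp(-2.0 * n * t * t)

def empirical_tail(n, t, trials=20000, seed=0):
    # Monte Carlo estimate of P(|mean - 1/2| >= t) for Uniform(0, 1) samples,
    # whose common expectation is 1/2.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) >= t:
            hits += 1
    return hits / trials

n, t = 50, 0.15  # illustrative choices, not from the text
print(empirical_tail(n, t), "<=", mcdiarmid_bound(n, t))
```

The bound is deliberately loose for this light-tailed example: the empirical tail is far below `2 exp(-2 n t^2)`, which is the price paid for an inequality that holds for every bounded-difference function, not just sums.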
- Mathematical Pictures at a Data Science Exhibition, pp. 259-273. Publisher: Cambridge University Press. Print publication year: 2022.