Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Executive Summary
- 1 Rudiments of Statistical Learning Theory
- 2 Vapnik–Chervonenkis Dimension
- 3 Learnability for Binary Classification
- 4 Support Vector Machines
- 5 Reproducing Kernel Hilbert Spaces
- 6 Regression and Regularization
- 7 Clustering
- 8 Dimension Reduction
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
3 - Learnability for Binary Classification
from Part One - Machine Learning
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter contains the proof of the fundamental theorem of PAC-learning for binary classification. In particular, the so-called uniform convergence property is introduced and used to show that finite VC-dimension implies PAC-learnability. Conversely, the no-free-lunch theorem is invoked to show that PAC-learnability implies finite VC-dimension.
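For orientation, a standard formulation of the theorem's equivalence (stated here in generic notation drawn from the wider PAC-learning literature, not quoted from the chapter itself) reads: for a hypothesis class $\mathcal{H}$ of binary classifiers,
\[
\operatorname{vc}(\mathcal{H}) < \infty
\;\Longleftrightarrow\;
\mathcal{H} \text{ satisfies the uniform convergence property}
\;\Longleftrightarrow\;
\mathcal{H} \text{ is PAC-learnable.}
\]
The two directions described in the summary correspond to the forward chain (finite VC-dimension yields uniform convergence, hence PAC-learnability) and, via the no-free-lunch theorem, the reverse implication.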
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 16–22
- Publisher: Cambridge University Press
- Print publication year: 2022