Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- Appendix A High-Dimensional Geometry
- Appendix B Probability Theory
- Appendix C Functional Analysis
- Appendix D Matrix Analysis
- Appendix E Approximation Theory
- References
- Index
Appendix E - Approximation Theory
from Appendices
Published online by Cambridge University Press: 21 April 2022
Summary
This appendix starts by presenting some results on the uniform approximation of functions, some of them well known (the Stone–Weierstrass, Jackson, and Bernstein theorems) and some lesser known (the Korovkin theorem). It proceeds by establishing the Riesz–Fejér and Carathéodory–Toeplitz theorems, which play a role related to semidefinite programming. It concludes with a proof of the Kolmogorov superposition theorem, which is significant in connection with neural networks.
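To illustrate the uniform approximation results mentioned above, here is a minimal Python sketch (not from the book) of Bernstein polynomials, the classical constructive device behind the Weierstrass approximation theorem and the standard example to which Korovkin's theorem applies. The function names are chosen here for illustration.

```python
from math import comb

def bernstein(f, n):
    """Return the degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x^k * (1-x)^(n-k).
    For continuous f, B_n(f) converges to f uniformly on [0, 1]."""
    def B(x):
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

# Example: uniformly approximate the (non-smooth) function |x - 1/2|.
f = lambda x: abs(x - 0.5)
B50 = bernstein(f, 50)

# Estimate the uniform error on a fine grid; it shrinks as n grows
# (at the slow rate O(n^{-1/2}) near the kink at x = 1/2).
err = max(abs(f(i / 200) - B50(i / 200)) for i in range(201))
```

Bernstein polynomials reproduce the endpoint values exactly (`B50(0) == f(0)`), and their convergence for the three test functions 1, x, x² already implies, by Korovkin's theorem, convergence for every continuous function on [0, 1].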
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 297–310
- Publisher: Cambridge University Press
- Print publication year: 2022