Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Executive Summary
- 24 First Encounter with ReLU Networks
- 25 Expressiveness of Shallow Networks
- 26 Various Advantages of Depth
- 27 Tidbits on Neural Network Training
- Appendices
- References
- Index
26 - Various Advantages of Depth
from Part Five - Neural Networks
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter corroborates the empirical belief in the superiority of deep networks over shallow ones by highlighting three situations where a clear advantage can be demonstrated. First, already at depth two, there exist activation functions that make neural networks universal approximators even when the width is restricted. Second, depth overcomes the limitation that shallow ReLU networks cannot generate nonzero compactly supported functions. Third, deep ReLU networks approximate Lipschitz functions at a better rate than shallow ones.
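The second point can be made concrete with a small numerical sketch. In dimension two, a shallow ReLU network is a sum of ridge functions and so cannot have compact support unless it vanishes identically, but one extra layer fixes this: composing one-dimensional hat functions (each itself a depth-two ReLU network) with a final ReLU yields a nonzero function supported on a square. The construction below is an illustrative example in this spirit, not necessarily the book's exact one.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hat(t):
    # 1-D hat function written as a shallow (depth-two) ReLU network:
    # equals 1 at t = 0 and vanishes outside [-1, 1]
    return relu(t + 1) - 2 * relu(t) + relu(t - 1)

def bump2d(x, y):
    # Depth-three ReLU network: hat(x) and hat(y) each lie in [0, 1],
    # so hat(x) + hat(y) - 1 is positive only when both are large,
    # which forces (x, y) to lie inside the square [-1, 1]^2
    return relu(hat(x) + hat(y) - 1)

print(bump2d(0.0, 0.0))  # 1.0: peak at the origin
print(bump2d(2.0, 0.0))  # 0.0: zero outside the support
```

Since `hat` vanishes outside `[-1, 1]`, the argument of the outer ReLU is nonpositive whenever `|x| > 1` or `|y| > 1`, so `bump2d` is compactly supported, something no nonzero shallow ReLU network in two variables can achieve.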
- Type
- Chapter
- Information
- Mathematical Pictures at a Data Science Exhibition, pp. 226-238
- Publisher: Cambridge University Press
- Print publication year: 2022