Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Executive Summary
- 9 Foundational Results of Optimal Recovery
- 10 Approximability Models
- 11 Ideal Selection of Observation Schemes
- 12 Curse of Dimensionality
- 13 Quasi-Monte Carlo Integration
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
11 - Ideal Selection of Observation Schemes
from Part Two - Optimal Recovery
Published online by Cambridge University Press: 21 April 2022
Summary
Having determined optimal recovery maps when the observation functionals were fixed, this chapter goes further and determines optimal observation functionals themselves. Two settings in which this quest is achievable are presented: the Hilbert setting and the integration of Lipschitz functions. Along the way, it is shown that adaptive observations offer no real advantage over nonadaptive ones for the estimation of linear functionals over symmetric and convex model sets.
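To make the Lipschitz-integration example concrete: for 1-Lipschitz functions on [0, 1], the optimal observation scheme with n point evaluations is sampling at the n equispaced midpoints, and the resulting midpoint rule achieves the minimal worst-case error L/(4n). The sketch below is illustrative only (the helper name `midpoint_rule` and the test integrand are choices made here, not taken from the book):

```python
import numpy as np

def midpoint_rule(f, n):
    """Estimate the integral of f over [0, 1] from n evaluations at the
    midpoints x_i = (2i - 1) / (2n) -- the optimal nonadaptive observation
    scheme for 1-Lipschitz integrands."""
    x = (2 * np.arange(1, n + 1) - 1) / (2 * n)
    return np.mean(f(x))

# An illustrative 1-Lipschitz integrand with a kink at x = 0.3.
f = lambda x: np.abs(x - 0.3)
exact = 0.29  # closed-form value of the integral of |x - 0.3| over [0, 1]

n = 10
estimate = midpoint_rule(f, n)
error = abs(estimate - exact)
# For any 1-Lipschitz f, the worst-case error is bounded by 1 / (4n).
print(error, 1 / (4 * n))
```

The worst-case bound L/(4n) is attained by "sawtooth" functions that oscillate between the sample points, which is why no other choice of n observation functionals can do better in this model.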
Type: Chapter
Information: Mathematical Pictures at a Data Science Exhibition, pp. 86-93. Publisher: Cambridge University Press. Print publication year: 2022.