Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Executive Summary
- 9 Foundational Results of Optimal Recovery
- 10 Approximability Models
- 11 Ideal Selection of Observation Schemes
- 12 Curse of Dimensionality
- 13 Quasi-Monte Carlo Integration
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
13 - Quasi-Monte Carlo Integration
from Part Two - Optimal Recovery
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter is concerned with quasi-Monte Carlo rules, i.e., multivariate quadrature rules featuring equal weights and deterministically chosen evaluation points. The variation of a function and the star discrepancy of a set of points are defined as a prerequisite to the Koksma–Hlawka inequality, which bounds the error of a quasi-Monte Carlo rule by the product of the variation and the star discrepancy. Finally, some evaluation points with small star discrepancy are uncovered, namely the Halton sequence and the Hammersley set.
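As a minimal illustration of the ideas summarized above (not code from the chapter itself), the sketch below builds Halton points via the radical-inverse construction and applies an equal-weight quasi-Monte Carlo rule to a simple integrand; the integrand, point count, and choice of prime bases are illustrative assumptions.

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse: reflect the base-`base`
    digits of n across the radix point, giving a value in [0, 1)."""
    inv, f = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        inv += digit * f
        f /= base
    return inv

def halton_points(count, bases=(2, 3)):
    """First `count` points of the Halton sequence in [0,1)^d,
    one coprime base (here, distinct primes) per coordinate."""
    return [tuple(radical_inverse(n, b) for b in bases)
            for n in range(1, count + 1)]

def qmc_rule(f, points):
    """Quasi-Monte Carlo rule: equal-weight average of f over the points."""
    return sum(f(x) for x in points) / len(points)

# Illustrative example: integrate f(x, y) = x*y over [0,1]^2 (exact value 1/4).
pts = halton_points(4096)
estimate = qmc_rule(lambda p: p[0] * p[1], pts)
```

Because the Halton points have small star discrepancy, the Koksma–Hlawka inequality guarantees that the error of this rule decays essentially like (log N)^d / N for integrands of bounded variation, rather than the N^(-1/2) rate of plain Monte Carlo.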
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 102–112
- Publisher: Cambridge University Press
- Print publication year: 2022