Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Executive Summary
- 14 Sparse Recovery from Linear Observations
- 15 The Complexity of Sparse Recovery
- 16 Low-Rank Recovery from Linear Observations
- 17 Sparse Recovery from One-Bit Observations
- 18 Group Testing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
16 - Low-Rank Recovery from Linear Observations
from Part Three - Compressive Sensing
Published online by Cambridge University Press: 21 April 2022
Summary
In this chapter, a variation of the standard compressive sensing problem is studied: sparse vectors are replaced by low-rank matrices. Recovery is now performed by nuclear-norm minimization, with success characterized by an analog of the null space property for the observation map. This property holds with high probability for random observation maps, again as a consequence of an analog of the restricted isometry property. Finally, a formulation of nuclear-norm minimization as a semidefinite program is justified.
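The nuclear norm featured in this summary is the sum of the singular values of a matrix, playing the role that the ℓ1-norm plays for sparse vectors. A minimal sketch of computing it with NumPy, using a hypothetical rank-1 example (the vectors u and v are illustrative assumptions, not from the chapter):

```python
import numpy as np

# Hypothetical rank-1 example: X = u v^T has exactly one nonzero
# singular value, equal to ||u||_2 * ||v||_2.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])
X = np.outer(u, v)

# The nuclear norm ||X||_* is the sum of the singular values of X.
singular_values = np.linalg.svd(X, compute_uv=False)
nuclear_norm = singular_values.sum()
```

The semidefinite-program formulation mentioned in the summary rests on the standard characterization ||X||_* = min { (tr W1 + tr W2)/2 : [[W1, X]; [X^T, W2]] ⪰ 0 }, which turns nuclear-norm minimization under linear observation constraints into a semidefinite program solvable by off-the-shelf convex solvers.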
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 132-138
- Publisher: Cambridge University Press
- Print publication year: 2022