Book contents
- Frontmatter
- Contents
- List of Acronyms
- Notation
- Foreword
- 1 Introduction to the World of Sparsity
- 2 The Wavelet Transform
- 3 Redundant Wavelet Transform
- 4 Nonlinear Multiscale Transforms
- 5 Multiscale Geometric Transforms
- 6 Sparsity and Noise Removal
- 7 Linear Inverse Problems
- 8 Morphological Diversity
- 9 Sparse Blind Source Separation
- 10 Dictionary Learning
- 11 Three-Dimensional Sparse Representations
- 12 Multiscale Geometric Analysis on the Sphere
- 13 Compressed Sensing
- 14 This Book's Take-Home Message
- Notes
- References
- Index
- Plate section
13 - Compressed Sensing
Published online by Cambridge University Press: 05 October 2015
Summary
INTRODUCTION
In this chapter, we provide essential insights into the theory of compressed sensing (CS), which emerged in Candès et al. (2006b), Candès and Tao (2006), and Donoho (2006a). Compressed sensing is also known as compressive sensing, compressed sampling, or compressive sampling.
The conventional wisdom in digital signal processing is that for a band-limited continuous-time signal to be reconstructed exactly from its samples, the signal needs to be sampled at a rate of at least twice its bandwidth (the so-called Nyquist rate). This is the celebrated Shannon sampling theorem, and it underlies nearly all signal acquisition protocols in use. However, such a sampling scheme excludes many signals of interest that are not necessarily band-limited but can still be described by a small number of degrees of freedom.
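The consequence of sampling below the Nyquist rate is aliasing: two different tones become indistinguishable from their samples. A minimal numerical illustration (the frequencies and sampling rate here are illustrative choices, not taken from the book):

```python
import numpy as np

fs = 10.0                    # sampling rate (Hz); too slow for a 7 Hz tone (Nyquist rate = 14 Hz)
t = np.arange(20) / fs       # 20 sample instants

high = np.sin(2 * np.pi * 7 * t)     # 7 Hz tone, undersampled
alias = -np.sin(2 * np.pi * 3 * t)   # its alias: 7 Hz folds to |7 - 10| = 3 Hz with a sign flip

# The two tones produce identical samples, so neither can be identified from them.
print(np.allclose(high, alias))      # True
```

The identity follows from sin(2π·7n/10) = sin(2π·7n/10 − 2πn) = −sin(2π·3n/10): subtracting a full turn per sample leaves the sample values unchanged.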
CS is a paradigm that allows one to sample a signal at a rate proportional to its information content rather than its bandwidth (think of sparsity as a measure of the information content). In a discrete setting, this tells us that a signal can be recovered from a small number of samples provided that it is sufficiently sparse or compressible. The sampling step is very fast since it employs nonadaptive linear projections that capture the structure of the signal. The signal is then reconstructed from these projections by viewing the decoding step as a linear inverse problem, cast as a sparsity-regularized convex optimization problem.
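The encode/decode pipeline above can be sketched numerically: sample a sparse signal with random Gaussian projections, then decode by basis pursuit (minimize the ℓ1 norm subject to the measurements), solved here as a linear program via SciPy. The dimensions, seed, and the use of `scipy.optimize.linprog` are illustrative choices of this sketch, not the book's:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                       # signal length, measurements (m << n), sparsity

# k-sparse signal and a random Gaussian sensing matrix (nonadaptive projections)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                                  # the m linear measurements

# Basis pursuit: min ||x||_1  s.t.  Ax = y.  With x = u - v, u, v >= 0,
# this becomes the LP: min 1'(u + v)  s.t.  [A, -A][u; v] = y.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.abs(x_hat - x).max())
```

With m well above the order of k·log(n/k) measurements, the ℓ1 decoder typically recovers the sparse signal exactly (up to solver precision), even though the linear system Ax = y is heavily underdetermined.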
In this chapter, we will focus on convex l1-based recovery from CS measurements, for which the algorithms described in Chapter 7 are efficient solvers. l1 minimization is, however, not the only way to proceed. Other algorithms with theoretical recovery guarantees exist, for example greedy algorithms and their variants (Tropp and Gilbert 2007; Donoho et al. 2012; Needell and Tropp 2008; Needell and Vershynin 2009), or nonconvex lp regularization with 0 < p < 1 (Chartrand 2007; Chartrand and Staneva 2008; Foucart and Lai 2009; Blanchard et al. 2009). We will not discuss these here.
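To make the greedy alternative concrete, here is a minimal sketch of Orthogonal Matching Pursuit in the spirit of Tropp and Gilbert (2007): at each step it picks the column of the sensing matrix most correlated with the residual, then refits by least squares on the selected support. The function name, dimensions, and seed are illustrative, not taken from the cited papers:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Refit by least squares on the selected support; update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 128, 48, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x, k)
print("max recovery error:", np.abs(x_hat - x).max())
```

OMP runs in a handful of small least-squares solves, which is why greedy methods are attractive when k is small; their recovery guarantees are, however, generally weaker than those of ℓ1 minimization.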
One of the charms of CS theory is its interdisciplinary approach, as it draws from various applied mathematical disciplines including linear algebra, probability theory, high-dimensional geometry, functional analysis, computational harmonic analysis, and optimization. It also has implications in statistics, signal processing, information theory, and learning theory.
- Type: Chapter
- Information: Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis, pp. 373–390
- Publisher: Cambridge University Press
- Print publication year: 2015