Book contents
- Frontmatter
- Contents
- Acronyms
- Notation
- Preface
- 1 Introduction to the World of Sparsity
- 2 The Wavelet Transform
- 3 Redundant Wavelet Transform
- 4 Nonlinear Multiscale Transforms
- 5 The Ridgelet and Curvelet Transforms
- 6 Sparsity and Noise Removal
- 7 Linear Inverse Problems
- 8 Morphological Diversity
- 9 Sparse Blind Source Separation
- 10 Multiscale Geometric Analysis on the Sphere
- 11 Compressed Sensing
- References
- List of Algorithms
- Index
- Plate section
11 - Compressed Sensing
Published online by Cambridge University Press: 06 July 2010
Summary
INTRODUCTION
In this chapter, we provide essential insights into the theory of compressed sensing (CS) that emerged in the work of Candès et al. (2006b), Candès and Tao (2006), and Donoho (2006a). Compressed sensing is also known as compressive sensing, or compressed or compressive sampling.
The conventional wisdom in digital signal processing is that for a band-limited signal to be reconstructed exactly from its samples, the signal must be sampled at a rate of at least twice its bandwidth (the so-called Nyquist rate). This is the celebrated Shannon sampling theorem, and it underlies nearly all signal acquisition protocols in use. However, such a sampling scheme excludes many signals of interest that are not necessarily band limited but can still be described by a small number of degrees of freedom.
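The classical setting can be illustrated numerically. The sketch below (not from the book; all parameters are illustrative) samples a sinusoid above its Nyquist rate and reconstructs intermediate values with Whittaker–Shannon sinc interpolation, which is exact up to truncation error from using finitely many samples.

```python
import numpy as np

# Illustrative sketch: classical Shannon sampling and reconstruction.
# A 5 Hz sinusoid is sampled at 20 Hz, i.e. above its Nyquist rate of 10 Hz.
f0 = 5.0                                  # highest frequency in the signal (Hz)
fs = 4 * f0                               # sampling rate (Hz), above 2 * f0
T = 1.0 / fs                              # sampling period (s)
n = np.arange(64)                         # sample indices
x_n = np.sin(2 * np.pi * f0 * n * T)      # uniform samples

def sinc_reconstruct(t, samples, T):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] sinc((t - nT)/T).

    np.sinc is the normalized sinc, sin(pi x)/(pi x), as the formula requires.
    """
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc((t - k * T) / T))

# Evaluate between sample points, away from the truncation-affected edges.
t = 1.57
err = abs(sinc_reconstruct(t, x_n, T) - np.sin(2 * np.pi * f0 * t))
print(f"interpolation error at t={t}: {err:.2e}")
```

The residual error here comes only from truncating the (in principle infinite) interpolation series, not from the sampling itself.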
CS is a sampling paradigm that allows us to go beyond the Shannon limit by exploiting the sparsity structure of the signal. CS allows us to capture and represent compressible signals at a rate significantly below the Nyquist rate. The sampling step is very fast because it employs nonadaptive linear projections that preserve the structure of the signal. The signal is reconstructed from these projections by viewing the decoding step as a linear inverse problem that is cast as a sparsity-regularized convex optimization problem.
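The nonadaptive linear projections mentioned above can be sketched as follows (a minimal illustration with assumed parameters, not the book's construction): a fixed random Gaussian matrix maps an n-dimensional sparse signal to m ≪ n measurements, and such projections approximately preserve the energy of sparse signals, which is the restricted-isometry flavor of "preserving the structure of the signal."

```python
import numpy as np

# Illustrative sketch: compressive sampling is a nonadaptive linear map
# y = Phi @ x with far fewer rows than columns. Dimensions and sparsity
# level are assumptions chosen for the demo.
rng = np.random.default_rng(1)
n, m, k = 512, 128, 8                  # ambient dim, measurements, sparsity

# Nonadaptive: Phi is drawn once, independently of any signal it will measure.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Check that the energy of random k-sparse signals is nearly preserved.
ratios = []
for _ in range(100):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))

print(f"energy ratios over 100 sparse signals: "
      f"[{min(ratios):.3f}, {max(ratios):.3f}]")
```

The ratios cluster tightly around 1, even though each measurement vector has only a quarter of the signal's dimension.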
In this chapter, we will focus on convex l1-based recovery from CS measurements, for which the algorithms described in Chapter 7 are efficient solvers.
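As a concrete instance of such l1-based recovery, the following sketch (problem sizes and the regularization parameter are assumptions for the demo) solves the sparsity-regularized least-squares problem min_x ½‖y − Φx‖² + λ‖x‖₁ with iterative soft-thresholding (ISTA), one of the proximal solver families of the kind discussed in Chapter 7.

```python
import numpy as np

# Illustrative sketch: recover a k-sparse signal from m < n noiseless
# random measurements by l1-regularized least squares, solved with ISTA.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5

# k-sparse ground truth and a Gaussian measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true                       # noiseless CS measurements

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 1e-3                             # small lambda: near-exact in the noiseless case
L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the data-fit gradient
x = np.zeros(n)
for _ in range(3000):
    # Gradient step on the quadratic term, then shrinkage on the l1 term.
    x = soft(x + (Phi.T @ (y - Phi @ x)) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.2e}")
```

Even with m well below n, the sparse signal is recovered to within the small bias introduced by λ; faster variants (e.g., accelerated or continuation schemes) follow the same template.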
- Type: Chapter
- Book: Sparse Image and Signal Processing: Wavelets, Curvelets, Morphological Diversity
- Pages: 277-288
- Publisher: Cambridge University Press
- Print publication year: 2010