Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Foreword
- Preface
- Acknowledgements
- 1 Mathematical Techniques
- 2 Single Channel Static MR Image Reconstruction
- 3 Multi-Coil Parallel MRI Reconstruction
- 4 Dynamic MRI Reconstruction
- 5 Applications in Other Areas
- 6 Some Open Problems
- Index
- About the author
- Color Plates
1 - Mathematical Techniques
Published online by Cambridge University Press: 05 June 2016
Summary
Most natural signals are continuous, but storage and computational devices are digital. Thus, for any information processing, the signals need to be converted from the continuous to the discrete domain. The conversion from the continuous/analog domain to the discrete/digital domain is called sampling. The sampling requirement is guided by the famous Shannon–Nyquist–Whittaker–Kotelnikov sampling theorem. Plainly speaking, the theorem says: in order to reconstruct a signal from its sampled measurements, the sampling rate should be at least twice the maximum frequency content of the signal.
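The consequence of violating this rate can be seen numerically. The following short sketch (an illustration assuming NumPy, not an example from the book) samples a 3 Hz cosine at only 4 Hz — below the 6 Hz Nyquist rate — and shows that the samples are identical to those of a 1 Hz cosine, so the two signals cannot be told apart from their samples:

```python
import numpy as np

fs = 4.0           # sampling rate (Hz), deliberately below the Nyquist rate
f0 = 3.0           # true signal frequency (Hz)
f_alias = fs - f0  # frequency the sub-Nyquist samples masquerade as (1 Hz)

n = np.arange(16)  # sample indices
t = n / fs         # sample times

x = np.cos(2 * np.pi * f0 * t)            # samples of the 3 Hz signal
x_alias = np.cos(2 * np.pi * f_alias * t)  # samples of a 1 Hz signal

# The two sample sequences coincide exactly: aliasing.
print(np.allclose(x, x_alias))
```

Sampling at or above 6 Hz would remove the ambiguity.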
There are several problems with this approach. The first problem is that the signal is assumed to be low-pass, that is, limited by a maximum frequency. However, naturally occurring signals are not exactly low-pass. There are two approaches to address this discrepancy: the first is to pass the signal through a low-pass filter; the second is to sample at a very high rate so that the high-frequency content of the signal is captured during sampling. When the signal is artificially made low-pass, it loses its natural sharpness. It is blurred at the outset; one can only sample and reconstruct the blurred signal. The second approach is even more problematic. In order to capture the high-frequency content of the signal, the sampling rate has to increase, and such fast sampling challenges the physics of the sampling device. It also produces a large amount of digital data that is difficult to store and manipulate (one must remember that computers are not always available for processing sampled signals). The third problem with this approach is that, try however hard, there will always be some high-frequency content that is not captured. During reconstruction, this leads to the Gibbs phenomenon along sharp signal discontinuities.
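The Gibbs phenomenon is easy to reproduce. This sketch (again an illustrative assumption using NumPy, not taken from the book) reconstructs a unit square wave from a truncated Fourier series and measures the overshoot next to the jump; adding more harmonics makes the ringing narrower but the overshoot itself does not go away:

```python
import numpy as np

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a +/-1 square wave (odd harmonics only)."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):   # odd harmonics 1, 3, 5, ...
        s += (4.0 / (np.pi * k)) * np.sin(k * t)
    return s

# Evaluate on a fine grid over half a period, where the wave equals +1.
t = np.linspace(0.001, np.pi - 0.001, 20000)
for n_terms in (10, 100, 1000):
    overshoot = square_partial_sum(t, n_terms).max() - 1.0
    print(f"{n_terms:5d} harmonics: overshoot = {overshoot:.4f}")
```

The overshoot stays near 0.18 (about 9% of the jump height of 2) no matter how many harmonics are added; it merely moves closer to the discontinuity.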
So far, we have discussed the problems pertaining to sampling (limitations on the physics of the scanner) and reconstruction (blurry images or Gibbs artifacts). The other problem with Shannon–Nyquist sampling is that it is wasteful. More often than not, the signal is compressed for ease of storage and retrieval. The compression is generally effected via transform coding. Let us take the example of digital photography. To compress a digital image, its transform coefficients are computed: the discrete cosine transform in the case of JPEG, or the wavelet transform in the case of JPEG2000.
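Transform coding works because the transform concentrates the signal's energy into a few coefficients. As a minimal sketch (assuming NumPy; the Gaussian test signal and the choice to keep 8 coefficients are illustrative assumptions, and in practice one would use a library DCT routine), the orthonormal DCT-II matrix is built directly, most coefficients of a smooth signal are discarded, and the signal is still recovered almost perfectly:

```python
import numpy as np

N = 64
k = np.arange(N)[:, None]   # frequency index (rows)
j = np.arange(N)[None, :]   # time index (columns)

# Orthonormal DCT-II matrix -- the transform behind JPEG's block coding.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * j + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)

# A smooth test signal: a sampled Gaussian bump.
n = np.arange(N)
x = np.exp(-((n - 32) / 10.0) ** 2)

c = C @ x                            # forward DCT
keep = np.argsort(np.abs(c))[-8:]    # indices of the 8 largest coefficients
c_kept = np.zeros_like(c)
c_kept[keep] = c[keep]               # discard the other 56 coefficients
x_rec = C.T @ c_kept                 # inverse DCT (C is orthonormal)

err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(f"relative error keeping 8 of {N} coefficients: {err:.1e}")
```

Keeping roughly one-eighth of the coefficients leaves only a small relative error for this smooth signal, which is the energy compaction that JPEG-style coders exploit.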
- Publisher: Cambridge University Press
- Print publication year: 2015