Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Motivation
- 2 Book overview
- 3 Principles of lossless compression
- 4 Entropy coding techniques
- 5 Lossy compression of scalar sources
- 6 Coding of sources with memory
- 7 Mathematical transformations
- 8 Rate control in transform coding systems
- 9 Transform coding systems
- 10 Set partition coding
- 11 Subband/wavelet coding systems
- 12 Methods for lossless compression of images
- 13 Color and multi-component image and video coding
- 14 Distributed source coding
- Index
- References
10 - Set partition coding
Published online by Cambridge University Press: 05 June 2012
Summary
Principles
The storage requirement for data samples depends on the number of possible values they can take, called the alphabet size. Real-valued data theoretically require an unlimited number of bits per sample for perfect precision, because their alphabet size is infinite. However, every practical measurement of a continuous quantity contains some level of noise, so only some digits of the measured value carry physical significance. Such data are therefore stored with imperfect, but usually adequate, precision using 32 or 64 bits. Only integer-valued data samples with a finite alphabet, as is the case for image data, can be stored with perfect precision. We therefore limit our considerations here to integer data.
The natural representation of the integers in a dataset requires a number of bits per sample no less than the base-2 logarithm of the number of possible integer values. For example, the usual monochrome image has integer values from 0 to 255, so we use 8 bits to store every sample. Suppose, however, that we can find a group of samples whose values do not exceed 15. Then every sample in that group needs at most 4 bits to specify its value, a saving of at least 4 bits per sample. Of course, we also need information to locate the samples in the group. If this location information costs fewer bits than four times the number of such samples, then we have achieved compression.
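The trade-off described above can be sketched numerically. The following is a minimal illustration (not code from the book): the function name, the group size of 100 samples, and the 300-bit location cost are assumed for the example, while the 8-bit alphabet and the 15-valued group come from the text.

```python
import math

def net_saving(num_samples, max_value, full_bits, location_bits):
    """Net bits saved by coding a low-valued group of samples separately.

    Each group sample needs only ceil(log2(max_value + 1)) bits instead of
    full_bits, but the group's location information must also be paid for.
    A positive return value means compression was achieved.
    """
    group_bits = max(1, math.ceil(math.log2(max_value + 1)))
    saved = (full_bits - group_bits) * num_samples
    return saved - location_bits

# Example from the text: an 8-bit image with a group of 100 samples,
# all no greater than 15, so each needs only 4 bits (saving 400 bits).
# If locating the group costs 300 bits, the net saving is 100 bits.
print(net_saving(100, 15, 8, 300))  # -> 100
```

As the text notes, compression is achieved only when the location information costs less than the per-sample saving times the group size; with a 500-bit location cost in the same example, the net saving would be negative.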
Digital Signal Compression: Principles and Practice, pp. 265-312. Publisher: Cambridge University Press. Print publication year: 2011.