Book contents
- Frontmatter
- Contents
- Preface
- Notation
- Commonly used abbreviations
- 1 Channels, codes and capacity
- 2 Low-density parity-check codes
- 3 Low-density parity-check codes: properties and constructions
- 4 Convolutional codes
- 5 Turbo codes
- 6 Serial concatenation and RA codes
- 7 Density evolution and EXIT charts
- 8 Error floor analysis
- References
- Index
4 - Convolutional codes
Published online by Cambridge University Press: 05 June 2012
Summary
Introduction
In this chapter we introduce convolutional codes, the building blocks of turbo codes. Our starting point is to introduce convolutional encoders and their trellis representation. Then we consider the decoding of convolutional codes using the BCJR algorithm for the computation of maximum a posteriori message probabilities and the Viterbi algorithm for finding the maximum likelihood (ML) codeword. Our aim is to enable the presentation of turbo codes in the following chapter, so this chapter is by no means a thorough consideration of convolutional codes – we shall only present material directly relevant to turbo codes.
Convolutional encoders
Unlike a block code, which acts on the message in finite-length blocks, a convolutional code acts like a finite-state machine, taking in a continuous stream of message bits and producing a continuous stream of output bits. The convolutional encoder has a memory of the past inputs, which is held in the encoder state. The output depends on the value of this state, as well as on the present message bits at the input, but is completely unaffected by any subsequent message bits. Thus the encoder can begin encoding and transmission before it has the entire message. This differs from block codes, where the encoder must wait for the entire message before encoding.
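The finite-state behaviour described above can be sketched in a few lines of code. The following is a minimal illustration of a rate-1/2 encoder with memory 2; the generator polynomials g0 = 1 + D + D^2 and g1 = 1 + D^2 (octal 7, 5) are an illustrative choice, not one taken from the text.

```python
def convolutional_encode(message_bits):
    """Encode a bit stream with a rate-1/2, memory-2 convolutional code.

    Illustrative generators (not from the text):
      g0 = 1 + D + D^2,  g1 = 1 + D^2  (octal 7, 5).
    """
    s1 = s2 = 0          # encoder state: the two most recent input bits
    output = []
    for u in message_bits:
        v0 = u ^ s1 ^ s2  # output of generator 1 + D + D^2
        v1 = u ^ s2       # output of generator 1 + D^2
        output.extend([v0, v1])
        s1, s2 = u, s1    # shift the new input into the state register
    return output
```

Note that each pair of output bits depends only on the current input bit and the two stored past inputs, never on future inputs, so encoding can proceed as the message arrives.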
When discussing convolutional codes it is convenient to use time to mark the progression of input bits through the encoder.
- Iterative Error Correction: Turbo, Low-Density Parity-Check and Repeat-Accumulate Codes, pp. 121-164. Publisher: Cambridge University Press. Print publication year: 2009