Book contents
- Frontmatter
- Contents
- Chapter dependencies
- Preface
- 1 Introduction to probability
- 2 Introduction to discrete random variables
- 3 More about discrete random variables
- 4 Continuous random variables
- 5 Cumulative distribution functions and their applications
- 6 Statistics
- 7 Bivariate random variables
- 8 Introduction to random vectors
- 9 Gaussian random vectors
- 10 Introduction to random processes
- 11 Advanced concepts in random processes
- 12 Introduction to Markov chains
- 13 Mean convergence and applications
- 14 Other modes of convergence
- 15 Self similarity and long-range dependence
- Bibliography
- Index
12 - Introduction to Markov chains
Published online by Cambridge University Press: 05 June 2012
Summary
A Markov chain is a random process with the property that given the values of the process from time zero up through the current time, the conditional probability of the value of the process at any future time depends only on its value at the current time. This is equivalent to saying that the future and the past are conditionally independent given the present (cf. Problem 70 in Chapter 1).
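In symbols (standard notation for a discrete-time chain with state space indexed by integers; the book's own notation may differ slightly), the Markov property says that for every time $n$ and all states $i_0, \ldots, i_{n-1}, i, j$,

```latex
P(X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0)
  = P(X_{n+1} = j \mid X_n = i).
```

The entire history $X_0, \ldots, X_{n-1}$ drops out of the conditioning once the present value $X_n$ is given, which is exactly the conditional independence of past and future described above.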
Markov chains often have intuitively pleasing interpretations. Some examples discussed in this chapter are random walks (without barriers and with barriers, which may be reflecting, absorbing, or neither), queuing systems (with finite or infinite buffers), birth–death processes (with or without spontaneous generation), life (with states being “healthy,” “sick,” and “death”), and the gambler's ruin problem.
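The gambler's ruin problem mentioned above can be sketched as a random walk on $\{0, 1, \ldots, N\}$ with absorbing barriers at $0$ (ruin) and $N$ (the gambler's goal). The following minimal simulation is an illustration, not code from the book; the function names and parameters are invented for this sketch.

```python
import random

def gamblers_ruin(start, N, p, rng):
    """Play one game: win a unit with probability p, lose one otherwise,
    until the fortune hits the absorbing barrier 0 or N.
    Return True if the gambler reaches N before going broke."""
    state = start
    while 0 < state < N:
        state += 1 if rng.random() < p else -1
    return state == N

def win_probability(start, N, p, trials=20000, seed=1):
    """Monte Carlo estimate of the probability of reaching N from start."""
    rng = random.Random(seed)
    wins = sum(gamblers_ruin(start, N, p, rng) for _ in range(trials))
    return wins / trials

# For a fair game (p = 1/2) the exact win probability is start/N,
# so this estimate should land near 0.5.
print(win_probability(5, 10, 0.5))
```

The walk is a Markov chain: the next fortune depends only on the current fortune, never on how the gambler arrived at it. The absorbing barriers make states 0 and N traps, one variant of the barrier types (reflecting, absorbing, or neither) listed above.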
Section 12.1 briefly highlights some simple properties of conditional probability that are very useful in studying Markov chains. Sections 12.2–12.4 cover basic results about discrete-time Markov chains. Continuous-time chains are discussed in Section 12.5.
Preliminary results
We present some easily derived properties of conditional probability. These observations will greatly simplify some of our calculations for Markov chains.
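A representative identity of this kind (a standard fact of conditional probability, stated here for illustration rather than quoted from the section) is the conditional form of the chain rule: for events $A$, $B$, $C$ with $P(B \cap C) > 0$,

```latex
P(A \cap B \mid C) = P(A \mid B \cap C)\, P(B \mid C).
```

Applied with $A$ a future event, $B$ the present state, and $C$ the past, identities like this let the Markov property collapse long joint conditional probabilities into products of one-step transition probabilities.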
- Type: Chapter
- Publisher: Cambridge University Press
- Print publication year: 2006