  • Print publication year: 2015
  • Online publication date: May 2018

6 - Dynamics of countable-state Markov models


Markov processes are useful for modeling a variety of dynamical systems. Often questions involving the long-time behavior of such systems are of interest, such as whether the process has a limiting distribution, or whether time averages constructed using the process are asymptotically the same as statistical averages.

Examples with finite state space

Recall that a probability distribution π on S is an equilibrium probability distribution for a time-homogeneous Markov process X if π = πH(t) for all t. In the discrete-time case, this condition reduces to π = πP. We shall see in this section that under certain natural conditions, the existence of an equilibrium probability distribution is related to whether the distribution of X(t) converges as t → ∞. Existence of an equilibrium distribution is also connected to the mean time needed for X to return to its starting state. To motivate the conditions that will be imposed, we begin by considering four examples of finite state processes. Then the relevant definitions are given for finite or countably infinite state space, and propositions regarding convergence are presented.
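The condition π = πP in the discrete-time case can be checked numerically. The sketch below, using a hypothetical 3 × 3 one-step transition matrix (the matrix is an illustration, not one from the text), solves the linear system π = πP together with the normalization Σᵢ πᵢ = 1:

```python
import numpy as np

# Hypothetical one-step transition matrix of a 3-state chain
# (rows sum to 1); chosen only for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Solve pi = pi P subject to sum(pi) = 1, i.e. the overdetermined
# system (P^T - I) pi = 0 augmented with the normalization row.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(pi @ P, pi)    # equilibrium condition pi = pi P
assert np.isclose(pi.sum(), 1.0)  # pi is a probability distribution
```

For a chain with a unique equilibrium distribution, the same π is obtained no matter which row of the augmented system is dropped; the least-squares formulation simply avoids choosing one.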

Example 6.1 Consider the discrete-time Markov process with the one-step probability diagram shown in Figure 6.1. Note that the process can't escape from the set of states S1 = {a, b, c, d, e}, so that if the initial state X(0) is in S1 with probability one, then the limiting distribution is supported by S1. Similarly if the initial state X(0) is in S2 = {f, g, h} with probability one, then the limiting distribution is supported by S2. Thus, the limiting distribution is not unique for this process. The natural way to deal with this problem is to decompose the original problem into two problems. That is, consider a Markov process on S1, and then consider a Markov process on S2.
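The non-uniqueness of the limiting distribution can be seen numerically. The sketch below uses a small hypothetical 4-state chain with two closed sets of states, {0, 1} and {2, 3} (the states a–h and the transition probabilities of Figure 6.1 are not reproduced here; this smaller matrix only illustrates the same effect):

```python
import numpy as np

# Hypothetical chain with two closed classes {0, 1} and {2, 3}:
# no transitions cross between the two blocks.
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.4, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.2, 0.8],
    [0.0, 0.0, 0.5, 0.5],
])

Pt = np.linalg.matrix_power(P, 200)  # close to the limit of P^t

# Distribution of X(t) for two different deterministic initial states.
start_in_class1 = np.array([1.0, 0.0, 0.0, 0.0]) @ Pt
start_in_class2 = np.array([0.0, 0.0, 1.0, 0.0]) @ Pt

# Each limit is supported by the closed set containing the initial
# state, so the two limits differ: the limiting distribution
# depends on where the chain starts.
assert np.allclose(start_in_class1[2:], 0.0)
assert np.allclose(start_in_class2[:2], 0.0)
```

Restricting attention to one closed class, as suggested above, recovers a chain with a unique limiting distribution (each block here is aperiodic because of its self-loops).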

Does the distribution of X(t) necessarily converge as t → ∞ if X(0) ∈ S1 with probability one? The answer is no.
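A standalone two-state example (not the chain of Figure 6.1) shows how convergence can fail even when an equilibrium distribution exists: if the chain flips state deterministically at every step, the distribution of X(t) oscillates forever.

```python
import numpy as np

# Periodic two-state chain: the state flips at every step.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

mu = np.array([1.0, 0.0])  # start in state 0 with probability one
d_even = mu @ np.linalg.matrix_power(P, 10)  # distribution at even t
d_odd = mu @ np.linalg.matrix_power(P, 11)   # distribution at odd t

# mu P^t alternates between (1, 0) and (0, 1), so it has no limit,
# even though pi = (1/2, 1/2) satisfies pi = pi P.
assert np.allclose(d_even, [1.0, 0.0])
assert np.allclose(d_odd, [0.0, 1.0])
assert np.allclose(np.array([0.5, 0.5]) @ P, [0.5, 0.5])
```

This kind of periodic behavior is one of the obstructions to convergence that the conditions introduced below are designed to rule out.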