
Letter to the Editor

Published online by Cambridge University Press: 11 December 2019

Omer Angel
Affiliation:
Department of Mathematics, The University of British Columbia, 1984 Mathematics Road, Vancouver, BC V6T1Z2, Canada.
Mark Holmes
Affiliation:
School of Mathematics and Statistics, The University of Melbourne, Parkville, VIC 3010, Australia.

Copyright © Applied Probability Trust 2019

Dear Editor,

Kemeny’s constant for infinite DTMCs is infinite

Consider a positive recurrent discrete-time Markov chain ${(X_n)}_{n \ge 0}$ with (countable) state space $\mathcal{S}$. For $x\in \mathcal{S}$, define the positive hitting time $T_x=\inf\{n\ge 1\colon X_n=x\}$ and the hitting time $\theta_x=\inf\{n\ge 0\colon X_n=x\}$. Let $\mathbb{P}_x$ denote the law of the process started from state $x$, and let $\mathbb{E}_x$ denote the corresponding expectation. It was observed by Kemeny and Snell [3] that, when $\mathcal{S}$ is finite, the expected hitting time of a random stationary target, i.e. the quantity

(1) \begin{equation} \kappa_x=\sum_{y \in \mathcal{S}}\pi_y \mathbb{E}_x[T_y] \end{equation}

does not depend on $x$. (Here ${\pi}=(\pi_y)_{y \in \mathcal{S}}$ is the stationary distribution for the chain.) Thus, the quantity $\kappa=\kappa_x$ in (1) is called Kemeny’s constant. Considerable effort has been devoted to giving an ‘intuitive’ proof of this result. In [1] it was argued that it is more natural to consider the quantity

\begin{equation*} \omega_x=\sum_{y \in \mathcal{S}}\pi_y \mathbb{E}_x[\theta_y]. \end{equation*}

Note that $\mathbb{E}_x[\theta_y]=\mathbf{1}\{y\ne x\}\mathbb{E}_x[T_y]$, from which it follows that $\kappa_x=1+\omega_x$ (since $\pi_x\mathbb{E}_x[T_x]=1$). For finite $\mathcal{S}$, Hunter [2] established the sharp bound $\kappa\ge (|\mathcal{S}|+1)/2$ (the bound is achieved by the deterministic walk on a directed cycle). It was conjectured in [1, p. 1031] that $\kappa$ is infinite for any infinite-state chain. In this note we verify this conjecture.
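As an illustrative aside (not part of the original argument), the following Python sketch checks these identities numerically on an arbitrary small irreducible chain: it computes $\pi$ and the mean hitting times $\mathbb{E}_x[T_y]$ by solving linear systems, and verifies that $\kappa_x$ in (1) is the same for every starting state, that $\kappa_x=1+\omega_x$, and that Hunter’s bound $\kappa\ge(|\mathcal{S}|+1)/2$ holds. The 4-state transition matrix P is an invented example.

```python
import numpy as np

# An arbitrary irreducible 4-state transition matrix (illustrative only).
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.5, 0.0, 0.3, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.3, 0.3, 0.2, 0.2]])
n = P.shape[0]

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

# Mean hitting times: for fixed y, the vector (E_x[theta_y])_{x != y} solves
# (I - Q) h = 1, where Q is P restricted to S \ {y}; then E_x[T_y] = 1 + sum_u P[x,u] E_u[theta_y].
ET = np.zeros((n, n))
for y in range(n):
    Q = np.delete(np.delete(P, y, 0), y, 1)
    theta = np.insert(np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1)), y, 0.0)
    ET[:, y] = 1.0 + P @ theta                     # E_x[T_y] for all x

kappa = ET @ pi                                     # kappa_x = sum_y pi_y E_x[T_y]
omega = (ET - np.diag(np.diag(ET))) @ pi            # omega_x uses E_x[theta_y] = 1{y != x} E_x[T_y]
print(kappa)                                        # all entries agree: Kemeny's constant
print(np.allclose(kappa, 1 + omega))                # kappa = 1 + omega
print(kappa[0] >= (n + 1) / 2)                      # Hunter's bound
```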

Theorem 1. For an irreducible, positive recurrent, discrete-time Markov chain with infinite state space and for any $x\in\mathcal{S}$, we have $\kappa_x = \sum_{y\in\mathcal{S}} \pi_y \mathbb{E}_x[T_y] = \infty$.
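Before turning to the proof, here is a brief Monte Carlo illustration of the divergence (again a sketch of ours, not from the letter), using an assumed example chain on the nonnegative integers: from state 0 it jumps to $y\ge 1$ with probability $2^{-y}$, and from $y\ge 1$ it steps down deterministically to $y-1$. This chain is positive recurrent with $\pi_0=1/3$ and $\pi_y=2^{-(y-1)}/3$ for $y\ge 1$, and the estimated partial sums $\sum_{y\le N}\pi_y\mathbb{E}_0[T_y]$ grow roughly linearly in the cutoff $N$.

```python
import random

def estimate_hitting_times(max_state, runs=2000, seed=0):
    """Monte Carlo estimates of E_0[T_y], y = 1..max_state, for the example chain:
    from 0 jump to y >= 1 with probability 2**(-y); from y >= 1 move to y - 1."""
    rng = random.Random(seed)
    totals = [0.0] * (max_state + 1)
    for _ in range(runs):
        first_hit = [None] * (max_state + 1)
        state, t = 0, 0
        while any(h is None for h in first_hit[1:]):
            if state == 0:
                state = 1
                while rng.random() < 0.5:          # geometric jump: P(jump = y) = 2**(-y)
                    state += 1
            else:
                state -= 1
            t += 1
            if 1 <= state <= max_state and first_hit[state] is None:
                first_hit[state] = t
        for y in range(1, max_state + 1):
            totals[y] += first_hit[y]
    return [totals[y] / runs for y in range(max_state + 1)]   # index 0 unused

N = 8
ET0 = estimate_hitting_times(N)
pi = [1 / 3] + [2 ** (-(y - 1)) / 3 for y in range(1, N + 1)]  # stationary distribution
partial = 0.0
for y in range(1, N + 1):
    partial += pi[y] * ET0[y]
    print(y, round(partial, 2))   # partial sums of kappa_0 keep growing with the cutoff N
```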

This theorem is an immediate consequence of the following result, since it implies that every term $\pi_y\mathbb{E}_x[T_y]$ in the sum is at least $\pi_x/2>0$, and the sum has infinitely many terms.

Lemma 1. Let $\mathcal{S}$ be finite or infinite. Then, for every $x,y\in \mathcal{S}$, $\mathbb{E}_x[T_y]\ge \pi_x/(2\pi_y)$.

Proof. We first prove by induction on $n\ge 0$ that $\mathbb{P}_x(X_n=y) \le {\pi_y}/{\pi_x}$ for every $x,y$. The case $n=0$ is trivial (for both $x=y$ and $x\ne y$). For $n\ge 1$, we have

(2) \begin{equation} \mathbb{P}_x(X_n=y)=\sum_{u \in \mathcal{S}}\mathbb{P}_x(X_{n-1}=u)p_{u,y}\le \sum_{u \in \mathcal{S}}\dfrac{\pi_u}{\pi_x}p_{u,y}=\dfrac{\pi_y}{\pi_x}, \end{equation}

where $( p_{w,z})_{w,z \in \mathcal{S}}$ are the one-step transition probabilities, and we have used the induction hypothesis and the full balance equations $\sum_{u\in\mathcal{S}}\pi_u p_{u,y}=\pi_y$. Using (2), we have

$$\mathbb{P}_x(T_y\le n)=\mathbb{P}_x\bigg(\bigcup_{j=1}^n\{X_j=y\}\bigg) \le \sum_{j=1}^n \mathbb{P}_x(X_j=y) \le \dfrac{n\pi_y}{\pi_x}.$$

Therefore, $\mathbb{P}_x(T_y>n)\ge 1-n{\pi_y}/{\pi_x}$, and

$$\mathbb{E}_x[T_y] = \sum_{n=0}^\infty \mathbb{P}_x(T_y>n) \ge \sum_{n=0}^{\lfloor \pi_x/\pi_y\rfloor} \bigg(1-\dfrac{n\pi_y}{\pi_x}\bigg) \ge \dfrac{\pi_x}{2\pi_y}.$$

The last step uses the fact that, for $a\ge 0$ ,

$$\sum_{n=0}^{\lfloor a\rfloor} \bigg(1-\dfrac{n}{a}\bigg) = \dfrac{(2a-\lfloor a\rfloor)(\lfloor a\rfloor+1)}{2a} \ge \dfrac{a}{2}.$$
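As a final sanity check (a sketch of ours, not part of the letter), the snippet below verifies the closed form used in the last display for a few values of $a$, and tests the bound of Lemma 1 on a randomly generated finite chain, with $\mathbb{E}_x[T_y]$ computed by solving the usual first-step linear systems.

```python
import math
import numpy as np

# Check: sum_{n=0}^{floor(a)} (1 - n/a) = (2a - floor(a))(floor(a) + 1)/(2a) >= a/2.
for a in [0.3, 1.0, 2.5, 7.0, 12.8]:
    m = math.floor(a)
    s = sum(1 - n / a for n in range(m + 1))
    assert abs(s - (2 * a - m) * (m + 1) / (2 * a)) < 1e-12
    assert s >= a / 2

# Check Lemma 1, E_x[T_y] >= pi_x / (2 pi_y), on a random irreducible finite chain.
rng = np.random.default_rng(1)
n = 6
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)                   # strictly positive rows => irreducible

A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]

ET = np.zeros((n, n))
for y in range(n):
    Q = np.delete(np.delete(P, y, 0), y, 1)
    theta = np.insert(np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1)), y, 0.0)
    ET[:, y] = 1.0 + P @ theta                      # E_x[T_y]

bound = pi[:, None] / (2 * pi[None, :])             # pi_x / (2 pi_y)
print(np.all(ET >= bound - 1e-12))                  # True: the lemma's bound holds for every pair (x, y)
```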

Acknowledgements

OA was supported in part by NSERC. MH was supported by a Future Fellowship grant (no. FT160100166) from the Australian Research Council.

References

[1] Bini, D., Hunter, J. J., Latouche, G., Meini, B. and Taylor, P. G. (2018). Why is Kemeny’s constant a constant? J. Appl. Prob. 55, 1025–1036.
[2] Hunter, J. J. (2006). Mixing times with applications to perturbed Markov chains. Linear Algebra Appl. 417, 108–123.
[3] Kemeny, J. G. and Snell, J. L. (1960). Finite Markov Chains. Van Nostrand, Princeton, NJ.