
Bounded truncation error for long-run averages in infinite Markov chains

Published online by Cambridge University Press:  30 March 2016

Hendrik Baumann*
Affiliation:
Clausthal University of Technology
Werner Sandmann*
Affiliation:
University of Derby
* Postal address: Department of Applied Stochastics and Operations Research, Clausthal University of Technology, Erzstr. 1, D-38678 Clausthal-Zellerfeld, Germany. Email address: hendrik.baumann@tu-clausthal.de
** Postal address: School of Computing and Mathematics, University of Derby, Kedleston Road, Derby DE22 1GB, UK.

Abstract


We consider long-run averages of additive functionals on infinite discrete-state Markov chains, either continuous or discrete in time. Special cases include long-run average costs or rewards, stationary moments of the components of ergodic multi-dimensional Markov chains, queueing network performance measures, and many others. By exploiting Foster-Lyapunov-type criteria involving drift conditions for the finiteness of long-run averages, we determine suitable finite subsets of the state space such that the truncation error is bounded. Illustrative examples demonstrate the application of this method.
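To make the setting concrete, the following sketch (not the paper's algorithm) illustrates the basic idea on an M/M/1 queue-length chain: the Lyapunov function g(i) = i has negative drift lam - mu under the generator, which justifies truncating the infinite state space at some finite level N; the truncated chain's stationary distribution then yields a long-run average with small error. The rates, the choice g(i) = i, and the level N = 60 are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the paper):
# an M/M/1 queue-length CTMC with arrival rate lam < service rate mu.
lam, mu = 0.6, 1.0

# Drift of the Lyapunov function g(i) = i under the generator:
# (Qg)(i) = lam*(g(i+1)-g(i)) + mu*(g(i-1)-g(i)) = lam - mu < 0 for i >= 1,
# so probability mass decays geometrically in i and a finite truncation
# level N can be chosen with small tail contribution (N = 60 assumed here).
N = 60

# Truncated generator with a reflecting upper boundary at level N.
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam   # birth (arrival)
    if i > 0:
        Q[i, i - 1] = mu    # death (service completion)
    Q[i, i] = -Q[i].sum()   # diagonal makes row sums zero

# Stationary distribution of the truncated chain: pi Q = 0, sum(pi) = 1,
# solved as an overdetermined least-squares system.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.append(np.zeros(N + 1), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Long-run average of the additive functional f(i) = i (mean queue length),
# computed from the truncated chain; for M/M/1 the exact value is
# rho / (1 - rho) with rho = lam / mu, i.e. 1.5 here.
avg_len = pi @ np.arange(N + 1)
print(avg_len)
```

With lam/mu = 0.6 the geometric tail beyond level 60 is negligible, so the truncated average agrees with the exact M/M/1 mean queue length 1.5 to many digits; the paper's contribution is to make such error statements rigorous and a priori via the drift conditions.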

Type
Research Papers
Copyright
Copyright © 2015 by the Applied Probability Trust 
