
The calculation of limit probabilities for denumerable Markov processes from infinitesimal properties

Published online by Cambridge University Press: 14 July 2016

Richard L. Tweedie*
Affiliation: University of Cambridge
* Now at the Australian National University, Canberra.

Abstract

The problem considered is that of estimating the limit probability distribution (equilibrium distribution) π of a denumerable continuous time Markov process using only the matrix Q of derivatives of transition functions at the origin. We utilise relationships between the limit vector π and invariant measures for the jump-chain of the process (whose transition matrix we write P∗), and apply truncation theorems from Tweedie (1971) to P∗. When Q is regular, we derive algorithms for estimating π from truncations of Q; these extend results in Tweedie (1971), Section 4, from q-bounded processes to arbitrary regular processes. Finally, we show that this method can be extended even to non-regular chains of a certain type.
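
To make the truncation idea concrete, the following Python sketch illustrates the standard jump-chain relationships behind the method: P∗ has off-diagonal entries q_ij/q_i (with q_i = −q_ii), and π_i is proportional to μ_i/q_i, where μ is an invariant measure for P∗. The sketch works from an n × n north-west corner truncation of Q under simplifying assumptions not taken from the paper itself (all q_i > 0, the truncated jump chain is irreducible, and its rows are renormalised to restore lost mass); the function name limit_probs_from_truncation is hypothetical, and this is an illustration of the general approach rather than the paper's precise algorithm.

    import numpy as np

    def limit_probs_from_truncation(Q, n):
        """Sketch: estimate the limit distribution pi from the (n x n)
        north-west corner truncation of a conservative Q-matrix.

        Assumptions (for this illustration only): q_i = -Q[i, i] > 0 for
        every retained state, and the truncated jump chain is irreducible,
        so its stationary vector is unique up to scale."""
        Qn = np.asarray(Q, dtype=float)[:n, :n]
        q = -np.diag(Qn)                       # exit rates q_i of the truncation
        # Jump-chain transition matrix P*: off-diagonal entries q_ij / q_i.
        P = Qn / q[:, None]
        np.fill_diagonal(P, 0.0)
        # Renormalise rows so the truncation is a proper stochastic matrix
        # (mass lost to truncated states is redistributed proportionally).
        P = P / P.sum(axis=1, keepdims=True)
        # Invariant measure mu of the truncated jump chain: mu = mu P, sum(mu) = 1.
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.concatenate([np.zeros(n), [1.0]])
        mu, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Convert back to continuous time: pi_i proportional to mu_i / q_i.
        pi = mu / q
        return pi / pi.sum()

As a check, for a birth-death Q-matrix with birth rate 1 and death rate 2 the truncated estimate recovers the geometric limit distribution (1 − ρ)ρ^k with ρ = 1/2 to within truncation error:

    N, lam, mu_rate = 200, 1.0, 2.0
    Q = np.zeros((N, N))
    for i in range(N):
        if i + 1 < N:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = mu_rate
        Q[i, i] = -Q[i].sum()
    print(limit_probs_from_truncation(Q, 50)[:5])   # approx. 0.5, 0.25, 0.125, ...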

Type
Research Papers
Copyright
Copyright © Applied Probability Trust 1973 

References

Cox, D. R. and Miller, H. D. (1965) The Theory of Stochastic Processes. John Wiley and Sons, New York.
Chung, K. L. (1967) Markov Chains with Stationary Transition Probabilities. 2nd ed. Springer-Verlag, Berlin.
Derman, C. (1954) A solution to a set of fundamental equations in Markov chains. Proc. Amer. Math. Soc. 5, 332–334.
Dobrušin, R. L. (1952) On conditions of regularity of stationary Markov processes with a denumerable number of possible states. (In Russian) Uspehi Mat. Nauk (N.S.) 7, 6(52), 185–191.
Feller, W. (1971) An Introduction to Probability Theory and its Applications. Vol. 2, 2nd ed. Wiley, New York.
Foster, F. G. (1951) Markoff chains with an enumerable number of states and a class of cascade processes. Proc. Camb. Phil. Soc. 47, 77–85.
Kendall, D. G. (1951) On non-dissipative Markoff chains with an enumerable infinity of states. Proc. Camb. Phil. Soc. 47, 633–634.
Kendall, D. G. (1956) Some further pathological examples in the theory of denumerable Markov processes. Quart. J. Math. Oxford (Ser. 2) 7, 39–56.
Kendall, D. G. and Reuter, G. E. H. (1957) The calculation of the ergodic projection for Markov chains and processes with a countable infinity of states. Acta Math. 97, 103–144.
Mauldon, J. G. (1957) On non-dissipative Markov chains. Proc. Camb. Phil. Soc. 53, 825–835.
Miller, R. G. Jr. (1963) Stationary equations in continuous time Markov chains. Trans. Amer. Math. Soc. 109, 35–44.
Reuter, G. E. H. (1957) Denumerable Markov processes and the associated contraction semigroups on l. Acta Math. 97, 1–46.
Seneta, E. (1968) Finite approximations for infinite non-negative matrices II: refinements and applications. Proc. Camb. Phil. Soc. 64, 465–470.
Tweedie, R. L. (1971) Truncation procedures for non-negative matrices. J. Appl. Prob. 8, 311–320.