Quasistationary distributions for continuous time Markov chains when absorption is not certain

Published online by Cambridge University Press:  14 July 2016

P. K. Pollett*
Affiliation:
University of Queensland
*
Postal address: Department of Mathematics, The University of Queensland, Brisbane, Queensland 4072, Australia. Email address: pkp@maths.uq.edu.au.

Abstract

Recently, Elmes et al. (see [2]) proposed a definition of a quasistationary distribution to accommodate absorbing Markov chains for which absorption occurs with probability less than 1. We will show that the probabilistic interpretation pertaining to cases where absorption is certain (see [13]) does not hold in the present context. We prove that the state probabilities at time t, conditional on absorption taking place after t, generally depend on t. Conditions are derived under which there is no initial distribution such that the conditional state probabilities are stationary.
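To fix ideas, here is a brief sketch of the quantities the abstract refers to, written in generic notation of our own; the symbols $X$, $T$, $C$ and $m$ below are illustrative and are not necessarily those used in [2] or [13]. Let $(X(t))_{t \ge 0}$ be a continuous-time Markov chain on a state space $\{0\} \cup C$, where $0$ is an absorbing state and $C$ is a class of transient states, and let
\[
T = \inf\{ t \ge 0 : X(t) = 0 \}
\]
be the absorption time, with $\Pr(T < \infty)$ possibly less than $1$. The state probabilities "at time $t$ conditional on absorption taking place after $t$" are then
\[
p_j(t) = \Pr\bigl( X(t) = j \,\big|\, t < T < \infty \bigr), \qquad j \in C,
\]
that is, conditional on absorption occurring, but only after time $t$. When absorption is certain, the conditioning event reduces to $\{T > t\}$, the setting of [13], and a quasistationary distribution is an initial law $m = (m_j,\, j \in C)$ for which $p_j(t) = m_j$ for all $j \in C$ and all $t \ge 0$. The result announced above is that, when absorption is not certain, $p_j(t)$ generally varies with $t$, and conditions are derived under which no initial distribution renders it constant.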

Type
Short Communications
Copyright
Copyright © Applied Probability Trust 1999 

References

Anderson, W. (1991). Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer, New York.
Elmes, S., Pollett, P., and Walker, D. (1998). Further results on the relationship between μ-invariant measures and quasistationary distributions for continuous-time Markov chains. To appear in Math. Computer Modelling.
Flaspohler, D. (1974). Quasi-stationary distributions for absorbing continuous-time denumerable Markov chains. Ann. Inst. Statist. Math. 26, 351–356.
Kijima, M., Nair, M., Pollett, P., and van Doorn, E. (1997). Limiting conditional distributions for birth-death processes. Adv. Appl. Prob. 29, 185–204.
Kingman, J. (1963). The exponential decay of Markov transition probabilities. Proc. London Math. Soc. 13, 337–358.
Nair, M., and Pollett, P. (1993). On the relationship between μ-invariant measures and quasistationary distributions for continuous-time Markov chains. Adv. Appl. Prob. 25, 82–102.
Pakes, A. (1995). Quasi-stationary laws for Markov processes: examples of an always proximate absorbing state. Adv. Appl. Prob. 27, 120–145.
Pakes, A., and Pollett, P. (1989). The supercritical birth, death and catastrophe process: limit theorems on the set of extinction. Stochast. Proc. Appl. 32, 161–170.
Pollett, P. (1987). On the long-term behaviour of a population that is subject to large-scale mortality or emigration. In Proc. 8th National Conference of the Australian Society for Operations Research, ed. Kumar, S., Australian Society for Operations Research, Melbourne, pp. 196–207.
Pollett, P. (1988). Reversibility, invariance and μ-invariance. Adv. Appl. Prob. 20, 600–621.
Schrijner, P. (1995). Quasi-stationarity of discrete-time Markov chains. Doctoral thesis, Faculty of Applied Mathematics, University of Twente, Netherlands.
Seneta, E. (1966). Quasi-stationary behaviour in the random walk with continuous time. Austral. J. Statist. 8, 92–98.
Van Doorn, E. (1991). Quasi-stationary distributions and convergence to quasi-stationarity of birth-death processes. Adv. Appl. Prob. 23, 683–700.
Vere-Jones, D. (1967). Ergodic properties of non-negative matrices I. Pacific J. Math. 22, 361–386.