
Convergence of quasi-stationary to stationary distributions for stochastically monotone Markov processes

Published online by Cambridge University Press:  14 July 2016

Moshe Pollak*
Affiliation:
Hebrew University
David Siegmund*
Affiliation:
Stanford University
*Postal address: Department of Statistics, The Hebrew University, Jerusalem, Israel.
**Postal address: Department of Statistics, Stanford University, Sequoia Hall, Stanford, CA 94305, USA.

Abstract

It is shown that if a stochastically monotone Markov process on [0, ∞) with stationary distribution H has its state space truncated by making all states in [B, ∞) absorbing, then the quasi-stationary distribution of the new process converges to H as B → ∞.
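The theorem can be illustrated numerically for a simple stochastically monotone chain. The sketch below (an illustrative example, not taken from the paper) uses a reflected random walk on the nonnegative integers with upward probability p < 1/2, whose stationary law H is geometric. Truncating at B makes states in [B, ∞) absorbing; the quasi-stationary distribution is then the normalized left Perron eigenvector of the substochastic matrix on the surviving states, and its total variation distance to H shrinks as B grows.

```python
import numpy as np

def truncated_matrix(B, p):
    # Reflected random walk on {0, 1, 2, ...}: up w.p. p, down w.p. 1 - p.
    # Restricted to the transient states {0, ..., B-1}; the step from B-1
    # up to the absorbing set [B, infinity) loses probability mass.
    q = 1 - p
    P = np.zeros((B, B))
    P[0, 0] = q
    P[0, 1] = p
    for i in range(1, B):
        P[i, i - 1] = q
        if i + 1 < B:
            P[i, i + 1] = p
    return P

def quasi_stationary(B, p):
    # Quasi-stationary law: normalized left eigenvector for the largest
    # eigenvalue of the substochastic matrix (Perron-Frobenius).
    vals, vecs = np.linalg.eig(truncated_matrix(B, p).T)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

def stationary(B, p):
    # Geometric stationary distribution H(i) = (1 - r) * r**i with r = p/(1-p),
    # evaluated on {0, ..., B-1} for comparison (the tail beyond B-1 is tiny).
    r = p / (1 - p)
    return (1 - r) * r ** np.arange(B)

p = 0.3
for B in (5, 10, 20, 40):
    tv = 0.5 * np.abs(quasi_stationary(B, p) - stationary(B, p)).sum()
    print(f"B = {B:2d}   TV distance to H = {tv:.2e}")
```

Running this shows the total variation distance decreasing toward 0 with B, in line with the convergence asserted above; for this geometric example the agreement is already close at moderate B.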

Type
Short Communications
Copyright
Copyright © Applied Probability Trust 1986 


Footnotes

Research supported in part by the Office of Naval Research and the U.S.–Israel Binational Science Foundation.

References

Pollak, M. and Siegmund, D. (1985) A diffusion process and its application to detecting a change in the drift of Brownian motion. Biometrika 72, 267–280.
Seneta, E. (1980) Non-negative Matrices and Markov Chains, 2nd edn. Springer-Verlag, New York.
Seneta, E. and Vere-Jones, D. (1966) On quasi-stationary distributions in discrete-time Markov chains with a denumerable infinity of states. J. Appl. Prob. 3, 403–434.
Siegmund, D. (1976) The equivalence of absorbing and reflecting barrier problems for stochastically monotone Markov processes. Ann. Prob. 4, 914–924.