
Zero-sum games for continuous-time Markov chains with unbounded transition and average payoff rates

Published online by Cambridge University Press:  14 July 2016

Xianping Guo*
Affiliation: Zhongshan University

Onésimo Hernández-Lerma**
Affiliation: Centro de Investigación y de Estudios Avanzados del IPN, Mexico

* Postal address: The School of Mathematics and Computational Science, Zhongshan University, Guangzhou 510275, People's Republic of China.
** Postal address: Departamento de Matemáticas, CINVESTAV-IPN, Apartado Postal 14-740, México D.F. 07000, Mexico. Email address: ohernand@math.cinvestav.mx

Abstract

This paper is a first study of two-person zero-sum games for denumerable continuous-time Markov chains determined by given transition rates, with an average payoff criterion. The transition rates are allowed to be unbounded, and the payoff rates may have neither upper nor lower bounds. In the spirit of the ‘drift and monotonicity’ conditions for continuous-time Markov processes, we give conditions on the controlled system's primitive data under which the existence of the value of the game and a pair of strong optimal stationary strategies is ensured by using the Shapley equations. Also, we present a ‘martingale characterization’ of a pair of strong optimal stationary strategies. Our results are illustrated with a birth-and-death game.
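The abstract mentions that the results are illustrated with a birth-and-death game whose transition rates may grow without bound over the state space. As a purely illustrative sketch (not the paper's actual example), the following simulates a birth-and-death chain with state-linear, hence unbounded, rates and estimates a long-run average payoff along one sample path; the rates λ·i + θ, μ·i and the payoff rate r(x) = x are hypothetical choices made here for the sake of a runnable example.

```python
import random

def simulate_birth_death(horizon, lam=1.0, mu=2.0, theta=0.5, x0=1, seed=0):
    """Estimate the long-run average payoff of a birth-and-death chain.

    Hypothetical rates: birth rate lam*x + theta, death rate mu*x.
    Both grow linearly in the state x, so they are unbounded over the
    (denumerable) state space {0, 1, 2, ...}, as the paper's conditions permit.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    payoff = 0.0                      # integral of the payoff rate r(x) = x
    while t < horizon:
        up, down = lam * x + theta, mu * x
        total = up + down             # total exit rate from state x (> 0 here)
        dt = min(rng.expovariate(total), horizon - t)
        payoff += x * dt              # accumulate payoff at rate r(x) = x
        t += dt
        if t >= horizon:
            break
        # jump up with probability up/total, otherwise down
        x += 1 if rng.random() < up / total else -1
    return payoff / horizon           # time-average payoff estimate
```

With μ > λ the chain drifts back toward small states, so the time average settles to a finite value; in the controlled game setting of the paper, the players' stationary strategies would determine rates of this kind at each state.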

Type: Research Papers
Copyright: © Applied Probability Trust 2003

