Average optimality for Markov decision processes in Borel spaces: a new condition and approach

  • Xianping Guo (a1) and Quanxin Zhu (a2)

Abstract

In this paper we study discrete-time Markov decision processes with Borel state and action spaces. The criterion is to minimize average expected costs, and the costs may have neither upper nor lower bounds. We first provide two average optimality inequalities of opposing directions and give conditions for the existence of solutions to them. Then, using the two inequalities, we ensure the existence of an average optimal (deterministic) stationary policy under additional continuity-compactness assumptions. Our conditions are slightly weaker than those in the previous literature. Also, some new sufficient conditions for the existence of an average optimal stationary policy are imposed on the primitive data of the model. Moreover, our approach is slightly different from the well-known ‘optimality inequality approach’ widely used in Markov decision processes. Finally, we illustrate our results in two examples.
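For orientation, the objects the abstract refers to can be written in the standard textbook form below. This is the generic formulation of the average-cost criterion and the average optimality inequalities (with cost function c, transition kernel Q, state space X, admissible action sets A(x), and constant ρ); it is a sketch of the usual setup, not necessarily the exact conditions imposed in the paper.

```latex
% Long-run expected average cost of a policy \pi from initial state x,
% and the optimal average cost:
\[
  J(x,\pi) \;=\; \limsup_{n\to\infty} \frac{1}{n}\,
  \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{n-1} c(x_t,a_t)\right],
  \qquad
  J^{*}(x) \;=\; \inf_{\pi} J(x,\pi).
\]
% One average optimality inequality: a pair (\rho, h) with
\[
  \rho + h(x) \;\ge\; \inf_{a\in A(x)}
  \left\{ c(x,a) + \int_X h(y)\, Q(\mathrm{d}y \mid x,a) \right\},
  \qquad x \in X,
\]
% yields, for a selector f attaining the infimum and under suitable
% growth conditions on h, the bound \rho \ge J(x,f) \ge J^{*}(x).
% The opposing inequality (with \le in place of \ge) yields
% \rho \le J^{*}(x); together the two pin down \rho = J^{*}(x) and
% identify f as an average optimal (deterministic) stationary policy.
```

The "two inequalities of opposing directions" mentioned in the abstract play exactly these complementary roles: one bounds the optimal average cost from above, the other from below.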



Corresponding author

∗ Postal address: School of Mathematics and Computational Science, Zhongshan University, Guangzhou, 510275, PR China. Email address: mcsgxp@mail.sysu.edu.cn
∗∗ Postal address: Department of Mathematics, South China Normal University, Guangzhou, 510631, PR China. Email address: zqx1975@sina.com.cn

Footnotes


Partially supported by the NSFC, the NCET, and the RFDP.


