
Finite-horizon optimality for continuous-time Markov decision processes with unbounded transition rates

  • Xianping Guo, Xiangxiang Huang and Yonghui Huang

Abstract

In this paper we focus on finite-horizon optimality for denumerable continuous-time Markov decision processes in which the transition and reward/cost rates are allowed to be unbounded, and optimality is taken over the class of all randomized history-dependent policies. Under mild and reasonable conditions, we first establish the existence of a solution to the finite-horizon optimality equation via an approximation technique that passes from bounded transition rates to unbounded ones. We then prove the existence of ε (≥ 0)-optimal Markov policies and, by establishing an analog of the Itô–Dynkin formula, verify that the value function is the unique solution to the optimality equation. Finally, we provide an example in which both the transition rates and the value function are unbounded, thereby obtaining solutions to some of the problems left open by Yushkevich (1978).
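For orientation, the finite-horizon optimality equation referred to in the abstract is typically a backward dynamic-programming (Hamilton–Jacobi–Bellman) equation for continuous-time Markov decision processes. The sketch below uses generic notation and is not taken from the paper itself: the reward rate r(i, a), transition rates q(j | i, a), admissible action sets A(i), state space S, horizon T, and terminal reward g are illustrative symbols, and the paper's own conditions on unbounded rates are needed to make the equation rigorous:

```latex
\[
\frac{\partial u}{\partial t}(t,i)
  + \sup_{a \in A(i)} \Bigl\{ r(i,a) + \sum_{j \in S} q(j \mid i,a)\, u(t,j) \Bigr\} = 0,
\qquad (t,i) \in [0,T] \times S,
\]
with the terminal condition
\[
u(T,i) = g(i), \qquad i \in S.
\]
```

In the bounded-to-unbounded approximation scheme described in the abstract, one would expect the rates q(· | i, a) to be replaced by bounded truncations, the corresponding truncated equations solved, and a limit passed under suitable growth conditions on q, r, and u.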


Corresponding author

∗ Postal address: School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, 510275, P. R. China.
∗∗ Email address: mcsgxp@mail.sysu.edu.cn
∗∗∗ Email address: hxiangx3@163.com
∗∗∗∗ Email address: hyongh5@mail.sysu.edu.cn

References

[1] Bäuerle, N. and Rieder, U. (2011). Markov Decision Processes with Applications to Finance. Springer, Heidelberg.
[2] Bertsekas, D. P. and Shreve, S. E. (1978). Stochastic Optimal Control. The Discrete Time Case. Academic Press, New York.
[3] Feinberg, E. A. (2004). Continuous time discounted jump Markov decision processes: a discrete-event approach. Math. Operat. Res. 29, 492–524.
[4] Feinberg, E. A. (2012). Reduction of discounted continuous-time MDPs with unbounded jump and reward rates to discrete-time total-reward MDPs. In Optimization, Control, and Applications of Stochastic Systems. Birkhäuser, New York, pp. 77–97.
[5] Feinberg, E. A., Mandava, M. and Shiryaev, A. N. (2014). On solutions of Kolmogorov's equations for nonhomogeneous jump Markov processes. J. Math. Anal. Appl. 411, 261–270.
[6] Ghosh, M. K. and Saha, S. (2012). Continuous-time controlled jump Markov processes on the finite horizon. In Optimization, Control, and Applications of Stochastic Systems. Birkhäuser, New York, pp. 99–109.
[7] Guo, X. (2007). Continuous-time Markov decision processes with discounted rewards: the case of Polish spaces. Math. Operat. Res. 32, 73–87.
[8] Guo, X. and Hernández-Lerma, O. (2009). Continuous-Time Markov Decision Processes. Springer, Berlin.
[9] Guo, X. and Piunovskiy, A. (2011). Discounted continuous-time Markov decision processes with constraints: unbounded transition and loss rates. Math. Operat. Res. 36, 105–132.
[10] Guo, X. and Ye, L. (2010). New discount and average optimality conditions for continuous-time Markov decision processes. Adv. Appl. Prob. 42, 953–985.
[11] Guo, X., Hernández-Lerma, O. and Prieto-Rumeau, T. (2006). A survey of recent results on continuous-time Markov decision processes. TOP 14, 177–261.
[12] Guo, X., Huang, Y. and Song, X. (2012). Linear programming and constrained average optimality for general continuous-time Markov decision processes in history-dependent policies. SIAM J. Control Optimization 50, 23–47.
[13] Hernández-Lerma, O. and Lasserre, J. B. (1996). Discrete-Time Markov Control Processes. Basic Optimality Criteria. Springer, New York.
[14] Hernández-Lerma, O. and Lasserre, J. B. (1999). Further Topics on Discrete-Time Markov Control Processes. Springer, New York.
[15] Jacod, J. (1975). Multivariate point processes: predictable projection, Radon–Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitsth. 31, 235–253.
[16] Kakumanu, P. (1971). Continuously discounted Markov decision model with countable state and action space. Ann. Math. Statist. 42, 919–926.
[17] Kakumanu, P. (1975). Continuous time Markovian decision processes average return criterion. J. Math. Anal. Appl. 52, 173–188.
[18] Kitaev, M. Y. and Rykov, V. V. (1995). Controlled Queueing Systems. CRC Press, Boca Raton, FL.
[19] Miller, B. L. (1968). Finite state continuous time Markov decision processes with a finite planning horizon. SIAM J. Control 6, 266–280.
[20] Piunovskiy, A. and Zhang, Y. (2011). Accuracy of fluid approximations to controlled birth-and-death processes: absorbing case. Math. Meth. Operat. Res. 73, 159–187.
[21] Piunovskiy, A. and Zhang, Y. (2011). Discounted continuous-time Markov decision processes with unbounded rates: the convex analytic approach. SIAM J. Control Optimization 49, 2032–2061.
[22] Pliska, S. R. (1975). Controlled jump processes. Stoch. Process. Appl. 3, 259–282.
[23] Prieto-Rumeau, T. and Hernández-Lerma, O. (2012). Discounted continuous-time controlled Markov chains: convergence of control models. J. Appl. Prob. 49, 1072–1090.
[24] Prieto-Rumeau, T. and Hernández-Lerma, O. (2012). Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, London.
[25] Prieto-Rumeau, T. and Lorenzo, J. M. (2010). Approximating ergodic average reward continuous-time controlled Markov chains. IEEE Trans. Automatic Control 55, 201–207.
[26] Ye, L. and Guo, X. (2012). Continuous-time Markov decision processes with state-dependent discount factors. Acta Appl. Math. 121, 5–27.
[27] Yushkevich, A. A. (1978). Controlled Markov models with countable state space and continuous time. Theory Prob. Appl. 22, 215–235.
