Bäuerle, N. and Rieder, U. (2011). Markov Decision Processes with Applications to Finance. Springer, Heidelberg.
Bertsekas, D. P. and Shreve, S. E. (1978). Stochastic Optimal Control. The Discrete Time Case. Academic Press, New York.
Feinberg, E. A. (2004). Continuous time discounted jump Markov decision processes: a discrete-event approach. Math. Operat. Res. 29, 492–524.
Feinberg, E. A. (2012). Reduction of discounted continuous-time MDPs with unbounded jump and reward rates to discrete-time total-reward MDPs. In Optimization, Control, and Applications of Stochastic Systems. Birkhäuser, New York, pp. 77–97.
Feinberg, E. A., Mandava, M. and Shiryaev, A. N. (2014). On solutions of Kolmogorov's equations for nonhomogeneous jump Markov processes. J. Math. Anal. Appl. 411, 261–270.
Ghosh, M. K. and Saha, S. (2012). Continuous-time controlled jump Markov processes on the finite horizon. In Optimization, Control, and Applications of Stochastic Systems. Birkhäuser, New York, pp. 99–109.
Guo, X. (2007). Continuous-time Markov decision processes with discounted rewards: the case of Polish spaces. Math. Operat. Res. 32, 73–87.
Guo, X. and Hernández-Lerma, O. (2009). Continuous-Time Markov Decision Processes. Springer, Berlin.
Guo, X. and Piunovskiy, A. (2011). Discounted continuous-time Markov decision processes with constraints: unbounded transition and loss rates. Math. Operat. Res. 36, 105–132.
Guo, X. and Ye, L. (2010). New discount and average optimality conditions for continuous-time Markov decision processes. Adv. Appl. Prob. 42, 953–985.
Guo, X., Hernández-Lerma, O. and Prieto-Rumeau, T. (2006). A survey of recent results on continuous-time Markov decision processes. Top 14, 177–261.
Guo, X., Huang, Y. and Song, X. (2012). Linear programming and constrained average optimality for general continuous-time Markov decision processes in history-dependent policies. SIAM J. Control Optimization 50, 23–47.
Hernández-Lerma, O. and Lasserre, J. B. (1996). Discrete-Time Markov Control Processes. Basic Optimality Criteria. Springer, New York.
Hernández-Lerma, O. and Lasserre, J. B. (1999). Further Topics on Discrete-Time Markov Control Processes. Springer, New York.
Jacod, J. (1975). Multivariate point processes: predictable projection, Radon–Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitsth. 31, 235–253.
Kakumanu, P. (1971). Continuously discounted Markov decision model with countable state and action space. Ann. Math. Statist. 42, 919–926.
Kakumanu, P. (1975). Continuous time Markovian decision processes average return criterion. J. Math. Anal. Appl. 52, 173–188.
Kitaev, M. Y. and Rykov, V. V. (1995). Controlled Queueing Systems. CRC Press, Boca Raton, FL.
Miller, B. L. (1968). Finite state continuous time Markov decision processes with a finite planning horizon. SIAM J. Control 6, 266–280.
Piunovskiy, A. and Zhang, Y. (2011). Accuracy of fluid approximations to controlled birth-and-death processes: absorbing case. Math. Meth. Operat. Res. 73, 159–187.
Piunovskiy, A. and Zhang, Y. (2011). Discounted continuous-time Markov decision processes with unbounded rates: the convex analytic approach. SIAM J. Control Optimization 49, 2032–2061.
Pliska, S. R. (1975). Controlled jump processes. Stoch. Process. Appl. 3, 259–282.
Prieto-Rumeau, T. and Hernández-Lerma, O. (2012). Discounted continuous-time controlled Markov chains: convergence of control models. J. Appl. Prob. 49, 1072–1090.
Prieto-Rumeau, T. and Hernández-Lerma, O. (2012). Selected Topics on Continuous-Time Controlled Markov Chains and Markov Games. Imperial College Press, London.
Prieto-Rumeau, T. and Lorenzo, J. M. (2010). Approximating ergodic average reward continuous-time controlled Markov chains. IEEE Trans. Automatic Control 55, 201–207.
Ye, L. and Guo, X. (2012). Continuous-time Markov decision processes with state-dependent discount factors. Acta Appl. Math. 121, 5–27.
Yushkevich, A. A. (1978). Controlled Markov models with countable state space and continuous time. Theory Prob. Appl. 22, 215–235.