1. Apostol, T. (1974). Mathematical analysis, 2nd ed. Reading, MA: Addison-Wesley.
2. Bertsekas, D. (1987). Dynamic programming: Deterministic and stochastic models. Englewood Cliffs, NJ: Prentice-Hall.
3. Borkar, V. (1984). On minimum cost per unit time control of Markov chains. SIAM Journal on Control and Optimization 22: 965–978.
4. Borkar, V. (1989). Control of Markov chains with long-run average cost criterion: The dynamic programming equations. SIAM Journal on Control and Optimization 27: 642–657.
5. Cavazos-Cadena, R. (1989). Weak conditions for the existence of optimal stationary policies in average Markov decision chains with unbounded cost. Kybernetika 25: 145–156.
6. Cavazos-Cadena, R. (1991). A counterexample on the optimality equation in Markov decision chains with the average cost criterion. Systems and Control Letters 16: 387–392.
7. Cavazos-Cadena, R. (1991). Recent results on conditions for the existence of average optimal stationary policies. Annals of Operations Research 28: 3–28.
8. Cavazos-Cadena, R. (1991). Solution to the optimality equation in a class of Markov decision chains with the average cost criterion. Kybernetika 27: 23–37.
9. Cavazos-Cadena, R. & Sennott, L. (1992). Comparing recent assumptions for the existence of average optimal stationary policies. Operations Research Letters 11: 33–37.
10. Chung, K.L. (1967). Markov chains with stationary transition probabilities, 2nd ed. New York: Springer-Verlag.
11. Derman, C. & Veinott, A. Jr. (1967). A solution to a countable system of equations arising in Markovian decision processes. Annals of Mathematical Statistics 38: 582–584.
12. Pakes, A. (1969). Some conditions for ergodicity and recurrence of Markov chains. Operations Research 17: 1058–1061.
13. Ritt, R. & Sennott, L. (to appear). Optimal stationary policies in general state space Markov decision chains with finite action sets. Mathematics of Operations Research.
14. Ross, S. (1983). Introduction to stochastic dynamic programming. New York: Academic Press.
15. Schal, M. (to appear). Average optimality in dynamic programming with general state space. Mathematics of Operations Research.
16. Sennott, L. (1989). Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs. Operations Research 37: 626–633.
17. Sennott, L. (1989). Average cost semi-Markov decision processes and the control of queueing systems. Probability in the Engineering and Informational Sciences 3: 247–272.
18. Sennott, L., Humblet, P., & Tweedie, R. (1983). Mean drifts and the non-ergodicity of Markov chains. Operations Research 31: 783–789.
19. Shwartz, A. & Makowski, A. (submitted). On the Poisson equation for Markov chains: Existence of solutions and parameter dependence by probabilistic methods.
20. Stidham, S. Jr. & Weber, R. (1989). Monotonic and insensitive optimal policies for control of queues with undiscounted costs. Operations Research 37: 611–625.