
Reinforcement learning for long-run average cost. (English) Zbl 1102.90374

Summary: A large class of sequential decision-making problems under uncertainty can be modeled as Markov and semi-Markov decision problems (SMDPs) when their underlying probability structure is a Markov chain. They may be solved by classical dynamic programming (DP) methods. However, DP methods suffer from the curse of dimensionality and break down rapidly in the face of large state spaces. In addition, DP methods require exact computation of the so-called transition probabilities, which are often hard to obtain; they are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method called reinforcement learning (RL) has emerged in the literature. It can, to a great extent, relieve stochastic DP of its curses by generating ‘near-optimal’ solutions to problems with large state spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size-based transformation on two time scales. Its convergence analysis is based on a recent result on the asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive-maintenance case study of reasonable size, for which results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of RL in the context of long-run average cost SMDPs.
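The two-time-scale idea described above can be illustrated with a minimal sketch: Q-values are updated with a fast-decaying step size, while the long-run average-reward estimate is updated with a slower-decaying one. The sketch below is not the paper's algorithm; it is a generic relative Q-learning scheme on a made-up two-state MDP (all states, actions, rewards, and step-size schedules here are illustrative assumptions).

```python
import random

# Hypothetical two-state MDP, for illustration only.
# Action 0 = "stay", action 1 = "switch".
# Staying in state 0 pays reward 1; everything else pays 0,
# so the optimal long-run average reward is 1 (stay in state 0).
N_STATES, N_ACTIONS = 2, 2

def step(s, a):
    s_next = s if a == 0 else 1 - s
    r = 1.0 if (s == 0 and a == 0) else 0.0
    return s_next, r

def two_timescale_q_learning(n_steps=20000, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    rho = 0.0  # average-reward estimate, updated on the slow time scale
    s = 0
    for k in range(1, n_steps + 1):
        alpha = 100.0 / (100.0 + k)   # fast step size for the Q-values
        beta = 10.0 / (100.0 + k)     # slower step size for rho
        # epsilon-greedy exploration with a decaying exploration rate
        if rng.random() < max(0.01, 1.0 / k ** 0.5):
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s_next, r = step(s, a)
        # relative Q-learning update (fast time scale):
        # reward is measured relative to the current average-reward estimate
        target = r - rho + max(Q[s_next])
        Q[s][a] += alpha * (target - Q[s][a])
        # average-reward update (slow time scale), on greedy steps only
        if a == max(range(N_ACTIONS), key=lambda x: Q[s][x]):
            rho += beta * (r - rho)
        s = s_next
    return Q, rho
```

Because beta decays faster relative to alpha's scale, the rho iterate sees the Q-values as quasi-static, which is the intuition behind two-time-scale convergence arguments. On this toy problem the greedy policy should settle on "stay in state 0" with rho near 1.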

MSC:

90C40 Markov and semi-Markov decision processes
Full Text: DOI

References:

[1] J. Abounadi, D. Bertsekas, V. Borkar, ODE analysis for Q-learning algorithms, LIDS Report, MIT, Cambridge, MA, 1996 · Zbl 1063.62567
[2] Bellman, R., The theory of dynamic programming, Bulletin of American Mathematical Society, 60, 503-516 (1954) · Zbl 0057.12503
[3] Bertsekas, D. P., Dynamic Programming and Optimal Control (1995), Athena Scientific: Athena Scientific Belmont, MA · Zbl 0935.90037
[4] Bertsekas, D. P.; Tsitsiklis, J. N., Neuro-Dynamic Programming (1996), Athena Scientific: Athena Scientific Belmont, MA · Zbl 0924.68163
[5] Borkar, V. S., Stochastic approximation with two-time scales, System and Control Letters, 29, 291-294 (1997) · Zbl 0895.62085
[6] Borkar, V. S., Asynchronous stochastic approximation, SIAM Journal on Control and Optimization, 36, 3, 840-851 (1998) · Zbl 0922.62081
[7] V.S. Borkar, S.P. Meyn, The ODE method for convergence of stochastic approximation and reinforcement learning, Working paper · Zbl 0990.62071
[8] Borkar, V. S.; Soumyanath, K., An analog scheme for fixed point computation, Part I: Theory, IEEE Transactions on Circuits and Systems I. Fundamental Theory and Applications, 44, 351-354 (1997)
[9] Cassandras, C. G.; Han, Y., Optimal inspection policies for a manufacturing station, European Journal of Operational Research, 63, 35-53 (1992) · Zbl 0761.90050
[10] Das, T. K.; Gosavi, A.; Mahadevan, S.; Marchalleck, N., Solving semi-Markov decision problems using average reward reinforcement learning, Management Science, 45, 4, 560-574 (1999) · Zbl 1231.90225
[11] Das, T. K.; Sarkar, S., Optimal preventive maintenance in a production inventory system, IIE Transactions on Quality and Reliability, 31, 537-551 (1999)
[12] M. Elhafsi, S. Bai, Optimal and near-optimal control of a two-part stochastic manufacturing system with dynamic setups, Research Report 95-10, Department of Industrial Engineering, University of Florida, Gainesville, 1997
[13] A. Gosavi, An algorithm for solving semi-Markov decision problems using reinforcement learning: Convergence analysis and numerical results, Unpublished Ph.D. Dissertation, Department of Industrial and Management Systems Engineering, University of South Florida, Tampa, FL, May 1999
[14] A. Gosavi, On the convergence of some reinforcement learning algorithms, Working paper, Department of Engineering, University of Southern Colorado, Pueblo, 2000
[15] Gosavi, A.; Bandla, N.; Das, T. K., Airline seat allocation among multiple fare classes with overbooking, IIE Transactions, 34, 9, 729-742 (2002)
[16] A. Gosavi, T.K. Das, S. Sarkar, A simulation-based learning automata framework for solving semi-Markov decision problems under long-run average reward, IIE Transactions (in press)
[17] V.R. Konda, V.S. Borkar, Actor-critic type learning algorithms for Markov decision processes, Working paper, Indian Institute of Science, Bangalore, India · Zbl 0938.93069
[18] Kushner, H. J.; Clark, D. S., Stochastic Approximation Methods for Constrained and Unconstrained Systems (1978), Springer-Verlag: Springer-Verlag Berlin · Zbl 0381.60004
[19] Lewis, E. E., Introduction to Reliability Engineering (1994), John Wiley and Sons: John Wiley and Sons New York
[20] Ljung, L., Analysis of recursive stochastic algorithms, IEEE Transactions on Automatic Control, 22, 551-575 (1977) · Zbl 0362.93031
[21] Mahadevan, S., Average reward reinforcement learning: Foundations, algorithms, and empirical results, Machine Learning, 22, 1, 159-195 (1996) · Zbl 1099.68692
[22] M.L. Littman, Algorithms for sequential decision-making, Unpublished Ph.D. Thesis, Brown University, Providence, RI, 1996
[23] Narendra, K.; Thatachar, M. A.L., Learning Automata: An Introduction (1989), Prentice Hall: Prentice Hall Englewood Cliffs, NJ
[24] Puterman, M. L., Markov Decision Processes (1994), Wiley Interscience: Wiley Interscience New York · Zbl 0336.93047
[25] Robbins, H.; Monro, S., A stochastic approximation method, Annals of Mathematical Statistics, 22, 400-407 (1951) · Zbl 0054.05901
[26] A. Schwartz, A reinforcement learning method for maximizing undiscounted rewards, in: Proceedings of the Tenth Annual Conference on Machine Learning, 1993, pp. 298-305
[27] Seidmann, A.; Schweitzer, P. J., Part selection policy for a flexible manufacturing cell feeding several production lines, IIE Transactions, 16, 4, 355-362 (1984)
[28] Sennott, L., Stochastic Dynamic Programming and the Control of Queueing Systems (1999), John Wiley and Sons: John Wiley and Sons New York · Zbl 0997.93503
[29] Shioyama, T., Optimal control of a queuing network system with two types of customers, European Journal of Operational Research, 52, 367-372 (1991) · Zbl 0725.60102
[30] Singh, S., Reinforcement learning algorithms for average-payoff Markovian decision processes, (Proceedings of the 12th AAAI (1994), MIT Press: MIT Press Cambridge, MA)
[31] Sutton, R., Reinforcement Learning, Machine Learning Journal, 8, 3, 5 (1992) (special issue)
[32] Sutton, R.; Barto, A. G., Reinforcement Learning: An Introduction (1998), The MIT Press: The MIT Press Cambridge, MA
[33] Tadepalli, P.; Ok, D., Scaling up average reward reinforcement learning by approximating the domain models and the value function, (Proceedings of the Thirteenth International Machine Learning Conference (1996), Morgan Kaufmann: Morgan Kaufmann New York), 471-479
[34] Tsitsiklis, J., Asynchronous stochastic approximation and Q-learning, Machine Learning, 16, 185-202 (1994) · Zbl 0820.68105
[35] C.J. Watkins, Learning from Delayed Rewards, Ph.D. Thesis, King's College, Cambridge, England, May 1989
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases that data have been complemented/enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or a perfect matching.