Reference : Iteratively extending time horizon reinforcement learning
Scientific congresses and symposiums : Paper published in a book
Engineering, computing & technology : Computer science
http://hdl.handle.net/2268/9361
Iteratively extending time horizon reinforcement learning
English
Ernst, Damien [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
Geurts, Pierre [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
Wehenkel, Louis [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
2003
Machine Learning: ECML 2003, 14th European Conference on Machine Learning
Springer-Verlag Berlin
Lecture Notes in Artificial Intelligence, Volume 2837
96-107
Peer reviewed : Yes
Audience : International
978-3-540-20121-2
Berlin
14th European Conference on Machine Learning (ECML 2003)
[en] reinforcement learning ; regression trees ; Q-function
[en] Reinforcement learning aims to determine an (infinite time horizon) optimal control policy from interaction with a system. It can be solved by approximating the so-called Q-function from a sample of four-tuples (x(t), u(t), r(t), x(t+1)), where x(t) denotes the system state at time t, u(t) the control action taken, r(t) the instantaneous reward obtained and x(t+1) the successor state of the system, and by determining the optimal control from the Q-function. Classical reinforcement learning algorithms use an ad hoc version of stochastic approximation which iterates over the Q-function approximations on a four-tuple by four-tuple basis. In this paper, we reformulate this problem as a sequence of batch-mode supervised learning problems which in the limit converges to (an approximation of) the Q-function. Each step of this algorithm uses the full sample of four-tuples gathered from interaction with the system and extends by one step the horizon of the optimality criterion. An advantage of this approach is that it allows the use of standard batch-mode supervised learning algorithms, instead of the incremental versions used up to now. In addition to a theoretical justification, the paper provides empirical tests in the context of the "Car on the Hill" control problem based on the use of ensembles of regression trees. The resulting algorithm is in principle able to handle efficiently large-scale reinforcement learning problems.
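
The batch-mode iteration summarized in the abstract lends itself to a compact implementation. The sketch below is illustrative only, not the authors' code: it assumes a finite set of candidate actions and uses scikit-learn's ExtraTreesRegressor as a stand-in for the paper's ensembles of regression trees; the function name fitted_q_iteration, the data layout, and the parameters gamma=0.95 and n_iterations=50 are hypothetical choices.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(samples, actions, gamma=0.95, n_iterations=50):
    # samples: full batch of four-tuples (x_t, u_t, r_t, x_{t+1});
    # actions: finite set of candidate controls (illustrative assumption).
    X = np.array([np.append(x, u) for x, u, r, x1 in samples])  # (x, u) inputs
    r = np.array([r for x, u, r, x1 in samples])                # rewards
    x1 = np.vstack([x1 for x, u, r, x1 in samples])             # successor states

    q = None  # horizon-0 Q-function is identically zero
    for _ in range(n_iterations):
        if q is None:
            targets = r  # horizon 1: Q_1(x, u) = r(x, u)
        else:
            # Extend the horizon by one step:
            # Q_N(x, u) = r + gamma * max over u' of Q_{N-1}(x', u')
            next_q = np.column_stack(
                [q.predict(np.column_stack([x1, np.full(len(x1), a)]))
                 for a in actions])
            targets = r + gamma * next_q.max(axis=1)
        # Each iteration is a standard batch-mode supervised learning
        # problem over the full sample of four-tuples.
        q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q

Given the final model, a near-optimal control in state x is then recovered greedily, by selecting the action that maximizes the predicted Q-value of (x, u) over the candidate set.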
Researchers ; Professionals
10.1007/b13633
http://www.montefiore.ulg.ac.be/~ernst/

File(s) associated to this reference

Fulltext file(s):
fulltext.pdf (Publisher postprint, 541.07 kB, Open access)

