Reference : Tree-based batch mode reinforcement learning
Document type: Scientific journals : Article
Discipline(s): Engineering, computing & technology : Computer science
Permalink: http://hdl.handle.net/2268/9360
Title: Tree-based batch mode reinforcement learning
Language: English
Authors:
Ernst, Damien [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
Geurts, Pierre [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
Wehenkel, Louis [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst. Montefiore) > Systèmes et modélisation]
Publication date: Apr-2005
Journal: Journal of Machine Learning Research
Publisher: Microtome Publishing, Brookline
Volume: 6
Pages: 503-556
Peer reviewed: Yes (verified by ORBi)
Audience: International
ISSN: 1532-4435
Keywords [en]: fitted Q iteration; batch mode reinforcement learning; ensemble of regression trees; supervised learning; fitted value iteration; optimal control
Abstract [en]: Reinforcement learning aims to determine an optimal control policy from interaction with a system or from observations gathered from a system. In batch mode, it can be achieved by approximating the so-called Q-function from a set of four-tuples (x(t), u(t), r(t), x(t+1)), where x(t) denotes the system state at time t, u(t) the control action taken, r(t) the instantaneous reward obtained and x(t+1) the successor state of the system, and by deriving the control policy from this Q-function. The Q-function approximation may be obtained as the limit of a sequence of (batch mode) supervised learning problems. Within this framework we describe the use of several classical tree-based supervised learning methods (CART, Kd-tree, tree bagging) and of two newly proposed ensemble algorithms, namely extremely and totally randomized trees. We study their performance on several examples and find that ensemble methods based on regression trees perform well in extracting relevant information about the optimal control policy from sets of four-tuples. In particular, the totally randomized trees give good results while ensuring the convergence of the sequence, whereas relaxing the convergence constraint allows the extremely randomized trees to reach even better accuracy.
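
For readers who want to experiment, below is a minimal sketch of the fitted Q iteration loop described in the abstract: at iteration N, a regressor is fitted to the targets Q_N(x(t), u(t)) = r(t) + gamma * max_u Q_{N-1}(x(t+1), u) computed from the batch of four-tuples. The sketch uses scikit-learn's ExtraTreesRegressor (a later open-source implementation of extremely randomized trees); the discrete action set, the discount factor, and all function and variable names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of fitted Q iteration (FQI) on a batch of four-tuples.
# Assumptions (not from the paper): discrete actions, 1-D state vectors,
# gamma = 0.98, 50 iterations, 50 trees per ensemble.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(four_tuples, actions, gamma=0.98, n_iterations=50):
    """Approximate the Q-function from (x_t, u_t, r_t, x_{t+1}) four-tuples."""
    # Regression inputs are the state-action pairs (x_t, u_t).
    X = np.array([np.append(x, u) for x, u, _, _ in four_tuples])
    r = np.array([r for _, _, r, _ in four_tuples])
    x_next = np.array([x1 for _, _, _, x1 in four_tuples])

    q_model = None
    for _ in range(n_iterations):
        if q_model is None:
            targets = r  # first iteration: Q_1(x, u) is the expected reward
        else:
            # Evaluate the previous iterate at the successor states for
            # every action, then take the max over actions.
            q_next = np.column_stack([
                q_model.predict(
                    np.column_stack([x_next, np.full(len(x_next), a)]))
                for a in actions
            ])
            targets = r + gamma * q_next.max(axis=1)
        # Each iteration is one batch-mode supervised learning problem.
        q_model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q_model

def greedy_action(q_model, x, actions):
    """Control policy derived from the Q-function: argmax_u Q(x, u)."""
    values = [q_model.predict(np.append(x, a).reshape(1, -1))[0]
              for a in actions]
    return actions[int(np.argmax(values))]
```

Note one design difference: the extra-trees ensemble above rebuilds its tree structures from the target values at every iteration, which (as the abstract observes for extremely randomized trees) trades the convergence guarantee for accuracy; totally randomized trees build their structure independently of the targets, which is what ensures convergence of the sequence.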
Funders: Fonds de la Recherche Scientifique (Communauté française de Belgique) - F.R.S.-FNRS
Target public: Researchers; Professionals

File(s) associated with this reference

Fulltext file(s):

File | Commentary | Version | Size | Access
ernst05a.pdf | The fitted Q iteration (FQI) algorithm was first described in the paper "Iteratively extending time horizon reinforcement learning" (see below), but this paper is the first to name it fitted Q iteration (FQI for short). | Publisher postprint | 1.29 MB | Open access (View/Open)

Additional material(s):

File | Commentary | Size | Access
ernst-fittedQIteration.pdf | Presentation that gives a brief overview of our work on fitted Q iteration. | 327.19 kB | Open access (View/Open)
ernst-icopi2005-slides.pdf | Presentation that discusses several strategies for using supervised learning in the context of batch-mode reinforcement learning. | 751.08 kB | Open access (View/Open)


All documents in ORBi are protected by a user license.