Reference : Towards min max generalization in reinforcement learning
Parts of books : Contribution to collective works
Engineering, computing & technology : Electrical & electronics engineering
http://hdl.handle.net/2268/99584
Towards min max generalization in reinforcement learning
English
Fonteneau, Raphaël [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation]
Murphy, Susan
Wehenkel, Louis [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation]
Ernst, Damien [Université de Liège - ULg > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation]
2011
Agents and Artificial Intelligence: International Conference, ICAART 2010, Valencia, Spain, January 2010, Revised Selected Papers
Filipe, Joaquim
Fred, Ana
Sharp, Bernadette
Springer
61-77
Peer reviewed: Yes
978-3-642-19889-2
[en] reinforcement learning ; generalization
[en] In this paper, we introduce a min max approach for addressing the generalization problem in Reinforcement Learning. The min max approach works by determining a sequence of actions that maximizes the worst return that could possibly be obtained considering any dynamics and reward function compatible with the sample of trajectories and some prior knowledge on the environment. We consider the particular case of deterministic Lipschitz continuous environments over continuous state spaces, finite action spaces, and a finite optimization horizon. We discuss the non-triviality of computing an exact solution of the min max problem even after reformulating it so as to avoid search in function spaces. For addressing this problem, we propose to replace, inside this min max problem, the search for the worst environment given a sequence of actions by an expression that lower bounds the worst return that can be obtained for a given sequence of actions. This lower bound has a tightness that depends on the sample sparsity. From there, we propose an algorithm of polynomial complexity that returns a sequence of actions leading to the maximization of this lower bound. We give a condition on the sample sparsity ensuring that, for a given initial state, the proposed algorithm produces an optimal sequence of actions in open-loop. Our experiments show that this algorithm can lead to more cautious policies than algorithms combining dynamic programming with function approximators.
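For readers of the abstract who want to see the shape of the computation it describes, the following is a minimal sketch (not the authors' code) of the polynomial-complexity step: sampled one-step transitions (x_l, u_l, r_l, y_l) are chained, each chain is charged Lipschitz distance penalties so that it yields a lower bound on the worst-case return of the corresponding open-loop action sequence, and the bound is maximized by a Viterbi-like dynamic program. The function name, the tuple layout, and the exact form of the constants L_Q are assumptions made for illustration; L_f and L_rho stand for the Lipschitz constants of the dynamics and the reward function, and states are taken to be d-dimensional vectors.

```python
import numpy as np

def max_lower_bound_actions(transitions, x0, T, L_f, L_rho):
    """Illustrative sketch: maximize a Lipschitz-based lower bound on the
    worst-case return over sequences of sampled transitions (x, u, r, y).
    Choosing a transition at each step fixes the corresponding action, so the
    maximizing chain of transitions yields an open-loop action sequence."""
    xs = np.array([tr[0] for tr in transitions], dtype=float)  # start states x_l
    us = [tr[1] for tr in transitions]                         # actions u_l
    rs = np.array([tr[2] for tr in transitions], dtype=float)  # rewards r_l
    ys = np.array([tr[3] for tr in transitions], dtype=float)  # successor states y_l
    n = len(transitions)

    # Assumed Lipschitz constants of the h-step value functions:
    # L_Q[h] = L_rho * (1 + L_f + ... + L_f^(h-1))
    L_Q = [L_rho * sum(L_f ** i for i in range(h)) for h in range(T + 1)]

    # Distance penalty for chaining transition l' after transition l
    dist = np.linalg.norm(ys[:, None, :] - xs[None, :, :], axis=-1)  # shape (n, n)

    # V[t, l]: best partial bound from step t onward when transition l is used
    # at step t (the penalty for reaching x_l is charged by the predecessor).
    V = np.zeros((T, n))
    succ = np.zeros((T, n), dtype=int)  # argmax successor, for backtracking
    V[T - 1] = rs
    for t in range(T - 2, -1, -1):
        scores = V[t + 1][None, :] - L_Q[T - (t + 1)] * dist  # shape (n, n)
        succ[t] = np.argmax(scores, axis=1)
        V[t] = rs + scores[np.arange(n), succ[t]]

    # Step 0 pays a penalty for the distance between x0 and the start state x_l.
    first = V[0] - L_Q[T] * np.linalg.norm(xs - np.asarray(x0, dtype=float), axis=1)
    l = int(np.argmax(first))
    bound = float(first[l])

    actions = [us[l]]
    for t in range(T - 1):
        l = int(succ[t, l])
        actions.append(us[l])
    return actions, bound
```

Under these assumptions the dynamic program runs in O(T n^2) time for n sampled transitions, which is consistent with the polynomial complexity stated in the abstract.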
Researchers ; Professionals ; Students

File(s) associated to this reference

Fulltext file(s):
towards-min-max-generalisation-RL.pdf (Publisher postprint, 453.05 kB, open access)

