Paper published in a book (Scientific congresses and symposiums)
Learning to play K-armed bandit problems
Maes, Francis; Wehenkel, Louis; Ernst, Damien
2012, in Proceedings of the 4th International Conference on Agents and Artificial Intelligence (ICAART 2012)
Peer reviewed
 

Files


Full Text
icaart-2012.pdf
Publisher postprint (138.45 kB)
Annexes
Ernst-INRIA-2011-talk.pdf
Publisher postprint (398.54 kB)
This paper, together with the two papers "Optimized look-ahead tree search policies" and "Automatic discovery of ranking formulas for playing with multi-armed bandits", is part of a body of work that focuses on the automatic learning of good strategies for exploration/exploitation problems in reinforcement learning. This file is a presentation of that body of work.

All documents in ORBi are protected by a user license.

Details



Keywords :
Multi-armed bandit problems; reinforcement learning; exploration-exploitation dilemma
Abstract :
[en] We propose a learning approach to pre-compute K-armed bandit playing policies by exploiting prior information describing the class of problems targeted by the player. Our algorithm first samples a set of K-armed bandit problems from the given prior, and then chooses, in a space of candidate policies, the one that gives the best average performance over these problems. The candidate policies use an index for ranking the arms and pick at each play the arm with the highest index; the index for each arm is computed in the form of a linear combination of features describing the history of plays (e.g., number of draws, average reward, variance of rewards and higher-order moments), and an estimation of distribution algorithm is used to determine its optimal parameters in the form of feature weights. We carry out simulations in the case where the prior assumes a fixed number of Bernoulli arms, a fixed horizon, and uniformly distributed parameters of the Bernoulli arms. These simulations show that learned strategies perform very well with respect to several other strategies previously proposed in the literature (UCB1, UCB2, UCB-V, KL-UCB and $\epsilon_n$-GREEDY); they also highlight the robustness of these strategies with respect to wrong prior information.
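
The approach described in the abstract can be sketched compactly. The following Python snippet is a minimal illustration, not the authors' implementation: the feature set, the problem-class settings, and the use of a basic cross-entropy method as the estimation of distribution algorithm are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
K, HORIZON, N_PROBLEMS = 5, 100, 20   # assumed problem-class settings

def features(draws, mean, var, t):
    # assumed history features for one arm: average reward, a
    # UCB-like exploration term, reward variance, inverse draw count
    return np.array([mean, np.sqrt(np.log(t + 1) / draws), var, 1.0 / draws])

def play(weights, mus):
    # play one Bernoulli bandit problem with the linear-index policy;
    # return the cumulative reward collected over the horizon
    k = len(mus)
    draws, sums, sq, total = np.zeros(k), np.zeros(k), np.zeros(k), 0.0
    for t in range(HORIZON):
        if t < k:          # draw each arm once to initialize statistics
            a = t
        else:              # otherwise pick the arm with the highest index
            mean = sums / draws
            var = sq / draws - mean ** 2
            a = int(np.argmax([weights @ features(draws[i], mean[i], var[i], t)
                               for i in range(k)]))
        r = float(rng.random() < mus[a])   # Bernoulli reward
        draws[a] += 1; sums[a] += r; sq[a] += r * r; total += r
    return total

def score(weights):
    # average cumulative reward over the sampled training problems
    return np.mean([play(weights, mus) for mus in problems])

# sample K-armed Bernoulli problems from the uniform prior
problems = [rng.random(K) for _ in range(N_PROBLEMS)]

# basic cross-entropy method (one simple estimation of distribution
# algorithm): fit a Gaussian to the best feature-weight candidates
mu_w, sigma_w = np.zeros(4), np.ones(4)
for generation in range(10):
    pop = rng.normal(mu_w, sigma_w, size=(20, 4))
    scores = np.array([score(w) for w in pop])
    elite = pop[np.argsort(scores)[-5:]]        # keep the 5 best candidates
    mu_w, sigma_w = elite.mean(0), elite.std(0) + 1e-3
print("learned feature weights:", mu_w)

The sketch only mirrors the overall structure of the method: sample problems from the prior, evaluate candidate index policies on them, and optimize the feature weights with an EDA; the paper's actual features, prior, and optimizer are those chosen by the authors.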
Disciplines :
Computer science
Author, co-author :
Maes, Francis ;  Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation
Wehenkel, Louis  ;  Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Systèmes et modélisation
Ernst, Damien  ;  Université de Liège - ULiège > Dép. d'électric., électron. et informat. (Inst.Montefiore) > Smart grids
Language :
English
Title :
Learning to play K-armed bandit problems
Publication date :
February 2012
Event name :
4th International Conference on Agents and Artificial Intelligence (ICAART 2012)
Event place :
Vilamoura, Algarve, Portugal
Event date :
6-8 February 2012
Audience :
International
Main work title :
Proceedings of the 4th International Conference on Agents and Artificial Intelligence (ICAART 2012)
Peer reviewed :
Peer reviewed
Available on ORBi :
since 26 October 2011

Statistics

Number of views :
335 (20 by ULiège)
Number of downloads :
494 (8 by ULiège)
Scopus® citations :
10 (6 without self-citations)
