Reference: Learning to play K-armed bandit problems
Document type: Scientific congresses and symposiums, paper published in a book
Discipline(s): Engineering, computing & technology, Computer science
To cite this reference: http://hdl.handle.net/2268/101066
Title: Learning to play K-armed bandit problems
Language: English
Author, co-author: Maes, Francis (Université de Liège - ULg, Dép. d'électric., électron. et informat. (Inst. Montefiore), Systèmes et modélisation); Wehenkel, Louis (Université de Liège - ULg, Dép. d'électric., électron. et informat. (Inst. Montefiore), Systèmes et modélisation); Ernst, Damien (Université de Liège - ULg, Dép. d'électric., électron. et informat. (Inst. Montefiore), Smart grids)
Publication date: Feb-2012
Main document title: Proceedings of the 4th International Conference on Agents and Artificial Intelligence (ICAART 2012)
Peer reviewed: Yes
On invitation: No
Audience: International
Event name: 4th International Conference on Agents and Artificial Intelligence (ICAART 2012)
Event date: 6-8 February 2012
Event place (city): Vilamoura, Algarve
Event country: Portugal
Keywords: [en] Multi-armed bandit problems; reinforcement learning; exploration-exploitation dilemma

Abstract: [en] We propose a learning approach to pre-compute K-armed bandit playing policies by exploiting prior information describing the class of problems targeted by the player. Our algorithm first samples a set of K-armed bandit problems from the given prior, and then chooses, in a space of candidate policies, the one that gives the best average performance over these problems. The candidate policies use an index for ranking the arms and pick at each play the arm with the highest index; the index for each arm is computed as a linear combination of features describing the history of plays (e.g., number of draws, average reward, variance of rewards and higher-order moments), and an estimation of distribution algorithm is used to determine the optimal parameters in the form of feature weights. We carry out simulations in the case where the prior assumes a fixed number of Bernoulli arms, a fixed horizon, and uniformly distributed parameters of the Bernoulli arms. These simulations show that the learned strategies perform very well compared with several other strategies previously proposed in the literature (UCB1, UCB2, UCB-V, KL-UCB and $\epsilon_n$-GREEDY); they also highlight the robustness of these strategies with respect to wrong prior information.

Permalink: http://hdl.handle.net/2268/101066
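The approach described in the abstract can be sketched in a few lines of code. The sketch below is an illustrative assumption, not the authors' implementation: the concrete feature set (a constant term, a UCB1-style exploration term, the empirical mean, and a standard-error term) and the simple Gaussian cross-entropy-style estimation of distribution algorithm are choices made here for illustration; the paper's feature set and EDA may differ.

```python
import math
import random

def index_features(count, total_reward, total_sq_reward, t):
    """History features for one arm: constant, exploration term,
    empirical mean, and standard error of the reward (illustrative choice)."""
    mean = total_reward / count
    var = max(total_sq_reward / count - mean * mean, 0.0)
    return [1.0, math.sqrt(math.log(t) / count), mean, math.sqrt(var / count)]

def play_policy(weights, arm_probs, horizon, rng):
    """Play one Bernoulli bandit problem with a linear index policy
    (arm index = dot product of weights and features); return total reward."""
    k = len(arm_probs)
    counts, sums, sqs = [0] * k, [0.0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            arm = max(range(k), key=lambda a: sum(
                w * f for w, f in
                zip(weights, index_features(counts[a], sums[a], sqs[a], t))))
        r = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        sqs[arm] += r * r
        total += r
    return total

def learn_weights(k=2, horizon=100, n_problems=20, pop=20, elite=5,
                  iters=5, seed=0):
    """Cross-entropy-style EDA over feature weights: sample candidate weight
    vectors from an independent Gaussian, evaluate their average reward on
    bandit problems drawn from a uniform Bernoulli prior, and refit the
    Gaussian on the elite candidates."""
    rng = random.Random(seed)
    problems = [[rng.random() for _ in range(k)] for _ in range(n_problems)]
    mu, sigma = [0.0] * 4, [1.0] * 4
    for _ in range(iters):
        cands = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
                 for _ in range(pop)]
        scored = sorted(
            (sum(play_policy(w, p, horizon, rng) for p in problems)
             / n_problems, w)
            for w in cands)
        best = [w for _, w in scored[-elite:]]
        mu = [sum(w[i] for w in best) / elite for i in range(4)]
        sigma = [max(0.1, (sum((w[i] - mu[i]) ** 2 for w in best)
                           / elite) ** 0.5) for i in range(4)]
    return mu
```

With this feature set, the weight vector [0, sqrt(2), 1, 0] recovers the standard UCB1 index (empirical mean plus sqrt(2 ln t / n)), so the learned policies contain UCB1 as a special case of the search space.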

File(s) associated with this reference

Fulltext file(s):

icaart-2012.pdf (Publisher postprint, 135.2 kB, Open access)