Proceedings of the 2008 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE-08)
1-6 June 2008
Keywords: reinforcement learning; value iteration; fuzzy approximators
Reinforcement learning (RL) is a widely used paradigm for learning control. Exact RL solutions can generally be computed only when the process states and control actions take values in a small discrete set, so approximate algorithms are necessary in practice. In this paper, we propose an approximate, model-based Q-iteration algorithm that relies on a fuzzy partition of the state space and a discretization of the action space. Under continuity assumptions on the dynamics and on the reward function, we show that the resulting algorithm is consistent, i.e., that the optimal solution is obtained asymptotically as the approximation accuracy increases. An experimental study indicates that a continuous reward function is also important for a predictable improvement in performance as the approximation accuracy increases.
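As a rough illustration of the approach described above, the following sketch runs model-based Q-iteration over a triangular fuzzy partition of a one-dimensional state space with a small discrete action set. The system (`f`), reward (`rho`), partition centers, and all parameter values are illustrative assumptions, not the paper's actual benchmark or notation.

```python
def memberships(x, centers):
    """Triangular membership degrees of state x over the fuzzy partition.

    The centers are assumed sorted; the degrees are nonnegative and sum to 1,
    so evaluating the approximator is a convex interpolation of parameters.
    """
    mu = [0.0] * len(centers)
    x = max(centers[0], min(centers[-1], x))  # clip to the covered interval
    for i in range(len(centers) - 1):
        lo, hi = centers[i], centers[i + 1]
        if lo <= x <= hi:
            w = (x - lo) / (hi - lo)
            mu[i] = 1.0 - w
            mu[i + 1] = w
            break
    return mu


def fuzzy_q_iteration(f, rho, centers, actions, gamma=0.95, tol=1e-8):
    """Model-based Q-iteration on the parameter matrix theta[i][j].

    theta[i][j] approximates Q at the i-th fuzzy-set center and j-th
    discrete action; each sweep applies the Bellman optimality backup
    through the known model f and reward rho.
    """
    theta = [[0.0] * len(actions) for _ in centers]
    while True:
        delta = 0.0
        for i, xc in enumerate(centers):
            for j, u in enumerate(actions):
                x_next = f(xc, u)
                mu = memberships(x_next, centers)
                # max over next actions of the fuzzily interpolated Q-value
                q_next = max(
                    sum(mu[k] * theta[k][jp] for k in range(len(centers)))
                    for jp in range(len(actions))
                )
                new = rho(xc, u) + gamma * q_next
                delta = max(delta, abs(new - theta[i][j]))
                theta[i][j] = new
        if delta < tol:
            return theta


# Hypothetical example: drive the state toward 0 on [-1, 1].
centers = [-1.0, -0.5, 0.0, 0.5, 1.0]
actions = [-1.0, 0.0, 1.0]
f = lambda x, u: max(-1.0, min(1.0, x + 0.2 * u))  # saturated integrator
rho = lambda x, u: -x * x                           # quadratic state cost
theta = fuzzy_q_iteration(f, rho, centers, actions)
```

The greedy policy is read off per fuzzy-set center as `argmax_j theta[i][j]`; with the quadratic cost above it pushes the state toward 0 from both ends of the interval. Refining the partition and the action grid is what the consistency result speaks to: under the paper's continuity assumptions, the fixed point approaches the optimal Q-function.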
Fonds de la Recherche Scientifique (Communauté française de Belgique) - F.R.S.-FNRS