Parallel and distributed algorithms have become a necessity for modern machine learning tasks. In this work, we focus on parallel asynchronous gradient descent and propose a zealous variant that minimizes processor idle time to achieve a substantial speedup. We then experimentally study this algorithm in the context of training a restricted Boltzmann machine on a large collaborative filtering task.
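To make the setting concrete, the following is a minimal sketch of asynchronous parallel gradient descent in the lock-free (Hogwild-style) spirit, where workers update a shared parameter vector without waiting for one another. This is an illustrative assumption, not the zealous variant proposed in this work; the toy least-squares objective and all names (`async_sgd`, `n_workers`) are hypothetical.

```python
# Illustrative sketch of asynchronous parallel SGD (not the paper's variant):
# several worker threads read and update a shared parameter vector without
# synchronization, so no processor ever waits for the others.
import threading
import numpy as np

def async_sgd(X, y, n_workers=4, epochs=50, lr=0.01):
    """Fit a least-squares model y ~ X @ w with asynchronous workers."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])  # shared parameters, updated without locks

    def worker(rows):
        for _ in range(epochs):
            for i in rows:
                # Read the (possibly stale) shared w, compute a
                # per-example gradient, and write the update back.
                grad = (X[i] @ w - y[i]) * X[i]
                w[:] -= lr * grad  # in-place update on the shared array

    idx = rng.permutation(len(y))
    chunks = np.array_split(idx, n_workers)
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Demo on noise-free synthetic data: all workers pull w toward the same
# optimum, so the races between them do not prevent convergence.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(200, 2))
y = X @ true_w
w_hat = async_sgd(X, y)
```

Because updates may be applied to stale parameters, such schemes trade a small amount of gradient staleness for the elimination of synchronization barriers, which is the source of the speedup studied here.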
Fonds de la Recherche Scientifique (Communauté française de Belgique) - F.R.S.-FNRS