
Adaptation in Nonlinear Learning Models for Nonstationary Tasks

Wolfgang Konen and Patrick Koch

Department of Computer Science, Cologne University of Applied Sciences, 51643 Gummersbach, Germany
wolfgang.konen@fh-koeln.de

Abstract. The adaptation of individual learning rates is important for many learning tasks, particularly in nonstationary learning environments. With the Incremental Delta Bar Delta (IDBD) algorithm, Sutton presented a versatile method for many such tasks. However, this algorithm was formulated only for linear models. A straightforward generalization to nonlinear models is possible, but we show in this work that it raises obstacles, notably concerning the stability of the learning algorithm. We propose a new self-regulation of the model's activation which ensures stability. Our algorithm shows better performance than other approaches on a nonstationary benchmark task. Furthermore, we show how to derive this algorithm from basic loss functions.

Keywords: Machine learning, IDBD, learning rates, adaptation
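For readers unfamiliar with IDBD, the following is a minimal sketch of the linear rule that the paper generalizes, written from Sutton's published update equations; the variable names, hyperparameter values, and the toy drifting-target loop are our own illustration, not the authors' benchmark.

```python
import numpy as np

def idbd_update(w, beta, h, x, y_target, theta=0.01):
    """One step of Sutton's linear IDBD rule with per-weight learning rates.

    w    : weight vector of the linear model
    beta : log learning rates (alpha_i = exp(beta_i))
    h    : memory trace driving the meta-gradient
    theta: meta learning rate
    """
    delta = y_target - w @ x                 # prediction error
    beta += theta * delta * x * h            # meta step on the log rates
    alpha = np.exp(beta)                     # per-weight learning rates
    w += alpha * delta * x                   # delta rule with individual rates
    # decay each trace in proportion to how much its weight just moved,
    # then accumulate the new update
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h

# Toy nonstationary tracking loop (illustration only):
rng = np.random.default_rng(0)
n = 20
w_true = rng.standard_normal(n)
w, beta, h = np.zeros(n), np.full(n, np.log(0.05)), np.zeros(n)
for t in range(10_000):
    if t % 20 == 0:
        w_true[rng.integers(n)] *= -1.0      # drift: flip one target weight
    x = rng.standard_normal(n)
    w, beta, h = idbd_update(w, beta, h, x, w_true @ x)
```

Because each weight keeps its own rate exp(beta_i), weights tied to drifting targets can maintain large step sizes while stable weights settle; the paper's contribution concerns carrying this scheme over to nonlinear models without losing stability.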

LNCS 8672, p. 292 ff.



© Springer International Publishing Switzerland 2014