Crossprop: Learning Representations by Stochastic Meta-Gradient Descent in Neural Networks

Bibliographic Details
Published in: Machine Learning and Knowledge Discovery in Databases, pp. 445-459
Main Authors: Veeriah, Vivek; Zhang, Shangtong; Sutton, Richard S.
Format: Book Chapter
Language: English
Published: Cham: Springer International Publishing
Series: Lecture Notes in Computer Science
Summary: Representations are fundamental to artificial intelligence. The performance of a learning system depends on how its data is represented. Typically, these representations are hand-engineered using domain knowledge. Recently, the trend has been to learn them through stochastic gradient descent in multi-layer neural networks, an approach known as backprop. Learning representations directly from the incoming data stream reduces the human labour involved in designing a learning system and, more importantly, allows a learning system to scale up to difficult tasks. In this paper, we introduce a new incremental learning algorithm, called crossprop, which learns the incoming weights of hidden units using a meta-gradient descent approach previously introduced by Sutton (1992) and Schraudolph (1999) for learning step-sizes. The final update equation introduces an additional memory parameter for each of these weights and generalizes the backprop update equation. Our empirical experiments show that crossprop learns and reuses its feature representation when tackling new and unseen tasks, whereas backprop relearns a new feature representation.
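
The summary describes crossprop only at a high level: each incoming weight of a hidden unit carries an extra memory parameter, updated by meta-gradient descent, and the resulting rule generalizes backprop. Below is a minimal Python sketch of that idea for single-output regression. The concrete recursions for the memory H are an assumption in the spirit of the IDBD-style meta-gradient derivation (Sutton 1992), not the chapter's exact equations, and all names (crossprop_step, alpha_u, alpha_w) are illustrative.

import numpy as np

# Hedged sketch of a crossprop-style learner; the H recursion is an
# assumed IDBD-style form, not the chapter's exact update equations.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
alpha_u, alpha_w = 0.01, 0.01            # step-sizes: incoming / outgoing weights

U = rng.normal(0.0, 0.1, (n_in, n_hid))  # incoming weights u_ij
w = np.zeros(n_hid)                      # outgoing weights, learned by LMS
H = np.zeros((n_in, n_hid))              # memory h_ij: estimate of dw_j/du_ij

def crossprop_step(x, y_target):
    global U, w, H
    phi = np.tanh(x @ U)                 # hidden features
    delta = y_target - phi @ w           # prediction error
    g = np.outer(x, 1.0 - phi**2)        # d phi_j / d u_ij for tanh units

    # Incoming weights: backprop term plus a correction through the memory H.
    U += alpha_u * delta * (g * w + phi * H)
    # Memory: track how the LMS update to w_j depends on u_ij (assumed form).
    H = H * (1.0 - alpha_w * phi**2) + alpha_w * g * (delta - phi * w)
    # Outgoing weights: plain LMS, as in the last layer of backprop.
    w += alpha_w * delta * phi
    return delta

# Usage: online regression against a fixed random target function.
v = rng.normal(size=n_in)
for x in rng.normal(size=(5000, n_in)):
    crossprop_step(x, np.tanh(x @ v))

Here H estimates how each outgoing weight w_j depends on an incoming weight u_ij, so the incoming-weight update adds a correction term (phi * H) to the plain backprop gradient (g * w); with H held at zero the rule reduces exactly to backprop, which matches the summary's claim that crossprop generalizes the backprop update.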
Bibliography: V. Veeriah and S. Zhang contributed equally to this work.
ISBN: 3319712489; 9783319712482
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-319-71249-9_27