Dynamic Learning Rate for Neural Networks: A Fixed-Time Stability Approach
Published in: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 1378-1383
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2018
Summary: Neural Networks (NN) have become important tools that have demonstrated their value in solving complex problems in pattern recognition, natural language processing, and automatic speech recognition, among others. Recently, the number of applications that require running the training process at the front-end in an online manner has increased dramatically. Unfortunately, in state-of-the-art (SoA) methods, the duration of this training process is an unbounded function of the initial conditions. Thus, there is no insight into the number of epochs required, which makes online training a difficult problem. Speeding up the training process therefore plays a key role in machine learning. In this work, an algorithm for a dynamic learning rate is proposed, based on recent results on the fixed-time stability of continuous-time nonlinear systems, which ensure a bound on the convergence time to the equilibrium point that is independent of the initial conditions. We show experimentally that our discrete-time implementation presents promising results: the number of epochs required for training remains bounded, independently of the initial weights. This constitutes an important feature for learning systems with real-time constraints. The efficiency of the proposed method is illustrated under different scenarios, including the public MNIST database, where our algorithm outperforms SoA methods in terms of the number of epochs required for training.
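The abstract does not spell out the update law, but fixed-time stability results for gradient flows of the Polyakov type, such as dw/dt = -(alpha * ||g||^(p-1) + beta * ||g||^(q-1)) * g with 0 < p < 1 < q, suggest what a dynamic learning rate of this kind can look like. The Python sketch below is a minimal illustration under that assumption only; the function names and the parameters alpha, beta, p, q, base_step, and rate_cap are illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

def fixed_time_rate(grad_norm, alpha=1.0, beta=1.0, p=0.5, q=1.5,
                    eps=1e-12, rate_cap=8.0):
    """Dynamic learning-rate factor inspired by fixed-time stable gradient flows.

    Assumed model (not the paper's exact rule): mimic the continuous-time dynamics
        dw/dt = -(alpha * ||g||**(p-1) + beta * ||g||**(q-1)) * g,
    with 0 < p < 1 < q, whose convergence time admits a bound that does not
    depend on the initial condition. The cap keeps the discrete-time step
    from blowing up as ||g|| -> 0, where ||g||**(p-1) diverges.
    """
    g = max(grad_norm, eps)  # guard against division by zero at the optimum
    return min(alpha * g ** (p - 1.0) + beta * g ** (q - 1.0), rate_cap)

def train_step(w, grad_fn, base_step=0.05):
    """One gradient step whose learning rate is rescaled by the gradient norm."""
    g = grad_fn(w)
    eta = base_step * fixed_time_rate(np.linalg.norm(g))
    return w - eta * g

# Toy usage on the quadratic loss L(w) = ||w||^2: in this sketch, the number
# of steps needed to reach the tolerance stays modest even when the starting
# point is moved far from the optimum.
grad = lambda w: 2.0 * w
w = np.array([50.0, -80.0])
for step in range(200):
    w = train_step(w, grad)
    if np.linalg.norm(grad(w)) < 1e-6:
        break
print(f"converged in {step + 1} steps, w = {w}")
```

The rate_cap is a practical safeguard specific to discrete time: the p-exponent term that accelerates terminal convergence in the continuous-time flow would otherwise produce arbitrarily large steps, and overshoot, near the equilibrium.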
DOI: 10.1109/ICPR.2018.8546084