ADMMiRNN: Training RNN with Stable Convergence via an Efficient ADMM Approach

It is hard to train a Recurrent Neural Network (RNN) with stable convergence while avoiding gradient vanishing and exploding, since the weights in the recurrent unit are reused from iteration to iteration. Moreover, RNNs are sensitive to the initialization of weights and biases, which brings difficulty in the t...

Bibliographic Details
Published in: Machine Learning and Knowledge Discovery in Databases, Vol. 12458, pp. 3–18
Main Authors: Tang, Yu; Kan, Zhigang; Sun, Dequan; Qiao, Linbo; Xiao, Jingjing; Lai, Zhiquan; Li, Dongsheng
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science