Dual extended Kalman filtering in recurrent neural networks

Bibliographic Details
Published in: Neural Networks, Vol. 16, No. 2, pp. 223–239
Main Authors: Leung, Chi-Sing; Chan, Lai-Wan
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 01.03.2003

Summary: In the classical deterministic Elman model, the parameter estimates must be very accurate; otherwise, system performance is poor. To improve performance, a Kalman filtering algorithm can be used to guide the operation of a trained recurrent neural network (RNN). In this case, during training, we need to estimate the state of the hidden layer as well as the weights of the RNN. This paper discusses how to use dual extended Kalman filtering (DEKF) for this dual estimation and how to use the proposed DEKF to remove unimportant weights from a trained RNN. In our approach, one Kalman filtering algorithm estimates the state of the hidden layer, and one recursive least squares (RLS) algorithm estimates the weights. After training, the error covariance matrix of the RLS algorithm is used to remove unimportant weights. Simulations showed that our approach is an effective joint learning-and-pruning method for RNNs under online operation.
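The two-filter structure described in the summary can be illustrated on a toy problem. The sketch below is a minimal, hypothetical example assuming a scalar linear recurrence x_k = w·x_{k-1} + 0.5 in place of a full Elman RNN; the noise levels, constant input, and forgetting factor are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy dual estimation: jointly estimate the hidden state x and the
# weight w of the scalar model  x_k = w * x_{k-1} + 0.5 + noise,
# with observations  y_k = x_k + noise.  One Kalman filter tracks
# the state; one RLS recursion tracks the weight, mirroring the
# two-filter structure described in the abstract.

rng = np.random.default_rng(0)
w_true, T = 0.8, 200

# generate synthetic data
x, ys = 0.0, []
for _ in range(T):
    x = w_true * x + 0.5 + rng.normal(scale=0.1)
    ys.append(x + rng.normal(scale=0.05))

# state filter (Kalman) variables: estimate, variance, noise levels
x_hat, P_x = 0.0, 1.0
q_x, r = 0.01, 0.0025

# weight filter (RLS) variables: estimate, covariance, forgetting factor
w_hat, P_w = 0.0, 100.0
lam = 0.99

for y in ys:
    # --- weight update via RLS, using the previous state estimate ---
    phi = x_hat                        # regressor for the weight
    y_pred = w_hat * x_hat + 0.5
    k_w = P_w * phi / (lam + phi * P_w * phi)
    w_hat += k_w * (y - y_pred)
    P_w = (P_w - k_w * phi * P_w) / lam

    # --- state update via Kalman filter, using the current weight ---
    x_pred = w_hat * x_hat + 0.5
    P_pred = w_hat**2 * P_x + q_x
    k_x = P_pred / (P_pred + r)
    x_hat = x_pred + k_x * (y - x_pred)
    P_x = (1.0 - k_x) * P_pred

# w_hat should land near the true weight 0.8
print(round(w_hat, 2))
```

In the multi-weight case the RLS covariance P_w becomes a matrix, and (as the abstract describes) it supplies the saliency information used after training to identify and remove unimportant weights.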
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/S0893-6080(02)00230-7