Online Training for Open Faulty RBF Networks

Bibliographic Details
Published in: Neural Processing Letters, Vol. 42, No. 2, pp. 397–416
Main Authors: Xiao, Yi; Feng, Ruibin; Leung, Chi Sing; Sum, Pui Fai
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.10.2015

Summary: Recently, a batch mode learning algorithm, namely optimal open weight fault regularization (OOWFR), was developed for handling the open fault situation. In terms of the Kullback–Leibler divergence, this batch mode learning algorithm is optimal. However, its main disadvantage is that it requires storing the entire input–output history, so memory consumption becomes a problem when the number of training samples is large. In this paper, we present an online version of the OOWFR algorithm. We consider two learning rate cases, a fixed learning rate and an adaptive learning rate, and present the convergence conditions for both. Simulation results show that the performance of the proposed online mode learning algorithm is better than that of other online mode learning algorithms. Also, the performance of the proposed algorithm is close to that of the batch mode OOWFR algorithm.
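To illustrate the kind of sample-by-sample training the abstract describes, the sketch below shows an online gradient update of RBF output weights with a generic ridge-style penalty standing in for the open-weight-fault regularizer. It is a minimal sketch only: the RBF model, the penalty form, the fault-rate coupling, and the 1/t learning-rate decay are all assumptions for illustration, not the OOWFR objective or update rule from the paper.

```python
# Illustrative sketch only. Assumes a single-output RBF regression model
# y_hat = w^T phi(x) and a hypothetical ridge-style fault-tolerance penalty
# scaled by the open-fault rate; the true OOWFR formulation is in the paper.
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF feature vector phi(x) for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def online_train(samples, centers, width, fault_rate=0.1,
                 lr=0.05, adaptive=False, epochs=1):
    """Sequential (online) training of the RBF output weights.

    Each incoming sample updates the weights immediately, so the entire
    input-output history never has to be stored, which is the stated
    advantage over the batch mode algorithm.
    """
    w = np.zeros(len(centers))
    t = 0
    for _ in range(epochs):
        for x, y in samples:
            t += 1
            phi = rbf_features(x, centers, width)
            err = y - w @ phi
            # Hypothetical penalty term standing in for the open-weight-fault
            # regularizer; its strength grows with the assumed fault rate.
            grad = -err * phi + fault_rate * w
            step = lr / t if adaptive else lr  # assumed 1/t decay for the adaptive case
            w -= step * grad
    return w

# Usage: fit a noisy sine wave one sample at a time.
rng = np.random.default_rng(0)
xs = rng.uniform(-np.pi, np.pi, 200)
ys = np.sin(xs) + 0.1 * rng.standard_normal(200)
centers = np.linspace(-np.pi, np.pi, 15)
w = online_train(list(zip(xs, ys)), centers, width=0.5, adaptive=True, epochs=5)
```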
ISSN: 1370-4621, 1573-773X
DOI: 10.1007/s11063-014-9363-8