Stacked Broad Learning System: From Incremental Flatted Structure to Deep Model

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 51, No. 1, pp. 209-222
Main Authors: Liu, Zhulin; Chen, C. L. Philip; Feng, Shuang; Feng, Qiying; Zhang, Tong
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2021

Summary: The broad learning system (BLS) has recently been shown to be both effective and efficient. In this article, several deep variants of BLS are reviewed, and a new adaptive incremental structure, Stacked BLS, is proposed. The proposed model is a novel incremental stacking of BLS blocks. This variant inherits the efficiency and effectiveness of BLS: the structure and weights of the lower layers are fixed when new blocks are added. The incremental stacking algorithm computes not only the connection weights between newly stacked blocks but also the connection weights of the enhancement nodes within each BLS block. Stacked BLS can thus be viewed as dynamically adding both "layers" and "neurons" during the training of a multilayer neural network. The proposed architecture, together with training algorithms that exploit the residual characteristic, is far more versatile than traditional fixed architectures. Finally, experimental results on the UCI, MNIST, NORB, CIFAR-10, SVHN, and CIFAR-100 datasets indicate that the proposed method outperforms selected state-of-the-art methods, such as deep residual networks, in both accuracy and training speed. The results also show that the proposed structure can substantially reduce the number of nodes and the training time of the original BLS on some classification datasets.
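The stacking scheme the summary describes can be sketched in a few lines: each block is a small BLS (random feature nodes, nonlinear enhancement nodes, ridge-regression output weights), and each new block is fitted to the residual of the current stack while earlier blocks stay frozen. The class names, node counts, and regularization value below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of incremental stacking on residuals, in the spirit
# of Stacked BLS. Not the authors' code; all hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)

class BLSBlock:
    """One BLS block: random feature/enhancement nodes + ridge output weights."""

    def __init__(self, n_feature=20, n_enhance=40, reg=1e-2):
        self.n_feature, self.n_enhance, self.reg = n_feature, n_enhance, reg

    def fit(self, X, residual):
        # Random (frozen) feature nodes, linear mapping of the input.
        self.Wf = rng.standard_normal((X.shape[1], self.n_feature))
        Z = X @ self.Wf
        # Random (frozen) enhancement nodes with a tanh nonlinearity.
        self.We = rng.standard_normal((self.n_feature, self.n_enhance))
        H = np.tanh(Z @ self.We)
        A = np.hstack([Z, H])
        # Only the output weights are learned, via ridge regression.
        self.Wo = np.linalg.solve(A.T @ A + self.reg * np.eye(A.shape[1]),
                                  A.T @ residual)
        return A @ self.Wo

    def predict(self, X):
        Z = X @ self.Wf
        H = np.tanh(Z @ self.We)
        return np.hstack([Z, H]) @ self.Wo

def fit_stacked_bls(X, Y, n_blocks=3):
    """Stack blocks incrementally; each fits the residual of the ones before."""
    blocks, residual = [], Y.copy()
    for _ in range(n_blocks):
        block = BLSBlock()
        residual = residual - block.fit(X, residual)  # earlier blocks frozen
        blocks.append(block)
    return blocks

def predict_stacked(blocks, X):
    # The stack's prediction is the sum of the blocks' residual fits.
    return sum(b.predict(X) for b in blocks)

# Toy regression target to exercise the residual stacking.
X = rng.standard_normal((200, 5))
Y = np.sin(X[:, :1]) + 0.5 * X[:, 1:2]
blocks = fit_stacked_bls(X, Y)
err = float(np.mean((predict_stacked(blocks, X) - Y) ** 2))
```

Because each block only solves a small ridge-regression problem and never revisits earlier weights, adding a block is cheap, which matches the training-speed advantage the summary claims over retraining a fixed deep architecture.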
ISSN: 2168-2216, 2168-2232
DOI: 10.1109/TSMC.2020.3043147