Sparse Bayesian Extreme Learning Machine for Multi-classification

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 4, pp. 836-843
Main Authors: Luo, Jiahua; Vong, Chi-Man; Wong, Pak-Kin
Format: Journal Article
Language: English
Published: New York, NY: Institute of Electrical and Electronics Engineers (IEEE), 01.04.2014
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2013.2281839

Summary: Extreme learning machine (ELM) has become a popular topic in machine learning in recent years. ELM is a kind of single-hidden-layer feedforward neural network with an extremely low computational cost. ELM, however, has two evident drawbacks: 1) the output weights are solved by the Moore-Penrose generalized inverse, an unregularized least-squares minimization that easily suffers from overfitting; and 2) the accuracy of ELM is drastically sensitive to the number of hidden neurons, so a large model is usually generated. This brief presents a sparse Bayesian approach for learning the output weights of ELM in classification. The new model, called sparse Bayesian ELM (SBELM), resolves these two drawbacks by estimating the marginal likelihood of network outputs and automatically pruning most of the redundant hidden neurons during the learning phase, which results in an accurate and compact model. The proposed SBELM is evaluated on a wide range of benchmark classification problems, verifying that its accuracy is relatively insensitive to the number of hidden neurons; hence a much more compact model is always produced compared with other state-of-the-art neural network classifiers.
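The least-squares step the abstract criticizes is compact enough to sketch. Below is a minimal NumPy illustration of standard ELM training, not the authors' code: the function names, uniform weight initialization, and sigmoid activation are assumptions for illustration. Hidden-layer parameters are drawn randomly and left untrained, and the output weights come from the Moore-Penrose pseudo-inverse of the hidden-layer output matrix, which is exactly the solution the paper flags as overfitting-prone.

    import numpy as np

    def train_elm(X, T, n_hidden, seed=0):
        # Hidden-layer weights and biases are random and never trained
        # (the core ELM idea, which keeps the computational cost low).
        rng = np.random.default_rng(seed)
        W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix
        # Output weights via the Moore-Penrose generalized inverse:
        # the unregularized least-squares fit described in the abstract.
        beta = np.linalg.pinv(H) @ T
        return W, b, beta

    def predict_elm(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

    # Usage for multi-classification with one-hot targets (hypothetical data):
    # T = np.eye(n_classes)[y_train]
    # W, b, beta = train_elm(X_train, T, n_hidden=100)
    # y_pred = predict_elm(X_test, W, b, beta).argmax(axis=1)

Per the abstract, SBELM replaces the pseudo-inverse line with a sparse Bayesian estimate of the output weights, in the style of automatic relevance determination, whose per-neuron hyperparameters prune most redundant hidden neurons and yield the compact model described above.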