A Further Investigation on the Reliability of Extreme Learning Machines

Bibliographic Details
Published in: 2014 IEEE International Conference on Data Mining Workshop, pp. 1031-1037
Main Authors: Yanxing Hu, Yuan Wang, Jane Jia You, James N. K. Liu, Yulin He
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2014
Summary: The research community has recently paid increasing attention to the Extreme Learning Machine (ELM) algorithm in the Neural Network (NN) area. ELMs are much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically after a random choice of input weights and hidden-layer bias. However, since the input weights and bias are randomly assigned and never adjusted, the ELM model is unstable across repeated runs of the same experiment. Such instability makes ELMs less reliable than other computational intelligence models. In our investigation, we try to solve this problem by using Random Production in the first layer of the ELM. In this way, we reduce the reliance on random weight assignment in ELMs by removing the bias in the hidden layer. Experiments on different data sets demonstrate that the proposed model has higher stability and reliability than the classical ELM.
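The classical ELM training step that the abstract refers to can be illustrated with a short sketch. This is not the paper's code: the function names, the tanh activation, and the use of the Moore-Penrose pseudo-inverse for the analytical solve are illustrative assumptions, but they match the standard ELM formulation (random input weights and hidden bias, output weights solved in closed form).

```python
import numpy as np

def elm_train(X, T, n_hidden=50, rng=None):
    """Classical ELM: input weights W and hidden bias b are random and
    never adjusted; only the output weights beta are learned, analytically."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden-layer bias
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # analytical output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because W and b are drawn fresh on each run, repeating `elm_train` with different seeds yields different models of varying accuracy, which is exactly the instability the paper investigates.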
ISSN: 2375-9232, 2375-9259
DOI: 10.1109/ICDMW.2014.117