The generalized sigmoid activation function: Competitive supervised learning
| Published in | Information Sciences, Vol. 99, no. 1, pp. 69-82 |
|---|---|
| Main Author | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc., 1997 |
| Online Access | Get full text |
| Summary | Multilayer perceptron (MLP) networks trained using backpropagation are perhaps the most commonly used neural network model. Central to the MLP model is the use of neurons with nonlinear and differentiable activation functions. The most commonly used activation function is a sigmoidal function, and frequently all neurons in an MLP network employ the same activation function. In this paper, we introduce the notion of the generalized sigmoid as an activation function for neurons in the output layer of an MLP network. The enhancements afforded by the use of the generalized sigmoid are analyzed and demonstrated in the context of some well-known classification problems. |
| ISSN | 0020-0255; 1872-6291 |
| DOI | 10.1016/S0020-0255(96)00200-9 |
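The abstract contrasts the standard per-neuron sigmoid with a "generalized sigmoid" applied across the output layer. As a rough illustration only (the paper's exact formulation is not reproduced in this record), the sketch below pairs the ordinary logistic sigmoid with a softmax-style normalization, one plausible reading of an output-layer activation whose neurons compete: each exponentiated net input is divided by the sum over all output neurons, so the outputs lie in (0, 1) and sum to 1.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid: the common per-neuron MLP activation."""
    return 1.0 / (1.0 + math.exp(-x))

def generalized_sigmoid(net_inputs):
    """Softmax-style normalization over the output layer (illustrative
    assumption, not the paper's verbatim definition): outputs compete
    for a fixed total activation of 1."""
    exps = [math.exp(x) for x in net_inputs]
    total = sum(exps)
    return [e / total for e in exps]

# The largest net input wins the largest share of the unit total.
outputs = generalized_sigmoid([2.0, 1.0, 0.5])
print(outputs)        # three values in (0, 1), decreasing
print(sum(outputs))   # sums to 1 (up to floating-point error)
```

Unlike independent sigmoids, which can all saturate near 1 simultaneously, this normalized form makes the output neurons mutually exclusive by construction, which fits the "competitive supervised learning" framing of the title for classification tasks.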