The generalized sigmoid activation function: Competitive supervised learning

Bibliographic Details
Published in: Information Sciences, Vol. 99, no. 1, pp. 69–82
Main Author: Narayan, Sridhar
Format: Journal Article
Language: English
Published: Elsevier Inc., 1997

Summary: Multilayer perceptron (MLP) networks trained using backpropagation are perhaps the most commonly used neural network model. Central to the MLP model is the use of neurons with nonlinear and differentiable activation functions. The most commonly used activation function is a sigmoidal function, and frequently all neurons in an MLP network employ the same activation function. In this paper, we introduce the notion of the generalized sigmoid as an activation function for neurons in the output layer of an MLP network. The enhancements afforded by the use of the generalized sigmoid are analyzed and demonstrated in the context of some well-known classification problems.
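The summary contrasts the standard per-neuron sigmoid with a generalized sigmoid applied across the output layer. The record does not reproduce the paper's exact formulation, so the sketch below is only illustrative: it compares an independent logistic sigmoid with a hypothetical normalized (softmax-style) variant in which the output activations are coupled and sum to one, a property useful for 1-of-N classification.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic sigmoid, applied element-wise:
    # each output neuron is squashed independently into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def generalized_sigmoid(z):
    # Hypothetical normalized variant (softmax-style coupling):
    # activations are jointly normalized so they sum to 1.
    # The paper's actual definition may differ from this sketch.
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

# Example output-layer net inputs for a 3-class problem
z = np.array([2.0, 1.0, -1.0])
print(sigmoid(z))              # independent activations in (0, 1)
print(generalized_sigmoid(z))  # coupled activations summing to 1
```

With independent sigmoids, nothing constrains the outputs to behave like class probabilities; the normalized variant makes the output neurons compete, which is one plausible reading of "competitive supervised learning" in the title.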
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/S0020-0255(96)00200-9