A new algorithm for learning in piecewise-linear neural networks

Bibliographic Details
Published in: Neural Networks, Vol. 13, No. 4, pp. 485-505
Main Authors: Gad, E.F., Atiya, A.F., Shaheen, S., El-Dessouki, A.
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd (Elsevier Science), 01.05.2000
Summary: Piecewise-linear (PWL) neural networks are widely known for their amenability to digital implementation. This paper presents a new algorithm for learning in PWL networks consisting of a single hidden layer. The approach is based on constructing a continuous PWL error function and developing an efficient algorithm to minimize it. The algorithm searches the weight space in two basic stages. The first stage locates a point in the weight space at the intersection of N linearly independent hyperplanes, with N being the number of weights in the network. The second stage starts from this point and continues the search by moving along the one-dimensional boundaries between the different linear regions of the error function, hopping from one such intersection point of N hyperplanes to another. The proposed algorithm exhibits significantly accelerated convergence compared with standard algorithms such as back-propagation and improved versions of it, such as the conjugate gradient algorithm. In addition, it has the distinct advantage that there are no parameters to adjust, so there is no time-consuming parameter-tuning step. The new algorithm is expected to find applications in function approximation, time series prediction and binary classification problems.
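
To make the two-stage idea in the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' implementation. It assumes a ReLU-like PWL activation and a sum-of-absolute-errors cost, so that the error is a continuous PWL function of the weights, and it illustrates only the stage-one step of locating a vertex of the error surface as the solution of an N x N linear system built from N linearly independent hyperplanes. The function names, toy data, and randomly generated hyperplanes are assumptions for illustration.

```python
# Minimal illustrative sketch, assuming a ReLU-like PWL activation and an
# L1 (sum-of-absolute-errors) cost; all names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def pwl_activation(z):
    # One common piecewise-linear choice; the paper's exact activation may differ.
    return np.maximum(z, 0.0)

def network_output(weights, x, n_hidden, n_in):
    # Single hidden layer: y = v . pwl(W x), with all weights packed in one vector.
    W = weights[: n_hidden * n_in].reshape(n_hidden, n_in)
    v = weights[n_hidden * n_in:]
    return v @ pwl_activation(W @ x)

def pwl_error(weights, X, y, n_hidden, n_in):
    # Sum of absolute errors -> continuous, piecewise linear in the weights.
    preds = np.array([network_output(weights, x, n_hidden, n_in) for x in X])
    return np.abs(preds - y).sum()

def vertex_of_hyperplanes(A, b):
    # Stage-1 idea: a vertex of the PWL error surface lies at the intersection
    # of N linearly independent hyperplanes a_i . w = b_i, i.e. it is the
    # solution of an N x N linear system.
    return np.linalg.solve(A, b)

# Toy setup: 2 inputs, 2 hidden units -> N = 2*2 + 2 = 6 weights.
n_in, n_hidden = 2, 2
N = n_hidden * n_in + n_hidden
X = rng.normal(size=(20, n_in))
y = rng.normal(size=20)

A = rng.normal(size=(N, N))   # stand-in hyperplane normals (almost surely independent)
b = rng.normal(size=N)        # stand-in offsets
w_vertex = vertex_of_hyperplanes(A, b)
print("vertex:", w_vertex)
print("PWL error at vertex:", pwl_error(w_vertex, X, y, n_hidden, n_in))
# Stage 2 in the paper would continue from such a vertex, moving along the
# one-dimensional edges where N-1 hyperplanes intersect and hopping to a
# neighbouring vertex that lowers the error; that bookkeeping is omitted here.
```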
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/S0893-6080(00)00024-1