Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma–pi nets

Bibliographic Details
Published in: Neural Networks, Vol. 13, No. 1, pp. 91–110
Main Authors: Neville, R.S., Stonham, T.J., Glover, R.J.
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 2000

Summary: In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high-accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to increase the accuracy of sigma–pi units by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital “Higher Order” sigma–pi nodes and studies continuous-input RAM-based sigma–pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma–pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y∈[0,1]; this is equivalent to an error modulus of 0.01, or an average Mean Square Error (MSE) over all training vectors of 0.0001 (since 0.01² = 0.0001). We present a development of the sigma–pi node which enables the provision of high-accuracy outputs. The sigma–pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation derived from a bit-stream. In this article we present a new methodology for storing sigma–pi nodes' activations as single values which are averages. In the course of the article we define what we mean by a real number, and how we represent real numbers and continuous-valued inputs in our neural system. We show how to utilise the bounded, quantised site-values (weights) of sigma–pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma–pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using ‘ring memories’. Finally, we study the sigma–pi units' ability to generalise once they are trained to map real-valued, high-accuracy, continuous functions. We use sigma–pi units because they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289–303).
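As an illustration of the pre-calculated look-up-table training that the abstract describes, the Python sketch below tabulates backpropagation-style weight deltas over a quantised (error, input) grid, so that each training-step multiply becomes a single table read on a bounded 8-bit weight. This is a minimal sketch, not the authors' implementation: the learning rate ETA, the error/input quantisation levels, the value ranges and the clipping scheme are all illustrative assumptions.

    WEIGHT_BITS = 8
    W_LEVELS = 2 ** WEIGHT_BITS          # 256 weight levels over [-1, 1] (8-bit weights, per the abstract)
    ERR_LEVELS = 64                      # assumed quantisation of the error signal
    IN_LEVELS = 64                       # assumed quantisation of the input
    ETA = 0.1                            # assumed learning rate

    def to_level(x: float, levels: int, lo: float, hi: float) -> int:
        """Map a real value in [lo, hi] to one of `levels` discrete steps."""
        x = min(max(x, lo), hi)
        return round((x - lo) / (hi - lo) * (levels - 1))

    def from_level(q: int, levels: int, lo: float, hi: float) -> float:
        """Recover the real value represented by quantisation level q."""
        return lo + q / (levels - 1) * (hi - lo)

    # Pre-calculated table: weight delta, in weight levels, for every
    # (quantised error, quantised input) pair -- computed once, before training.
    DELTA_TABLE = [
        [round(ETA
               * from_level(e, ERR_LEVELS, -1.0, 1.0)
               * from_level(i, IN_LEVELS, 0.0, 1.0)
               * (W_LEVELS - 1) / 2)     # scale the real-valued delta to weight levels
         for i in range(IN_LEVELS)]
        for e in range(ERR_LEVELS)
    ]

    def update_weight(w_level: int, error: float, x: float) -> int:
        """One backprop-style update done entirely with quantised look-ups."""
        e_q = to_level(error, ERR_LEVELS, -1.0, 1.0)
        x_q = to_level(x, IN_LEVELS, 0.0, 1.0)
        w_new = w_level + DELTA_TABLE[e_q][x_q]
        return min(max(w_new, 0), W_LEVELS - 1)   # site-values stay bounded

    # Example: an 8-bit weight nudged by a positive error on a strong input.
    w = to_level(0.0, W_LEVELS, -1.0, 1.0)
    w = update_weight(w, error=0.5, x=0.9)
    print(f"updated weight ~ {from_level(w, W_LEVELS, -1.0, 1.0):+.3f}")

Because the weights are bounded and quantised, the table is small and fixed, which is what makes this pre-calculation feasible in a digital RAM-based implementation.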
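Similarly, here is a minimal sketch of holding an activation as a bit-stream in a "ring memory": a fixed circular buffer whose write pointer wraps around, so each update writes a single cell instead of shifting an entire register, and the mean of the stream recovers the stored activation. The 9-bit activation quantisation follows the abstract; the buffer length, the stochastic encoding and all names here are assumptions for illustration.

    import random

    ACT_BITS = 9                      # activations quantised to 9 bits (per the abstract)
    ACT_LEVELS = 2 ** ACT_BITS        # 512 discrete activation levels

    def quantise(y: float, levels: int = ACT_LEVELS) -> int:
        """Map a real value y in [0, 1] to one of `levels` discrete steps."""
        y = min(max(y, 0.0), 1.0)
        return round(y * (levels - 1))

    class RingMemory:
        """Circular buffer of bits; a wrapping write pointer replaces the
        shifting of a shift register, so each update touches one cell."""
        def __init__(self, length: int):
            self.bits = [0] * length
            self.ptr = 0

        def write(self, bit: int) -> None:
            self.bits[self.ptr] = bit
            self.ptr = (self.ptr + 1) % len(self.bits)   # wrap instead of shift

        def average(self) -> float:
            """The stored activation, recovered as the mean of the stream."""
            return sum(self.bits) / len(self.bits)

    def encode_activation(y: float, ring: RingMemory, rng: random.Random) -> None:
        """Fill the ring with a stochastic bit-stream whose expected mean is y."""
        q = quantise(y) / (ACT_LEVELS - 1)               # quantised probability
        for _ in range(len(ring.bits)):
            ring.write(1 if rng.random() < q else 0)

    rng = random.Random(42)
    ring = RingMemory(length=512)
    encode_activation(0.3719, ring, rng)
    print(f"decoded activation ~ {ring.average():.4f}")  # close to 0.3719

In hardware terms, the attraction is that a RAM word plus a wrapping address counter implements the buffer directly, whereas a shift register must move every bit on every update.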
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/S0893-6080(99)00102-1