Limiting fault-induced output errors in ANNs

Bibliographic Details
Published in: IJCNN-91-Seattle International Joint Conference on Neural Networks, Vol. 2, p. 965
Main Authors: Clay, R.D., Sequin, C.H.
Format: Conference Proceeding
Language: English
Published: IEEE, 1991

Summary: Summary form only given, as follows. The worst-case output errors produced by the failure of a hidden neuron in layered feedforward artificial neural networks were investigated. These errors can be much worse than simply the loss of the contribution of a neuron whose output goes to zero. A much larger erroneous signal can be produced when the failure sets the value of the hidden neuron to one of the power supply voltages. A method was investigated that limits the fractional error in the output signal of a feedforward net due to such saturated hidden unit faults in analog function approximation tasks. The number of hidden units is significantly increased, and the maximal contribution of each unit is limited to a small fraction of the net output signal. To achieve a large localized output signal, several Gaussian hidden units are moved into the same location in the input domain and the gain of the linear summing output unit is suitably adjusted. Since the contribution of each unit is equal in magnitude, there is only a modest error under any possible failure mode.
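
The replication scheme described in the abstract is easy to simulate. The sketch below is not the authors' code; it is a minimal illustration, with all names (gaussian, net_output, replicate), centers, widths, and rail values chosen for demonstration. It places k copies of a Gaussian hidden unit at the same input location with output weights scaled by 1/k, then pins one unit at a saturation rail to model a stuck-high fault and reports the resulting output error.

```python
# Minimal sketch of the fault-limiting idea from the abstract:
# replicate each Gaussian hidden unit k times at the same location,
# divide the output weight by k, and a single unit stuck at a
# saturation rail can only perturb the output by roughly w/k.
# All parameter values here are illustrative, not from the paper.

import numpy as np

def gaussian(x, center, width):
    """Activation of a Gaussian (RBF) hidden unit."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def net_output(x, centers, widths, weights, stuck=None, rail=1.0):
    """Linear summing output unit; optionally force one hidden
    unit's activation to a saturation rail to model a fault."""
    acts = np.array([gaussian(x, c, s) for c, s in zip(centers, widths)])
    if stuck is not None:
        acts[stuck] = rail  # saturated hidden-unit fault
    return float(np.dot(weights, acts))

def replicate(centers, widths, weights, k):
    """Place k copies of each unit at the same location and scale
    the output weights by 1/k; the fault-free output is unchanged."""
    return (np.repeat(centers, k),
            np.repeat(widths, k),
            np.repeat(weights, k) / k)

# One localized bump, approximated by a single unit vs. 8 replicas.
c0, s0, w0 = np.array([0.0]), np.array([0.5]), np.array([2.0])
x = 1.5
for k in (1, 8):
    c, s, w = replicate(c0, s0, w0, k)
    good = net_output(x, c, s, w)
    bad = net_output(x, c, s, w, stuck=0)  # worst case: unit pinned high
    print(f"k={k}: fault-free={good:.3f}, faulted={bad:.3f}, "
          f"error={abs(bad - good):.3f}")
```

Running this shows the faulted-output error shrinking roughly by the replication factor (about 1.98 for k=1 versus about 0.25 for k=8 with these illustrative parameters), which matches the abstract's claim that equal-magnitude contributions bound the error under any single failure mode.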
ISBN: 0780301641, 9780780301641
DOI: 10.1109/IJCNN.1991.155612