A parallel approach for backpropagation learning of neural networks

Bibliographic Details
Published in: Journal of Computer Science & Technology, Vol. 1, No. 1, p. 14
Main Authors: Crespo, María Liz; Piccoli, María Fabiana; Printista, Alicia Marcela; Gallard, Raúl Hector
Format: Journal Article
Language: Spanish, English
Published: La Plata: Graduate Network of Argentine Universities with Computer Science Schools (RedUNCI), 01.03.1999
Universidad Nacional de La Plata, Journal of Computer Science and Technology
Postgraduate Office, School of Computer Science, Universidad Nacional de La Plata

Summary: Fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make neural networks appropriate tools for Intelligent Computer Systems. On the other hand, learning algorithms for neural networks involve CPU-intensive processing, and consequently great effort has been devoted to parallel implementations intended to reduce learning time. Looking at both sides of the coin, this paper first shows two alternatives to parallelise the learning process and then an application of neural networks to computing systems. On the parallel side, it presents distributed implementations that parallelise the learning process of neural networks using a pattern partitioning approach. Under this approach, weight changes are computed concurrently, exchanged between system components and adjusted accordingly until the whole parallel learning process is completed. On the application side, design and implementation insights are given for building a system in which decision support for load distribution is based on a neural network device. Incoming task allocation, as a previous step, is a fundamental service aimed at improving distributed system performance and facilitating further dynamic load balancing. A neural network device inserted into the kernel of a distributed system as an intelligent tool allows automatic allocation of execution requests under predefined performance criteria based on resource availability and incoming process requirements. Performance results of the parallelised approach for learning of backpropagation neural networks are shown, including a comparison of recall and generalisation abilities to support parallelism.
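
The abstract only outlines the pattern partitioning scheme, so the following is a minimal illustrative sketch (Python/NumPy, not the authors' implementation): each simulated worker computes backpropagation weight changes from its own partition of the training patterns, the partial changes are exchanged (here simply summed), and the shared weights are adjusted until an error target or an epoch limit is reached. The function name pattern_partition_backprop, the single-hidden-layer architecture and all parameter values are assumptions made for illustration.

    # Illustrative sketch of pattern-partitioned backpropagation learning
    # (an assumption-based example, not the authors' code).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def local_deltas(W1, W2, X, T, lr):
        # Weight changes contributed by one worker's pattern partition.
        H = sigmoid(X @ W1)                        # hidden activations
        Y = sigmoid(H @ W2)                        # network outputs
        err_out = (T - Y) * Y * (1 - Y)            # output-layer error term
        err_hid = (err_out @ W2.T) * H * (1 - H)   # back-propagated hidden error
        return lr * X.T @ err_hid, lr * H.T @ err_out, float(np.sum((T - Y) ** 2))

    def pattern_partition_backprop(X, T, n_hidden=4, n_workers=2, lr=0.5,
                                   max_epochs=10000, target_sse=0.05, seed=0):
        rng = np.random.default_rng(seed)
        W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
        W2 = rng.uniform(-0.5, 0.5, (n_hidden, T.shape[1]))
        parts = np.array_split(np.arange(X.shape[0]), n_workers)  # pattern partitioning
        for epoch in range(max_epochs):
            # "Concurrent" step (sequential here for clarity): each worker
            # computes weight changes from its own patterns only.
            contribs = [local_deltas(W1, W2, X[idx], T[idx], lr) for idx in parts]
            # Exchange step: combine the partial weight changes and adjust the weights.
            W1 += sum(c[0] for c in contribs)
            W2 += sum(c[1] for c in contribs)
            sse = sum(c[2] for c in contribs)
            if sse < target_sse:
                break
        return W1, W2, epoch + 1, sse

    if __name__ == "__main__":
        # Toy usage: learn XOR with two simulated workers; the third input column acts as a bias.
        X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
        T = np.array([[0], [1], [1], [0]], dtype=float)
        W1, W2, epochs, sse = pattern_partition_backprop(X, T)
        print(f"stopped after {epochs} epochs, SSE={sse:.4f}")

In the authors' distributed setting the per-partition weight changes would be computed on separate processors and exchanged over the network rather than summed within one process; the sketch only mirrors the control flow the abstract describes.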
ISSN: 1666-6046; 1666-6038