Radial-Basis-Function Networks
| Published in | Intelligent Systems, pp. 7-1 – 7-12 |
|---|---|
| Main Authors | , , |
| Format | Book Chapter |
| Language | English |
| Published | United Kingdom: CRC Press, Taylor & Francis Group, 2011 |
| Edition | 2 |
Summary: | Neural networks are characterized by massively parallel architectures and by the use of a learning paradigm rather than explicit programming. The radial basis function (RBF) represents one such processing method. The approximation by the sum can also be interpreted as a rather simple single-layer type of artificial neural network, with the RBFs taking on the role of the activation functions. The best-known learning paradigm for feed-forward neural networks is backpropagation. Generally speaking, very few implementations of neural networks have been carried out outside the university world, where they have been quite popular. In very general terms, the approach is to map an N-dimensional space by prototypes. The RMS brightness jitter, as well as the position jitter in pixels, turned out to be as good as or better than that of most conventional star trackers of the same complexity. |
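For orientation, the "approximation by the sum" mentioned in the summary is conventionally written as follows; this is a standard formulation with Gaussian basis functions, not a quotation from the chapter:

$$
f(\mathbf{x}) = \sum_{i=1}^{M} w_i \, \varphi\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right),
\qquad
\varphi(r) = \exp\!\left(-\frac{r^2}{2\sigma^2}\right),
$$

where the $\mathbf{c}_i$ are the prototype centers mapping the $N$-dimensional input space, the $w_i$ are output weights, and $\sigma$ is the basis-function width.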
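The idea of mapping an N-dimensional space by prototypes, with RBFs as activation functions, can be made concrete with a minimal sketch. Everything below is an illustrative assumption rather than the chapter's implementation: the function names, the Gaussian kernel, the width `sigma`, and the choice of fitting the output weights by linear least squares.

```python
import numpy as np

def rbf_design_matrix(X, centers, sigma):
    """Evaluate Gaussian RBFs phi(||x - c_i||) for every input/center pair."""
    # Pairwise squared distances between inputs (rows of X) and prototype centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, centers, sigma):
    """Solve for output weights w so that Phi @ w approximates y."""
    Phi = rbf_design_matrix(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, sigma, w):
    """Single-layer RBF network: weighted sum of basis-function responses."""
    return rbf_design_matrix(X, centers, sigma) @ w

# Usage: approximate a noisy 1-D function with 10 prototype centers.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(-3, 3, 10)[:, None]   # prototypes placed in input space
w = fit_rbf(X, y, centers, sigma=0.7)
print(predict_rbf(np.array([[0.5]]), centers, 0.7, w))  # close to sin(0.5)
```

Because the basis functions are fixed once the prototypes are chosen, the network is linear in its output weights, which is why a single least-squares solve suffices here instead of the iterative backpropagation training mentioned in the summary.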
ISBN: | 1439802831; 9781439802830; 1138071889; 9781138071889 |
DOI: | 10.1201/9781315218427-7 |