A new structure-preserving dimensionality reduction approach and OI-net implementation
Published in | 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98CH36227), Vol. 1, pp. 690-694 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 1998 |
Subjects | |
ISBN | 0780348591; 9780780348592 |
ISSN | 1098-7576 |
DOI | 10.1109/IJCNN.1998.682364 |
Summary: A new generic nonlinear feature extraction map f is presented, based on concepts from approximation theory. Let f map an input data vector x ∈ ℝ^n, where n is high, to an appropriate feature vector y ∈ ℝ^m, where m is sufficiently low, and let X = {X_1, ..., X_N} denote an available training set in ℝ^n. In this paper f is derived by requiring that the geometric structure (metric-space attributes) of the points f(X) = {f(X_1), ..., f(X_N)} in the feature space ℝ^m be as similar as possible to the structure of the points X in the data space ℝ^n. This is accomplished by first selecting an appropriate dimension m for the feature space ℝ^m according to the size N of the available training set X, subject to bounds on the distortion of the data structure caused by f and on the error in the estimation of the underlying likelihood functions in the feature space. The map f is then designed by a multidimensional scaling (MDS) approach that minimizes Sammon's cost function; this approach uses graph-theoretic (minimal spanning tree) and genetic-algorithm concepts to search efficiently for the optimal structure-preserving point-to-point mapping of the training samples X_1, ..., X_N to their images Y_1, ..., Y_N in ℝ^m. Finally, an optimal interpolating (OI) artificial neural network is used to recover the entire function f: ℝ^n → ℝ^m by interpolating the values y_i = Y_i, i = 1, ..., N, at the points X_i, i = 1, ..., N. Preliminary simulation results based on this approach are also given.
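For a concrete picture of the MDS step described in the summary, the sketch below is a minimal NumPy implementation of Sammon's stress, E(Y) = (1/Σ_{i<j} d*_{ij}) Σ_{i<j} (d*_{ij} − d_{ij})² / d*_{ij}, where d*_{ij} are distances between training points in ℝ^n and d_{ij} are distances between their images in ℝ^m, minimized here by plain gradient descent. The paper itself searches the point-to-point mapping with minimal-spanning-tree and genetic-algorithm heuristics, which are not reproduced; all function names, parameters, and the toy data below are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: gradient descent on Sammon's stress.
import numpy as np

def pairwise_dist(Z):
    """Euclidean distance matrix for the rows of Z."""
    diff = Z[:, None, :] - Z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def sammon_stress(D_x, Y):
    """Sammon's cost for an embedding Y, given input-space distances D_x."""
    D_y = pairwise_dist(Y)
    iu = np.triu_indices_from(D_x, k=1)
    dx, dy = D_x[iu], D_y[iu]
    return ((dx - dy) ** 2 / dx).sum() / dx.sum()

def sammon_map(X, m=2, iters=500, lr=0.3, seed=0):
    """Embed the rows of X into R^m by gradient descent on Sammon's stress
    (a stand-in for the paper's MST/genetic-algorithm search)."""
    rng = np.random.default_rng(seed)
    n_pts = X.shape[0]
    D_x = pairwise_dist(X)
    c = D_x[np.triu_indices(n_pts, k=1)].sum()   # normalizing constant
    Y = 1e-2 * rng.standard_normal((n_pts, m))   # random initial embedding
    eps = 1e-12
    for _ in range(iters):
        D_y = pairwise_dist(Y)
        np.fill_diagonal(D_y, 1.0)               # dummy value, zeroed out below
        W = (D_x - D_y) / (D_x * D_y + eps)      # per-pair mismatch weights
        np.fill_diagonal(W, 0.0)
        # dE/dY_i = -(2/c) * sum_j W_ij * (Y_i - Y_j)
        grad = -(2.0 / c) * (W.sum(axis=1, keepdims=True) * Y - W @ Y)
        Y -= lr * grad
    return Y

# Toy usage on synthetic data (illustrative only, not the paper's experiments):
X = np.random.default_rng(1).standard_normal((100, 10))
Y = sammon_map(X, m=2, iters=300)
print("Sammon stress:", sammon_stress(pairwise_dist(X), Y))
```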
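The final step in the summary, recovering f on all of ℝ^n from its values at the training points, is carried out in the paper with an optimal interpolating (OI) neural network. The OI-net construction is not reproduced here; the sketch below uses a generic Gaussian radial-basis-function interpolant as a hedged stand-in that plays the same role of extending the discrete map X_i ↦ Y_i to unseen inputs. The function names, kernel choice, and ridge term are assumptions for illustration.

```python
# Stand-in for the OI-net step: exact Gaussian RBF interpolation of the
# learned point-to-point map X_i -> Y_i (NOT the paper's OI network).
import numpy as np

def fit_rbf_interpolant(X, Y, sigma=1.0, ridge=1e-8):
    """Solve for weights W so the Gaussian-kernel expansion passes
    (up to a tiny ridge term) through every training pair (X_i, Y_i)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2 * sigma ** 2))
    return np.linalg.solve(K + ridge * np.eye(len(X)), Y)

def rbf_predict(X_train, W, X_new, sigma=1.0):
    """Evaluate the interpolated map f at new points X_new."""
    D2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2 * sigma ** 2))
    return K @ W
```

With the earlier sketch, `W = fit_rbf_interpolant(X, Y)` followed by `rbf_predict(X, W, X_new)` would return feature vectors for unseen data points.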