Orthogonal projections applied to the assignment problem

Bibliographic Details
Published in: IEEE Transactions on Neural Networks, Vol. 8, No. 3, pp. 774-778
Main Authors: Wolfe, W.J.; Ulmer, R.M.
Format: Journal Article
Language: English
Published: New York, NY: IEEE (Institute of Electrical and Electronics Engineers), 01.05.1997
Summary: This paper presents a significant improvement to the traditional neural approach to the assignment problem (AP). The technique is based on identifying the feasible space (F) with a linear subspace of R^(n^2), and then analyzing the orthogonal projection onto F. The formula for the orthogonal projection is shown to be simple and easy to integrate into the traditional neural model. This projection concept was first developed by Wolfe et al. (1993), but here we show that the projection can be computed in a much simpler way, and that the addition of a "clip" operator at the boundaries of the cube can improve the results by an order of magnitude in both accuracy and run time. It is proven that the array of numbers that define an AP can be projected onto F without loss of information, and the network can be constrained to operate exclusively in F until a neuron is saturated (i.e., reaches the maximum or minimum activation). Two "clip" options are presented and compared. Statistical results are presented for randomly generated APs of sizes n=10 to n=50. The statistics confirm the theory.
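To illustrate the kind of closed-form projection the abstract describes, the sketch below projects an n x n activation matrix onto the affine set where every row and column sums to 1 (a common model of the AP feasible space), followed by a clip to the unit cube. The function names and the choice of this particular feasible set are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch: orthogonal projection onto the affine set
# F = {X : every row and column of X sums to 1}, a standard model
# of the assignment-problem feasible space, plus a "clip" operator
# that keeps activations inside the unit cube [0, 1]^(n^2).

def project_onto_F(A):
    """Closed-form orthogonal projection of a square matrix onto F.

    Subtracting row means and column means (adding back the overall
    mean) projects onto the linear subspace of zero row/column sums;
    the final +1/n term shifts onto the affine set with sums of 1.
    """
    n = len(A)
    row = [sum(r) / n for r in A]                                  # row means
    col = [sum(A[i][j] for i in range(n)) / n for j in range(n)]   # column means
    m = sum(row) / n                                               # overall mean
    return [[A[i][j] - row[i] - col[j] + m + 1.0 / n
             for j in range(n)] for i in range(n)]

def clip01(A):
    """Clip every activation to the boundary of the unit cube."""
    return [[min(1.0, max(0.0, x)) for x in row] for row in A]

P = project_onto_F([[0.9, 0.2, 0.1],
                    [0.3, 0.8, 0.0],
                    [0.1, 0.1, 0.7]])
print([round(sum(r), 6) for r in P])   # each row of the projection sums to 1
```

Because the projection has this closed form, it costs only O(n^2) arithmetic per network update, which is what makes it cheap to fold into an iterative neural relaxation of the AP.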
ISSN: 1045-9227, 1941-0093
DOI:10.1109/72.572112