Dictionary Training for Sparse Representation as Generalization of K-Means Clustering
Published in | IEEE Signal Processing Letters, Vol. 20, no. 6, pp. 587–590 |
Main Authors | Sahoo, Sujit Kumar; Makur, Anamitra |
Format | Journal Article |
Language | English |
Published | IEEE, 01.06.2013 |
Summary | Recent dictionary training algorithms for sparse representation, such as K-SVD, MOD, and their variations, are reminiscent of K-means clustering, and this letter investigates such algorithms from that viewpoint. It shows that although K-SVD is sequential like K-means, it fails to simplify to K-means because it destroys the structure in the sparse coefficients. In contrast, MOD can be viewed as a parallel generalization of K-means, which simplifies to K-means without perturbing the sparse coefficients. Keeping memory usage in mind, we propose an alternative to MOD: a sequential generalization of K-means (SGK). While experiments suggest comparable training performance across the algorithms, complexity analysis shows MOD and SGK to be faster under a dimensionality condition. |
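To make the parallel-versus-sequential contrast concrete, here is a minimal NumPy sketch of the two dictionary updates described in the summary: MOD refits the whole dictionary at once by least squares, while an SGK-style pass refits one atom at a time, in both cases holding the sparse coefficients fixed. The function names, shapes, and the toy usage below are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def mod_update(Y, X):
    """MOD-style update (sketch): refit the whole dictionary at once
    by least squares, D = Y X^T (X X^T)^-1, with coefficients X fixed."""
    return Y @ X.T @ np.linalg.pinv(X @ X.T)

def sgk_update(Y, D, X):
    """SGK-style update (sketch): refit one atom at a time by least
    squares, again keeping the sparse coefficients X fixed."""
    D = D.copy()
    for k in range(D.shape[1]):
        x_k = X[k, :]
        used = np.nonzero(x_k)[0]           # signals that actually use atom k
        if used.size == 0:
            continue                        # skip atoms no signal selected
        # Residual of those signals with atom k's contribution added back
        E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], x_k[used])
        # Least-squares fit of atom k to the residual: E x_k^T / (x_k x_k^T)
        D[:, k] = E @ x_k[used] / (x_k[used] @ x_k[used])
    return D

# Toy usage: 16-dimensional signals, 32 atoms, 500 training signals.
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 500))
D = rng.standard_normal((16, 32))
X = np.zeros((32, 500))
X[rng.integers(0, 32, 500), np.arange(500)] = 1.0  # 1-sparse, unit coefficients
D_sgk = sgk_update(Y, D, X)
```

With the 1-sparse, unit-coefficient X used above, both updates reduce to computing each cluster's mean, i.e., the K-means centroid step, which is the simplification the letter's argument turns on.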
ISSN | 1070-9908, 1558-2361 |
DOI | 10.1109/LSP.2013.2258912 |