Randomized Dimensionality Reduction for k-Means Clustering
| Published in | IEEE Transactions on Information Theory, Vol. 61, No. 2, pp. 1045-1062 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2015 |
| Subjects | |
| ISSN | 0018-9448, 1557-9654 |
| DOI | 10.1109/TIT.2014.2375327 |
Summary: We study the topic of dimensionality reduction for k-means clustering. Dimensionality reduction encompasses the union of two approaches: 1) feature selection and 2) feature extraction. A feature selection-based algorithm for k-means clustering selects a small subset of the input features and then applies k-means clustering on the selected features. A feature extraction-based algorithm for k-means clustering constructs a small set of new artificial features and then applies k-means clustering on the constructed features. Despite the significance of k-means clustering as well as the wealth of heuristic methods addressing it, provably accurate feature selection methods for k-means clustering are not known. On the other hand, two provably accurate feature extraction methods for k-means clustering are known in the literature; one is based on random projections and the other is based on the singular value decomposition (SVD). This paper makes further progress toward a better understanding of dimensionality reduction for k-means clustering. Namely, we present the first provably accurate feature selection method for k-means clustering and, in addition, we present two feature extraction methods. The first feature extraction method is based on random projections and it improves upon the existing results in terms of time complexity and number of features needed to be extracted. The second feature extraction method is based on fast approximate SVD factorizations and it also improves upon the existing results in terms of time complexity. The proposed algorithms are randomized and provide constant-factor approximation guarantees with respect to the optimal k-means objective value.
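The abstract contrasts feature selection with two feature extraction routes, one via random projections and one via the SVD. The snippet below is a minimal sketch of that general pipeline using off-the-shelf scikit-learn components (Gaussian random projection, truncated SVD, Lloyd-style k-means), not the paper's own constructions or dimension choices; the synthetic blob data, the target dimension of 50, and the `kmeans_cost` helper are illustrative assumptions.

```python
# Rough illustration (not the paper's exact algorithms): reduce the
# dimensionality of the data, run k-means on the reduced data, and then
# evaluate the resulting partition against the k-means objective in the
# original space.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import TruncatedSVD
from sklearn.random_projection import GaussianRandomProjection


def kmeans_cost(X, labels):
    """k-means objective of a fixed partition: sum of squared distances
    of each point to the centroid (mean) of its assigned cluster."""
    return sum(
        ((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
        for c in np.unique(labels)
    )


k = 5                                   # number of clusters
X, _ = make_blobs(n_samples=2000, n_features=500, centers=k, random_state=0)

# Baseline: k-means on the full-dimensional data.
full = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
print("full-dimensional cost:   ", kmeans_cost(X, full.labels_))

# Feature extraction via random projection (illustrative target dimension).
X_rp = GaussianRandomProjection(n_components=50, random_state=0).fit_transform(X)
rp = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_rp)
print("random-projection cost:  ", kmeans_cost(X, rp.labels_))

# Feature extraction via a rank-k SVD of the data matrix.
X_svd = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)
svd = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_svd)
print("rank-k SVD cost:         ", kmeans_cost(X, svd.labels_))
```

Note that the clusterings are computed in the reduced spaces but scored on the original points, which mirrors how guarantees of this kind are usually stated: the partition found after dimensionality reduction is compared against the optimal k-means objective value in the full-dimensional space.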