Quickest convergence of online algorithms via data selection

Bibliographic Details
Published in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6185-6189
Main Authors: Romero, Daniel; Berberidis, Dimitris; Giannakis, Georgios B.
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.03.2016
Summary: Big data applications demand efficient solvers capable of providing accurate solutions to large-scale problems at affordable computational costs. By processing data sequentially, online algorithms offer attractive means to deal with massive data sets. However, they may incur prohibitive complexity in high-dimensional scenarios if the entire data set is processed. It is therefore necessary to confine computations to an informative subset. While existing approaches have focused on selecting a prescribed fraction of the available data vectors, the present paper capitalizes on this degree of freedom to accelerate the convergence of a generic class of online algorithms in terms of processing time/computational resources by balancing the required burden with a metric of how informative each datum is. The proposed method is illustrated in a linear regression setting, and simulations corroborate the superior convergence rate of the recursive least-squares algorithm when the novel data selection is applied.
ISSN: 2379-190X
DOI: 10.1109/ICASSP.2016.7472866