Learning theory of distributed spectral algorithms
| Published in | Inverse Problems Vol. 33, no. 7, pp. 74009-74037 |
| --- | --- |
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | IOP Publishing, 01.07.2017 |
| Summary | Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, based on a divide-and-conquer approach, for handling big data. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework, including error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second-order decomposition of inverse operators. Our quantitative estimates are given in terms of the regularity of the regression function, the effective dimension of the reproducing kernel Hilbert space, and the qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and improve on existing results even for the classical family of spectral algorithms. |
| --- | --- |
| Bibliography | IP-100847.R3 |
| ISSN | 0266-5611, 1361-6420 |
| DOI | 10.1088/1361-6420/aa72b2 |
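The divide-and-conquer scheme described in the summary can be sketched in a few lines: partition the sample into blocks, run a kernel-based spectral algorithm on each block, and average the resulting local estimators. The sketch below is illustrative only and does not reproduce the paper's analysis; the Gaussian kernel, the Tikhonov (kernel ridge) filter, and the parameter values `lam`, `sigma`, and `m` are all assumptions chosen for the toy example, not values from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.5):
    # Gram matrix of the Gaussian (RBF) kernel between two point sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def local_estimator(X, y, lam=1e-3, sigma=0.5):
    # Kernel ridge regression on one data block: the Tikhonov filter,
    # the simplest member of the spectral-algorithm family.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Xt: gaussian_kernel(Xt, X, sigma) @ alpha

def distributed_estimator(X, y, m=4, lam=1e-3, sigma=0.5):
    # Divide-and-conquer: split the sample into m blocks, run the
    # spectral algorithm on each block, then average the local estimators.
    blocks = np.array_split(np.arange(len(X)), m)
    fs = [local_estimator(X[b], y[b], lam, sigma) for b in blocks]
    return lambda Xt: sum(f(Xt) for f in fs) / m

# Toy regression problem: noisy samples of sin(pi * x) on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)

f_hat = distributed_estimator(X, y)
Xt = np.linspace(-1.0, 1.0, 50)[:, None]
mse = np.mean((f_hat(Xt) - np.sin(np.pi * Xt[:, 0])) ** 2)
```

Each block's linear solve costs O((n/m)^3) rather than O(n^3) for the full sample, which is the computational motivation for the divide-and-conquer approach; the paper's contribution is showing that, under its regularity and effective-dimension conditions, the averaged estimator still attains optimal minimax learning rates.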