Learning theory of distributed spectral algorithms

Bibliographic Details
Published in: Inverse Problems, Vol. 33, No. 7, pp. 74009–74037
Main Authors: Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan
Format: Journal Article
Language: English
Published: IOP Publishing, 01.07.2017
Summary: Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework, including sharp error bounds and minimax optimal learning rates, achieved by means of a novel integral operator approach and a second-order decomposition of inverse operators. Our quantitative estimates are given in terms of the regularity of the regression function, the effective dimension of the reproducing kernel Hilbert space, and the qualification of the filter function of the spectral algorithm. They require no eigenfunction or noise conditions and improve on existing results even for the classical family of spectral algorithms.
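The divide-and-conquer scheme described in the summary can be sketched as follows. This is a minimal illustration, not the paper's code: it uses kernel ridge regression (the Tikhonov instance of a spectral algorithm, with filter function g_λ(t) = 1/(t + λ)) as the local learner, partitions the data into m disjoint subsets, fits one local estimator per subset, and averages the resulting predictors. The kernel choice, the Gaussian width `sigma`, the regularization `lam`, and the number of machines `m` are all assumptions made for the sketch.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, sigma):
    # Kernel ridge regression = spectral algorithm with Tikhonov filter:
    # solve (K + lam * n * I) alpha = y on the local subset
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha

def krr_predict(model, Xtest, sigma):
    Xtrain, alpha = model
    return gaussian_kernel(Xtest, Xtrain, sigma) @ alpha

def distributed_krr(X, y, m, lam, sigma):
    # Divide-and-conquer: split the sample into m disjoint subsets,
    # fit a local estimator on each, then average the m predictors
    idx = np.array_split(np.arange(len(y)), m)
    models = [krr_fit(X[i], y[i], lam, sigma) for i in idx]
    return lambda Xt: np.mean(
        [krr_predict(mod, Xt, sigma) for mod in models], axis=0)

# Toy regression problem: f(x) = sin(2*pi*x) observed with noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (400, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=400)

f_hat = distributed_krr(X, y, m=4, lam=1e-3, sigma=0.1)
Xt = np.linspace(0, 1, 50)[:, None]
err = np.mean((f_hat(Xt) - np.sin(2 * np.pi * Xt[:, 0])) ** 2)
print(f"test MSE of averaged estimator: {err:.4f}")
```

Averaging the local estimators is what makes the approach scale: each subset problem solves an n/m-sized linear system instead of one n-sized system, and the paper's analysis shows the averaged estimator can still attain the minimax optimal learning rate.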
Bibliography: IP-100847.R3
ISSN: 0266-5611, 1361-6420
DOI: 10.1088/1361-6420/aa72b2