Distributed kernel gradient descent algorithm for minimum error entropy principle

Distributed learning based on the divide-and-conquer approach is a powerful tool for big data processing. We introduce a distributed kernel gradient descent algorithm for the minimum error entropy principle and analyze its convergence. We show that the L2 error decays at a minimax optimal rate under some mild conditions. As a tool, we establish some concentration inequalities for U-statistics, which play pivotal roles in our error analysis.

Bibliographic Details
Published in: Applied and Computational Harmonic Analysis, Vol. 49, No. 1, pp. 229-256
Main Authors: Hu, Ting; Wu, Qiang; Zhou, Ding-Xuan
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.07.2020
ISSN: 1063-5203, 1096-603X
DOI: 10.1016/j.acha.2019.01.002
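
The abstract names the algorithm only at a high level, so the following minimal Python sketch illustrates one plausible reading of it, not the paper's actual construction: each of m local machines runs kernel gradient descent (here, RKHS gradient ascent on the empirical information potential, a standard minimum error entropy criterion with a Gaussian window G_h), and the m local estimators are averaged into a global predictor. All names (rbf_kernel, local_mee_gradient_descent, distributed_mee), the Gaussian kernels, the fixed step size eta, the iteration count T, and the final mean-residual re-centering are illustrative assumptions; the paper's step sizes, stopping rules, and parameter choices may differ.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Z[j]||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def local_mee_gradient_descent(X, y, T=200, eta=0.5, gamma=1.0, h=0.5):
    """T steps of RKHS gradient ascent on the empirical information
    potential V(f) = mean over pairs i != j of G_h(e_i - e_j), where
    e_i = y_i - f(x_i) and G_h is a Gaussian window (an MEE criterion).
    f is represented as f(x) = sum_i alpha[i] * K(x_i, x)."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(T):
        e = y - K @ alpha                        # residuals e_i
        D = e[:, None] - e[None, :]              # pairwise e_i - e_j
        W = D * np.exp(-D ** 2 / (2 * h ** 2))   # equals -h^2 * G_h'(e_i - e_j)
        # Coefficients of the functional gradient; the pairwise double sum is
        # a U-statistic, the object the paper's concentration bounds control.
        g = (W.sum(axis=1) - W.sum(axis=0)) / (n * (n - 1) * h ** 2)
        alpha += eta * g
    return alpha

def distributed_mee(X, y, m=4, gamma=1.0, **kw):
    """Divide-and-conquer: split the data into m subsets, fit each locally,
    and average the m local estimators into one global predictor."""
    parts = np.array_split(np.random.permutation(len(y)), m)
    models = [(X[i], local_mee_gradient_descent(X[i], y[i], gamma=gamma, **kw))
              for i in parts]

    def f_bar(Xnew):
        return np.mean([rbf_kernel(Xnew, Xi, gamma) @ a for Xi, a in models],
                       axis=0)

    # MEE criteria are shift invariant, so the averaged estimator is only
    # determined up to a constant; re-center it with the mean residual.
    offset = np.mean(y - f_bar(X))
    return lambda Xnew: f_bar(Xnew) + offset

# Example on synthetic data: recover a noisy sine curve.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
predict = distributed_mee(X, y, m=4, T=200, eta=0.5, gamma=1.0, h=0.5)
print(predict(X[:5]))

The pairwise sums in the gradient explain why concentration inequalities for U-statistics are the key technical tool in the paper's error analysis: the empirical MEE risk is an average over sample pairs rather than single samples.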
