Adaptive mixture methods based on Bregman divergences

Bibliographic Details
Published in: Digital Signal Processing, Vol. 23, No. 1, pp. 86–97
Main Authors: Donmez, Mehmet A.; Inan, Huseyin A.; Kozat, Suleyman S.
Format: Journal Article
Language: English
Published: Elsevier Inc, 01.01.2013
More Information
Summary: We investigate adaptive mixture methods that linearly combine outputs of m constituent filters running in parallel to model a desired signal. We use Bregman divergences and obtain certain multiplicative updates to train the linear combination weights under an affine constraint or without any constraints. We use unnormalized relative entropy and relative entropy to define two different Bregman divergences that produce an unnormalized exponentiated gradient update and a normalized exponentiated gradient update on the mixture weights, respectively. We then carry out the mean and the mean-square transient analysis of these adaptive algorithms when they are used to combine outputs of m constituent filters. We illustrate the accuracy of our results and demonstrate the effectiveness of these updates for sparse mixture systems.
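The normalized exponentiated gradient update described in the summary can be sketched as follows. This is a minimal illustration, not the article's exact algorithm: it assumes a squared-error loss on the combined output, and the function name `eg_mixture` and step size `mu` are illustrative choices, not taken from the paper.

```python
import numpy as np

def eg_mixture(d, Y, mu=0.1):
    """Normalized exponentiated gradient (EG) mixture of m filter outputs.

    d  : (T,) desired signal.
    Y  : (T, m) outputs of the m constituent filters at each time step.
    mu : step size (illustrative value, not from the article).

    Returns the sequence of combined predictions and the final weights.
    """
    T, m = Y.shape
    w = np.full(m, 1.0 / m)              # start from uniform weights on the simplex
    preds = np.empty(T)
    for t in range(T):
        preds[t] = w @ Y[t]              # linear combination of filter outputs
        e = d[t] - preds[t]              # instantaneous error
        w = w * np.exp(mu * e * Y[t])    # multiplicative (exponentiated gradient) step
        w /= w.sum()                     # renormalize so the weights stay on the simplex
    return preds, w
```

Dropping the final renormalization line yields the unnormalized variant (an unnormalized exponentiated gradient update), which removes the sum-to-one constraint on the mixture weights. Because the update is multiplicative, weights of unhelpful filters decay toward zero quickly, which is why such updates suit the sparse mixture systems mentioned in the summary.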
ISSN: 1051-2004 (print), 1095-4333 (online)
DOI: 10.1016/j.dsp.2012.09.006