On the dynamics of the LRE algorithm: a distribution learning approach to adaptive equalization

Bibliographic Details
Published in: 1995 International Conference on Acoustics, Speech, and Signal Processing, Vol. 2, pp. 929-932
Main Authors: Adali, T., Sonmez, M.K., Patel, K.
Format: Conference Proceeding
Language: English
Published: IEEE, 1995

Summary: We present the general formulation for adaptive equalization by distribution learning introduced by Adali (see Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, vol. 3, pp. 297-300, April 1994). In this framework, adaptive equalization can be viewed as a parametrized conditional distribution estimation problem in which parameter estimation is achieved by learning on a multilayer perceptron (MLP). Depending on the definition of the conditioning event set, either supervised or unsupervised (blind) algorithms in either recurrent or feedforward networks result. We derive the least relative entropy (LRE) algorithm for binary data communications and analyze its statistical and dynamical properties. In particular, by working in the partial likelihood estimation framework, we show that LRE learning is consistent and asymptotically normal; and, by working within an extension of the well-formed cost functions framework of Wittner and Denker (1988), we show that the algorithm can always recover from convergence at the wrong extreme, as opposed to MSE-based MLPs. We present simulation examples to demonstrate this fact.
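The recovery property claimed in the summary rests on a known contrast between the relative-entropy and squared-error costs for a sigmoid output node: the relative-entropy gradient with respect to the pre-activation reduces to (output minus target) and stays large when the node has saturated at the wrong extreme, whereas the MSE gradient carries an extra factor that vanishes under saturation. The sketch below is illustrative only (the function names and the single-node setting are assumptions, not the paper's formulation):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lre_grad(v, d):
    # Gradient of the relative-entropy (cross-entropy) cost for a
    # sigmoid node w.r.t. the pre-activation v: simplifies to (y - d),
    # which stays bounded away from zero whenever the output y
    # disagrees with the binary target d.
    y = sigmoid(v)
    return y - d

def mse_grad(v, d):
    # Gradient of the squared-error cost w.r.t. v picks up the factor
    # y * (1 - y), which vanishes as y saturates -- so a node stuck at
    # the wrong extreme receives almost no corrective signal.
    y = sigmoid(v)
    return (y - d) * y * (1.0 - y)

# Target d = 1, but the node has saturated near the wrong extreme (v << 0):
v = -10.0
print(lre_grad(v, 1.0))   # close to -1: a strong corrective gradient
print(mse_grad(v, 1.0))   # nearly zero: learning has effectively stalled
```

This is the sense in which an LRE-type cost is "well-formed" in the Wittner-Denker terminology, while the MSE cost is not.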
ISBN: 0780324315, 9780780324312
ISSN: 1520-6149, 2379-190X
DOI: 10.1109/ICASSP.1995.480327