Feature Normalized LMS Algorithms

Bibliographic Details
Published in: 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 806 - 809
Main Authors: Yazdanpanah, Hamed; Apolinario, Jose A.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019

Summary: Recently, much effort has been devoted to exploiting sparsity in physical parameters; in many cases, however, the sparsity is hidden in relations between parameters, and appropriate tools are required to expose it. In this paper, a family of algorithms called feature normalized least-mean-square (F-NLMS) algorithms is proposed to exploit hidden sparsity. The key ingredient of these algorithms is a so-called feature matrix that transforms non-sparse systems into new systems containing sparsity; the revealed sparsity is then exploited by a sparsity-promoting penalty function. Numerical results demonstrate that the F-NLMS algorithms can significantly reduce the steady-state mean-squared error (MSE) and/or improve the convergence rate of the learning process.
ISSN:2576-2303
DOI:10.1109/IEEECONF44664.2019.9048952
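The idea summarized above can be illustrated with a minimal sketch: a standard NLMS filter augmented with a zero-attraction step applied in a feature domain. Everything here is an assumption for illustration, not the paper's exact algorithm: the feature matrix F is taken to be a first-order difference operator (so a constant, non-sparse impulse response becomes sparse under F), and the sparsity-promoting penalty is an l1 term handled by its subgradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system: a constant impulse response, which is NOT
# sparse itself but whose successive differences are exactly sparse
# ("hidden sparsity").
n = 16
w_true = np.ones(n)

# Assumed feature matrix F: first-order difference operator, so that
# F @ w_true is sparse (here, all zeros) even though w_true is not.
F = (np.eye(n) - np.eye(n, k=1))[:-1]  # (n-1) x n difference matrix

mu = 0.5      # NLMS step size
eps = 1e-6    # regularization in the NLMS normalization
rho = 1e-4    # weight of the sparsity-promoting l1 penalty (illustrative)

w = np.zeros(n)
for _ in range(2000):
    x = rng.standard_normal(n)                      # input regressor
    d = x @ w_true + 1e-3 * rng.standard_normal()   # noisy desired signal
    e = d - x @ w                                   # a priori error
    # Standard normalized LMS update ...
    w += mu * e * x / (x @ x + eps)
    # ... plus a zero-attraction step in the feature domain: the
    # subgradient of rho * ||F @ w||_1, pulling F @ w toward zero.
    w -= rho * F.T @ np.sign(F @ w)

msd = float(np.mean((w - w_true) ** 2))
print(msd)  # steady-state mean-squared deviation from the true system
```

The feature-domain attraction only helps when F @ w_true is genuinely sparse; with an ill-chosen F, the extra term biases the estimate instead.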