Feature Normalized LMS Algorithms
| Published in | 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 806–809 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.11.2019 |
| Summary | Recently, great effort has been devoted to exploiting sparsity in physical parameters; in many cases, however, the sparsity is hidden in relations between parameters, and appropriate tools must be used to expose it. In this paper, a family of algorithms called feature normalized least-mean-square (F-NLMS) algorithms is proposed to exploit hidden sparsity. The key ingredient of these algorithms is a so-called feature matrix that transforms non-sparse systems into new systems that are sparse; the revealed sparsity is then exploited by a sparsity-promoting penalty function. Numerical results demonstrate that the F-NLMS algorithms can significantly reduce the steady-state mean-squared error (MSE) and/or improve the convergence rate of the learning process. |
| ISSN | 2576-2303 |
| DOI | 10.1109/IEEECONF44664.2019.9048952 |
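The abstract describes the general recipe: apply a feature matrix F so that F·w is sparse even when the plant w is not, then add a sparsity-promoting penalty in the feature domain. The sketch below illustrates that idea with standard ingredients, not the paper's exact update: an NLMS step combined with a zero-attracting l1 penalty on F·w, using a first-difference feature matrix for a lowpass (constant) plant. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def f_nlms_identify(x, d, M, F, mu=0.5, alpha=1e-4, eps=1e-6):
    """Illustrative feature-domain sparse NLMS (not the paper's exact F-NLMS):
    a normalized LMS step plus a zero-attracting l1 penalty on F @ w,
    promoting sparsity in the feature domain rather than in w itself."""
    w = np.zeros(M)
    for k in range(M - 1, len(x)):
        xk = x[k - M + 1:k + 1][::-1]        # regressor, most recent sample first
        e = d[k] - w @ xk                    # a-priori estimation error
        w += mu * e * xk / (xk @ xk + eps)   # normalized LMS correction
        w -= alpha * F.T @ np.sign(F @ w)    # l1 zero-attractor on the features F w
    return w

# Demo: a constant (non-sparse) plant whose first difference F w is all zeros,
# i.e. the hidden sparsity lives in the feature domain.
np.random.seed(0)
M = 8
w_true = np.ones(M)                              # non-sparse plant
N = 2000
x = np.random.randn(N)                           # white input
d = np.convolve(x, w_true)[:N] + 0.01 * np.random.randn(N)  # noisy plant output
F = np.eye(M - 1, M) - np.eye(M - 1, M, k=1)     # first-difference feature matrix
w_hat = f_nlms_identify(x, d, M, F)
```

With a lowpass plant of this kind, the first-difference features concentrate the energy into a sparse vector, so the zero-attractor nudges the estimate toward solutions with F·w ≈ 0; this is one simple choice of feature matrix, and other transforms suit other plant structures.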