Feature-wise normalization: An effective way of normalizing data
| Published in | Pattern Recognition, Vol. 122, p. 108307 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.02.2022 |
Summary:
- A novel approach, feature-wise normalization (FWN), is presented to normalize data.
- FWN normalizes each feature independently with a method drawn from a pool of normalization methods.
- The collective response of multiple methods mitigates the problems of outliers and dominant features more effectively.
- Antlion optimization is used to search for normalization methods along with the parameters of classifiers.
- FWN outperformed conventional data-wise normalization on four popular machine learning algorithms.
This paper presents a novel feature-wise normalization approach for the effective normalization of data. In this approach, each feature is normalized independently with one method drawn from a pool of normalization methods. This contrasts with the conventional approach, which normalizes all of the data with a single method and, as a result, yields suboptimal performance. Moreover, no single normalization method is guaranteed to generalize or dominate across the different machine learning mechanisms used to solve classification tasks. The proposed approach benefits from the collective response of multiple methods to normalize the data better, as each individual feature becomes a normalization unit. The selection of methods is a combinatorial problem that can be solved with optimization algorithms. For this purpose, Antlion optimization is used, combining the search over normalization methods with the fine-tuning of classifier parameters. Twelve methods, plus the original scale, form the pool, and the resulting data is evaluated with four learning algorithms. Experiments on 18 benchmark datasets show the efficacy of the proposed approach in contrast to conventional normalization.
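The core idea of the abstract, assigning each feature its own normalization method from a pool, can be sketched as below. This is a minimal illustration, not the authors' implementation: the pool holds only three common methods (min-max, z-score, decimal scaling) rather than the paper's twelve, and the per-feature assignment vector is fixed by hand, whereas in the paper it is the quantity searched with Antlion optimization alongside the classifier parameters.

```python
import numpy as np

# Pool of candidate normalization methods; each maps a 1-D feature column
# to its normalized form. (Three illustrative methods, not the full pool.)
POOL = {
    "minmax":  lambda x: (x - x.min()) / (x.max() - x.min()),
    "zscore":  lambda x: (x - x.mean()) / x.std(),
    # Decimal scaling: divide by the smallest power of 10 that bounds |x| by 1.
    "decimal": lambda x: x / (10 ** np.ceil(np.log10(np.abs(x).max()))),
}

def feature_wise_normalize(X, assignment):
    """Normalize each column of X with the method named in `assignment`.

    X          : 2-D array, shape (n_samples, n_features)
    assignment : one method name per feature; in the paper this vector
                 is what the optimizer searches over.
    """
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    for j, name in enumerate(assignment):
        out[:, j] = POOL[name](X[:, j])
    return out

# Toy data: three features on very different scales.
X = np.array([[1.0, 200.0,  3.0],
              [2.0, 400.0,  9.0],
              [3.0, 800.0, 27.0]])
Xn = feature_wise_normalize(X, ["minmax", "zscore", "decimal"])
```

A conventional "data-wise" baseline would apply one pool entry to every column; the combinatorial search described in the abstract instead scores candidate `assignment` vectors by downstream classifier performance.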
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2021.108307