Maximizing minority accuracy for imbalanced pattern classification problems using cost-sensitive Localized Generalization Error Model

Bibliographic Details
Published in: Applied Soft Computing, Vol. 104, p. 107178
Main Authors: Ng, Wing W.Y., Liu, Zhengxi, Zhang, Jianjun, Pedrycz, Witold
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.06.2021

Summary: Traditional machine learning methods may not yield satisfactory generalization capability when samples in different classes are imbalanced. These methods tend to sacrifice the accuracy of the minority class to improve the overall accuracy, disregarding the fact that misclassifications of minority samples usually cost more in many real-world applications. Therefore, we propose a neural network training method via minimization of a cost-sensitive localized generalization error-based objective function (c-LGEM) to achieve a better balance between the errors of the minority and the majority classes. The c-LGEM emphasizes the minimization of the generalization error of the minority class in a cost-sensitive manner. Experimental results obtained on 16 UCI datasets show that neural networks trained with the c-LGEM outperform several existing methods.

Highlights:
• We propose a new model to enhance network performance in local regions of samples.
• A new neural network training method is proposed for imbalanced data.
• The proposed model yields a better decision boundary for imbalanced data.
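The exact c-LGEM objective is defined in the paper itself; as a rough illustration of the idea summarized above, the following PyTorch sketch combines a class-cost-weighted training error with an input-perturbation penalty that stands in for the localized (Q-neighbourhood) sensitivity term. The class costs, Q, number of perturbations, and network architecture are illustrative assumptions, not the authors' formulation.

# Minimal sketch (not the authors' exact c-LGEM): cost-sensitive training of a
# small network where minority-class errors are weighted more heavily and an
# input-perturbation penalty approximates a localized sensitivity term.
import torch
import torch.nn as nn

def cost_sensitive_lge_like_loss(model, x, y, class_costs, q=0.1, n_perturb=8):
    """Class-cost-weighted training error + perturbation-based sensitivity penalty."""
    out = model(x).squeeze(-1)
    per_sample = (out - y) ** 2                 # squared training error per sample
    weights = class_costs[y.long()]             # higher cost for the minority class
    train_err = (weights * per_sample).mean()

    # Stochastic sensitivity: how much the output changes for inputs
    # perturbed within a Q-neighbourhood of each training sample.
    sens = 0.0
    for _ in range(n_perturb):
        noise = (torch.rand_like(x) * 2 - 1) * q
        sens = sens + ((model(x + noise).squeeze(-1) - out) ** 2).mean()
    sens = sens / n_perturb
    return train_err + sens

# Usage: the minority class (label 1) gets a larger cost than the majority class.
model = nn.Sequential(nn.Linear(10, 16), nn.Sigmoid(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
class_costs = torch.tensor([1.0, 5.0])          # [majority cost, minority cost]
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,)).float()
opt.zero_grad()
loss = cost_sensitive_lge_like_loss(model, x, y, class_costs)
loss.backward()
opt.step()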
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2021.107178