Linear classifier design under heteroscedasticity in Linear Discriminant Analysis



Bibliographic Details
Published in: Expert Systems with Applications, Vol. 79, pp. 44–52
Main Authors: Gyamfi, Kojo Sarfo; Brusey, James; Hunt, Andrew; Gaura, Elena
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd, 15.08.2017

Summary:

Highlights:
• We derive a linear classifier for heteroscedastic linear discriminant analysis.
• The proposed scheme efficiently minimises the Bayes error for binary classification.
• A local neighbourhood search is also proposed for non-normal distributions.
• The proposed schemes are experimentally validated on twelve datasets.

Under normality and homoscedasticity assumptions, Linear Discriminant Analysis (LDA) is known to be optimal in terms of minimising the Bayes error for binary classification. In the heteroscedastic case, LDA is not guaranteed to minimise this error. Assuming heteroscedasticity, we derive a linear classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the Bayes error for binary classification. In addition, we propose a local neighbourhood search (LNS) algorithm to obtain a more robust classifier when the data is known to have a non-normal distribution. We evaluate the proposed classifiers on two artificial and ten real-world datasets that cut across a wide range of application areas, including handwriting recognition, medical diagnosis and remote sensing, and compare our algorithm against existing LDA approaches and other linear classifiers. The GLD is shown to outperform the original LDA procedure in terms of classification accuracy under heteroscedasticity. While it compares favourably with other existing heteroscedastic LDA approaches, the GLD requires up to 60 times less training time on some datasets. Our comparison with the support vector machine (SVM) also shows that the GLD, together with the LNS, requires up to 150 times less training time to achieve equivalent classification accuracy on some of the datasets. Thus, our algorithms can provide a cheap and reliable option for classification in many expert systems.
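The abstract's central idea — that when the two class covariances differ, classical LDA no longer minimises the Bayes error, but that error can be minimised directly for a linear rule — can be sketched numerically. The following is an illustrative reconstruction, not the paper's actual GLD algorithm: the synthetic means and covariances are invented for the example, and a general-purpose Nelder–Mead optimiser stands in for the authors' efficient scheme.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Two heteroscedastic Gaussian classes (synthetic example, not from the paper)
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S0 = np.array([[1.0, 0.0], [0.0, 1.0]])
S1 = np.array([[4.0, 1.5], [1.5, 2.0]])   # unequal covariances: LDA is suboptimal
pi0 = pi1 = 0.5                           # equal class priors

def bayes_error(params):
    """Exact error of the linear rule 'assign class 1 iff w.x >= c'
    when each class-conditional density is Gaussian."""
    w, c = params[:2], params[2]
    s0 = np.sqrt(w @ S0 @ w)              # std. dev. of w.x under class 0
    s1 = np.sqrt(w @ S1 @ w)              # std. dev. of w.x under class 1
    # class 0 misclassified when w.x >= c; class 1 misclassified when w.x < c
    return (pi0 * norm.sf((c - w @ mu0) / s0)
            + pi1 * norm.cdf((c - w @ mu1) / s1))

# Classical LDA: pooled covariance, w = S^-1 (mu1 - mu0), midpoint threshold
Sp = pi0 * S0 + pi1 * S1
w_lda = np.linalg.solve(Sp, mu1 - mu0)
c_lda = w_lda @ (mu0 + mu1) / 2
err_lda = bayes_error(np.append(w_lda, c_lda))

# Direct minimisation of the Bayes error, initialised at the LDA solution
res = minimize(bayes_error, np.append(w_lda, c_lda), method="Nelder-Mead")
err_opt = res.fun

print(f"LDA error:       {err_lda:.4f}")
print(f"Minimised error: {err_opt:.4f}")  # never worse than the LDA start point
```

Because the search starts from the LDA solution, the minimised error can only match or improve on it; under homoscedasticity (S0 = S1) the two coincide, which is the optimality property the abstract attributes to classical LDA.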
ISSN: 0957-4174; 1873-6793
DOI: 10.1016/j.eswa.2017.02.039