Empirical wavelet transform based automated alcoholism detecting using EEG signal features


Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 57, p. 101777
Main Authors: Anuragi, Arti; Sisodia, Dilip Singh
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.03.2020

Summary:
•Proposed an empirical wavelet transform (EWT) based automated classification model for alcoholism detection.
•Feature vectors are extracted from EEG signals using the Hilbert–Huang transform (HHT).
•Improved time–frequency representation using HHT.
•Classifier parameter optimization is performed to improve classification performance.
•The proposed model achieved 98.76% average accuracy and a 98% AUC value with the LS-SVM (polynomial kernel) learner.
Electroencephalogram (EEG) signals are widely used to characterize brain states and activity. In this paper, a novel empirical wavelet transform (EWT) based machine learning framework is proposed for the classification of alcoholic and normal subjects using EEG signals. In the framework, adaptive filtering is used to extract time–frequency-domain features via the Hilbert–Huang transform (HHT). A boundary detection method segments the Fourier spectrum of the EEG signals for its scale-space representation. The HHT examines time and frequency information in a single domain using the instantaneous amplitude (IA) and instantaneous frequency (IF), which are associated with the intrinsic mode functions (IMFs). The EWT with HHT extracts statistical features, namely mean, standard deviation, variance, skewness, kurtosis, Shannon entropy, and log entropy, from each IMF. The extracted features are evaluated with a t-test to identify the most significant ones. The significant feature matrix is fed to several classification algorithms: least squares support vector machine (LS-SVM), support vector machine (SVM), Naïve Bayes (NB), and k-Nearest Neighbors (k-NN). Leave-one-out cross-validation (LOOCV) is used for training and testing the models to minimize the chance of overfitting.
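The feature-extraction step described above can be sketched as follows. This is a minimal illustration using NumPy/SciPy, not the authors' implementation: the EWT decomposition itself is assumed to have been performed already, and the `components` list is a hypothetical stand-in for its IMF-like sub-bands. The Hilbert transform yields the instantaneous amplitude and frequency, from which the seven statistical features named in the abstract are computed.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew, kurtosis

def component_features(x, fs=256.0, eps=1e-12):
    """Statistical features of one sub-band, via its analytic signal."""
    analytic = hilbert(x)
    ia = np.abs(analytic)                            # instantaneous amplitude (IA)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (IF), Hz
    p = ia**2 / (np.sum(ia**2) + eps)                # normalized energy distribution
    return {
        "mean": ia.mean(),
        "std": ia.std(),
        "variance": ia.var(),
        "skewness": skew(ia),
        "kurtosis": kurtosis(ia),
        "shannon_entropy": -np.sum(p * np.log(p + eps)),
        "log_entropy": np.sum(np.log(p + eps)),      # log-energy entropy
        "mean_inst_freq": inst_freq.mean(),
    }

# Toy stand-in for two EWT sub-bands of one EEG trial (hypothetical data):
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 256.0)
components = [np.sin(2 * np.pi * 10 * t), rng.standard_normal(t.size)]

# One row of the feature matrix: features of all sub-bands concatenated.
feature_vector = np.hstack(
    [list(component_features(c).values()) for c in components]
)
print(feature_vector.shape)
```

Each EEG trial contributes one such row; stacking rows over trials gives the feature matrix that the t-test screening and the classifiers operate on.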
The results suggest that the highest number of positive samples is obtained using the LS-SVM classifier with the polynomial kernel. The LS-SVM also achieved an average accuracy of 98.75%, sensitivity of 98.35%, specificity of 99.16%, precision of 99.17%, F-measure of 98.76%, and Matthews correlation coefficient (MCC) of 97.50%.
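The LOOCV evaluation protocol described above can be sketched with scikit-learn. Note that scikit-learn provides no LS-SVM, so `SVC(kernel="poly")` stands in for the LS-SVM learner, and the synthetic two-class data below is a hypothetical replacement for the real EEG feature matrix.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for the significant-feature matrix (two classes).
rng = np.random.default_rng(42)
n_per_class, n_features = 30, 14
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_per_class, n_features)),   # "normal" class
    rng.normal(1.5, 1.0, size=(n_per_class, n_features)),   # "alcoholic" class
])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

# Polynomial-kernel SVM as a stand-in for the LS-SVM learner.
model = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))

# Leave-one-out: each sample is held out once; one 0/1 score per sample.
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")
```

LOOCV trains on all samples but one and tests on the held-out sample, repeated for every sample, which is why it is a common choice for the small per-subject datasets typical of EEG studies.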
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2019.101777