Enhanced feature selection method based on regularization and kernel trick for 5G applications and beyond
Published in: Alexandria Engineering Journal, Vol. 61, No. 12, pp. 11589-11600
Main Authors: , , ,
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.12.2022
Summary: Prediction of wireless channel scenarios is fundamental for modern wireless communication systems with diverse propagation conditions. Moreover, the data extracted from a wireless communication channel impulse response (CIR) is complex. In recent research, machine learning (ML) techniques have proven successful in classifying wireless communication scenarios and provide reasonable results. In this paper, a new enhanced feature selection method is proposed to improve the training model and the classification performance of the conventional model. This improvement is achieved through the concept of regularization, in which the best features are selected before training the model under any propagation environment. The adoption of regularization leads to a high Total Explained Variance (TEV) during kernel Principal Component Analysis (k-PCA). As a consequence, two principal features are used instead of three. The proposed model has high generalization ability since it reduces feature dimensionality (and hence computational complexity) and generally enhances ML classification performance. Experimental simulations are executed to compare the proposed model with the conventional one in terms of accuracy, precision, and recall. The accuracy is increased from 97% to 99%, from 96% to 99%, from 89% to 97%, and from 90% to 98% for k-nearest neighbor (k-NN), support vector machine (SVM), k-Means, and Gaussian mixture model (GMM), respectively.
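The pipeline the abstract outlines — kernel PCA with an explained-variance (TEV) check, followed by a simple classifier such as k-NN — can be sketched as below. This is an illustrative NumPy reconstruction on synthetic stand-in data, not the authors' implementation: the RBF kernel, the `gamma` value, and the two-cluster toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for CIR-derived features: two well-separated classes
# (hypothetical data; the paper's features come from channel impulse responses)
X0 = rng.normal(loc=-2.0, scale=0.5, size=(40, 5))
X1 = rng.normal(loc=+2.0, scale=0.5, size=(40, 5))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel: exp(-gamma * ||a - b||^2)
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.1):
    """Project X onto the top kernel principal components and report
    the Total Explained Variance (TEV) those components capture."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J        # double-center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)           # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]    # flip to descending
    tev = vals[:n_components].sum() / vals[vals > 1e-12].sum()
    Z = vecs[:, :n_components] * np.sqrt(np.clip(vals[:n_components], 0, None))
    return Z, tev

# Embed all samples, then evaluate a simple 1-NN classifier on a held-out split
Z, tev = kernel_pca(X, n_components=2)
train = np.r_[0:30, 40:70]   # first 30 samples of each class
test = np.r_[30:40, 70:80]   # last 10 samples of each class
d = ((Z[test][:, None, :] - Z[train][None, :, :]) ** 2).sum(-1)
pred = y[train][d.argmin(axis=1)]
acc = (pred == y[test]).mean()
print(f"TEV (2 components): {tev:.3f}, 1-NN accuracy: {acc:.2f}")
```

The TEV value is the quantity the paper monitors: when the retained components explain most of the kernel-space variance, two principal features suffice in place of three, shrinking the classifier's input dimensionality.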
ISSN: 1110-0168
DOI: 10.1016/j.aej.2022.05.024