Bayesian Optimization Machine Learning Models for True and Fake News Classification

Bibliographic Details
Published in: 2023 IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Vol. 6, pp. 1530-1533
Main Authors: Zhao, Gaohua; Song, Shouyou; Lin, Hao; Jiang, Wei
Format: Conference Proceeding
Language: English
Published: IEEE, 24.02.2023

More Information
Summary: The performance of a machine learning algorithm depends largely on its hyperparameters, which have a significant influence on accuracy. As algorithm complexity increases, the number of candidate hyperparameter settings grows, so selecting the right hyperparameters for a given problem quickly and accurately has become a popular area of research. This paper uses a Bayesian optimization approach to assist hyperparameter selection for machine learning, and validates it on the task of binary classification of true and fake news. The paper analyses the principles of Bayesian optimization and how it can be applied to machine learning model parameter selection. The machine learning models used are K-Nearest Neighbours (KNN), Random Forest, and Gradient Boosted Decision Trees (GBDT): three models commonly used for binary classification problems, with differing numbers and kinds of hyperparameters. The experimental results show that adjusting the original hyperparameters of these models with Bayesian optimization can substantially improve classification accuracy. The research can also provide ideas for other, similar hyperparameter-selection work.
ISSN: 2693-3128
DOI: 10.1109/ITNEC56291.2023.10082424
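The approach described in the summary can be sketched in a few lines: a Bayesian optimization loop that fits a Gaussian process surrogate to previously evaluated hyperparameter settings and picks the next setting by expected improvement. This is a minimal illustration only, tuning KNN's n_neighbors on a synthetic binary dataset; the dataset, search range, and iteration counts are assumptions for the sketch and are not taken from the paper.

```python
# Minimal sketch of Bayesian optimization for one KNN hyperparameter.
# The synthetic dataset stands in for the paper's true/fake news data.
import numpy as np
from scipy.stats import norm
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(k):
    """Cross-validated accuracy for a given number of neighbours k."""
    model = KNeighborsClassifier(n_neighbors=int(k))
    return cross_val_score(model, X, y, cv=5).mean()

candidates = np.arange(1, 51).reshape(-1, 1)             # search space for k
rng = np.random.default_rng(0)
tried = list(rng.choice(50, size=3, replace=False) + 1)  # random initial points
scores = [objective(k) for k in tried]

for _ in range(10):                                      # BO iterations
    # Fit a Gaussian process surrogate to the evaluations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(tried).reshape(-1, 1), scores)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = max(scores)
    imp = mu - best
    with np.errstate(divide="ignore", invalid="ignore"):
        z = imp / sigma
        ei = imp * norm.cdf(z) + sigma * norm.pdf(z)     # expected improvement
        ei[sigma == 0.0] = 0.0
    k_next = int(candidates[np.argmax(ei)][0])
    if k_next in tried:                                  # avoid re-evaluating
        k_next = int(candidates[rng.integers(50)][0])
    tried.append(k_next)
    scores.append(objective(k_next))

best_k = int(tried[int(np.argmax(scores))])
print(best_k, max(scores))
```

In practice one would tune several hyperparameters jointly (as the paper does for KNN, Random Forest, and GBDT); a library such as scikit-optimize wraps this same surrogate-plus-acquisition loop behind a single search interface.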