Enhancing Multi-layer Perceptron Training with Prairie Dog Optimizer: Advancing Classification Accuracy in Medical Diagnostics
| Published in | 2024 IEEE Silchar Subsection Conference (SILCON 2024), pp. 1 - 6 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 15.11.2024 |
| Summary | Multi-layer Perceptron (MLP) networks are widely used for classification and regression due to their flexibility and capacity. However, their performance depends heavily on the optimization algorithm used during training, with traditional backpropagation often failing to achieve globally optimal solutions. This paper presents a Prairie Dog Optimization (PDO) based training technique for MLPs, aimed at enhancing accuracy, especially in classification tasks. PDO improves MLP optimization by using a dynamic, population-based search strategy to navigate complex loss landscapes effectively, leading to better convergence toward global optima. Experiments on the Iris, diabetes, and breast cancer datasets demonstrate that PDO-MLP significantly outperforms other optimization-based training methods, including the flower pollination algorithm (FPA), ant colony optimization (ACO), standard gradient-descent backpropagation (BP), and particle swarm optimization (PSO). Notably, PDO-MLP achieved 97.30% accuracy on the Iris dataset, 82.6% on diabetes classification, and 96.41% on breast cancer diagnosis, demonstrating its effectiveness for various real-world applications. |
|---|---|
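To make the idea in the summary concrete, below is a minimal, illustrative sketch of training MLP weights with a population-based search. The dataset, network size, hyperparameters, and the move rule (drift toward the best-so-far plus Gaussian exploration) are all invented for illustration; they are generic stand-ins, not the paper's actual prairie dog update equations or benchmark data.

```python
import math
import random

random.seed(0)

# Toy 2-D dataset (invented for illustration; the paper evaluates on
# Iris, diabetes, and breast cancer data): class 1 = inside a circle.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(150)]
y = [1 if px * px + py * py < 0.5 else 0 for px, py in X]

H = 5                    # hidden units (arbitrary choice)
DIM = 2 * H + H + H + 1  # input weights + hidden biases + output weights + output bias

def forward(w, x):
    """One-hidden-layer MLP: tanh hidden layer, sigmoid output."""
    h = [math.tanh(w[2 * j] * x[0] + w[2 * j + 1] * x[1] + w[2 * H + j])
         for j in range(H)]
    s = sum(w[3 * H + j] * h[j] for j in range(H)) + w[4 * H]
    return 1.0 / (1.0 + math.exp(-s))

def fitness(w):
    """Classification accuracy of weight vector w on the toy data."""
    return sum((forward(w, x) > 0.5) == (t == 1) for x, t in zip(X, y)) / len(X)

# Population-based weight search: each candidate weight vector drifts
# toward the best-so-far with added Gaussian noise for exploration
# (a generic stand-in for PDO's foraging/burrow-building moves).
pop = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(15)]
fit = [fitness(w) for w in pop]
best_i = max(range(len(pop)), key=fit.__getitem__)
best, best_fit = pop[best_i][:], fit[best_i]

for _ in range(40):
    for i, w in enumerate(pop):
        trial = [wi + 0.5 * random.random() * (bi - wi) + random.gauss(0, 0.3)
                 for wi, bi in zip(w, best)]
        f = fitness(trial)
        if f >= fit[i]:          # greedy acceptance per individual
            pop[i], fit[i] = trial, f
            if f > best_fit:     # track the global best
                best, best_fit = trial[:], f

print(f"best training accuracy: {best_fit:.2f}")
```

The key contrast with gradient-based backpropagation is that the search uses only fitness evaluations, no gradients, so it can escape regions where gradient descent stalls; the trade-off is many more forward passes per update.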
DOI: 10.1109/SILCON63976.2024.10910597