The Comparison of Activation Functions in Feature Extraction Layer using Sharpen Filter

Bibliographic Details
Published in: Journal of Applied Engineering and Technological Science (Online), Vol. 6, No. 2, pp. 1254-1267
Main Authors: Rachmawati, Oktavia Citra Resmi; Barakbah, Ali Ridho; Karlita, Tita
Format: Journal Article
Language: English
Published: Yayasan Pendidikan Riset dan Pengembangan Intelektual (YRPI), 08.06.2025

Summary: Activation functions are a critical component in the feature extraction layer of deep learning models, influencing their ability to identify patterns and extract meaningful features from input data. This study investigates the impact of five widely used activation functions—ReLU, SELU, ELU, sigmoid, and tanh—on convolutional neural network (CNN) performance when combined with sharpening filters for feature extraction. Using a custom-built CNN program module within the researchers’ machine learning library, Analytical Libraries for Intelligent-computing (ALI), the performance of each activation function was evaluated by analyzing mean squared error (MSE) values obtained during the training process. The findings revealed that ReLU consistently outperformed other activation functions by achieving the lowest MSE values, making it the most effective choice for feature extraction tasks using sharpening filters. This study provides practical and theoretical insights, highlighting the significance of selecting suitable activation functions to enhance CNN performance. These findings contribute to optimizing CNN architectures, offering a valuable reference for future work in image processing and other machine-learning applications that rely on feature extraction layers. Additionally, this research underscores the importance of activation function selection as a fundamental consideration in deep learning model design.
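
The abstract describes convolving input images with a sharpening filter, passing the result through each candidate activation function, and comparing the resulting training MSE. The authors' CNN module in the ALI library is not reproduced here; the following is a minimal NumPy sketch of that comparison, in which the kernel coefficients, the helper names (conv2d, feature_map, mse), and the placeholder data are illustrative assumptions rather than the paper's implementation.

import numpy as np

# Illustrative 3x3 sharpening kernel; the paper's exact filter
# coefficients are assumed, not quoted.
SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=float)

# The five activation functions compared in the study.
ACTIVATIONS = {
    "relu":    lambda x: np.maximum(0.0, x),
    "elu":     lambda x: np.where(x > 0, x, np.expm1(x)),
    "selu":    lambda x: 1.0507 * np.where(x > 0, x, 1.67326 * np.expm1(x)),
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh":    np.tanh,
}

def conv2d(image, kernel):
    # Valid 2D convolution of a single-channel image with a small kernel.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def feature_map(image, activation):
    # Sharpen-filter convolution followed by the chosen activation.
    return ACTIVATIONS[activation](conv2d(image, SHARPEN_KERNEL))

def mse(prediction, target):
    # Mean squared error, the comparison metric reported in the study.
    return float(np.mean((prediction - target) ** 2))

# Placeholder data: a random "image" and target feature map, used only to
# show how the per-activation MSE comparison would be run.
image = np.random.rand(28, 28)
target = np.random.rand(26, 26)
for name in ACTIVATIONS:
    print(name, mse(feature_map(image, name), target))

In the study itself the MSE is tracked over training rather than on random placeholders, but the loop above reflects the reported comparison: the activation yielding the lowest MSE (ReLU, per the abstract) is judged most effective for sharpen-filter feature extraction.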
ISSN: 2715-6087; 2715-6079
DOI: 10.37385/jaets.v6i2.5895