Intelligent real-time Arabic sign language classification using attention-based inception and BiLSTM
| Published in | *Computers & Electrical Engineering*, Vol. 95, p. 107395 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Amsterdam: Elsevier Ltd (Elsevier BV), 01.10.2021 |
Summary:
- A novel bio-inspired attention-based inception architecture is proposed that adapts to different spatial contexts using convolution filters of different sizes. Since the characteristics of each dataset are unique, the attention mechanism helps the model focus on the most discriminative features to improve classification performance.
- The shallow inception model is designed with a two-layer attention mechanism, using fewer layers but a large number of convolution filters, which addresses the overfitting caused by small dataset sizes.
- An LSTM-based recurrent neural network (RNN) module extracts temporal features after the inception module is applied.
- The proposed model is lightweight, with fewer parameters and less processing time.
- The proposed model achieves good performance on both dynamic and static signs and gestures.
Bio-inspired deep learning models have revolutionized sign language classification, achieving extraordinary accuracy and human-like video understanding. Recognizing and classifying sign language videos in real time is challenging because the duration and speed of each sign vary across subjects, the video background is dynamic in most cases, and the classification result must be produced in real time. This study proposes a model that combines a convolutional neural network (CNN) Inception module with an attention mechanism for extracting spatial features and a bidirectional LSTM (Bi-LSTM) for extracting temporal features. The proposed model is tested on datasets with highly variable characteristics, such as different clothing, variable lighting, and variable distance from the camera. Real-time classification produces significantly earlier detections while matching the performance of offline operation. The proposed model has fewer parameters and fewer deep learning layers, and requires significantly less processing time than state-of-the-art models.
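The core idea of the abstract — parallel convolution filters of different sizes whose outputs are fused by attention weights — can be illustrated with a toy sketch. This is not the authors' implementation; the filter sizes, the random filters, and the global-average-pooling attention scores are all assumptions made for illustration, and the Bi-LSTM temporal stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def inception_attention_block(x, kernel_sizes=(1, 3, 5)):
    """Toy inception-style block on a 1-D feature row.

    Parallel convolutions with different filter sizes capture different
    spatial contexts; a softmax attention weight per branch then
    emphasises the most informative scale before fusion.
    """
    branches = []
    for k in kernel_sizes:
        w = rng.standard_normal(k)              # random filter, illustration only
        branches.append(np.convolve(x, w, mode="same"))
    feats = np.stack(branches)                  # (n_branches, seq_len)
    scores = feats.mean(axis=1)                 # pooled score per branch
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over branches
    return (weights[:, None] * feats).sum(axis=0)    # attention-weighted fusion

x = rng.standard_normal(16)                     # one spatial feature row of a frame
y = inception_attention_block(x)
print(y.shape)                                  # (16,)
```

In the paper's pipeline, such attention-fused spatial features would be computed per frame and then passed to the Bi-LSTM to model the temporal dynamics of a sign.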
Figure: The Inception model with an attention mechanism with two attention blocks. [Display omitted]
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2021.107395