Sign Language Recognition: A Deep Survey

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 164, p. 113794
Main Authors: Rastgoo, Razieh; Kiani, Kourosh; Escalera, Sergio
Format: Journal Article
Language: English
Published: New York: Elsevier Ltd (Elsevier BV), 01.02.2021

Summary: Sign language, as a distinct form of communication, is important to large groups of people in society. Each sign language contains many different signs, with variability in hand shape, motion profile, and the position of the hand, face, and body parts contributing to each sign. Visual sign language recognition is therefore a complex research area in computer vision. Many models have been proposed by different researchers, with significant improvements from deep learning approaches in recent years. In this survey, we review vision-based models for sign language recognition that use deep learning approaches from the last five years. While the overall trend of the proposed models indicates significant improvement in recognition accuracy, some challenges remain to be solved. We present a taxonomy to categorize the proposed models for isolated and continuous sign language recognition, discussing applications, datasets, hybrid models, complexity, and future lines of research in the field.
• We perform a comprehensive review of recent works on sign language recognition.
• We define a taxonomy to group existing works and discuss their pros and cons.
• We discuss features, modalities, evaluation metrics, applications, and datasets.
• Different challenges and future lines of research in the field are presented.
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2020.113794