Real-time Arabic avatar for deaf-mute communication enabled by deep learning sign language translation

Bibliographic Details
Published in: Computers & Electrical Engineering, Vol. 119, p. 109475
Main Authors: Talaat, Fatma M.; El-Shafai, Walid; Soliman, Naglaa F.; Algarni, Abeer D.; Abd El-Samie, Fathi E.; Siam, Ali I.
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.10.2024
Summary: Deaf-mute individuals encounter substantial difficulties in their daily lives due to communication barriers, which can limit social contact, access to knowledge, and employment opportunities. Recent studies on sign language interpretation have helped narrow the communication gap between deaf-mute and hearing people. In this paper, a real-time Arabic avatar system is developed to help deaf-mute people communicate with others. The system translates text or spoken input into Arabic Sign Language (ArSL) movements performed by the avatar, using deep-learning-based translation. Dynamic generation of the avatar movements enables smooth and natural real-time communication. To improve the precision and effectiveness of ArSL translation, the study employs a state-of-the-art deep learning model based on YOLOv8 to recognize and interpret sign language gestures in real time. The avatar is trained on three diverse datasets of Arabic sign language images, namely Sign-language-detection Image (SLDI), Arabic Sign Language (ArSL), and RGB Arabic Alphabet Sign Language (AASL), enabling it to accurately capture the nuances and variations of hand movements. The best recognition accuracy of the proposed approach was 99.4%, achieved on the AASL dataset. The experimental results demonstrate that the proposed approach can help deaf-mute people communicate more effectively and easily within Arabic-speaking communities.
ISSN: 0045-7906
DOI: 10.1016/j.compeleceng.2024.109475