Empowering Deaf with Indian Sign Language Interpreter using Deep Learning

Bibliographic Details
Published in: 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon), pp. 1-6
Main Authors: Nimbalkar, Shivanjali Vijay; Vaidya, Soham Nilesh; Gade, Mayuri Mahesh; Hagare, Pranali Sandip; Shendage, Pradip N.
Format: Conference Proceeding
Language: English
Published: IEEE, 25.04.2024

Summary: In the ever-changing global landscape, the diversity of sign languages, with over 140 distinct variants worldwide, poses a significant challenge in developing universally applicable recognition models. This complexity is compounded by the dynamic nature of sign languages, which requires constant adaptation to incorporate emerging signs tied to technological advancements. This research aims to improve communication accessibility for the deaf community in India through a real-time Indian Sign Language (ISL) recognition system. Specifically addressing the communication gap experienced by deaf children, our approach leverages deep learning, computer vision, and neural networks to convert ISL gestures into text. The system relies on a carefully curated dataset that includes emergency words, Devanagari script, and the English alphabet in ISL. Using the MediaPipe library and the cvzone hand tracking module, we construct the SignVaria system and implement dynamic hand gesture recognition modules with advanced deep learning architectures, including Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. The goal is to overcome regional specificity challenges in sign language recognition, ensuring the integration of cutting-edge technologies for enhanced robustness and accessibility for the target demographic. Our research yielded the most impressive results with the CNN, which achieved an accuracy of 96.1%.
DOI:10.1109/MITADTSoCiCon60330.2024.10575064
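The abstract describes a pipeline that extracts hand landmarks (via MediaPipe / cvzone) before classifying gestures with a CNN or LSTM. The paper itself does not publish code, so the following is only an illustrative sketch of one common preprocessing step in such systems: normalizing the 21 MediaPipe-style (x, y, z) hand landmarks into a translation- and scale-invariant feature vector suitable for a downstream classifier. The function name and the synthetic input are assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Turn 21 (x, y, z) hand landmarks into a 63-dim feature vector.

    Translation invariance: subtract the wrist (landmark 0).
    Scale invariance: divide by the largest absolute coordinate.
    This is a common preprocessing step before a gesture classifier;
    the exact scheme used in the paper is not specified.
    """
    pts = np.asarray(landmarks, dtype=np.float32).reshape(21, 3)
    pts = pts - pts[0]                  # wrist-relative coordinates
    scale = float(np.abs(pts).max()) or 1.0  # avoid division by zero
    return (pts / scale).flatten()      # shape (63,)

# Illustrative usage on synthetic landmarks (no camera / MediaPipe needed)
rng = np.random.default_rng(0)
features = normalize_landmarks(rng.random((21, 3)))
print(features.shape)  # (63,)
```

A vector like this could feed either a per-frame classifier (CNN over static signs) or, stacked across frames, a sequence model (LSTM over dynamic signs), mirroring the two architectures the abstract mentions.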