Real-Time Sign Language Gesture and Facial Expressions Detection Method to Assist the Speech and Hearing-impaired

Bibliographic Details
Published in: 2024 IEEE International Conference for Women in Innovation, Technology & Entrepreneurship (ICWITE), pp. 477 - 483
Main Authors: Venkatesh, Ananya; Vaibhavi, M.; Aishwarya, R.; Moghis, Adeeba; Padmapriya, V.
Format: Conference Proceeding
Language: English
Published: IEEE, 16.02.2024
Summary: Sign language is essential for communicating with people who have speech or hearing impairments, and incorporating facial expressions can make it more expressive and inclusive. This work uses the SSD MobileNet model to enable real-time recognition of sign-language gestures and facial expressions. The model is trained on a diverse custom dataset using computer-vision techniques and achieves accurate results. Its precision in recognizing both gestures and facial expressions highlights its originality and its potential to improve communication. The research promises to refine sign-language recognition in communication devices and to create a more accessible future for individuals with speech and hearing impairments, addressing immediate communication needs while paving the way for continued advances in assistive technologies.
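The summary describes an SSD MobileNet pipeline: a camera frame is resized to the detector's fixed input size, the network emits class/confidence/box triples, and low-confidence detections are filtered before labels are shown. The paper's trained model, dataset, and label set are not public, so the sketch below stubs the network forward pass; the label map, the 0.5 confidence threshold, and the 300x300 input size are illustrative assumptions (300x300 is the conventional SSD300 input), not the authors' values.

```python
# Illustrative sketch of an SSD-style real-time detection loop.
# run_detector() is a stub standing in for the trained SSD MobileNet
# forward pass; LABELS and CONF_THRESHOLD are hypothetical.

CONF_THRESHOLD = 0.5  # assumed cutoff; the paper does not publish one

# Hypothetical label map mixing gesture and expression classes.
LABELS = {1: "hello", 2: "thank_you", 3: "smile", 4: "neutral"}

def preprocess(frame, size=300):
    """SSD expects a fixed square input (300x300 for SSD300). Only the
    bookkeeping is modeled here: the frame is 'resized' and its original
    dimensions kept so boxes can be mapped back to pixel coordinates."""
    return {"height": size, "width": size,
            "orig": (frame["width"], frame["height"])}

def run_detector(_tensor):
    """Stub forward pass. Real SSD heads return one (class_id,
    confidence, box) triple per anchor, boxes in normalized [0,1]."""
    return [
        (1, 0.92, (0.10, 0.20, 0.40, 0.60)),  # confident gesture
        (3, 0.81, (0.45, 0.05, 0.70, 0.30)),  # confident expression
        (2, 0.30, (0.00, 0.00, 0.10, 0.10)),  # below threshold: dropped
    ]

def postprocess(detections, frame):
    """Drop detections below the confidence threshold and scale the
    surviving normalized boxes back to original-frame pixels."""
    w, h = frame["width"], frame["height"]
    results = []
    for class_id, conf, (x1, y1, x2, y2) in detections:
        if conf < CONF_THRESHOLD:
            continue
        results.append({
            "label": LABELS.get(class_id, "unknown"),
            "confidence": conf,
            "box": (round(x1 * w), round(y1 * h),
                    round(x2 * w), round(y2 * h)),
        })
    return results

frame = {"width": 640, "height": 480}     # one webcam frame
tensor = preprocess(frame)
detections = postprocess(run_detector(tensor), frame)
for d in detections:
    print(d["label"], d["confidence"], d["box"])
```

In a real-time system this loop runs per captured frame, and the surviving labeled boxes are overlaid on the video feed; the low-confidence third detection is discarded by the threshold check.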
DOI: 10.1109/ICWITE59797.2024.10503532