Real-Time Sign Language Gesture and Facial Expressions Detection Method to Assist the Speech and Hearing-impaired
Published in: 2024 IEEE International Conference for Women in Innovation, Technology & Entrepreneurship (ICWITE), pp. 477-483
Format: Conference Proceeding
Language: English
Publisher: IEEE
Published: 16.02.2024
Summary: Sign language is essential for communicating with people who have speech or hearing impairments, and incorporating facial expressions can make it more expressive and inclusive. This work uses the SSD MobileNet model for real-time recognition of sign language gestures and facial expressions. The model is trained on a diverse custom dataset using computer vision techniques and achieves accurate recognition of both signs and expressions, highlighting its originality and potential to enhance communication. The research promises to refine sign language techniques in communication devices, improving daily life for individuals with speech and hearing impairments and paving the way for continued advances in assistive technologies.
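The record gives no implementation details beyond naming SSD MobileNet. For illustration only, single-shot detectors of that family emit many candidate boxes with confidence scores, which are typically reduced by confidence thresholding followed by non-maximum suppression (NMS). Below is a minimal NumPy sketch of that generic postprocessing step; the function names, thresholds, and toy data are assumptions, not the authors' code.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def postprocess(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Drop low-confidence detections, then apply greedy NMS."""
    keep = scores >= score_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)  # highest confidence first
    selected = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in selected):
            selected.append(i)
    return boxes[selected], scores[selected]

# Toy example: two overlapping detections of one gesture plus a weak box.
boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.3])
kept_boxes, kept_scores = postprocess(boxes, scores)
print(len(kept_boxes))  # the overlapping pair collapses to one box
```

In a real-time pipeline, this step would run on each webcam frame after the detector's forward pass, keeping only one box per gesture or face before classification and display.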
DOI: 10.1109/ICWITE59797.2024.10503532