Transforming Sign Language Recognition with VGG16 and ResNet50 Deep Learning Models
Published in: 2024 Asian Conference on Intelligent Technologies (ACOIT), pp. 1-5
Format: Conference Proceeding
Language: English
Publisher: IEEE
Published: 06.09.2024
ISBN: 9798350374933
DOI: 10.1109/ACOIT62457.2024.10939471
Summary: Sign language recognition (SLR) is essential for facilitating communication for individuals who are deaf or hard of hearing. This study examines the effectiveness of two popular deep learning models, VGG16 and ResNet50, in SLR applications. By employing the VGG16 and ResNet50 frameworks, we achieved high accuracy rates of 99.92% and 99.95%, respectively, in recognizing sign language gestures. Our findings demonstrate that these models are highly effective in interpreting hand movements and gestures, thus enhancing communication for sign language users. This research leverages advanced deep learning techniques to advance SLR systems, offering significant potential for improving inclusive communication and accessibility.