Transforming Sign Language Recognition with VGG16 and ResNet50 Deep Learning Models

Bibliographic Details
Published in: 2024 Asian Conference on Intelligent Technologies (ACOIT), pp. 1 - 5
Main Authors: Chauhan, Shanvi; Gill, Kanwarpartap Singh; Chauhan, Rahul; Pokhariyal, Hemant Singh
Format: Conference Proceeding
Language: English
Published: IEEE, 06.09.2024
ISBN: 9798350374933
DOI: 10.1109/ACOIT62457.2024.10939471

More Information
Summary: Sign language recognition (SLR) is essential for facilitating communication for individuals who are deaf or hard of hearing. This study examines the effectiveness of two popular deep learning models, VGG16 and ResNet50, in SLR applications. By employing the VGG16 and ResNet50 frameworks, we achieved high accuracy rates of 99.92% and 99.95%, respectively, in recognizing sign language gestures. Our findings demonstrate that these models are highly effective in interpreting hand movements and gestures, thus enhancing communication for sign language users. This research leverages advanced deep learning techniques to advance SLR systems, offering significant potential for improving inclusive communication and accessibility.