Hand and Sign Recognition of Alphabets Using YOLOv5

Bibliographic Details
Published in: SN Computer Science, Vol. 5, No. 3, p. 311
Main Authors: Poornima, I. Gethzi Ahila; Priya, G. Sakthi; Yogaraja, C. A.; Venkatesh, R.; Shalini, P.
Format: Journal Article
Language: English
Published: Singapore, Springer Nature Singapore, 09.03.2024
Publisher: Springer Nature B.V.
Summary: Deaf and mute people communicate using visual gestures of the hands, face, and body. Although various ways to communicate exist, there are still few compatible technologies that link this community to the rest of the world, and hand gestures remain one of their main forms of communication. With image classification and machine learning, computers can recognize sign language, which can then be translated for hearing people. Fingerspelled alphabets are a fundamental part of learning sign language. This work focuses primarily on recognizing the American Sign Language alphabet, the foundational stage of teaching hand gestures. YOLO (You Only Look Once) version 5 is used for this object-recognition task. Most current approaches rely on conventional machine learning algorithms, CNNs, or electromyography; however, they fall short of YOLO's accuracy. YOLOv5 has been shown to outperform its previous version in both accuracy and speed.
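The summary describes framing alphabet recognition as YOLOv5 object detection, where each fingerspelled letter is a detection class. A minimal sketch of the post-processing step is shown below; the class list, confidence threshold, and tuple layout are illustrative assumptions, not the paper's actual configuration (static-image ASL datasets commonly omit the dynamic letters J and Z):

```python
import string

# Assumed class list: one class per static ASL fingerspelling letter,
# excluding the dynamic gestures J and Z (an assumption, not confirmed
# by the paper).
ASL_CLASSES = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

def letters_from_detections(detections, conf_threshold=0.5):
    """Map YOLO-style (class_id, confidence, bbox) tuples to letters.

    Detections below the confidence threshold are dropped; the rest are
    ordered left to right by bounding-box x_min, so multiple hands in
    one frame spell a word in reading order.
    """
    kept = [d for d in detections if d[1] >= conf_threshold]
    kept.sort(key=lambda d: d[2][0])  # x_min of (x_min, y_min, x_max, y_max)
    return [ASL_CLASSES[class_id] for class_id, _, _ in kept]

# Example: two confident detections and one filtered out by threshold.
dets = [
    (7, 0.91, (120, 40, 200, 140)),  # class 7 -> "H"
    (8, 0.88, (10, 42, 95, 138)),    # class 8 -> "I"
    (0, 0.30, (300, 50, 360, 120)),  # below threshold, dropped
]
print(letters_from_detections(dets))  # → ['I', 'H']
```

In a full pipeline these tuples would come from a trained YOLOv5 model's output after non-maximum suppression; this sketch covers only the mapping from class indices to letters.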
ISSN: 2662-995X
EISSN: 2661-8907
DOI: 10.1007/s42979-024-02628-4