Hand and Sign Recognition of Alphabets Using YOLOv5
| Published in | SN Computer Science, Vol. 5, No. 3, p. 311 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Singapore: Springer Nature Singapore (Springer Nature B.V.), 09.03.2024 |
| Subjects | |
| Summary | Deaf and mute people communicate through visual gestures of the hands, face, and body. Although many modes of communication exist, no compatible technology is yet available to link this community with the rest of the world; hand gestures remain one of their main forms of communication. With image classification and machine learning, computers can recognize sign language, which can then be translated for other people. Finger-spelled alphabets are a fundamental part of learning sign language. This work focuses on recognizing the American Sign Language alphabet, the foundational stage of teaching hand gestures, and uses YOLO (You Only Look Once) version 5 for this object-detection task. Most existing approaches rely on classical machine-learning algorithms, CNNs, or electromyography, but they fall short of YOLO's accuracy. YOLOv5 has been shown to outperform its previous version in both accuracy and speed. |
| ISSN | 2662-995X; 2661-8907 |
| DOI | 10.1007/s42979-024-02628-4 |
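As a rough illustration of the pipeline the summary describes, the sketch below maps YOLOv5-style detections (class index, confidence) to alphabet letters. The class ordering, the confidence threshold, and the `decode_detections` helper are assumptions for illustration, not details taken from the paper.

```python
import string

# Assumed class list: one detection class per letter A-Z.
# (The paper's actual class set is not specified in this record.)
CLASSES = list(string.ascii_uppercase)

def decode_detections(detections, conf_threshold=0.5):
    """Convert (class_index, confidence) pairs into letters,
    keeping only detections at or above the confidence threshold."""
    letters = []
    for cls_idx, conf in detections:
        if conf >= conf_threshold:
            letters.append(CLASSES[cls_idx])
    return letters

if __name__ == "__main__":
    # In practice, the detections would come from a YOLOv5 model,
    # e.g. loaded via torch.hub.load('ultralytics/yolov5', 'yolov5s')
    # and fine-tuned on hand-sign images; here we use dummy values.
    dets = [(0, 0.92), (11, 0.81), (3, 0.30)]
    print(decode_detections(dets))  # ['A', 'L']
```

The thresholding step mirrors the usual post-processing of single-shot detectors: each bounding box carries a class score, and low-confidence boxes are discarded before the predicted letters are read off.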