Automatic sign language translation system using neural network technologies and 3D animation
Published in | Sučasnij stan naukovih doslìdženʹ ta tehnologìj v promislovostì (Online) no. 4(26); pp. 108 - 121 |
---|---|
Main Authors | Shovkovyi, Yevhenii; Grynyova, Olena; Udovenko, Serhii; Chala, Larysa |
Format | Journal Article |
Language | English |
Published | 27.12.2023 |
ISSN | 2522-9818 (print); 2524-2296 (electronic) |
DOI | 10.30837/ITSSI.2023.26.108 |
Abstract | The implementation of automatic sign language translation software is an important task in the social inclusion of people with hearing impairment. Social inclusion for people with hearing disabilities is an acute problem that must be addressed through the development of IT technologies and legislative initiatives that ensure the rights of people with disabilities and their equal opportunities. This substantiates the relevance of research into assistive technologies, specifically software tools that support the social inclusion of people with severe hearing impairment in society. The subject of the research is methods of automated sign language translation using intelligent technologies. The purpose of the work is the development and study of sign language automation methods to improve the quality of life of people with hearing impairments, in accordance with the "Goals of Sustainable Development of Ukraine" (the "Reduction of Inequality" part). The main tasks of the research are the development and testing of methods for converting sign language into text, converting text into sign language, and automating translation from one sign language to another using modern intelligent technologies. Neural network modeling and 3D animation methods were used to solve these problems. The following results were obtained: the main problems and tasks of social inclusion for people with hearing impairments were identified; a comparative analysis of modern methods and software platforms for automatic sign language translation was carried out; a system was proposed and investigated that combines the SL-to-Text method, the Text-to-SL method using 3D animation to generate sign language concepts, a method for generating a 3D-animated gesture from video recordings, and a method implementing Sign Language1 to Sign Language2 (SL1-to-SL2) translation. 
For gesture recognition, a convolutional neural network model is used, trained on imported and system-generated datasets of video gestures. The trained model achieves high recognition accuracy (98.52%). The creation of a 3D model for displaying the gesture on screen, and its processing, took place in the Unity 3D environment. The project structure, comprising executable and auxiliary files used to build 3D animation for generating sign language concepts, includes: event handler files; files that carry information about the positions of the tracked body points; and files that store the characteristics of materials added to certain body mapping points. Conclusions: the proposed methods of automated translation have practical significance, confirmed by the demo versions of the software applications "Sign Language to Text" and "Text to Sign Language". Promising directions for further research include improving SL1-to-SL2 methods, creating open datasets of video gestures, and bringing together scientists and developers to fill dictionaries with concepts of various sign languages. |
---|---|
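The abstract reports a convolutional neural network trained on video-gesture datasets (98.52% accuracy) but does not specify the architecture in this record. As a purely illustrative sketch, not the authors' model, the forward pass of a small temporal CNN over per-frame hand-keypoint features might look like the following; all layer shapes, parameter names, and dimensions here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution over the time axis, followed by ReLU.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    K = w.shape[0]
    T_out = x.shape[0] - K + 1
    # Contract the kernel window (K) and input channels (C_in) at each step.
    out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
                    for t in range(T_out)]) + b
    return np.maximum(out, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(frames, params):
    """frames: (T, C) sequence of per-frame keypoint features
    -> probability distribution over gesture classes."""
    h = conv1d(frames, params["w1"], params["b1"])   # temporal conv + ReLU
    h = h.mean(axis=0)                               # global average pooling
    logits = h @ params["w2"] + params["b2"]         # linear classifier head
    return softmax(logits)

# Toy dimensions: 30 frames, 42 keypoint coordinates (e.g. 21 2-D hand
# landmarks), kernel size 5, 16 filters, 10 gesture classes.
params = {
    "w1": rng.normal(scale=0.1, size=(5, 42, 16)),
    "b1": np.zeros(16),
    "w2": rng.normal(scale=0.1, size=(16, 10)),
    "b2": np.zeros(10),
}
probs = classify(rng.normal(size=(30, 42)), params)
```

In a real SL-to-Text pipeline the parameters would of course be learned from labeled gesture videos rather than sampled randomly; the sketch only shows the data flow from a frame sequence to a class distribution.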
Author | Udovenko, Serhii; Chala, Larysa; Grynyova, Olena; Shovkovyi, Yevhenii |
ORCID | Shovkovyi, Yevhenii: 0009-0007-5613-0946; Grynyova, Olena: 0000-0002-3367-8067; Udovenko, Serhii: 0000-0001-5945-8647; Chala, Larysa: 0000-0002-9890-4790 |
Cited By | 10.5902/1984686X89913; 10.1016/j.procs.2024.10.328 |
Discipline | Business |
Open Access | yes |
Peer Reviewed | yes |
License | CC BY-NC-SA 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0) |
OpenAccessLink | https://journals.uran.ua/itssi/article/download/294939/287801 |
PageCount | 14 |