SILFA: Sign Language Facial Action Database for the Development of Assistive Technologies for the Deaf

Bibliographic Details
Published in: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), pp. 688-692
Main Authors: Silva, Emely Pujolli da; Costa, Paula Dornhofer Paro; Kumada, Kate Mamhy Oliveira; De Martino, Jose Mario
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2020

Summary: Facial expressions are fundamental in Sign Languages (SLs), visuospatial linguistic systems structured on gestures and adopted by deaf people around the world to communicate. Deaf individuals frequently need a sign language interpreter to access school and public services, and the absence of interpreters in such scenarios typically results in discouraging experiences. Developments in Automatic Sign Language Recognition (ASLR) can enable new assistive technologies and change how the deaf interact with the world. One major barrier to improving ASLR is the difficulty of obtaining well-annotated data. We present a newly developed video database of Brazilian Sign Language facial expressions recorded from a diverse group of deaf and hearing young adults. Well-validated sentence stimuli were used to elicit affective and grammatical facial expressions. Frame-wise ground truth for facial actions was manually annotated using the Facial Action Coding System (FACS). The work also promotes the exploration of discriminant features in subtle facial expressions in sign language, a better understanding of the relation between grammatical facial expression classes and the dynamics of their facial action units, and a deeper understanding of facial action occurrence. To provide a baseline for future research, protocols and benchmarks for automated action unit recognition are reported.
DOI: 10.1109/FG47880.2020.00059
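
The summary mentions benchmarks for automated action unit recognition against FACS annotations. As a minimal sketch of how such a frame-level, per-AU evaluation is commonly scored, the Python below computes precision, recall, and F1 per action unit. The file names, CSV layout, and AU subset are illustrative assumptions, not the format or protocol of the SILFA release.

# Hypothetical sketch: frame-level, per-AU scoring of an AU recognizer
# against FACS ground truth. Assumes one CSV row per video frame and one
# binary (0/1) column per annotated AU; these conventions are assumptions.
import csv

AUS = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU25"]  # assumed subset of coded AUs

def load_annotations(path):
    """Read a CSV with one row per frame and one 0/1 column per AU."""
    frames = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            frames.append({au: int(row[au]) for au in AUS})
    return frames

def per_au_scores(truth, pred):
    """Return precision, recall, and F1 for each AU over all frames."""
    scores = {}
    for au in AUS:
        tp = fp = fn = 0
        for t, p in zip(truth, pred):
            if p[au] and t[au]:
                tp += 1
            elif p[au] and not t[au]:
                fp += 1
            elif not p[au] and t[au]:
                fn += 1
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[au] = (prec, rec, f1)
    return scores

if __name__ == "__main__":
    truth = load_annotations("ground_truth_aus.csv")  # assumed file name
    pred = load_annotations("predicted_aus.csv")      # assumed file name
    for au, (p, r, f1) in per_au_scores(truth, pred).items():
        print(f"{au}: P={p:.2f} R={r:.2f} F1={f1:.2f}")

Per-AU F1 is reported separately for each unit because AU occurrence is typically sparse and imbalanced across frames, so a single overall accuracy figure would be dominated by the majority (inactive) class.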