Classification of extreme facial events in sign language videos

Bibliographic Details
Published in: EURASIP Journal on Image and Video Processing, Vol. 2014, No. 1, pp. 1–19
Main Authors: Antonakos, Epameinondas; Pitsikalis, Vassilis; Maragos, Petros
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 13.03.2014 (Springer Nature B.V.; BioMed Central Ltd)

Summary: We propose a new approach for Extreme States Classification (ESC) on feature spaces of facial cues in sign language (SL) videos. The method is built upon Active Appearance Model (AAM) face tracking and feature extraction with global and local AAMs. ESC is applied to various facial cues, such as pose rotations, head movements, and eye blinking, leading to the detection of extreme states such as left/right, up/down, and open/closed. Given the importance of such facial events in SL analysis, we apply ESC to detect visual events in SL videos from both American (ASL) and Greek (GSL) corpora, yielding promising qualitative and quantitative results. Further, we show the potential of ESC for assistive annotation tools and demonstrate a link between the detections and indicative higher-level linguistic events. Given the lack of annotated facial data and the fact that manual annotation is highly time-consuming, the ESC results indicate that the framework can have a significant impact on SL processing and analysis.
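
To make the idea of extreme states concrete, below is a minimal Python sketch of how extreme states might be labeled on a single facial-cue trajectory (e.g., head yaw from a face tracker). The percentile thresholds (low_pct, high_pct) and the synthetic yaw signal are illustrative assumptions; this is a stand-in for the paper's classifier, which operates on AAM feature spaces, not a reproduction of it.

import numpy as np

def classify_extreme_states(signal, low_pct=10, high_pct=90):
    # Label each frame of a 1-D facial-cue trajectory as an extreme or
    # neutral state via simple percentile thresholds. Hypothetical
    # illustration only: the thresholds are assumptions, not the
    # paper's AAM-feature-space method.
    lo = np.percentile(signal, low_pct)
    hi = np.percentile(signal, high_pct)
    labels = np.full(signal.shape, "neutral", dtype=object)
    labels[signal <= lo] = "extreme_low"    # e.g., head turned left, eyes closed
    labels[signal >= hi] = "extreme_high"   # e.g., head turned right, eyes open
    return labels

# Synthetic head-yaw trajectory in degrees (assumed tracker output):
yaw = np.concatenate([np.random.normal(0, 2, 50),     # near-frontal pose
                      np.random.normal(25, 2, 10),    # right extreme
                      np.random.normal(-25, 2, 10)])  # left extreme
print(classify_extreme_states(yaw))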
ISSN: 1687-5176; 1687-5281
DOI: 10.1186/1687-5281-2014-14