Quantifying Gaze Behavior During Real-World Interactions Using Automated Object, Face, and Fixation Detection

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, Vol. 10, No. 4, pp. 1143-1152
Main Authors: Chukoskie, Leanne; Guo, Shengyao; Ho, Eric; Zheng, Yalun; Chen, Qiming; Meng, Vivian; Cao, John; Devgan, Nikhita; Wu, Si; Cosman, Pamela C.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2018

Summary: As technologies develop for acquiring gaze behavior in real-world social settings, robust methods are needed that minimize the time required for a trained observer to code behaviors. We record gaze behavior from a subject wearing eye-tracking glasses during a naturalistic interaction with three other people, with multiple objects that are referred to or manipulated during the interaction. The resulting gaze-in-world video from each interaction can be manually coded for different behaviors, but this is extremely time-consuming and requires trained behavioral coders. Instead, we use a neural network to detect objects, and a Viola-Jones framework with feature tracking to detect faces. The time sequence of gazes landing within the object/face bounding boxes is processed for run lengths to determine "looks," and we discuss optimization of run length parameters. Algorithm performance is compared against an expert holistic ground truth.
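The run-length post-processing the summary describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `detect_looks` and the parameters `min_run` and `max_gap` (with their default values) are hypothetical stand-ins for the run-length parameters the paper optimizes.

```python
def detect_looks(in_box, min_run=3, max_gap=2):
    """Convert a per-frame gaze-in-bounding-box sequence into "looks".

    in_box:  list of bools, one per video frame (True = gaze point
             fell inside the object/face bounding box that frame).
    min_run: minimum number of consecutive in-box frames for a run
             to count as a look (hypothetical default).
    max_gap: gaps of out-of-box frames no longer than this are
             bridged, e.g. to tolerate blinks or brief tracker
             dropouts (hypothetical default).
    Returns a list of (start_frame, end_frame) tuples, end inclusive.
    """
    # 1. Collect raw runs of consecutive in-box frames.
    runs = []
    start = None
    for i, hit in enumerate(in_box):
        if hit and start is None:
            start = i
        elif not hit and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(in_box) - 1))

    # 2. Bridge short gaps between consecutive runs.
    merged = []
    for run in runs:
        if merged and run[0] - merged[-1][1] - 1 <= max_gap:
            merged[-1] = (merged[-1][0], run[1])
        else:
            merged.append(run)

    # 3. Keep only runs long enough to count as a look.
    return [(s, e) for s, e in merged if e - s + 1 >= min_run]
```

For example, a sequence with a one-frame dropout inside a long fixation is merged into a single look, while an isolated two-frame glance is discarded as too short.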
ISSN: 2379-8920, 2379-8939
DOI: 10.1109/TCDS.2018.2821566