Evolutionary Fuzzy Integral-Based Gaze Control With Preference of Human Gaze

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, Vol. 8, No. 3, pp. 186-200
Main Authors: Yoo, Bum-Soo; Kim, Jong-Hwan
Format: Journal Article
Language: English
Published: IEEE, 01.09.2016
Summary: Research on developing human-like gaze control has been carried out to enhance human-robot interaction. Because human gaze is largely consistent across viewers, conventional research focused on predicting where humans usually pay attention. However, gaze control is a cognitive process that can produce different scanpaths even from the same visual information. In this paper, an evolutionary fuzzy integral-based gaze control algorithm with preference is proposed. It produces various scanpaths according to the preference of human gaze. The proposed gaze control algorithm evaluates each pixel with fuzzy measures and a fuzzy integral, and produces a scanpath through repeated selections that consider memory and bio-inspired processes. The produced scanpath is transformed into a fixation map and compared with a scanpath obtained from a human subject by the earth mover's distance. Based on this comparison, a quantum-inspired evolutionary algorithm gradually develops a preference of human gaze so as to produce a scanpath similar to the human scanpath. The effectiveness of the proposed algorithm is demonstrated by comparing a human scanpath with a scanpath produced by the algorithm using the developed preference. The applicability of the proposed algorithm is also demonstrated by applying the developed preference to gaze control for learning from demonstration.
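The summary's core evaluation step aggregates several per-pixel criteria with a fuzzy integral. As a rough illustration of that idea (not the paper's implementation), the sketch below computes a discrete Choquet fuzzy integral; the feature names ("color", "intensity", "motion") and the fuzzy-measure values are purely hypothetical placeholders.

```python
def choquet_integral(values, measure):
    """Discrete Choquet (fuzzy) integral of criterion scores.

    values:  dict mapping criterion name -> score in [0, 1]
    measure: dict mapping frozenset of criteria -> fuzzy measure in [0, 1],
             assumed monotone with measure of the full set equal to 1.
    """
    # Sort criteria by score, largest first.
    order = sorted(values, key=values.get, reverse=True)
    subset = []
    result = 0.0
    # Standard form: sum over i of (x_(i) - x_(i+1)) * g({top-i criteria}),
    # where x_(i) are the scores in descending order and x_(n+1) = 0.
    for i, c in enumerate(order):
        subset.append(c)
        nxt = values[order[i + 1]] if i + 1 < len(order) else 0.0
        result += (values[c] - nxt) * measure[frozenset(subset)]
    return result


# Hypothetical per-pixel feature scores and fuzzy measure (illustrative only).
scores = {"color": 0.9, "intensity": 0.6, "motion": 0.3}
g = {
    frozenset(["color"]): 0.4,
    frozenset(["intensity"]): 0.3,
    frozenset(["motion"]): 0.2,
    frozenset(["color", "intensity"]): 0.8,
    frozenset(["color", "motion"]): 0.6,
    frozenset(["intensity", "motion"]): 0.5,
    frozenset(["color", "intensity", "motion"]): 1.0,
}
print(choquet_integral(scores, g))  # ≈ 0.66
```

Because the measure is defined on subsets rather than single criteria, the Choquet integral can reward or penalize combinations of features, which is what distinguishes it from a plain weighted sum of saliency cues.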
ISSN: 2379-8920, 2379-8939
DOI: 10.1109/TCDS.2016.2558516