Attention Prediction in Egocentric Video Using Motion and Visual Saliency


Bibliographic Details
Published in: Advances in Image and Video Technology, Vol. 7087, pp. 277–288
Main Authors: Yamada, Kentaro; Sugano, Yusuke; Okabe, Takahiro; Sato, Yoichi; Sugimoto, Akihiro; Hiraki, Kazuo
Format: Book Chapter
Language: English
Published: Springer Berlin Heidelberg, Germany, 2011
Series: Lecture Notes in Computer Science
ISBN: 9783642253669; 3642253660
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-642-25367-6_25


More Information
Summary: We propose a method of predicting human egocentric visual attention using bottom-up visual saliency and egomotion information. Computational models of visual saliency are often employed to predict human attention; however, their mechanisms and effectiveness have not been fully explored in egocentric vision. The purpose of our framework is to compute attention maps from an egocentric video that can be used to infer a person's visual attention. In addition to a standard visual saliency model, two kinds of attention maps are computed based on the camera's rotation velocity and direction of movement. These rotation-based and translation-based attention maps are aggregated with a bottom-up saliency map to enhance the accuracy with which the person's gaze positions can be predicted. The effectiveness of the proposed framework was examined in real environments by using a head-mounted gaze tracker, and we found that the egomotion-based attention maps contributed to accurately predicting human visual attention.
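The aggregation step described in the summary can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian map shapes, the rotation/translation gaze biases, and the weighted-sum combination rule are all assumptions made here for illustration, standing in for whatever map construction and aggregation the chapter actually uses.

```python
import numpy as np

def gaussian_map(h, w, center, sigma):
    """Isotropic 2-D Gaussian attention map peaking at center = (x, y)."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    g = np.exp(-d2 / (2.0 * sigma ** 2))
    return g / g.max()

def rotation_attention(h, w, yaw_vel, pitch_vel, gain=20.0, sigma=40.0):
    # Assumption: during a head rotation, gaze tends to lead in the
    # rotation direction, so shift the peak away from the image center
    # in proportion to the angular velocity.
    cx = w / 2.0 + gain * yaw_vel
    cy = h / 2.0 + gain * pitch_vel
    return gaussian_map(h, w, (cx, cy), sigma)

def translation_attention(h, w, foe, sigma=40.0):
    # Assumption: during forward translation, gaze is biased toward the
    # focus of expansion (foe) of the optical-flow field.
    return gaussian_map(h, w, foe, sigma)

def combine_maps(saliency, rot_map, trans_map, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three maps, renormalized to peak at 1."""
    m = weights[0] * saliency + weights[1] * rot_map + weights[2] * trans_map
    return m / m.max()

# Toy usage: random "saliency" stands in for a real bottom-up saliency model.
h, w = 120, 160
rng = np.random.default_rng(0)
saliency = rng.random((h, w))
att = combine_maps(saliency,
                   rotation_attention(h, w, yaw_vel=1.5, pitch_vel=0.0),
                   translation_attention(h, w, foe=(80, 60)))
gy, gx = np.unravel_index(att.argmax(), att.shape)  # predicted gaze point
```

The predicted gaze position is simply the argmax of the aggregated map; in practice the combination weights would be tuned against recorded gaze data from the head-mounted tracker.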