A Visual Attention Model Based on Eye Tracking in 3D Scene Maps

Bibliographic Details
Published in: ISPRS International Journal of Geo-Information, Vol. 10, No. 10, p. 664
Main Authors: Yang, Bincheng; Li, Hongwei
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.10.2021
Summary: Visual attention plays a crucial role in the map-reading process and is closely related to the map cognitive process. Eye-tracking data contain a wealth of visual information that can be used to identify cognitive behavior during map reading. Nevertheless, few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention based on eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in the participants’ gaze behavior when browsing a street view map in a desktop environment, and to establish a quantitative relationship between eye-movement indices and visual saliency. Then, experiments were carried out to determine the quantitative relationship between visual saliency and visual factors, using vector 3D scene maps as stimulus material. Finally, a visual attention model was obtained by fitting the data. The results show that a combination of three visual factors—color, shape, and size—can represent the visual attention value of a 3D scene map, with a goodness of fit (R2) greater than 0.699. This research helps to determine and quantify the allocation of visual attention during map reading, laying a foundation for automated machine mapping.
ISSN: 2220-9964
DOI: 10.3390/ijgi10100664