Driver Visual Attention Estimation using Head Pose and Eye Appearance Information

Bibliographic Details
Published in: IEEE Open Journal of Intelligent Transportation Systems, Vol. 4, p. 1
Main Authors: Jha, Sumit; Al-Dhahir, Naofal; Busso, Carlos
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023

Summary: In autonomous as well as manually operated vehicles, monitoring the driver's visual attention provides useful information about the driver's behavior, intent, and vigilance level. The driver's gaze can be formulated as a probabilistic visual map representing the region on which the driver's attention is focused, where the area of the estimated region changes with the confidence of the estimation. This paper proposes a framework based on convolutional neural networks (CNNs) that takes the head pose and the eye appearance of the driver as inputs and builds a fusion model that estimates the driver's gaze on a 2D grid. The model contains upsampling layers to produce estimates at multiple resolutions. It is trained on data collected from 59 subjects, with continuous recordings in which each subject looks at a moving target in a parked car and glances at a set of markers inside the car both while driving and while the car is parked. The fusion framework outperforms unimodal systems trained exclusively on head pose or eye appearance information, estimating the gaze region such that the target location lies within the 75% confidence region with an accuracy of 92.54%.
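The summary describes the fusion architecture only at a high level. The sketch below is a hypothetical illustration, not the authors' code: a small CNN branch for the eye-appearance image, an MLP branch for the head-pose angles, concatenation of the two feature vectors, and upsampling layers that turn a coarse fused map into a softmax probability grid over gaze regions. The layer sizes, the 36x60 eye-patch resolution, and the 32x32 output grid are assumptions made for illustration.

```python
# Hypothetical sketch of a head-pose / eye-appearance fusion model that outputs
# a probabilistic gaze map on a 2D grid (not the authors' implementation).
import torch
import torch.nn as nn

class GazeGridFusion(nn.Module):
    def __init__(self, grid=32):
        super().__init__()
        # Eye-appearance branch: small CNN over an assumed 36x60 grayscale eye patch.
        self.eye = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # -> 32 * 4 * 4 = 512 features
        )
        # Head-pose branch: MLP over (yaw, pitch, roll) angles.
        self.head = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Fusion: concatenate branch features and project to a coarse 8x8 map.
        self.fuse = nn.Sequential(nn.Linear(512 + 64, 8 * 8), nn.ReLU())
        # Upsampling layers refine the coarse map to the full grid resolution.
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Upsample(size=(grid, grid), mode='bilinear', align_corners=False),
            nn.Conv2d(8, 1, 3, padding=1),
        )
        self.grid = grid

    def forward(self, eye_img, head_pose):
        f = torch.cat([self.eye(eye_img), self.head(head_pose)], dim=1)
        coarse = self.fuse(f).view(-1, 1, 8, 8)
        logits = self.up(coarse).view(-1, self.grid * self.grid)
        # Softmax over grid cells yields a probability map of visual attention.
        return torch.softmax(logits, dim=1).view(-1, 1, self.grid, self.grid)

model = GazeGridFusion()
prob_map = model(torch.randn(2, 1, 36, 60), torch.randn(2, 3))
print(prob_map.shape, prob_map.sum(dim=(2, 3)))  # (2, 1, 32, 32); each map sums to ~1
```

Under a grid output of this kind, one natural reading of the reported metric is that the 75% confidence region is the smallest set of grid cells whose probabilities sum to 0.75, and the 92.54% accuracy is the fraction of frames in which the true target location falls inside that region.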
ISSN: 2687-7813
DOI: 10.1109/OJITS.2023.3258184