Visual perception modelling for intelligent virtual driver agents in synthetic driving simulation

Bibliographic Details
Published in: Journal of Experimental & Theoretical Artificial Intelligence, Vol. 15, No. 1, pp. 73-102
Main Authors: Dumbuya, A. D., Wood, R. L.
Format: Journal Article
Language: English
Published: Taylor & Francis Group, 01.01.2003
ISSN: 0952-813X
1362-3079
DOI: 10.1080/0952813021000031933

Summary: This paper introduces a vision model used within a synthetic (purely software-based) driving simulation framework. The framework represents driver decision-making, individual vehicle movement and emergent traffic flow, and is intended to aid the integration of driver psychology, traffic management and vehicle engineering. The aims of the vision model discussed here are twofold: first, to remove the unrealistic availability of 'perfect knowledge' of the positions and velocities of vehicles in a simulation; and second, to provide a means of introducing deeper cognitive models of driver reasoning and behaviour. The paper presents the essential mechanisms of the vision model, along with the results of initial validation experiments conducted with the Traffic Division of Leicestershire Constabulary in the UK. In these experiments, subjects' visual perceptions of the positions and speeds of moving vehicles were measured and compared with estimates from the agent-based driving simulator. The results demonstrate the feasibility of modelling driver vision within an agent-based traffic simulation using concepts derived from AI and ALife systems. The paper concludes with a short discussion of the future development of cognitive models enabled through more detailed vision modelling.
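
To illustrate the general idea described in the summary, the sketch below shows one way a driver agent might receive perception-limited, noisy estimates of other vehicles' positions and speeds instead of reading exact simulation state ('perfect knowledge'). This is a minimal, hypothetical example; the class names, noise model and parameter values (view_range, dist_noise, speed_noise) are assumptions for illustration and are not taken from the paper's actual vision model.

```python
import math
import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class Vehicle:
    x: float       # longitudinal position on the road (m)
    y: float       # lateral position (m)
    speed: float   # speed (m/s)


@dataclass
class Percept:
    distance: float        # perceived distance to the other vehicle (m)
    relative_speed: float  # perceived speed difference (m/s)


def perceive(observer: Vehicle, other: Vehicle,
             view_range: float = 100.0,
             dist_noise: float = 0.05,
             speed_noise: float = 0.10) -> Optional[Percept]:
    """Return a noisy percept of `other`, or None if it is out of visual range.

    Rather than exposing the exact simulation state to the driver agent,
    distance and relative speed are perturbed with errors that grow with
    the true distance (an assumed, illustrative noise model).
    """
    true_dist = math.hypot(other.x - observer.x, other.y - observer.y)
    if true_dist > view_range:
        return None  # beyond the agent's visual range: not perceived at all
    dist_err = random.gauss(0.0, dist_noise * true_dist)
    rel_speed = other.speed - observer.speed
    speed_err = random.gauss(0.0, speed_noise * abs(rel_speed) + 0.1)
    return Percept(distance=true_dist + dist_err,
                   relative_speed=rel_speed + speed_err)


# Example: a following driver estimating the state of a lead vehicle.
follower = Vehicle(x=0.0, y=0.0, speed=25.0)
leader = Vehicle(x=40.0, y=0.0, speed=22.0)
print(perceive(follower, leader))
```

In a sketch like this, the agent's car-following or lane-changing decisions would consume the Percept rather than the true Vehicle state, which is one simple way to remove the 'perfect knowledge' assumption the paper criticises.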