Multi-Modal Vehicle Trajectory Prediction by Collaborative Learning of Lane Orientation, Vehicle Interaction, and Intention

Bibliographic Details
Published in Sensors (Basel, Switzerland) Vol. 22; no. 11; p. 4295
Main Authors Tian, Wei; Wang, Songtao; Wang, Zehan; Wu, Mingzhi; Zhou, Sihong; Bi, Xin
Format Journal Article
Language English
Published Switzerland MDPI AG 05.06.2022

Summary: Accurate trajectory prediction is an essential task in automated driving, achieved by sensing and analyzing the behavior of surrounding vehicles. Although considerable research effort has been invested in this field, it remains a challenging subject due to the complexity of the environment and the uncertainty of driving intentions. In this paper, we propose a joint learning architecture that incorporates lane orientation, vehicle interaction, and driving intention into vehicle trajectory forecasting. This work employs a coordinate transform to encode the vehicle trajectory with lane orientation information, which is further incorporated into various interaction models to explore the mutual relations between trajectories. The extracted features are applied in a dual-level stochastic choice learning scheme to distinguish the trajectory modality at both the intention and motion levels. By collaboratively learning lane orientation, interaction, and intention, our approach can be applied to both highway and urban scenes. Experiments on the NGSIM, HighD, and Argoverse datasets demonstrate that the proposed method achieves a significant improvement in prediction accuracy compared with the baseline.
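
The abstract describes a coordinate transform that re-expresses vehicle positions in a lane-oriented frame before the interaction and intention models are applied. The following is only an illustrative sketch of such a transform, not the authors' code; the function name to_lane_frame and the parameters origin and lane_heading are hypothetical.

    # Illustrative sketch: express world-frame trajectory points in a lane-aligned frame,
    # so the first axis runs along the lane (longitudinal) and the second across it (lateral).
    import numpy as np

    def to_lane_frame(trajectory, origin, lane_heading):
        # trajectory: (T, 2) array of world-frame (x, y) positions
        # origin: (2,) reference point on the lane centerline
        # lane_heading: lane orientation angle in radians, measured in the world frame
        c, s = np.cos(lane_heading), np.sin(lane_heading)
        # Rotation by -lane_heading maps world coordinates into the lane-aligned frame
        rot = np.array([[c, s],
                        [-s, c]])
        return (trajectory - origin) @ rot.T

    # Usage example: a short trajectory relative to a lane heading of 30 degrees
    traj = np.array([[10.0, 5.0], [11.0, 5.5], [12.0, 6.1]])
    local = to_lane_frame(traj, origin=np.array([10.0, 5.0]), lane_heading=np.deg2rad(30))
    print(local)  # columns: along-lane offset, cross-lane offset

In such a lane-aligned encoding, lateral motion (e.g., a lane change) becomes directly visible in one coordinate, which is one plausible reason a lane-orientation transform helps downstream interaction and intention modeling.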
ISSN:1424-8220
DOI:10.3390/s22114295