Multi-Dimensional Spatial-Temporal Fusion for Pedestrian Trajectory Prediction


Bibliographic Details
Published in: 2022 2nd International Conference on Networking Systems of AI (INSAI), pp. 170-174
Main Authors: Luo, Tong; Shang, Huiliang; Li, Zengwen; Chen, Changxue
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2022
DOI: 10.1109/INSAI56792.2022.00040

Summary: Pedestrian trajectory prediction is a key technology in autonomous driving. Because pedestrian trajectories are variable and interactions are complex, effective extraction and fusion of spatial-temporal trajectory features is a key point. Most previous studies did not explicitly consider the trend of the interaction between pedestrians, which can help the model focus on the adjacent pedestrians that most strongly influence the future motion of the predicted target. To address this issue, we propose a Multi-dimensional Spatial-Temporal fusion Graph attention network, called MST-G. Specifically, directed graphs are used to model the interactions among pedestrians. In addition to the spatial-temporal convolution that extracts interaction-aware trajectory features, we add an edge convolution that captures the temporal continuity of the interactions. Finally, an LSTM encoder-decoder is used for trajectory generation. Experiments show that our model achieves better performance on two publicly available pedestrian datasets (ETH and UCY).
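The record does not include code, but the summary outlines a concrete pipeline: directed graphs over pedestrians, spatial-temporal convolution of node features, an edge convolution over the interaction sequence, and an LSTM encoder-decoder for generation. The following is a minimal PyTorch sketch of that kind of pipeline; the class name MSTGSketch, the layer sizes, and the exact form of the attention and edge convolution are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an MST-G-style pipeline (assumed structure, not the paper's code):
# directed attention among pedestrians, temporal convolution over node features,
# a convolution over the edge (attention) sequence to capture how interactions
# evolve in time, and an LSTM encoder-decoder that generates future positions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSTGSketch(nn.Module):
    def __init__(self, d_model=32, pred_len=12):
        super().__init__()
        self.pred_len = pred_len
        self.embed = nn.Linear(2, d_model)                 # (x, y) -> feature
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # temporal convolution over each pedestrian's feature sequence
        self.t_conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        # "edge convolution": temporal conv over each directed edge's weight sequence
        self.e_conv = nn.Conv1d(1, 1, kernel_size=3, padding=1)
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True)
        self.decoder = nn.LSTM(2, d_model, batch_first=True)
        self.out = nn.Linear(d_model, 2)

    def forward(self, obs):                 # obs: (N, T_obs, 2) observed positions
        n, t, _ = obs.shape
        h = self.embed(obs)                 # (N, T, D)
        # directed spatial attention per time step: the weight from j to i need
        # not equal the weight from i to j, so the interaction graph is directed
        q, k, v = self.q_proj(h), self.k_proj(h), self.v_proj(h)
        att = torch.einsum('itd,jtd->tij', q, k) / h.shape[-1] ** 0.5  # (T, N, N)
        # edge convolution: extract the trend of each directed edge over time
        att = self.e_conv(att.permute(1, 2, 0).reshape(n * n, 1, t))
        att = att.reshape(n, n, t).permute(2, 0, 1)
        att = F.softmax(att, dim=-1)
        h = torch.einsum('tij,jtd->itd', att, v)            # aggregate neighbors
        h = self.t_conv(h.transpose(1, 2)).transpose(1, 2)  # temporal convolution
        _, state = self.encoder(h)                          # encode observed history
        # autoregressive LSTM decoder producing offsets from the last observed position
        pos, preds = obs[:, -1:, :], []
        for _ in range(self.pred_len):
            dec_out, state = self.decoder(pos, state)
            pos = pos + self.out(dec_out)
            preds.append(pos)
        return torch.cat(preds, dim=1)                      # (N, pred_len, 2)


if __name__ == "__main__":
    traj = torch.randn(5, 8, 2)             # 5 pedestrians, 8 observed steps
    print(MSTGSketch()(traj).shape)         # torch.Size([5, 12, 2])
```

In this sketch the edge convolution operates on the sequence of attention weights for each directed edge, which is one plausible reading of "extracting the temporal continuity of the interaction"; the paper itself should be consulted for the exact formulation.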