RIANet++: Road Graph and Image Attention Networks for Robust Urban Autonomous Driving Under Road Changes
| Field | Value |
|---|---|
| Published in | IEEE Robotics and Automation Letters, Vol. 8, No. 11, pp. 7815–7822 |
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2023 |
| Subjects | |
Summary: The structure of roads plays an important role in designing autonomous driving algorithms. We propose a novel road-graph-based driving framework, named RIANet++. The proposed framework captures the structural scene context of the road by fusing graphical features of the road with visual information through an attention mechanism. The framework also addresses the performance degradation caused by road changes and the resulting unreliability of road graph data. For this purpose, we propose a road change detection module that filters out unreliable road graph data by evaluating the similarity between the camera image and the query road graph. In this letter, we present two detection methods: semantic matching and graph matching. The semantic matching method computes the similarity score by transforming the road graph into the semantic image domain, whereas the graph matching method transforms the camera data into the road graph domain. In experiments, we evaluate the proposed method in two driving environments: the CARLA simulator and the FMTC real-world environment. The results demonstrate that the proposed framework outperforms the baselines and operates robustly under road changes.
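The summary only sketches the semantic matching idea at a high level. The snippet below is a minimal illustrative sketch, not the authors' implementation: it renders a query road graph into a binary road mask in the semantic image domain, scores its agreement with the camera's road segmentation (IoU is assumed here as the similarity measure), and filters out the graph when the score falls below a threshold. All function names, the edge representation, and the threshold value are assumptions for illustration.

```python
import numpy as np

def render_graph_to_mask(road_graph_edges, shape=(128, 128)):
    """Rasterize road graph edges (pixel-space segments) into a binary road mask."""
    mask = np.zeros(shape, dtype=bool)
    for (r0, c0), (r1, c1) in road_graph_edges:
        # Sample points densely along the segment and mark them on the mask.
        n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        valid = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
        mask[rows[valid], cols[valid]] = True
    return mask

def semantic_matching_score(graph_mask, camera_road_mask):
    """Similarity between the rendered graph and the camera road segmentation (IoU assumed)."""
    inter = np.logical_and(graph_mask, camera_road_mask).sum()
    union = np.logical_or(graph_mask, camera_road_mask).sum()
    return inter / union if union > 0 else 0.0

def filter_road_graph(road_graph_edges, camera_road_mask, threshold=0.5):
    """Keep the road graph only if it still matches the observed scene."""
    graph_mask = render_graph_to_mask(road_graph_edges, camera_road_mask.shape)
    score = semantic_matching_score(graph_mask, camera_road_mask)
    return (road_graph_edges if score >= threshold else None), score

# Toy usage: one straight road edge that the camera observation still confirms.
edges = [((10, 64), (120, 64))]
camera_mask = render_graph_to_mask(edges)  # pretend the camera segmentation agrees
graph, score = filter_road_graph(edges, camera_mask)
print(score, graph is not None)            # -> 1.0 True
```

The graph matching method described in the summary would run the comparison in the opposite direction, lifting the camera observation into the road graph domain before scoring, but the same filter-by-similarity structure applies.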
ISSN: 2377-3766
DOI: 10.1109/LRA.2023.3320491