Automated 3D Object Reference Generation for the Evaluation of Autonomous Vehicle Perception

Bibliographic Details
Published in: 2021 5th International Conference on System Reliability and Safety (ICSRS), pp. 312 - 321
Main Authors: Philipp, Robin; Zhu, Zhijing; Fuchs, Julian; Hartjen, Lukas; Schuldt, Fabian; Howar, Falk
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2021

Summary: Understanding the surrounding traffic is a challenging task for automated driving systems. Reliable perception is not only mandatory for safe prediction, planning and subsequent operation in traffic, but also serves as a basis for post-analysis to identify and collect encountered scenarios. Evaluating a perception component relies mostly on comparing object hypotheses to a reference. These references are often the result of manual labeling processes, which are time-consuming, expensive and prone to errors. In this work, we propose a process for the automatic generation of dimension and classification references for perceived objects. Our approach post-processes perceived objects, taking into account sensor mounting information and infrastructure elements defined by an HD map. The dimension reference generation considers only reliable measurements, i.e. those taken in situations assessed as favorable for perceiving the analyzed object. The classification reference is generated by examining objects for patterns such as specific movement profiles or interactions with infrastructure elements. We demonstrate the feasibility of the process and evaluate initial results by comparison with manually labeled object classifications and dimensions based on corresponding camera images. The results show an improved correctness of up to 93.7% for object classifications, and an accuracy for vehicle length and width of RMSE = {37.51 cm, 24.14 cm}, respectively. Finally, we discuss how the proposed approach can facilitate perception evaluation.
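To illustrate the kind of rule-based classification reference the abstract describes (patterns such as movement profiles and HD-map interactions), a minimal sketch is given below. The `Track` structure, the thresholds, and the class labels are all illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A perceived object track (hypothetical structure, not from the paper)."""
    max_speed_mps: float   # peak observed speed in m/s
    length_m: float        # estimated bounding-box length in m
    on_sidewalk: bool      # overlap with a sidewalk polygon from an HD map

def classify_reference(track: Track) -> str:
    """Heuristic classification reference from a movement profile and an
    HD-map interaction; thresholds are illustrative assumptions."""
    if track.on_sidewalk and track.max_speed_mps < 3.0:
        return "pedestrian"   # slow, sidewalk-bound movement
    if track.max_speed_mps > 10.0 and track.length_m > 3.0:
        return "vehicle"      # fast, car-sized object
    if track.max_speed_mps > 3.0 and track.length_m < 2.5:
        return "cyclist"      # moderate speed, small footprint
    return "unknown"          # insufficient evidence for a reference label

print(classify_reference(Track(1.2, 0.6, True)))    # slow sidewalk track
print(classify_reference(Track(13.9, 4.5, False)))  # fast car-sized track
```

In the paper's actual process, such pattern checks are applied as a post-processing step over complete recorded tracks, so the reference label can use the whole observation history rather than a single frame.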
DOI: 10.1109/ICSRS53853.2021.9660660