Multi-View Radar Autoencoder for Self-Supervised Automotive Radar Representation Learning


Bibliographic Details
Published in: 2024 IEEE Intelligent Vehicles Symposium (IV), pp. 1601 - 1608
Main Authors: Zhu, Haoran; He, Haoze; Choromanska, Anna; Ravindran, Satish; Shi, Binbin; Chen, Lihui
Format: Conference Proceeding
Language: English
Published: IEEE, 02.06.2024

Summary: Automotive radar has been extensively utilized in cars for many years as an essential sensor, primarily due to its robustness in extreme weather conditions, its capacity to measure Doppler information in the surrounding environment, and its cost-effectiveness. Recently, developments in radar technologies and the availability of open-source radar data sets have attracted more attention to radars and to their use for perception tasks in deep-learning-based autonomous driving. However, annotating radar data for large-scale autonomous driving perception tasks is challenging: it is difficult for humans to label this data, and labeling often requires a semi-automatic approach that involves projecting labels from other sensors, such as cameras and LiDARs. The lack of high-quality labeled data has limited the performance of radar perception models. In this paper, we propose MVRAE, a Multi-View Radar AutoEncoder, which employs self-supervised learning to learn meaningful representations from multi-view radar data without any labels. Our approach is based on the intuition that a good representation for multi-view radar data, which includes range-angle, range-Doppler, and angle-Doppler views, should enable the reconstruction of one view solely from the representations of the other two views. Experimental results demonstrate that our proposed self-supervised method, which can be used as a pre-training step for autonomous driving tasks, allows the model to learn meaningful representations from unlabeled radar data and achieves enhanced label efficiency for downstream tasks, such as radar semantic segmentation. To the best of our knowledge, MVRAE is the first work that employs self-supervised learning and conducts systematic experiments with multi-view radar data.
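The cross-view intuition in the summary — reconstruct one radar view purely from the representations of the other two — can be sketched in plain NumPy. The linear encoders/decoder, dimensions, and variable names below are illustrative placeholders, not the paper's actual architecture; MVRAE's real encoders and decoder are learned networks trained with a reconstruction objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened sizes for the three radar views:
# range-angle (RA), range-Doppler (RD), angle-Doppler (AD).
D_RA, D_RD, D_AD = 64, 48, 32
D_LATENT = 16  # per-view representation size (illustrative)

# Per-view linear "encoders" (random weights stand in for trained ones).
W_enc = {
    "RA": rng.standard_normal((D_RA, D_LATENT)) / np.sqrt(D_RA),
    "RD": rng.standard_normal((D_RD, D_LATENT)) / np.sqrt(D_RD),
    "AD": rng.standard_normal((D_AD, D_LATENT)) / np.sqrt(D_AD),
}
# Cross-view "decoder": maps the two other latents to the target view.
W_dec_RA = rng.standard_normal((2 * D_LATENT, D_RA)) / np.sqrt(2 * D_LATENT)

def encode(view_name, x):
    """Map a batch of one view's spectra to its latent representation."""
    return x @ W_enc[view_name]

def reconstruct_ra(z_rd, z_ad):
    """Predict the RA view from the RD and AD representations only."""
    return np.concatenate([z_rd, z_ad], axis=-1) @ W_dec_RA

# One unlabeled multi-view batch of 4 samples.
x_ra = rng.standard_normal((4, D_RA))
x_rd = rng.standard_normal((4, D_RD))
x_ad = rng.standard_normal((4, D_AD))

x_ra_hat = reconstruct_ra(encode("RD", x_rd), encode("AD", x_ad))
# Self-supervised reconstruction loss: no labels needed, only the views.
loss = float(np.mean((x_ra_hat - x_ra) ** 2))
print(x_ra_hat.shape, loss >= 0.0)
```

Training would minimize such a reconstruction loss (symmetrically for each held-out view) over unlabeled radar frames, after which the encoders can be reused for downstream tasks like radar semantic segmentation.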
ISSN: 2642-7214
DOI: 10.1109/IV55156.2024.10588463