Effects of Sim2Real Image Translation on Lane Keeping Assist System in CARLA Simulator

Bibliographic Details
Main Authors: Pahk, Jinu; Shim, Jungseok; Baek, MinHyeok; Lim, Yongseob; Choi, Gyeungho
Format: Journal Article
Language: English
Published: 23.11.2022

Summary: Autonomous vehicle simulation has the advantage of testing algorithms across varied environmental conditions and scenarios without wasting time and resources; however, a visual gap remains between simulation and the real world. In this paper, we trained DCLGAN to realistically translate images from the CARLA simulator and evaluated the effect of this Sim2Real translation on an LKAS (Lane Keeping Assist System) algorithm. To avoid cases where DCLGAN distorts the translated lanes, we selected the optimal training hyperparameters using FSIM (feature similarity). After training, we built a system that connects the DCLGAN model with CARLA and the autonomous vehicle in real time. We then collected data (e.g., images, GPS) and analyzed them using the following four methods. First, image realism was measured with FID, which we verified quantitatively reflects lane characteristics; CARLA images passed through DCLGAN had smaller FID values than the original images. Second, lane segmentation accuracy with ENet-SAD was improved by DCLGAN. Third, on the curved route, the vehicle using DCLGAN drove closer to the center of the lane and had a higher success rate. Lastly, on the straight route, DCLGAN improved the ability to return to the lane center after deviating from it, to a degree comparable with real-world driving.
DOI: 10.48550/arxiv.2211.12873
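
As a rough illustration of the FID comparison described in the summary, the sketch below scores raw CARLA frames and DCLGAN-translated frames against a set of real-world road images, where a lower FID indicates closer resemblance to the real-image distribution. The folder layout, file format, and the use of torchmetrics are assumptions for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch: FID of raw CARLA frames vs. DCLGAN-translated frames,
# each measured against the same set of real-world road images.
# Folder names below are hypothetical placeholders.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms
from torchmetrics.image.fid import FrechetInceptionDistance

to_tensor = transforms.Compose([
    transforms.Resize((299, 299)),   # Inception-v3 input resolution
    transforms.PILToTensor(),        # uint8 tensor, shape (3, H, W)
])


def load_batch(folder: str) -> torch.Tensor:
    """Load every PNG in a folder as a single uint8 batch (N, 3, H, W)."""
    paths = sorted(Path(folder).glob("*.png"))
    return torch.stack([to_tensor(Image.open(p).convert("RGB")) for p in paths])


def fid_against_real(fake_dir: str, real_dir: str) -> float:
    """Compute FID of a generated/simulated image set against real images."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(load_batch(real_dir), real=True)
    fid.update(load_batch(fake_dir), real=False)
    return float(fid.compute())


if __name__ == "__main__":
    # Hypothetical directories holding the three image sets.
    print("CARLA original:", fid_against_real("frames/carla_raw", "frames/real_world"))
    print("DCLGAN output :", fid_against_real("frames/carla_dclgan", "frames/real_world"))
```

If the translated frames genuinely look more realistic, the second call should report the smaller value, mirroring the comparison reported in the summary.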