An Approach Towards Generalization of Automotive Camera Sensor Data using Deep Learning


Bibliographic Details
Published in: 2024 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1 - 10
Main Authors: Ramesh, Goutham Bharadwaj; Chamas, Mohamad; Wang, Aoli; Raghuraman, SunilKumar; Sax, Eric
Format: Conference Proceeding
Language: English
Published: IEEE, 23.05.2024

Summary: Integrating automotive camera sensors with software applications is an intricate and elaborate procedure that demands collaboration between sensor vendors and Original Equipment Manufacturers (OEMs). The interaction between sensors and software necessitates accounting for any sensor modifications, as they impact the essential functioning of the software applications. To address this, a Plug-and-Play (PnP) framework named the "Sensor-Agnostic Image Translation Framework (SAITF)" is developed using a Deep Learning (DL) technique to ensure that applications operate consistently regardless of changes made to the camera sensors. The primary objective is to homogenize the data so as to maintain and improve the applications' functionality and dependability. As this research deals with image-to-image translation, specifically high-quality style transfer and domain adaptation, the Cycle-Consistent Generative Adversarial Network (CycleGAN) model structure serves as the foundation of this work. The Drive&Act dataset, which targets in-vehicle perception systems, is used for model training. To determine the efficacy of the findings, the generalized data output is evaluated with metrics from SAITF and with the You Only Look Once (YOLO) object detection model. Key Performance Indicators (KPIs) such as confidence scores and detection rates of previously undetected objects are used to measure object detection performance. Additionally, this study presents a comparative analysis between two approaches, namely DL and classical Computer Vision (CV) frameworks, and establishes a foundation for generalizing camera sensor data.
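The summary names cycle consistency as the mechanism CycleGAN uses to keep translated images faithful to their source domain. As a hedged illustration only (not code from the paper), the NumPy sketch below computes the standard L1 cycle-consistency term with toy invertible "generators"; the names `G`, `F`, and the weight `lam` are assumptions chosen to mirror the CycleGAN formulation.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    """L1 cycle-consistency term as in CycleGAN:
    lam * (mean|F(G(x)) - x| + mean|G(F(y)) - y|).
    G maps domain A -> B, F maps domain B -> A."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) should recover x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) should recover y
    return lam * (forward + backward)

# Toy generators: a perfectly invertible pair yields zero loss.
G = lambda img: img + 0.5   # hypothetical A -> B mapping (e.g., sensor-style shift)
F = lambda img: img - 0.5   # its inverse, B -> A

x = np.zeros((4, 4))        # stand-in for a source-domain image
y = np.ones((4, 4))         # stand-in for a target-domain image
print(cycle_consistency_loss(x, y, G, F))  # → 0.0
```

A non-invertible pair (e.g., `G = lambda img: img * 2.0`) produces a positive loss, which is exactly the penalty that drives the translation network toward mappings whose round trip preserves image content.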
DOI: 10.1109/HORA61326.2024.10550698