Unleashing the power of generative adversarial networks: A novel machine learning approach for vehicle detection and localisation in the dark


Bibliographic Details
Published in: Cognitive Computation and Systems, Vol. 5, No. 3, pp. 169-180
Main Authors: Md Saif Hassan Onim; Hussain Nyeem; Md. Wahiduzzaman Khan Arnob; Arunima Dey Pooja
Format: Journal Article
Language: English
Published: Dordrecht: John Wiley & Sons, Inc. (Wiley), 01.09.2023

Abstract: Machine vision in low‐light conditions is a critical requirement for object detection in road transportation, particularly for assisted and autonomous driving scenarios. Existing vision‐based techniques are limited to daylight traffic scenarios due to their reliance on adequate lighting and high frame rates. This paper presents a novel approach to this problem by investigating Vehicle Detection and Localisation (VDL) in extremely low‐light conditions using a new machine learning model. Specifically, the proposed model employs two customised generative adversarial networks, based on Pix2PixGAN and CycleGAN, to enhance dark images for input into a YOLOv4‐based VDL algorithm. The model's performance is thoroughly analysed and compared against prominent existing models. Our findings validate that the proposed model detects and localises vehicles accurately in extremely dark images, with an additional run‐time of approximately 11 ms and an accuracy improvement of 10%–50% over the other models. Moreover, our model demonstrates a 4%–8% increase in Intersection over Union (IoU) at a mean frame rate of 9 fps, which underscores its potential for broader applications in ubiquitous road‐object detection. The results demonstrate the significance of the proposed model as an early step towards overcoming the challenges of low‐light vision in road‐object detection and autonomous driving, paving the way for safer and more efficient transportation systems.
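As a rough illustration of the two‐stage pipeline the abstract describes, the sketch below (in Python, assuming a PyTorch implementation) enhances a dark frame with a trained Pix2Pix/CycleGAN‐style generator and then passes the result to a YOLOv4‐style detector. The loader names, weight files, and the detector wrapper are hypothetical placeholders for illustration, not the authors' released code.

# Minimal sketch of the GAN-enhancement -> YOLOv4 detection pipeline described in the abstract.
# The generator/detector loaders and weight paths are assumptions, not the paper's code.

import cv2
import numpy as np
import torch

def enhance_dark_frame(frame_bgr: np.ndarray, generator: torch.nn.Module) -> np.ndarray:
    """Run a trained image-to-image GAN generator on a low-light frame."""
    # Normalise to [-1, 1] and reorder to NCHW, as is conventional for Pix2Pix/CycleGAN generators.
    x = torch.from_numpy(frame_bgr).float().permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0
    with torch.no_grad():
        y = generator(x)
    # Map the generator output back to an 8-bit BGR image for the detector.
    out = ((y.squeeze(0).permute(1, 2, 0).numpy() + 1.0) * 127.5).clip(0, 255)
    return out.astype(np.uint8)

def detect_vehicles(frame_bgr: np.ndarray, detector) -> list:
    """Return [(x1, y1, x2, y2, confidence), ...] from a YOLO-style detector callable."""
    return detector(frame_bgr)  # assumed to wrap YOLOv4 inference

# Usage (weights and the load_yolov4 helper are hypothetical):
# generator = torch.load("pix2pix_lowlight_generator.pt").eval()
# detector  = load_yolov4("yolov4_vehicle.weights")
# enhanced  = enhance_dark_frame(cv2.imread("night_frame.png"), generator)
# boxes     = detect_vehicles(enhanced, detector)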
ISSN: 2517-7567, 1873-9601, 1873-961X
DOI: 10.1049/ccs2.12085