Edge Computing Driven Low-Light Image Dynamic Enhancement for Object Detection

Bibliographic Details
Published in: IEEE Transactions on Network Science and Engineering, Vol. 10, No. 5, pp. 3086-3098
Main Authors: Wu, Yirui; Guo, Haifeng; Chakraborty, Chinmay; Khosravi, Mohammad R.; Berretti, Stefano; Wan, Shaohua
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2023
More Information
Summary: With the fast increase in the volume of mobile multimedia data, how to apply powerful deep learning methods to process data with real-time response has become a major issue. Meanwhile, the edge computing structure helps improve response time and user experience by bringing flexible computation and storage capabilities closer to end users. Considering both technologies for successful AI-based applications, we propose an edge-computing-driven, end-to-end framework that performs image enhancement and object detection under low-light conditions. The framework consists of a cloud-based enhancement stage and an edge-based detection stage. In the first stage, edge devices connect to cloud servers and upload the re-scaled illumination parts of low-light images, where enhancement subnetworks are dynamically coupled in parallel to compute enhanced illumination parts based on the low-light context. During the edge-based detection stage, edge devices accurately and rapidly detect objects based on the cloud-computed informative feature maps. Experimental results show that the proposed method significantly improves detection performance in low-light conditions while running with low latency on edge devices.
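To make the cloud/edge hand-off described in the summary concrete, the sketch below illustrates the split in plain Python/NumPy: the edge device estimates and down-scales an illumination component, a cloud function enhances it, and the edge fuses the result back and runs detection. The function names, the Retinex-style illumination estimate, the gamma-curve enhancement, and the mocked detector are assumptions standing in for the paper's learned subnetworks and on-device detector; this is not the authors' implementation.

import numpy as np

def extract_illumination(rgb: np.ndarray) -> np.ndarray:
    # Rough illumination estimate: per-pixel max over RGB channels (Retinex-style assumption).
    return rgb.max(axis=-1)

def edge_preprocess(rgb: np.ndarray, scale: int = 4) -> np.ndarray:
    # Edge side: estimate illumination and down-scale it before uploading to the cloud.
    illum = extract_illumination(rgb)
    return illum[::scale, ::scale]  # naive re-scaling to shrink the upload payload

def cloud_enhance(illum_small: np.ndarray) -> np.ndarray:
    # Cloud side: stand-in for the dynamically coupled enhancement subnetworks.
    # A simple gamma curve brightens dark regions; the real system runs learned subnetworks.
    return np.clip(illum_small / 255.0, 0.0, 1.0) ** 0.45 * 255.0

def edge_detect(rgb: np.ndarray, enhanced_illum_small: np.ndarray, scale: int = 4) -> list:
    # Edge side: fuse the cloud-enhanced illumination back into the frame and run a detector.
    # Detection is mocked here; a real deployment would call a lightweight on-device model.
    enhanced = np.repeat(np.repeat(enhanced_illum_small, scale, 0), scale, 1)
    enhanced = enhanced[: rgb.shape[0], : rgb.shape[1], None]
    orig_illum = extract_illumination(rgb)[..., None] + 1e-6
    fused = np.clip(rgb * (enhanced / orig_illum), 0, 255).astype(np.uint8)
    return [{"bbox": (0, 0, fused.shape[1], fused.shape[0]), "label": "placeholder"}]

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 40).astype(np.float32)  # synthetic low-light frame
    uploaded = edge_preprocess(frame)           # edge -> cloud payload (re-scaled illumination)
    enhanced = cloud_enhance(uploaded)          # cloud-side enhancement
    detections = edge_detect(frame, enhanced)   # edge-side detection on the fused result
    print(detections)

In a real deployment the edge-to-cloud step would be a network call rather than an in-process function, and the latency saving comes from uploading only the small illumination map instead of the full-resolution frame.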
ISSN: 2327-4697, 2334-329X
DOI: 10.1109/TNSE.2022.3151502