Unified-EGformer: Exposure Guided Lightweight Transformer for Mixed-Exposure Image Enhancement

Bibliographic Details
Published in: arXiv.org
Main Authors: Adhikarla, Eashan; Zhang, Kai; VidalMata, Rosaura G.; Aithal, Manjushree; Madhusudhana, Nikhil Ambha; Nicholson, John; Sun, Lichao; Davison, Brian D.
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.07.2024

Summary: Despite recent strides made by AI in image processing, the issue of mixed exposure, pivotal in many real-world scenarios like surveillance and photography, remains inadequately addressed. Traditional image enhancement techniques and current transformer models are limited, focusing primarily on either overexposure or underexposure. To bridge this gap, we introduce the Unified-Exposure Guided Transformer (Unified-EGformer). Our proposed solution is built upon advanced transformer architectures, equipped with local pixel-level refinement and global refinement blocks for color correction and image-wide adjustments. We employ a guided attention mechanism to precisely identify exposure-compromised regions, ensuring adaptability across various real-world conditions. U-EGformer, with a lightweight design featuring a peak memory footprint of only ~1134 MB (0.1 million parameters) and an inference time of 95 ms (9.61x faster than the average), is a viable choice for real-time applications such as surveillance and autonomous navigation. Additionally, our model is highly generalizable, requiring minimal fine-tuning to handle multiple tasks and datasets with a single architecture.
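To make the guided attention idea concrete, the following is a minimal PyTorch sketch of how an exposure map might gate pixel-level refinement. The module name GuidedExposureAttention, the layer sizes, and the blending rule are illustrative assumptions for exposition, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class GuidedExposureAttention(nn.Module):
    """Illustrative sketch: a predicted per-pixel exposure mask gates
    which spatial regions receive local refinement. All names and
    hyperparameters here are hypothetical."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Predict a per-pixel mask in [0, 1]; high values flag
        # exposure-compromised regions that need correction.
        self.exposure_head = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Lightweight local (pixel-level) refinement branch.
        self.local_refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        mask = self.exposure_head(feats)    # (B, 1, H, W) attention map
        refined = self.local_refine(feats)  # per-pixel correction
        # Blend: apply refinement only where the mask flags bad exposure,
        # leaving well-exposed regions largely untouched.
        out = mask * refined + (1.0 - mask) * feats
        return out, mask

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)              # dummy feature map
    out, mask = GuidedExposureAttention(32)(x)
    print(out.shape, mask.shape)                # (1, 32, 64, 64) and (1, 1, 64, 64)
```

In a full model, a global refinement block (e.g., transformer layers over the whole feature map) would follow this local stage for image-wide color and tone adjustments, as the summary describes.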
ISSN: 2331-8422