Unified-EGformer: Exposure Guided Lightweight Transformer for Mixed-Exposure Image Enhancement
Format | Journal Article |
---|---|
Language | English |
Published | 18.07.2024 |
Summary: Despite recent strides made by AI in image processing, the issue of mixed exposure, pivotal in many real-world scenarios such as surveillance and photography, remains inadequately addressed. Traditional image enhancement techniques and current transformer models are limited in that they focus primarily on either overexposure or underexposure. To bridge this gap, we introduce the Unified-Exposure Guided Transformer (Unified-EGformer). Our proposed solution builds on advanced transformer architectures, equipped with local pixel-level refinement and global refinement blocks for color correction and image-wide adjustments. We employ a guided attention mechanism to precisely identify exposure-compromised regions, ensuring the model's adaptability across varied real-world conditions. U-EGformer, with a lightweight design featuring a peak memory footprint of only ~1134 MB (0.1 million parameters) and an inference time of 95 ms (9.61x faster than the average), is a viable choice for real-time applications such as surveillance and autonomous navigation. Additionally, the model is highly generalizable, requiring minimal fine-tuning to handle multiple tasks and datasets with a single architecture.
DOI: 10.48550/arxiv.2407.13170
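The paper's guided attention is learned inside the transformer, so its exact form is not reproduced here. As a rough, hand-crafted analogue of the idea it describes (weight each pixel by how badly exposed it is, then apply opposite corrections to under- and over-exposed regions), one could sketch the following; all function names, thresholds, and gamma values below are our own illustrative assumptions, not the paper's method:

```python
import numpy as np

def exposure_attention_mask(img, midpoint=0.5, sharpness=10.0):
    """Per-pixel weights that are high where pixels are badly over- or
    under-exposed (luminance far from mid-gray).
    img: float array in [0, 1], shape (H, W) or (H, W, 3)."""
    luma = img.mean(axis=-1) if img.ndim == 3 else img
    # Distance from the well-exposed midpoint, squashed to (0, 1) by a sigmoid.
    return 1.0 / (1.0 + np.exp(-sharpness * (np.abs(luma - midpoint) - 0.25)))

def guided_enhance(img, gamma_under=0.5, gamma_over=2.0):
    """Blend a gamma-corrected image with the original, weighted by the
    attention mask, so well-exposed pixels are left mostly untouched."""
    mask = exposure_attention_mask(img)
    luma = img
    if img.ndim == 3:
        mask = mask[..., None]            # broadcast over color channels
        luma = img.mean(axis=-1, keepdims=True)
    # Brighten dark regions (gamma < 1), darken bright ones (gamma > 1).
    corrected = np.where(luma < 0.5, img ** gamma_under, img ** gamma_over)
    return mask * corrected + (1.0 - mask) * img
```

A learned version would replace the hand-tuned sigmoid with attention weights produced by the network, and the fixed gamma curves with the local and global refinement blocks; this sketch only mirrors the control flow the abstract describes.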