Multi-Scale and Multi-Layer Lattice Transformer for Underwater Image Enhancement
Published in | ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 20, No. 11, pp. 1–24
---|---
Main Authors |
Format | Journal Article
Language | English
Published | New York, NY: ACM, 14.11.2024
Summary: Underwater images often suffer from color deviation and loss of detail due to the absorption and scattering of light. Enhancing them is further complicated by wavelength- and distance-dependent attenuation, together with color deviation, that vary across scales and layers, producing different degrees of color deviation, attenuation, and blurring. To address these issues, we propose a novel multi-scale and multi-layer lattice transformer (MMLattFormer) that effectively eliminates artifacts and color deviation, prevents over-enhancement, and preserves details across scales and layers, thereby achieving more accurate and natural results in underwater image enhancement. The proposed MMLattFormer integrates the global-perception strength of the LattFormer with a multi-scale, multi-layer configuration that leverages the differences and complementarities between features at different scales and layers to boost local perception. The model is composed of multi-scale and multi-layer LattFormers, each of which comprises two modules: a Multi-head Transposed-attention Residual Network (MTRN) and a Gated-attention Residual Network (GRN). The MTRN module enables efficient cross-pixel interaction and pixel-level aggregation to extract more significant and distinguishable features, whereas the GRN module suppresses under-informative or redundant features and retains only useful information, enabling excellent image restoration that exploits the local and global structures of the images. Moreover, we introduce depthwise convolution in both modules, before the global attention maps are generated and the images are decomposed into different features, to better capture local context. Qualitative and quantitative results on several public datasets indicate that the proposed method outperforms state-of-the-art approaches, delivering more natural results with superior detail preservation, effective prevention of over-enhancement, and successful removal of artifacts and color deviation.
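The abstract gives no implementation details, but the two modules it names follow a recognizable transformer-for-restoration pattern. The sketch below is a minimal, hypothetical PyTorch rendering of that pattern: channel-wise ("transposed") multi-head attention with depthwise convolutions applied before the attention map is formed, paired with a gated residual branch that can suppress uninformative features. All class names, dimensions, and design choices here are illustrative assumptions, not the authors' MMLattFormer code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposedAttention(nn.Module):
    """Channel-wise ("transposed") multi-head attention: attention is computed
    across channels rather than spatial positions, so its cost is linear in
    the number of pixels. A depthwise conv injects local context before the
    attention map is formed (one assumed reading of the MTRN description)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        # depthwise conv: captures local neighborhood structure per channel
        self.qkv_dw = nn.Conv2d(dim * 3, dim * 3, kernel_size=3,
                                padding=1, groups=dim * 3)
        self.project = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dw(self.qkv(x)).chunk(3, dim=1)
        # reshape to (batch, heads, channels_per_head, pixels)
        def split(t):
            return t.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q, k, v = split(q), split(k), split(v)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        # attention map is (c/heads x c/heads), i.e. over channels
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project(out)

class GatedFeedForward(nn.Module):
    """Gated branch: one path is passed through GELU and used to gate the
    other, suppressing under-informative features (an assumed reading of
    the GRN description)."""
    def __init__(self, dim, expansion=2):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        self.dw = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                            padding=1, groups=hidden * 2)
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):
        gate, value = self.dw(self.expand(x)).chunk(2, dim=1)
        return self.project(F.gelu(gate) * value)

class LattFormerBlock(nn.Module):
    """One residual block pairing the two branches, mirroring the abstract's
    attention module + gated module structure for each LattFormer."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)  # LayerNorm over channels
        self.attn = TransposedAttention(dim, num_heads)
        self.norm2 = nn.GroupNorm(1, dim)
        self.ffn = GatedFeedForward(dim)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.ffn(self.norm2(x))

# smoke test on a dummy feature map
x = torch.randn(1, 32, 64, 64)
print(LattFormerBlock(dim=32)(x).shape)  # torch.Size([1, 32, 64, 64])
```

Computing attention across channels rather than pixels keeps the cost linear in image resolution, which is why this style of attention is common in restoration networks; a multi-scale, multi-layer arrangement of such blocks would then process feature maps at several resolutions and depths.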
ISSN: 1551-6857; 1551-6865
DOI: 10.1145/3688802