Lighting dark images with linear attention and decoupled network
Published in | Pattern Recognition, Vol. 170, p. 111930
---|---
Format | Journal Article
Language | English
Publisher | Elsevier Ltd
Published | 01.02.2026
Subjects | Attention mechanism; Lightweight network; Low-light raw image enhancement
ISSN | 0031-3203
DOI | 10.1016/j.patcog.2025.111930
Abstract | Nighttime photography encounters escalating challenges in extremely low-light conditions, primarily attributable to the ultra-low signal-to-noise ratio. For real-world deployment, a practical solution must not only produce visually appealing results but also require minimal computation. However, most existing methods are either focused on improving restoration performance or employ lightweight models at the cost of quality. This paper proposes a lightweight network that outperforms existing state-of-the-art (SOTA) methods in low-light enhancement tasks while minimizing computation. The proposed network incorporates Siamese Self-Attention Block (SSAB) and Skip-Channel Attention (SCA) modules, which enhance the model’s capacity to aggregate global information and are well-suited for high-resolution images. Additionally, based on our analysis of the low-light image restoration process, we propose a Two-Stage Framework that achieves superior results. Our model can restore a UHD 4K resolution image with minimal computation while keeping SOTA restoration quality.
Highlights:
- Lightweight network restores 4K ultra-dark images with SOTA quality and efficiency.
- SSAB aggregates compact global context with linear scaling to image size.
- SCA fuses encoder-decoder cues to resolve cross-layer ambiguity and enhance quality.
- Two-stage RAW-sRGB pipeline decouples noise and brightness for superior results.
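The second highlight claims attention whose cost grows linearly with image size. As an illustration only (this record gives no implementation details), the sketch below shows the generic kernelized linear-attention trick in NumPy: by applying a positive feature map `phi` and reassociating the matrix products, the O(N²) pixel-by-pixel attention matrix is never formed. The choice `phi(x) = elu(x) + 1`, the single head, and the flattened-pixel layout are assumptions for the sketch, not the paper's SSAB design.

```python
import numpy as np

def linear_attention(q, k, v):
    # Kernelized (softmax-free) attention: computing phi(Q) (phi(K)^T V)
    # costs O(N * d^2) instead of O(N^2 * d), so the cost scales linearly
    # with the number of pixels N. phi(x) = elu(x) + 1 is one common
    # positive feature map (an assumption; the paper's kernel is unknown).
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    q, k = phi(q), phi(k)
    kv = k.T @ v                             # (d, d_v) key/value summary
    z = q @ k.sum(axis=0, keepdims=True).T   # (N, 1) per-query normalizer
    return (q @ kv) / (z + 1e-6)

# Tiny usage example: a flattened "image" of N pixels with d channels.
N, d = 64, 8
rng = np.random.default_rng(0)
q = rng.standard_normal((N, d))
k = rng.standard_normal((N, d))
v = rng.standard_normal((N, d))
out = linear_attention(q, k, v)
print(out.shape)  # (64, 8)
```

Because the reassociation is exact, the output matches the quadratic formulation `normalize(phi(Q) phi(K)^T) V` term for term; only the evaluation order, and hence the complexity, changes.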
Authors | Jiazhang Zheng, Qiuping Liao, Lei Li, Cheng Li, Yangxing Liu (yangxing.liu@tcl.com)
Copyright | 2025 Elsevier Ltd |
Discipline | Computer Science |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Attention mechanism; Lightweight network; Low-light raw image enhancement
URI | https://dx.doi.org/10.1016/j.patcog.2025.111930 |