Low‐light image enhancement via lightweight custom non‐linear transform network
Published in | Electronics Letters, Vol. 60, No. 19 |
---|---|
Format | Journal Article |
Language | English |
Published | Wiley, 01.10.2024 |
Abstract | Convolutional neural network (CNN)‐based models have shown significant progress in low‐light image enhancement. However, many existing models possess a large number of parameters, making them unsuitable for deployment on terminal devices. Moreover, adjustments to brightness, contrast, and colour in images are often non‐linear, and convolution alone is not well suited to capturing such complex non‐linear relationships in image data. To address these issues, a model based on an end‐to‐end custom non‐linear transform network (CNTNet) is proposed. CNTNet combines a custom non‐linear transform (CNT) layer with CNN layers to achieve image contrast and detail enhancement. The CNT layer introduces transformation parameters at multiple scales to manipulate input images within various ranges. CNTNet progressively processes images by stacking multiple non‐linear transform layers and convolutional layers while integrating residual connections to capture and leverage subtle image features. The final output is generated through convolutional layers to obtain enhanced images. Experimental results demonstrate that, while maintaining image quality evaluation metrics comparable to mainstream models, CNTNet reduces the parameter count to only 2K.
Piecewise linear mapping and convolution are both advantageous tools in image enhancement. This paper designs a custom non‐linear transform network that combines trainable non‐linear mapping with a convolutional neural network. Trainable non‐linear mapping is lightweight and adjusts image brightness effectively, so the custom non‐linear transform network significantly reduces the number of model parameters while maintaining enhancement performance. |
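The pattern the abstract describes can be sketched in a toy form: a trainable piecewise‐linear tone curve acts as the non‐linear transform layer, a small convolution follows it, and a residual connection adds the input back. This is not the authors' code; the number of curve control points, the single‐channel 3×3 mean kernel, and the "learned" parameter values are all illustrative assumptions, chosen only to show why such a design stays lightweight.

```python
import numpy as np

# Toy sketch (assumed, not CNTNet itself): piecewise-linear curve -> 3x3 conv
# -> residual connection, with an explicit parameter count.

K = 8
xs = np.linspace(0.0, 1.0, K)   # fixed input anchors of the curve
ys = np.sqrt(xs)                # "learned" outputs; a sqrt-like curve lifts shadows

def curve_layer(x):
    """Trainable piecewise-linear non-linear mapping, applied per pixel."""
    return np.interp(np.clip(x, 0.0, 1.0), xs, ys)

def conv3x3(x, kernel):
    """Naive 'same' 3x3 convolution on one channel (edge padding)."""
    padded = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, size=(8, 8))       # synthetic dark image

kernel = np.full((3, 3), 1.0 / 9.0)            # "learned" smoothing kernel
out = conv3x3(curve_layer(img), kernel) + img  # transform -> conv -> residual

n_params = ys.size + kernel.size               # 8 curve points + 9 weights = 17
print(n_params)
print(out.mean() > img.mean())                 # the mapping brightens the image
```

A per‐pixel curve costs only a handful of parameters per layer, whereas brightness adjustment done purely with stacked convolutions needs many kernels; this is the intuition behind the paper's ~2K total parameter count.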
Author | Li, Yang (Jiangsu Vocational College of Information Technology) |
Copyright | 2024 The Author(s). Published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. |
DOI | 10.1049/ell2.70053 |
EISSN | 1350-911X |
Genre | Short Communication |
Funding | Water Conservancy Science and Technology Project of Jiangsu Province (2022058); Research Project of Jiangsu Vocational College of Information Technology (10072020028(001)); 2021 Jiangsu Higher Education Teaching Reform Research Project; Jiangsu Province Higher Vocational Colleges Engineering Technology Research and Development Center (11); “Taihu Light” Science and Technology Research (Fundamental Research) Project (K20221052, K20231011); Jiangsu Province Higher Vocational Education High‐Level Professional Group Construction Project Funding (1); Political Reform of the “4+N” Mixed Curriculum of the Program Design Foundation (2021JSJG504); Jiangsu Provincial Colleges of Natural Science General Program (21KJB520006, 22KJB520017, 24KJB520009); Jiangsu Province Vocational Education ‘Double Qualified’ Master Teacher Studio (31); Jiangsu Information Vocational Technology College Research Platform (2) |
ISSN | 0013-5194 |
Open Access | Yes |
Peer Reviewed | Yes |
License | Attribution |
ORCID | 0000-0002-0087-3472 |
OpenAccessLink | https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fell2.70053 |
PageCount | 5 |
Subjects | image and vision processing and display technology; image enhancement; multimedia computing |