Beyond Single Reference for Training: Underwater Image Enhancement via Comparative Learning
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 6, pp. 2561–2576
Main Authors | Kunqian Li, Li Wu, Qi Qi, Wenjie Liu, Xiang Gao, Liqin Zhou, Dalei Song
Format | Journal Article |
Language | English |
Published | New York: IEEE, 01.06.2023
Subjects | blind image quality assessment; comparative learning; convolutional neural network; Deep learning; Electromagnetic absorption; Generators; Image degradation; Image enhancement; Oceans; Semi-supervised learning; Task analysis; Training; Underwater; Underwater image enhancement; Visualization
Online Access | https://ieeexplore.ieee.org/document/9965419
Abstract | Due to wavelength-dependent light absorption and scattering, raw underwater images are inevitably degraded. Underwater image enhancement (UIE) is therefore of great importance for underwater observation and operation. Data-driven methods, such as deep learning-based UIE approaches, tend to be more applicable to real underwater scenarios. However, the training of deep models is limited by the extreme scarcity of underwater images with enhancement references, which leads to poor performance in dynamic and diverse underwater scenes. As an alternative, enhancement references obtained by volunteer voting alleviate the sample shortage to some extent. Since such artificially acquired references are not veritable ground truth, however, they are neither complete nor accurate enough to provide correct and rich supervision for training the enhancement model. Going beyond training with a single reference, we propose the first comparative learning framework for the UIE problem, namely CLUIE-Net, which learns from multiple candidate enhancement references. This new strategy also supports a semi-supervised learning mode. In addition, we propose a regional quality-superiority discriminative network (RQSD-Net) as an embedded quality discriminator for CLUIE-Net. Comprehensive experiments demonstrate the effectiveness of RQSD-Net and of the comparative learning strategy for the UIE problem. The code, models, and the new dataset RQSD-UI are available at: https://justwj.github.io/CLUIE-Net.html/
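Note (illustrative, not from the paper): the abstract describes supervising the enhancement network with whichever of several candidate references is judged regionally superior by an embedded quality discriminator (RQSD-Net). The sketch below is a minimal, hypothetical PyTorch rendering of that idea under assumed tensor shapes; the function name comparative_l1_loss, the soft per-pixel weighting, and the random placeholder quality maps are illustrative assumptions and do not reproduce the authors' released implementation.

```python
# Hypothetical sketch of comparative supervision from multiple reference candidates.
# Assumed shapes and the stand-in quality scorer are NOT taken from the paper's code.
import torch


def comparative_l1_loss(pred, refs, quality_maps):
    """
    pred:         (B, 3, H, W)   output of the enhancement network.
    refs:         (B, K, 3, H, W) K candidate enhancement references per image.
    quality_maps: (B, K, 1, H, W) regional quality scores for each candidate,
                  e.g. produced by a discriminator such as RQSD-Net (assumed here).
    Returns a scalar loss that, per pixel, trusts the candidate judged best there.
    """
    # Soft regional selection: candidates with higher local quality get more weight.
    weights = torch.softmax(quality_maps, dim=1)        # (B, K, 1, H, W)
    per_ref_err = (pred.unsqueeze(1) - refs).abs()      # (B, K, 3, H, W)
    return (weights * per_ref_err).mean()


# Minimal usage example with random tensors standing in for real data and models.
if __name__ == "__main__":
    B, K, H, W = 2, 3, 64, 64
    pred = torch.rand(B, 3, H, W, requires_grad=True)
    refs = torch.rand(B, K, 3, H, W)
    quality_maps = torch.rand(B, K, 1, H, W)            # placeholder quality scores
    loss = comparative_l1_loss(pred, refs, quality_maps)
    loss.backward()
    print(float(loss))
```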
Author |
– Kunqian Li (ORCID 0000-0001-9831-6457; likunqian@ouc.edu.cn), College of Engineering, Ocean University of China, Qingdao, China
– Li Wu (wuli@stu.ouc.edu.cn), College of Engineering, Ocean University of China, Qingdao, China
– Qi Qi (ORCID 0000-0002-1837-9501; qiqi2013@stu.ouc.edu.cn), College of Computer Science and Technology, Ocean University of China, Qingdao, China
– Wenjie Liu (lwj8310@stu.ouc.edu.cn), College of Engineering, Ocean University of China, Qingdao, China
– Xiang Gao (ORCID 0000-0003-1497-5637; xiang.gao@ia.ac.cn), Institute of Automation, Chinese Academy of Sciences, Beijing, China
– Liqin Zhou (zlq@ouc.edu.cn), College of Engineering, Ocean University of China, Qingdao, China
– Dalei Song (ORCID 0000-0001-5407-5989; songdalei@ouc.edu.cn), College of Engineering and Institute for Advanced Ocean Study, Ocean University of China, Qingdao, China
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TCSVT.2022.3225376 |
EISSN | 1558-2205 |
GrantInformation | National Natural Science Foundation of China (Grant 61906177); Natural Science Foundation of Shandong Province (Grant ZR2019BF034); Fundamental Research Funds for the Central Universities (Grant 201964013)
ISSN | 1051-8215 |