Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion
Deep learning networks have recently yielded impressive progress in multi-exposure image fusion. However, how to restore realistic texture details while correcting color distortion is still a challenging problem to be solved. To alleviate the aforementioned issues, in this paper, we pr...
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 8, pp. 5026-5040 |
---|---|
Main Authors | Liu, Jinyuan; Shang, Jingjie; Liu, Risheng; Fan, Xin |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2022 |
Abstract | Deep learning networks have recently yielded impressive progress in multi-exposure image fusion. However, restoring realistic texture details while correcting color distortion remains a challenging problem. To alleviate these issues, in this paper we propose an attention-guided global-local adversarial learning network that fuses extreme-exposure images in a coarse-to-fine manner. Firstly, the coarse fusion result is generated under the guidance of attention weight maps, which capture the essential regions of interest from both source images. Secondly, we formulate an edge loss function, along with a spatial feature transform layer, to refine the fusion process so that it makes full use of edge information and counters blurry edges. Moreover, by incorporating global-local learning, our method balances the pixel intensity distribution and corrects color distortion on spatially varying source images from both image- and patch-level perspectives; this global-local discriminator ensures that all local patches of the fused images align with realistic normal-exposure ones. Extensive experimental results on two publicly available datasets show that our method drastically outperforms state-of-the-art methods in both visual inspection and objective analysis. Furthermore, sufficient ablation experiments confirm that our method has significant advantages in generating high-quality fused results with appealing details, clear targets, and faithful color. Source code will be available at https://github.com/JinyuanLiu-CV/AGAL . |
---|---|
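The abstract sketches two mechanisms: a coarse fusion guided by attention weight maps over the two exposures, and a discriminator with both global (image-level) and local (patch-level) branches. The following is a minimal, hypothetical PyTorch sketch of those ideas, not the authors' released AGAL implementation; module names, layer widths, and kernel sizes are illustrative assumptions.

```python
# Hedged sketch: attention-guided coarse fusion and a global-local discriminator,
# loosely following the description in the abstract. All hyperparameters are
# hypothetical; the real AGAL code is at https://github.com/JinyuanLiu-CV/AGAL.
import torch
import torch.nn as nn


class AttentionCoarseFusion(nn.Module):
    """Predicts per-pixel attention maps and blends the under-/over-exposed inputs."""

    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, width, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 2, 3, padding=1),  # one weight map per source image
        )

    def forward(self, under: torch.Tensor, over: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.attn(torch.cat([under, over], dim=1)), dim=1)
        # The attention maps decide, per pixel, which exposure to trust.
        return w[:, 0:1] * under + w[:, 1:2] * over


class GlobalLocalDiscriminator(nn.Module):
    """Shared features feed a global (whole-image) head and a local (patch-wise) head."""

    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 2 * width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.local_head = nn.Conv2d(2 * width, 1, 3, padding=1)  # realism score per patch
        self.global_head = nn.Sequential(                        # single score per image
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * width, 1)
        )

    def forward(self, x: torch.Tensor):
        f = self.features(x)
        return self.global_head(f), self.local_head(f)


if __name__ == "__main__":
    under, over = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    fused = AttentionCoarseFusion()(under, over)
    g_score, local_map = GlobalLocalDiscriminator()(fused)
    print(fused.shape, g_score.shape, local_map.shape)  # (1,3,128,128) (1,1) (1,1,32,32)
```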
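The edge loss mentioned in the abstract is not spelled out in this record, so the sketch below shows only one generic way such a term is commonly written: an L1 distance between Sobel gradient magnitudes of the fused image and a reference. The function name, kernel choice, and epsilon are assumptions for illustration, not the paper's formulation.

```python
# Hedged example of an edge-preservation loss (assumed form, not the paper's exact one).
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
_SOBEL_Y = _SOBEL_X.transpose(2, 3)  # transpose of the x-kernel gives the y-kernel


def edge_loss(fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    """L1 distance between Sobel edge maps, computed channel-wise (depthwise conv)."""
    c = fused.shape[1]
    kx = _SOBEL_X.to(fused).repeat(c, 1, 1, 1)
    ky = _SOBEL_Y.to(fused).repeat(c, 1, 1, 1)

    def grad_mag(img: torch.Tensor) -> torch.Tensor:
        gx = F.conv2d(img, kx, padding=1, groups=c)
        gy = F.conv2d(img, ky, padding=1, groups=c)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)  # small eps keeps the sqrt stable

    return F.l1_loss(grad_mag(fused), grad_mag(reference))
```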
Author | Shang, Jingjie; Liu, Jinyuan; Fan, Xin; Liu, Risheng |
Author_xml | – sequence: 1 givenname: Jinyuan orcidid: 0000-0003-2085-2676 surname: Liu fullname: Liu, Jinyuan email: atlantis918@hotmail organization: School of Software Technology, Dalian University of Technology, Dalian, China – sequence: 2 givenname: Jingjie orcidid: 0000-0002-9276-2349 surname: Shang fullname: Shang, Jingjie email: ishawnshang@foxmail.com organization: DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, China – sequence: 3 givenname: Risheng orcidid: 0000-0002-9554-0565 surname: Liu fullname: Liu, Risheng email: rsliu@dlut.edu.cn organization: DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, China – sequence: 4 givenname: Xin orcidid: 0000-0002-8991-4188 surname: Fan fullname: Fan, Xin email: xin.fan@dlut.edu.cn organization: DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2022.3144455 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional |
DatabaseTitle | CrossRef Technology Research Database Computer and Information Systems Abstracts – Academic Electronics & Communications Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Professional |
DatabaseTitleList | Technology Research Database |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 5040 |
ExternalDocumentID | 10_1109_TCSVT_2022_3144455 9684913 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 61922019; 61733002; 62027826; 61772105 funderid: 10.13039/501100001809 – fundername: Fundamental Research Funds for the Central Universities funderid: 10.13039/501100012226 – fundername: National Key Research and Development Program of China grantid: 2020YFB1313503 funderid: 10.13039/501100012166 – fundername: LiaoNing Revitalization Talents Program grantid: XLYC1807088 funderid: 10.13039/501100018617 |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-9554-0565 0000-0002-8991-4188 0000-0003-2085-2676 0000-0002-9276-2349 |
PQID | 2697571485 |
PQPubID | 85433 |
PageCount | 15 |
PublicationCentury | 2000 |
PublicationDate | 2022-08-01 |
PublicationDateYYYYMMDD | 2022-08-01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database Enrichment Source Index Database Publisher |
StartPage | 5026 |
SubjectTerms | Ablation; adversarial learning; attention learning; Color; Computer vision; Deep learning; Distortion; Exposure; Feature extraction; illumination correction; Image color analysis; Image edge detection; Image fusion; Image processing; Image restoration; Inspection; multi-exposure image; Source code; Task analysis |
Title | Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion |
URI | https://ieeexplore.ieee.org/document/9684913 https://www.proquest.com/docview/2697571485 |
Volume | 32 |