MAGAN: Multi-Attention Generative Adversarial Network for Infrared and Visible Image Fusion
Published in: IEEE Transactions on Instrumentation and Measurement, p. 1
Main Authors: Huang, Shuying; Song, Zixiang; Yang, Yong; Wan, Weiguo; Kong, Xiangkai
Format: Journal Article
Language: English
Published: IEEE, 01.06.2023
Abstract: Deep learning has been widely used in infrared and visible image fusion owing to its strong feature extraction and generalization capabilities. However, it is difficult to directly extract specific image features from different modal images. Therefore, according to the characteristics of infrared and visible images, this paper proposes a multi-attention generative adversarial network (MAGAN) for infrared and visible image fusion, which is composed of a multi-attention generator and two multi-attention discriminators. The multi-attention generator gradually realizes the extraction and fusion of image features by constructing two modules: a triple-path feature pre-fusion module (TFPM) and a feature emphasis fusion module (FEFM). The two multi-attention discriminators are constructed to ensure that the fused images retain the salient targets and the texture information from the source images. In MAGAN, an intensity attention and a texture attention are designed to extract the specific features of the source images to retain more intensity and texture information in the fused image. In addition, a saliency target intensity loss is defined to ensure that the fused images obtain more accurate salient information from infrared images. Experimental results on two public datasets show that the proposed MAGAN outperforms some state-of-the-art models in terms of visual effects and quantitative metrics.
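The abstract mentions a saliency target intensity loss that steers the fused image toward infrared intensities inside salient regions while preserving visible-image content elsewhere. The paper's exact formulation is not reproduced in this record; the following is a generic NumPy sketch of a saliency-weighted intensity loss of that kind (the function name, the binary mask construction, and the equal weighting of the two terms are illustrative assumptions, not the authors' definitions):

```python
import numpy as np

def saliency_intensity_loss(fused, infrared, visible, saliency):
    """Toy saliency-weighted intensity loss (generic sketch, not MAGAN's
    exact loss): inside salient regions the fused image should match the
    infrared intensities; outside them, the visible image."""
    salient_term = np.mean(saliency * (fused - infrared) ** 2)
    background_term = np.mean((1.0 - saliency) * (fused - visible) ** 2)
    return salient_term + background_term

# Toy 4x4 example with a bright infrared "target" in the center.
ir = np.zeros((4, 4))
ir[1:3, 1:3] = 1.0                       # salient hot target in the IR image
vis = np.full((4, 4), 0.5)               # uniform visible-image background
mask = (ir > 0).astype(float)            # assumed binary saliency mask from IR

# The fusion this loss rewards: IR intensities where salient, visible elsewhere.
perfect = mask * ir + (1.0 - mask) * vis
print(saliency_intensity_loss(perfect, ir, vis, mask))  # → 0.0
```

Under this sketch, a fused image that ignores the infrared target (e.g. the visible image alone) incurs a strictly positive penalty in the salient region, which is the behavior the abstract attributes to the loss.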
Authors and Affiliations:
1. Shuying Huang, School of Software, Tiangong University, Tianjin, China (ORCID: 0000-0003-2771-8461)
2. Zixiang Song, School of Information Technology, Jiangxi University of Finance and Economics, Nanchang, China (ORCID: 0000-0002-2593-4147)
3. Yong Yang, School of Computer Science and Technology, Tiangong University, Tianjin, China (ORCID: 0000-0001-9467-0942)
4. Weiguo Wan, School of Software and Internet of Things Engineering, Jiangxi University of Finance and Economics, Nanchang, China (ORCID: 0000-0002-3537-979X)
5. Xiangkai Kong, School of Information Technology, Jiangxi University of Finance and Economics, Nanchang, China (ORCID: 0000-0002-5643-3425)
CODEN: IEIMAO
DOI: 10.1109/TIM.2023.3282300
Funding: National Natural Science Foundation of China (Grants 62072218 and 62201025)
ISSN: 0018-9456
Peer Reviewed: Yes
Scholarly: Yes
Subjects: Electronic mail; Feature extraction; Fuses; Generative adversarial networks; Generators; Image fusion; intensity attention; multi-attention GAN; texture attention; Training
Online Access: https://ieeexplore.ieee.org/document/10143274