BTMF-GAN: A multi-modal MRI fusion generative adversarial network for brain tumors
Published in | Computers in Biology and Medicine, Vol. 157, Article 106769 |
---|---|
Main Authors | Liu, Xiao; Chen, Hongyi; Yao, Chong; Xiang, Rui; Zhou, Kun; Du, Peng; Liu, Weifan; Liu, Jie; Yu, Zekuan |
Format | Journal Article |
Language | English |
Published | United States: Elsevier Ltd, 01.05.2023 |
Abstract | Image fusion techniques have been widely used for multi-modal medical image fusion tasks. Most existing methods aim to improve the overall quality of the fused image rather than the textural details and inter-tissue contrast of the lesion within the regions of interest (ROIs). This can distort important tumor ROI information and thus limits the applicability of the fused images in clinical practice. To improve the fusion quality of medically relevant ROIs, we propose a multi-modal MRI fusion generative adversarial network (BTMF-GAN) for the task of multi-modal MRI fusion of brain tumors. Unlike existing deep learning approaches, which focus on improving the global quality of the fused image, the proposed BTMF-GAN aims to balance tissue details and structural contrasts in the brain tumor, the region of interest crucial to many medical applications. Specifically, we employ a generator with a U-shaped nested structure and residual U-blocks (RSU) to enhance multi-scale feature extraction. To enhance and recalibrate the encoder features, the multi-perceptual field adaptive transformer feature enhancement module (MRF-ATFE) is used between the encoder and the decoder in place of a skip connection. To increase contrast between tumor tissues in the fused image, a mask-part block is introduced to fragment the source and fused images, on the basis of which we propose a novel salient loss function. Qualitative and quantitative analyses of the results on public and clinical datasets demonstrate the superiority of the proposed approach over many commonly used fusion methods.
•The BTMF-GAN is proposed for the multi-modal MRI fusion of brain tumors.•Uses a generator with a U-shaped nested structure to improve multi-scale feature extraction.•Uses MRF-ATFE to enhance and recalibrate the features of the encoder.•Proposes a novel salient loss function to preserve tissue contrasts and textural details. |
---|---|
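The mask-driven salient loss described in the abstract lends itself to a compact illustration. The sketch below is not the authors' implementation (the paper defines the exact loss terms); it only shows the general idea of fragmenting images with a binary tumor mask and weighting the ROI term more heavily than the background term. Images are flat pixel lists, and all names (`mask_fragment`, `salient_loss`, `roi_weight`) and the max/average targets are hypothetical choices for this sketch.

```python
def mask_fragment(img, mask):
    """Fragment an image into its tumor-ROI part and background part
    using a binary mask (1 = tumor pixel, 0 = background pixel)."""
    roi = [p * m for p, m in zip(img, mask)]
    bg = [p * (1 - m) for p, m in zip(img, mask)]
    return roi, bg


def l1(a, b):
    """Mean absolute difference between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)


def salient_loss(fused, src_a, src_b, mask, roi_weight=4.0):
    """Toy salient loss: the fused ROI is pushed toward the brighter
    (max) source tissue, the background toward the source average,
    with the ROI term weighted more heavily."""
    f_roi, f_bg = mask_fragment(fused, mask)
    a_roi, a_bg = mask_fragment(src_a, mask)
    b_roi, b_bg = mask_fragment(src_b, mask)
    roi_target = [max(x, y) for x, y in zip(a_roi, b_roi)]
    bg_target = [(x + y) / 2 for x, y in zip(a_bg, b_bg)]
    return roi_weight * l1(f_roi, roi_target) + l1(f_bg, bg_target)
```

A fused image that matches the ROI-max and background-average targets exactly incurs zero loss; any deviation inside the tumor mask is penalized `roi_weight` times as strongly as the same deviation outside it.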
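The quantitative analysis mentioned in the abstract relies on full-reference image-quality metrics, of which SSIM is a standard choice in the fusion literature. As a hedged sketch only: the version below computes SSIM from global statistics over a flat pixel list rather than the windowed, Gaussian-weighted form used in practice, with the common default stabilizing constants `c1` and `c2`.

```python
def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global-statistics SSIM between two images given as flat pixel
    lists in [0, 1]. Combines luminance, contrast, and structure terms
    into the usual single expression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n            # means
    vx = sum((a - mx) ** 2 for a in x) / n      # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score 1.0, and structurally dissimilar images score strictly less, which is what makes the metric usable for ranking fusion results.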
ArticleNumber | 106769 |
Author | Liu, Xiao (School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China); Chen, Hongyi (Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China); Yao, Chong (College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China); Xiang, Rui (Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China); Zhou, Kun (Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China); Du, Peng (Department of Radiology, Huashan Hospital, Fudan University, Shanghai, 200040, China); Liu, Weifan (College of Science, Beijing Forestry University, Beijing, 100083, China; weifanliu@bjfu.edu.cn); Liu, Jie (School of Computer and Information Technology, Beijing Jiaotong University, Beijing, 100044, China; jieliu@bjtu.edu.cn); Yu, Zekuan (Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China) |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/36947904 (view this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | © 2023 Elsevier Ltd. All rights reserved. |
DOI | 10.1016/j.compbiomed.2023.106769 |
Discipline | Medicine |
EISSN | 1879-0534 |
EndPage | 106769 |
ExternalDocumentID | 36947904 10_1016_j_compbiomed_2023_106769 S0010482523002342 1_s2_0_S0010482523002342 |
Genre | Research Support, Non-U.S. Gov't Journal Article |
GrantInformation | National Natural Science Foundation of China, grants KKA309004533 and 81571836 (http://dx.doi.org/10.13039/501100001809); Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, grant YQ21208 |
ISSN | 0010-4825 1879-0534 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Adaptive transformer; Multi-modal MRI; Salient loss; Image fusion |
License | Copyright © 2023 Elsevier Ltd. All rights reserved. |
ORCID | 0000-0003-3655-872X 0000-0002-9247-2691 0000-0002-0710-9311 0000-0003-2458-6854 |
PMID | 36947904 |
PageCount | 1 |
PublicationDate | 2023-05-01 |
PublicationPlace | United States; Oxford |
PublicationTitle | Computers in biology and medicine |
PublicationTitleAlternate | Comput Biol Med |
PublicationYear | 2023 |
Publisher | Elsevier Ltd; Elsevier Limited |
– start-page: 480 year: 2009 end-page: 483 ident: b18 article-title: Medical image fusion based on wavelet transform and independent component analysis publication-title: 2009 International Joint Conference on Artificial Intelligence – volume: 67 start-page: 161 year: 2014 end-page: 172 ident: b16 article-title: Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization publication-title: Infrared Phys. Technol. – volume: 78 start-page: 34483 year: 2019 end-page: 34512 ident: b35 article-title: Multifocus image fusion using convolutional neural networks in the discrete wavelet transform domain publication-title: Multimedia Tools Appl. – reference: F. Huang, A. Zeng, M. Liu, Q. Lai, Q. Xu, DeepFuse: An IMU-Aware Network for Real-Time 3D Human Pose Estimation from Multi-View Image, in: 2020 IEEE Winter Conference on Applications of Computer Vision, WACV, 2020, pp. 418–427. – volume: 14 start-page: 127 year: 2013 end-page: 135 ident: b57 article-title: A new image fusion performance metric based on visual information fidelity publication-title: Inf. Fusion – volume: 7 start-page: 20811 year: 2019 end-page: 20824 ident: b22 article-title: A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain publication-title: IEEE Access – volume: 32 start-page: 1544 year: 2011 end-page: 1553 ident: b40 article-title: Similarity-based multimodality image fusion with shiftable complex directional pyramid publication-title: Pattern Recognit. Lett. – volume: 28 start-page: 2614 year: 2019 end-page: 2623 ident: b9 article-title: DenseFuse: A fusion approach to infrared and visible images publication-title: IEEE Trans. Image Process. – reference: K. Ram Prabhakar, V. Sai Srikar, R. 
Venkatesh Babu, Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 4714–4722. – volume: 54 start-page: 99 year: 2020 end-page: 118 ident: b14 article-title: IFCNN: A general image fusion framework based on convolutional neural network publication-title: Inf. Fusion – reference: P. Li, H. Wang, X. Li, H. Hu, H. Wei, Y. Yuan, Z. Zhang, G. Qi, A novel Image Fusion Framework based on Non-Subsampled Shearlet Transform (NSST) Domain, in: 2019 Chinese Control and Decision Conference (CCDC), 2019, pp. 1409–1414. – volume: 9 start-page: 1193 year: 2015 end-page: 1204 ident: b25 article-title: Image fusion based on pixel significance using cross bilateral filter publication-title: Signal, Image Video Process. – volume: 182 start-page: 117 year: 2018 end-page: 127 ident: b2 article-title: Brain microstructure by multi-modal MRI: Is the whole greater than the sum of its parts? publication-title: NeuroImage – volume: 69 start-page: 5900 year: 2019 end-page: 5913 ident: b6 article-title: Brain medical image fusion using L2-norm-based features and fuzzy-weighted measurements in 2-D littlewood–Paley EWT domain publication-title: IEEE Trans. Instrum. Meas. – volume: 82 start-page: 28 year: 2022 end-page: 42 ident: b50 article-title: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network publication-title: Inf. Fusion – volume: 59 start-page: 884 year: 2010 end-page: 892 ident: b7 article-title: Multifocus image fusion and restoration with sparse representation publication-title: IEEE Trans. Instrum. Meas. – volume: 2 year: 2008 ident: b54 article-title: Assessment of image fusion procedures using entropy, image quality, and multispectral classification publication-title: J. Appl. Remote Sens. 
– volume: 36 start-page: 308 year: 2000 end-page: 309 ident: b56 article-title: Objective image fusion performance measure publication-title: Electron. Lett. – volume: 48 start-page: 11 year: 2019 end-page: 26 ident: b13 article-title: Fusiongan: A generative adversarial network for infrared and visible image fusion publication-title: Inf. Fusion – year: 2022 ident: b45 article-title: Tgfuse: An infrared and visible image fusion approach based on transformer and generative adversarial network – reference: A.R. Alankrita, A. Shrivastava, V. Bhateja, Contrast improvement of cerebral mri features using combination of non-linear enhancement operator and morphological filter, in: Proc. of (IEEE) International Conference on Network and Computational Intelligence (ICNCI 2011), Zhengzhou, China, Vol. 4, 2011, pp. 182–187. – volume: 76 start-page: 177 year: 2021 end-page: 186 ident: b49 article-title: Emfusion: An unsupervised enhanced medical image fusion network publication-title: Inf. Fusion – volume: 20 start-page: 662 year: 2005 end-page: 680 ident: b58 article-title: Just noticeable distortion model and its applications in video coding publication-title: Signal Process., Image Commun. – volume: 38 start-page: 5576 year: 2019 end-page: 5605 ident: b48 article-title: Multi-scale guided image and video fusion: A fast and efficient approach publication-title: Circuits Systems Signal Process. – reference: P. Isola, J.-Y. Zhu, T. Zhou, A.A. Efros, Image-to-image translation with conditional adversarial networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134. – volume: 2 start-page: 135 year: 2001 end-page: 149 ident: b8 article-title: A hybrid image registration technique for a digital camera image fusion application publication-title: Inf. 
Fusion – volume: 19 start-page: 20 year: 2014 end-page: 28 ident: b5 article-title: Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients publication-title: Inf. Fusion – volume: 42 start-page: 158 year: 2018 end-page: 173 ident: b4 article-title: Deep learning for pixel-level image fusion: Recent advances and future prospects publication-title: Inf. Fusion – volume: 61 start-page: 1 year: 2018 end-page: 049103 ident: b27 article-title: Visible and infrared image fusion using l0-generalized total variation model publication-title: Inform. Sci. – volume: 45 start-page: 153 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b3 article-title: Infrared and visible image fusion methods and applications: A survey publication-title: Inf. Fusion doi: 10.1016/j.inffus.2018.02.004 – volume: 32 start-page: 1544 issue: 13 year: 2011 ident: 10.1016/j.compbiomed.2023.106769_b40 article-title: Similarity-based multimodality image fusion with shiftable complex directional pyramid publication-title: Pattern Recognit. Lett. doi: 10.1016/j.patrec.2011.06.002 – volume: 73 start-page: 72 year: 2021 ident: 10.1016/j.compbiomed.2023.106769_b10 article-title: RFN-nest: An end-to-end residual fusion network for infrared and visible images publication-title: Inf. Fusion doi: 10.1016/j.inffus.2021.02.023 – volume: 19 start-page: 20 year: 2014 ident: 10.1016/j.compbiomed.2023.106769_b5 article-title: Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients publication-title: Inf. Fusion doi: 10.1016/j.inffus.2012.03.002 – ident: 10.1016/j.compbiomed.2023.106769_b33 doi: 10.23919/ICIF.2017.8009769 – volume: 78 start-page: 125 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b31 article-title: Multi-focus image fusion based on joint sparse representation and optimum theory publication-title: Signal Process., Image Commun. 
doi: 10.1016/j.image.2019.06.002 – ident: 10.1016/j.compbiomed.2023.106769_b47 – volume: 59 start-page: 884 issue: 4 year: 2009 ident: 10.1016/j.compbiomed.2023.106769_b29 article-title: Multifocus image fusion and restoration with sparse representation publication-title: IEEE Trans. Instrum. Meas. doi: 10.1109/TIM.2009.2026612 – volume: 28 start-page: 2614 issue: 5 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b9 article-title: DenseFuse: A fusion approach to infrared and visible images publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2018.2887342 – volume: 315 start-page: 371 year: 2018 ident: 10.1016/j.compbiomed.2023.106769_b17 article-title: Hyperspectral pansharpening via improved PCA approach and optimal weighted fusion strategy publication-title: Neurocomputing doi: 10.1016/j.neucom.2018.07.030 – volume: 36 start-page: 308 issue: 4 year: 2000 ident: 10.1016/j.compbiomed.2023.106769_b56 article-title: Objective image fusion performance measure publication-title: Electron. Lett. doi: 10.1049/el:20000267 – volume: 14 start-page: 127 issue: 2 year: 2013 ident: 10.1016/j.compbiomed.2023.106769_b57 article-title: A new image fusion performance metric based on visual information fidelity publication-title: Inf. Fusion doi: 10.1016/j.inffus.2011.08.002 – ident: 10.1016/j.compbiomed.2023.106769_b12 doi: 10.1109/WACV45572.2020.9093526 – volume: 5 start-page: 1074 issue: 5 year: 2011 ident: 10.1016/j.compbiomed.2023.106769_b53 article-title: Image features extraction and fusion based on joint sparse representation publication-title: IEEE J. Sel. Top. Sign. Proces. doi: 10.1109/JSTSP.2011.2112332 – volume: 182 start-page: 117 year: 2018 ident: 10.1016/j.compbiomed.2023.106769_b2 article-title: Brain microstructure by multi-modal MRI: Is the whole greater than the sum of its parts? 
publication-title: NeuroImage doi: 10.1016/j.neuroimage.2017.10.052 – volume: 23 start-page: 1882 year: 2016 ident: 10.1016/j.compbiomed.2023.106769_b32 article-title: Image fusion with convolutional sparse representation publication-title: IEEE Signal Process. Lett. doi: 10.1109/LSP.2016.2618776 – year: 2022 ident: 10.1016/j.compbiomed.2023.106769_b45 – volume: 82 start-page: 8 year: 2017 ident: 10.1016/j.compbiomed.2023.106769_b26 article-title: Infrared and visible image fusion based on visual saliency map and weighted least square optimization publication-title: Infrared Phys. Technol. doi: 10.1016/j.infrared.2017.02.005 – volume: 29 start-page: 4980 year: 2020 ident: 10.1016/j.compbiomed.2023.106769_b51 article-title: Ddcgan: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2020.2977573 – volume: 7 start-page: 20811 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b22 article-title: A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain publication-title: IEEE Access doi: 10.1109/ACCESS.2019.2898111 – volume: 194 start-page: 326 year: 2016 ident: 10.1016/j.compbiomed.2023.106769_b19 article-title: Union Laplacian pyramid with multiple features for medical image fusion publication-title: Neurocomputing doi: 10.1016/j.neucom.2016.02.047 – volume: 54 start-page: 99 year: 2020 ident: 10.1016/j.compbiomed.2023.106769_b14 article-title: IFCNN: A general image fusion framework based on convolutional neural network publication-title: Inf. Fusion doi: 10.1016/j.inffus.2019.07.011 – volume: 69 start-page: 5900 issue: 8 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b6 article-title: Brain medical image fusion using L2-norm-based features and fuzzy-weighted measurements in 2-D littlewood–Paley EWT domain publication-title: IEEE Trans. Instrum. Meas. 
doi: 10.1109/TIM.2019.2962849 – volume: 21 start-page: 1982 issue: 8 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b43 article-title: FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network publication-title: IEEE Trans. Multimed. doi: 10.1109/TMM.2019.2895292 – volume: 341 start-page: 199 year: 2015 ident: 10.1016/j.compbiomed.2023.106769_b52 article-title: Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition publication-title: Opt. Commun. doi: 10.1016/j.optcom.2014.12.032 – volume: 2 issue: 1 year: 2008 ident: 10.1016/j.compbiomed.2023.106769_b54 article-title: Assessment of image fusion procedures using entropy, image quality, and multispectral classification publication-title: J. Appl. Remote Sens. – volume: 38 start-page: 5576 issue: 12 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b48 article-title: Multi-scale guided image and video fusion: A fast and efficient approach publication-title: Circuits Systems Signal Process. doi: 10.1007/s00034-019-01131-z – volume: 57 start-page: 235 issue: 3 year: 1995 ident: 10.1016/j.compbiomed.2023.106769_b20 article-title: Multisensor image fusion using the wavelet transform publication-title: Graph. Models Image Process. doi: 10.1006/gmip.1995.1022 – ident: 10.1016/j.compbiomed.2023.106769_b46 doi: 10.1109/CVPR.2017.632 – volume: 20 start-page: 662 issue: 7 year: 2005 ident: 10.1016/j.compbiomed.2023.106769_b58 article-title: Just noticeable distortion model and its applications in video coding publication-title: Signal Process., Image Commun. doi: 10.1016/j.image.2005.04.001 – ident: 10.1016/j.compbiomed.2023.106769_b23 doi: 10.1109/CCDC.2019.8833211 – volume: 78 start-page: 34483 issue: 24 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b35 article-title: Multifocus image fusion using convolutional neural networks in the discrete wavelet transform domain publication-title: Multimedia Tools Appl. 
doi: 10.1007/s11042-019-08070-6 – ident: 10.1016/j.compbiomed.2023.106769_b37 doi: 10.1109/ICCV.2017.505 – volume: 82 start-page: 28 year: 2022 ident: 10.1016/j.compbiomed.2023.106769_b50 article-title: Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network publication-title: Inf. Fusion doi: 10.1016/j.inffus.2021.12.004 – volume: 20 start-page: 2169 year: 2020 ident: 10.1016/j.compbiomed.2023.106769_b36 article-title: Multi-modality medical image fusion using convolutional neural network and contrast pyramid publication-title: Sensors doi: 10.3390/s20082169 – volume: 44 start-page: 502 issue: 1 year: 2020 ident: 10.1016/j.compbiomed.2023.106769_b38 article-title: U2fusion: A unified unsupervised image fusion network publication-title: IEEE Trans. Pattern Anal. Mach. Intell. doi: 10.1109/TPAMI.2020.3012548 – volume: 61 start-page: 1 issue: 049103 year: 2018 ident: 10.1016/j.compbiomed.2023.106769_b27 article-title: Visible and infrared image fusion using l0-generalized total variation model publication-title: Inform. Sci. – start-page: 480 year: 2009 ident: 10.1016/j.compbiomed.2023.106769_b18 article-title: Medical image fusion based on wavelet transform and independent component analysis – volume: 2 start-page: 135 issue: 2 year: 2001 ident: 10.1016/j.compbiomed.2023.106769_b8 article-title: A hybrid image registration technique for a digital camera image fusion application publication-title: Inf. Fusion doi: 10.1016/S1566-2535(01)00020-3 – volume: 67 start-page: 161 year: 2014 ident: 10.1016/j.compbiomed.2023.106769_b16 article-title: Adaptive fusion method of visible light and infrared images based on non-subsampled shearlet transform and fast non-negative matrix factorization publication-title: Infrared Phys. Technol. 
doi: 10.1016/j.infrared.2014.07.019 – ident: 10.1016/j.compbiomed.2023.106769_b11 doi: 10.23919/ICIF.2017.8009769 – volume: 19 start-page: 20 year: 2014 ident: 10.1016/j.compbiomed.2023.106769_b24 article-title: Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients publication-title: Inf. Fusion doi: 10.1016/j.inffus.2012.03.002 – volume: 113 start-page: 1953 issue: S7 year: 2008 ident: 10.1016/j.compbiomed.2023.106769_b1 article-title: Brain tumor epidemiology: consensus from the brain tumor epidemiology consortium publication-title: Cancer doi: 10.1002/cncr.23741 – volume: 5 start-page: 15750 year: 2017 ident: 10.1016/j.compbiomed.2023.106769_b34 article-title: Image segmentation-based multi-focus image fusion through multi-scale convolutional neural network publication-title: IEEE Access doi: 10.1109/ACCESS.2017.2735019 – volume: 76 start-page: 177 year: 2021 ident: 10.1016/j.compbiomed.2023.106769_b49 article-title: Emfusion: An unsupervised enhanced medical image fusion network publication-title: Inf. Fusion doi: 10.1016/j.inffus.2021.06.001 – volume: 91 year: 2022 ident: 10.1016/j.compbiomed.2023.106769_b41 article-title: Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI publication-title: Inf. Fusion – volume: 13 start-page: 196 issue: 3 year: 2012 ident: 10.1016/j.compbiomed.2023.106769_b28 article-title: The multiscale directional bilateral filter and its application to multisensor image fusion publication-title: Inf. Fusion doi: 10.1016/j.inffus.2011.01.002 – volume: 70 start-page: 1 year: 2020 ident: 10.1016/j.compbiomed.2023.106769_b42 article-title: Ganmcc: A generative adversarial network with multiclassification constraints for infrared and visible image fusion publication-title: IEEE Trans. Instrum. Meas. 
– volume: 432 start-page: 516 year: 2018 ident: 10.1016/j.compbiomed.2023.106769_b30 article-title: A novel multi-modality image fusion method based on image decomposition and sparse representation publication-title: Inform. Sci. doi: 10.1016/j.ins.2017.09.010 – volume: 38 start-page: 1 issue: 7 year: 2002 ident: 10.1016/j.compbiomed.2023.106769_b55 article-title: Information measure for performance of image fusion publication-title: Electron. Lett. doi: 10.1049/el:20020212 – volume: 13 start-page: 600 issue: 4 year: 2004 ident: 10.1016/j.compbiomed.2023.106769_b15 article-title: Image quality assessment: from error visibility to structural similarity publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2003.819861 – volume: 23 start-page: 1692 issue: 12 year: 2021 ident: 10.1016/j.compbiomed.2023.106769_b39 article-title: MFF-net: deepfake detection network based on multi-feature fusion publication-title: Entropy doi: 10.3390/e23121692 – volume: 48 start-page: 11 year: 2019 ident: 10.1016/j.compbiomed.2023.106769_b13 article-title: Fusiongan: A generative adversarial network for infrared and visible image fusion publication-title: Inf. Fusion doi: 10.1016/j.inffus.2018.09.004 – volume: 42 start-page: 158 year: 2018 ident: 10.1016/j.compbiomed.2023.106769_b4 article-title: Deep learning for pixel-level image fusion: Recent advances and future prospects publication-title: Inf. Fusion doi: 10.1016/j.inffus.2017.10.007 – volume: 59 start-page: 884 issue: 4 year: 2010 ident: 10.1016/j.compbiomed.2023.106769_b7 article-title: Multifocus image fusion and restoration with sparse representation publication-title: IEEE Trans. Instrum. Meas. doi: 10.1109/TIM.2009.2026612 – volume: 31 start-page: 5134 year: 2022 ident: 10.1016/j.compbiomed.2023.106769_b44 article-title: MATR: Multimodal medical image fusion via multiscale adaptive transformer publication-title: IEEE Trans. Image Process. 
doi: 10.1109/TIP.2022.3193288 – volume: 8 start-page: 143 issue: 2 year: 2007 ident: 10.1016/j.compbiomed.2023.106769_b21 article-title: Remote sensing image fusion using the curvelet transform publication-title: Inf. Fusion doi: 10.1016/j.inffus.2006.02.001 – volume: 9 start-page: 1193 issue: 5 year: 2015 ident: 10.1016/j.compbiomed.2023.106769_b25 article-title: Image fusion based on pixel significance using cross bilateral filter publication-title: Signal, Image Video Process. doi: 10.1007/s11760-013-0556-9 |
StartPage | 106769 |
SubjectTerms | Adaptive transformer; Brain; Brain cancer; Brain Neoplasms - diagnostic imaging; Brain tumors; Computer vision; Deep learning; Generative adversarial networks; Humans; Image contrast; Image fusion; Image Processing, Computer-Assisted; Image quality; Internal Medicine; Machine learning; Magnetic Resonance Imaging; Medical imaging; Multi-modal MRI; Neuroimaging; Other; Qualitative analysis; Salient loss; Tumors |
Title | BTMF-GAN: A multi-modal MRI fusion generative adversarial network for brain tumors |
URI | https://dx.doi.org/10.1016/j.compbiomed.2023.106769 https://www.ncbi.nlm.nih.gov/pubmed/36947904 |
Volume | 157 |
Main Authors (as listed in OpenURL) | Liu, Xiao; Chen, Hongyi; Yao, Chong; Xiang, Rui |
ISSN / EISSN | 1879-0534 |
Publication Date | 2023-05-01 |