A New Deep Learning Based Multi-Spectral Image Fusion Method
Published in | Entropy (Basel, Switzerland), Vol. 21, No. 6, p. 570
Main Authors | Piao, Jingchun; Chen, Yunfan; Shin, Hyunchul
Format | Journal Article
Language | English
Published | Basel: MDPI AG, 05.06.2019
Abstract | In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. In our method, a Siamese convolutional neural network (CNN) automatically generates a weight map that represents the per-pixel saliency of a pair of source images. The CNN encodes an image into a feature domain for classification, so the two key problems in image fusion, activity level measurement and fusion rule design, are solved jointly in one step. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more consistent with human visual perception. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method achieves competitive performance in terms of both quantitative assessment and visual quality.
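The abstract describes the pipeline only at a high level. The sketch below is a rough, hypothetical illustration of that kind of pipeline in Python (PyTorch + PyWavelets), not the authors' implementation: the network architecture, weights, and fusion rule here are simplified assumptions. A weight-sharing (Siamese-style) CNN scores per-pixel saliency for the IR and VIS inputs, a softmax over the two scores yields a weight map, and the weight map blends the wavelet coefficients of the two sources before reconstruction.

```python
# Minimal sketch, assuming an untrained toy network; not the paper's model.
import numpy as np
import pywt                      # PyWavelets for multi-scale decomposition
import torch
import torch.nn as nn
import torch.nn.functional as F


class SaliencyBranch(nn.Module):
    """One branch of a weight-sharing (Siamese-style) CNN; both inputs reuse it."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),   # one saliency score per pixel
        )

    def forward(self, x):
        return self.body(x)


def weight_map(net, ir, vis):
    """Softmax over the two branches' scores gives a per-pixel weight in [0, 1]."""
    with torch.no_grad():
        scores = torch.cat([net(ir), net(vis)], dim=1)   # shape (1, 2, H, W)
    return torch.softmax(scores, dim=1)[0, 0].numpy()    # weight assigned to IR


def _blend(a, b, w):
    """Weighted average of two same-shaped coefficient arrays."""
    wr = F.interpolate(torch.from_numpy(w)[None, None].float(),
                       size=a.shape, mode="bilinear",
                       align_corners=False)[0, 0].numpy()
    return wr * a + (1.0 - wr) * b


def wavelet_fuse(ir, vis, w, wavelet="db1", level=2):
    """Blend the wavelet coefficients of both sources with the weight map."""
    c_ir = pywt.wavedec2(ir, wavelet, level=level)
    c_vis = pywt.wavedec2(vis, wavelet, level=level)
    fused = []
    for band_ir, band_vis in zip(c_ir, c_vis):
        if isinstance(band_ir, tuple):          # detail sub-bands (cH, cV, cD)
            fused.append(tuple(_blend(a, b, w)
                               for a, b in zip(band_ir, band_vis)))
        else:                                   # approximation sub-band
            fused.append(_blend(band_ir, band_vis, w))
    return pywt.waverec2(fused, wavelet)


if __name__ == "__main__":
    ir = np.random.rand(256, 256)
    vis = np.random.rand(256, 256)
    net = SaliencyBranch().eval()
    w = weight_map(net,
                   torch.from_numpy(ir).float()[None, None],
                   torch.from_numpy(vis).float()[None, None])
    print(wavelet_fuse(ir, vis, w).shape)       # (256, 256)
```

In the paper the network is trained so that its scores reflect source saliency; the untrained branch above only illustrates the data flow from weight map to wavelet-domain fusion.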
Author | Shin, Hyunchul; Chen, Yunfan; Piao, Jingchun
AuthorAffiliation | Department of Electrical Engineering, Hanyang University, Ansan 15588, Korea |
ContentType | Journal Article |
Copyright | © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI | 10.3390/e21060570 |
EISSN | 1099-4300 |
ISSN | 1099-4300 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Language | English |
License | https://creativecommons.org/licenses/by/4.0 Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
PMID | 33267284 |
PublicationDate | 2019-06-05
PublicationPlace | Basel |
PublicationTitle | Entropy (Basel, Switzerland) |
PublicationYear | 2019 |
Publisher | MDPI AG; MDPI
StartPage | 570 |
SubjectTerms | Algorithms; Artificial neural networks; Classification; Computer vision; convolutional neural network; Decomposition; Deep learning; Design; Image classification; image fusion; Image processing; infrared; Machine learning; Methods; Neural networks; Principal components analysis; Quality assessment; Radiation; Siamese network; Training; visible; Visual perception; Wavelet transforms
Title | A New Deep Learning Based Multi-Spectral Image Fusion Method |
URI | https://www.proquest.com/docview/2548385095 https://www.proquest.com/docview/2466767757 https://pubmed.ncbi.nlm.nih.gov/PMC7515058 https://doaj.org/article/e66e8e22779040418f21928c6df70941 |
Volume | 21 |