General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks

Bibliographic Details
Published in Sensors (Basel, Switzerland), Vol. 22, No. 7, p. 2457
Main Authors Xiao, Yifan; Guo, Zhixin; Veelaert, Peter; Philips, Wilfried
Format Journal Article
Language English
Published Basel, Switzerland: MDPI AG, 23.03.2022
Abstract In this paper, we propose a unified and flexible framework for general image fusion tasks, including multi-exposure, multi-focus, infrared/visible, and multi-modality medical image fusion. Unlike other deep learning-based image fusion methods, which operate on a fixed number of input sources (normally two), the proposed framework can handle an arbitrary number of inputs simultaneously. Specifically, we use a symmetric function (e.g., max-pooling) to extract the most significant features across all input images; these pooled features are then fused with the respective features of each input source. The symmetric function makes the network permutation-invariant, so it can extract and fuse the salient features of each image regardless of the order in which the inputs are presented. Permutation-invariance is also convenient at inference time, when the number of inputs is not fixed. To handle multiple image fusion tasks within one unified framework, we adopt continual learning based on Elastic Weight Consolidation (EWC) across the different fusion tasks. Subjective and objective experiments on several public datasets demonstrate that the proposed method outperforms state-of-the-art methods on multiple image fusion tasks.
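The abstract names two concrete mechanisms, which the following minimal sketches illustrate. Neither is the authors' released code: the tensor shapes, the concatenation strategy, and all names (fuse_features, ewc_penalty, lam) are illustrative assumptions. The first sketch shows symmetric max-pooling fusion over an arbitrary number of per-input feature maps; because the element-wise max is order-independent, permuting the inputs leaves the pooled feature unchanged.

```python
# Hedged sketch (assumed shapes and names, not the authors' code) of symmetric,
# permutation-invariant feature fusion over N input images.
import torch

def fuse_features(feature_maps):
    """feature_maps: list of N tensors, each of shape (C, H, W), one per input image."""
    stacked = torch.stack(feature_maps, dim=0)   # (N, C, H, W)
    pooled = stacked.max(dim=0).values           # element-wise max: symmetric in the inputs
    # Pair the shared pooled feature with each source's own feature map.
    return [torch.cat([f, pooled], dim=0) for f in feature_maps]

# Works for any number of inputs; reordering the list does not change `pooled`.
feats = [torch.randn(64, 32, 32) for _ in range(3)]
fused = fuse_features(feats)   # three tensors of shape (128, 32, 32)
```

The second sketch is the standard EWC quadratic penalty (Kirkpatrick et al.), which anchors parameters important to a previously learned fusion task while training on a new one. The abstract confirms EWC is used but not how it is weighted, so lam and the Fisher estimate are placeholders.

```python
def ewc_penalty(model, fisher, old_params, lam=100.0):
    """EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta_star_i) ** 2."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# When training a new fusion task:
#   loss = task_loss + ewc_penalty(model, fisher_prev, params_prev)
```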
Audience Academic
Author Xiao, Yifan
Guo, Zhixin
Veelaert, Peter
Philips, Wilfried
AuthorAffiliation Department of Telecommunications and Information Processing, IPI-IMEC, Ghent University, 9000 Ghent, Belgium; zhixin.guo@ugent.be (Z.G.); peter.veelaert@ugent.be (P.V.); wilfried.philips@ugent.be (W.P.)
Copyright COPYRIGHT 2022 MDPI AG
2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
DOI 10.3390/s22072457
Discipline Engineering
EISSN 1424-8220
GrantInformation China Scholarship Council (grant 201806220060)
ISSN 1424-8220
Issue 7
Keywords permutation-invariant network
continual learning
multiple inputs
image fusion
License https://creativecommons.org/licenses/by/4.0
ORCID 0000-0001-9040-6394
0000-0003-4746-9087
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.3390/s22072457
PMID 35408072
PublicationDate 23 March 2022
PublicationPlace Basel, Switzerland
PublicationTitle Sensors (Basel, Switzerland)
PublicationTitleAlternate Sensors (Basel)
PublicationYear 2022
Publisher MDPI AG
StartPage 2457
SubjectTerms continual learning
Deep learning
Design
image fusion
Image Processing, Computer-Assisted - methods
Medical imaging equipment
Methods
multiple inputs
Neural networks
Neural Networks, Computer
Night vision
permutation-invariant network
Records
Wavelet transforms
URI https://www.ncbi.nlm.nih.gov/pubmed/35408072
https://www.proquest.com/docview/2649063503
https://www.proquest.com/docview/2649587451
https://pubmed.ncbi.nlm.nih.gov/PMC9002723
https://doaj.org/article/3ad31f52647e4899a9f8a8185c2b7e47
Volume 22