Fully Convolutional Network-Based Multifocus Image Fusion
Published in | Neural computation Vol. 30; no. 7; pp. 1775 - 1800 |
---|---|
Main Authors | Guo, Xiaopeng; Nie, Rencan; Cao, Jinde; Zhou, Dongming; Qian, Wenhua |
Format | Journal Article |
Language | English |
Published | MIT Press, One Rogers Street, Cambridge, MA 02142-1209, USA, 01.07.2018 |
Subjects | Artificial neural networks; Computer vision; Datasets; Depth of field; Fusion; Human performance; Image detection; Image filters; Image processing; Letters; Optics; Quality assessment; Sensors; Synthesis |
Abstract | As the optical lenses of cameras always have a limited depth of field, images captured of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains. However, fusion rules are always a problem in most methods. In this letter, from the aspect of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected through a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012 to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in both score maps. We apply the fully connected conditional random field (CRF) to the aggregative score map to obtain and refine a binary decision map for the fusion task. Finally, we exploit the weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods not only on a gray data set but also on a color data set. Experimental results show that the proposed method can achieve superior fusion performance in both human visual quality and objective assessment. |
---|---|
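The training-data step described in the abstract (synthesizing multifocus pairs by repeatedly applying a Gaussian filter to PASCAL VOC 2012 images) can be illustrated with a minimal sketch. This is not the authors' released code: the function name `synthesize_pair`, the use of SciPy, and the assumption that a foreground mask is available for each image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def synthesize_pair(image, fg_mask, sigma=2.0, repeats=3):
    """Create two complementary, partially focused images from one sharp image.

    image:   H x W or H x W x C all-in-focus source, float array in [0, 1]
    fg_mask: H x W boolean mask marking the foreground object
    """
    # Repeatedly apply the Gaussian filter to obtain a strongly defocused copy;
    # for a color image, sigma is zero along the channel axis.
    sigmas = (sigma, sigma) + (0,) * (image.ndim - 2)
    blurred = image.copy()
    for _ in range(repeats):
        blurred = gaussian_filter(blurred, sigma=sigmas)

    mask = fg_mask.astype(image.dtype)
    if image.ndim == 3:
        mask = mask[..., None]          # broadcast the mask over color channels

    near_focus = mask * image + (1 - mask) * blurred   # foreground sharp, background blurred
    far_focus = (1 - mask) * image + mask * blurred    # background sharp, foreground blurred
    return near_focus, far_focus
```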
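Likewise, the fusion stage sketched in the abstract (invert one FCN score map, average it with the other, refine the aggregative map into a binary decision map, then blend the sources) could look roughly as follows. The paper refines the aggregative map with a fully connected CRF; a plain threshold stands in for that refinement here, and all names and signatures are assumptions.

```python
import numpy as np


def fuse(img_a, img_b, score_a, score_b, threshold=0.5):
    """Blend two source images using the FCN focus score maps.

    score_a, score_b: per-pixel focus probabilities for img_a and img_b in [0, 1].
    """
    # Aggregative score map: invert the map of image B and average it with the
    # map of image A, so both maps vote on "this pixel is focused in image A".
    aggregate = 0.5 * (score_a + (1.0 - score_b))

    # The paper refines this map into a binary decision map with a fully
    # connected CRF; a simple threshold is used here only for illustration.
    decision = (aggregate > threshold).astype(img_a.dtype)

    if img_a.ndim == 3:
        decision = decision[..., None]  # broadcast the decision map over channels

    # Weighted combination driven by the (refined) decision map.
    return decision * img_a + (1.0 - decision) * img_b
```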
Author | Guo, Xiaopeng Cao, Jinde Nie, Rencan Qian, Wenhua Zhou, Dongming |
Author_xml | – sequence: 1 givenname: Xiaopeng surname: Guo fullname: Guo, Xiaopeng email: xiaopengguo@mail.ynu.edu.cn organization: School of Information Science and Engineering, Yunnan University, Kunming, Yunnan 650091, China – sequence: 2 givenname: Rencan surname: Nie fullname: Nie, Rencan email: rcnie@ynu.edu.cn – sequence: 3 givenname: Jinde surname: Cao fullname: Cao, Jinde email: jdcao@seu.edu.cn organization: School of Mathematics, Southeast University, Jiangsu, Nanjing 210096, China – sequence: 4 givenname: Dongming surname: Zhou fullname: Zhou, Dongming email: zhoudm@ynu.edu.cn organization: School of Information Science and Engineering, Yunnan University, Kunming, Yunnan 650091, China – sequence: 5 givenname: Wenhua surname: Qian fullname: Qian, Wenhua email: qwhua003@sina.com organization: School of Information Science and Engineering, Yunnan University, Kunming, Yunnan 650091, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/29894654 (View this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | Copyright MIT Press Journals, The Jul 2018 |
DOI | 10.1162/neco_a_01098 |
DatabaseName | CrossRef PubMed Computer and Information Systems Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
Discipline | Computer Science |
EISSN | 1530-888X |
EndPage | 1800 |
Genre | Research Support, Non-U.S. Gov't Journal Article |
ISSN | 0899-7667 1530-888X |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 7 |
Language | English |
PMID | 29894654 |
PageCount | 26 |
PublicationDate | 2018-07-01 |
PublicationPlace | One Rogers Street, Cambridge, MA 02142-1209, USA |
PublicationTitle | Neural computation |
PublicationTitleAlternate | Neural Comput |
PublicationYear | 2018 |
Publisher | MIT Press MIT Press Journals, The |
StartPage | 1775 |
SubjectTerms | Artificial neural networks Computer vision Datasets Depth of field Fusion Human performance Image detection Image filters Image processing Letters Optics Quality assessment Sensors Synthesis |
Title | Fully Convolutional Network-Based Multifocus Image Fusion |
URI | https://direct.mit.edu/neco/article/doi/10.1162/neco_a_01098 https://www.ncbi.nlm.nih.gov/pubmed/29894654 https://www.proquest.com/docview/2072770339 https://www.proquest.com/docview/2054933271 |
Volume | 30 |