FDNet: A Novel Image Focus Discriminative Network for Enhancing Camera Autofocus

Bibliographic Details
Published in Neural Processing Letters Vol. 57; no. 5; p. 76
Main Authors Kou, Chenhao, Xiao, Zhaolin, Jin, Haiyan, Guo, Qifeng, Su, Haonan
Format Journal Article
Language English
Published New York: Springer US, 18.08.2025
Springer Nature B.V.
Subjects
Online Access Get full text
ISSN 1370-4621
EISSN 1573-773X
DOI 10.1007/s11063-025-11788-0


Abstract Accurate activation and optimization of autofocus (AF) functions are essential for capturing high-quality images and minimizing camera response time. Traditional contrast detection autofocus (CDAF) methods suffer from a trade-off between accuracy and robustness, while learning-based methods often incur high spatio-temporal computational costs. To address these issues, we propose a lightweight focus discriminative network (FDNet) tailored for AF tasks. Built upon the ShuffleNet V2 backbone, FDNet leverages a genetic algorithm optimization (GAO) strategy to automatically search for efficient network structures, and incorporates coordinate attention (CA) and multi-scale feature fusion (MFF) modules to enhance spatial, directional, and contextual feature extraction. A dedicated focus stack dataset is constructed with high-quality annotations to support training and evaluation. Experimental results show that FDNet outperforms mainstream methods by up to 4% in classification accuracy while requiring only 0.2 GFLOPs, 0.5 M parameters, a model size of 2.1 MB, and an inference time of 0.06 s, achieving a superior balance between performance and efficiency. Ablation studies further confirm the effectiveness of the GAO, CA, and MFF components in improving the accuracy and robustness of focus feature classification.
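The coordinate attention (CA) module named in the abstract reweights features along the height and width axes separately, which lets a lightweight backbone localize in-focus regions directionally at low cost. Below is a minimal PyTorch sketch of such a CA block, following the general design of Hou et al. (CVPR 2021); the channel count, reduction ratio, and activation choices are illustrative assumptions, not the FDNet authors' published configuration.

import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    # Illustrative CA block: pools along each spatial axis, shares a 1x1
    # bottleneck, then emits per-axis sigmoid attention maps (assumed config).
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)  # assumed reduction ratio
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.Hardswish(),
        )
        self.attn_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        xh = self.pool_h(x)                          # encode along height
        xw = self.pool_w(x).permute(0, 1, 3, 2)      # encode along width
        y = self.reduce(torch.cat([xh, xw], dim=2))  # shared bottleneck
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw  # direction-aware feature reweighting

For example, CoordinateAttention(116)(torch.randn(1, 116, 32, 32)) returns a tensor of the same shape; 116 matches a ShuffleNet V2 1.0x stage width but is used here only as a hypothetical placeholder.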
ArticleNumber 76
Author Guo, Qifeng
Xiao, Zhaolin
Kou, Chenhao
Su, Haonan
Jin, Haiyan
Author_xml – sequence: 1
  givenname: Chenhao
  surname: Kou
  fullname: Kou, Chenhao
  organization: Xi’an University of Technology
– sequence: 2
  givenname: Zhaolin
  surname: Xiao
  fullname: Xiao, Zhaolin
  email: xiaozhaolin@xaut.edu.cn
  organization: Xi’an University of Technology, Shaanxi Key Laboratory for Network Computing and Security Technology
– sequence: 3
  givenname: Haiyan
  surname: Jin
  fullname: Jin, Haiyan
  organization: Xi’an University of Technology, Shaanxi Key Laboratory for Network Computing and Security Technology
– sequence: 4
  givenname: Qifeng
  surname: Guo
  fullname: Guo, Qifeng
  organization: Shenzhen Shenzhi Weilai Co., Ltd
– sequence: 5
  givenname: Haonan
  surname: Su
  fullname: Su, Haonan
  organization: Xi’an University of Technology, Shaanxi Key Laboratory for Network Computing and Security Technology
ContentType Journal Article
Copyright The Author(s) 2025
The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1007/s11063-025-11788-0
Discipline Computer Science
EISSN 1573-773X
GrantInformation_xml – fundername: NSFC
  grantid: 62371389; 62272383
ISSN 1573-773X
1370-4621
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Keywords Focus discrimination
Genetic algorithm
Coordinate attention
Multi-scale feature fusion
Autofocus
Language English
OpenAccessLink https://link.springer.com/10.1007/s11063-025-11788-0
PublicationCentury 2000
PublicationDate 2025-08-18
PublicationDateYYYYMMDD 2025-08-18
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
– name: Dordrecht
PublicationTitle Neural processing letters
PublicationTitleAbbrev Neural Process Lett
PublicationYear 2025
Publisher Springer US
Springer Nature B.V
StartPage 76
SubjectTerms Ablation
Accuracy
Annotations
Artificial Intelligence
Cameras
Classification
Complex Systems
Computational Intelligence
Computer Science
Datasets
Decision making
Efficiency
Feature extraction
Genetic algorithms
Image quality
Lighting
Optimization
Robustness
Semantics
Title FDNet: A Novel Image Focus Discriminative Network for Enhancing Camera Autofocus
URI https://link.springer.com/article/10.1007/s11063-025-11788-0
https://www.proquest.com/docview/3240554421
Volume 57