A Unified Metric Learning-Based Framework for Co-Saliency Detection

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 28, No. 10, pp. 2473-2483
Main Authors Han, Junwei; Cheng, Gong; Li, Zhenpeng; Zhang, Dingwen
Format Journal Article
Language English
Published New York: IEEE, 01.10.2018
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text

Abstract Co-saliency detection, which focuses on extracting commonly salient objects from a group of relevant images, has been attracting research interest because of its broad applications. In practice, the relevant images in a group may have a wide range of variations, and the salient objects may also have large appearance changes. Such wide variations usually bring about large diversity among intra-co-salient objects (intra-COs) and high similarity between COs and background, which makes the co-saliency detection task more difficult. To address these problems, we make the first effort to introduce metric learning to co-saliency detection. Specifically, we propose a unified metric learning-based framework to jointly learn a discriminative feature representation and a co-salient object detector. This is achieved by optimizing a new objective function that explicitly embeds a metric learning regularization term into support vector machine (SVM) training. Here, the metric learning regularization term is used to learn a powerful feature representation that has small intra-COs scatter but large separation between background and COs, while the SVM classifier is used for subsequent co-saliency detection. In the experiments, we comprehensively evaluate the proposed method on two commonly used benchmark data sets. State-of-the-art results are achieved in comparison with existing co-saliency detection methods.
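The abstract's core idea — an SVM-style loss augmented with a metric-learning regularizer that rewards small intra-COs scatter and large separation between COs and background — can be sketched as follows. This is a minimal illustrative sketch only, not the authors' actual formulation: the paper's exact objective, feature-learning step, and optimization are not given in this record, and the `joint_objective` helper, the `lam` weight, and the toy data are hypothetical.

```python
import numpy as np

def scatter_regularizer(X, y):
    """Metric-learning-style term: intra-class scatter minus inter-class
    separation. It is small when each class (0 = background, 1 = co-salient
    object regions) is compact and the two class means are far apart."""
    mu = [X[y == c].mean(axis=0) for c in (0, 1)]
    intra = sum(((X[y == c] - mu[c]) ** 2).sum() for c in (0, 1))
    inter = ((mu[0] - mu[1]) ** 2).sum()
    return intra - inter

def joint_objective(w, b, X, y, lam=0.1):
    """Hypothetical joint objective: linear-SVM hinge loss on (w, b) plus the
    scatter regularizer, weighted by lam (stand-in for the paper's trade-off)."""
    margins = (2 * y - 1) * (X @ w + b)          # labels {0,1} -> {-1,+1}
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    return 0.5 * w @ w + hinge + lam * scatter_regularizer(X, y)

# Toy demo: a representation with compact, well-separated classes scores
# lower on the regularizer than an overlapping one.
rng = np.random.default_rng(0)
bg  = rng.normal(0.0, 0.1, size=(20, 4))      # compact "background" cluster
cos = rng.normal(3.0, 0.1, size=(20, 4))      # compact "co-salient" cluster
X_sep = np.vstack([bg, cos])
X_mix = rng.normal(0.0, 1.0, size=(40, 4))    # overlapping clusters
y = np.array([0] * 20 + [1] * 20)

assert scatter_regularizer(X_sep, y) < scatter_regularizer(X_mix, y)
```

In the paper the features themselves are learned jointly with the SVM, so the regularizer shapes the representation rather than just scoring a fixed one as this toy does.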
Author_xml – sequence: 1
  givenname: Junwei
  orcidid: 0000-0001-5545-7217
  surname: Han
  fullname: Han, Junwei
  organization: School of Automation, Northwestern Polytechnical University, Xi'an, China
– sequence: 2
  givenname: Gong
  orcidid: 0000-0001-5030-0683
  surname: Cheng
  fullname: Cheng, Gong
  email: gcheng@nwpu.edu.cn
  organization: School of Automation, Northwestern Polytechnical University, Xi'an, China
– sequence: 3
  givenname: Zhenpeng
  surname: Li
  fullname: Li, Zhenpeng
  organization: School of Automation, Northwestern Polytechnical University, Xi'an, China
– sequence: 4
  givenname: Dingwen
  orcidid: 0000-0001-8369-8886
  surname: Zhang
  fullname: Zhang, Dingwen
  email: zdw2006yyy@mail.nwpu.edu.cn
  organization: School of Automation, Northwestern Polytechnical University, Xi'an, China
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018
DOI 10.1109/TCSVT.2017.2706264
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional

Discipline Engineering
EISSN 1558-2205
EndPage 2483
ExternalDocumentID 10_1109_TCSVT_2017_2706264
7932195
Genre orig-research
GrantInformation_xml – fundername: Fundamental Research Funds for the Central Universities
  grantid: 3102016ZY023
– fundername: National Natural Science Foundation of China
  grantid: 61473231; 61522207; 61401357
  funderid: 10.13039/501100001809
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0001-8369-8886
0000-0001-5030-0683
0000-0001-5545-7217
PageCount 11
PublicationDate 2018-10-01
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2018
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2473
SubjectTerms Co-saliency detection
Detectors
Feature extraction
feature learning
Learning
Learning systems
Measurement
metric learning
Object recognition
Regularization
Representations
Salience
State of the art
Support vector machines
Training
Title A Unified Metric Learning-Based Framework for Co-Saliency Detection
URI https://ieeexplore.ieee.org/document/7932195
https://www.proquest.com/docview/2126462792
Volume 28