Robust appearance feature learning using pixel‐wise discrimination for visual tracking

Bibliographic Details
Published in ETRI Journal Vol. 41; no. 4; pp. 483-493
Main Authors Kim, Minji; Kim, Sungchan
Format Journal Article
Language English
Published Electronics and Telecommunications Research Institute (ETRI) 01.08.2019
한국전자통신연구원
Subjects
Online Access Get full text
ISSN 1225-6463
EISSN 2233-7326
DOI 10.4218/etrij.2018-0486

Abstract Considering the high dimensionality of video sequences, it is often challenging to acquire a dataset sufficient to train tracking models. From this perspective, we propose to revisit the idea of hand-crafted feature learning to avoid this dataset requirement. The proposed tracking approach is composed of two phases, detection and tracking, selected according to how severely the appearance of the target changes. The detection phase addresses severe and rapid variations by learning a new appearance model that classifies pixels into foreground (target) and background. We further combine the raw pixel features of color intensity and spatial location with convolutional feature activations for robust target representation. The tracking phase tracks the target by searching for the frame region with the best pixel-level agreement to the model learned in the detection phase. Our two-phase approach results in efficient and accurate tracking, outperforming recent methods in various challenging cases of target appearance change.
KCI Citation Count: 2
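The two phases described in the abstract can be illustrated with a minimal sketch. Per-pixel feature vectors are built from RGB intensity, normalized (x, y) location, and optionally convolutional activations upsampled to the frame size; a nearest-centroid rule stands in for the paper's pixel-wise foreground/background discriminator, and the tracking phase scans target-sized windows for the best pixel-level agreement. All function names and the centroid classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_features(frame, conv_maps=None):
    # Per-pixel features: RGB scaled to [0, 1], normalized (x, y) location,
    # and optional conv activations (h, w, c) upsampled to the frame size.
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = [frame.reshape(h * w, -1) / 255.0,
             (xs / max(w - 1, 1)).reshape(-1, 1),
             (ys / max(h - 1, 1)).reshape(-1, 1)]
    if conv_maps is not None:
        feats.append(conv_maps.reshape(h * w, -1))
    return np.concatenate(feats, axis=1)

def learn_model(feats, fg_mask):
    # Detection phase (stand-in): summarize foreground and background
    # pixels by their feature centroids.
    m = fg_mask.ravel()
    return feats[m].mean(axis=0), feats[~m].mean(axis=0)

def classify_pixels(feats, fg_c, bg_c):
    # Label a pixel foreground when it lies closer to the fg centroid.
    return (np.linalg.norm(feats - fg_c, axis=1)
            < np.linalg.norm(feats - bg_c, axis=1))

def track(frame, fg_c, bg_c, box_hw):
    # Tracking phase: slide a target-sized window and keep the region
    # whose pixels agree best with the learned pixel-wise model.
    h, w = frame.shape[:2]
    bh, bw = box_hw
    pred = classify_pixels(pixel_features(frame), fg_c, bg_c).reshape(h, w)
    scores = {(y, x): pred[y:y + bh, x:x + bw].sum()
              for y in range(h - bh + 1) for x in range(w - bw + 1)}
    return max(scores, key=scores.get)

# Toy frame: a bright 4x4 "target" on a dark background.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:6, 2:6] = 200
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True

feats = pixel_features(frame)
fg_c, bg_c = learn_model(feats, mask)
pred = classify_pixels(feats, fg_c, bg_c).reshape(8, 8)
```

In this toy setup the window search recovers the target at (2, 2); the paper's actual discriminator and conv features (VGG-style activations) would replace the centroid rule.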
Author Kim, Sungchan
Kim, Minji
Author_xml – sequence: 1
  givenname: Minji
  surname: Kim
  fullname: Kim, Minji
  organization: Chonbuk National University
– sequence: 2
  givenname: Sungchan
  orcidid: 0000-0002-5887-5606
  surname: Kim
  fullname: Kim, Sungchan
  email: s.kim@chonbuk.ac.kr
  organization: Chonbuk National University
BackLink https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002492156 (Access content in National Research Foundation of Korea (NRF))
CitedBy_id crossref: 10.3390/math7111059; 10.3390/app112311570
ContentType Journal Article
Copyright 2019 ETRI
Copyright_xml – notice: 2019 ETRI
DOI 10.4218/etrij.2018-0486
DatabaseName CrossRef
DOAJ Directory of Open Access Journals
Korean Citation Index
DatabaseTitle CrossRef
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 2233-7326
EndPage 493
Genre article
GrantInformation_xml – fundername: Chonbuk National University
ISSN 1225-6463
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
LinkModel DirectLink
Notes Funding information
This work was funded by the research funds of Chonbuk National University in 2014.
https://doi.org/10.4218/etrij.2018-0486
ORCID 0000-0002-5887-5606
OpenAccessLink https://doaj.org/article/d48a6e39ae0046d789ff1aa2e648e08e
PageCount 11
PublicationCentury 2000
PublicationDate August 2019
PublicationDateYYYYMMDD 2019-08-01
PublicationDate_xml – month: 08
  year: 2019
  text: August 2019
PublicationDecade 2010
PublicationTitle ETRI journal
PublicationYear 2019
Publisher Electronics and Telecommunications Research Institute (ETRI)
한국전자통신연구원
Publisher_xml – name: Electronics and Telecommunications Research Institute (ETRI)
– name: 한국전자통신연구원
StartPage 483
SubjectTerms convolutional neural networks
detection
pixel‐wise feature learning
support vector machines
visual tracking
Electronics/Information and Communications Engineering
Title Robust appearance feature learning using pixel‐wise discrimination for visual tracking
URI https://onlinelibrary.wiley.com/doi/abs/10.4218%2Fetrij.2018-0486
https://doaj.org/article/d48a6e39ae0046d789ff1aa2e648e08e
https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002492156
Volume 41
ispartofPNX ETRI Journal, 2019, 41(4), pp. 483-493