Cost-Effective Active Learning for Deep Image Classification

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 27, No. 12, pp. 2591–2600
Main Authors: Wang, Keze; Zhang, Dongyu; Li, Ya; Zhang, Ruimao; Lin, Liang
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2017
Subjects
Online Access: https://ieeexplore.ieee.org/document/7508942

Abstract Recent successes in learning-based image classification, however, heavily rely on a large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning (AL) framework, which is capable of building a competitive classifier with optimal feature representation via a limited amount of labeled training instances in an incremental learning manner. Our approach advances the existing AL methods in two aspects. First, we incorporate deep convolutional neural networks into AL. Through the properly designed framework, the feature representation and the classifier can be simultaneously updated with progressively annotated informative samples. Second, we present a cost-effective sample selection strategy to improve the classification performance with fewer manual annotations. Unlike traditional methods focusing on only the uncertain samples of low prediction confidence, we especially discover the large amount of high-confidence samples from the unlabeled set for feature learning. Specifically, these high-confidence samples are automatically selected and iteratively assigned pseudo-labels. We thus call our framework cost-effective AL (CEAL), standing for these two advantages. Extensive experiments demonstrate that the proposed CEAL framework can achieve promising results on two challenging image classification data sets, i.e., face recognition on the Cross-Age Celebrity Dataset (CACD) and object categorization on Caltech-256.
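The selection strategy summarized in the abstract works in rounds: the current CNN scores the unlabeled pool, the least confident samples are queried for human annotation, the most confident samples are automatically assigned pseudo-labels, and the network is then fine-tuned on both. The sketch below illustrates a single selection step; it is a minimal NumPy illustration of that idea under assumed names and parameters (ceal_select_step, k_uncertain, entropy_threshold, and the entropy-based confidence measure are choices made here for clarity, not the authors' released implementation).

import numpy as np

def ceal_select_step(probs, k_uncertain=100, entropy_threshold=0.05):
    # probs: (N, C) softmax outputs of the current CNN on the unlabeled pool.
    # Predictive entropy as the confidence measure; the small constant avoids log(0).
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Most uncertain samples: sent to human annotators for manual labels.
    query_idx = np.argsort(entropy)[-k_uncertain:]

    # High-confidence samples: automatically assigned pseudo-labels.
    pseudo_idx = np.where(entropy < entropy_threshold)[0]
    pseudo_labels = probs[pseudo_idx].argmax(axis=1)

    return query_idx, pseudo_idx, pseudo_labels

In the full framework, each round would add the newly annotated and pseudo-labeled samples to the training set and fine-tune the CNN before the next selection; the concrete confidence criterion and threshold schedule are described in the article itself.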
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2017
DOI 10.1109/TCSVT.2016.2589879
Discipline Engineering
EISSN 1558-2205
EndPage 2600
ExternalDocumentID 10_1109_TCSVT_2016_2589879
7508942
Genre orig-research
GrantInformation_xml – fundername: State Key Development Program
  grantid: 2016YFB1001000
– fundername: National Natural Science Foundation of China
  grantid: 61622214
  funderid: 10.13039/501100001809
– fundername: NVIDIA Corporation through the Tesla K40 GPU
  funderid: 10.13039/100007065
– fundername: CCF-Tencent Open Fund
– fundername: Special Program through the Applied Research on Super Computation of the Natural Science Foundation of China–Guangdong Joint Fund (the second phase)
  funderid: 10.13039/501100001809
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 12
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
ORCID 0000-0002-7595-0137
PQID 1977272784
PQPubID 85433
PageCount 10
PublicationCentury 2000
PublicationDate 2017-12-01
PublicationDateYYYYMMDD 2017-12-01
PublicationDecade 2010
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2017
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2591
SubjectTerms Active learning
Active learning (AL)
Annotations
Artificial neural networks
Classification
Classifiers
deep neural nets
Face recognition
Image classification
incremental learning
Learning
Learning systems
Machine learning
Measurement uncertainty
Neural networks
Object recognition
Representations
Training
Uncertainty
Visualization
Title Cost-Effective Active Learning for Deep Image Classification
URI https://ieeexplore.ieee.org/document/7508942
https://www.proquest.com/docview/1977272784
Volume 27