Binary neural networks: A survey

Bibliographic Details
Published in Pattern Recognition Vol. 105; p. 107281
Main Authors Qin, Haotong, Gong, Ruihao, Liu, Xianglong, Bai, Xiao, Song, Jingkuan, Sebe, Nicu
Format Journal Article
Language English
Published Elsevier Ltd 01.09.2020
Subjects
Online Access Get full text

Abstract
•We summarize binary neural network methods and categorize them into naive binarization and optimized binarization.
•Binary neural networks are mainly optimized with techniques that minimize the quantization error, improve the loss function, and reduce the gradient error.
•We also discuss hardware-friendly methods and useful tricks for training binary neural networks.
•We present the common datasets and network structures used for evaluation, and compare performance on different tasks.
•We conclude and point out future research trends.
The binary neural network, which greatly reduces storage and computation, is a promising technique for deploying deep models on resource-limited devices. However, binarization inevitably causes severe information loss, and, even worse, its discontinuity makes the deep network difficult to optimize. To address these issues, a variety of algorithms have been proposed and have achieved satisfying progress in recent years. In this paper, we present a comprehensive survey of these algorithms, categorized mainly into naive solutions that directly conduct binarization and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error. We also investigate other practical aspects of binary neural networks, such as hardware-friendly design and training tricks. Then, we give evaluations and discussions on different tasks, including image classification, object detection, and semantic segmentation. Finally, we outline the challenges that future research may face.
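As a concrete illustration of the techniques named in the abstract, the following minimal PyTorch sketch (illustrative only; the class and function names are ours, not the survey's) binarizes weights with the sign function, keeps an XNOR-Net-style scaling factor alpha = mean(|W|) to reduce the quantization error, and uses a straight-through estimator in the backward pass to cope with the gradient error caused by the non-differentiable sign.

import torch

class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator (STE) backward pass.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: pass the incoming gradient through unchanged, but zero it
        # where |x| > 1 (the clipping window used by BinaryConnect-style methods).
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

def binarize_weights(w):
    # Keep a full-precision scaling factor so that alpha * sign(W)
    # approximates W, which reduces the binarization (quantization) error.
    alpha = w.abs().mean()
    return alpha * BinarizeSTE.apply(w)

# Usage: binarize a full-precision weight tensor during the forward pass.
w = torch.randn(64, 3, 3, 3, requires_grad=True)
w_bin = binarize_weights(w)
w_bin.sum().backward()   # gradients flow back to w via the STE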
ArticleNumber 107281
Author Qin, Haotong
Sebe, Nicu
Bai, Xiao
Gong, Ruihao
Liu, Xianglong
Song, Jingkuan
Author_xml – sequence: 1
  givenname: Haotong
  surname: Qin
  fullname: Qin, Haotong
  organization: State Key Lab of Software Development Environment, Beihang University, Beijing, China
– sequence: 2
  givenname: Ruihao
  surname: Gong
  fullname: Gong, Ruihao
  organization: State Key Lab of Software Development Environment, Beihang University, Beijing, China
– sequence: 3
  givenname: Xianglong
  orcidid: 0000-0001-8425-4195
  surname: Liu
  fullname: Liu, Xianglong
  email: xlliu@nlsde.buaa.edu.cn
  organization: State Key Lab of Software Development Environment, Beihang University, Beijing, China
– sequence: 4
  givenname: Xiao
  surname: Bai
  fullname: Bai, Xiao
  organization: School of Computer Science and Engineering, Beijing Advanced Innovation Center for Big Data and Brain Computing, Jiangxi Research Institute, Beihang University, Beijing, China
– sequence: 5
  givenname: Jingkuan
  orcidid: 0000-0002-2549-8322
  surname: Song
  fullname: Song, Jingkuan
  organization: Center for Future Media and School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 6
  givenname: Nicu
  surname: Sebe
  fullname: Sebe, Nicu
  organization: Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
BookMark eNqFj11LwzAYhYNMcJv-Ay_6B1rffHRJdyHM4RcMvNHrkKSppNZ0JNlk_96WeuWFXh04h-fAs0Az33uL0DWGAgNe3bTFXiXTvxcEyFhxIvAZmmPBaV5iRmZoDkBxTgnQC7SIsQXAfBjmKLtzXoVT5u0hqG6I9NWHj7jONlk8hKM9XaLzRnXRXv3kEr093L9un_Ldy-PzdrPLDYVVyjWBEkpGsWCgqpoxQnQFTKsGE65po4Upa2I4MG6s0KbSWnNVlUpXRBtR0iVaT78m9DEG20jjkkqu9yko10kMcnSVrZxc5egqJ9cBZr_gfXCfg9d_2O2E2UHs6GyQ0Tjrja1dsCbJund_H3wDeqVw0Q
CitedBy_id crossref_primary_10_1109_TPAMI_2023_3272925
crossref_primary_10_3390_electronics10222836
crossref_primary_10_1007_s10846_021_01336_y
crossref_primary_10_1109_ACCESS_2025_3547627
crossref_primary_10_3389_frcmn_2022_878170
crossref_primary_10_4316_AECE_2021_03001
crossref_primary_10_1016_j_neucom_2021_07_045
crossref_primary_10_1109_TCAD_2022_3213612
crossref_primary_10_1016_j_smhl_2023_100382
crossref_primary_10_1007_s11280_021_00878_3
crossref_primary_10_1007_s13042_023_01825_6
crossref_primary_10_1016_j_patcog_2024_111061
crossref_primary_10_1016_j_physleta_2022_128008
crossref_primary_10_1016_j_heliyon_2023_e15098
crossref_primary_10_1103_PhysRevApplied_18_014039
crossref_primary_10_1109_TPDS_2022_3170501
crossref_primary_10_1109_TNNLS_2022_3225715
crossref_primary_10_3390_mi12101243
crossref_primary_10_1109_ACCESS_2023_3328622
crossref_primary_10_3390_s23229254
crossref_primary_10_1109_ACCESS_2022_3208091
crossref_primary_10_1016_j_patcog_2024_110788
crossref_primary_10_1109_JIOT_2021_3079164
crossref_primary_10_1007_s10462_023_10464_w
crossref_primary_10_1007_s10915_022_02064_7
crossref_primary_10_1038_s41598_021_99191_2
crossref_primary_10_1007_s00138_023_01494_z
crossref_primary_10_1109_TETC_2023_3237778
crossref_primary_10_1002_cpe_6276
crossref_primary_10_1109_TII_2021_3139902
crossref_primary_10_1016_j_patcog_2020_107584
crossref_primary_10_1016_j_patcog_2020_107585
crossref_primary_10_1109_LSENS_2023_3273733
crossref_primary_10_1002_lpor_202400936
crossref_primary_10_1007_s10489_020_02125_0
crossref_primary_10_1016_j_neucom_2024_128722
crossref_primary_10_3389_fnins_2022_918793
crossref_primary_10_1364_PRJ_472741
crossref_primary_10_3390_electronics10091025
crossref_primary_10_1016_j_displa_2022_102355
crossref_primary_10_1016_j_engappai_2023_106232
crossref_primary_10_2478_popets_2022_0025
crossref_primary_10_1016_j_chaos_2022_112429
crossref_primary_10_3390_app11199173
crossref_primary_10_1016_j_knosys_2023_111319
crossref_primary_10_1016_j_conbuildmat_2023_130704
crossref_primary_10_1109_TCSII_2023_3241163
crossref_primary_10_1002_cpe_6147
crossref_primary_10_1587_elex_19_20220399
crossref_primary_10_1109_TMM_2022_3233255
crossref_primary_10_1109_TCAD_2022_3197499
crossref_primary_10_1109_ACCESS_2021_3070627
crossref_primary_10_1109_TETC_2024_3365354
crossref_primary_10_1109_TII_2024_3414489
crossref_primary_10_1109_OJCOMS_2024_3425531
crossref_primary_10_1016_j_cviu_2022_103415
crossref_primary_10_1016_j_dcan_2022_10_010
crossref_primary_10_1038_s41467_024_51509_0
crossref_primary_10_1109_TIFS_2023_3274391
crossref_primary_10_1109_TCASAI_2024_3491673
crossref_primary_10_1109_TPAMI_2023_3334614
crossref_primary_10_1145_3631610
crossref_primary_10_3389_fnins_2023_1225871
crossref_primary_10_1038_s41598_024_80272_x
crossref_primary_10_1109_ACCESS_2023_3347332
crossref_primary_10_1007_s42044_025_00242_y
crossref_primary_10_1063_5_0072913
crossref_primary_10_1109_ACCESS_2022_3175574
crossref_primary_10_1109_TMAG_2023_3315283
crossref_primary_10_1109_TNNLS_2021_3104646
crossref_primary_10_3390_mi12070838
crossref_primary_10_1016_j_mejo_2021_105319
crossref_primary_10_1103_PhysRevApplied_21_054028
crossref_primary_10_1016_j_patcog_2020_107608
crossref_primary_10_1016_j_biosystems_2023_104902
crossref_primary_10_1109_TIP_2021_3127849
crossref_primary_10_1109_TCAD_2021_3075420
crossref_primary_10_1109_MCAS_2021_3071629
crossref_primary_10_1109_TII_2024_3396348
crossref_primary_10_1016_j_asr_2023_02_025
crossref_primary_10_1007_s11633_023_1434_8
crossref_primary_10_3390_s22228694
crossref_primary_10_1109_TCSII_2024_3392600
crossref_primary_10_1016_j_eneco_2022_106049
crossref_primary_10_1016_j_patcog_2020_107611
crossref_primary_10_1109_TCSI_2024_3395442
crossref_primary_10_1109_TVT_2024_3382893
crossref_primary_10_1016_j_future_2022_12_028
crossref_primary_10_1016_j_ijdrr_2024_104435
crossref_primary_10_1145_3527169
crossref_primary_10_1109_JLT_2022_3200827
crossref_primary_10_1088_2634_4386_ac781a
crossref_primary_10_1109_ACCESS_2023_3258360
crossref_primary_10_1016_j_patcog_2020_107384
crossref_primary_10_1007_s11119_023_10073_1
crossref_primary_10_3390_electronics11233966
crossref_primary_10_1007_s00521_023_08718_3
crossref_primary_10_1016_j_memori_2023_100025
crossref_primary_10_1109_TGRS_2024_3417286
crossref_primary_10_1117_1_JEI_33_1_013013
crossref_primary_10_3390_app11136232
crossref_primary_10_1364_OL_451335
crossref_primary_10_3390_electronics13152904
crossref_primary_10_1109_JSTSP_2024_3467926
crossref_primary_10_1016_j_micpro_2021_104359
crossref_primary_10_1016_j_patcog_2023_109556
crossref_primary_10_1007_s11042_022_13043_3
crossref_primary_10_1109_TBCAS_2024_3389875
crossref_primary_10_1016_j_patcog_2022_109256
crossref_primary_10_3389_fenvs_2022_946729
crossref_primary_10_1007_s13369_022_06932_0
crossref_primary_10_1109_TCSVT_2022_3146240
crossref_primary_10_1109_TPAMI_2023_3328881
crossref_primary_10_1109_TPAMI_2024_3394390
crossref_primary_10_1177_17543371251324082
crossref_primary_10_1007_s10955_022_02893_8
crossref_primary_10_1016_j_cmpb_2022_106735
crossref_primary_10_1016_j_patcog_2023_109788
crossref_primary_10_1038_s41467_024_55220_y
crossref_primary_10_1109_JBHI_2024_3352927
crossref_primary_10_1515_nanoph_2023_0824
crossref_primary_10_1109_TMC_2021_3109940
crossref_primary_10_1093_imaiai_iaad036
crossref_primary_10_3389_fnano_2021_654418
crossref_primary_10_1016_j_aej_2024_01_075
crossref_primary_10_1016_j_vlsi_2024_102337
crossref_primary_10_3390_app11136213
crossref_primary_10_1016_j_patcog_2022_109263
crossref_primary_10_1109_TIP_2022_3216776
crossref_primary_10_1016_j_patcog_2020_107647
crossref_primary_10_3390_math11092112
crossref_primary_10_4271_2022_01_0156
crossref_primary_10_1016_j_patcog_2024_110929
crossref_primary_10_1007_s00521_022_08034_2
crossref_primary_10_1007_s11432_023_3958_4
crossref_primary_10_1016_j_knosys_2022_108962
crossref_primary_10_1145_3581757
crossref_primary_10_1016_j_apgeog_2023_103113
crossref_primary_10_1016_j_vlsi_2022_04_003
crossref_primary_10_1109_TVLSI_2022_3163233
crossref_primary_10_1016_j_patrec_2021_01_015
crossref_primary_10_1016_j_patcog_2022_108861
crossref_primary_10_3390_electronics10243141
crossref_primary_10_1016_j_neucom_2023_127169
crossref_primary_10_1109_TPAMI_2022_3212615
crossref_primary_10_1364_OL_464214
crossref_primary_10_1016_j_displa_2021_102028
crossref_primary_10_1016_j_ijleo_2025_172299
crossref_primary_10_1016_j_neunet_2024_107117
crossref_primary_10_1016_j_procs_2023_08_125
crossref_primary_10_3389_fnins_2023_1233037
crossref_primary_10_1109_TETCI_2023_3251404
crossref_primary_10_1109_JPHOT_2022_3163793
crossref_primary_10_1109_TNANO_2023_3336910
crossref_primary_10_1109_TNNLS_2022_3160939
crossref_primary_10_3390_s21175745
crossref_primary_10_1145_3604802
crossref_primary_10_1109_COMST_2023_3344351
crossref_primary_10_1109_JMASS_2020_3034205
crossref_primary_10_26599_TST_2021_9010084
crossref_primary_10_1109_TCSVT_2024_3398691
crossref_primary_10_1109_TMTT_2023_3293054
crossref_primary_10_3390_app142110025
crossref_primary_10_1002_isaf_1532
crossref_primary_10_1016_j_patcog_2023_109512
crossref_primary_10_1145_3472612
crossref_primary_10_1016_j_patcog_2021_108102
crossref_primary_10_1109_MGRS_2023_3321258
crossref_primary_10_1109_JEDS_2021_3123632
crossref_primary_10_3390_info11110501
crossref_primary_10_3390_electronics13091624
crossref_primary_10_1109_JIOT_2022_3179016
crossref_primary_10_1109_TSG_2023_3286490
crossref_primary_10_1109_ACCESS_2023_3281737
crossref_primary_10_1080_02331934_2023_2239852
crossref_primary_10_1109_TAI_2022_3229280
crossref_primary_10_1109_LSP_2022_3187318
crossref_primary_10_1109_TCSI_2021_3115787
crossref_primary_10_1016_j_eswa_2022_117931
crossref_primary_10_1016_j_eij_2025_100610
crossref_primary_10_1021_acs_jpclett_4c00284
crossref_primary_10_1109_TG_2021_3066245
crossref_primary_10_1088_1674_1056_ac5886
crossref_primary_10_1007_s10489_024_05444_8
crossref_primary_10_1016_j_device_2024_100546
crossref_primary_10_3389_frai_2024_1368569
crossref_primary_10_1109_MIS_2023_3241431
crossref_primary_10_3390_app11156790
crossref_primary_10_1016_j_ins_2024_121668
crossref_primary_10_1109_TCSII_2023_3290230
crossref_primary_10_3390_electronics10161943
crossref_primary_10_1038_s41467_024_44766_6
crossref_primary_10_1002_aelm_202400061
crossref_primary_10_1021_acsaelm_3c00698
crossref_primary_10_1109_JIOT_2023_3321299
crossref_primary_10_1007_s00521_021_06830_w
crossref_primary_10_1109_TCSI_2022_3178474
crossref_primary_10_1016_j_patcog_2020_107427
crossref_primary_10_1109_MNANO_2025_3533937
crossref_primary_10_1109_TGRS_2025_3529696
crossref_primary_10_1109_JIOT_2023_3341307
crossref_primary_10_1109_MDAT_2020_3031857
crossref_primary_10_1109_TWC_2023_3297790
crossref_primary_10_1145_3703447
crossref_primary_10_1016_j_patcog_2024_111086
crossref_primary_10_1109_TIFS_2024_3484936
crossref_primary_10_1145_3623402
crossref_primary_10_46300_9106_2021_15_52
crossref_primary_10_1109_TIP_2023_3328565
crossref_primary_10_1016_j_patcog_2024_110444
crossref_primary_10_1016_j_eswa_2024_124599
crossref_primary_10_3390_e23080933
crossref_primary_10_3724_SP_J_1089_2022_18920
crossref_primary_10_1109_ACCESS_2022_3157893
crossref_primary_10_1109_TIFS_2024_3356164
crossref_primary_10_1088_1755_1315_539_1_012102
crossref_primary_10_1109_TIE_2022_3146573
crossref_primary_10_1109_TDSC_2023_3271956
crossref_primary_10_1002_lpor_202200723
crossref_primary_10_1109_TCSI_2024_3397925
crossref_primary_10_1016_j_jvcir_2021_103289
crossref_primary_10_1038_s41598_024_56575_4
crossref_primary_10_1145_3626100
crossref_primary_10_1016_j_dcan_2024_11_013
Cites_doi 10.1016/j.patcog.2018.04.004
10.1109/JSSC.2016.2616357
10.1016/j.patcog.2019.07.002
10.1145/3123266.3129393
10.1109/CVPR42600.2020.00204
10.1016/j.patcog.2016.07.001
10.1007/s40687-018-0177-6
10.1016/j.neucom.2017.09.046
10.1109/5.726791
10.5772/intechopen.79562
10.1145/3020078.3021744
10.1007/978-3-319-10602-1_48
10.1016/j.patcog.2016.08.032
10.1162/neco.1990.2.2.226
10.1109/JETCAS.2019.2910232
10.1007/s11263-009-0275-4
10.1016/j.patcog.2019.107037
10.1049/trit.2018.1026
10.1016/j.patcog.2017.08.029
10.1145/3289602.3293990
10.1016/j.patcog.2017.10.007
10.1145/3020078.3021741
10.1016/j.patcog.2016.07.026
10.1038/nature14539
10.1007/s11265-017-1255-5
ContentType Journal Article
Copyright 2020 Elsevier Ltd
Copyright_xml – notice: 2020 Elsevier Ltd
DBID AAYXX
CITATION
DOI 10.1016/j.patcog.2020.107281
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1873-5142
ExternalDocumentID 10_1016_j_patcog_2020_107281
S0031320320300856
GroupedDBID --K
--M
-D8
-DT
-~X
.DC
.~1
0R~
123
1B1
1RT
1~.
1~5
29O
4.4
457
4G.
53G
5VS
7-5
71M
8P~
9JN
AABNK
AACTN
AAEDT
AAEDW
AAIAV
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAQXK
AAXUO
AAYFN
ABBOA
ABEFU
ABFNM
ABFRF
ABHFT
ABJNI
ABMAC
ABTAH
ABXDB
ABYKQ
ACBEA
ACDAQ
ACGFO
ACGFS
ACNNM
ACRLP
ACZNC
ADBBV
ADEZE
ADJOM
ADMUD
ADMXK
ADTZH
AEBSH
AECPX
AEFWE
AEKER
AENEX
AFKWA
AFTJW
AGHFR
AGUBO
AGYEJ
AHHHB
AHJVU
AHZHX
AIALX
AIEXJ
AIKHN
AITUG
AJBFU
AJOXV
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
AOUOD
ASPBG
AVWKF
AXJTR
AZFZN
BJAXD
BKOJK
BLXMC
CS3
DU5
EBS
EFJIC
EFLBG
EJD
EO8
EO9
EP2
EP3
F0J
F5P
FD6
FDB
FEDTE
FGOYB
FIRID
FNPLU
FYGXN
G-Q
G8K
GBLVA
GBOLZ
HLZ
HVGLF
HZ~
H~9
IHE
J1W
JJJVA
KOM
KZ1
LG9
LMP
LY1
M41
MO0
N9A
O-L
O9-
OAUVE
OZT
P-8
P-9
P2P
PC.
Q38
R2-
RIG
RNS
ROL
RPZ
SBC
SDF
SDG
SDP
SDS
SES
SEW
SPC
SPCBC
SST
SSV
SSZ
T5K
TN5
UNMZH
VOH
WUQ
XJE
XPP
ZMT
ZY4
~G-
AATTM
AAXKI
AAYWO
AAYXX
ABDPE
ABWVN
ACRPL
ACVFH
ADCNI
ADNMO
AEIPS
AEUPX
AFJKZ
AFPUW
AFXIZ
AGCQF
AGQPQ
AGRNS
AIGII
AIIUN
AKBMS
AKRWK
AKYEP
ANKPU
APXCP
BNPGV
CITATION
SSH
ID FETCH-LOGICAL-c306t-b20505431840a9d4422b904baf127b3fb8c5d2c7047ce8bc9bbb7a95ab92bc853
IEDL.DBID .~1
ISSN 0031-3203
IngestDate Tue Jul 01 02:36:31 EDT 2025
Thu Apr 24 23:12:37 EDT 2025
Fri Feb 23 02:50:14 EST 2024
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Model acceleration
Binary neural network
Network quantization
Model compression
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c306t-b20505431840a9d4422b904baf127b3fb8c5d2c7047ce8bc9bbb7a95ab92bc853
ORCID 0000-0002-2549-8322
0000-0001-8425-4195
ParticipantIDs crossref_citationtrail_10_1016_j_patcog_2020_107281
crossref_primary_10_1016_j_patcog_2020_107281
elsevier_sciencedirect_doi_10_1016_j_patcog_2020_107281
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate September 2020
2020-09-00
PublicationDateYYYYMMDD 2020-09-01
PublicationDate_xml – month: 09
  year: 2020
  text: September 2020
PublicationDecade 2020
PublicationTitle Pattern recognition
PublicationYear 2020
Publisher Elsevier Ltd
Publisher_xml – name: Elsevier Ltd
References Yu, Liu, Liu, Yang, Zhang (bib0022) 2019; abs/1909.04839
Lebedev, Ganin, Rakhuba, Oseledets, Lempitsky (bib0043) 2015
Cheng, Yu, Feris, Kumar, Choudhary, Chang (bib0025) 2015
Li, Wang, Liang, Qin, Yan, Fan (bib0065) 2019
Denton, Zaremba, Bruna, LeCun, Fergus (bib0042) 2014
Darabi, Belbahri, Courbariaux, Nia (bib0090) 2018; abs/1812.11800
Nogueira, Penatti, dos Santos (bib0004) 2017; 61
Zhang, Yang, Ye, Hua (bib0061) 2018
Cheng, Wang, Zhou, Zhang (bib0015) 2017; abs/1710.09282
Srinivas, Babu (bib0027) 2015
Pang, Chen, Shi, Feng, Ouyang, Lin (bib0008) 2019
Rastegari, Ordonez, Redmon, Farhadi (bib0058) 2016
Hubara, Courbariaux, Soudry, El-Yaniv, Bengio (bib0057) 2016
Mishra, Nurvitadhi, Cook, Marr (bib0075) 2018
Ren, He, Girshick, Sun (bib0003) 2015
Gong, Liu, Jiang, Li, Hu, Lin, Yu, Yan (bib0063) 2019
Nakahara, Fujii, Sato (bib0117) 2017
Liu, Ding, Xia, Zhang, Gu, Liu, Ji, Doermann (bib0088) 2019
Nakahara, Yonekawa, Sasao, Iwamoto, Motomura (bib0115) 2016
Bulat, Tzimiropoulos, Kossaifi, Pantic (bib0095) 2019; abs/1904.05868
Ding, Chen, Huo (bib0056) 2019; 96
Vanhoucke, Senior, Mao (bib0034) 2011
Zhou, Ni, Zhou, Wen, Wu, Zou (bib0060) 2016; abs/1606.06160
Deng, Dong, Socher, Li, Li, Li (bib0118) 2009
Ge (bib0036) 2018
Chen, Zhang, Wang (bib0053) 2018
Liu, Wu, Luo, Yang, Liu, Cheng (bib0062) 2018
Choi, Wang, Venkataramani, Chuang, Srinivasan, Gopalakrishnan (bib0074) 2018; abs/1805.06085
Jacob, Kligys, Chen, Zhu, Tang, Howard, Adam, Kalenichenko (bib0131) 2018
Fraser, Umuroglu, Gambardella, Blott, Leong, Jahre, Vissers (bib0116) 2017
Lin, Goyal, Girshick, He, Dollar (bib0129) 2017; PP
Polino, Pascanu, Alistarh (bib0081) 2018
Chen, Krishna, Emer, Sze (bib0020) 2017; 52
Qin, Gong, Liu, Shen, Wei, Yu, Song (bib0096) 2020
Lecun, Bottou, Bengio, Haffner (bib0107) 1998; 86
Chen, Emer, Sze (bib0018) 2016; PP
Zhuang, Shen, Tan, Liu, Reid (bib0122) 2019
Mishra, Marr (bib0082) 2018
Faraone, Fraser, Blott, Leong (bib0077) 2018
Cai, He, Sun, Vasconcelos (bib0089) 2017
Yonekawa, Nakahara (bib0119) 2017
R. Zhao, W. Song, W. Zhang, T. Xing, J.-H. Lin, M. Srivastava, R. Gupta, Z. Zhang, Accelerating binarized convolutional neural networks with software-programmable fpgas, in: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
Blott, Preusser, Fraser, Gambardella, Obrien, Umuroglu, Leeser, Vissers (bib0108) 2018; 11
Li, Liu, Zhang, Wang (bib0014) 2020; 98
Guo, Ma, Chen, Li, Xie, Wang (bib0112) 2018
Sun, Yin, Wang, Xu, Wu, Gu (bib0064) 2018; 3
Ma, Zhang, Zheng, Sun (bib0050) 2018
Zhu, Dong, Su (bib0067) 2019
Howard, Zhu, Chen, Kalenichenko, Wang, Weyand, Andreetto, Adam (bib0047) 2017; abs/1704.04861
Courbariaux, Bengio, David (bib0059) 2015
Cao, Ma, Xiao, Zhang, Liu, Zhang, Nie, Yang (bib0123) 2019
Sandler, Howard, Zhu, Zhmoginov, Chen (bib0048) 2018
Liang, Yin, Liu, Luk, Wei (bib0100) 2018; 275
Girshick (bib0007) 2015
Wu, Leng, Wang, Hu, Cheng (bib0033) 2016
Netzer, Wang, Coates, Bissacco, Wu, Ng (bib0111) 2011
Zhao, He, Cheng, Hu (bib0037) 2018
Hinton, Vinyals, Dean (bib0051) 2015; abs/1503.02531
Ge, Luo, Zhao, Jin, Zhang (bib0031) 2017
Chen, Yang, Emer, Sze (bib0016) 2019; 9
Lopes, de Aguiar, Souza, Oliveira-Santos (bib0010) 2017; 61
Han, Pool, Tran, Dally (bib0026) 2015
Y. Umuroglu, N.J. Fraser, G. Gambardella, M. Blott, P. Leong, M. Jahre, K. Vissers, Finn: a framework for fast, scalable binarized neural network inference, in: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
Krizhevsky, Sutskever, Hinton (bib0011) 2012
Bai, Wang, Liberty (bib0092) 2019
Zhang, Liu, Liu, Xu, Yu, Ma, Li (bib0021) 2019; abs/1909.06978
Hu, Li, Wang, Zhang, Cheng (bib0038) 2018
Ge, Li, Ye, Luo (bib0009) 2017
Ding, Chin, Liu, Marculescu (bib0085) 2019
Leng, Dou, Li, Zhu, Jin (bib0121) 2017
Wen, Zhang, Xu, Yang, Han (bib0046) 2018; 81
Krishnamoorthi (bib0130) 2018; abs/1806.08342
Li, De, Xu, Studer, Samet, Goldstein (bib0098) 2017
Shen, Liu, Han, Gong, Wang, Xu (bib0078) 2020
Chen, Liu, Shi, Xu, Xu (bib0080) 2018
Hu, Wang, Cheng (bib0073) 2018
He, Zhang, Ren, Sun (bib0028) 2016
Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, Rabinovich (bib0002) 2015
Sze, Chen, Einer, Suleiman, Zhang (bib0017) 2017
Wu, Wang, Gao, Li (bib0013) 2018; 73
Lin, Gan, Han (bib0066) 2019
Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollr, Zitnick (bib0127) 2014
Jokic, Emery, Benini (bib0110) 2018
Ghasemzadeh, Samragh, Koushanfar (bib0109) 2018
Lin, Zhao, Pan (bib0071) 2017
Bulat, Tzimiropoulos (bib0076) 2019; abs/1909.13863
Martinez, Yang, Bulat, Tzimiropoulos (bib0079) 2020
Han, Mao, Dally (bib0029) 2016
Hou, Yao, Kwok (bib0083) 2017
Lahoud, Achanta, Márquez-Neila, Süsstrunk (bib0094) 2019; abs/1902.00730
Yim, Joo, Bae, Kim (bib0054) 2017
Redmon, Farhadi (bib0099) 2018; abs/1804.02767
Izui, Pentland (bib0024) 1990; 2
Zhou, Redkar, Huang (bib0114) 2017
Kim, Smaragdis (bib0069) 2016; abs/1601.06071
Wu, Wu, Gong, Lv, Chen, Liang, Hu, Liu, Yan (bib0040) 2020
Bethge, Bornstein, Loy, Yang, Meinel (bib0105) 2018; abs/1812.01965
Liu, Liu, Zhang, Yu, Liu, He (bib0023) 2019; abs/1909.09034
Bengio, Léonard, Courville (bib0068) 2013; abs/1308.3432
C. Fu, S. Zhu, H. Su, C.-E. Lee, J. Zhao, Towards fast and energy-efficient binarized neural network inference on fpga, in: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
H. Yang, M. Fritzsche, C. Bartz, C. Meinel, Bmxnet: an open-source binary neural network implementation based on mxnet, in: Proceedings of the 2017 ACM on Multimedia Conference.
Liu, Fieguth, Guo, Wang, Pietikäinen (bib0005) 2017; 62
F. Zhu, R. Gong, F. Yu, X. Liu, Y. Wang, Z. Li, X. Yang, J. Yan, Towards unified int8 training for convolutional neural network, 2019.
Deng, Liu, Li, Tao (bib0006) 2018; 77
Yang, Shen, Xing, Tian, Li, Deng, Huang, Hua (bib0093) 2019
Simonyan, Zisserman (bib0012) 2015
Alizadeh, Fernndez-Marqus, Lane, Gal (bib0124) 2019
Jaderberg, Vedaldi, Zisserman (bib0044) 2014
Wang, Lu, Tao, Zhou, Tian (bib0086) 2019
Gupta, Agrawal, Gopalakrishnan, Narayanan (bib0035) 2015
Zhang, Zhou, Lin, Sun (bib0049) 2018
Chen, Krishna, Emer, Sze (bib0019) 2016
Krizhevsky (bib0113) 2012
Zhou, Yao, Guo, Xu, Chen (bib0084) 2017
Wang, Hu, Zhang, Zhang, Liu, Cheng (bib0072) 2018
Zhuang, Shen, Tan, Liu, Reid (bib0125) 2018
Zagoruyko, Komodakis (bib0055) 2017
Xu, Cheung (bib0097) 2019; abs/1909.11366
Ren, He, Girshick, Jian (bib0128) 2015
Kung, Zhang, van der Wal, Chai, Mukhopadhyay (bib0120) 2018
Xu, Hsu, Huang (bib0052) 2018
He, Zhang, Sun (bib0030) 2017
Chen, Wang, Pan (bib0039) 2019
Everingham, Van Gool, Williams, Winn, Zisserman (bib0126) 2010; 88
Lebedev, Lempitsky (bib0045) 2016
Liu, Anguelov, Erhan, Szegedy, Reed, Fu, Berg (bib0132) 2016
Li, Ni, Zhang, Yang, Gao (bib0070) 2017
Xu, Dong, Li, Su (bib0087) 2019
Gong, Liu, Yang, Bourdev (bib0032) 2014; abs/1412.6115
Yin, Zhang, Lyu, Osher, Qi, Xin (bib0091) 2019; 6
Zhang, Pan, Yao, Zhao, Mei (bib0106) 2019
Lecun, Bengio, Hinton (bib0001) 2015; 521
Yu (10.1016/j.patcog.2020.107281_bib0022) 2019; abs/1909.04839
Alizadeh (10.1016/j.patcog.2020.107281_bib0124) 2019
Wang (10.1016/j.patcog.2020.107281_bib0072) 2018
Liu (10.1016/j.patcog.2020.107281_bib0132) 2016
Cai (10.1016/j.patcog.2020.107281_bib0089) 2017
Polino (10.1016/j.patcog.2020.107281_bib0081) 2018
Nogueira (10.1016/j.patcog.2020.107281_bib0004) 2017; 61
Zhang (10.1016/j.patcog.2020.107281_bib0061) 2018
Simonyan (10.1016/j.patcog.2020.107281_bib0012) 2015
Bethge (10.1016/j.patcog.2020.107281_bib0105) 2018; abs/1812.01965
Jacob (10.1016/j.patcog.2020.107281_bib0131) 2018
Zagoruyko (10.1016/j.patcog.2020.107281_bib0055) 2017
Zhuang (10.1016/j.patcog.2020.107281_bib0125) 2018
Li (10.1016/j.patcog.2020.107281_bib0014) 2020; 98
Srinivas (10.1016/j.patcog.2020.107281_bib0027) 2015
Zhuang (10.1016/j.patcog.2020.107281_bib0122) 2019
Jaderberg (10.1016/j.patcog.2020.107281_bib0044) 2014
Xu (10.1016/j.patcog.2020.107281_bib0097) 2019; abs/1909.11366
Gong (10.1016/j.patcog.2020.107281_bib0063) 2019
Ghasemzadeh (10.1016/j.patcog.2020.107281_bib0109) 2018
Cao (10.1016/j.patcog.2020.107281_bib0123) 2019
Deng (10.1016/j.patcog.2020.107281_bib0006) 2018; 77
Darabi (10.1016/j.patcog.2020.107281_bib0090) 2018; abs/1812.11800
Nakahara (10.1016/j.patcog.2020.107281_bib0117) 2017
Jokic (10.1016/j.patcog.2020.107281_bib0110) 2018
Chen (10.1016/j.patcog.2020.107281_bib0019) 2016
Sze (10.1016/j.patcog.2020.107281_bib0017) 2017
Szegedy (10.1016/j.patcog.2020.107281_bib0002) 2015
10.1016/j.patcog.2020.107281_bib0041
Choi (10.1016/j.patcog.2020.107281_bib0074) 2018; abs/1805.06085
Courbariaux (10.1016/j.patcog.2020.107281_bib0059) 2015
Chen (10.1016/j.patcog.2020.107281_bib0039) 2019
Lecun (10.1016/j.patcog.2020.107281_bib0001) 2015; 521
Yim (10.1016/j.patcog.2020.107281_bib0054) 2017
Cheng (10.1016/j.patcog.2020.107281_bib0025) 2015
Shen (10.1016/j.patcog.2020.107281_bib0078) 2020
Bengio (10.1016/j.patcog.2020.107281_bib0068) 2013; abs/1308.3432
Liu (10.1016/j.patcog.2020.107281_bib0005) 2017; 62
Krizhevsky (10.1016/j.patcog.2020.107281_bib0011) 2012
Chen (10.1016/j.patcog.2020.107281_bib0053) 2018
Guo (10.1016/j.patcog.2020.107281_bib0112) 2018
Lecun (10.1016/j.patcog.2020.107281_bib0107) 1998; 86
Liu (10.1016/j.patcog.2020.107281_bib0062) 2018
Lebedev (10.1016/j.patcog.2020.107281_bib0045) 2016
Yin (10.1016/j.patcog.2020.107281_bib0091) 2019; 6
Zhou (10.1016/j.patcog.2020.107281_bib0084) 2017
Han (10.1016/j.patcog.2020.107281_bib0026) 2015
Redmon (10.1016/j.patcog.2020.107281_bib0099) 2018; abs/1804.02767
Lopes (10.1016/j.patcog.2020.107281_bib0010) 2017; 61
Bulat (10.1016/j.patcog.2020.107281_bib0076) 2019; abs/1909.13863
He (10.1016/j.patcog.2020.107281_bib0028) 2016
Hinton (10.1016/j.patcog.2020.107281_bib0051) 2015; abs/1503.02531
Leng (10.1016/j.patcog.2020.107281_bib0121) 2017
Lin (10.1016/j.patcog.2020.107281_bib0127) 2014
Xu (10.1016/j.patcog.2020.107281_bib0087) 2019
Xu (10.1016/j.patcog.2020.107281_bib0052) 2018
Wu (10.1016/j.patcog.2020.107281_bib0040) 2020
Lin (10.1016/j.patcog.2020.107281_bib0129) 2017; PP
Denton (10.1016/j.patcog.2020.107281_bib0042) 2014
Zhang (10.1016/j.patcog.2020.107281_bib0049) 2018
Ren (10.1016/j.patcog.2020.107281_bib0128) 2015
Ma (10.1016/j.patcog.2020.107281_bib0050) 2018
Ding (10.1016/j.patcog.2020.107281_bib0056) 2019; 96
Hu (10.1016/j.patcog.2020.107281_bib0038) 2018
Faraone (10.1016/j.patcog.2020.107281_bib0077) 2018
Blott (10.1016/j.patcog.2020.107281_bib0108) 2018; 11
Hou (10.1016/j.patcog.2020.107281_bib0083) 2017
Deng (10.1016/j.patcog.2020.107281_bib0118) 2009
Kung (10.1016/j.patcog.2020.107281_bib0120) 2018
Hubara (10.1016/j.patcog.2020.107281_bib0057) 2016
Liang (10.1016/j.patcog.2020.107281_bib0100) 2018; 275
Everingham (10.1016/j.patcog.2020.107281_bib0126) 2010; 88
Mishra (10.1016/j.patcog.2020.107281_bib0075) 2018
Ge (10.1016/j.patcog.2020.107281_bib0009) 2017
Hu (10.1016/j.patcog.2020.107281_bib0073) 2018
Qin (10.1016/j.patcog.2020.107281_bib0096) 2020
Bulat (10.1016/j.patcog.2020.107281_bib0095) 2019; abs/1904.05868
Cheng (10.1016/j.patcog.2020.107281_bib0015) 2017; abs/1710.09282
Yang (10.1016/j.patcog.2020.107281_bib0093) 2019
Zhou (10.1016/j.patcog.2020.107281_bib0060) 2016; abs/1606.06160
Martinez (10.1016/j.patcog.2020.107281_bib0079) 2020
Izui (10.1016/j.patcog.2020.107281_bib0024) 1990; 2
Krishnamoorthi (10.1016/j.patcog.2020.107281_bib0130) 2018; abs/1806.08342
Liu (10.1016/j.patcog.2020.107281_bib0023) 2019; abs/1909.09034
Vanhoucke (10.1016/j.patcog.2020.107281_bib0034) 2011
Zhao (10.1016/j.patcog.2020.107281_bib0037) 2018
He (10.1016/j.patcog.2020.107281_bib0030) 2017
Pang (10.1016/j.patcog.2020.107281_bib0008) 2019
Kim (10.1016/j.patcog.2020.107281_bib0069) 2016; abs/1601.06071
Ge (10.1016/j.patcog.2020.107281_bib0031) 2017
Liu (10.1016/j.patcog.2020.107281_bib0088) 2019
Zhang (10.1016/j.patcog.2020.107281_bib0021) 2019; abs/1909.06978
Wen (10.1016/j.patcog.2020.107281_bib0046) 2018; 81
Rastegari (10.1016/j.patcog.2020.107281_bib0058) 2016
Han (10.1016/j.patcog.2020.107281_bib0029) 2016
Zhou (10.1016/j.patcog.2020.107281_bib0114) 2017
Li (10.1016/j.patcog.2020.107281_bib0098) 2017
Zhang (10.1016/j.patcog.2020.107281_bib0106) 2019
Sun (10.1016/j.patcog.2020.107281_bib0064) 2018; 3
Chen (10.1016/j.patcog.2020.107281_bib0020) 2017; 52
Ding (10.1016/j.patcog.2020.107281_bib0085) 2019
Netzer (10.1016/j.patcog.2020.107281_bib0111) 2011
Mishra (10.1016/j.patcog.2020.107281_bib0082) 2018
Chen (10.1016/j.patcog.2020.107281_sbref0018) 2016; PP
Bai (10.1016/j.patcog.2020.107281_bib0092) 2019
Lin (10.1016/j.patcog.2020.107281_bib0071) 2017
Zhu (10.1016/j.patcog.2020.107281_bib0067) 2019
Nakahara (10.1016/j.patcog.2020.107281_bib0115) 2016
Krizhevsky (10.1016/j.patcog.2020.107281_bib0113) 2012
Howard (10.1016/j.patcog.2020.107281_bib0047) 2017; abs/1704.04861
Sandler (10.1016/j.patcog.2020.107281_bib0048) 2018
Wu (10.1016/j.patcog.2020.107281_bib0013) 2018; 73
Chen (10.1016/j.patcog.2020.107281_bib0080) 2018
Gupta (10.1016/j.patcog.2020.107281_bib0035) 2015
10.1016/j.patcog.2020.107281_bib0101
10.1016/j.patcog.2020.107281_bib0103
Yonekawa (10.1016/j.patcog.2020.107281_bib0119) 2017
10.1016/j.patcog.2020.107281_bib0102
Lebedev (10.1016/j.patcog.2020.107281_bib0043) 2015
Chen (10.1016/j.patcog.2020.107281_bib0016) 2019; 9
10.1016/j.patcog.2020.107281_bib0104
Ren (10.1016/j.patcog.2020.107281_bib0003) 2015
Lin (10.1016/j.patcog.2020.107281_bib0066) 2019
Gong (10.1016/j.patcog.2020.107281_bib0032) 2014; abs/1412.6115
Lahoud (10.1016/j.patcog.2020.107281_bib0094) 2019; abs/1902.00730
Ge (10.1016/j.patcog.2020.107281_bib0036) 2018
Girshick (10.1016/j.patcog.2020.107281_bib0007) 2015
Li (10.1016/j.patcog.2020.107281_bib0065) 2019
Li (10.1016/j.patcog.2020.107281_bib0070) 2017
Wang (10.1016/j.patcog.2020.107281_bib0086) 2019
Fraser (10.1016/j.patcog.2020.107281_bib0116) 2017
Wu (10.1016/j.patcog.2020.107281_bib0033) 2016
References_xml – year: 2019
  ident: bib0065
  article-title: Fully quantized network for object detection
  publication-title: IEEE CVPR
– year: 2016
  ident: bib0132
  article-title: SSD: single shot multibox detector
  publication-title: ECCV
– start-page: 98
  year: 2017
  end-page: 105
  ident: bib0119
  article-title: On-chip memory based binarized convolutional deep neural network applying batch normalization free technique on an fpga
  publication-title: IEEE IPDPSW
– volume: 77
  start-page: 306
  year: 2018
  end-page: 315
  ident: bib0006
  article-title: Active multi-kernel domain adaptation for hyperspectral image classification
  publication-title: Pattern Recognit.
– start-page: 1737
  year: 2015
  end-page: 1746
  ident: bib0035
  article-title: Deep learning with limited numerical precision
  publication-title: ICML
– volume: abs/1710.09282
  year: 2017
  ident: bib0015
  article-title: A survey of model compression and acceleration for deep neural networks
  publication-title: CoRR
– volume: abs/1704.04861
  year: 2017
  ident: bib0047
  article-title: Mobilenets: efficient convolutional neural networks for mobile vision applications
  publication-title: CoRR
– year: 2015
  ident: bib0003
  article-title: Faster r-cnn: towards real-time object detection with region proposal networks
  publication-title: NeurIPS
– volume: abs/1805.06085
  year: 2018
  ident: bib0074
  article-title: PACT: parameterized clipping activation for quantized neural networks
  publication-title: CoRR
– year: 2019
  ident: bib0087
  article-title: A main/subsidiary network framework for simplifying binary neural networks
  publication-title: IEEE CVPR
– start-page: 1
  year: 2018
  end-page: 7
  ident: bib0110
  article-title: Binaryeye: a 20 kfps streaming camera system on fpga with real-time on-device image recognition using binary neural networks
  publication-title: IEEE SIES
– volume: abs/1909.09034
  year: 2019
  ident: bib0023
  article-title: Training robust deep neural networks via adversarial noise propagation
  publication-title: CoRR
– start-page: 667
  year: 2017
  end-page: 672
  ident: bib0031
  article-title: Compressing deep neural networks for efficient visual inference
  publication-title: IEEE ICME
– volume: abs/1412.6115
  year: 2014
  ident: bib0032
  article-title: Compressing deep convolutional networks using vector quantization
  publication-title: CoRR
– year: 2019
  ident: bib0123
  article-title: Seernet: predicting convolutional neural network feature-map sparsity through low-bit quantization
  publication-title: IEEE CVPR
– year: 2019
  ident: bib0093
  article-title: Quantization networks
  publication-title: IEEE CVPR
– reference: C. Fu, S. Zhu, H. Su, C.-E. Lee, J. Zhao, Towards fast and energy-efficient binarized neural network inference on fpga, in: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
– year: 2017
  ident: bib0116
  article-title: Scaling binarized neural networks on reconfigurable logic
  publication-title: PARMA-DITAM@HiPEAC
– year: 2018
  ident: bib0082
  article-title: Apprentice: using knowledge distillation techniques to improve low-precision network accuracy
  publication-title: ICLR
– year: 2009
  ident: bib0118
  article-title: Imagenet: a large-scale hierarchical image database
  publication-title: IEEE CVPR
– year: 2018
  ident: bib0075
  article-title: WRPN: wide reduced-precision networks
  publication-title: ICLR
– start-page: 116
  year: 2018
  end-page: 131
  ident: bib0050
  article-title: Shufflenet v2: practical guidelines for efficient cnn architecture design
  publication-title: ECCV
– year: 2020
  ident: bib0096
  article-title: Forward and backward information retention for accurate binary neural networks
  publication-title: IEEE CVPR
– start-page: 277
  year: 2016
  end-page: 280
  ident: bib0115
  article-title: A memory-based realization of a binarized deep convolutional neural network
  publication-title: IEEE FPT
– reference: R. Zhao, W. Song, W. Zhang, T. Xing, J.-H. Lin, M. Srivastava, R. Gupta, Z. Zhang, Accelerating binarized convolutional neural networks with software-programmable fpgas, in: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
– year: 2017
  ident: bib0089
  article-title: Deep learning with low precision by half-wave gaussian quantization
  publication-title: IEEE CVPR
– volume: abs/1601.06071
  year: 2016
  ident: bib0069
  article-title: Bitwise neural networks
  publication-title: CoRR
– volume: abs/1902.00730
  year: 2019
  ident: bib0094
  article-title: Self-binarizing networks
  publication-title: CoRR
– year: 2019
  ident: bib0063
  article-title: Differentiable soft quantization: bridging full-precision and low-bit neural networks
  publication-title: IEEE ICCV
– year: 2017
  ident: bib0030
  article-title: Channel pruning for accelerating very deep neural networks
  publication-title: IEEE ICCV
– volume: 86
  start-page: 2278
  year: 1998
  end-page: 2324
  ident: bib0107
  article-title: Gradient-based learning applied to document recognition
  publication-title: Proc. IEEE
– year: 2018
  ident: bib0120
  article-title: Efficient object detection using embedded binarized neural networks
  publication-title: J. Signal Process. Syst.
– year: 2019
  ident: bib0086
  article-title: Learning channel-wise interactions for binary convolutional neural networks
  publication-title: IEEE CVPR
– year: 2019
  ident: bib0106
  article-title: dabnn: a super fast inference framework for binary neural networks on ARM devices
  publication-title: ACM MM
– volume: PP
  start-page: 2999
  year: 2017
  end-page: 3007
  ident: bib0129
  article-title: Focal loss for dense object detection
  publication-title: IEEE Trans. Pattern Anal. Mach.Intell.
– year: 2017
  ident: bib0054
  article-title: A gift from knowledge distillation: fast optimization, network minimization and transfer learning
  publication-title: IEEE CVPR
– year: 2016
  ident: bib0028
  article-title: Deep residual learning for image recognition
  publication-title: IEEE CVPR
– year: 2015
  ident: bib0043
  article-title: Speeding-up convolutional neural networks using fine-tuned cp-decomposition
  publication-title: ICLR
– year: 2019
  ident: bib0124
  article-title: A systematic study of binary neural networks’ optimisation
  publication-title: ICLR
– volume: abs/1503.02531
  year: 2015
  ident: bib0051
  article-title: Distilling the knowledge in a neural network
  publication-title: CoRR
– volume: abs/1909.11366
  year: 2019
  ident: bib0097
  article-title: Accurate and compact convolutional neural networks with trained binarization
  publication-title: CoRR
– year: 2016
  ident: bib0033
  article-title: Quantized convolutional neural networks for mobile devices
  publication-title: IEEE CVPR
– year: 2017
  ident: bib0055
  article-title: Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer
  publication-title: ICLR
– volume: 2
  start-page: 226
  year: 1990
  end-page: 238
  ident: bib0024
  article-title: Analysis of neural networks with redundancy
  publication-title: Neural Comput.
– year: 2015
  ident: bib0025
  article-title: An exploration of parameter redundancy in deep networks with circulant projections
  publication-title: IEEE ICCV
– volume: 61
  start-page: 539
  year: 2017
  end-page: 556
  ident: bib0004
  article-title: Towards better exploiting convolutional neural networks for remote sensing scene classification
  publication-title: Pattern Recognit.
– volume: abs/1909.13863
  year: 2019
  ident: bib0076
  article-title: Xnor-net++: improved binary neural networks
  publication-title: CoRR
– start-page: 2852
  year: 2018
  end-page: 2859
  ident: bib0053
  article-title: Darkrank: accelerating deep metric learning via cross sample similarities transfer
  publication-title: AAAI
– year: 2018
  ident: bib0131
  article-title: Quantization and training of neural networks for efficient integer-arithmetic-only inference
  publication-title: IEEE CVPR
– year: 2014
  ident: bib0042
  article-title: Exploiting linear structure within convolutional networks for efficient evaluation
  publication-title: NeurIPS
– year: 2016
  ident: bib0029
  article-title: Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding
  publication-title: ICLR
– start-page: 281
  year: 2017
  end-page: 284
  ident: bib0114
  article-title: Deep learning binary neural network on an fpga
  publication-title: IEEE MWSCAS
– year: 2017
  ident: bib0017
  article-title: Hardware for machine learning: challenges and opportunities
  publication-title: CICC
– start-page: 740
  year: 2014
  end-page: 755
  ident: bib0127
  article-title: Microsoft coco: Common objects in context
  publication-title: Lect. Notes Comput. Sci.
– year: 2018
  ident: bib0036
  article-title: Efficient Deep Learning in Network Compression and Acceleration
  publication-title: Digital Systems, Vahid Asadpour
– volume: abs/1606.06160
  year: 2016
  ident: bib0060
  article-title: Dorefa-net: training low bitwidth convolutional neural networks with low bitwidth gradients
  publication-title: CoRR
– year: 2018
  ident: bib0062
  article-title: Bi-real net: enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm
  publication-title: ECCV
– reference: H. Yang, M. Fritzsche, C. Bartz, C. Meinel, Bmxnet: an open-source binary neural network implementation based on mxnet, in: Proceedings of the 2017 ACM on Multimedia Conference.
– year: 2011
  ident: bib0111
  article-title: Reading digits in natural images with unsupervised feature learning
  publication-title: NeurIPS
– volume: 81
  start-page: 326
  year: 2018
  end-page: 340
  ident: bib0046
  article-title: Adaptive weighted nonnegative low-rank representation
  publication-title: Pattern Recognit.
– year: 2015
  ident: bib0007
  article-title: Fast r-cnn
  publication-title: IEEE ICCV
– year: 2014
  ident: bib0044
  article-title: Speeding up convolutional neural networks with low rank expansions
  publication-title: BMVC
– year: 2016
  ident: bib0045
  article-title: Fast convnets using group-wise brain damage
  publication-title: IEEE CVPR
– year: 2018
  ident: bib0073
  article-title: From hashing to cnns: training binary weight networks via hashing
  publication-title: AAAI
– year: 2020
  ident: bib0078
  article-title: Balanced binary neural networks with gated residual
  publication-title: ICASSP
– year: 2017
  ident: bib0084
  article-title: Incremental network quantization: towards lossless cnns with low-precision weights
  publication-title: ICLR
– year: 2016
  ident: bib0058
  article-title: Xnor-net: imagenet classification using binary convolutional neural networks
  publication-title: ECCV
– volume: 61
  start-page: 610
  year: 2017
  end-page: 628
  ident: bib0010
  article-title: Facial expression recognition with convolutional neural networks: coping with few data and the training sample order
  publication-title: Pattern Recognit.
– volume: 275
  start-page: 1072
  year: 2018
  end-page: 1086
  ident: bib0100
  article-title: Fp-bnn: binarized neural network on fpga
  publication-title: Neurocomputing
– year: 2011
  ident: bib0034
  article-title: Improving the speed of neural networks on cpus
– year: 2015
  ident: bib0128
  article-title: Faster r-cnn: towards real-time object detection with region proposal networks
  publication-title: NeurIPS
– start-page: 3918
  year: 2019
  end-page: 3928
  ident: bib0039
  article-title: Metaquant: learning to quantize by learning to penetrate non-differentiable quantization
  publication-title: NeurIPS
– year: 2012
  ident: bib0113
  article-title: Learning multiple layers of features from tiny images
  publication-title: University of Toronto
– year: 2018
  ident: bib0052
  article-title: Training shallow and thin networks for acceleration via knowledge distillation with conditional adversarial networks
  publication-title: ICLR
– year: 2019
  ident: bib0066
  article-title: Defensive quantization: when efficiency meets robustness
  publication-title: ICLR
– start-page: 51
  year: 2018
  end-page: 513
  ident: bib0112
  article-title: Fbna: a fully binarized neural network accelerator
  publication-title: IEEE FPL
– year: 2019
  ident: bib0008
  article-title: Libra r-cnn: towards balanced learning for object detection
  publication-title: IEEE CVPR
– year: 2019
  ident: bib0088
  article-title: Circulant binary convolutional networks: enhancing the performance of 1-bit dcnns with circulant back propagation
  publication-title: IEEE CVPR
– volume: abs/1909.04839
  year: 2019
  ident: bib0022
  article-title: Towards noise-robust neural networks via progressive adversarial training
  publication-title: CoRR
– start-page: 57
  year: 2018
  end-page: 64
  ident: bib0109
  article-title: Rebnet: residual binarized neural network
  publication-title: IEEE FCCM
– volume: 96
  year: 2019
  ident: bib0056
  article-title: Compressing CNN-DBLSTM models for OCR with teacher-student learning and tucker decomposition
  publication-title: Pattern Recognit.
– volume: PP
  year: 2016
  ident: bib0018
  article-title: Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks
  publication-title: IEEE Micro
– start-page: 657
  year: 2018
  end-page: 673
  ident: bib0038
  article-title: Training binary weight networks via semi-binary decomposition
  publication-title: ECCV
– year: 2019
  ident: bib0067
  article-title: Binary ensemble neural network: more bits per network or more networks per bit?
  publication-title: IEEE CVPR
– start-page: 1
  year: 2017
  end-page: 4
  ident: bib0117
  article-title: A fully connected layer elimination for a binarized convolutional neural network on an fpga
  publication-title: IEEE FPL
– year: 2018
  ident: bib0061
  article-title: Lq-nets: learned quantization for highly accurate and compact deep neural networks
  publication-title: ECCV
– year: 2017
  ident: bib0009
  article-title: Detecting masked faces in the wild with lle-cnns
  publication-title: IEEE CVPR
– year: 2019
  ident: bib0085
  article-title: Regularizing activation distribution for training binarized deep networks
  publication-title: IEEE CVPR
– year: 2019
  ident: bib0092
  article-title: Proxquant: quantized neural networks via proximal operators
  publication-title: ICLR
– volume: 9
  start-page: 292
  year: 2019
  end-page: 308
  ident: bib0016
  article-title: Eyeriss v2: a flexible accelerator for emerging deep neural networks on mobile devices
  publication-title: IEEE J. Emerg. Sel. Topics Circuits Syst.
– year: 2017
  ident: bib0121
  article-title: Extremely low bit neural network: squeeze the last bit out with admm
  publication-title: AAAI
– volume: abs/1812.11800
  year: 2018
  ident: bib0090
  article-title: BNN+: improved binary network training
  publication-title: CoRR
– year: 2016
  ident: bib0019
  article-title: Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks
  publication-title: ISSCC
– year: 2020
  ident: bib0040
  article-title: Rotation consistent margin loss for efficient low-bit face recognition
  publication-title: CoRR
– year: 2018
  ident: bib0072
  article-title: Two-step quantization for low-bit neural networks
  publication-title: IEEE CVPR
– year: 2017
  ident: bib0083
  article-title: Loss-aware binarization of deep networks
  publication-title: ICLR
– volume: 6
  start-page: 14
  year: 2019
  ident: bib0091
  article-title: Blended coarse gradient descent for full quantization of deep neural networks
  publication-title: Res. Math. Sci.
– volume: 52
  start-page: 127
  year: 2017
  end-page: 138
  ident: bib0020
  article-title: Eyeriss: an energy-efficient reconfigurable accelerator for deep convolutional neural networks
  publication-title: IEEE J. Solid-State Circuits
– start-page: 1545
  year: 2018
  end-page: 1552
  ident: bib0037
  article-title: Bitstream: Efficient computing architecture for real-time low-power inference of binary neural networks on cpus
  publication-title: ACM MM
– volume: abs/1812.01965
  year: 2018
  ident: bib0105
  article-title: Training competitive binary neural networks from scratch
  publication-title: CoRR
– volume: abs/1904.05868
  year: 2019
  ident: bib0095
  article-title: Improved training of binary networks for human pose estimation and image recognition
  publication-title: CoRR
– year: 2015
  ident: bib0002
  article-title: Going deeper with convolutions
  publication-title: IEEE CVPR
– volume: abs/1308.3432
  year: 2013
  ident: bib0068
  article-title: Estimating or propagating gradients through stochastic neurons for conditional computation
  publication-title: CoRR
– volume: 73
  start-page: 275
  year: 2018
  end-page: 288
  ident: bib0013
  article-title: Deep adaptive feature embedding with local sample distributions for person re-identification
  publication-title: Pattern Recognit.
– volume: 3
  start-page: 191
  year: 2018
  end-page: 197
  ident: bib0064
  article-title: Fast object detection based on binary deep convolution neural networks
  publication-title: CAAI Trans. Intell. Technol.
– volume: 98
  year: 2020
  ident: bib0014
  article-title: Spatio-temporal deformable 3d convnets with attention for action recognition
  publication-title: Pattern Recognit.
– year: 2015
  ident: bib0026
  article-title: Learning both weights and connections for efficient neural network
  publication-title: NeurIPS
– volume: 62
  start-page: 135
  year: 2017
  end-page: 160
  ident: bib0005
  article-title: Local binary features for texture classification: taxonomy and experimental study
  publication-title: Pattern Recognit.
– year: 2019
  ident: bib0122
  article-title: Structured binary neural networks for accurate image classification and semantic segmentation
  publication-title: IEEE CVPR
– start-page: 2830
  year: 2015
  end-page: 2838
  ident: bib0027
  article-title: Data-free parameter pruning for deep neural networks
  publication-title: Comput. Sci.
– year: 2017
  ident: bib0071
  article-title: Towards accurate binary convolutional neural network
  publication-title: NeurIPS
– year: 2020
  ident: bib0079
  article-title: Training binary neural networks with real-to-binary convolutions
  publication-title: ICLR
– reference: Y. Umuroglu, N.J. Fraser, G. Gambardella, M. Blott, P. Leong, M. Jahre, K. Vissers, Finn: a framework for fast, scalable binarized neural network inference, in: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
– year: 2018
  ident: bib0125
  article-title: Towards effective low-bitwidth convolutional neural networks
  publication-title: IEEE CVPR
– year: 2018
  ident: bib0080
  article-title: Distilled binary neural network for monaural speech separation
  publication-title: IJCNN
– volume: abs/1909.06978
  year: 2019
  ident: bib0021
  article-title: Interpreting and improving adversarial robustness with neuron sensitivity
  publication-title: CoRR
– volume: abs/1804.02767
  year: 2018
  ident: bib0099
  article-title: Yolov3: an incremental improvement
  publication-title: CoRR
– year: 2012
  ident: bib0011
  article-title: Imagenet classification with deep convolutional neural networks
  publication-title: NeurIPS
– year: 2018
  ident: bib0077
  article-title: Syq: learning symmetric quantization for efficient deep neural networks
  publication-title: IEEE CVPR
– year: 2015
  ident: bib0012
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: ICLR
– reference: F. Zhu, R. Gong, F. Yu, X. Liu, Y. Wang, Z. Li, X. Yang, J. Yan, Towards unified int8 training for convolutional neural network, 2019.
– volume: 88
  start-page: 303
  year: 2010
  end-page: 338
  ident: bib0126
  article-title: The pascal visual object classes (voc) challenge
  publication-title: Int. J. Comput. Vision
– start-page: 6848
  year: 2018
  end-page: 6856
  ident: bib0049
  article-title: Shufflenet: an extremely efficient convolutional neural network for mobile devices
  publication-title: IEEE CVPR
– volume: abs/1806.08342
  year: 2018
  ident: bib0130
  article-title: Quantizing deep convolutional networks for efficient inference: a whitepaper
  publication-title: CoRR
– year: 2018
  ident: bib0048
  article-title: Mobilenetv2: inverted residuals and linear bottlenecks
  publication-title: IEEE CVPR
– year: 2017
  ident: bib0070
  article-title: Performance guaranteed network acceleration via high-order residual quantization
  publication-title: IEEE ICCV
– volume: 11
  start-page: 16
  year: 2018
  ident: bib0108
StartPage 107281
SubjectTerms Binary neural network
Deep learning
Model acceleration
Model compression
Network quantization
Title Binary neural networks: A survey
URI https://dx.doi.org/10.1016/j.patcog.2020.107281
Volume 105