T-CNN: Tubelets With Convolutional Neural Networks for Object Detection From Videos

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 28, no. 10, pp. 2896-2907
Main Authors Kang, Kai; Li, Hongsheng; Yan, Junjie; Zeng, Xingyu; Yang, Bin; Xiao, Tong; Zhang, Cong; Wang, Zhe; Wang, Ruohui; Wang, Xiaogang; Ouyang, Wanli
Format Journal Article
Language English
Published New York: IEEE, 01.10.2018
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Abstract The state-of-the-art performance for object detection has been significantly improved over the past two years. Besides the introduction of powerful deep neural networks, such as GoogLeNet and VGG, novel object detection frameworks, such as R-CNN and its successors, Fast R-CNN and Faster R-CNN, play an essential role in improving the state of the art. Despite their effectiveness on still images, those frameworks are not specifically designed for object detection from videos. Temporal and contextual information of videos is not fully investigated and utilized. In this paper, we propose a deep learning framework that incorporates temporal and contextual information from tubelets obtained in videos, which dramatically improves the baseline performance of existing still-image detection frameworks when they are applied to videos. It is called T-CNN, i.e., tubelets with convolutional neural networks. The proposed framework won the newly introduced object-detection-from-video task with provided data in the ImageNet Large-Scale Visual Recognition Challenge 2015. Code is publicly available at https://github.com/myfavouritekk/T-CNN.
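The abstract's key idea is that detections from a still-image detector can be linked across frames into tubelets and re-scored with temporal context. Below is a minimal, hypothetical Python sketch of that general idea, not the authors' released T-CNN code: the per-frame data layout and the mean-based smoothing rule are illustrative assumptions. It smooths a tubelet's per-frame confidence scores over a temporal window so that frames where the detector is weak benefit from confident neighboring frames.

# Illustrative sketch only: temporal re-scoring of one tubelet's per-frame
# detection confidences. This is NOT the authors' pipeline; the data layout
# (a plain list of scores) and the mean-based smoothing rule are assumptions.
from statistics import mean

def rescore_tubelet(scores, window=5):
    """Smooth per-frame detection scores along a single tubelet.

    scores : list of float  -- still-image detector confidences, one per frame
    window : int            -- temporal window (in frames) used for smoothing
    Returns a new list of the same length with temporally smoothed scores.
    """
    half = window // 2
    smoothed = []
    for t in range(len(scores)):
        lo, hi = max(0, t - half), min(len(scores), t + half + 1)
        # Blend the raw score with its temporal neighborhood; the mean is one
        # simple choice, max-pooling over the window is another.
        smoothed.append(0.5 * scores[t] + 0.5 * mean(scores[lo:hi]))
    return smoothed

if __name__ == "__main__":
    # A tubelet whose middle frames were scored poorly by the per-frame detector.
    raw = [0.9, 0.85, 0.2, 0.15, 0.8, 0.9]
    print(rescore_tubelet(raw, window=3))

In the toy run above, the two weak middle frames are pulled up by their confident neighbors, which is the kind of temporal consistency a tubelet-level treatment provides over purely per-frame detection.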
Author Wang, Zhe
Xiao, Tong
Zhang, Cong
Ouyang, Wanli
Zeng, Xingyu
Kang, Kai
Yang, Bin
Wang, Xiaogang
Wang, Ruohui
Yan, Junjie
Li, Hongsheng
Author_xml – sequence: 1
  givenname: Kai
  orcidid: 0000-0002-6707-4616
  surname: Kang
  fullname: Kang, Kai
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 2
  givenname: Hongsheng
  surname: Li
  fullname: Li, Hongsheng
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 3
  givenname: Junjie
  surname: Yan
  fullname: Yan, Junjie
  organization: SenseTime Group Ltd., Beijing, China
– sequence: 4
  givenname: Xingyu
  surname: Zeng
  fullname: Zeng, Xingyu
  organization: SenseTime Group Ltd., Beijing, China
– sequence: 5
  givenname: Bin
  surname: Yang
  fullname: Yang, Bin
  organization: Computer Science Department, University of Toronto, Toronto, ON, Canada
– sequence: 6
  givenname: Tong
  surname: Xiao
  fullname: Xiao, Tong
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 7
  givenname: Cong
  surname: Zhang
  fullname: Zhang, Cong
  organization: Shanghai Jiao Tong University, Shanghai, China
– sequence: 8
  givenname: Zhe
  surname: Wang
  fullname: Wang, Zhe
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 9
  givenname: Ruohui
  surname: Wang
  fullname: Wang, Ruohui
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 10
  givenname: Xiaogang
  surname: Wang
  fullname: Wang, Xiaogang
  organization: The Chinese University of Hong Kong, Hong Kong
– sequence: 11
  givenname: Wanli
  orcidid: 0000-0002-9163-2761
  surname: Ouyang
  fullname: Ouyang, Wanli
  email: wlouyang@ee.cuhk.edu.hk
  organization: The Chinese University of Hong Kong, Hong Kong
CODEN ITCTEM
CitedBy_id crossref_primary_10_1016_j_eswa_2023_122240
crossref_primary_10_31466_kfbd_734393
crossref_primary_10_1631_FITEE_2100366
crossref_primary_10_1007_s11554_024_01490_0
crossref_primary_10_3390_s18030774
crossref_primary_10_1109_TCSVT_2021_3082763
crossref_primary_10_1109_TCSVT_2023_3272891
crossref_primary_10_2478_amns_2025_0600
crossref_primary_10_1109_TCSVT_2022_3183646
crossref_primary_10_1007_s00500_020_04989_3
crossref_primary_10_32604_cmc_2021_017011
crossref_primary_10_1109_TCSVT_2019_2903421
crossref_primary_10_1016_j_neucom_2024_127973
crossref_primary_10_1109_JIOT_2024_3365957
crossref_primary_10_1109_TCSVT_2021_3076523
crossref_primary_10_1080_1206212X_2018_1525929
crossref_primary_10_1155_2021_5410049
crossref_primary_10_1109_TGRS_2021_3122515
crossref_primary_10_1109_JSTARS_2021_3062176
crossref_primary_10_3390_drones8040144
crossref_primary_10_3390_electronics11213425
crossref_primary_10_1109_TCSVT_2021_3100842
crossref_primary_10_1109_TMM_2022_3164253
crossref_primary_10_3390_drones5030066
crossref_primary_10_1007_s10489_022_03529_w
crossref_primary_10_1049_ipr2_12615
crossref_primary_10_2139_ssrn_4001358
crossref_primary_10_3390_urbansci7020065
crossref_primary_10_2139_ssrn_4001359
crossref_primary_10_1142_S1793962321500318
crossref_primary_10_1007_s13735_025_00355_x
crossref_primary_10_1016_j_measurement_2024_115779
crossref_primary_10_1109_TIM_2023_3334348
crossref_primary_10_1016_j_imavis_2021_104238
crossref_primary_10_1109_TMM_2023_3292615
crossref_primary_10_1007_s11220_022_00399_x
crossref_primary_10_1016_j_displa_2022_102230
crossref_primary_10_1109_ACCESS_2021_3138980
crossref_primary_10_1145_3606948
crossref_primary_10_1088_1742_6596_1659_1_012051
crossref_primary_10_1117_1_JEI_29_3_033015
crossref_primary_10_1145_3564663
crossref_primary_10_3390_app11104561
crossref_primary_10_1109_TAI_2024_3454566
crossref_primary_10_1109_TCSVT_2021_3094533
crossref_primary_10_1109_TCSII_2023_3241163
crossref_primary_10_1109_TCYB_2021_3114031
crossref_primary_10_1109_ACCESS_2024_3425166
crossref_primary_10_1109_TITS_2022_3176721
crossref_primary_10_1109_TVT_2020_2993863
crossref_primary_10_1016_j_image_2024_117224
crossref_primary_10_1109_TIP_2024_3364536
crossref_primary_10_3390_app122412896
crossref_primary_10_1109_TCSVT_2024_3412093
crossref_primary_10_1109_TPAMI_2021_3137605
crossref_primary_10_1186_s13634_023_01045_8
crossref_primary_10_3390_app131910578
crossref_primary_10_3390_mi13010072
crossref_primary_10_1109_TCSVT_2021_3066241
crossref_primary_10_1109_TNNLS_2021_3053249
crossref_primary_10_1155_2022_4000171
crossref_primary_10_1007_s42044_025_00242_y
crossref_primary_10_1016_j_compenvurbsys_2021_101754
crossref_primary_10_1109_TGRS_2020_2978512
crossref_primary_10_1109_ACCESS_2020_3006191
crossref_primary_10_1109_TMM_2022_3150169
crossref_primary_10_26599_BDMA_2024_9020049
crossref_primary_10_1109_ACCESS_2022_3184031
crossref_primary_10_1109_TCSVT_2024_3452497
crossref_primary_10_3390_sym16030299
crossref_primary_10_1109_TCSVT_2024_3432900
crossref_primary_10_1142_S1793962323500289
crossref_primary_10_1016_j_eswa_2020_114544
crossref_primary_10_1007_s00521_022_07368_1
crossref_primary_10_3390_s22103703
crossref_primary_10_3934_mbe_2023282
crossref_primary_10_1016_j_compag_2018_09_030
crossref_primary_10_1016_j_engappai_2024_109313
crossref_primary_10_1007_s42452_019_1393_4
crossref_primary_10_1109_TPAMI_2022_3223955
crossref_primary_10_3390_app10217834
crossref_primary_10_1007_s13735_022_00263_4
crossref_primary_10_1063_5_0040424
crossref_primary_10_1016_j_bspc_2024_107206
crossref_primary_10_1007_s12652_021_03309_3
crossref_primary_10_1016_j_knosys_2025_113237
crossref_primary_10_1016_j_isprsjprs_2021_04_004
crossref_primary_10_1016_j_promfg_2020_01_289
crossref_primary_10_1061__ASCE_CP_1943_5487_0000975
crossref_primary_10_1109_ACCESS_2020_3017411
crossref_primary_10_1109_JPROC_2023_3238524
crossref_primary_10_3390_s20030578
crossref_primary_10_1007_s10055_023_00853_5
crossref_primary_10_1007_s11045_021_00764_1
crossref_primary_10_1007_s11042_020_09827_0
crossref_primary_10_1109_TITS_2024_3491784
crossref_primary_10_3390_electronics11132093
crossref_primary_10_3390_machines13020162
crossref_primary_10_3390_en17205177
crossref_primary_10_1016_j_patcog_2022_108847
crossref_primary_10_1016_j_displa_2023_102448
crossref_primary_10_1109_TGRS_2023_3278075
crossref_primary_10_1109_TGRS_2025_3534524
crossref_primary_10_3390_math10214125
crossref_primary_10_1109_ACCESS_2019_2946861
crossref_primary_10_1145_3632181
crossref_primary_10_1007_s11263_024_02201_9
crossref_primary_10_1016_j_engappai_2024_109754
crossref_primary_10_1016_j_jvcir_2023_103823
crossref_primary_10_1109_ACCESS_2025_3544515
crossref_primary_10_1016_j_imavis_2020_103929
crossref_primary_10_1016_j_neunet_2023_11_041
crossref_primary_10_1109_TCE_2023_3325480
crossref_primary_10_3390_s19091987
crossref_primary_10_3390_s23041890
crossref_primary_10_3390_rs14194833
crossref_primary_10_1088_1757_899X_711_1_012095
crossref_primary_10_1007_s00521_023_08956_5
crossref_primary_10_1016_j_future_2019_05_007
crossref_primary_10_1109_ACCESS_2022_3207282
crossref_primary_10_1061_JTEPBS_TEENG_7130
crossref_primary_10_1109_LRA_2018_2792152
crossref_primary_10_1109_TPAMI_2021_3119563
crossref_primary_10_4018_IJICTRAME_2019070102
crossref_primary_10_32604_cmc_2022_021629
crossref_primary_10_1016_j_patcog_2021_107929
crossref_primary_10_1109_TMM_2020_2990070
crossref_primary_10_1109_TCSVT_2018_2882061
crossref_primary_10_1177_14759217211010422
crossref_primary_10_3390_s21155116
crossref_primary_10_1016_j_imavis_2020_103910
crossref_primary_10_1109_TIP_2021_3099409
crossref_primary_10_1016_j_displa_2021_102020
crossref_primary_10_1109_ACCESS_2023_3328341
crossref_primary_10_3390_electronics12163421
crossref_primary_10_1016_j_ijleo_2021_168002
crossref_primary_10_1109_TCYB_2019_2894261
crossref_primary_10_1109_TCSVT_2020_2965966
crossref_primary_10_1007_s11263_021_01507_2
crossref_primary_10_1016_j_eswa_2024_124201
crossref_primary_10_1007_s11042_020_08976_6
crossref_primary_10_1109_TCSVT_2024_3350913
crossref_primary_10_1007_s10845_021_01815_x
crossref_primary_10_1016_j_jfranklin_2019_11_074
crossref_primary_10_1016_j_aei_2021_101448
crossref_primary_10_1016_j_eswa_2023_122507
crossref_primary_10_1007_s11042_024_19856_8
crossref_primary_10_3390_s24092795
crossref_primary_10_1109_ACCESS_2023_3323588
crossref_primary_10_3390_s22186857
crossref_primary_10_1061__ASCE_CP_1943_5487_0000930
crossref_primary_10_1007_s11554_021_01121_y
crossref_primary_10_3390_app122312314
crossref_primary_10_1016_j_neucom_2020_03_110
crossref_primary_10_1631_FITEE_2000567
crossref_primary_10_1016_j_neucom_2022_09_007
crossref_primary_10_1007_s10489_021_02838_w
crossref_primary_10_1109_LSP_2023_3329419
crossref_primary_10_1007_s11042_022_13801_3
crossref_primary_10_1109_TPAMI_2019_2910529
crossref_primary_10_1109_ACCESS_2021_3120261
crossref_primary_10_17341_gazimmfd_541677
crossref_primary_10_1007_s11042_020_08977_5
crossref_primary_10_1002_stc_2857
crossref_primary_10_1109_TIM_2019_2959292
crossref_primary_10_1016_j_micpro_2020_103339
crossref_primary_10_1016_j_neucom_2024_127809
crossref_primary_10_1109_ACCESS_2024_3489714
crossref_primary_10_3390_electronics13010230
crossref_primary_10_1016_j_imavis_2019_10_007
crossref_primary_10_1109_TCSVT_2024_3421988
crossref_primary_10_1080_03772063_2020_1729258
crossref_primary_10_1007_s11042_023_17949_4
crossref_primary_10_1016_j_jiixd_2024_08_002
crossref_primary_10_2139_ssrn_4015043
crossref_primary_10_1109_MGRS_2021_3115137
crossref_primary_10_3390_electronics13204097
crossref_primary_10_1007_s00138_023_01504_0
crossref_primary_10_1016_j_patcog_2022_108544
crossref_primary_10_3390_app11031096
crossref_primary_10_1088_1742_6596_1827_1_012178
crossref_primary_10_1109_TCSVT_2024_3464631
crossref_primary_10_1007_s11554_022_01253_9
crossref_primary_10_1109_JBHI_2021_3084962
crossref_primary_10_3390_jmse12040643
crossref_primary_10_1109_TNNLS_2020_3043099
crossref_primary_10_1109_TCSVT_2018_2872575
crossref_primary_10_1007_s11263_021_01569_2
crossref_primary_10_1109_ACCESS_2022_3203399
crossref_primary_10_1007_s11760_025_03837_x
crossref_primary_10_1109_TMM_2023_3241548
crossref_primary_10_1042_BST20191048
crossref_primary_10_1145_3463530
crossref_primary_10_1007_s10489_022_03463_x
crossref_primary_10_1109_JSTARS_2024_3359252
crossref_primary_10_1109_TPAMI_2019_2957464
crossref_primary_10_1109_TCAD_2020_2966451
crossref_primary_10_1002_cpe_6517
crossref_primary_10_1109_TCSVT_2020_2980876
crossref_primary_10_1016_j_cviu_2021_103188
crossref_primary_10_1016_j_engappai_2024_109609
crossref_primary_10_1109_ACCESS_2019_2939201
crossref_primary_10_2139_ssrn_4196888
crossref_primary_10_1109_TPAMI_2024_3449994
crossref_primary_10_1016_j_neucom_2020_05_027
crossref_primary_10_1109_ACCESS_2020_3004992
crossref_primary_10_1109_ACCESS_2022_3165835
crossref_primary_10_1016_j_jksuci_2019_09_012
crossref_primary_10_1007_s11042_022_12715_4
crossref_primary_10_1088_1742_6596_1682_1_012012
crossref_primary_10_1007_s12652_019_01575_w
crossref_primary_10_1109_TCSVT_2023_3238818
Cites_doi 10.1007/s11263-013-0620-5
10.1109/ICCV.2015.135
10.1007/978-3-319-46448-0_2
10.1109/CVPR.2015.7298594
10.1109/CVPR.2014.170
10.1109/ICCV.2015.169
10.1109/TCSVT.2008.928221
10.1109/ICCV.2013.223
10.1109/TCSVT.2016.2589879
10.1109/TCSVT.2007.903781
10.1007/978-3-642-15561-1_33
10.1109/CVPR.2015.7298675
10.1007/978-3-319-46478-7_22
10.1109/CVPR.2015.7298632
10.1109/CVPR.2015.7299146
10.1109/CVPR.2014.81
10.1109/CVPR.2017.101
10.1109/CVPR.2014.276
10.1109/CVPR.2016.90
10.1109/ICCV.2015.232
10.1109/CVPR.2016.95
10.1109/TCSVT.2005.844447
10.1109/ICCV.2015.357
10.1007/978-3-319-10578-9_48
10.1109/TCSVT.2009.2020252
10.1109/TIP.2017.2651367
10.1109/CVPR.2015.7298854
10.1109/CVPR.2016.650
10.1109/CVPR.2015.7298965
10.1109/CVPR.2012.6248065
10.1007/978-3-319-10602-1_26
10.1109/ICCV.2015.363
10.1109/CVPR.2016.235
10.1109/CVPR.2015.7298641
10.1109/CVPR.2009.5206848
10.1109/CVPR.2016.91
10.1007/978-3-319-10599-4_17
10.1109/CVPR.2013.253
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2018
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
DOI 10.1109/TCSVT.2017.2736553
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998-Present
IEEE Xplore Digital Library
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Xplore Digital Library
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1558-2205
EndPage 2907
ExternalDocumentID 10_1109_TCSVT_2017_2736553
8003302
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 61371192
  funderid: 10.13039/501100001809
– fundername: Office of Naval Research
  grantid: N00014-15-1-2356
  funderid: 10.13039/100000006
– fundername: Hong Kong Innovation and Technology Support Programme
  grantid: ITS/121/15FX
– fundername: SenseTime Group Ltd.
– fundername: China Postdoctoral Science Foundation
  grantid: 2014M552339
  funderid: 10.13039/501100002858
– fundername: General Research Fund through the Research Grants Council of Hong Kong
  grantid: CUHK14213616; CUHK14206114; CUHK14205615; CUHK419412; CUHK14203015; CUHK14239816; CUHK14207814
ISSN 1051-8215
IngestDate Mon Jun 30 04:34:17 EDT 2025
Thu Apr 24 23:07:11 EDT 2025
Tue Jul 01 00:41:10 EDT 2025
Wed Aug 27 02:52:23 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 10
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0002-6707-4616
0000-0002-9163-2761
PQID 2126463236
PQPubID 85433
PageCount 12
ParticipantIDs crossref_citationtrail_10_1109_TCSVT_2017_2736553
proquest_journals_2126463236
ieee_primary_8003302
crossref_primary_10_1109_TCSVT_2017_2736553
PublicationCentury 2000
PublicationDate 2018-10-01
PublicationDateYYYYMMDD 2018-10-01
PublicationDate_xml – month: 10
  year: 2018
  text: 2018-10-01
  day: 01
PublicationDecade 2010
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2018
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref12
ref15
ref14
ref11
li (ref41) 2017
krizhevsky (ref30) 2012
nam (ref43) 2015
ref17
ref19
ref18
ref51
ref50
ref46
ref45
ref48
ref42
ref44
sermanet (ref10) 2013
bell (ref16) 2015
han (ref20) 2016
ref8
ref9
ref4
ref3
ref6
ref40
ref35
ref34
ref37
ref36
ref31
ren (ref5) 2015
ref33
simonyan (ref2) 2015
ref1
ref39
ref38
ioffe (ref7) 2015
ref24
maxime (ref27) 2015
ref23
ref26
zeng (ref47) 2015
ref25
ref22
ref21
ref28
simonyan (ref32) 2014
ref29
szegedy (ref49) 2016
References_xml – ident: ref45
  doi: 10.1007/s11263-013-0620-5
– ident: ref13
  doi: 10.1109/ICCV.2015.135
– ident: ref19
  doi: 10.1007/978-3-319-46448-0_2
– ident: ref1
  doi: 10.1109/CVPR.2015.7298594
– year: 2015
  ident: ref47
  publication-title: Window-object relationship guided representation learning for generic object detections
– ident: ref34
  doi: 10.1109/CVPR.2014.170
– year: 2015
  ident: ref16
  publication-title: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks
– ident: ref4
  doi: 10.1109/ICCV.2015.169
– ident: ref38
  doi: 10.1109/TCSVT.2008.928221
– ident: ref24
  doi: 10.1109/ICCV.2013.223
– year: 2013
  ident: ref10
  publication-title: OverFeat: Integrated Recognition, Localization and Detection Using Convolutional Networks
– ident: ref31
  doi: 10.1109/TCSVT.2016.2589879
– ident: ref37
  doi: 10.1109/TCSVT.2007.903781
– ident: ref50
  doi: 10.1007/978-3-642-15561-1_33
– ident: ref36
  doi: 10.1109/CVPR.2015.7298675
– ident: ref48
  doi: 10.1007/978-3-319-46478-7_22
– ident: ref35
  doi: 10.1109/CVPR.2015.7298632
– ident: ref17
  doi: 10.1109/CVPR.2015.7299146
– ident: ref3
  doi: 10.1109/CVPR.2014.81
– ident: ref22
  doi: 10.1109/CVPR.2017.101
– ident: ref11
  doi: 10.1109/CVPR.2014.276
– ident: ref6
  doi: 10.1109/CVPR.2016.90
– ident: ref15
  doi: 10.1109/ICCV.2015.232
– ident: ref51
  doi: 10.1109/CVPR.2016.95
– ident: ref39
  doi: 10.1109/TCSVT.2005.844447
– start-page: 568
  year: 2014
  ident: ref32
  article-title: Two-stream convolutional networks for action recognition in videos
  publication-title: Proc Conf Neural Inf Process Syst
– ident: ref42
  doi: 10.1109/ICCV.2015.357
– ident: ref9
  doi: 10.1007/978-3-319-10578-9_48
– year: 2015
  ident: ref7
  publication-title: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
– year: 2015
  ident: ref43
  publication-title: Learning multi-domain convolutional neural networks for visual tracking
– ident: ref40
  doi: 10.1109/TCSVT.2009.2020252
– ident: ref21
  doi: 10.1109/TIP.2017.2651367
– ident: ref8
  doi: 10.1109/CVPR.2015.7298854
– year: 2016
  ident: ref49
  publication-title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
– ident: ref44
  doi: 10.1109/CVPR.2016.650
– ident: ref33
  doi: 10.1109/CVPR.2015.7298965
– start-page: 685
  year: 2015
  ident: ref27
  article-title: Is object localization for free?-Weakly-supervised learning with convolutional neural networks
  publication-title: Proc Comput Vis Pattern Recognit
– ident: ref23
  doi: 10.1109/CVPR.2012.6248065
– ident: ref46
  doi: 10.1007/978-3-319-10602-1_26
– year: 2015
  ident: ref2
  article-title: Very deep convolutional networks for large-scale image recognition
  publication-title: Proc Int Conf Learn Represent
– ident: ref26
  doi: 10.1109/ICCV.2015.363
– ident: ref18
  doi: 10.1109/CVPR.2016.235
– year: 2016
  ident: ref20
  publication-title: Seq-NMS for Video Object Detection
– ident: ref14
  doi: 10.1109/CVPR.2015.7298641
– ident: ref29
  doi: 10.1109/CVPR.2009.5206848
– start-page: 1097
  year: 2012
  ident: ref30
  article-title: ImageNet classification with deep convolutional neural networks
  publication-title: Proc NIPS
– ident: ref12
  doi: 10.1109/CVPR.2016.91
– ident: ref25
  doi: 10.1007/978-3-319-10599-4_17
– start-page: 91
  year: 2015
  ident: ref5
  article-title: Faster R-CNN: Towards real-time object detection with region proposal networks
  publication-title: Proc NIPS
– start-page: 4126
  year: 2017
  ident: ref41
  article-title: Learning patch-based dynamic graph for visual tracking
  publication-title: Proc AAAI
– ident: ref28
  doi: 10.1109/CVPR.2013.253
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 2896
SubjectTerms Artificial neural networks
computer vision
Convolutional codes
Image detection
Machine learning
Neural networks
Object detection
Object recognition
Proposals
State of the art
Target tracking
Training
Trucks
Videos
Title T-CNN: Tubelets With Convolutional Neural Networks for Object Detection From Videos
URI https://ieeexplore.ieee.org/document/8003302
https://www.proquest.com/docview/2126463236
Volume 28