New Generation Deep Learning for Video Object Detection: A Survey
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 8, pp. 3195-3215 |
Main Authors | Jiao, Licheng; Zhang, Ruohan; Liu, Fang; Yang, Shuyuan; Hou, Biao; Li, Lingling; Tang, Xu |
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2022 |
Subjects | |
Abstract | Video object detection, a basic task in the computer vision field, is rapidly evolving and widely used. In recent years, deep learning methods have rapidly become widespread in the field of video object detection, achieving excellent results compared with those of traditional methods. However, the presence of duplicate information and abundant spatiotemporal information in video data poses a serious challenge to video object detection. Therefore, in recent years, many scholars have investigated deep learning detection algorithms in the context of video data and have achieved remarkable results. Considering the wide range of applications, a comprehensive review of the research related to video object detection is both a necessary and challenging task. This survey attempts to link and systematize the latest cutting-edge research on video object detection with the goal of classifying and analyzing video detection algorithms based on specific representative models. The differences and connections between video object detection and similar tasks are systematically demonstrated, and the evaluation metrics and video detection performance of nearly 40 models on two data sets are presented. Finally, the various applications and challenges facing video object detection are discussed. |
Author | Tang, Xu; Jiao, Licheng; Hou, Biao; Yang, Shuyuan; Zhang, Ruohan; Liu, Fang; Li, Lingling |
Author_xml |
– sequence: 1 givenname: Licheng orcidid: 0000-0003-3354-9617 surname: Jiao fullname: Jiao, Licheng email: lchjiao@mail.xidian.edu.cn organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 2 givenname: Ruohan orcidid: 0000-0002-7597-7700 surname: Zhang fullname: Zhang, Ruohan email: ruohan950427@gmail.com organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 3 givenname: Fang orcidid: 0000-0002-5669-9354 surname: Liu fullname: Liu, Fang organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 4 givenname: Shuyuan orcidid: 0000-0002-4796-5737 surname: Yang fullname: Yang, Shuyuan organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 5 givenname: Biao surname: Hou fullname: Hou, Biao organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 6 givenname: Lingling orcidid: 0000-0002-6130-2518 surname: Li fullname: Li, Lingling organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China
– sequence: 7 givenname: Xu surname: Tang fullname: Tang, Xu organization: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33534715 (view this record in MEDLINE/PubMed) |
CODEN | ITNNAL |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TNNLS.2021.3053249 |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 3215 |
ExternalDocumentID | 33534715 10_1109_TNNLS_2021_3053249 9345705 |
Genre | orig-research Journal Article |
GrantInformation_xml |
– fundername: Key Research and Development Program in Shaanxi Province of China grantid: 2019ZDLGY03-06 funderid: 10.13039/501100015401
– fundername: CAAI-Huawei MindSpore Open Fund
– fundername: National Science Basic Research Plan in Shaanxi Province of China grantid: 2019JQ-659
– fundername: Foundation for Innovative Research Groups of the National Natural Science Foundation of China grantid: 61621005 funderid: 10.13039/501100001809
– fundername: National Natural Science Foundation of China grantid: U1701267; 62006177; 61871310; 61902298; 61573267; 61906150 funderid: 10.13039/501100001809
– fundername: Fund for Foreign Scholars in University Research and Teaching Program’s 111 Project grantid: B07048
– fundername: ST Innovation Project from the Chinese Ministry of Education
– fundername: State Key Program of National Natural Science of China grantid: 61836009 funderid: 10.13039/501100001809
– fundername: Major Research Plan of the National Natural Science Foundation of China grantid: 91438201; 91438103; 61801124 funderid: 10.13039/501100001809 |
ISSN | 2162-237X 2162-2388 |
Issue | 8 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-3354-9617 0000-0002-5669-9354 0000-0002-7597-7700 0000-0002-4796-5737 0000-0002-6130-2518 |
PMID | 33534715 |
PQID | 2697569432 |
PQPubID | 85436 |
PageCount | 21 |
PublicationCentury | 2000 |
PublicationDate | 2022-08-01 |
PublicationDecade | 2020 |
PublicationPlace | Piscataway, United States |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2022 |
Publisher | IEEE; The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 3195 |
SubjectTerms | Algorithms; Computer vision; Convolution; Deep learning; Detection algorithms; Feature extraction; Learning; Learning systems; Machine learning; Meteorological satellites; neural networks; Object detection; Object recognition; pipeline processing; Surveys; Task analysis; Telematics; Video data; video signal processing |
Title | New Generation Deep Learning for Video Object Detection: A Survey |
URI | https://ieeexplore.ieee.org/document/9345705 https://www.ncbi.nlm.nih.gov/pubmed/33534715 https://www.proquest.com/docview/2697569432 https://www.proquest.com/docview/2486463444 |
Volume | 33 |