Multi-Agent Deep Reinforcement Learning for Multi-Object Tracker
Published in | IEEE Access, Vol. 7, pp. 32400 - 32407 |
Main Authors | Jiang, Mingxin; Hai, Tao; Pan, Zhigeng; Wang, Haiyan; Jia, Yinjie; Deng, Chao |
Format | Journal Article |
Language | English |
Published | Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019 |
Subjects | Computer vision; Deep learning; Distance learning; IQL; MADRL; Multi-object tracking; Multiagent systems; Multiple target tracking; Neural networks; Object detection; Object recognition; Real-time systems; Reinforcement learning; Target tracking; Training; YOLO V3 |
Online Access | https://ieeexplore.ieee.org/document/8653482 |
ISSN | 2169-3536 |
DOI | 10.1109/ACCESS.2019.2901300 |
Abstract | Multi-object tracking has been a key research subject in many computer vision applications. We propose a novel approach based on multi-agent deep reinforcement learning (MADRL) for multi-object tracking, addressing problems in existing tracking methods such as handling a varying number of targets, non-causal processing, and the lack of real-time operation. First, we use YOLO V3 to detect the objects in each frame. Unsuitable candidates are screened out, and the remaining detections are regarded as multiple agents that form a multi-agent system. Independent Q-Learners (IQL) are used to learn the agents' policies, in which each agent treats the other agents as part of the environment. We then conduct offline learning during training and online learning during tracking. Our experiments demonstrate that the MADRL-based tracker achieves better precision, accuracy, and robustness than other state-of-the-art methods. |
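A minimal sketch of the pipeline the abstract describes: per-frame detection screening followed by a tabular Independent Q-Learning update, one learner per surviving detection. This is not the authors' implementation; the confidence threshold, the discrete state/action encoding, and the reward are hypothetical choices introduced here only for illustration.

```python
import numpy as np
from collections import defaultdict

CONF_THRESH = 0.5  # assumed screening threshold for YOLO V3 detections


def screen_detections(detections, conf_thresh=CONF_THRESH):
    """Drop low-confidence detections; the survivors become tracking agents.

    `detections` is a list of (box, confidence) pairs with box = (x, y, w, h).
    """
    return [(box, conf) for box, conf in detections if conf >= conf_thresh]


class IndependentQAgent:
    """One agent per tracked object; the other agents are simply part of its environment."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: np.zeros(n_actions))  # tabular Q(state, action); states must be hashable
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # epsilon-greedy choice over a discrete action set
        # (e.g. "associate with detection k", "keep previous box", "terminate track")
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # one-step Q-learning update; each agent learns independently,
        # so non-stationarity caused by the other agents is ignored by design
        td_target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

In this toy version each detection that survives `screen_detections` would own one `IndependentQAgent`; its state would be a discretized summary of the detection relative to the existing tracks, the reward could be an IoU-style overlap score against annotated boxes during offline training, and the same `update` call would be reused online while tracking, mirroring the offline/online split described in the abstract.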
Author | Jia, Yinjie; Pan, Zhigeng; Wang, Haiyan; Deng, Chao; Jiang, Mingxin; Hai, Tao |
Author_xml | – sequence: 1 givenname: Mingxin orcidid: 0000-0003-0766-5841 surname: Jiang fullname: Jiang, Mingxin email: jiangmingxin@126.com organization: Jiangsu Laboratory of Lake Environment Remote Sensing Technologies, Huaiyin Institute of Technology, Huaian, China
– sequence: 2 givenname: Tao surname: Hai fullname: Hai, Tao organization: Computer Science Department, Baoji University of Arts and Sciences, Baoji, China
– sequence: 3 givenname: Zhigeng surname: Pan fullname: Pan, Zhigeng email: zgpan@hznu.edu.cn organization: Digital Media and Interaction Research Center, Hangzhou Normal University, Hangzhou, China
– sequence: 4 givenname: Haiyan surname: Wang fullname: Wang, Haiyan organization: Jiangsu Laboratory of Lake Environment Remote Sensing Technologies, Huaiyin Institute of Technology, Huaian, China
– sequence: 5 givenname: Yinjie surname: Jia fullname: Jia, Yinjie organization: Jiangsu Laboratory of Lake Environment Remote Sensing Technologies, Huaiyin Institute of Technology, Huaian, China
– sequence: 6 givenname: Chao surname: Deng fullname: Deng, Chao organization: School of Physics and Electronic Information Engineering, Henan Polytechnic University, Jiaozuo, China |
CODEN | IAECCG |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DOI | 10.1109/ACCESS.2019.2901300 |
Discipline | Engineering |
EISSN | 2169-3536 |
EndPage | 32407 |
ExternalDocumentID | oai_doaj_org_article_89b4cbbe3f0f42fba9a50ac100bff84b 10_1109_ACCESS_2019_2901300 8653482 |
Genre | orig-research |
GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 61801188 funderid: 10.13039/501100001809
– fundername: 333 high-level talent training project of Jiangsu province grantid: BRA2018333
– fundername: Jiangsu Laboratory of Lake Environment Remote Sensing Technologies grantid: JSLERS-2018-005
– fundername: Major Program of Natural Science Foundation of the Higher Education Institutions of Jiangsu Province grantid: 18KJA520002
– fundername: Natural Science Foundation of Jiangsu Province grantid: BK20171267 funderid: 10.13039/501100004608
– fundername: Six Talent Peaks Project in Jiangsu Province grantid: 2016XYDXXJS-012 funderid: 10.13039/501100010014
– fundername: National Key Research and Development project grantid: 2017YFB1002803
– fundername: 533 Talents Engineering Project in Huaian grantid: HAA201738 |
ISSN | 2169-3536 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/OAPA.html |
ORCID | 0000-0003-0766-5841 |
OpenAccessLink | https://ieeexplore.ieee.org/document/8653482 |
PageCount | 8 |
PublicationCentury | 2000 |
PublicationDate | 2019-01-01 |
PublicationDateYYYYMMDD | 2019-01-01 |
PublicationDecade | 2010 |
PublicationPlace | Piscataway |
PublicationTitle | IEEE Access |
PublicationTitleAbbrev | Access |
PublicationYear | 2019 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 32400 |
SubjectTerms | Computer vision; Deep learning; Distance learning; IQL; MADRL; Multi-object tracking; Multiagent systems; Multiple target tracking; Neural networks; Object detection; Object recognition; Real-time systems; Reinforcement learning; Target tracking; Training; YOLO V3 |
Title | Multi-Agent Deep Reinforcement Learning for Multi-Object Tracker |
URI | https://ieeexplore.ieee.org/document/8653482 https://www.proquest.com/docview/2455638348 https://doaj.org/article/89b4cbbe3f0f42fba9a50ac100bff84b |
Volume | 7 |