Learning Dual Semantic Relations With Graph Attention for Image-Text Matching
Published in: IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), Vol. 31, No. 7, pp. 2866–2879
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.07.2021
Abstract: Image-text matching is a major task in cross-modal information processing, whose main challenge is to learn unified visual and textual representations. Previous methods that perform well on this task focus not only on the alignment between region features in images and the corresponding words in sentences, but also on the alignment between relations of regions and relational words. However, without joint learning of regional and global features, regional features lose contact with the global context, leading to mismatches with non-object words that carry global meaning in some sentences. To alleviate this issue, the relations between regions and the relations between regional and global concepts should be enhanced, yielding a more accurate visual representation that correlates better with the corresponding text. This work therefore proposes a novel multi-level semantic relations enhancement approach named Dual Semantic Relations Attention Network (DSRAN), which consists of two modules: a separate semantic relations module and a joint semantic relations module. DSRAN performs graph attention in the two modules for region-level relations enhancement and regional-global relations enhancement, respectively. With these two modules, different hierarchies of semantic relations are learned simultaneously, providing more information for the final visual representation and thus promoting the image-text matching process. Quantitative experiments on MS-COCO and Flickr30K show that the method outperforms previous approaches by a large margin, owing to the effectiveness of the dual semantic relations learning scheme.
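The mechanism at the core of the abstract is graph attention over detected region features. As a rough illustration only (this is not the authors' DSRAN code; the fully connected region graph, layer sizes, and all variable names are assumptions), a single graph-attention layer in the style of Veličković et al.'s GAT, which this line of work builds on, can be sketched as:

```python
# Illustrative sketch only: one graph-attention layer over region features,
# treating the N regions as a fully connected graph. Shapes and names are
# assumptions, not DSRAN's actual implementation.
import torch
import torch.nn.functional as F

def graph_attention(h, W, a):
    """h: (N, D) region features; W: (D, D_out) projection; a: (2*D_out,) attention vector."""
    z = h @ W                                            # project each region: (N, D_out)
    n = z.size(0)
    # score every ordered pair (i, j) from the concatenation [z_i || z_j]
    pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                       z.unsqueeze(0).expand(n, n, -1)], dim=-1)   # (N, N, 2*D_out)
    e = F.leaky_relu(pairs @ a, negative_slope=0.2)      # raw attention scores: (N, N)
    alpha = torch.softmax(e, dim=-1)                     # normalize over neighbors j
    return alpha @ z                                     # relation-enhanced features: (N, D_out)

# Hypothetical usage: 36 detected regions with 2048-d features.
regions = torch.randn(36, 2048)
W = torch.randn(2048, 512) * 0.02
a = torch.randn(1024) * 0.02
enhanced = graph_attention(regions, W, a)                # (36, 512)
```

In the paper's terms, the separate module would apply this kind of attention among regions, and the joint module between regional and global features; the sketch above conveys only the basic operation.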
Authors:
1. Keyu Wen, Department of Electronic Engineering, Fudan University, Shanghai, China (ORCID: 0000-0002-5048-9014)
2. Xiaodong Gu, Department of Electronic Engineering, Fudan University, Shanghai, China (ORCID: 0000-0002-7096-1830; xdgu@fudan.edu.cn)
3. Qingrong Cheng, Department of Electronic Engineering, Fudan University, Shanghai, China (ORCID: 0000-0001-6631-1504)
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2021
DOI: 10.1109/TCSVT.2020.3030656
Discipline: Engineering
EISSN: 1558-2205
Funding: National Natural Science Foundation of China (Grants 61771145 and 61371148)
ISSN: 1051-8215
Peer reviewed: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
Subjects: Alignment; Automobiles; Birds; Cross-modal retrieval; Data processing; Feature extraction; Graph attention; Hierarchies; Image retrieval; Image-text matching; Learning; Modules; Representations; Semantic relations; Semantics; Sentences; Task analysis; Visualization; Words (language)
Online access: https://ieeexplore.ieee.org/document/9222079; https://www.proquest.com/docview/2547642104