Configurable Graph Reasoning for Visual Relationship Detection
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 1, pp. 117-129 |
Main Authors | Zhu, Yi; Liang, Xiwen; Lin, Bingqian; Ye, Qixiang; Jiao, Jianbin; Lin, Liang; Liang, Xiaodan |
Format | Journal Article |
Language | English |
Published | United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2022 |
Subjects | Visual relationship detection (VRD); Scene graph generation; Graph learning; Knowledge representation; Visual reasoning |
Abstract | Visual commonsense knowledge has received growing attention in the reasoning of long-tailed visual relationships biased in terms of object and relation labels. Most current methods typically collect and utilize external knowledge for visual relationships by following the fixed reasoning path of {subject, object → predicate} to facilitate the recognition of infrequent relationships. However, knowledge incorporation along such a fixed, multidependent path suffers from the dataset bias and the exponentially growing combinations of object and relation labels, and it ignores the semantic gap between commonsense knowledge and real scenes. To alleviate this, we propose configurable graph reasoning (CGR) to decompose the reasoning path of visual relationships and the incorporation of external knowledge, achieving configurable knowledge selection and personalized graph reasoning for each relation type in each image. Given a commonsense knowledge graph, CGR learns to match and retrieve knowledge for different subpaths and selectively compose the knowledge-routed path. CGR adaptively configures the reasoning path based on the knowledge graph, bridges the semantic gap between commonsense knowledge and real-world scenes, and achieves better knowledge generalization. Extensive experiments show that CGR consistently outperforms previous state-of-the-art methods on several popular benchmarks and works well with different knowledge graphs. Detailed analyses demonstrate that CGR learns explainable and compelling configurations of reasoning paths. |
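The abstract describes scoring candidate knowledge subpaths retrieved from a commonsense graph and selectively composing ("configuring") which ones are routed into the reasoning step for a given subject-object pair. Below is a minimal, hypothetical sketch of that idea only, not the authors' released code: the names, feature dimensions, and the use of a Gumbel-softmax-style relaxation for the discrete selection (the record's references include Gumbel, Jang, and Maddison) are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's implementation) of configurable selection
# among commonsense-knowledge subpaths for one subject-object pair.
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Soft, differentiable relaxation of discrete subpath selection (assumed)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

def configure_reasoning_path(pair_feat, subpath_feats, tau=0.5):
    """Match a (subject, object) pair feature against candidate knowledge
    subpaths and compose the routed knowledge feature from the selection."""
    logits = subpath_feats @ pair_feat        # relevance score per subpath
    weights = gumbel_softmax(logits, tau)     # configurable (soft) selection
    routed = weights @ subpath_feats          # knowledge-routed composition
    return weights, routed

# Toy usage: 5 candidate subpaths from a commonsense graph, 16-d features.
pair_feat = rng.normal(size=16)
subpath_feats = rng.normal(size=(5, 16))
weights, routed = configure_reasoning_path(pair_feat, subpath_feats)
print(np.round(weights, 3), routed.shape)
```

In the paper's setting the selection would be learned per relation type and per image; the sketch above only shows the scoring-selection-composition pattern the abstract outlines.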
Author | Jiao, Jianbin; Zhu, Yi; Lin, Liang; Ye, Qixiang; Liang, Xiaodan; Liang, Xiwen; Lin, Bingqian |
Author_xml |
– sequence 1: Zhu, Yi (ORCID 0000-0002-5087-895X), zhuyi215@mails.ucas.ac.cn, University of Chinese Academy of Sciences (UCAS), Beijing, China
– sequence 2: Liang, Xiwen (ORCID 0000-0002-2484-6962), liangcici5@gmail.com, Sun Yat-sen University, Guangzhou, China
– sequence 3: Lin, Bingqian (ORCID 0000-0002-8763-9530), bingqianlin@126.com, Sun Yat-sen University, Guangzhou, China
– sequence 4: Ye, Qixiang (ORCID 0000-0003-1215-6259), qxye@ucas.ac.cn, University of Chinese Academy of Sciences (UCAS), Beijing, China
– sequence 5: Jiao, Jianbin (ORCID 0000-0003-0454-3929), jiaojb@ucas.ac.cn, University of Chinese Academy of Sciences (UCAS), Beijing, China
– sequence 6: Lin, Liang (ORCID 0000-0003-2248-3755), linliang@ieee.org, Sun Yat-sen University, Guangzhou, China
– sequence 7: Liang, Xiaodan (ORCID 0000-0003-3213-3062), xdliang328@gmail.com, Sun Yat-sen University, Guangzhou, China |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/33119512 (View this record in MEDLINE/PubMed) |
CODEN | ITNNAL |
CitedBy_id | crossref_primary_10_1016_j_neucom_2024_128422 crossref_primary_10_1109_TIP_2022_3199089 crossref_primary_10_1007_s00521_022_06975_2 crossref_primary_10_1016_j_patcog_2023_109634 crossref_primary_10_1109_ACCESS_2023_3239837 crossref_primary_10_1016_j_neucom_2024_127571 crossref_primary_10_1109_TCYB_2023_3339629 crossref_primary_10_1109_TNNLS_2024_3380851 crossref_primary_10_1109_ACCESS_2022_3187263 crossref_primary_10_1016_j_jvcir_2023_103923 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TNNLS.2020.3027575 |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 129 |
ExternalDocumentID | 33119512 10_1109_TNNLS_2020_3027575 9244130 |
Genre | orig-research Research Support, Non-U.S. Gov't Journal Article |
GrantInformation_xml |
– fundername: National Key Research and Development Program of China; grantid: 2018AAA0100300; funderid: 10.13039/501100012166
– fundername: Nature Science Foundation of Shenzhen; grantid: 2019191361
– fundername: Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key); grantid: 2019B1515120039
– fundername: National Natural Science Foundation of China; grantid: 61836012, U19A2073, 61976233; funderid: 10.13039/501100001809
– fundername: Zhijiang Laboratory's Open Fund; grantid: 2020AA3AB14 |
ISSN | 2162-237X (print); 2162-2388 (electronic) |
Issue | 1 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-2248-3755 0000-0002-2484-6962 0000-0002-8763-9530 0000-0002-5087-895X 0000-0003-1215-6259 0000-0003-3213-3062 0000-0003-0454-3929 |
PMID | 33119512 |
PQID | 2616718460 |
PQPubID | 85436 |
PageCount | 13 |
PublicationCentury | 2000
PublicationDate | January 2022 (2022-01-01)
PublicationDecade | 2020
PublicationPlace | United States; Piscataway |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest pubmed crossref ieee |
SourceType | Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 117 |
SubjectTerms | Algorithms; Benchmarks; Cognition; Correlation; Feature extraction; Graph learning; Knowledge; Knowledge engineering; Knowledge representation; Labels; Neural Networks, Computer; Object recognition; Proposals; Reasoning; Recognition, Psychology; scene graph generation; Semantics; visual reasoning; visual relationship detection (VRD); Visualization |
Title | Configurable Graph Reasoning for Visual Relationship Detection |
URI | https://ieeexplore.ieee.org/document/9244130 https://www.ncbi.nlm.nih.gov/pubmed/33119512 https://www.proquest.com/docview/2616718460 https://www.proquest.com/docview/2456409452 |
Volume | 33 |