OASNet: Object Affordance State Recognition Network With Joint Visual Features and Relational Semantic Embeddings

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 5, pp. 3368-3382
Main Authors: Chen, Dongpan; Kong, Dehui; Li, Jinghua; Wang, Lichun; Gao, Junna; Yin, Baocai
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2024

Summary: Traditional affordance learning tasks aim to understand an object's interactive functions in an image, such as affordance recognition and affordance detection. However, these tasks cannot determine whether the object is currently interacting, which is crucial for many follow-up tasks, including robotic manipulation and task planning. To fill this gap, this paper proposes a novel object affordance state (OAS) recognition task, i.e., simultaneously recognizing an object's affordances and the partner objects that are interacting with it. Accordingly, to facilitate the application of deep learning technology, an OAS recognition dataset, OAS10k, is constructed by collecting and labeling over 10k images. In the dataset, a sample is defined as an image together with its OAS labels, where each label is represented as a triplet ⟨subject, subject's affordance, interacted object⟩. These triplet labels carry rich relational semantic information, which can improve OAS recognition performance. We hence construct a directed OAS knowledge graph of affordance states and extract an OAS matrix from it to model the semantic relationships of the triplets. Based on this matrix, we propose an OAS recognition network (OASNet), which utilizes a GCN to capture relational semantic embeddings and a transformer to fuse them with the visual features of an image, thereby recognizing the affordance states of the objects in the image. Experimental results on the OAS10k dataset and other triplet label recognition datasets demonstrate that the proposed OASNet outperforms state-of-the-art methods. The dataset and code will be released at https://github.com/mxmdpc/OAS .
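The authors' released code is at the repository above; as a rough illustration of the pipeline the summary describes (GCN over an OAS matrix for relational semantic embeddings, transformer fusion with visual features, per-state classification), the following PyTorch sketch may help. It is an assumption-based sketch, not the paper's implementation: the embedding sizes, the two-layer GCN depth, the row-normalization of the OAS matrix, and the use of semantic tokens as classification queries are all illustrative choices.

# Minimal sketch (not the authors' code) of a GCN + transformer fusion
# pipeline of the kind the summary describes. All sizes are assumptions.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized OAS matrix, shape (num_states, num_states)
        return torch.relu(a_hat @ self.linear(h))

class OASNetSketch(nn.Module):
    def __init__(self, num_states: int, vis_dim: int = 512, emb_dim: int = 512):
        super().__init__()
        # Learnable initial node features, one per affordance-state triplet.
        self.node_feats = nn.Parameter(torch.randn(num_states, emb_dim))
        self.gcn1 = GCNLayer(emb_dim, emb_dim)
        self.gcn2 = GCNLayer(emb_dim, emb_dim)
        self.vis_proj = nn.Linear(vis_dim, emb_dim)
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=8,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(emb_dim, 1)  # one logit per state

    def forward(self, vis_feats: torch.Tensor, a_hat: torch.Tensor):
        # vis_feats: (batch, num_regions, vis_dim) from a CNN backbone.
        b = vis_feats.size(0)
        # Propagate relational semantics over the OAS graph.
        sem = self.gcn2(self.gcn1(self.node_feats, a_hat), a_hat)
        sem = sem.unsqueeze(0).expand(b, -1, -1)   # (b, num_states, emb)
        # Fuse visual tokens and semantic tokens in one transformer.
        tokens = torch.cat([self.vis_proj(vis_feats), sem], dim=1)
        fused = self.fusion(tokens)
        # Read multi-label logits off the semantic token positions.
        state_tokens = fused[:, -sem.size(1):, :]
        return self.classifier(state_tokens).squeeze(-1)  # (b, num_states)

# Example usage: 100 candidate affordance-state triplets, 49 region tokens.
num_states = 100
a = torch.rand(num_states, num_states)
a_hat = a / a.sum(dim=1, keepdim=True)   # crude row normalization, assumed
model = OASNetSketch(num_states)
logits = model(torch.randn(2, 49, 512), a_hat)   # -> shape (2, 100)

In this sketch the OAS matrix plays the role of the graph adjacency extracted from the knowledge graph, and each semantic token doubles as a classification query for its triplet; the actual OASNet may structure the fusion and the readout differently.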
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3324595