A large scale multi-view RGBD visual affordance learning dataset
Format: Journal Article
Language: English
Published: 26.03.2022
Summary: The physical and textural attributes of objects have been widely
studied for recognition, detection, and segmentation tasks in computer vision.
A number of datasets, such as the large-scale ImageNet, have been proposed for
feature learning with data-hungry deep neural networks and for hand-crafted
feature extraction. To interact intelligently with objects, robots and
intelligent machines need the ability to infer more than the traditional
physical/textural attributes: they must understand and learn visual cues,
called visual affordances, for affordance recognition, detection, and
segmentation. To date, no large publicly available dataset exists for visual
affordance understanding and learning. In this paper, we introduce a
large-scale multi-view RGBD visual affordance learning dataset, a benchmark of
47210 RGBD images from 37 object categories, annotated with 15 visual
affordance categories. To the best of our knowledge, this is the first and
largest multi-view RGBD visual affordance learning dataset. We benchmark the
proposed dataset on affordance segmentation and recognition tasks using
popular Vision Transformer and Convolutional Neural Network architectures,
evaluating several state-of-the-art deep networks on each task. Our
experimental results showcase the challenging nature of the dataset and point
to clear opportunities for new and robust affordance learning algorithms. The
dataset is publicly available at https://sites.google.com/view/afaqshah/dataset.
DOI: 10.48550/arxiv.2203.14092