What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

Bibliographic Details
Published in: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 4604 - 4611
Main Authors: Chengxi Ye, Yezhou Yang, Ren Mao, Cornelia Fermuller, Yiannis Aloimonos
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2017
DOI: 10.1109/ICRA.2017.7989535

More Information
Summary: For robots that have the capability to interact with the physical environment through their end effectors, understanding the surrounding scenes is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localization and recognition of functional areas in an arbitrary indoor scene, formulated as a two-stage deep learning based detection pipeline. A new scene functionality test-bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas from arbitrary indoor scenes. We also demonstrate that our detection model can be generalized to novel indoor scenes by cross-validating it with images from two different datasets.
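The abstract's "two-stage detection pipeline" can be illustrated in outline: a first stage proposes candidate regions of the scene, and a second stage assigns a functional label to each candidate. The sketch below is a toy stand-in, not the authors' trained networks; the function names (`propose_regions`, `classify_region`), the sliding-window proposal scheme, and the intensity-based classifier are all illustrative assumptions.

```python
# Toy sketch of a two-stage functional-area detection pipeline.
# Stage 1 proposes candidate regions; stage 2 labels each region.
# Both stages here are trivial placeholders for what the paper
# implements with deep networks.

def propose_regions(image, step=2, size=2):
    """Stage 1: sliding-window region proposals over a 2-D grid.

    Returns (y, x, height, width) boxes covering the image.
    """
    h, w = len(image), len(image[0])
    return [(y, x, size, size)
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def classify_region(image, box, labels=("sit", "open", "none")):
    """Stage 2: toy classifier that buckets mean intensity into a label."""
    y, x, h, w = box
    vals = [image[i][j] for i in range(y, y + h) for j in range(x, x + w)]
    mean = sum(vals) / len(vals)
    return labels[min(int(mean * len(labels)), len(labels) - 1)]

def detect_functional_areas(image):
    """Run both stages: propose regions, then label each one."""
    return [(box, classify_region(image, box))
            for box in propose_regions(image)]
```

In the paper the proposal and classification stages are learned networks and the labels are scene functionalities; the skeleton above only mirrors the control flow of such a pipeline.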