Learning Affordance Landscapes for Interaction Exploration in 3D Environments
Format | Journal Article |
---|---|
Language | English |
Published | 20.08.2020 |
Summary: Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that it prepares them to rapidly address various downstream tasks like "find a knife and put it in the drawer." Project page: http://vision.cs.utexas.edu/projects/interaction-exploration/
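To make the reward structure in the abstract concrete, here is a minimal sketch of how a sparse interaction-success reward could be densified by a per-pixel affordance map. All names, the shaping coefficient `beta`, and the region-averaging scheme are illustrative assumptions, not the authors' implementation; the affordance map stands in for the output of the learned segmentation CNN.

```python
import numpy as np

def shaped_reward(interaction_success, novel, affordance_map, action_id,
                  target_region, beta=0.1):
    """Sparse novelty reward plus dense affordance shaping (illustrative only).

    affordance_map: (num_actions, H, W) array of per-pixel probabilities
                    that each action succeeds at that pixel (CNN output).
    target_region:  boolean (H, W) mask for the image region the agent
                    is about to act on.
    beta:           hypothetical weight on the dense shaping term.
    """
    # Sparse term: reward only novel successful interactions.
    sparse = 1.0 if (interaction_success and novel) else 0.0
    # Dense term: mean predicted affordance of this action over the region,
    # giving the explorer signal even before an interaction succeeds.
    probs = affordance_map[action_id][target_region]
    dense = beta * float(probs.mean()) if probs.size else 0.0
    return sparse + dense

# Toy usage: a 2-action, 4x4 map where action 1 is predicted usable everywhere.
amap = np.zeros((2, 4, 4))
amap[1] = 0.5
mask = np.ones((4, 4), dtype=bool)
print(shaped_reward(True, True, amap, 1, mask))   # sparse 1.0 + dense 0.05
```

The dense term is what the abstract calls "densifying the rewards for exploration": the segmentation model's predictions guide the policy toward promising regions between the sparse interaction successes.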
DOI: 10.48550/arxiv.2008.09241