Vision-Based Navigation With Language-Based Assistance via Imitation Learning With Indirect Intervention

Bibliographic Details
Published in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12519–12529
Main Authors: Nguyen, Khanh; Dey, Debadeepta; Brockett, Chris; Dolan, Bill
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2019
Summary: We present Vision-based Navigation with Language-based Assistance (VNLA), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments. The task emulates a real-world scenario in that (a) the requester may not know how to navigate to the target objects and thus makes requests by only specifying high-level end-goals, and (b) the agent is capable of sensing when it is lost and querying an advisor, who is more qualified at the task, to obtain language subgoals to make progress. To model language-based assistance, we develop a general framework termed Imitation Learning with Indirect Intervention (I3L), and propose a solution that is effective on the VNLA task. Empirical results show that this approach significantly improves the success rate of the learning agent over other baselines in both seen and unseen environments. Our code and data are publicly available at https://github.com/debadeepta/vnla.
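As a rough illustration of the agent-advisor protocol the summary describes, the sketch below shows a navigation loop in which an agent monitors whether it is lost and, within a limited help budget, requests a language subgoal from a more qualified advisor. All names here (navigate, Agent.is_lost, Advisor.give_subgoal, help_budget) are hypothetical assumptions for illustration, not the paper's actual API; the authors' real implementation is in the linked repository.

```python
# Hypothetical sketch (not the paper's API) of the VNLA setting:
# the requester gives only a high-level end-goal; when the agent
# senses it is lost, it spends part of a limited help budget to
# query an advisor for a language subgoal.

def navigate(agent, advisor, env, end_goal, help_budget=3, max_steps=100):
    obs = env.reset(end_goal)  # high-level request, e.g. "find a mug"
    subgoal = None
    for _ in range(max_steps):
        # Agent estimates whether it is lost (e.g. low action confidence).
        if help_budget > 0 and agent.is_lost(obs):
            # Advisor returns a language subgoal, e.g. "go to the kitchen".
            subgoal = advisor.give_subgoal(env.state())
            help_budget -= 1
        action = agent.act(obs, end_goal, subgoal)
        obs, done = env.step(action)
        if done:
            return True  # target object found
    return False
```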
ISSN: 2575-7075
DOI: 10.1109/CVPR.2019.01281