Instance-Level Semantic Maps for Vision Language Navigation

Bibliographic Details
Published in: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 507-512
Main Authors: Nanwani, Laksh; Agarwal, Anmol; Jain, Kanishk; Prabhakar, Raghav; Monis, Aaron; Mathur, Aditya; Jatavallabhula, Krishna Murthy; Abdul Hafez, A. H.; Gandhi, Vineet; Krishna, K. Madhava
Format: Conference Proceeding
Language: English
Published: IEEE, 28.08.2023

Summary: Humans have a natural ability to perform semantic associations with the surrounding objects in the environment. This allows them to build a mental map of the environment, which they can use to navigate on demand when given linguistic instructions. A natural goal in Vision Language Navigation (VLN) research is to impart autonomous agents with similar capabilities. Recent works take a step towards this goal by creating a semantic spatial map representation of the environment without any labeled data. However, their representations are limited for practical applicability, as they do not distinguish between different instances of the same object. In this work, we address this limitation by integrating instance-level information into the spatial map representation using a community detection algorithm, and by utilizing the word ontology learned by large language models (LLMs) to perform open-set semantic associations in the mapping representation. The resulting map representation improves navigation performance by two-fold (233%) on realistic language commands with instance-specific descriptions compared to the baseline. We validate the practicality and effectiveness of our approach through extensive qualitative and quantitative experiments.
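
The abstract does not include implementation details; the sketch below is only a minimal illustration of the community-detection idea it mentions, not the authors' code. It groups open-vocabulary detections into object instances by building a similarity graph and running an off-the-shelf community-detection step (greedy modularity in networkx). The `detections` list, the spatial/feature similarity weighting, and the 0.3 edge threshold are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's implementation): treat each per-frame
# detection as a graph node, connect nodes that are close in space and similar
# in feature space, and let community detection decide which detections belong
# to the same object instance.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical detections: a 3D centroid plus an open-vocabulary feature vector
# (e.g., a CLIP-style embedding, shortened here to 2D for readability).
detections = [
    {"centroid": np.array([1.00, 0.20, 0.0]), "feat": np.array([0.90, 0.10])},
    {"centroid": np.array([1.10, 0.25, 0.0]), "feat": np.array([0.88, 0.12])},
    {"centroid": np.array([4.00, 2.00, 0.0]), "feat": np.array([0.10, 0.95])},
    {"centroid": np.array([4.05, 2.10, 0.0]), "feat": np.array([0.12, 0.93])},
]

def similarity(a, b, dist_scale=0.5):
    """Edge weight combining spatial proximity and feature cosine similarity."""
    spatial = np.exp(-np.linalg.norm(a["centroid"] - b["centroid"]) / dist_scale)
    cos = float(a["feat"] @ b["feat"] /
                (np.linalg.norm(a["feat"]) * np.linalg.norm(b["feat"]) + 1e-8))
    return spatial * max(cos, 0.0)

# Build a weighted graph over detections, keeping only confident edges.
G = nx.Graph()
G.add_nodes_from(range(len(detections)))
for i in range(len(detections)):
    for j in range(i + 1, len(detections)):
        w = similarity(detections[i], detections[j])
        if w > 0.3:  # assumed threshold for this toy example
            G.add_edge(i, j, weight=w)

# Each detected community is treated as one object instance in the map.
instances = list(greedy_modularity_communities(G, weight="weight"))
print(instances)  # e.g., [frozenset({0, 1}), frozenset({2, 3})]
```

In a full pipeline, each resulting instance would keep its own feature vector so that instance-specific language commands (e.g., "the chair near the window") can be matched against individual objects rather than a single merged semantic class.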
ISSN: 1944-9437
DOI: 10.1109/RO-MAN57019.2023.10309534