Nearest Neighbor Future Captioning: Generating Descriptions for Possible Collisions in Object Placement Tasks

Bibliographic Details
Published in: arXiv.org
Main Authors: Komatsu, Takumi; Kambara, Motonari; Hatanaka, Shumpei; Matsuo, Haruka; Hirakawa, Tsubasa; Yamashita, Takayoshi; Fujiyoshi, Hironobu; Sugiura, Komei
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.07.2024

Summary: Domestic service robots (DSRs) that support people in everyday environments have been widely investigated. However, their ability to predict and describe future risks resulting from their own actions remains insufficient. In this study, we focus on the linguistic explainability of DSRs. Most existing methods do not explicitly model the region of possible collisions; thus, they do not properly generate descriptions of these regions. In this paper, we propose the Nearest Neighbor Future Captioning Model, which introduces a Nearest Neighbor Language Model for future captioning of possible collisions and enhances the model output with a nearest-neighbor retrieval mechanism. Furthermore, we introduce the Collision Attention Module, which attends to regions of possible collisions and enables our model to generate descriptions that adequately reflect the objects associated with possible collisions. To validate our method, we constructed a new dataset containing samples of collisions that can occur when a DSR places an object in a simulation environment. The experimental results demonstrate that our method outperformed the baseline methods on standard metrics. In particular, on CIDEr-D, the baseline method obtained 25.09 points, whereas our method obtained 33.08 points.
ISSN:2331-8422
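
As a rough illustration of the retrieval mechanism the summary alludes to, the sketch below blends a base captioning model's next-token distribution with a k-nearest-neighbor distribution built from a datastore of cached decoder states, in the style of kNN-LM. This is a minimal sketch, not the paper's implementation: the function name, the datastore layout, and the hyperparameters (k, temperature, lam) are illustrative assumptions.

import numpy as np

def knn_lm_interpolate(lm_probs, query, datastore_keys, datastore_values,
                       vocab_size, k=8, temperature=1.0, lam=0.25):
    """Blend the base model's next-token distribution p_LM(y | context) with a
    k-nearest-neighbor distribution p_kNN(y | context) derived from a datastore
    of (decoder hidden state, following token) pairs. Hypothetical sketch.

    lm_probs:         (vocab_size,) base model distribution for the current step
    query:            (d,) decoder hidden state at the current step
    datastore_keys:   (N, d) cached decoder hidden states
    datastore_values: (N,) integer token ids that followed each cached state
    """
    # Squared Euclidean distance from the query to every datastore key.
    dists = np.sum((datastore_keys - query) ** 2, axis=1)

    # Retrieve the k nearest neighbors and their associated tokens.
    nn_idx = np.argsort(dists)[:k]
    nn_dists = dists[nn_idx]
    nn_tokens = datastore_values[nn_idx]

    # Turn negative distances into neighbor weights with a softmax.
    logits = -nn_dists / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Accumulate neighbor weights per vocabulary item to form p_kNN.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, nn_tokens, weights)

    # Interpolate the retrieval-based and base distributions.
    return lam * knn_probs + (1.0 - lam) * lm_probs

In a setup like this, the datastore would be built offline by running the trained captioning model over the training data and caching each decoder hidden state together with the token that followed it; lam controls how strongly the retrieved neighbors influence the final distribution.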