Multimodal Interactive Learning of Primitive Actions
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 01.10.2018 |
| Subjects | |
Summary: We describe an ongoing project on learning to perform primitive actions from demonstrations using an interactive interface. In our previous work, we used demonstrations captured from humans performing actions as training samples for a neural network-based trajectory model of actions to be performed by a computational agent in novel setups. We found that our original framework had limitations that we hope to overcome by incorporating communication between the human and the computational agent, using the interaction between them to fine-tune the model learned by the machine. We propose a framework that uses multimodal human-computer interaction to teach action concepts to machines, making use of live demonstration and natural-language communication as two distinct teaching modalities, while requiring few training samples.
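The record gives only the abstract, not the paper's architecture. Purely as an illustration of the learning loop the summary sketches (training a trajectory model on a few demonstrations, then fine-tuning it from interactive corrections), here is a minimal PyTorch-style sketch; every name, dimension, and hyperparameter below is a hypothetical assumption, not the authors' implementation.

```python
# Illustrative sketch only: the paper does not publish this code. All names,
# shapes, and hyperparameters here are assumptions for exposition.
import torch
import torch.nn as nn


class TrajectoryModel(nn.Module):
    """Maps a scene/action observation to the next waypoint of a trajectory."""

    def __init__(self, obs_dim=8, hidden_dim=64, waypoint_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, waypoint_dim),
        )

    def forward(self, obs):
        return self.net(obs)


def train_on_demonstrations(model, demos, epochs=200, lr=1e-3):
    """Fit the model to (observation, waypoint) pairs captured from human demos."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    obs, targets = demos
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(obs), targets)
        loss.backward()
        opt.step()
    return model


def fine_tune_from_interaction(model, obs, corrected_waypoint, steps=10, lr=1e-4):
    """One interactive correction: a human signal (e.g., a natural-language
    instruction mapped to a corrected waypoint) nudges the trained model with
    a few low-learning-rate gradient steps."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(obs), corrected_waypoint)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TrajectoryModel()
    # A few synthetic "demonstrations": 5 samples, echoing the framework's
    # goal of learning from few training examples.
    demo_obs = torch.randn(5, 8)
    demo_waypoints = torch.randn(5, 3)
    train_on_demonstrations(model, (demo_obs, demo_waypoints))
    # Later, an interactive correction on a novel setup.
    fine_tune_from_interaction(model, torch.randn(1, 8), torch.zeros(1, 3))
```

The small step count and low learning rate in the fine-tuning phase reflect the summary's idea of using interaction to adjust an already-trained model rather than retraining it from scratch.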
Bibliography: AI-HRI/2018/02
DOI: 10.48550/arXiv.1810.00838