Online Prototype Alignment for Few-shot Policy Transfer
Format: Journal Article
Language: English
Published: 12.06.2023
Summary: Domain adaptation in reinforcement learning (RL) mainly deals with changes in observation when transferring a policy to a new environment. Many traditional domain-adaptation approaches in RL learn a mapping function between the source and target domains, either explicitly or implicitly. However, they typically require abundant data from the target domain. Moreover, they often rely on visual cues to learn the mapping function and may fail when the source domain looks quite different from the target domain. To address these problems, we propose Online Prototype Alignment (OPA), a novel framework that learns the mapping function based on the functional similarity of elements and can achieve few-shot policy transfer within only a few episodes. The key insight of OPA is to introduce an exploration mechanism that interacts with the unseen elements of the target domain in an efficient and purposeful manner, and then connects them with the seen elements of the source domain according to their functionalities (rather than visual cues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.
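The abstract does not spell out how elements are matched by functionality, so the following is only a minimal sketch of the general idea, not the paper's actual algorithm: each element is summarized by statistics of the interactions an exploration policy collects with it, and target elements are assigned to the source elements with the nearest "functional prototype". All names and the feature set (reward, blocked, consumed) are hypothetical.

```python
# Illustrative sketch: align unseen target-domain elements to known
# source-domain elements by interaction statistics, not appearance.
# Feature choices and names are assumptions, not the paper's method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def functional_signature(interactions):
    """Summarize how an element behaves when interacted with.

    `interactions` is a list of (reward, blocked, consumed) tuples
    gathered by an exploration policy; their mean serves as a crude
    'functional prototype' of the element.
    """
    return np.mean(np.asarray(interactions, dtype=float), axis=0)

def align_prototypes(source_sigs, target_sigs):
    """Match each target element to the source element whose
    prototype is closest (Hungarian assignment on Euclidean cost)."""
    src = np.stack(list(source_sigs.values()))
    tgt = np.stack(list(target_sigs.values()))
    cost = np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    src_keys, tgt_keys = list(source_sigs), list(target_sigs)
    return {tgt_keys[r]: src_keys[c] for r, c in zip(rows, cols)}

# Toy usage: target elements look different but behave like their
# source counterparts, so the alignment recovers the correspondence.
source = {
    "lava": functional_signature([(-1.0, 0, 1), (-1.0, 0, 1)]),
    "wall": functional_signature([(0.0, 1, 0), (0.0, 1, 0)]),
    "coin": functional_signature([(1.0, 0, 1), (1.0, 0, 1)]),
}
target = {
    "red_blob":  functional_signature([(-1.0, 0, 1)]),
    "blue_slab": functional_signature([(0.0, 1, 0)]),
    "star":      functional_signature([(1.0, 0, 1)]),
}
print(align_prototypes(source, target))
# -> {'red_blob': 'lava', 'blue_slab': 'wall', 'star': 'coin'}
```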
DOI: 10.48550/arxiv.2306.07307