Transfer RL via the Undo Maps Formalism
Format: Journal Article
Language: English
Published: 25.11.2022
Summary: Transferring knowledge across domains is one of the most fundamental problems in machine learning, but doing so effectively in the context of reinforcement learning remains largely an open problem. Current methods make strong assumptions about the specifics of the task, often lack principled objectives, and, crucially, modify individual policies, which may be sub-optimal when the domains differ due to a drift in the state space, i.e., when the drift is intrinsic to the environment and therefore affects every agent interacting with it. To address these drawbacks, we propose TvD (transfer via distribution matching), a framework for transferring knowledge across interactive domains. We approach the problem from a data-centric perspective, characterizing the discrepancy between environments by means of a (potentially complex) transformation between their state spaces, and thus posing the problem of transfer as learning to undo this transformation. To accomplish this, we introduce a novel optimization objective based on an optimal transport distance between two distributions over trajectories: those generated by an already-learned policy in the source domain, and those generated by a learnable pushforward policy in the target domain. We show that this objective leads to a policy update scheme reminiscent of imitation learning, and we derive an efficient algorithm to implement it. Our experiments in simple gridworlds show that this method yields successful transfer learning across a wide range of environment transformations.
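The core objective, an optimal transport distance between state distributions induced by source and target rollouts, can be illustrated with a minimal entropy-regularized Sinkhorn sketch. This is not the paper's implementation: the function names, the linear state-space shift standing in for the domain drift, and the regularization value are all illustrative assumptions; a candidate "undo map" is good exactly when it drives this distance down.

```python
import numpy as np

def sinkhorn_distance(X, Y, reg=1.0, n_iters=200):
    """Entropy-regularized OT cost between two empirical distributions,
    given as point clouds X (n, d) and Y (m, d) with uniform weights."""
    # Pairwise squared-Euclidean ground costs between the two samples.
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2
    K = np.exp(-C / reg)                  # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))     # uniform source marginal
    b = np.full(len(Y), 1.0 / len(Y))     # uniform target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):              # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # transport plan with marginals (a, b)
    return float(np.sum(P * C))           # transport cost under the plan

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 2))             # states visited by the source policy
tgt = src + np.array([3.0, 0.0])            # target domain: a drifted state space
undo = lambda s: s - np.array([3.0, 0.0])   # hypothetical candidate undo map

# Undoing the drift brings the target state distribution back onto the
# source one, so the OT distance drops.
assert sinkhorn_distance(src, undo(tgt)) < sinkhorn_distance(src, tgt)
```

In the paper's setting the compared objects are distributions over whole trajectories and the undo map is learned rather than given; the sketch only shows why an OT distance is a sensible score for such a map.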
DOI: 10.48550/arxiv.2211.14469