Logical Specifications-guided Dynamic Task Sampling for Reinforcement Learning Agents

Bibliographic Details
Published in: Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 34, pp. 532-540
Main Authors: Shukla, Yash; Burman, Tanushree; Kulkarni, Abhishek N.; Wright, Robert; Velasquez, Alvaro; Sinapov, Jivko
Format: Journal Article
Language: English
Published: 30.05.2024

Summary: Reinforcement Learning (RL) has made significant strides in enabling artificial agents to learn diverse behaviors. However, learning an effective policy often requires a large number of environment interactions. To mitigate sample complexity issues, recent approaches have used high-level task specifications, such as Linear Temporal Logic (LTLf) formulas or Reward Machines (RM), to guide the learning progress of the agent. In this work, we propose a novel approach, called Logical Specifications-guided Dynamic Task Sampling (LSTS), that learns a set of RL policies to guide an agent from an initial state to a goal state based on a high-level task specification, while minimizing the number of environmental interactions. Unlike previous work, LSTS does not assume information about the environment dynamics or the Reward Machine, and dynamically samples promising tasks that lead to successful goal policies. We evaluate LSTS on a gridworld and show that it achieves improved time-to-threshold performance on complex sequential decision-making problems compared to state-of-the-art RM and Automaton-guided RL baselines, such as Q-Learning for Reward Machines and Compositional RL from Logical Specifications (DIRL). Moreover, we demonstrate that our method outperforms RM and Automaton-guided RL baselines in terms of sample efficiency, both in a partially observable robotic task and in a continuous control robotic manipulation task.
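The summary describes LSTS only at a high level. As a rough, non-authoritative illustration of what "dynamically sampling promising tasks" can look like, the Python sketch below keeps a running success-rate estimate per sub-task (assumed here to come from an LTLf/automaton decomposition of the specification) and trains the sub-task with the best upper-confidence score until one reaches a success threshold. All identifiers (Subtask, sample_task, train_policy_for_epoch) are hypothetical, the training step is simulated, and this is not the authors' implementation.

import math
import random
from dataclasses import dataclass


@dataclass
class Subtask:
    name: str                  # e.g., an automaton edge such as "q0 -> q1"
    success_rate: float = 0.0  # running estimate of goal-reaching success
    attempts: int = 0          # number of training rounds given to this task


def train_policy_for_epoch(task: Subtask) -> float:
    """Placeholder for one round of RL training on `task`.

    A real implementation would run the agent in the environment and return
    the fraction of episodes that satisfied the sub-task; here it is simulated.
    """
    return min(1.0, task.success_rate + random.uniform(0.0, 0.3))


def sample_task(tasks: list[Subtask], total_rounds: int) -> Subtask:
    """Pick the most promising sub-task, with a UCB-style bonus for rarely tried ones."""
    def score(t: Subtask) -> float:
        bonus = math.sqrt(2.0 * math.log(total_rounds + 1) / (t.attempts + 1))
        return t.success_rate + bonus
    return max(tasks, key=score)


def dynamic_task_sampling(tasks: list[Subtask], rounds: int = 50, threshold: float = 0.9) -> None:
    """Repeatedly sample a promising sub-task, train it, and stop once a task
    exceeds the success threshold."""
    for r in range(rounds):
        task = sample_task(tasks, r)
        observed = train_policy_for_epoch(task)
        task.attempts += 1
        # Incremental average of observed success rates.
        task.success_rate += (observed - task.success_rate) / task.attempts
        if task.success_rate >= threshold:
            print(f"Sub-task '{task.name}' reached threshold after {r + 1} rounds")
            return


if __name__ == "__main__":
    candidate_tasks = [Subtask("q0 -> q1"), Subtask("q1 -> q2"), Subtask("q2 -> q_goal")]
    dynamic_task_sampling(candidate_tasks)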
ISSN: 2334-0835, 2334-0843
DOI: 10.1609/icaps.v34i1.31514