Combining neural networks and tree search for task and motion planning in challenging environments

Bibliographic Details
Published in: Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 6059-6066
Main Authors: Paxton, Chris; Raman, Vasumathi; Hager, Gregory D.; Kobilarov, Marin
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2017
ISSN: 2153-0866
DOI: 10.1109/IROS.2017.8206505

More Information
Summary: Task and motion planning subject to Linear Temporal Logic (LTL) specifications in complex, dynamic environments requires efficient exploration of many possible future worlds. Model-free reinforcement learning has proven successful in a number of challenging tasks, but shows poor performance on tasks that require long-term planning. In this work, we integrate Monte Carlo Tree Search with hierarchical neural net policies trained on expressive LTL specifications. We use reinforcement learning to find deep neural networks representing both low-level control policies and task-level "option policies" that achieve high-level goals. Our combined architecture generates safe and responsive motion plans that respect the LTL constraints. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying rules of the road.
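The abstract describes biasing Monte Carlo Tree Search over high-level "options" with learned policies while respecting LTL constraints. The sketch below is not the paper's implementation; it is a minimal, self-contained illustration of that general idea, assuming a toy lane-driving surrogate, a stub option-policy prior, and a simple safety predicate standing in for an LTL constraint. All names (ToyLaneEnv, option_prior, puct_select) are hypothetical.

```python
# Illustrative sketch only: PUCT-style MCTS over high-level options,
# with a stub "option policy" prior and a safety check standing in for
# an LTL "always avoid collision" constraint. Not the authors' code.
import math
import random

OPTIONS = ["keep_lane", "change_left", "change_right", "slow_down"]

class ToyLaneEnv:
    """Toy surrogate for a driving domain: state = (lane, position)."""
    def __init__(self, lanes=3, length=10, obstacle=(1, 5)):
        self.lanes, self.length, self.obstacle = lanes, length, obstacle

    def step(self, state, option):
        lane, pos = state
        if option == "change_left":
            lane = max(0, lane - 1)
        elif option == "change_right":
            lane = min(self.lanes - 1, lane + 1)
        pos += 0 if option == "slow_down" else 1
        return (lane, pos)

    def safe(self, state):      # stand-in for an LTL safety constraint
        return state != self.obstacle

    def terminal(self, state):
        return state[1] >= self.length or not self.safe(state)

    def reward(self, state):
        if not self.safe(state):
            return -1.0         # constraint violation
        return 1.0 if state[1] >= self.length else 0.0

def option_prior(state):
    """Stub for a learned option policy: probability per high-level option."""
    return {o: 1.0 / len(OPTIONS) for o in OPTIONS}

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.children, self.visits, self.value = {}, 0, 0.0

def puct_select(node, c=1.4):
    total = sum(ch.visits for ch in node.children.values()) + 1
    def score(item):
        _, ch = item
        q = ch.value / ch.visits if ch.visits else 0.0
        return q + c * ch.prior * math.sqrt(total) / (1 + ch.visits)
    return max(node.children.items(), key=score)

def rollout(env, state, depth=10):
    for _ in range(depth):
        if env.terminal(state):
            break
        state = env.step(state, random.choice(OPTIONS))
    return env.reward(state)

def mcts(env, root_state, iterations=500):
    root = Node(root_state, 1.0)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node has been expanded.
        while node.children and not env.terminal(node.state):
            _, node = puct_select(node)
            path.append(node)
        # Expansion: add children weighted by the option-policy prior.
        if not env.terminal(node.state):
            for opt, p in option_prior(node.state).items():
                node.children[opt] = Node(env.step(node.state, opt), p)
        # Simulation and backup.
        value = rollout(env, node.state)
        for n in path:
            n.visits += 1
            n.value += value
    best_option, _ = max(root.children.items(), key=lambda kv: kv[1].visits)
    return best_option

if __name__ == "__main__":
    env = ToyLaneEnv()
    print("Chosen option from (1, 0):", mcts(env, (1, 0)))
```

In this sketch the prior is uniform, so the search behaves like plain UCT; replacing option_prior with a trained network is what would bias expansion toward promising options, which is the role the abstract assigns to the learned task-level policies.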