Differentiable Architecture Search for Reinforcement Learning

Bibliographic Details
Main Authors: Miao, Yingjie; Song, Xingyou; Co-Reyes, John D.; Peng, Daiyi; Yue, Summer; Brevdo, Eugene; Faust, Aleksandra
Format: Journal Article
Language: English
Published: 03.06.2021

Summary: In this paper, we investigate the fundamental question: to what extent are gradient-based neural architecture search (NAS) techniques applicable to RL? Using the original DARTS as a convenient baseline, we discover that the discrete architectures found can achieve up to 250% of the performance of manual architecture designs on both discrete and continuous action space environments, across off-policy and on-policy RL algorithms, at only 3x more computation time. Furthermore, through numerous ablation studies, we systematically verify that not only does DARTS correctly upweight operations during its supernet phase, but it also gradually improves the resulting discrete cells up to 30x more efficiently than random search, suggesting that DARTS is a surprisingly effective tool for improving architectures in RL.
DOI:10.48550/arxiv.2106.02229
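
The summary's mention of DARTS "upweighting operations during its supernet phase" refers to the continuous relaxation at the core of DARTS: each edge of a cell computes a softmax-weighted mixture of candidate operations, and the learned weights later determine which single operation is kept in the discrete cell. The sketch below is a rough, hypothetical illustration of that idea only; it is not the paper's code, and the class name, candidate operations, and dimensions are assumptions.

```python
# Minimal sketch of a DARTS-style mixed operation (hypothetical; the paper
# searches over RL policy-network cells, not these exact ops).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    def __init__(self, dim, ops=None):
        super().__init__()
        # Candidate operations on an edge of the cell (assumed examples).
        self.ops = nn.ModuleList(ops or [
            nn.Identity(),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Sequential(nn.Linear(dim, dim), nn.Tanh()),
        ])
        # One learnable architecture parameter (alpha) per candidate op.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Continuous relaxation: softmax-weighted sum over all candidate ops.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def discretize(self):
        # After the supernet phase, keep only the highest-weighted op.
        return self.ops[int(self.alpha.argmax())]
```

During the supernet phase, the alpha parameters are trained alongside the ordinary network weights (DARTS uses a bilevel scheme for this); the discretize() step above corresponds to extracting the final discrete cell by selecting the most upweighted operation on each edge.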