Constrained Deep Q-Learning Gradually Approaching Ordinary Q-Learning

Bibliographic Details
Published in: Frontiers in Neurorobotics, Vol. 13, p. 103
Main Authors: Ohnishi, Shota; Uchibe, Eiji; Yamaguchi, Yotaro; Nakanishi, Kosuke; Yasui, Yuji; Ishii, Shin
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 10.12.2019

More Information
Summary: A deep Q network (DQN) (Mnih et al., 2013) is a typical deep reinforcement learning method that extends Q-learning. In DQN, a Q function expressing the action values of all states is approximated by a convolutional neural network, and an optimal policy is derived from the approximated Q function. To stabilize learning, DQN introduces a target network, which computes the target value and is synchronized with the Q function at regular intervals. Less frequent updates of the target network make learning more stable; however, because the target value is not propagated until the target network is updated, DQN usually requires a large number of samples. In this study, we propose Constrained DQN, which uses the difference between the outputs of the Q function and the target network as a constraint on the target value. Constrained DQN updates its parameters conservatively when this difference is large and aggressively when it is small. As learning progresses, the constraint is activated less often, so the update rule gradually approaches conventional Q-learning. We found that Constrained DQN converges with fewer training samples than DQN and that it is robust against changes in the update frequency of the target network and in a certain parameter of the optimizer. Although Constrained DQN alone does not outperform integrated or distributed methods, experimental results show that it can be used as an additional component of those methods.
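
As an illustration of the constrained target described in the summary, the following is a minimal sketch in Python/NumPy. It assumes the constraint is realized by clipping an ordinary Q-learning bootstrap into a margin around the target-network bootstrap; the margin parameter epsilon, the clipping form, and the function name constrained_targets are assumptions made for this sketch, not the formulation given in the paper (see the DOI below for the authors' definition).

import numpy as np

def constrained_targets(rewards, next_q_online, next_q_frozen, dones,
                        gamma=0.99, epsilon=1.0):
    # rewards:        (B,)   immediate rewards
    # next_q_online:  (B, A) online-network Q-values for the next states
    # next_q_frozen:  (B, A) target-network Q-values for the next states
    # dones:          (B,)   1.0 where the episode terminated, else 0.0
    # epsilon:        assumed constraint margin (hypothetical parameter)

    # Ordinary Q-learning bootstrap: greedy value under the online network.
    online_boot = rewards + gamma * (1.0 - dones) * next_q_online.max(axis=1)
    # Standard DQN bootstrap: greedy value under the frozen target network.
    frozen_boot = rewards + gamma * (1.0 - dones) * next_q_frozen.max(axis=1)
    # Conservative update when the two networks disagree: clip the online
    # bootstrap into an epsilon-band around the frozen bootstrap. When the
    # networks agree (typically late in training), the clip is inactive and
    # the target equals the ordinary Q-learning target, mirroring the
    # "gradually approaching ordinary Q-learning" behavior in the summary.
    return np.clip(online_boot, frozen_boot - epsilon, frozen_boot + epsilon)

The returned targets would then be regressed against Q(s, a) with the usual squared loss; with a large epsilon, or once the online and target networks agree, the expression reduces to the ordinary Q-learning target.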
Edited by: Hong Qiao, University of Chinese Academy of Sciences, China
Reviewed by: Jiwen Lu, Tsinghua University, China; David Haim Silver, Independent Researcher, Haifa, Israel; Timothy P. Lillicrap, Google, United States
ISSN: 1662-5218
DOI: 10.3389/fnbot.2019.00103