Active Object Detection Using Double DQN and Prioritized Experience Replay
Published in | 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1 - 7 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.07.2018 |
Summary: | Visual object detection is one of the fundamental tasks in computer vision and robotics. Small scale, partial capture, and occlusion often occur in robotic applications, and most existing object detection algorithms perform poorly in such situations. However, a robot can observe an object from different views and plan its trajectory over the next few steps, which can lead to better observations. We formulate active object detection as a sequential action-decision process and develop a deep reinforcement learning architecture to solve it. A double deep Q-learning network (DQN) is applied to predict an action at each step. Experimental validation on the Active Vision Dataset shows the effectiveness of the proposed method. |
---|---|
ISSN: | 2161-4407 |
DOI: | 10.1109/IJCNN.2018.8489296 |
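The summary describes a Double DQN agent with prioritized experience replay that chooses a camera action at each step. The following is a minimal NumPy sketch of that update rule under assumed toy discrete state/action spaces and hyperparameters; it is an illustration of the general technique, not the authors' implementation.

```python
# Minimal sketch: Double DQN update with proportional prioritized replay.
# Tabular Q-values and hypothetical state/action sizes are assumptions;
# the paper uses deep networks on the Active Vision Dataset.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 16, 4            # hypothetical view states / camera moves
GAMMA, ALPHA, BETA, LR = 0.99, 0.6, 0.4, 0.1

q_online = np.zeros((N_STATES, N_ACTIONS))   # selects the greedy action
q_target = np.zeros((N_STATES, N_ACTIONS))   # evaluates that action

buffer, priorities = [], []                  # transitions and their priorities

def add(transition, priority=1.0):
    buffer.append(transition)
    priorities.append(priority)

def sample(batch_size):
    # Sample transitions with probability proportional to priority^ALPHA.
    p = np.asarray(priorities) ** ALPHA
    p /= p.sum()
    idx = rng.choice(len(buffer), size=batch_size, p=p)
    # Importance-sampling weights correct the bias of non-uniform sampling.
    w = (len(buffer) * p[idx]) ** (-BETA)
    w /= w.max()
    return idx, w

def train_step(batch_size=8):
    idx, w = sample(batch_size)
    for i, weight in zip(idx, w):
        s, a, r, s_next, done = buffer[i]
        # Double DQN target: online net picks the action, target net scores it.
        a_star = int(np.argmax(q_online[s_next]))
        target = r if done else r + GAMMA * q_target[s_next, a_star]
        td_error = target - q_online[s, a]
        q_online[s, a] += LR * weight * td_error
        # Priority is updated to the absolute TD error.
        priorities[i] = abs(td_error) + 1e-6

# Fill the buffer with random transitions and run a few updates.
for _ in range(200):
    s, a = int(rng.integers(N_STATES)), int(rng.integers(N_ACTIONS))
    add((s, a, rng.normal(), int(rng.integers(N_STATES)), rng.random() < 0.1))
for step in range(100):
    train_step()
    if step % 25 == 0:
        q_target[:] = q_online  # periodic target-network sync
```

In the decoupled target, the online network's argmax and the target network's evaluation reduce the overestimation bias of vanilla DQN, while the TD-error-based priorities focus updates on the transitions the agent currently predicts worst.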