Multiagent Deep Reinforcement Learning With Demonstration Cloning for Target Localization

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 10, no. 15, pp. 13556-13570
Main Authors: Alagha, Ahmed; Mizouni, Rabeb; Bentahar, Jamal; Otrok, Hadi; Singh, Shakti
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2023
Summary: In target localization applications, readings from multiple sensing agents are processed to identify a target location. Localization systems using stationary sensors rely on data fusion methods to estimate the target location, whereas other systems use mobile sensing agents (UAVs, robots) to search the area for the target. However, such methods are designed for specific environments, and hence become infeasible if the environment changes. For instance, the presence of walls increases the environment's complexity and affects both the collected readings and the mobility of the agents. Recent works have explored deep reinforcement learning (DRL) as an efficient and adaptable approach to the target search problem, but these methods are either designed for single-agent systems or for noncomplex environments. This work proposes two novel multiagent DRL models for target localization through search in complex environments. The first model combines proximal policy optimization, convolutional neural networks, convolutional autoencoders to create embeddings, and a shaped reward function based on breadth-first search to obtain cooperative agents that achieve fast localization at low cost. The second model reduces the computational complexity of the first by replacing the shaped reward with a simple sparse reward, subject to the availability of expert demonstrations. These demonstrations are used in Demonstration Cloning, a novel method that utilizes demonstrations to guide the learning of new agents. The proposed models are tested on a radioactive target localization scenario and benchmarked against existing methods, showing efficacy in terms of localization time and cost, as well as learning speed and stability.
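To illustrate the shaped-reward idea described in the summary, the sketch below computes breadth-first-search distances from the target over a grid with walls and rewards an agent for reducing that distance. This is a minimal illustration of the general technique, not the paper's exact formulation; the grid encoding, function names, and the `step_cost` penalty are assumptions.

```python
from collections import deque

def bfs_distances(grid, target):
    """Breadth-first search outward from the target cell.

    grid: 2D list where 0 = free cell and 1 = wall (assumed encoding).
    target: (row, col) of the target.
    Returns a dict mapping each reachable free cell to its shortest
    path length to the target, respecting walls.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {target: 0}
    queue = deque([target])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def shaped_reward(dist, prev_pos, new_pos, step_cost=0.01):
    """Reward an agent for moving closer to the target along the
    wall-aware BFS metric, minus a small per-step cost (assumed)
    to encourage fast, low-cost localization."""
    return (dist[prev_pos] - dist[new_pos]) - step_cost
```

Because BFS respects walls, an agent on the far side of an obstacle is rewarded for detouring around it, which a straight-line (Euclidean) shaping term would penalize.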
ISSN: 2327-4662
DOI: 10.1109/JIOT.2023.3262663