AnimalEnvNet: A Deep Reinforcement Learning Method for Constructing Animal Agents Using Multimodal Data Fusion

Bibliographic Details
Published in: Applied Sciences, Vol. 14, No. 14, p. 6382
Main Authors: Chen, Zhao; Wang, Dianchang; Zhao, Feixiang; Dai, Lingnan; Zhao, Xinrong; Jiang, Xian; Zhang, Huaiqing
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.07.2024
Summary: Simulating animal movement has long been a central focus of wildlife behaviour research. Conventional modelling methods struggle to represent spatial and temporal variation in the data accurately, and they generally make poor use of telemetry data. This paper therefore introduces AnimalEnvNet, a novel deep reinforcement learning method. The approach fuses historical trajectory data and remote sensing images to construct an animal agent, overcoming the constraints of conventional modelling approaches. We selected pandas as the study subject and conducted experiments using GPS trajectory data, Google Earth imagery, and Sentinel-2A remote sensing images. The experimental results show that AnimalEnvNet converges during supervised learning training, attaining a minimum mean absolute error (MAE) of 28.4 m for single-step prediction against actual trajectories. During reinforcement learning training, the agent can replicate animal locomotion for up to 12 iterations while keeping the error within 1000 m. This offers a new approach and perspective for simulating animal behaviour.
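
To make the fusion idea in the summary concrete, the following minimal PyTorch sketch shows one plausible way to combine a remote-sensing image patch with a short GPS trajectory history to predict the next movement step, pretrained with an L1 (MAE) objective as described above. All layer sizes, the patch size, the trajectory length, and the class name AnimalPolicySketch are illustrative assumptions, not the architecture reported in the paper.

# A minimal sketch of a multimodal fusion policy in the spirit of the abstract:
# a CNN encodes a local remote-sensing patch, a GRU encodes the recent GPS
# trajectory, and a fused head predicts the next-step displacement.
# All sizes and names are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class AnimalPolicySketch(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Image branch: encodes a local remote-sensing patch (e.g. 64x64 RGB).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden),
        )
        # Trajectory branch: encodes the recent sequence of (x, y) positions.
        self.traj_encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        # Fusion head: predicts the next-step displacement (dx, dy) in metres.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, image, trajectory):
        img_feat = self.image_encoder(image)        # (B, hidden)
        _, traj_h = self.traj_encoder(trajectory)   # (1, B, hidden)
        fused = torch.cat([img_feat, traj_h.squeeze(0)], dim=-1)
        return self.head(fused)

# Supervised pretraining on historical trajectories with an L1 (MAE) loss,
# mirroring the single-step-prediction setup mentioned in the abstract.
model = AnimalPolicySketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

image = torch.randn(8, 3, 64, 64)      # dummy remote-sensing patches
trajectory = torch.randn(8, 16, 2)     # dummy trajectory histories
target_step = torch.randn(8, 2)        # dummy next-step displacements

optimizer.zero_grad()
loss = loss_fn(model(image, trajectory), target_step)
loss.backward()
optimizer.step()

In the paper's setup this supervised stage would be followed by reinforcement learning training of the agent; that stage is omitted here for brevity.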
ISSN: 2076-3417
DOI: 10.3390/app14146382