Enhancing temple surveillance through human activity recognition: A novel dataset and YOLOv4-ConvLSTM approach

Bibliographic Details
Published in: Journal of Intelligent & Fuzzy Systems, Vol. 45, No. 6, pp. 11217–11232
Main Authors: Ashwin Shenoy, M.; Thillaiarasu, N.
Format: Journal Article
Language: English
Published: Amsterdam: IOS Press BV, 02.12.2023

Summary: Automated identification of human activities remains a complex endeavor, particularly in distinctive settings such as temple environments. This study employs machine learning and deep learning techniques to analyze human activities for intelligent temple surveillance. Because standardized datasets tailored to temple surveillance are scarce, specialized data are needed. In response, this research introduces a pioneering dataset featuring eight distinct classes of human activities, predominantly centered on hand gestures and body postures. To identify the most effective solution for Human Activity Recognition (HAR), a comprehensive ablation study is conducted across a variety of conventional machine learning and deep learning models. By integrating YOLOv4's robust object detection with ConvLSTM's ability to model spatial and temporal dependencies, the approach can recognize and interpret human activities in sequences of images or video frames. The proposed YOLOv4-ConvLSTM approach emerges as the optimal choice, achieving an accuracy of 93.68%. This result underscores the suitability of the methodology for diverse HAR applications in temple environments.
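
The abstract describes the YOLOv4-ConvLSTM integration only at a high level. As a rough illustration, the Python sketch below (TensorFlow/Keras) assumes that person crops produced by a YOLOv4 detector have been resized and stacked into fixed-length clips, which are then fed to a small ConvLSTM classifier over the eight activity classes. The clip length, crop size, layer widths, and helper names are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a detect-then-classify HAR pipeline, assuming
    # YOLOv4 person-box crops are stacked into fixed-length clips.
    # All sizes below are assumptions, not values from the paper.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SEQ_LEN, H, W, C = 16, 64, 64, 3   # assumed clip length and crop size
    NUM_CLASSES = 8                     # eight activity classes per the abstract

    def build_har_classifier():
        """ConvLSTM classifier over a sequence of per-frame person crops."""
        return models.Sequential([
            layers.Input(shape=(SEQ_LEN, H, W, C)),
            # ConvLSTM2D applies convolutions inside the LSTM gates, so
            # spatial structure is preserved while temporal dynamics are learned.
            layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                              return_sequences=True),
            layers.BatchNormalization(),
            layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                              return_sequences=False),
            layers.GlobalAveragePooling2D(),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    model = build_har_classifier()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # In practice each clip would hold YOLOv4 person crops from consecutive
    # video frames; random tensors stand in for that data here.
    clips = np.random.rand(4, SEQ_LEN, H, W, C).astype("float32")
    labels = np.random.randint(0, NUM_CLASSES, size=(4,))
    model.fit(clips, labels, epochs=1, verbose=0)

The design choice this illustrates is the one named in the abstract: the detector localizes the person so the recurrent stage only has to model how the cropped region evolves over time, rather than the whole scene.
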
ISSN: 1064-1246, 1875-8967
DOI: 10.3233/JIFS-233919