Keyframe-guided Video Swin Transformer with Multi-path Excitation for Violence Detection

Bibliographic Details
Published in: The Computer Journal, Vol. 67, No. 5, pp. 1826–1837
Main Authors: Li, Chenghao; Yang, Xinyan; Liang, Gang
Format: Journal Article
Language: English
Published: Oxford University Press, 22.06.2024
ISSN: 0010-4620, 1460-2067
DOI: 10.1093/comjnl/bxad103

Summary: Violence detection is a critical task aimed at identifying violent behavior in video by extracting frames and applying classification models. However, the complexity of video data and the suddenness of violent events make it difficult to pinpoint instances of violence accurately, so extracting the frames that indicate violence is challenging. Furthermore, designing and applying high-performance models for violence detection remains an open problem. Traditional models embed spatial features extracted from sampled frames directly into a temporal sequence, which ignores the spatio-temporal characteristics of video and limits the ability to express continuous changes between adjacent frames. To address these challenges, this paper proposes a novel framework called ACTION-VST. First, a keyframe extraction algorithm is developed to select the frames most likely to depict violent scenes. To transform visual sequences into spatio-temporal feature maps, a multi-path excitation module is proposed to activate spatio-temporal, channel, and motion features. Next, an advanced Video Swin Transformer-based network is employed for both global and local spatio-temporal modeling, enabling comprehensive feature extraction and representation of violence. The proposed method was validated on two large-scale datasets, RLVS and RWF-2000, achieving accuracies of over 98% and 93%, respectively, surpassing the state of the art.
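
The abstract names three excitation paths (spatio-temporal, channel, and motion) that re-weight clip features before the Video Swin Transformer backbone. The record gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of what such a multi-path excitation block could look like: the class name MultiPathExcitation, the (N, T, C, H, W) feature layout, the channel-reduction ratio, and the sigmoid-gated summation are all illustrative assumptions, not the paper's actual module.

```python
# Hypothetical sketch of a multi-path excitation block. Three parallel paths
# each produce a sigmoid gate that modulates the input features; the gated
# outputs are summed. All names and design choices are illustrative.
import torch
import torch.nn as nn


class MultiPathExcitation(nn.Module):
    """Input and output: (N, T, C, H, W) clip feature maps."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        r = max(channels // reduction, 1)
        # Spatio-temporal path: squeeze channels, 3D conv over (T, H, W).
        self.ste_conv = nn.Conv3d(1, 1, kernel_size=3, padding=1)
        # Channel path: temporal 1D conv on globally pooled features.
        self.ce_down = nn.Conv1d(channels, r, kernel_size=1)
        self.ce_temporal = nn.Conv1d(r, r, kernel_size=3, padding=1)
        self.ce_up = nn.Conv1d(r, channels, kernel_size=1)
        # Motion path: frame-to-frame feature differences, then channel gating.
        self.me_down = nn.Conv2d(channels, r, kernel_size=1)
        self.me_up = nn.Conv2d(r, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, t, c, h, w = x.shape

        # --- Spatio-temporal excitation: gate each (t, h, w) location ---
        s = x.mean(dim=2, keepdim=True)                    # (N, T, 1, H, W)
        s = self.sigmoid(self.ste_conv(s.permute(0, 2, 1, 3, 4)))
        out_ste = x * s.permute(0, 2, 1, 3, 4)             # broadcast over C

        # --- Channel excitation: gate each channel per frame ---
        g = x.mean(dim=(3, 4)).permute(0, 2, 1)            # (N, C, T)
        g = self.ce_up(self.ce_temporal(self.ce_down(g)))  # (N, C, T)
        g = self.sigmoid(g).permute(0, 2, 1)               # (N, T, C)
        out_ce = x * g.unsqueeze(-1).unsqueeze(-1)

        # --- Motion excitation: difference of adjacent frame features ---
        f = self.me_down(x.reshape(n * t, c, h, w)).reshape(n, t, -1, h, w)
        diff = f[:, 1:] - f[:, :-1]                        # (N, T-1, r, H, W)
        diff = torch.cat([diff, torch.zeros_like(diff[:, :1])], dim=1)
        m = diff.mean(dim=(3, 4))                          # (N, T, r) pooled
        m = self.sigmoid(self.me_up(m.reshape(n * t, -1, 1, 1)))
        out_me = x * m.reshape(n, t, c, 1, 1)

        # Sum the three excited paths.
        return out_ste + out_ce + out_me


if __name__ == "__main__":
    clip = torch.randn(2, 8, 64, 56, 56)                   # (N, T, C, H, W)
    block = MultiPathExcitation(channels=64)
    print(block(clip).shape)                               # (2, 8, 64, 56, 56)
```

In a full pipeline, blocks like this would typically be interleaved with the backbone's stages so that motion-aware re-weighting happens at several feature resolutions; the paper's actual placement and gating design may differ.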