A highly robust deep learning technique for overlap detection using audio fingerprinting

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 83, No. 10, pp. 29119-29137
Main Authors: Uikey, Akash; Bedi, Anterpreet Kaur; Choudhary, Priyankar; Ooi, Wei Tsang; Saini, Mukesh
Format: Journal Article
Language: English
Published: New York: Springer US, 01.03.2024
Publisher: Springer Nature B.V.

Summary: Due to the proliferation of video-based applications, there is a high demand for automated systems that support various video-based tasks without human intervention, i.e., without manual tagging. In this paper, we present a novel approach for detecting the presence of overlap between two videos by exploiting their corresponding audio signals, which is a crucial preprocessing step for audio and, subsequently, video alignment and synchronisation. Several existing approaches have limitations related to timestamps, overlapping regions, and the length of video clips. In this work, we target the challenging scenario of videos recorded simultaneously and in an unconstrained manner by multiple users attending performance events. Our work is an attempt to develop a robust framework that not only accounts for noisy components present in the audio but is also free from the limitations mentioned above. We compare our framework with several existing approaches, which it outperforms by an average of 13.71% in terms of accuracy.
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-16713-y
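The summary describes detecting overlap between two recordings via audio fingerprints, but the record does not spell out the authors' pipeline. The sketch below is only a minimal illustration of how audio-fingerprint-based overlap detection is commonly done (spectral-peak "constellation" hashes matched by voting on a relative time offset); the function names, parameters, and thresholds are assumptions for illustration, not the authors' deep learning framework.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram


def fingerprint(audio, sr, neighborhood=20, fan_out=5, max_dt=200):
    """Spectral-peak ("constellation") hashes for one mono audio track."""
    # Log-magnitude spectrogram of the track.
    _, _, S = spectrogram(audio, fs=sr, nperseg=1024, noverlap=512)
    S = np.log1p(S)
    # Keep local maxima that stand above the mean energy as spectral peaks.
    peaks = np.argwhere((maximum_filter(S, size=neighborhood) == S) & (S > S.mean()))
    peaks = peaks[np.argsort(peaks[:, 1])]  # sort peaks by time frame
    hashes = []
    for i, (f1, t1) in enumerate(peaks):
        # Pair each anchor peak with a few later peaks ("fan-out").
        for f2, t2 in peaks[i + 1 : i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= max_dt:
                hashes.append(((int(f1), int(f2), int(dt)), int(t1)))
    return hashes


def overlap(hashes_a, hashes_b, min_votes=20):
    """Vote on the relative time offset of hashes shared by two tracks."""
    index = {}
    for key, t in hashes_a:
        index.setdefault(key, []).append(t)
    votes = {}
    for key, t_b in hashes_b:
        for t_a in index.get(key, []):
            votes[t_a - t_b] = votes.get(t_a - t_b, 0) + 1
    if not votes:
        return False, None
    offset, count = max(votes.items(), key=lambda kv: kv[1])
    # Many matching hashes agreeing on one offset suggests the clips overlap.
    return count >= min_votes, offset
```

With a vote threshold chosen relative to clip length, the winning offset also gives a coarse alignment estimate for later synchronisation. The paper's actual contribution, a deep-learning-based, noise-robust detector free of timestamp and clip-length constraints, is not reproduced by this generic sketch.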