Multi-Task Learning for Acoustic Event Detection Using Event and Frame Position Information

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 22, No. 3, pp. 569-578
Main Authors: Xia, Xianjun; Togneri, Roberto; Sohel, Ferdous; Zhao, Yuanjun; Huang, Defeng
Format: Journal Article
Language: English
Published: Piscataway, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2020
Summary: Acoustic event detection (AED) analyzes acoustic signals to determine the sound type and to estimate the audio event boundaries. Multi-label classification approaches are commonly used to detect frame-wise event types, with a median filter applied to determine the occurring acoustic events. However, such multi-label classifiers are trained only on the acoustic event types, ignoring the position of each frame within the audio event. To address this, this paper proposes a joint-learning multi-task system: the first task performs acoustic event type detection, and the second task predicts the frame position information. By sharing representations between the two tasks, the acoustic models generalize better than the original classifier, since averaging the tasks' respective noise patterns acts as an implicit regularizer. Experimental results on the monophonic UPC-TALP and the polyphonic TUT Sound Event datasets demonstrate the superior performance of the joint-learning method, which achieves a lower error rate and a higher F-score than the baseline AED system.
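The shared-representation idea in the summary can be sketched as follows: a single encoder feeds two task heads, one producing frame-wise multi-label event probabilities and one producing a frame-position estimate, trained under a weighted joint loss. All layer sizes, the choice of a scalar relative-position target, and the loss weighting are illustrative assumptions for this sketch, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames, n_feats, n_hidden, n_events = 100, 40, 64, 6

# Shared representation: one dense layer over per-frame features.
W_shared = rng.normal(scale=0.1, size=(n_feats, n_hidden))
# Task 1 head: multi-label event-type logits per frame.
W_event = rng.normal(scale=0.1, size=(n_hidden, n_events))
# Task 2 head: scalar frame-position estimate (assumed here to be the
# relative position of the frame within its event, squashed to (0, 1)).
W_pos = rng.normal(scale=0.1, size=(n_hidden, 1))

def forward(x):
    h = np.tanh(x @ W_shared)                      # shared features
    event_prob = 1 / (1 + np.exp(-(h @ W_event)))  # sigmoid, multi-label
    pos_pred = 1 / (1 + np.exp(-(h @ W_pos)))      # position in (0, 1)
    return event_prob, pos_pred

def joint_loss(event_prob, event_true, pos_pred, pos_true, alpha=0.5):
    # Weighted sum of binary cross-entropy (event types) and MSE
    # (frame position); the shared weights W_shared are what couple,
    # and thereby implicitly regularize, the two tasks.
    eps = 1e-9
    bce = -np.mean(event_true * np.log(event_prob + eps)
                   + (1 - event_true) * np.log(1 - event_prob + eps))
    mse = np.mean((pos_pred.ravel() - pos_true) ** 2)
    return alpha * bce + (1 - alpha) * mse

# Toy per-frame features, multi-label targets, and position targets.
x = rng.normal(size=(n_frames, n_feats))
event_true = (rng.random((n_frames, n_events)) > 0.8).astype(float)
pos_true = np.linspace(0.0, 1.0, n_frames)

event_prob, pos_pred = forward(x)
loss = joint_loss(event_prob, event_true, pos_pred, pos_true)
print(event_prob.shape, pos_pred.shape, float(loss))
```

In a trained system, gradients from both heads would flow into `W_shared`, so frame-position supervision shapes the same features used for event detection; at test time the event head's frame-wise outputs would still be median-filtered to produce event segments, as the summary describes.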
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2019.2933330