CATSE: A Context-Aware Framework for Causal Target Sound Extraction
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 21.03.2024 |
Summary: Target Sound Extraction (TSE) focuses on the problem of separating sources of interest, indicated by a user's cue, from the input mixture. Most existing solutions operate in an offline fashion and are not suited to the low-latency causal processing constraints imposed by applications in live-streamed content such as augmented hearing. We introduce a family of context-aware, low-latency causal TSE models suitable for real-time processing. First, we explore the utility of context by providing the TSE model with oracle information about which sound classes make up the input mixture, where the objective of the model is to extract one or more sources of interest indicated by the user. Since the practical applications of oracle models are limited by their assumptions, we introduce a composite multi-task training objective involving separation and classification losses. Our evaluation on single- and multi-source extraction shows the benefit of using context information in the model, either by providing full context or via the proposed multi-task training loss that does not require full context information. Specifically, we show that our proposed model outperforms a size- and latency-matched Waveformer, a state-of-the-art model for real-time TSE.
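The abstract does not spell out the exact form of the composite multi-task objective, only that it combines separation and classification losses. Below is a minimal sketch of one plausible combination, assuming a negative-SNR separation term, a multi-label classification term over the sound classes present in the mixture, and a hypothetical weight `lambda_cls`; the function and argument names are illustrative and not the authors' API.

```python
import torch
import torch.nn.functional as F


def composite_loss(est_wave, ref_wave, class_logits, class_labels, lambda_cls=0.1):
    """Sketch of a composite multi-task objective: separation loss on the
    extracted waveform plus a multi-label classification loss on the
    predicted mixture classes. Terms and weighting are assumptions."""
    # Separation term: negative SNR between estimate and reference
    # (one common choice for waveform-domain extraction models).
    noise = ref_wave - est_wave
    snr = 10.0 * torch.log10(
        ref_wave.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + 1e-8) + 1e-8
    )
    loss_sep = -snr.mean()

    # Classification term: binary cross-entropy over a multi-hot vector of
    # the sound classes present in the input mixture.
    loss_cls = F.binary_cross_entropy_with_logits(class_logits, class_labels)

    # Weighted sum of the two tasks.
    return loss_sep + lambda_cls * loss_cls
```

In such a setup, the classification head would act as a learned substitute for the oracle context, so that at inference time the model no longer needs to be told which classes are in the mixture.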
DOI: 10.48550/arxiv.2403.14246