TalkNCE: Improving Active Speaker Detection with Talk-Aware Contrastive Learning

Bibliographic Details
Published in: arXiv.org
Main Authors: Jung, Chaeyoung; Lee, Suyeon; Nam, Kihyun; Rho, Kyeongha; Kim, You Jin; Jang, Youngjoon; Chung, Joon Son
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 21.09.2023
Summary: The goal of this work is Active Speaker Detection (ASD), the task of determining whether a person is speaking in a series of video frames. Previous works have approached the task by exploring network architectures, while learning effective representations has received less attention. In this work, we propose TalkNCE, a novel talk-aware contrastive loss. The loss is applied only to the parts of the full segments where the person on screen is actually speaking. This encourages the model to learn effective representations through the natural correspondence between speech and facial movements. Our loss can be jointly optimized with the existing objectives for training ASD models, without the need for additional supervision or training data. The experiments demonstrate that our loss can be easily integrated into existing ASD frameworks, improving their performance. Our method achieves state-of-the-art performance on the AVA-ActiveSpeaker and ASW datasets.
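The summary describes an InfoNCE-style contrastive loss restricted to frames where the person is actually speaking, with matching audio and visual frames as positives. As a rough illustration only (not the authors' implementation; the embedding shapes, temperature value, and masking scheme here are assumptions), such a talk-aware loss could be sketched as:

```python
import numpy as np

def talknce_loss(audio_emb, visual_emb, speaking_mask, temperature=0.07):
    """Hypothetical sketch of a talk-aware contrastive (InfoNCE-style) loss.

    audio_emb, visual_emb : (T, D) frame-level embeddings from the two streams
    speaking_mask         : (T,) boolean, True where the person is speaking
    Only speaking frames contribute. For each audio frame, the positive is the
    visual frame at the same time step; other speaking frames act as negatives.
    """
    a = audio_emb[speaking_mask]
    v = visual_emb[speaking_mask]
    if len(a) == 0:
        return 0.0  # no speaking frames, loss contributes nothing

    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)

    logits = a @ v.T / temperature                  # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # positives lie on the diagonal (matching time steps)
    return float(-np.mean(np.diag(log_probs)))

# Time-aligned embeddings should score a lower loss than misaligned ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 8))
mask = np.ones(10, dtype=bool)
aligned = talknce_loss(emb, emb, mask)
shuffled = talknce_loss(emb, emb[::-1].copy(), mask)
```

In this sketch the mask realizes the "only where the person is actually speaking" restriction; in practice such a term would be added to the existing ASD classification objective rather than trained alone.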
ISSN: 2331-8422