Audio-Visual Fusion Layers for Event Type Aware Video Recognition
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 11.02.2022 |
Subjects | |
Online Access | Get full text |
Summary: | The human brain is continuously inundated with multisensory information and its complex interactions coming from the outside world at any given moment. Such information is automatically analyzed in the brain by binding or segregating it. While this task may seem effortless for human brains, it is extremely challenging to build a machine that performs similar tasks, since complex interactions cannot be handled by a single type of integration and instead require more sophisticated approaches. In this paper, we propose a new model that addresses the multisensory integration problem with individual event-specific layers in a multi-task learning scheme. Unlike previous works that use a single type of fusion, we design event-specific layers to handle different audio-visual relationship tasks, enabling different ways of audio-visual formation. Experimental results show that our event-specific layers can discover unique properties of the audio-visual relationships in videos. Moreover, although our network is formulated with single labels, it can output additional true multi-labels to represent the given videos. We also demonstrate that our proposed framework exposes the modality bias of video data in a category-wise and dataset-wise manner on popular benchmark datasets. |
---|---|
DOI: | 10.48550/arxiv.2202.05961 |
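
The summary above describes event-specific audio-visual fusion layers trained in a multi-task scheme, but the record does not specify the exact layer design. Below is a minimal, hypothetical sketch of the general idea in PyTorch: each branch applies a different fusion rule (additive, multiplicative, concatenative) with its own classification head; the branch names, fusion choices, and dimensions are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of event-specific audio-visual fusion branches.
# Assumed design: three fusion rules (sum, product, concatenation), each
# with its own classification head, trained as separate tasks.
import torch
import torch.nn as nn


class EventSpecificFusion(nn.Module):
    """Fuses audio and visual features differently in each event-type branch."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.sum_head = nn.Linear(dim, num_classes)      # additive fusion branch
        self.prod_head = nn.Linear(dim, num_classes)     # multiplicative fusion branch
        self.cat_head = nn.Linear(2 * dim, num_classes)  # concatenation fusion branch

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio, visual: (batch, dim) clip-level embeddings.
        logits_sum = self.sum_head(audio + visual)
        logits_prod = self.prod_head(audio * visual)
        logits_cat = self.cat_head(torch.cat([audio, visual], dim=-1))
        # Each branch is supervised as its own task; reading out all branch
        # predictions at inference yields additional labels per video.
        return logits_sum, logits_prod, logits_cat


if __name__ == "__main__":
    model = EventSpecificFusion(dim=128, num_classes=28)
    a, v = torch.randn(4, 128), torch.randn(4, 128)
    outs = model(a, v)
    print([o.shape for o in outs])  # three (4, 28) logit tensors
```

In a multi-task setup of this kind, one would typically sum a per-branch loss (e.g., cross-entropy against the single video label) so that each fusion rule learns which audio-visual relationship it captures best; the specific losses and weighting used in the paper are not given in this record.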