Learning Multi-Modal Dictionaries

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 16, No. 9, pp. 2272-2283
Main Authors: Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi
Format: Journal Article
Language: English
Published: Institute of Electrical and Electronics Engineers, 2007

Summary: Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when the signals are considered independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm is also proposed for iteratively learning multimodal generating functions that can be shifted to all positions in the signal. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences, where it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows the sound source to be effectively localized in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
ISSN: 1057-7149
DOI: 10.1109/TIP.2007.901813
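
The summary notes that each dictionary update can be carried out by solving a generalized eigenvector problem. Below is a minimal, self-contained Python sketch of that idea for a single modality, in the spirit of MoTIF-style shift-invariant learning; the function name update_atom, the assumption that training patches are already aligned to the atom's best shift, and the small ridge used to keep the constraint matrix positive definite are illustrative choices for this sketch, not the authors' implementation.

import numpy as np
from scipy.linalg import eigh

def update_atom(patches, prev_atoms, eps=1e-6):
    # patches: (num_patches, atom_len) array of training patches, assumed
    # already aligned to the atom's best shift (an assumption of this sketch).
    # prev_atoms: previously learned unit-norm atoms of length atom_len.
    atom_len = patches.shape[1]
    # A accumulates second-order statistics of the aligned patches.
    A = patches.T @ patches
    # B penalizes correlation with previously learned atoms; the small ridge
    # keeps it positive definite so the generalized problem is well posed.
    B = eps * np.eye(atom_len)
    for g in prev_atoms:
        B += np.outer(g, g)
    # Solve A g = lambda B g; the eigenvector of the largest eigenvalue
    # is the new generating function.
    _, V = eigh(A, B)
    g_new = V[:, -1]
    return g_new / np.linalg.norm(g_new)

# Toy usage: 200 random patches of length 64 and one previously learned atom.
rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 64))
prev = [np.eye(64)[:, 0]]
g = update_atom(patches, prev)
print(g.shape, round(float(np.linalg.norm(g)), 3))  # (64,) 1.0

In the multimodal setting of the paper, the audio and video parts of each generating function are learned jointly; the sketch above shows only the single-modality form of the eigenvector step.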