Time-varying Normalizing Flow for Generative Modeling of Dynamical Signals

Bibliographic Details
Published in: 2022 30th European Signal Processing Conference (EUSIPCO), pp. 1492 - 1496
Main Authors: Ghosh, Anubhab; Fontcuberta, Aleix Espuna; Abdalmoaty, Mohamed R.-H.; Chatterjee, Saikat
Format: Conference Proceeding
Language: English
Published: EUSIPCO, 29.08.2022

Summary: We develop a time-varying normalizing flow (TVNF) for explicit generative modeling of dynamical signals. Being explicit, the model can both generate samples of dynamical signals and compute the likelihood of a given dynamical signal sample. In the proposed model, the signal flow through the layers of the normalizing flow is a function of time, realized using an encoded representation produced by a recurrent neural network (RNN). Given a set of dynamical signals, the parameters of the TVNF are learned by a maximum-likelihood approach using gradient descent (backpropagation). Use of the proposed model is illustrated on a toy application scenario: a maximum-likelihood-based speech-phone classification task.
ISSN: 2076-1465
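
The abstract describes an architecture in which an RNN encoding makes the flow parameters time-dependent, and training maximizes the log-likelihood by gradient descent. The following PyTorch sketch illustrates one plausible reading of that idea; it is not the authors' code, and the GRU conditioning on the signal's past, the single affine layer, and all dimensions are assumptions made for illustration.

import torch
import torch.nn as nn

class TimeVaryingAffineFlow(nn.Module):
    """One affine flow layer whose scale and shift vary over time via an RNN encoding (illustrative sketch)."""
    def __init__(self, signal_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(signal_dim, hidden_dim, batch_first=True)
        # Map the RNN encoding at each time step to a per-dimension log-scale and shift.
        self.to_params = nn.Linear(hidden_dim, 2 * signal_dim)

    def log_likelihood(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, T, signal_dim). Returns the log-likelihood of each sequence."""
        B, T, D = x.shape
        # Encode the past of the signal so the flow at time t depends on t (causal shift).
        past = torch.cat([torch.zeros(B, 1, D), x[:, :-1]], dim=1)
        enc, _ = self.rnn(past)
        log_s, b = self.to_params(enc).chunk(2, dim=-1)
        z = x * torch.exp(log_s) + b                     # invertible affine map per time step
        base = torch.distributions.Normal(0.0, 1.0)
        log_pz = base.log_prob(z).sum(dim=(1, 2))        # log-density under the base measure
        log_det = log_s.sum(dim=(1, 2))                  # log |det Jacobian| of the affine map
        return log_pz + log_det

# Maximum-likelihood training with gradient descent, as stated in the abstract.
flow = TimeVaryingAffineFlow(signal_dim=13)              # 13 features per frame (assumed, e.g. MFCCs)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
x = torch.randn(8, 50, 13)                               # dummy batch: 8 signals of 50 frames
loss = -flow.log_likelihood(x).mean()                    # negative log-likelihood
opt.zero_grad()
loss.backward()
opt.step()

For the speech-phone classification example mentioned at the end of the abstract, one such model per phone class could be trained and a test signal assigned to the class giving the highest likelihood; this is the standard maximum-likelihood classification rule and not necessarily the authors' exact setup.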