Generating Realistic Videos From Keyframes With Concatenated GANs
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 8, pp. 2337–2348
Main Authors:
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.08.2019
Summary: Given two video frames <inline-formula> <tex-math notation="LaTeX">X_{0} </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">X_{n+1} </tex-math></inline-formula>, we aim to generate a series of intermediate frames <inline-formula> <tex-math notation="LaTeX">Y_{1}, Y_{2}, \ldots, Y_{n} </tex-math></inline-formula>, such that the resulting video, consisting of frames <inline-formula> <tex-math notation="LaTeX">X_{0}, Y_{1}, \ldots, Y_{n}, X_{n+1} </tex-math></inline-formula>, appears realistic to a human viewer. Such video generation has numerous important applications, including video compression, movie production, slow-motion filming, video surveillance, and forensic analysis. Yet it is highly challenging because of the vast search space of possible frames. Previous methods, mostly based on video prediction and/or video interpolation, tend to produce poor-quality videos with severe motion blur. This paper proposes a novel end-to-end approach to video generation using generative adversarial networks (GANs). In particular, our design concatenates two GANs: one captures motion and the other generates frame details. The loss function is also carefully engineered, combining an adversarial loss, a gradient difference loss (for motion learning), and a normalized product correlation loss (for frame details). Experiments on three video datasets, namely Google Robotic Push, KTH human actions, and UCF101, demonstrate that the proposed solution generates high-quality, realistic, and sharp videos, whereas all previous solutions output noisy and blurry results.
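The abstract names three loss terms for the generator: an adversarial term, a gradient difference term for motion, and a normalized product correlation term for frame detail. A minimal NumPy sketch of how such terms might be combined follows; the exact formulations, the function names, and the weights `lam_gdl` and `lam_npc` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gradient_difference_loss(y, t, alpha=1.0):
    # Penalize mismatch between the spatial gradients of a generated
    # frame y and a target frame t, encouraging sharp edges.
    dy_y, dx_y = np.abs(np.diff(y, axis=0)), np.abs(np.diff(y, axis=1))
    dy_t, dx_t = np.abs(np.diff(t, axis=0)), np.abs(np.diff(t, axis=1))
    return (np.mean(np.abs(dy_y - dy_t) ** alpha)
            + np.mean(np.abs(dx_y - dx_t) ** alpha))

def normalized_product_correlation_loss(y, t, eps=1e-8):
    # 1 minus the normalized cross-correlation of the flattened frames;
    # approaches zero as the frames become proportional to each other.
    yf, tf = y.ravel(), t.ravel()
    corr = np.dot(yf, tf) / (np.linalg.norm(yf) * np.linalg.norm(tf) + eps)
    return 1.0 - corr

def generator_loss(y, t, adv_term, lam_gdl=1.0, lam_npc=1.0):
    # Combined objective: adversarial term (computed elsewhere by the
    # discriminator) plus weighted motion and detail terms.
    return (adv_term
            + lam_gdl * gradient_difference_loss(y, t)
            + lam_npc * normalized_product_correlation_loss(y, t))
```

For a perfectly reconstructed frame (`y == t`), both auxiliary terms vanish and only the adversarial term remains, so the relative weights control how strongly sharpness and detail fidelity are traded off against fooling the discriminator.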
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2018.2867934