Controllable video characters with natural motions extracted from real-world videos


Bibliographic Details
Main Authors: Gafni, Oran; Wolf, Lior; Taigman, Yaniv Nechemia
Format: Patent
Language: English
Published: 15.08.2023

Summary: A video generation system is described that extracts one or more characters or other objects from a video, re-animates them, and generates a new video featuring the extracted characters. The system enables the extracted character(s) to be positioned and controlled within a new background scene different from that of the source video. In one example, the video generation system comprises a pose prediction neural network having a pose model trained with (i) a set of character pose training images extracted from an input video of the character and (ii) a simulated motion control signal generated from the input video. In operation, the pose prediction neural network generates, in response to a motion control input from a user, a sequence of images representing poses of the character. A frame generation neural network then generates output video frames that render the character within a scene.
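The two-stage pipeline in the abstract can be sketched in code. This is a minimal illustrative mock-up, not the patented implementation: the class names, pose representation, and rendering logic are assumptions chosen only to show how a pose prediction stage (control signal in, pose sequence out) feeds a frame generation stage (pose in, rendered frame out).

```python
from dataclasses import dataclass
from typing import List

# Illustrative types; a real system would use keypoint tensors and images.
Pose = List[float]          # e.g. flattened 2D keypoint coordinates
Frame = List[List[float]]   # e.g. an H x W grayscale image


@dataclass
class PosePredictionNetwork:
    """Stand-in for the model trained on (pose images, simulated control signal) pairs."""
    num_keypoints: int = 4

    def predict(self, control_signal: List[float]) -> List[Pose]:
        # One pose per control step; the trained model would generate these
        # autoregressively from the user's motion control input.
        return [[step] * (2 * self.num_keypoints) for step in control_signal]


@dataclass
class FrameGenerationNetwork:
    """Stand-in for the renderer that places the posed character in a scene."""
    height: int = 8
    width: int = 8

    def render(self, pose: Pose, background: float = 0.0) -> Frame:
        # The real network would composite the character onto the new
        # background scene; here we just fill the frame from the pose mean.
        fill = sum(pose) / len(pose)
        return [[background + fill] * self.width for _ in range(self.height)]


def generate_video(control_signal: List[float]) -> List[Frame]:
    """Pose prediction followed by frame generation, one frame per control step."""
    poses = PosePredictionNetwork().predict(control_signal)
    renderer = FrameGenerationNetwork()
    return [renderer.render(pose) for pose in poses]


# Three control steps yield three rendered frames.
frames = generate_video([0.0, 0.5, 1.0])
```

The key design point mirrored from the abstract is the separation of concerns: motion control only influences the pose sequence, while scene appearance is handled entirely by the frame generator, which is what allows the character to be dropped into a background different from the source video's.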
Bibliography: Application Number: US202117322160