Contour Counts: Restricting Deformation for Accurate Animation Interpolation

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 31, pp. 1479-1483
Main Authors: Li, Lei; Xu, Xin; Jiang, Kui; Liu, Wei; Wang, Zheng
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
Summary: Animated videos with low frame rates commonly degrade the visual experience with choppy motion. Video frame interpolation, which can increase frame rates, has recently been developing rapidly. However, the sparse texture information and complex motion scenes of animated videos cause objects in frames generated by existing video frame interpolation methods to exhibit significant deformation, distorting the content of the generated frames. To address this issue, the Restricting Deformation by Contour Network (RDC-Net) is proposed to repair and fill the content by optimizing object contours and leveraging context features for high-quality animation interpolation. Specifically, RDC-Net learns optical flow maps that capture the spatial shifts of the moving subject's contours across time intervals. Furthermore, contour information is used to refine the object structure during optical flow estimation and to moderate the scale of object deformation in the generated frame. In addition, context information is exploited to characterize the motion between adjacent frames via bidirectional optical flow learning, enabling distorted content in the generated frames to be filled in through feature filling. Experiments on commonly used benchmarks demonstrate state-of-the-art performance.
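To make the bidirectional-flow idea in the summary concrete, the following is a minimal, hypothetical sketch of flow-based frame interpolation: both input frames are backward-warped toward the target time with their estimated flows and blended by an occlusion mask. It is not the authors' RDC-Net (which additionally uses contour guidance and feature filling); the function names, tensor layouts, and the blending scheme are illustrative assumptions only.

import torch
import torch.nn.functional as F

def backward_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `frame` (N,C,H,W) by per-pixel displacements `flow` (N,2,H,W)."""
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def interpolate_midframe(frame0, frame1, flow_t0, flow_t1, mask):
    """Blend the two warped frames with an occlusion mask in [0, 1]."""
    warped0 = backward_warp(frame0, flow_t0)  # frame0 pulled to time t
    warped1 = backward_warp(frame1, flow_t1)  # frame1 pulled to time t
    return mask * warped0 + (1.0 - mask) * warped1

In the paper's setting, the flows and mask would come from learned networks (with contour cues restricting deformation); here they are simply assumed as inputs to keep the sketch self-contained.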
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2024.3404138