Context-Aware Synthesis for Video Frame Interpolation
Published in | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1701-1710 |
---|---|
Main Authors | Niklaus, Simon; Liu, Feng |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2018 |
Subjects | Adaptive optics; Computer vision; Estimation; Interpolation; Neural networks; Optical computing; Optical imaging |
ISSN | 1063-6919 |
DOI | 10.1109/CVPR.2018.00183 |
Abstract | Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames still remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches. |
---|---|
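The abstract describes a three-stage pipeline: extract per-pixel context features with a pre-trained network, pre-warp both input frames and their context maps using bidirectional optical flow, and feed the warped frames and context maps to a fully convolutional synthesis network. The sketch below is a minimal, hypothetical PyTorch rendering of that data flow, not the authors' implementation: it assumes an external optical flow network supplies the bidirectional flow, uses the first convolution of a torchvision ResNet-18 as a stand-in context extractor, a plain bilinear backward warp in place of the paper's pre-warping step, and a small convolutional head in place of the paper's larger synthesis network. The names `ContextExtractor`, `backward_warp`, `SynthesisNet`, and `interpolate_frame` are made up for illustration.

```python
# Illustrative sketch of context-aware frame synthesis (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class ContextExtractor(nn.Module):
    """Per-pixel context features from a pre-trained classification network
    (stand-in: first conv layer of ResNet-18, torchvision >= 0.13 weights API)."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1")
        self.conv1 = resnet.conv1  # 64 channels, stride 2

    def forward(self, frame):
        feat = self.conv1(frame)
        # upsample back to the input resolution so features align with pixels
        return F.interpolate(feat, size=frame.shape[2:], mode="bilinear",
                             align_corners=False)


def backward_warp(tensor, flow):
    """Bilinear backward warp of `tensor` (B, C, H, W) by `flow` (B, 2, H, W)."""
    b, _, h, w = tensor.shape
    gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((gx, gy), dim=0).float().to(tensor.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow                              # (B, 2, H, W)
    # normalize sampling coordinates to [-1, 1] for grid_sample
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=3)           # (B, H, W, 2)
    return F.grid_sample(tensor, grid_norm, mode="bilinear",
                         padding_mode="border", align_corners=True)


class SynthesisNet(nn.Module):
    """Small fully convolutional head; the paper uses a larger GridNet-style network."""
    def __init__(self, in_channels=2 * (3 + 64)):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.layers(x)


def interpolate_frame(frame0, frame1, flow_t0, flow_t1, extractor, synthesis):
    """Warp both frames and their context maps toward the target time, then synthesize."""
    ctx0, ctx1 = extractor(frame0), extractor(frame1)
    warped0 = backward_warp(torch.cat([frame0, ctx0], dim=1), flow_t0)
    warped1 = backward_warp(torch.cat([frame1, ctx1], dim=1), flow_t1)
    return synthesis(torch.cat([warped0, warped1], dim=1))
```

In this reading, `flow_t0` and `flow_t1` would be scaled versions of the estimated bidirectional flow pointing from the target time to each input frame, and the synthesis network would be trained end to end on triplets of consecutive frames, as the abstract states.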
Author | Niklaus, Simon; Liu, Feng |
CODEN | IEEPAD |
ContentType | Conference Proceeding |
DOI | 10.1109/CVPR.2018.00183 |
Discipline | Applied Sciences |
EISBN | 9781538664209; 1538664208 |
EISSN | 1063-6919 |
EndPage | 1710 |
ExternalDocumentID | 8578281 |
Genre | orig-research |
IsPeerReviewed | false |
IsScholarly | true |
Language | English |
PageCount | 10 |
PublicationDate | 2018-06 |
PublicationTitle | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
PublicationTitleAbbrev | CVPR |
PublicationYear | 2018 |
Publisher | IEEE |
StartPage | 1701 |
SubjectTerms | Adaptive optics; Computer vision; Estimation; Interpolation; Neural networks; Optical computing; Optical imaging |
Title | Context-Aware Synthesis for Video Frame Interpolation |
URI | https://ieeexplore.ieee.org/document/8578281 |