Online Video Deblurring via Dynamic Temporal Blending Network

Bibliographic Details
Published in Proceedings / IEEE International Conference on Computer Vision, pp. 4058 - 4067
Main Authors Tae Hyun Kim, Kyoung Mu Lee, Schölkopf, Bernhard, Hirsch, Michael
Format Conference Proceeding
Language English
Published IEEE 01.10.2017
ISSN 2380-7504
DOI 10.1109/ICCV.2017.435

Abstract State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and/or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, rendering them computationally demanding and time-consuming, which limits their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance. In particular, we introduce a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and/or fast-moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending, which compares and adaptively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation.
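The dynamic temporal blending layer described in the abstract compares features from consecutive time steps and adaptively shares them at test time. The record does not reproduce the paper's exact formulation; the following pure-Python sketch only illustrates the general idea, where the function name and the exponential similarity weight are assumptions for illustration, not the authors' method:

```python
import math

def dynamic_temporal_blend(feat_prev, feat_curr):
    """Illustrative sketch (not the paper's exact layer): compute a
    per-element weight from the similarity of consecutive feature
    vectors and blend them, so temporally consistent features are
    carried over from the previous time step."""
    out = []
    for p, c in zip(feat_prev, feat_curr):
        # Assumed similarity measure: identical features give weight 1
        # (reuse the past), very different features give weight near 0
        # (trust the current frame instead).
        w = math.exp(-abs(p - c))
        out.append(w * p + (1.0 - w) * c)
    return out

# Identical entries pass through unchanged; differing entries are
# pulled only partly toward the previous step's features.
f_prev = [1.0, 0.0, 2.0]
f_curr = [1.0, 1.0, 0.0]
blended = dynamic_temporal_blend(f_prev, f_curr)
```

Computing the blend weight from the features themselves (rather than fixing it) is what makes the blending "dynamic": the trade-off between temporal consistency and responsiveness to new content is decided per element at test time.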
Author Tae Hyun Kim (tkim@tuebingen.mpg.de)
Kyoung Mu Lee (kyoungmu@snu.ac.kr)
Schölkopf, Bernhard (bernhard.schoelkopf@tuebingen.mpg.de)
Hirsch, Michael (michael.hirsch@tuebingen.mpg.de)
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/ICCV.2017.435
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan (POP) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP) 1998-present
Discipline Applied Sciences
EISBN 9781538610329
1538610329
EISSN 2380-7504
EndPage 4067
ExternalDocumentID 8237697
Genre orig-research
IsPeerReviewed false
IsScholarly true
Language English
PageCount 10
PublicationCentury 2000
PublicationDate 2017-Oct.
PublicationDateYYYYMMDD 2017-10-01
PublicationDecade 2010
PublicationTitle Proceedings / IEEE International Conference on Computer Vision
PublicationTitleAbbrev ICCV
PublicationYear 2017
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 4058
SubjectTerms Cameras
Decoding
Dynamics
Estimation
Kernel
Network architecture
Streaming media
Title Online Video Deblurring via Dynamic Temporal Blending Network
URI https://ieeexplore.ieee.org/document/8237697