Intelligent Texture Reconstruction of Missing Data in Video Sequences Using Neural Networks

Bibliographic Details
Published in: Advanced Techniques for Knowledge Engineering and Innovative Applications, pp. 163-176
Main Authors: Favorskaya, Margarita; Damov, Mikhail; Zotin, Alexander
Format: Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2013
Series: Communications in Computer and Information Science
ISBN: 9783642420160; 3642420168
ISSN: 1865-0929; 1865-0937
DOI: 10.1007/978-3-642-42017-7_12

Summary: The paper presents an intelligent method for texture reconstruction after the removal of unwanted objects or artifacts from video sequences. Missing data refers to regions hidden under subtitles, logotypes, damage to the information medium, or small objects. A novel implementation of a separated neural network was used to obtain spatial texture estimates in the missing-data region. Usually several types of texture lie under the removed object, so a fast wave algorithm was developed to interpolate the boundaries between the different texture types inside the missing-data region. Three strategies of the wave algorithm for contour optimization were suggested. A fully connected one-level neural network was applied to choose the texture inpainting method (blurring, texture tiling, or texture synthesis). The proposed technique was tested on the visual reconstruction of missing text regions (subtitles, logotypes) and of missing objects covering less than 8-12% of the frame in animation and movies. In the first case, a simplified decision without the boundary approximation stage may be applied; in the second, the reconstruction quality is largely determined by background complexity and motion in the scene.
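The abstract does not give the wave algorithm's details, but the general idea of wave-based boundary interpolation can be illustrated with a hypothetical sketch: a breadth-first wavefront propagates texture-region labels from the known boundary pixels into the hole, so each missing pixel inherits the label of the nearest known texture region and the boundary between regions is extended through the missing area. The function name, grid representation, and 4-neighbour connectivity below are assumptions for illustration, not the authors' implementation.

```python
from collections import deque

def wave_label_fill(labels, missing):
    """Fill a missing region by propagating texture labels as a BFS wave.

    labels  -- 2D list; a region label for known pixels, None where missing
    missing -- set of (row, col) coordinates of the missing-data region
    Each missing pixel receives the label of the nearest known pixel,
    interpolating the boundary between texture regions inside the hole.
    """
    h, w = len(labels), len(labels[0])
    neighbours = ((-1, 0), (1, 0), (0, -1), (0, 1))
    queue = deque()
    # Seed the wave with known pixels adjacent to the missing region.
    for (r, c) in sorted(missing):
        for dr, dc in neighbours:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in missing:
                queue.append((nr, nc))
    filled = set()
    while queue:
        r, c = queue.popleft()
        for dr, dc in neighbours:
            nr, nc = r + dr, c + dc
            if (nr, nc) in missing and (nr, nc) not in filled:
                labels[nr][nc] = labels[r][c]  # inherit the wave's label
                filled.add((nr, nc))
                queue.append((nr, nc))
    return labels
```

For example, a 1x5 strip `['A', None, None, None, 'B']` with the three middle pixels missing is filled so that the left pixels take label `A`, the right pixel takes label `B`, and the contact boundary between the two regions lands inside the former hole.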