Unsupervised Learning of the Total Variation Flow

Bibliographic Details
Main Authors: Grossmann, Tamara G.; Dittmer, Sören; Korolev, Yury; Schönlieb, Carola-Bibiane
Format: Journal Article
Language: English
Published: 09.06.2022
Summary: The total variation (TV) flow generates a scale-space representation of an image based on the TV functional. This gradient flow exhibits desirable properties for images, such as preserving sharp edges, and enables spectral, scale, and texture analysis. Solving the TV flow is challenging; one reason is the non-uniqueness of the subgradients. The standard numerical approach to the TV flow requires solving multiple non-smooth optimisation problems. Even with state-of-the-art convex optimisation techniques, this is often prohibitively expensive and strongly motivates the use of alternative, faster approaches. Inspired by and extending the framework of physics-informed neural networks (PINNs), we propose the TVflowNET, an unsupervised neural network approach to approximating the solution of the TV flow given an initial image and a time instance. The TVflowNET requires no ground-truth data but instead uses the PDE itself to optimise the network parameters. We circumvent the challenges related to the non-uniqueness of the subgradients by additionally learning the related diffusivity term. Our approach significantly speeds up computation, and we show that the TVflowNET approximates the TV flow solution with high fidelity for different image sizes and image types. Additionally, we give a full comparison of different network architecture designs as well as training regimes to underscore the effectiveness of our approach.
DOI: 10.48550/arxiv.2206.04406
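
To make the idea in the summary concrete, below is a minimal PINN-style sketch of how one might train a network to satisfy the TV flow PDE u_t = div(∇u/|∇u|) without ground-truth data. It is not the authors' TVflowNET: the architecture, discretisation, finite-difference time derivative, loss terms, and names (TVFlowNet, forward_grad, tv_flow_loss) are all illustrative assumptions. The one structural point it mirrors from the abstract is that the network jointly outputs a diffusivity field p, softly constrained to satisfy |p| ≤ 1 and p·|∇u| = ∇u, so the loss never has to evaluate ∇u/|∇u| where the gradient vanishes and the subgradient is non-unique.

```python
# Illustrative PINN-style sketch for the TV flow (not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TVFlowNet(nn.Module):
    """Maps (initial image f, time t) to (u, p): u approximates the TV flow
    solution at time t, p the jointly learned diffusivity/subgradient field."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),  # 1 channel for u, 2 for p
        )

    def forward(self, f, t):
        t_map = t.view(-1, 1, 1, 1).expand_as(f)  # time as an extra input channel
        out = self.net(torch.cat([f, t_map], dim=1))
        u = out[:, :1]
        p = torch.tanh(out[:, 1:])  # keeps each component of p in (-1, 1), a soft stand-in for |p| <= 1
        return u, p

def forward_grad(u):
    """Forward differences, zero-padded at the boundary (one common discretisation)."""
    ux = F.pad(u[..., :, 1:] - u[..., :, :-1], (0, 1, 0, 0))
    uy = F.pad(u[..., 1:, :] - u[..., :-1, :], (0, 0, 0, 1))
    return ux, uy

def divergence(px, py):
    """Backward differences, the discrete counterpart of forward_grad."""
    dx = px - F.pad(px, (1, 0, 0, 0))[..., :, :-1]
    dy = py - F.pad(py, (0, 0, 1, 0))[..., :-1, :]
    return dx + dy

def tv_flow_loss(model, f, t, dt=1e-3):
    u, p = model(f, t)
    u_dt, _ = model(f, t + dt)
    u_t = (u_dt - u) / dt                 # finite-difference time derivative
    ux, uy = forward_grad(u)
    gnorm = torch.sqrt(ux**2 + uy**2 + 1e-8)
    px, py = p[:, :1], p[:, 1:]
    pde = ((u_t - divergence(px, py))**2).mean()                 # u_t = div p
    couple = ((px * gnorm - ux)**2 + (py * gnorm - uy)**2).mean()  # p * |grad u| = grad u
    u0, _ = model(f, torch.zeros_like(t))
    init = ((u0 - f)**2).mean()                                  # u(., 0) = f
    return pde + couple + init

# Unsupervised training: random images and time instances, no TV flow ground truth.
model = TVFlowNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
f = torch.rand(4, 1, 64, 64)   # toy batch of initial images
for step in range(1000):
    t = torch.rand(4)          # random time instances in [0, 1]
    loss = tv_flow_loss(model, f, t)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The equal loss weights, the toy CNN, and the forward-difference time derivative are placeholder choices; the paper itself compares several architectures and training regimes, so consult it for the actual design.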