Topological Regularization for Representation Learning via Persistent Homology

Bibliographic Details
Published in: Mathematics (Basel), Vol. 11, No. 4, p. 1008
Main Authors: Chen, Muyi; Wang, Daling; Feng, Shi; Zhang, Yifei
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.02.2023

Summary: Generalization is challenging in small-sample-size regimes with over-parameterized deep neural networks, and a better representation is generally beneficial for generalization. In this paper, we present a novel method for controlling the internal representation of deep neural networks from a topological perspective. Leveraging the power of topological data analysis (TDA), we study the push-forward probability measure induced by the feature extractor, and we formulate, for the first time, a notion of "separation" that characterizes a property of this measure in terms of persistent homology. Moreover, we analyze this property theoretically and prove that enforcing it leads to better generalization. To impose the property, we propose a novel weight function for extracting topological information and introduce a new regularizer comprising three terms that guides representation learning in a topology-aware manner. Experimental results on a point cloud optimization task show that our method is effective, and results on an image classification task show that it outperforms previous methods by a significant margin.
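The abstract does not spell out the regularizer's exact form, but the core ingredient of any such topology-aware loss is a persistence computation on the feature point cloud. As a minimal illustrative sketch (not the authors' method): for a Vietoris-Rips filtration, the degree-0 persistent homology of a Euclidean point cloud has death times equal to the edge weights of its minimum spanning tree, so a toy "separation" penalty can reward large death times to push clusters apart. The function names `h0_persistence` and `separation_penalty` below are hypothetical, chosen only for this example.

```python
import numpy as np

def h0_persistence(points):
    """Death times of H0 (connected-component) classes in a Vietoris-Rips
    filtration of a Euclidean point cloud: these are exactly the edge
    weights of a minimum spanning tree, computed here with Prim's algorithm."""
    X = np.asarray(points, dtype=float)
    n = len(X)
    # Pairwise Euclidean distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()  # cheapest known edge from the tree to each vertex
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf          # never re-add a tree vertex
        j = int(np.argmin(best))        # closest vertex outside the tree
        deaths.append(float(best[j]))   # the edge weight is an H0 death time
        in_tree[j] = True
        best = np.minimum(best, D[j])   # relax edges through the new vertex
    return sorted(deaths, reverse=True)

def separation_penalty(points, k=1):
    """Toy topological regularizer (an assumption, not the paper's loss):
    the negative sum of the k largest H0 death times, so that minimizing
    it encourages well-separated clusters in feature space."""
    return -sum(h0_persistence(points)[:k])
```

For two unit-width clusters placed 10 apart, `h0_persistence` returns one large death time (the inter-cluster gap) and small intra-cluster ones, and `separation_penalty` decreases as the clusters move apart; in practice such a term would be made differentiable with respect to the point coordinates and added to the task loss.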
ISSN: 2227-7390
DOI: 10.3390/math11041008