Dual Contrastive Learning Network for Graph Clustering
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 8, pp. 10846-10856
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: United States: IEEE, 01.08.2024
Summary: Graph representation is an important part of graph clustering. Recently, contrastive learning, which maximizes the mutual information between augmented graph views that share the same semantics, has become a popular and powerful paradigm for graph representation. However, during patch contrasting, existing methods tend to map all features to similar variables, i.e., representation collapse, leading to less discriminative graph representations. To tackle this problem, we propose a novel self-supervised learning method called the dual contrastive learning network (DCLN), which reduces the redundant information of learned latent variables in a dual manner. Specifically, we propose a dual curriculum contrastive module (DCCM), which approximates the node similarity matrix to a high-order adjacency matrix and the feature similarity matrix to an identity matrix. In this way, the informative signal carried by high-order neighbors is collected and preserved, while irrelevant, redundant features among representations are eliminated, improving the discriminative capacity of the graph representation. Moreover, to alleviate sample imbalance during the contrastive process, we design a curriculum learning strategy that enables the network to learn reliable information from two levels simultaneously. Extensive experiments on six benchmark datasets demonstrate the effectiveness and superiority of the proposed algorithm compared with state-of-the-art methods.
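The dual objective described in the summary can be illustrated with a minimal NumPy sketch: pull the node-similarity matrix of the learned embeddings toward a high-order (normalized, powered) adjacency matrix, and pull the feature-similarity matrix toward the identity to decorrelate dimensions. This is an assumption-laden reconstruction from the abstract alone, not the authors' exact DCCM formulation; the function name, normalization choices, and `order` parameter are illustrative.

```python
import numpy as np

def dual_contrastive_loss(H, A, order=2):
    """Sketch of the dual objective from the abstract (illustrative, not the
    paper's exact loss): align node similarity with a high-order adjacency
    matrix and feature similarity with the identity matrix."""
    n, d = H.shape
    # Row-normalize embeddings so inner products become cosine similarities.
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)
    node_sim = Hn @ Hn.T                          # (n, n) node-level similarity
    # High-order adjacency: add self-loops, row-normalize, raise to `order`.
    A_hat = A + np.eye(n)
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    A_high = np.linalg.matrix_power(A_norm, order)
    # Column-normalize embeddings for the feature-level similarity.
    Hc = H / (np.linalg.norm(H, axis=0, keepdims=True) + 1e-12)
    feat_sim = Hc.T @ Hc                          # (d, d) feature-level similarity
    # Frobenius-norm alignment terms, averaged over matrix entries.
    node_loss = np.linalg.norm(node_sim - A_high) ** 2 / n ** 2
    feat_loss = np.linalg.norm(feat_sim - np.eye(d)) ** 2 / d ** 2
    return node_loss + feat_loss
```

Driving the feature-similarity matrix toward the identity is what removes redundancy among embedding dimensions, while the high-order adjacency target injects neighborhood information beyond one-hop links.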
ISSN: 2162-237X (print); 2162-2388 (electronic)
DOI: 10.1109/TNNLS.2023.3244397