Multiscale Subgraph Adversarial Contrastive Learning

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 8, pp. 15001-15014
Main Authors: Liu, Yanbei; Zhao, Yu; Xiao, Zhitao; Geng, Lei; Wang, Xiao; Pang, Yanwei; Lin, Jerry Chun-Wei
Format: Journal Article
Language: English
Published: United States: IEEE, 01.08.2025

Summary: Graph contrastive learning (GCL), a typical self-supervised learning paradigm, achieves promising performance without labels and has gradually attracted much attention. Graph-level methods aim to learn a representation of each graph by contrasting two augmented graphs. Previous studies usually apply contrastive learning simply to keep the embeddings of augmented views from the same anchor graph (positive pairs) close to each other, and to separate the embeddings of augmented views from different anchor graphs (negative pairs). However, the structure of a graph is often complex and multiscale, which raises a fundamental question: after graph augmentation, does this assumption still hold in practice? Through experimental analysis, we find that the semantic information of two augmented graphs from the same anchor graph may not be consistent, and that whether two augmented graphs form a positive or negative sample pair is highly correlated with the multiscale structure of the graph. Based on this observation, we propose a multiscale subgraph contrastive learning method, named MSSGCL, which can characterize fine-grained semantic information. Specifically, we generate global and local views at different scales based on subgraph sampling and construct multiple contrastive relationships according to their semantic associations to provide richer self-supervised information. Furthermore, to improve the generalization performance of the model, we propose an extended model called MSSGCL++. It adopts an asymmetric structure to avoid pushing semantically similar negative samples far apart. We further introduce adversarial training to perturb the augmented view and thus construct a more difficult self-supervised training task. Finally, a min-max saddle-point problem is optimized, and the "free" strategy is used to speed up training. Extensive experiments and parameter analysis on 16 real-world graph classification datasets confirm the effectiveness of the proposed approach. Compared with the state-of-the-art (SOTA) method, our method achieves improvements of 2% and 1.6% in the unsupervised and transfer learning settings, respectively.
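To make the multiscale contrastive construction concrete, the following is a minimal PyTorch sketch of the idea the abstract describes: global and local views are produced by subgraph sampling at several scales, and an InfoNCE term is applied to every pair of scales. All names here (the encoder and sample_subgraph callables, the scale ratios, the temperature) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multiscale subgraph contrastive objective.
# encoder: maps a batch of (sub)graphs to (B, d) embeddings.
# sample_subgraph: returns subgraph views keeping a given ratio of each graph.
# Both are assumed callables for exposition.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE loss: views at matching row indices come from the same anchor
    graph (positives); every other graph in the batch acts as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def multiscale_contrastive_loss(encoder, graphs, sample_subgraph,
                                scales=(1.0, 0.6, 0.3)):
    """Encode a global view (scale 1.0) and local views sampled at smaller
    scales, then contrast every pair of scales so the model is trained on
    both coarse and fine-grained semantic relationships."""
    views = [encoder(sample_subgraph(graphs, keep_ratio=r)) for r in scales]
    loss = views[0].new_zeros(())
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            loss = loss + info_nce(views[i], views[j])
    return loss
```

Summing one InfoNCE term per scale pair is one simple way to realize "multiple contrastive relationships"; the paper may weight or select pairs according to their semantic associations.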
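The adversarial component can likewise be sketched. The min-max saddle-point problem alternates a maximization over a perturbation of the augmented view with a minimization over model parameters, and the "free" strategy (in the style of Shafahi et al., 2019) reuses a single backward pass per replay for both updates. Perturbing the augmented view's node features and the epsilon budget are assumptions for illustration; the paper's exact perturbation target may differ. The sketch reuses info_nce from the block above.

```python
# "Free"-style adversarial training step: each minibatch is replayed several
# times, and one backward pass yields gradients for both the model parameters
# (descent) and the view perturbation delta (ascent).
import torch

def free_adversarial_epoch(encoder, optimizer, loader, eps=0.01, replays=4):
    for x_anchor, x_aug in loader:                # feature tensors of two views
        delta = torch.zeros_like(x_aug)           # perturbation, reused across replays
        for _ in range(replays):
            delta.requires_grad_(True)
            loss = info_nce(encoder(x_anchor), encoder(x_aug + delta))
            optimizer.zero_grad()
            loss.backward()                       # gradients for params AND delta
            optimizer.step()                      # minimize: update model parameters
            with torch.no_grad():                 # maximize: strengthen perturbation
                delta = (delta + eps * delta.grad.sign()).clamp_(-eps, eps)
            delta = delta.detach()
```

Replaying each minibatch lets the perturbation mature without extra backward passes, which is the speed-up the abstract attributes to the "free" strategy.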
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2025.3543954