Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 05.11.2023 |
Summary: | With the prosperity of contrastive learning for visual representation learning (VCL), it has also been adapted to the graph domain, where it yields promising performance. However, through a systematic study of various graph contrastive learning (GCL) methods, we observe several common phenomena among existing GCL methods that differ markedly from the original VCL methods: 1) positive samples are not a must for GCL; 2) negative samples are not necessary for graph classification, nor for node classification when specific normalization modules are adopted; 3) data augmentations have much less influence on GCL, as simple domain-agnostic augmentations (e.g., Gaussian noise) can attain fairly good performance. By uncovering how the implicit inductive bias of GNNs works in contrastive learning, we provide theoretical insights into the above intriguing properties of GCL. Rather than directly porting existing VCL methods to GCL, we advocate for more attention to the unique architecture of graph learning and for considering its implicit influence when designing GCL methods. Code is available at https://github.com/PKU-ML/ArchitectureMattersGCL. |
DOI: | 10.48550/arxiv.2311.02687 |
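The summary's third observation, that domain-agnostic augmentations such as Gaussian noise can suffice for GCL, is easy to illustrate. Below is a minimal, hypothetical sketch of one GCL training step: two views of a graph are built by adding Gaussian noise to node features, encoded with a small two-layer GCN, and contrasted with a standard InfoNCE objective. All names, the toy random graph, and hyperparameters (`sigma`, `tau`, layer sizes) are illustrative assumptions, not the paper's implementation; the paper's point is precisely that pieces of this recipe (positives, negatives, domain-specific augmentations) can often be dropped or simplified.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy setup: N nodes with random features and a random graph.
N, d_in, d_hid = 100, 32, 64
X = torch.randn(N, d_in)                      # node features
A = (torch.rand(N, N) < 0.05).float()         # random edges (illustrative)
A = ((A + A.T) > 0).float()                   # symmetrize
A.fill_diagonal_(1.0)                         # add self-loops
deg = A.sum(dim=1)
# Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}
A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)

class GCN(torch.nn.Module):
    """Two-layer GCN encoder: H = A_hat * ReLU(A_hat * X * W1) * W2."""
    def __init__(self):
        super().__init__()
        self.W1 = torch.nn.Linear(d_in, d_hid)
        self.W2 = torch.nn.Linear(d_hid, d_hid)

    def forward(self, X, A_hat):
        return A_hat @ self.W2(F.relu(A_hat @ self.W1(X)))

encoder = GCN()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One training step: two views via domain-agnostic Gaussian noise on features.
sigma = 0.1  # noise scale (assumed)
z1 = encoder(X + sigma * torch.randn_like(X), A_hat)
z2 = encoder(X + sigma * torch.randn_like(X), A_hat)
z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)

# InfoNCE across nodes: each node's other view is its positive, all other
# nodes act as negatives (the diagonal of the logits matrix is the target).
tau = 0.5  # temperature (assumed)
logits = z1 @ z2.T / tau
loss = F.cross_entropy(logits, torch.arange(N))

opt.zero_grad()
loss.backward()
opt.step()
```

Under the paper's observations, one would expect variants of this sketch that remove the negatives (e.g., keeping only the diagonal alignment term, possibly with a normalization module) or swap the augmentation to still perform competitively on graph tasks.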