Graph Privacy Funnel: A Variational Approach for Privacy-Preserving Representation Learning on Graphs

Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 22, No. 2, pp. 967-978
Main Authors: Lin, Wanyu; Lan, Hao; Cao, Jiannong
Format: Journal Article
Language: English
Published: Washington: IEEE / IEEE Computer Society, 01.03.2025

Summary: This paper investigates the problem of learning privacy-preserving graph representations with graph neural networks (GNNs). Different from existing works based on adversarial training, we introduce a variational approach, called vGPF, to encourage the isolation of sensitive attributes from the learned representations. Specifically, we first formulate a non-asymptotic information-theoretic problem that characterizes the best achievable privacy subject to utility constraints on the graph representations, termed the Graph Privacy Funnel (GPF). We then show theoretically that the GPF objective can be optimized directly via a variational upper bound. vGPF allows us to parameterize the privacy-preserving graph mapping with GNN encoders and to train it using the reparameterization trick. Compared with existing adversarial approaches, vGPF exhibits more stable predictive performance, as it does not rely on an additional adversarial network that may cause training instability in practice. Experiments across multiple datasets from various domains demonstrate that vGPF outperforms its state-of-the-art alternatives in terms of predictive accuracy, performance stability, and robustness to attribute inference attacks. We also show that vGPF is highly flexible: it is compatible with various graph learning tasks and GNN encoder architectures, and it can enforce privacy over any combination of sensitive attributes in one shot.
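For orientation, a hedged sketch of the formulation the abstract builds on: the classical privacy funnel minimizes leakage about the sensitive attributes S through the representation Z while guaranteeing a minimum level of utility about the data X. GPF adapts this idea to graph representations; the paper's exact constraint set and notation may differ.

```latex
% Classical privacy-funnel objective (illustrative sketch, not the paper's
% exact GPF formulation): Z is the learned representation, S the sensitive
% attributes, X the graph data, and \theta a utility threshold.
\min_{p(z \mid x)} \; I(Z; S)
\quad \text{subject to} \quad
I(Z; X) \ge \theta
```

Below is a minimal sketch of how the privacy-preserving graph mapping described in the abstract might be parameterized with a GNN encoder and trained with the reparameterization trick. The Gaussian posterior, the use of PyTorch Geometric's GCNConv, and all dimensions are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: a GNN encoder that outputs a Gaussian posterior over node
# representations and samples it differentiably via the reparameterization
# trick. Layer choices and dimensions are assumptions for illustration.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class VariationalGNNEncoder(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv_mu = GCNConv(hidden_dim, latent_dim)      # mean of q(Z | X, A)
        self.conv_logvar = GCNConv(hidden_dim, latent_dim)  # log-variance of q(Z | X, A)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        mu = self.conv_mu(h, edge_index)
        logvar = self.conv_logvar(h, edge_index)
        # Reparameterization trick: z = mu + sigma * eps keeps the sampling
        # step differentiable with respect to the encoder parameters.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar
```

In a variational-information-bottleneck-style training loop, a task head on z would supply the utility loss while a KL-type regularizer limits the information retained about the sensitive attributes; the specific variational upper bound that vGPF optimizes is derived in the paper.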
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2024.3417513