GraPASA: Parametric graph embedding via siamese architecture

Bibliographic Details
Published in: Information Sciences, Vol. 512, pp. 1442-1457
Main Authors: Chen, Yujun; Sun, Ke; Pu, Juhua; Xiong, Zhang; Zhang, Xiangliang
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.02.2020
Summary: Graph representation learning, or graph embedding, is a classical topic in data mining. Current embedding methods are mostly non-parametric, where all embedding points are unconstrained free points in the target space. These approaches suffer from limited scalability and an over-flexible representation. In this paper, we propose a parametric graph embedding method that fuses graph topology information and node content information. The embedding points are obtained through a highly flexible non-linear transformation from node content features to the target space. This transformation is learned using the contrastive loss function of a siamese network so as to preserve node adjacency in the input graph. On several benchmark network datasets, the proposed GraPASA method outperforms state-of-the-art techniques by a significant margin on graph representation tasks.
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2019.10.027
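The summary above describes a parametric encoder that maps node content features to embeddings and is trained with a siamese contrastive loss so that adjacent nodes stay close. The following is a minimal PyTorch sketch of that general idea only; the layer sizes, margin value, pair sampling, and optimizer settings are illustrative assumptions and not the exact GraPASA configuration from the paper.

```python
# Minimal sketch of a siamese contrastive embedding of node content features.
# All architecture and training choices here are illustrative assumptions.
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """Parametric map from node content features to the embedding space."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z_i, z_j, adjacent, margin=1.0):
    """Pull embeddings of adjacent node pairs together, push non-adjacent pairs apart."""
    dist = torch.norm(z_i - z_j, dim=1)
    pos = adjacent * dist.pow(2)
    neg = (1 - adjacent) * torch.clamp(margin - dist, min=0).pow(2)
    return (pos + neg).mean()

# Toy usage: random node features and sampled node pairs (1 = edge, 0 = no edge).
features = torch.randn(100, 32)            # 100 nodes, 32-dim content features
pairs_i = torch.randint(0, 100, (256,))
pairs_j = torch.randint(0, 100, (256,))
adjacent = torch.randint(0, 2, (256,)).float()

encoder = NodeEncoder(in_dim=32)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(10):
    z_i = encoder(features[pairs_i])       # shared weights: the same encoder
    z_j = encoder(features[pairs_j])       # handles both branches of the pair
    loss = contrastive_loss(z_i, z_j, adjacent)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the embedding is produced by a shared parametric encoder rather than free per-node vectors, unseen nodes with content features can be embedded without retraining, which is the scalability argument made in the summary.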