GraPASA: Parametric graph embedding via siamese architecture
| Published in | Information Sciences, Vol. 512, pp. 1442–1457 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc., 01.02.2020 |
| Summary | Graph representation learning, or graph embedding, is a classical topic in data mining. Current embedding methods are mostly non-parametric: all embedding points are unconstrained free points in the target space. These approaches suffer from limited scalability and an over-flexible representation. In this paper, we propose a parametric graph embedding that fuses graph topology information and node content information. The embedding points are obtained through a highly flexible non-linear transformation from node content features to the target space. This transformation is learned with the contrastive loss of a siamese network so as to preserve node adjacency in the input graph. On several benchmark network datasets, the proposed GraPASA method outperforms state-of-the-art techniques by a significant margin on graph representation tasks. |
| ISSN | 0020-0255; 1872-6291 |
| DOI | 10.1016/j.ins.2019.10.027 |
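
To make the siamese/contrastive idea from the summary concrete, below is a minimal PyTorch sketch of a parametric node encoder trained with a contrastive loss over node pairs. This is not the authors' implementation: the encoder architecture, margin value, dimensions, and random pair-sampling scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeEncoder(nn.Module):
    """Shared siamese branch: a non-linear map from node content features
    to the embedding space (architecture chosen here for illustration)."""
    def __init__(self, in_dim, hidden_dim=128, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(z_i, z_j, adjacent, margin=1.0):
    """Contrastive loss on node pairs: pull adjacent nodes together,
    push non-adjacent nodes at least `margin` apart in embedding space."""
    dist = F.pairwise_distance(z_i, z_j)
    pos = adjacent * dist.pow(2)
    neg = (1.0 - adjacent) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Toy data: random node features and randomly sampled node pairs with
# binary adjacency labels (1 = edge in the graph, 0 = no edge).
num_nodes, feat_dim = 100, 64
features = torch.randn(num_nodes, feat_dim)
pairs = torch.randint(0, num_nodes, (256, 2))
adjacent = torch.randint(0, 2, (256,)).float()

encoder = NodeEncoder(feat_dim)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(10):
    z = encoder(features)                       # parametric embeddings
    loss = contrastive_loss(z[pairs[:, 0]], z[pairs[:, 1]], adjacent)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because each embedding is the output of a shared parametric encoder rather than a free per-node vector, new nodes can be embedded directly from their content features, which is the scalability advantage the summary attributes to the parametric formulation.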