EGNN: Graph structure learning based on evolutionary computation helps more in graph neural networks

Bibliographic Details
Published in: Applied Soft Computing, Vol. 135, p. 110040
Main Authors: Liu, Zhaowei; Yang, Dong; Wang, Yingjie; Lu, Mingjie; Li, Ranran
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.03.2023

More Information
Summary: In recent years, graph neural networks (GNNs) have been successfully applied in many fields, owing to their neighborhood-aggregation mechanism, and have achieved state-of-the-art performance. However, the raw graph data that most GNNs operate on is frequently noisy or incomplete, which degrades GNN performance. To address this problem, Graph Structure Learning (GSL) methods have recently emerged that improve GNN performance by learning a graph structure closer to the ground truth. The current GSL strategy, however, alternately optimizes the graph structure and a single GNN, which makes training vulnerable and prone to overfitting. A novel GSL approach called the evolutionary graph neural network (EGNN) is introduced in this work to strengthen defense against adversarial attacks and enhance GNN performance. Unlike existing GSL methods, which optimize the graph structure and the parameters of a single GNN model through alternating training, this work applies evolutionary theory to graph structure learning for the first time. Specifically, different graph structures generated by mutation operations are used to evolve a set of model parameters so that they adapt to the environment (i.e., improve the classification performance on unlabeled nodes). An evaluation mechanism then measures the quality of the generated samples, so that only the model parameters (progeny) with good performance are retained. Finally, the progeny that adapt to the environment are kept and used for further optimization. Through this process, EGNN overcomes the instability of graph structure learning, always evolves the best progeny, and offers a new direction for the advancement of GSL.
Extensive experiments on various benchmark datasets demonstrate the effectiveness of EGNN and the benefits of evolutionary-computation-based graph structure learning.
•A GNN model-parameter evolution problem is studied within an evolutionary framework. The evolution process adheres to the principle of survival of the fittest and steadily improves the quality of node embeddings through "mutation-evaluation" of model parameters.
•Different graph structure estimators are used as mutation operations to create populations of model parameters, and the best individual among the mutated progeny is selected to fit the environment.
•Extensive tests on multiple challenging datasets demonstrate EGNN's ability to achieve desirable performance. In addition, several significant characteristics of EGNN are investigated.
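The "mutation-evaluation" loop described in the summary can be illustrated with a minimal, generic sketch. This is not the paper's actual EGNN implementation: the `evolve` function, its mutation operators, and the toy fitness function below are all hypothetical stand-ins. In EGNN the mutation operators would be different graph structure estimators and the fitness would be classification performance on unlabeled nodes; here a simple scalar fitness is used so the loop is self-contained and runnable.

```python
import random

def evolve(init_params, mutate_ops, fitness, generations=100, seed=0):
    """Generic mutation-evaluation loop in the spirit of EGNN.

    Each generation, every mutation operator (standing in for a graph
    structure estimator) produces one candidate (progeny). An evaluation
    step keeps only progeny that improve fitness, i.e. survival of the
    fittest; the surviving parameters seed the next generation.
    """
    rng = random.Random(seed)
    best, best_fit = init_params, fitness(init_params)
    for _ in range(generations):
        progeny = [op(best, rng) for op in mutate_ops]  # mutation step
        for child in progeny:                           # evaluation step
            f = fitness(child)
            if f > best_fit:                            # selection step
                best, best_fit = child, f
    return best, best_fit

# Toy usage: three mutation operators with different perturbation scales,
# and a fitness maximized at params = 3.0 (purely illustrative).
ops = [lambda x, rng, s=s: x + rng.gauss(0.0, s) for s in (0.1, 0.5, 1.0)]
best, fit = evolve(0.0, ops, lambda x: -(x - 3.0) ** 2, generations=200)
```

The key design point mirrored from the summary is that multiple distinct mutation operators compete each generation, and only progeny that adapt to the environment (improve the evaluation score) are retained for further optimization.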
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2023.110040