Research advances and challenges on graph foundation model: perspective from graph neural network

Bibliographic Details
Published in: Tongxin Xuebao, Vol. 46, pp. 226-248
Main Authors: WU Tao, NIE Fazhi, XIAN Xingping, WANG Chao, YUAN Lin, QIAO Shaojie, NIU Weina
Format: Journal Article
Language: Chinese
Published: Editorial Department of Journal on Communications, 01.07.2025

Summary: The graph foundation model (GFM) extends the foundation-model paradigm to graph learning: a model is pre-trained on extensive graph data and then fine-tuned for a variety of downstream tasks. Unlike current approaches that build GFM on large language models (LLM), this survey emphasizes the construction of GFM from the perspective of graph neural networks (GNN). First, the current state of GFM research is analyzed and the key concepts are defined. Second, research on the backbone architectures and fundamental representation units of GFM is summarized. Then, based on differences in pretext tasks and fine-tuning strategies, the pre-training techniques and fine-tuning methods of graph models are reviewed. In addition, evaluation metrics relevant to GFM are introduced. Finally, unresolved issues and future research directions are discussed.
ISSN: 1000-436X
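
The sketch below is a minimal illustration of the pre-train / fine-tune workflow the summary above describes for GNN-based GFM, not code from the paper. The toy graph, the choice of edge reconstruction as the pretext task, the node-classification head, and all dimensions and hyperparameters are illustrative assumptions.

```python
# Stage 1: self-supervised pre-training of a GNN encoder on a pretext task.
# Stage 2: fine-tuning the pre-trained encoder with a small task head.
# All data and hyperparameters here are toy assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: normalized-adjacency aggregation + linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return self.lin(adj_norm @ x)

class GNNEncoder(nn.Module):
    """Backbone shared by pre-training and fine-tuning."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = GCNLayer(in_dim, hid_dim)
        self.conv2 = GCNLayer(hid_dim, hid_dim)

    def forward(self, x, adj_norm):
        h = F.relu(self.conv1(x, adj_norm))
        return self.conv2(h, adj_norm)

def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

# Toy graph: 6 nodes with random features and a ring of edges (assumed data).
x = torch.randn(6, 8)
adj = torch.zeros(6, 6)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
adj_norm = normalize_adj(adj)

encoder = GNNEncoder(in_dim=8, hid_dim=16)

# Stage 1: pre-training with an edge-reconstruction pretext task.
opt = torch.optim.Adam(encoder.parameters(), lr=0.01)
pos = torch.tensor(edges)                      # observed edges as positives
neg = torch.tensor([(0, 3), (1, 4), (2, 5)])   # sampled non-edges as negatives
for _ in range(100):
    z = encoder(x, adj_norm)
    pos_score = (z[pos[:, 0]] * z[pos[:, 1]]).sum(dim=1)
    neg_score = (z[neg[:, 0]] * z[neg[:, 1]]).sum(dim=1)
    loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos_score, neg_score]),
        torch.cat([torch.ones(len(pos)), torch.zeros(len(neg))]),
    )
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tuning a task head for downstream node classification.
head = nn.Linear(16, 2)                        # two hypothetical node classes
labels = torch.tensor([0, 0, 0, 1, 1, 1])      # assumed downstream labels
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=0.01)
for _ in range(100):
    logits = head(encoder(x, adj_norm))
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

acc = (head(encoder(x, adj_norm)).argmax(1) == labels).float().mean().item()
print("downstream accuracy:", acc)
```

The two stages correspond to the pretext-task and fine-tuning dimensions the survey uses to organize graph pre-training methods; a real GFM would swap in a larger backbone, richer pretext tasks (e.g., contrastive or generative objectives), and parameter-efficient tuning strategies.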