GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 7, pp. 6968-6972
Main Authors: Huang, Qiang; Yamada, Makoto; Tian, Yuan; Singh, Dinesh; Chang, Yi
Format: Journal Article
Languages: English, Japanese
Published: New York: The Institute of Electrical and Electronics Engineers (IEEE), 01.07.2023

Summary: Recently, graph neural networks (GNNs) have been shown to represent graph-structured data effectively, owing to their strong performance and generalization ability. However, explaining the effectiveness of GNN models is a challenging task because of the complex nonlinear transformations applied over successive iterations. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally, within the subgraph of the node being explained. In experiments on two real-world datasets, the explanations produced by GraphLIME are found to be markedly more descriptive than those of existing explanation methods.
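To make the core step of the summary concrete, the following is a minimal Python sketch of the idea: treat a node's sampled neighborhood as a local dataset and run a non-negative HSIC Lasso of the node features against the GNN's predicted class probabilities, so that features receiving nonzero weights are flagged as locally influential. The function names (`graphlime_explain`, `centered_kernel`), the synthetic data, the fixed kernel bandwidth, and the regularization weight `lam` are illustrative assumptions, not the paper's exact settings; the neighborhood-sampling step and the trained GNN are likewise replaced by placeholders.

```python
# A minimal sketch of the GraphLIME idea, assuming a Gaussian-kernel HSIC Lasso
# solved as a non-negative Lasso. Not the authors' reference implementation.
import numpy as np
from sklearn.linear_model import Lasso

def centered_kernel(Z, sigma=1.0):
    """Gaussian kernel over the rows of Z, double-centered and
    Frobenius-normalized, as used in HSIC Lasso."""
    d2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    n = Z.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H
    return Kc / (np.linalg.norm(Kc) + 1e-12)

def graphlime_explain(X_nbr, y_nbr, lam=1e-5):
    """Non-negative HSIC Lasso over one node's neighborhood.
    X_nbr: (n, d) features of the node and its sampled neighbors.
    y_nbr: (n, c) GNN-predicted class probabilities for those nodes.
    lam:   L1 weight; a small illustrative value, tune per neighborhood.
    Returns a length-d vector of non-negative feature importances."""
    n, d = X_nbr.shape
    # One centered kernel per input feature, flattened into a design column.
    A = np.column_stack(
        [centered_kernel(X_nbr[:, k:k + 1]).ravel() for k in range(d)])
    b = centered_kernel(y_nbr).ravel()
    # HSIC Lasso reduces to a Lasso with a non-negativity constraint.
    model = Lasso(alpha=lam, positive=True, fit_intercept=False,
                  max_iter=10000)
    model.fit(A, b)
    return model.coef_                           # beta_k > 0: feature k matters

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 40, 10                                # neighborhood size, #features
    X = rng.normal(size=(n, d))
    # Placeholder for a trained GNN's class probabilities: here they depend
    # only on features 0 and 3, so those should receive nonzero weights.
    logits = np.column_stack([X[:, 0] + X[:, 3], -(X[:, 0] + X[:, 3])])
    y = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print("importances:", np.round(graphlime_explain(X, y), 3))
```

In the paper's setting, X_nbr and y_nbr would come from the N-hop neighborhood of the node being explained and from the trained GNN's outputs; the sketch skips that sampling step and feeds synthetic data directly to the selector.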
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ISSN: 1041-4347, 2326-3865, 1558-2191
DOI: 10.1109/TKDE.2022.3187455