GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
Published in | IEEE Transactions on Knowledge and Data Engineering, Vol. 35, No. 7, pp. 6968–6972
---|---
Main Authors |
Format | Journal Article
Language | English; Japanese
Published | New York: IEEE, 01.07.2023
Summary: Recently, graph neural networks (GNNs) have been shown to represent graph-structured data effectively, owing to their strong performance and generalization ability. However, explaining the predictions of GNN models is challenging because of the complex nonlinear transformations applied over the iterations. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. Through experiments on two real-world datasets, the explanations produced by GraphLIME are found to be substantially more descriptive than those of existing explanation methods.
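The summary describes the core mechanism: fit an HSIC Lasso, a kernel-based nonlinear feature selector, on the features and GNN outputs of the nodes in the explained node's local subgraph, then read the selected features off as the explanation. The sketch below illustrates that step under stated assumptions; it is not the authors' reference implementation. The Gaussian kernel, its width `gamma`, the regularization strength `rho`, and the helper name `graphlime_explain` are all illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def graphlime_explain(X, Y, rho=0.1, gamma=1.0):
    """HSIC-Lasso style feature selection on a node's local subgraph.

    X     : (n, d) features of the n subgraph nodes (explained node + neighbours).
    Y     : (n, c) GNN output probabilities for those same nodes.
    rho   : L1 regularization strength (assumed hyperparameter).
    gamma : RBF kernel width (assumed hyperparameter).
    Returns a length-d vector of non-negative feature importances.
    """
    n, d = X.shape
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix

    def centered_gram(Z):
        # Gaussian (RBF) kernel, centred and Frobenius-normalised
        sq = np.sum(Z ** 2, axis=1, keepdims=True)
        K = np.exp(-gamma * (sq + sq.T - 2.0 * Z @ Z.T))
        Kc = H @ K @ H
        return Kc / (np.linalg.norm(Kc, "fro") + 1e-12)

    # One kernel per input feature, plus one kernel over the GNN output.
    Ks = np.stack([centered_gram(X[:, [j]]).ravel() for j in range(d)], axis=1)
    L = centered_gram(Y).ravel()

    # Non-negative Lasso: a positive coefficient marks feature j as
    # (nonlinearly) dependent on the GNN's predictions in this subgraph.
    lasso = Lasso(alpha=rho, positive=True, fit_intercept=False)
    lasso.fit(Ks, L)
    return lasso.coef_
```

In use, one would collect the k-hop neighbourhood of the target node (k matching the GNN's depth), run the trained GNN to obtain Y, call the function, and rank features by the returned coefficients; only the handful of features with nonzero weight form the explanation.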
ISSN: 1041-4347, 2326-3865, 1558-2191
DOI: 10.1109/TKDE.2022.3187455