GraphXAI: a survey of graph neural networks (GNNs) for explainable AI (XAI)
Published in: Neural Computing & Applications, Vol. 37, No. 17, pp. 10949–11000
Format: Journal Article
Language: English
Published: London: Springer London, 01.06.2025 (Springer Nature B.V.)
Summary: Graphs find wide application in numerous domains, from simulating physical systems and learning molecular fingerprints to predicting protein interfaces and diagnosing diseases. These applications involve data in non-Euclidean space, for which a graph is an ideal representation and an indispensable means of illustrating the connections and interdependencies among its constituents. Graph neural networks (GNNs) are neural networks built specifically to handle graph-structured data, and they are highly effective at capturing intricate relationships among entities. Nonetheless, their “black-box” character poses difficulties for transparency, trust, and interpretability, especially in critical sectors such as health care, banking, and autonomous systems. Explainable artificial intelligence (XAI) has emerged to clarify these opaque decision-making processes, enhancing trust and accountability in AI systems. This survey examines the interplay between GNNs and XAI, including an exhaustive taxonomy of explainability methods designed for graph-structured data. It classifies existing explainability methods into post hoc and self-interpretable models, and analyzes their practical applications across diverse fields, highlighting the significance of transparent GNNs in sectors such as fraud detection, drug development, and network security. The survey also delineates evaluation criteria for assessing explainability and addresses persistent issues of scalability and fairness. It concludes with prospective advances, including novel XAI methodologies tailored to GNN architectures, integration with federated learning, and applications in interdisciplinary fields. The study bridges the gap between GNNs and XAI, providing an essential resource for researchers and practitioners aiming to improve the interpretability and efficacy of graph-based AI systems.
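The abstract's distinction between post hoc and self-interpretable methods can be made concrete with a toy example. The sketch below (Python/NumPy, not taken from the paper; the GCN weights are untrained and all function names are illustrative) builds a tiny two-layer graph convolutional network and explains a graph-level prediction post hoc by occlusion: zeroing out each node's features in turn and measuring how much the target-class probability drops.

```python
# Minimal sketch: post hoc, perturbation-based node importance for a toy GCN.
# Not from the surveyed paper; a hand-rolled illustration with random weights.
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalize adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(a_hat, x, w1, w2):
    """Two-layer GCN with ReLU, mean-pooled into a graph-level softmax."""
    h = np.maximum(a_hat @ x @ w1, 0.0)      # layer 1 + ReLU
    logits = (a_hat @ h @ w2).mean(axis=0)   # layer 2 + mean pooling
    e = np.exp(logits - logits.max())
    return e / e.sum()

def occlusion_importance(adj, x, w1, w2, target):
    """Post hoc explanation: zero each node's features and record the
    drop in the target-class probability (higher = more important)."""
    a_hat = normalize_adj(adj)
    base = gcn_forward(a_hat, x, w1, w2)[target]
    scores = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[i] = 0.0                      # occlude node i
        scores[i] = base - gcn_forward(a_hat, x_pert, w1, w2)[target]
    return scores

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],                # toy 4-node graph
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))                  # node features
w1 = rng.normal(size=(3, 8))                 # untrained demo weights
w2 = rng.normal(size=(8, 2))
print(occlusion_importance(adj, x, w1, w2, target=0))
```

Perturbation scores like these treat the trained model as a black box, which is the defining trait of post hoc explainers; self-interpretable models instead build the explanation mechanism (e.g., attention weights or prototypes) into the architecture itself.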
ISSN: 0941-0643 (print); 1433-3058 (electronic)
DOI: 10.1007/s00521-025-11054-3