On the interpretability of quantum neural networks
Published in | Quantum Machine Intelligence, Vol. 6, No. 2
Main Authors | ,
Format | Journal Article
Language | English
Published | Cham: Springer International Publishing, 01.12.2024
Summary | Interpretability of artificial intelligence (AI) methods, particularly deep neural networks, is of great interest. This heightened focus stems from the widespread use of AI-backed systems. These systems, often relying on intricate neural architectures, can exhibit behavior that is challenging to explain and comprehend. The interpretability of such models is a crucial component of building trusted systems. Many methods exist to approach this problem, but they do not apply straightforwardly to the quantum setting. Here, we explore the interpretability of quantum neural networks using local model-agnostic interpretability measures commonly utilized for classical neural networks. Following this analysis, we generalize a classical technique called LIME, introducing Q-LIME, which produces explanations of quantum neural networks. A feature of our explanations is the delineation of the region in which data samples have been given a random label, likely subjects of inherently random quantum measurements. We view this as a step toward understanding how to build responsible and accountable quantum AI models.
ISSN | 2524-4906; 2524-4914
DOI | 10.1007/s42484-024-00191-y
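
The abstract describes explaining quantum neural networks with local, model-agnostic techniques, generalizing the classical LIME method into Q-LIME. As a purely illustrative aid (not the paper's Q-LIME algorithm, whose details are in the article itself), the sketch below shows the classical LIME idea it builds on: perturb an input, query a toy single-qubit classifier simulated directly with NumPy, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The function names, parameters, and the toy circuit are all assumptions made for illustration.

```python
# Minimal LIME-style local-explanation sketch (illustrative only; NOT the
# paper's Q-LIME). The "quantum" classifier is a toy single-qubit circuit
# simulated with NumPy; all names and parameters here are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def quantum_classifier_prob(x):
    """Probability of measuring |1> after angle-encoding the features of x
    as successive RY rotations on one qubit (statevector simulation)."""
    state = np.array([1.0, 0.0], dtype=complex)          # start in |0>
    for angle in x:                                       # one RY(angle) per feature
        c, s = np.cos(angle / 2), np.sin(angle / 2)
        ry = np.array([[c, -s], [s, c]], dtype=complex)
        state = ry @ state
    return np.abs(state[1]) ** 2                          # Born-rule probability

def lime_style_explanation(x0, n_samples=500, scale=0.3):
    """Fit a proximity-weighted linear surrogate around x0 and return its weights."""
    perturbed = x0 + rng.normal(0.0, scale, size=(n_samples, len(x0)))
    probs = np.array([quantum_classifier_prob(z) for z in perturbed])
    # Proximity kernel: perturbations closer to x0 get larger weight.
    dists = np.linalg.norm(perturbed - x0, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, probs, sample_weight=weights)
    return surrogate.coef_                                # local feature importances

x0 = np.array([0.4, 1.1])
print("local feature weights:", lime_style_explanation(x0))
```

The paper's contribution additionally delineates the region in which labels arise from inherently random quantum measurement outcomes; this classical-style sketch does not model that aspect.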