Dynamic Explanation of Bayesian Networks with Abductive Bayes Factor Qualitative Propagation and Entropy-Based Qualitative Explanation


Bibliographic Details
Published in: 2021 IEEE 24th International Conference on Information Fusion (FUSION), pp. 1-9
Main Authors: Matsumoto, Shou; Barreto, Alexandre; Costa, Paulo C. G.; Benyo, Brett; Atighetchi, Michael; Javorsek, Daniel
Format: Conference Proceeding
Language: English
Published: International Society of Information Fusion (ISIF), 01.11.2021
Summary: The success of Artificial Intelligence (AI) systems as decision aids is often largely contingent on the ability to trust their recommendations. This trust is greatly enhanced when the AI systems are able to provide explanations that justify the presented results and, when implemented properly, also serve as a means to better understand unfamiliar domains. Unfortunately, the underlying models in such systems can often be non-intuitive for humans and thus hard to interpret and explore. Explainable AI systems allow users to effectively understand, trust, and operate the models. In this context, a recognizable model for qualitatively displaying probabilistic information is the Bayesian Network (BN), which provides a graphical visualization of quantitative beliefs about the conditional dependence and independence among random variables. We focus primarily on dynamic explanation, which explains the reasoning process of a BN by analyzing how the model changes in the light of new evidence. We extend Druzdzel's concept of qualitative belief propagation by introducing the idea of a qualitative strength of edges in the active path, which is proportional to the Bayes factors of the Most Probable Explanation (MPE) or to the pairwise information entropy of variables (a.k.a. mutual information). We also present a full implementation in Java.
DOI: 10.23919/FUSION49465.2021.9626961
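The summary mentions using pairwise mutual information as a qualitative strength for edges on an active path. As a rough illustration of that quantity (not the paper's Java implementation; the function name and joint-table layout here are hypothetical), mutual information can be computed directly from a joint probability table:

```python
import math

def mutual_information(joint):
    """Pairwise mutual information I(X;Y) in bits, from a joint table
    joint[x][y] = P(X=x, Y=y). A higher value could be read as a
    qualitatively 'stronger' edge between the two variables."""
    # Marginals P(X=x) and P(Y=y) by summing rows and columns.
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:  # 0 * log(0) is taken as 0
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Independent binary variables: I(X;Y) = 0 (weak edge).
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
# Perfectly correlated binary variables: I(X;Y) = 1 bit (strong edge).
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0
```

Under this reading, an edge whose endpoint variables are nearly independent contributes little to an explanation, while a high-mutual-information edge marks a path along which evidence propagates strongly.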