A Multi-Component Framework for the Analysis and Design of Explainable Artificial Intelligence

Bibliographic Details
Published in: Machine Learning and Knowledge Extraction, Vol. 3, No. 4, pp. 900–921
Main Authors: Kim, Mi-Young; Atakishiyev, Shahin; Babiker, Housam Khalifa Bashier; Farruque, Nawshad; Goebel, Randy; Zaïane, Osmar R.; Motallebi, Mohammad-Hossein; Rabelo, Juliano; Syed, Talat; Yao, Hengshuai; Chun, Peter
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.12.2021
ISSN: 2504-4990
DOI: 10.3390/make3040045

Summary: The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, has created high expectations for industrial, commercial, and social value. Second, there is an emerging and growing concern for creating ethical and trusted AI systems, including compliance with regulatory principles that ensure transparency and trust. These two threads have created a kind of “perfect storm” of research activity, all motivated to create and deliver tools and techniques that address the demand for XAI. As some surveys of current XAI suggest, a principled framework has yet to appear that respects the literature on explainability in the history of science and that provides a basis for developing transparent XAI. We identify four foundational components, namely the requirements for (1) explicit explanation knowledge representation, (2) delivery of alternative explanations, (3) adjustment of explanations based on knowledge of the explainee, and (4) exploitation of the advantages of interactive explanation. With those four components in mind, we provide a strategic inventory of XAI requirements, demonstrate their connection to a basic history of XAI ideas, and then synthesize those ideas into a simple framework that can guide the design of AI systems that require XAI.