The application of eXplainable artificial intelligence in studying cognition: A scoping review

Bibliographic Details
Published in: Ibrain, Vol. 10, No. 3, pp. 245-265
Main Authors: Mahmood, Shakran; Teo, Colin; Sim, Jeremy; Zhang, Wei; Muyun, Jiang; Bhuvana, R.; Teo, Kejia; Yeo, Tseng Tsai; Lu, Jia; Gulyas, Balazs; Guan, Cuntai
Format: Journal Article
Language: English
Published: United States: John Wiley and Sons Inc; Wiley-VCH, 05.09.2024

Summary: The rapid advancement of artificial intelligence (AI) has sparked renewed discussion of its trustworthiness and of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI to the study of cognition. This scoping review aims to identify and analyze the XAI methods used to study the mechanisms and features of cognitive function and dysfunction, qualitatively assessing the collected evidence to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute (JBI) guidance and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR), we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods were intrinsic (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). While these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, their limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging open challenges around causality and oversimplification, particularly emphasizing the need for reproducibility.

Experimental research in neuroscience has highlighted the significance of eXplainable artificial intelligence (XAI) in studying cognition. Cognition can be characterized by key domains such as perceptual-motor control, social cognition, executive function, and memory. Recent research efforts have begun to address existing knowledge gaps in specific aspects of cognition, or in a cognitive disease, by applying XAI's explanatory techniques to extensive data sets. These XAI methods, varying in effectiveness, have attempted to elucidate the underlying AI processes that identify or model the (patho)physiologic mechanisms and features of a particular cognitive function. This scoping review therefore broadly mapped the pertinent evidence available in the current literature on the different XAI models used in cognitive studies; qualitative analysis was subsequently performed in a thematic fashion.
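To make the taxonomy in the summary concrete, the following is a minimal, hypothetical sketch of an attribution-based, post hoc XAI method applied at a local scope: gradient-times-input saliency for a toy logistic-regression model. The model, weights, and features below are invented purely for illustration and do not come from any of the reviewed studies; for the sigmoid of a linear score the input gradient has the closed form p(1 - p)w, so plain NumPy suffices and no autodiff library is needed.

```python
import numpy as np

# Hypothetical illustration of an attribution-based, post hoc, *local*
# XAI method (gradient x input saliency) on a toy logistic-regression
# "model". Weights and inputs are random stand-ins, not real data.

rng = np.random.default_rng(0)

# Toy "trained" model: weights mapping 5 input features (e.g., invented
# imaging-derived measures) to the probability of one cognitive outcome.
w = rng.normal(size=5)
b = 0.1

def predict(x):
    """Sigmoid output of the linear model for a single input vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def gradient_x_input(x):
    """Local attribution: d(output)/d(input), scaled by the input itself.

    For a sigmoid over a linear score the input gradient is p * (1 - p) * w,
    so the saliency can be computed in closed form.
    """
    p = predict(x)
    grad = p * (1.0 - p) * w
    return grad * x  # per-feature contribution for *this* example only

x = rng.normal(size=5)  # one individual's (hypothetical) feature vector
for i, a in enumerate(gradient_x_input(x)):
    print(f"feature {i}: attribution {a:+.4f}")
```

By contrast, an intrinsic method in this setting would simply inspect the model's own weights, which are interpretable by design, whereas the saliency above is computed after training and explains only the single input it is evaluated on (local scope, as opposed to a global explanation of the model's overall behavior).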
ISSN: 2313-1934, 2769-2795
DOI: 10.1002/ibra.12174