The application of eXplainable artificial intelligence in studying cognition: A scoping review

Bibliographic Details
Published in: Ibrain, Vol. 10, No. 3, pp. 245-265
Main Authors: Mahmood, Shakran; Teo, Colin; Sim, Jeremy; Zhang, Wei; Muyun, Jiang; Bhuvana, R.; Teo, Kejia; Yeo, Tseng Tsai; Lu, Jia; Gulyas, Balazs; Guan, Cuntai
Format: Journal Article
Language: English
Published: United States, John Wiley and Sons Inc / Wiley-VCH, 5 September 2024
Abstract The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze various XAI methods used to study the mechanisms and features of cognitive function and dysfunction. In this study, the collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Based on the Joanna Briggs Institute and preferred reporting items for systematic reviews and meta‐analyses extension for scoping review guidelines, we searched for peer‐reviewed articles on MEDLINE, Embase, Web of Science, Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The results showed that the majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while others (25%) examined impaired cognition. The predominant XAI methods employed were intrinsic XAI (58.3%), followed by attribution‐based (41.7%) and example‐based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in causality and oversimplification, particularly emphasizing the need for reproducibility. 
Experimental research in neuroscience has highlighted the significance of eXplainable artificial intelligence (XAI) in studying cognition. Cognition can be characterized by key domains such as perceptual‐motor control, social cognition, executive function, and memory. Recent research efforts have begun to address existing knowledge gaps in specific aspects of cognition or a cognitive disease by applying XAI's explanatory techniques to extensive data sets. These XAI methods, varying in effectiveness, have attempted to elucidate the underlying AI processes in identifying or modeling (patho)physiologic mechanisms and features of a particular cognitive function. This scoping review therefore broadly mapped out pertinent evidence available in the current literature on the different XAI models used in cognitive studies. Qualitative analysis was subsequently performed in a thematic fashion.
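To make the taxonomy in the abstract concrete — post hoc versus intrinsic methods, attribution-based explanations, and local versus global scope — here is a minimal illustrative sketch of a local, post hoc, attribution-based explanation via feature occlusion. The toy linear model and feature values are hypothetical and are not drawn from any of the reviewed studies; real applications would apply such methods to trained models on neuroimaging or behavioral data.

```python
# Illustrative sketch only: occlusion-based attribution, a simple post hoc
# XAI method. "Local" scope: it explains one prediction for one input.

WEIGHTS = [0.8, -0.5, 0.3, 0.1]  # hypothetical toy model parameters

def predict(x):
    # Toy "black box": a weighted sum of the input features.
    return sum(w * v for w, v in zip(WEIGHTS, x))

def occlusion_attribution(x, baseline=0.0):
    """Attribute a single prediction to its input features by measuring
    how much the model's score drops when each feature is replaced
    with a baseline value."""
    base_score = predict(x)
    attributions = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        attributions.append(base_score - predict(occluded))
    return attributions

if __name__ == "__main__":
    x = [1.0, 2.0, 0.5, 3.0]  # one hypothetical input
    print(occlusion_attribution(x))  # per-feature contributions to this prediction
```

For a linear model the occlusion attribution of feature i reduces to w_i * x_i, which makes the method easy to sanity-check; for nonlinear models the same procedure yields approximate, input-specific importance scores.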
Author Affiliations:
1. Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
2. Centre for Neuroimaging Research, Nanyang Technological University, Singapore
3. Division of Neurosurgery, Department of Surgery, National University Hospital, Singapore
4. School of Computer Science and Engineering, Nanyang Technological University, Singapore
5. Defence Medical and Environmental Research Institute, DSO National Laboratories, Singapore
Corresponding Author: Shakran Mahmood, Nanyang Technological University (ORCID: 0009-0006-5062-2504; email: SHAKRAN001@e.ntu.edu.sg)
Copyright: 2024 The Author(s). Ibrain published by Affiliated Hospital of Zunyi Medical University (AHZMU) and Wiley-VCH GmbH.
DOI: 10.1002/ibra.12174
Discipline: Anatomy & Physiology
PMCID: PMC11427810
Genre: Journal Article; Review Article; Scoping Review
Funding: None
ISSN: 2313-1934 (print); 2769-2795 (electronic)
Peer reviewed; open access.
Keywords: eXplainable artificial intelligence; XAI models; artificial intelligence; cognition; cognitive neuroscience; neuroscience
License: Attribution (CC BY). 2024 The Author(s). Ibrain published by Affiliated Hospital of Zunyi Medical University (AHZMU) and Wiley-VCH GmbH. This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ license, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
ORCID: 0009-0006-5062-2504 (Mahmood)
Open Access Link: https://doaj.org/article/a0f4a708b0ce48b39e74f7774b86c7a4
PMID: 39346792
Page Count: 21
  publication-title: Ann Intern Med
– volume: 42
  start-page: 2003
  issue: 4
  year: 2015
  end-page: 2021
  article-title: Computing reward‐prediction error: an integrated account of cortical timing and basal‐ganglia pathways for appetitive and aversive learning
  publication-title: Eur J Neurosci
– volume: 24
  start-page: 1
  issue: 3
  year: 2022
  end-page: 18
  article-title: Characteristics and challenges in the industries towards responsible AI: a systematic literature review
  publication-title: Ethics Inf Technol
– year: 2017
– volume: 9
  year: 2022
  article-title: Explainable artificial intelligence (XAI) techniques for energy and power systems: review, challenges and opportunities
  publication-title: Energy and AI
– volume: 69
  year: 2021
  article-title: Combining anatomical and functional networks for neuropathology identification: a case study on autism spectrum disorder
  publication-title: Med Image Anal
– volume: 35
  start-page: 1068
  issue: 17
  year: 2021
  ident: e_1_2_14_42_1
  article-title: Temperament estimation of toddlers from child–robot interaction with explainable artificial intelligence
  publication-title: Adv Robot
  doi: 10.1080/01691864.2021.1955001
– volume: 8
  start-page: 1
  issue: 1
  year: 2021
  ident: e_1_2_14_41_1
  article-title: A deep model for EEG seizure detection with explainable AI using connectivity features focal electrically administered seizure therapy (FEAST) view project reconfigurable active solid state devices view project a deep model for eeg seizure detection with explainable ai using connectivity features
  publication-title: Int J Biomed Eng Sci (IJBES)
– volume: 15
  year: 2021
  ident: e_1_2_14_19_1
  article-title: Editorial: explainable artificial intelligence (XAI) in systems neuroscience
  publication-title: Front Syst Neurosci
  doi: 10.3389/fnsys.2021.766980
– ident: e_1_2_14_24_1
  doi: 10.1097/XEB.0000000000000050
– ident: e_1_2_14_33_1
  doi: 10.1073/pnas.1403112111
– volume: 13
  start-page: 1346
  year: 2019
  ident: e_1_2_14_16_1
  article-title: Explainable artificial intelligence for neuroscience: behavioral neurostimulation
  publication-title: Front Neurosci
  doi: 10.3389/fnins.2019.01346
– ident: e_1_2_14_10_1
  doi: 10.1002/widm.1391
– ident: e_1_2_14_12_1
  doi: 10.1016/j.knosys.2023.110273
– ident: e_1_2_14_40_1
  doi: 10.1177/15500594211063662
– ident: e_1_2_14_54_1
  doi: 10.24963/ijcai.2019/876
– ident: e_1_2_14_18_1
  doi: 10.1155/2018/4283427
– ident: e_1_2_14_28_1
  doi: 10.1145/3359786
– ident: e_1_2_14_59_1
  doi: 10.1016/j.ijar.2023.109112
– ident: e_1_2_14_4_1
  doi: 10.1016/j.ijinfomgt.2021.102383
– ident: e_1_2_14_26_1
  doi: 10.7326/M18-0850
– ident: e_1_2_14_62_1
  doi: 10.1109/VLHCC.2013.6645235
– ident: e_1_2_14_49_1
  doi: 10.1016/j.inffus.2023.101805
– ident: e_1_2_14_37_1
  doi: 10.1101/2022.07.23.501266
– ident: e_1_2_14_36_1
  doi: 10.1038/s41593-023-01304-9
– ident: e_1_2_14_3_1
  doi: 10.1016/j.xinn.2021.100179
– ident: e_1_2_14_7_1
  doi: 10.1145/3236386.3241340
– volume: 16
  year: 2022
  ident: e_1_2_14_35_1
  article-title: Multimodal explainable AI predicts upcoming speech behavior in adults who stutter
  publication-title: Front Neurosci
  doi: 10.3389/fnins.2022.912798
– ident: e_1_2_14_15_1
  doi: 10.1016/j.jbi.2020.103655
– ident: e_1_2_14_25_1
  doi: 10.1186/s12874-018-0611-x
– ident: e_1_2_14_14_1
  doi: 10.6028/NIST.IR.8312
– ident: e_1_2_14_61_1
  doi: 10.1007/s11023-019-09502-w
– ident: e_1_2_14_45_1
  doi: 10.3389/fncom.2020.00029
– ident: e_1_2_14_47_1
  doi: 10.1016/j.aei.2023.102024
– start-page: 682
  volume-title: 2016 Portland International Conference on Management of Engineering and Technology (PICMET)
  year: 2016
  ident: e_1_2_14_2_1
  doi: 10.1109/PICMET.2016.7806752
– ident: e_1_2_14_9_1
  doi: 10.1109/ACCESS.2018.2870052
– start-page: 121
  year: 2021
  ident: e_1_2_14_8_1
  article-title: Model interpretability: advances in interpretable machine learning. explainable artificial intelligence: an introduction to interpretable machine learning
  publication-title: Published online
– ident: e_1_2_14_21_1
  doi: 10.1523/JNEUROSCI.0508-17.2018
– ident: e_1_2_14_57_1
  doi: 10.1093/jamia/ocaa053
– volume: 24
  start-page: 1
  issue: 3
  year: 2022
  ident: e_1_2_14_6_1
  article-title: Characteristics and challenges in the industries towards responsible AI: a systematic literature review
  publication-title: Ethics Inf Technol
  doi: 10.1007/s10676-022-09634-1
– volume: 23
  start-page: 1
  issue: 1
  year: 2020
  ident: e_1_2_14_29_1
  article-title: Explainable AI: a review of machine learning interpretability methods
  publication-title: Entropy (Basel, Switzerland)
– ident: e_1_2_14_39_1
  doi: 10.1016/j.media.2021.101986
– ident: e_1_2_14_46_1
  doi: 10.1002/ail2.61
– ident: e_1_2_14_56_1
  doi: 10.1007/s13218-020-00679-2
– ident: e_1_2_14_60_1
  doi: 10.1007/s10994-023-06335-8
– ident: e_1_2_14_23_1
– ident: e_1_2_14_51_1
  doi: 10.1007/s12045-021-1119-y
– ident: e_1_2_14_50_1
  doi: 10.1016/j.inffus.2019.12.012
– volume: 65
  start-page: 365
  issue: 5
  year: 2020
  ident: e_1_2_14_5_1
  article-title: Artificial intelligence: how is it changing medical sciences and its future?
  publication-title: Indian J Dermatol
  doi: 10.4103/ijd.IJD_421_20
– ident: e_1_2_14_48_1
  doi: 10.3390/make5010006
– ident: e_1_2_14_52_1
  doi: 10.1016/j.inffus.2021.11.003
– ident: e_1_2_14_58_1
  doi: 10.1038/s41592-021-01256-7
– ident: e_1_2_14_53_1
  doi: 10.1007/s11023-023-09637-x
– ident: e_1_2_14_27_1
  doi: 10.7326/0003-4819-151-4-200908180-00135
– ident: e_1_2_14_30_1
  doi: 10.1016/j.eswa.2020.113941
– volume: 4
  start-page: 1077
  issue: 1
  year: 2021
  ident: e_1_2_14_38_1
  article-title: Explainable artificial intelligence based analysis for interpreting infant fNIRS data in developmental cognitive neuroscience
  publication-title: Communications biology
  doi: 10.1038/s42003-021-02534-y
– volume-title: IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
  year: 2021
  ident: e_1_2_14_20_1
– ident: e_1_2_14_17_1
  doi: 10.31887/DCNS.2019.21.3/pharvey
– ident: e_1_2_14_31_1
  doi: 10.1016/j.eswa.2023.122588
– ident: e_1_2_14_55_1
  doi: 10.1109/CVPR.2019.00612
– ident: e_1_2_14_43_1
  doi: 10.1111/ejn.12994
– ident: e_1_2_14_22_1
  doi: 10.1148/radiol.2019190613
– ident: e_1_2_14_32_1
  doi: 10.1016/j.egyai.2022.100169
– ident: e_1_2_14_44_1
  doi: 10.1145/3491102.3501826
– ident: e_1_2_14_11_1
  doi: 10.1109/TNNLS.2020.3027314
– ident: e_1_2_14_13_1
  doi: 10.1016/j.engappai.2022.105606
– ident: e_1_2_14_34_1
  doi: 10.1016/j.neuron.2018.03.044