Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities

Bibliographic Details
Published in Knowledge-Based Systems, Vol. 263, Article 110273
Main Authors Saeed, Waddah; Omlin, Christian
Format Journal Article
Language English
Published Elsevier B.V., 05.03.2023
Abstract The past decade has seen significant progress in artificial intelligence (AI), which has resulted in algorithms being adopted to solve a variety of problems. However, this success has come with increasing model complexity and a reliance on black-box AI models that lack transparency. In response to this need for transparency, Explainable AI (XAI) has been proposed to make AI more transparent and thus advance its adoption in critical domains. Although several reviews in the literature have identified challenges and potential research directions for XAI, these challenges and research directions are scattered. This study therefore presents a systematic meta-survey of challenges and future research directions in XAI, organized in two themes: (1) general challenges and research directions of XAI, and (2) challenges and research directions of XAI based on the phases of the machine learning life cycle: design, development, and deployment. We believe that our meta-survey contributes to the XAI literature by providing a guide for future exploration in the XAI area.
ArticleNumber 110273
Author Details
– Saeed, Waddah (ORCID: 0000-0002-2280-4427); email: waddah.saeed@dmu.ac.uk; organization: Center for Artificial Intelligence (CAIR), University of Agder, Jon Lilletuns vei 9, Grimstad, 4879, Agder, Norway
– Omlin, Christian; email: christian.omlin@uia.no; organization: Center for Artificial Intelligence (CAIR), University of Agder, Jon Lilletuns vei 9, Grimstad, 4879, Agder, Norway
Copyright 2023 The Author(s)
DOI 10.1016/j.knosys.2023.110273
Discipline Computer Science
EISSN 1872-7409
ISSN 0950-7051
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Meta-survey
Black-box
Explainable AI (XAI)
Machine learning
Responsible AI
Interpretable AI
License This is an open access article under the CC BY license.
ORCID 0000-0002-2280-4427
OpenAccessLink https://www.sciencedirect.com/science/article/pii/S0950705123000230
PublicationDate 2023-03-05
PublicationTitle Knowledge-Based Systems
PublicationYear 2023
Publisher Elsevier B.V.
Liang (10.1016/j.knosys.2023.110273_b105) 2021; 419
Yuan (10.1016/j.knosys.2023.110273_b136) 2022; 7
Li (10.1016/j.knosys.2023.110273_b47) 2020
Nunes (10.1016/j.knosys.2023.110273_b48) 2017; 27
Villaronga (10.1016/j.knosys.2023.110273_b145) 2018; 34
Holzinger (10.1016/j.knosys.2023.110273_b113) 2018
10.1016/j.knosys.2023.110273_b71
Wells (10.1016/j.knosys.2023.110273_b110) 2021; 4
Markus (10.1016/j.knosys.2023.110273_b20) 2021; 113
Schwalbe (10.1016/j.knosys.2023.110273_b37) 2021
Doshi-Velez (10.1016/j.knosys.2023.110273_b5) 2017
Ras (10.1016/j.knosys.2023.110273_b36) 2018
Samek (10.1016/j.knosys.2023.110273_b53) 2021; 109
10.1016/j.knosys.2023.110273_b138
Rojat (10.1016/j.knosys.2023.110273_b73) 2021
Zhang (10.1016/j.knosys.2023.110273_b114) 2020; 14
10.1016/j.knosys.2023.110273_b137
References_xml – volume: 8
  start-page: 373
  year: 1995
  end-page: 389
  ident: b123
  article-title: Survey and critique of techniques for extracting rules from trained artificial neural networks
  publication-title: Knowl.-Based Syst.
– year: 2021
  ident: b37
  article-title: A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts
– reference: A. Kotriwala, B. Klöpper, M. Dix, G. Gopalakrishnan, D. Ziobro, A. Potschka, XAI for Operations in the Process Industry – Applications, Theses, and Research Directions, in: AAAI Spring Symposium: Combining Machine Learning with Knowledge Engineering, 2021.
– volume: 34
  start-page: 304
  year: 2018
  end-page: 313
  ident: b145
  article-title: Humans forget, machines remember: Artificial intelligence and the right to be forgotten
  publication-title: Comput. Law Secur. Rev.
– year: 2020
  ident: b47
  article-title: A survey of data-driven and knowledge-aware explainable AI
  publication-title: IEEE Trans. Knowl. Data Eng.
– volume: 14
  start-page: 1
  year: 2020
  end-page: 101
  ident: b114
  article-title: Explainable recommendation: A survey and new perspectives
  publication-title: Found. Trends® Inform. Retr.
– volume: 56
  start-page: 301
  year: 2016
  end-page: 318
  ident: b157
  article-title: Doctor AI: Predicting clinical events via recurrent neural networks
  publication-title: Proceedings of the 1st Machine Learning for Healthcare Conference
– start-page: 1189
  year: 2001
  end-page: 1232
  ident: b98
  article-title: Greedy function approximation: a gradient boosting machine
  publication-title: Ann. Statist.
– start-page: 475
  year: 2020
  end-page: 486
  ident: b64
  article-title: Interpretability of deep learning: A survey
  publication-title: The International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery
– volume: 9
  start-page: 59800
  year: 2021
  end-page: 59821
  ident: b108
  article-title: A review on explainability in multimodal deep neural nets
  publication-title: IEEE Access
– volume: 577
  start-page: 706
  year: 2020
  end-page: 710
  ident: b142
  article-title: Improved protein structure prediction using potentials from deep learning
  publication-title: Nature
– volume: 10
  start-page: 593
  year: 2021
  ident: b27
  article-title: Evaluating the quality of machine learning explanations: A survey on methods and metrics
  publication-title: Electronics
– volume: 54
  year: 2022
  ident: b52
  article-title: A survey on deep learning and explainability for automatic report generation from medical images
  publication-title: ACM Comput. Surv.
– start-page: 1
  year: 2018
  end-page: 18
  ident: b69
  article-title: Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda
  publication-title: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
– reference: M. Danilevsky, K. Qian, R. Aharonov, Y. Katsis, B. Kawas, P. Sen, A Survey of the State of Explainable AI for Natural Language Processing, in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020, pp. 447–459.
– reference: S. Chen, Q. Zhao, REX: Reasoning-aware and Grounded Explanation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 15586–15595.
– volume: 27
  start-page: 1173
  year: 2020
  end-page: 1185
  ident: b9
  article-title: Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review
  publication-title: J. Am. Med. Inform. Assoc.
– volume: 2
  year: 2020
  ident: b43
  article-title: On the interpretability of artificial intelligence in radiology: Challenges and opportunities
  publication-title: Radiol. Artif. Intell.
– volume: 8
  year: 2019
  ident: b26
  article-title: Machine learning interpretability: A survey on methods and metrics
  publication-title: Electronics
– year: 2022
  ident: b86
  article-title: Task-agnostic graph explanations
– year: 2017
  ident: b33
  article-title: Explainable planning
– start-page: 204
  year: 2018
  end-page: 214
  ident: b151
  article-title: Using perceptual and cognitive explanations for enhanced human-agent team performance
  publication-title: Engineering Psychology and Cognitive Ergonomics
– year: 2019
  ident: b78
  article-title: Explainable AI: The Basics
– start-page: 1
  year: 2018
  end-page: 8
  ident: b113
  article-title: Current advances, trends and challenges of machine learning and knowledge extraction: From machine learning to explainable AI
  publication-title: Machine Learning and Knowledge Extraction
– start-page: 56
  year: 2020
  end-page: 88
  ident: b45
  article-title: Survey of XAI in digital pathology
  publication-title: Artificial Intelligence and Machine Learning for Digital Pathology: State-of-the-Art and Future Challenges
– start-page: 0210
  year: 2018
  end-page: 0215
  ident: b41
  article-title: Explainable artificial intelligence: A survey
  publication-title: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics
– volume: 387
  start-page: 346
  year: 2020
  end-page: 358
  ident: b104
  article-title: Extract interpretability-accuracy balanced rules from artificial neural networks: A review
  publication-title: Neurocomputing
– start-page: 23
  year: 2019
  end-page: 40
  ident: b112
  article-title: Transparency: Motivations and challenges
  publication-title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
– volume: 296
  year: 2021
  ident: b156
  article-title: Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
  publication-title: Artificial Intelligence
– start-page: 39
  year: 2021
  ident: b65
  article-title: Principles and practice of explainable machine learning
  publication-title: Front. Big Data
– volume: 28
  start-page: 2326
  year: 2022
  end-page: 2337
  ident: b134
  article-title: VAC-CNN: A visual analytics system for comparative studies of deep convolutional neural networks
  publication-title: IEEE Trans. Vis. Comput. Graphics
– reference: T. Orekondy, B. Schiele, M. Fritz, Knockoff Nets: Stealing Functionality of Black-Box Models, in: Conference on Computer Vision and Pattern Recognition, 2019.
– volume: 11
  year: 2021
  ident: b13
  article-title: A historical perspective of explainable artificial intelligence
  publication-title: WIREs Data Min. Knowl. Discov.
– volume: 53
  start-page: 1655
  year: 2020
  end-page: 1720
  ident: b30
  article-title: Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges
  publication-title: Artif. Intell. Rev.
– year: 2007
  ident: b28
  article-title: Guidelines for Performing Systematic Literature Reviews in Software Engineering
– start-page: 5563
  year: 2022
  end-page: 5582
  ident: b92
  article-title: SHAFF: Fast and consistent Shapley effect estimates via random forests
  publication-title: International Conference on Artificial Intelligence and Statistics
– volume: 24
  start-page: 98
  year: 2018
  end-page: 108
  ident: b125
  article-title: DeepEyes: Progressive visual analytics for designing deep neural networks
  publication-title: IEEE Trans. Vis. Comput. Graphics
– start-page: 5998
  year: 2017
  end-page: 6008
  ident: b97
  article-title: Attention is all you need
  publication-title: Advances in Neural Information Processing Systems
– reference: S.J. Oh, M. Augustin, B. Schiele, M. Fritz, Towards Reverse-Engineering Black-Box Neural Networks, in: International Conference on Learning Representations, 2018.
– volume: 1
  year: 2020
  ident: b81
  article-title: Rapid trust calibration through interpretable and uncertainty-aware AI
  publication-title: Patterns
– start-page: 1134
  year: 2003
  end-page: 1141
  ident: b119
  article-title: A Bayesian approach to unsupervised one-shot learning of object categories
  publication-title: Proceedings Ninth IEEE International Conference on Computer Vision, Vol. 2
– start-page: 1
  year: 2020
  end-page: 16
  ident: b44
  article-title: Explainable artificial intelligence: Concepts, applications, research challenges and visions
  publication-title: Machine Learning and Knowledge Extraction
– volume: 267
  start-page: 1
  year: 2019
  end-page: 38
  ident: b67
  article-title: Explanation in artificial intelligence: Insights from the social sciences
  publication-title: Artificial Intelligence
– volume: 19
  start-page: 207
  year: 2020
  end-page: 233
  ident: b24
  article-title: A survey of surveys on the use of visualization for interpreting machine learning models
  publication-title: Inf. Vis.
– year: 2013
  ident: b16
  article-title: Intriguing properties of neural networks
– year: 2019
  ident: b116
  article-title: The challenge of imputation in explainable artificial intelligence models
  publication-title: Proceedings of the Workshop on Artificial Intelligence Safety
– year: 2021
  ident: b75
  article-title: Explainable reinforcement learning for Broad-XAI: A conceptual framework and survey
– year: 2017
  ident: b96
  article-title: Thinking, Fast and Slow
– volume: 22
  start-page: 18
  year: 2020
  end-page: 33
  ident: b74
  article-title: Causal interpretability for machine learning - problems, methods and evaluation
  publication-title: SIGKDD Explor. Newsl.
– volume: 9
  start-page: 11974
  year: 2021
  end-page: 12001
  ident: b76
  article-title: A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence
  publication-title: IEEE Access
– volume: 10
  year: 2021
  ident: b57
  article-title: Explainable embodied agents through social cues: A review
  publication-title: J. Hum.-Robot Interact.
– year: 2022
  ident: b80
  article-title: Explainable AI for healthcare 5.0: Opportunities and challenges
  publication-title: IEEE Access
– volume: 55
  start-page: 1
  year: 2022
  end-page: 39
  ident: b94
  article-title: A survey of evaluation metrics used for NLG systems
  publication-title: ACM Comput. Surv.
– start-page: 43
  year: 2011
  end-page: 58
  ident: b139
  article-title: Adversarial machine learning
  publication-title: Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence
– start-page: 1
  year: 2019
  end-page: 16
  ident: b49
  article-title: Semantic web technologies for explainable machine learning models: A literature review
  publication-title: PROFILES/SEMEX@ ISWC, Vol. 2465
– volume: 3
  start-page: 966
  year: 2021
  end-page: 989
  ident: b51
  article-title: Analysis of explainers of black box deep neural networks for computer vision: A survey
  publication-title: Mach. Learn. Knowl. Extr.
– volume: 16
  start-page: 31
  year: 2018
  end-page: 57
  ident: b19
  article-title: The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery
  publication-title: Queue
– year: 2021
  ident: b73
  article-title: Explainable artificial intelligence (XAI) on time series data: A survey
– volume: 25
  start-page: 51
  year: 2021
  end-page: 59
  ident: b120
  article-title: Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
  publication-title: IEEE Internet Comput.
– year: 2017
  ident: b14
  article-title: Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models
– start-page: 19
  year: 2018
  end-page: 36
  ident: b36
  article-title: Explanation methods in deep learning: Users, values, concerns and challenges
  publication-title: Explainable and Interpretable Models in Computer Vision and Machine Learning
– volume: 6
  start-page: 52138
  year: 2018
  end-page: 52160
  ident: b1
  article-title: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)
  publication-title: IEEE Access
– volume: 113
  year: 2021
  ident: b20
  article-title: The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies
  publication-title: J. Biomed. Inform.
– volume: 88
  start-page: 1350
  year: 2020
  ident: b144
  article-title: Show us the data: Privacy, explainability, and why the law can’t have both
  publication-title: Geo. Wash. L. Rev.
– year: 2022
  ident: b56
  article-title: Post-hoc interpretability for neural NLP: A survey
  publication-title: ACM Comput. Surv.
– year: 2021
  ident: b66
  article-title: Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions
– start-page: 1078
  year: 2019
  end-page: 1088
  ident: b115
  article-title: Explainable agents and robots: Results from a systematic literature review
  publication-title: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems
– year: 2020
  ident: b59
  article-title: Achievements and challenges in explaining deep learning based computer-aided diagnosis systems
– volume: 37
  year: 2020
  ident: b111
  article-title: A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
  publication-title: Comp. Sci. Rev.
– volume: 3
  start-page: 173
  year: 2019
  end-page: 182
  ident: b129
  article-title: An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets
  publication-title: Nat. Biomed. Eng.
– volume: 801
  year: 2021
  ident: b87
  article-title: Interpretable and explainable AI (XAI) model for spatial drought prediction
  publication-title: Sci. Total Environ.
– start-page: 277
  year: 2019
  end-page: 282
  ident: b149
  article-title: Explainable AI planning (XAIP): Overview and the case of contrastive explanation (extended abstract)
  publication-title: Reasoning Web. Explainable Artificial Intelligence: 15th International Summer School 2019, Bolzano, Italy, September 20–24, 2019, Tutorial Lectures
– reference: M.T. Ribeiro, S. Singh, C. Guestrin, “Why should I trust you?” Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
– start-page: 629
  year: 2020
  end-page: 639
  ident: b155
  article-title: Doctor XAI: An ontology-based approach to black-box sequential data classification explanations
  publication-title: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
– volume: 11
  start-page: 5088
  year: 2021
  ident: b55
  article-title: Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: a systematic review
  publication-title: Appl. Sci.
– volume: 4
  start-page: 122
  year: 2020
  end-page: 131
  ident: b132
  article-title: A visual analytics system for multi-model comparison on clinical data predictions
  publication-title: Vis. Inform.
– volume: 23
  start-page: 1342
  year: 2021
  end-page: 1397
  ident: b147
  article-title: Federated machine learning: Survey, multi-level classification, desirable criteria and future directions in communication and networking systems
  publication-title: IEEE Commun. Surv. Tutor.
– year: 2020
  ident: b117
  article-title: Dimensions of Data Quality (DDQ)
– year: 2017
  ident: b5
  article-title: Towards a rigorous science of interpretable machine learning
– volume: 419
  start-page: 168
  year: 2021
  end-page: 182
  ident: b105
  article-title: Explaining the black-box model: A survey of local interpretation methods for deep neural networks
  publication-title: Neurocomputing
– volume: 11
  start-page: 125
  year: 2020
  end-page: 138
  ident: b158
  article-title: Ontology engineering: Current state, challenges, and future directions
  publication-title: Semant. Web
– volume: 1
  start-page: 206
  year: 2019
  end-page: 215
  ident: b25
  article-title: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
  publication-title: Nat. Mach. Intell.
– start-page: 1
  year: 2019
  end-page: 5
  ident: b141
  article-title: Generative counterfactual introspection for explainable deep learning
  publication-title: 2019 IEEE Global Conference on Signal and Information Processing
– start-page: 63
  year: 2018
  end-page: 72
  ident: b35
  article-title: Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges
  publication-title: Intelligent Systems in Accounting, Finance and Management, Vol. 25, No. 2
– reference: L. Veiber, K. Allix, Y. Arslan, T.F. Bissyandé, J. Klein, Challenges towards production-ready explainable machine learning, in: 2020 USENIX Conference on Operational Machine Learning, OpML 20, 2020.
– volume: 109
  start-page: 247
  year: 2021
  end-page: 278
  ident: b53
  article-title: Explaining deep neural networks and beyond: A review of methods and applications
  publication-title: Proc. IEEE
– volume: 41
  start-page: 647
  year: 2014
  end-page: 665
  ident: b89
  article-title: Explaining prediction models and individual predictions with feature contributions
  publication-title: Knowl. Inf. Syst.
– volume: 104
  start-page: 1
  year: 2015
  end-page: 12
  ident: b31
  article-title: The artificial neural network for solar radiation prediction and designing solar systems: a systematic literature review
  publication-title: J. Clean. Prod.
– volume: 73
  year: 2022
  ident: b50
  article-title: Explainable deep learning: A field guide for the uninitiated
  publication-title: J. Artif. Int. Res.
– year: 2021
  ident: b61
  article-title: Explainable artificial intelligence approaches: A survey
– start-page: 1
  year: 2020
  end-page: 21
  ident: b109
  article-title: A survey on explainable artificial intelligence (XAI): Toward medical XAI
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– start-page: 1273
  year: 2017
  end-page: 1282
  ident: b146
  article-title: Communication-efficient learning of deep networks from decentralized data
  publication-title: Artificial Intelligence and Statistics
– volume: 27
  start-page: 393
  year: 2017
  end-page: 444
  ident: b48
  article-title: A systematic review and taxonomy of explanations in decision support and recommender systems
  publication-title: User Model. User Adapt. Interact.
– reference: D.H. Park, L.A. Hendricks, Z. Akata, A. Rohrbach, B. Schiele, T. Darrell, M. Rohrbach, Multimodal Explanations: Justifying Decisions and Pointing to the Evidence, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
– start-page: 3
  year: 2019
  end-page: 7
  ident: b93
  article-title: Natural language generation challenges for explainable AI
  publication-title: Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
– reference: O. Biran, C. Cotton, Explanation and justification in machine learning: A survey, in: IJCAI-17 Workshop on Explainable AI (XAI), Vol. 8, No. 1, 2017, pp. 8–13.
– volume: 5
  start-page: 199
  year: 1993
  end-page: 220
  ident: b154
  article-title: A translation approach to portable ontology specifications
  publication-title: Knowl. Acquis.
– start-page: 539
  year: 2017
  end-page: 562
  ident: b77
  article-title: The state-of-the-art in predictive visual analytics
  publication-title: Computer Graphics Forum, Vol. 36, No. 3
– volume: 7
  start-page: eabm4183
  year: 2022
  ident: b136
  article-title: In situ bidirectional human-robot value alignment
  publication-title: Sci. Robot.
– year: 2019
  ident: b130
  article-title: Proposed guidelines for the responsible use of explainable machine learning
– volume: 1
  start-page: 1
  year: 2021
  ident: b58
  article-title: Recent advances in trustworthy explainable artificial intelligence: Status, challenges and perspectives
  publication-title: IEEE Trans. Artif. Intell.
– year: 2019
  ident: b11
  article-title: Interpretable Machine Learning
– start-page: 212
  year: 2020
  end-page: 228
  ident: b106
  article-title: Explainable recommendations in intelligent systems: Delivery methods, modalities and risks
  publication-title: Research Challenges in Information Science
– volume: 28
  start-page: 3395
  year: 2016
  end-page: 3408
  ident: b135
  article-title: Towards Bayesian deep learning: A framework and some existing methods
  publication-title: IEEE Trans. Knowl. Data Eng.
– start-page: 5
  year: 2019
  end-page: 22
  ident: b8
  article-title: Towards explainable artificial intelligence
  publication-title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
– volume: 38
  start-page: 84
  year: 2018
  end-page: 92
  ident: b102
  article-title: Visual analytics for explainable deep learning
  publication-title: IEEE Comput. Graph. Appl.
– volume: 70
  start-page: 245
  year: 2021
  end-page: 317
  ident: b18
  article-title: A survey on the explainability of supervised machine learning
  publication-title: J. Artificial Intelligence Res.
– volume: 112
  year: 2021
  ident: b39
  article-title: Interpretable visual reasoning: A survey
  publication-title: Image Vis. Comput.
– reference: D. Gunning, Broad Agency Announcement: Explainable Artificial Intelligence (XAI), Technical report, 2016.
– year: 2020
  ident: b152
  article-title: H2O AutoML: Scalable automatic machine learning
  publication-title: 7th ICML Workshop on Automated Machine Learning
– volume: 12
  year: 2017
  ident: b15
  article-title: “What is relevant in a text document?”: An interpretable machine learning approach
  publication-title: PLoS One
– start-page: 1
  year: 2022
  end-page: 19
  ident: b72
  article-title: Explainability in graph neural networks: A taxonomic survey
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– reference: J.M. Darias, B. Díaz-Agudo, J.A. Recio-García, A Systematic Review on Model-agnostic XAI Libraries, in: ICCBR Workshops, 2021, pp. 28–39.
– start-page: 417
  year: 2020
  end-page: 431
  ident: b42
  article-title: Interpretable machine learning – A brief history, state-of-the-art and challenges
  publication-title: ECML PKDD 2020 Workshops
– volume: 80
  start-page: 78
  year: 2015
  end-page: 97
  ident: b29
  article-title: Systematic mapping study on granular computing
  publication-title: Knowl.-Based Syst.
– volume: 4
  year: 2019
  ident: b34
  article-title: XAI—Explainable artificial intelligence
  publication-title: Sci. Robot.
– volume: 24
  start-page: 77
  year: 2018
  end-page: 87
  ident: b126
  article-title: Analyzing the training processes of deep generative models
  publication-title: IEEE Trans. Vis. Comput. Graphics
– start-page: 217
  year: 2021
  end-page: 267
  ident: b68
  article-title: Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
  publication-title: Interpretable Artificial Intelligence: A Perspective of Granular Computing
– volume: 119
  start-page: 1829
  year: 2019
  end-page: 1850
  ident: b82
  article-title: The judicial demand for explainable artificial intelligence
  publication-title: Columbia Law Rev.
– volume: 116
  start-page: 22071
  year: 2019
  end-page: 22080
  ident: b90
  article-title: Definitions, methods, and applications in interpretable machine learning
  publication-title: Proc. Natl. Acad. Sci.
– start-page: 447
  year: 2018
  ident: b32
  article-title: Interpretable machine learning in healthcare
  publication-title: 2018 IEEE International Conference on Healthcare Informatics
– volume: 38
  start-page: 50
  year: 2017
  end-page: 57
  ident: b7
  article-title: European Union regulations on algorithmic decision-making and a “right to explanation”
  publication-title: AI Mag.
– volume: 79
  year: 2022
  ident: b127
  article-title: Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
  publication-title: Med. Image Anal.
– volume: 51
  year: 2018
  ident: b10
  article-title: A survey of methods for explaining black box models
  publication-title: ACM Comput. Surv.
– start-page: 10282
  year: 2020
  end-page: 10291
  ident: b91
  article-title: Efficient nonparametric statistical inference on population feature importance using Shapley values
  publication-title: International Conference on Machine Learning
– start-page: 4762
  year: 2017
  end-page: 4763
  ident: b150
  article-title: Explainable agency for intelligent autonomous systems
  publication-title: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence
– year: 2021
  ident: b153
  article-title: MLJAR: State-of-the-art Automated Machine Learning Framework for Tabular Data. Version 0.10.3
– year: 2016
  ident: b148
  article-title: Federated learning: Strategies for improving communication efficiency
– year: 2020
  ident: b70
  article-title: Explainable artificial intelligence: a systematic review
– year: 2020
  ident: b100
  article-title: Flexible and context-specific AI explainability: a multidisciplinary approach
– volume: 4
  start-page: 48
  year: 2021
  ident: b110
  article-title: Explainable AI and reinforcement learning—A systematic review of current approaches and trends
  publication-title: Front. Artif. Intell.
– volume: 13
  start-page: 71
  year: 1993
  end-page: 101
  ident: b121
  article-title: Extracting refined rules from knowledge-based neural networks
  publication-title: Mach. Learn.
– year: 2019
  ident: b60
  article-title: Explanation in human-AI systems: A literature meta-review, synopsis of key ideas and publications, and bibliography for explainable AI
– start-page: 427
  year: 2015
  end-page: 436
  ident: b17
  article-title: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
  publication-title: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
– start-page: 2045
  year: 2019
  end-page: 2048
  ident: b128
  article-title: An interpretable ensemble deep learning model for diabetic retinopathy disease classification
  publication-title: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society
– year: 2017
  ident: b99
  article-title: Accountability of AI under the law: The role of explanation
– start-page: 699
  year: 2020
  ident: b6
  article-title: Explainable AI in industry: Practical challenges and lessons learned: Implications tutorial
  publication-title: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
– start-page: 818
  year: 2014
  end-page: 833
  ident: b131
  article-title: Visualizing and understanding convolutional networks
  publication-title: Computer Vision – ECCV 2014
– volume: 58
  start-page: 82
  year: 2020
  end-page: 115
  ident: b2
  article-title: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
  publication-title: Inf. Fusion
– volume: 54
  year: 2021
  ident: b40
  article-title: A survey on bias and fairness in machine learning
  publication-title: ACM Comput. Surv.
– volume: 8
  start-page: 191969
  year: 2020
  end-page: 191985
  ident: b46
  article-title: Review study of interpretation methods for future interpretable machine learning
  publication-title: IEEE Access
– start-page: 518
  year: 2020
  end-page: 533
  ident: b62
  article-title: Personalising explainable recommendations: Literature and conceptualisation
  publication-title: Trends and Innovations in Information Systems and Technologies
– reference: D. Slack, S. Hilgard, E. Jia, S. Singh, H. Lakkaraju, Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods, in: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 180–186.
– reference: C.F. Baumgartner, L.M. Koch, K.C. Tezcan, J.X. Ang, E. Konukoglu, Visual feature attribution using wasserstein gans, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8309–8319.
– reference: D.L. Arendt, N. Nur, Z. Huang, G. Fair, W. Dou, Parallel embeddings: a visualization technique for contrasting learned representations, in: Proceedings of the 25th International Conference on Intelligent User Interfaces, 2020, pp. 259–274.
– start-page: 212
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b106
  article-title: Explainable recommendations in intelligent systems: Delivery methods, modalities and risks
– start-page: 19
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b36
  article-title: Explanation methods in deep learning: Users, values, concerns and challenges
– year: 2017
  ident: 10.1016/j.knosys.2023.110273_b99
– ident: 10.1016/j.knosys.2023.110273_b133
  doi: 10.1145/3377325.3377514
– ident: 10.1016/j.knosys.2023.110273_b107
– year: 2016
  ident: 10.1016/j.knosys.2023.110273_b148
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b152
  article-title: H2O autoML: Scalable automatic machine learning
– volume: 4
  start-page: 122
  issue: 2
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b132
  article-title: A visual analytics system for multi-model comparison on clinical data predictions
  publication-title: Vis. Inform.
  doi: 10.1016/j.visinf.2020.04.005
– volume: 51
  issue: 5
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b10
  article-title: A survey of methods for explaining black box models
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3236009
– volume: 79
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b127
  article-title: Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
  publication-title: Med. Image Anal.
  doi: 10.1016/j.media.2022.102470
– volume: 38
  start-page: 84
  issue: 04
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b102
  article-title: Visual analytics for explainable deep learning
  publication-title: IEEE Comput. Graph. Appl.
  doi: 10.1109/MCG.2018.042731661
– volume: 38
  start-page: 50
  issue: 3
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b7
  article-title: European union regulations on algorithmic decision-making and a “right to explanation”
  publication-title: AI Mag.
– volume: 3
  start-page: 966
  issue: 4
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b51
  article-title: Analysis of explainers of black box deep neural networks for computer vision: A survey
  publication-title: Mach. Learn. Knowl. Extr.
  doi: 10.3390/make3040048
– volume: 12
  issue: 8
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b15
  article-title: "What is relevant in a text document?": An interpretable machine learning approach
  publication-title: PLoS One
  doi: 10.1371/journal.pone.0181142
– ident: 10.1016/j.knosys.2023.110273_b137
  doi: 10.1109/CVPR.2019.00509
– start-page: 447
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b32
  article-title: Interpretable machine learning in healthcare
– year: 2017
  ident: 10.1016/j.knosys.2023.110273_b14
– year: 2019
  ident: 10.1016/j.knosys.2023.110273_b130
– year: 2017
  ident: 10.1016/j.knosys.2023.110273_b96
– volume: 53
  start-page: 18
  end-page: 28
  issue: 8
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b21
  article-title: A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence
  publication-title: Computer
  doi: 10.1109/MC.2020.2996587
– volume: 109
  start-page: 247
  issue: 3
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b53
  article-title: Explaining deep neural networks and beyond: A review of methods and applications
  publication-title: Proc. IEEE
  doi: 10.1109/JPROC.2021.3060483
– volume: 54
  issue: 10s
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b52
  article-title: A survey on deep learning and explainability for automatic report generation from medical images
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3522747
– start-page: 10282
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b91
  article-title: Efficient nonparametric statistical inference on population feature importance using Shapley values
– volume: 37
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b111
  article-title: A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
  publication-title: Comp. Sci. Rev.
– volume: 9
  start-page: 11974
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b76
  article-title: A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2021.3051315
– ident: 10.1016/j.knosys.2023.110273_b71
– start-page: 629
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b155
  article-title: Doctor XAI: An ontology-based approach to black-box sequential data classification explanations
– volume: 27
  start-page: 1173
  issue: 7
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b9
  article-title: Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review
  publication-title: J. Am. Med. Inform. Assoc.
  doi: 10.1093/jamia/ocaa053
– year: 2017
  ident: 10.1016/j.knosys.2023.110273_b5
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b23
  article-title: A survey on neural network interpretability
– start-page: 818
  year: 2014
  ident: 10.1016/j.knosys.2023.110273_b131
  article-title: Visualizing and understanding convolutional networks
– volume: 56
  start-page: 301
  year: 2016
  ident: 10.1016/j.knosys.2023.110273_b157
  article-title: Doctor AI: Predicting clinical events via recurrent neural networks
– volume: 19
  start-page: 27
  end-page: 39
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b103
  article-title: Visual interpretability for deep learning: a survey
  publication-title: Front. Inf. Technol. Electron. Eng.
  doi: 10.1631/FITEE.1700808
– start-page: 699
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b6
  article-title: Explainable AI in industry: Practical challenges and lessons learned: Implications tutorial
– start-page: 56
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b45
  article-title: Survey of XAI in digital pathology
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b61
– volume: 7
  start-page: eabm4183
  issue: 68
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b136
  article-title: In situ bidirectional human-robot value alignment
  publication-title: Science Robotics
  doi: 10.1126/scirobotics.abm4183
– volume: 34
  start-page: 304
  issue: 2
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b145
  article-title: Humans forget, machines remember: Artificial intelligence and the right to be forgotten
  publication-title: Comput. Law Secur. Rev.
  doi: 10.1016/j.clsr.2017.08.007
– start-page: 1
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b49
  article-title: Semantic web technologies for explainable machine learning models: A literature review
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b101
  article-title: Demystifying deep neural networks through interpretation: A survey
– volume: 9
  start-page: 44
  end-page: 55
  issue: 1
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b143
  article-title: Learning from explanations using sentiment and advice in RL
  publication-title: IEEE Trans. Cogn. Dev. Syst.
  doi: 10.1109/TCDS.2016.2628365
– volume: 53
  start-page: 1655
  issue: 3
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b30
  article-title: Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges
  publication-title: Artif. Intell. Rev.
  doi: 10.1007/s10462-019-09716-5
– start-page: 417
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b42
  article-title: Interpretable machine learning – A brief history, state-of-the-art and challenges
– start-page: 475
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b64
  article-title: Interpretability of deep learning: A survey
– volume: 1
  start-page: 206
  issue: 5
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b25
  article-title: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
  publication-title: Nat. Mach. Intell.
  doi: 10.1038/s42256-019-0048-x
– volume: 28
  start-page: 2326
  issue: 6
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b134
  article-title: VAC-CNN: A visual analytics system for comparative studies of deep convolutional neural networks
  publication-title: IEEE Trans. Vis. Comput. Graphics
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b153
– volume: 69
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b118
  article-title: A survey on incorporating domain knowledge into deep learning for medical image analysis
  publication-title: Med. Image Anal.
  doi: 10.1016/j.media.2021.101985
– start-page: 2045
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b128
  article-title: An interpretable ensemble deep learning model for diabetic retinopathy disease classification
– volume: 11
  issue: 1
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b13
  article-title: A historical perspective of explainable artificial intelligence
  publication-title: WIREs Data Min. Knowl. Discov.
– volume: 10
  start-page: 593
  issue: 5
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b27
  article-title: Evaluating the quality of machine learning explanations: A survey on methods and metrics
  publication-title: Electronics
  doi: 10.3390/electronics10050593
– volume: 88
  start-page: 1350
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b144
  article-title: Show us the data: Privacy, explainability, and why the law can’t have both
  publication-title: Geo. Wash. L. Rev.
– ident: 10.1016/j.knosys.2023.110273_b12
– volume: 8
  start-page: 191969
  end-page: 191985
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b46
  article-title: Review study of interpretation methods for future interpretable machine learning
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2020.3032756
– ident: 10.1016/j.knosys.2023.110273_b140
  doi: 10.1109/CVPR.2018.00867
– start-page: 5998
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b97
  article-title: Attention is all you need
– start-page: 1
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b109
  article-title: A survey on explainable artificial intelligence (XAI): Toward medical XAI
  publication-title: IEEE Trans. Neural Netw. Learn. Syst.
– volume: 11
  issue: 10
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b54
  article-title: A review of explainable deep learning cancer detection models in medical imaging
  publication-title: Appl. Sci.
  doi: 10.3390/app11104573
– year: 2019
  ident: 10.1016/j.knosys.2023.110273_b11
– volume: 4
  start-page: 48
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b110
  article-title: Explainable AI and reinforcement learning—A systematic review of current approaches and trends
  publication-title: Front. Artif. Intell.
  doi: 10.3389/frai.2021.550030
– ident: 10.1016/j.knosys.2023.110273_b63
– volume: 9
  start-page: 41
  end-page: 52
  issue: 1
  year: 1996
  ident: 10.1016/j.knosys.2023.110273_b122
  article-title: Extraction of rules from discrete-time recurrent neural networks
  publication-title: Neural Netw.
  doi: 10.1016/0893-6080(95)00086-0
– year: 2012
  ident: 10.1016/j.knosys.2023.110273_b95
  article-title: Not Exactly: In Praise of Vagueness
– year: 2019
  ident: 10.1016/j.knosys.2023.110273_b78
– start-page: 1
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b44
  article-title: Explainable artificial intelligence: Concepts, applications, research challenges and visions
– start-page: 1273
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b146
  article-title: Communication-efficient learning of deep networks from decentralized data
– start-page: 427
  year: 2015
  ident: 10.1016/j.knosys.2023.110273_b17
  article-title: Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
– start-page: 0210
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b41
  article-title: Explainable artificial intelligence: A survey
– volume: 801
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b87
  article-title: Interpretable and explainable AI (XAI) model for spatial drought prediction
  publication-title: Sci. Total Environ.
  doi: 10.1016/j.scitotenv.2021.149797
– ident: 10.1016/j.knosys.2023.110273_b84
  doi: 10.1109/CVPR52688.2022.01514
– start-page: 518
  end-page: 533
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b62
  article-title: Personalising explainable recommendations: Literature and conceptualisation
  publication-title: Trends and Innovations in Information Systems and Technologies
– volume: 577
  start-page: 706
  issue: 7792
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b142
  article-title: Improved protein structure prediction using potentials from deep learning
  publication-title: Nature
  doi: 10.1038/s41586-019-1923-7
– ident: 10.1016/j.knosys.2023.110273_b4
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b37
– volume: 9
  start-page: 59800
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b108
  article-title: A review on explainability in multimodal deep neural nets
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2021.3070212
– volume: 116
  start-page: 22071
  issue: 44
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b90
  article-title: Definitions, methods, and applications in interpretable machine learning
  publication-title: Proc. Natl. Acad. Sci.
  doi: 10.1073/pnas.1900654116
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b100
– start-page: 277
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b149
  article-title: Explainable AI planning (XAIP): Overview and the case of contrastive explanation (extended abstract)
– start-page: 1
  end-page: 6
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b22
  article-title: Interpretability of deep learning models: A survey of results
  publication-title: 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b73
– ident: 10.1016/j.knosys.2023.110273_b79
  doi: 10.1145/3375627.3375830
– year: 2007
  ident: 10.1016/j.knosys.2023.110273_b28
– start-page: 39
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b65
  article-title: Principles and practice of explainable machine learning
  publication-title: Front. Big Data
– year: 2022
  ident: 10.1016/j.knosys.2023.110273_b56
  article-title: Post-hoc interpretability for neural NLP: A survey
  publication-title: ACM Comput. Surv.
– volume: 55
  start-page: 1
  issue: 2
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b94
  article-title: A survey of evaluation metrics used for NLG systems
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3485766
– volume: 2
  issue: 3
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b43
  article-title: On the interpretability of artificial intelligence in radiology: Challenges and opportunities
  publication-title: Radiol. Artif. Intell.
  doi: 10.1148/ryai.2020190043
– volume: 5
  start-page: 199
  issue: 2
  year: 1993
  ident: 10.1016/j.knosys.2023.110273_b154
  article-title: A translation approach to portable ontology specifications
  publication-title: Knowl. Acquis.
  doi: 10.1006/knac.1993.1008
– volume: 419
  start-page: 168
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b105
  article-title: Explaining the black-box model: A survey of local interpretation methods for deep neural networks
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.08.011
– start-page: 4762
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b150
  article-title: Explainable agency for intelligent autonomous systems
– volume: 54
  issue: 6
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b40
  article-title: A survey on bias and fairness in machine learning
  publication-title: ACM Comput. Surv.
  doi: 10.1145/3457607
– volume: 1
  issue: 4
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b81
  article-title: Rapid trust calibration through interpretable and uncertainty-aware AI
  publication-title: Patterns
  doi: 10.1016/j.patter.2020.100049
– start-page: 5563
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b92
  article-title: SHAFF: Fast and consistent shapley effect estimates via random forests
– volume: 119
  start-page: 1829
  issue: 7
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b82
  article-title: The judicial demand for explainable artificial intelligence
  publication-title: Columbia Law Rev.
– volume: 6
  start-page: 52138
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b1
  article-title: Peeking inside the black-box: A survey on explainable artificial intelligence (XAI)
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2018.2870052
– start-page: 3
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b93
  article-title: Natural language generation challenges for explainable AI
– volume: 104
  start-page: 1
  year: 2015
  ident: 10.1016/j.knosys.2023.110273_b31
  article-title: The artificial neural network for solar radiation prediction and designing solar systems: a systematic literature review
  publication-title: J. Clean. Prod.
  doi: 10.1016/j.jclepro.2015.04.041
– start-page: 63
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b35
  article-title: Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b117
– volume: 296
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b156
  article-title: Using ontologies to enhance human understandability of global post-hoc explanations of black-box models
  publication-title: Artificial Intelligence
  doi: 10.1016/j.artint.2021.103471
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b70
– volume: 23
  start-page: 1342
  issue: 2
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b147
  article-title: Federated machine learning: Survey, multi-level classification, desirable criteria and future directions in communication and networking systems
  publication-title: IEEE Commun. Surv. Tutor.
  doi: 10.1109/COMST.2021.3058573
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b75
– volume: 24
  start-page: 98
  issue: 1
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b125
  article-title: DeepEyes: Progressive visual analytics for designing deep neural networks
  publication-title: IEEE Trans. Vis. Comput. Graphics
  doi: 10.1109/TVCG.2017.2744358
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b38
  article-title: On interpretability of artificial neural networks: A survey
  publication-title: IEEE Trans. Radiat. Plasma Med. Sci.
  doi: 10.1109/TRPMS.2021.3066428
– start-page: 1
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b141
  article-title: Generative counterfactual introspection for explainable deep learning
– volume: 58
  start-page: 82
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b2
  article-title: Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
  publication-title: Inf. Fusion
  doi: 10.1016/j.inffus.2019.12.012
– start-page: 43
  year: 2011
  ident: 10.1016/j.knosys.2023.110273_b139
  article-title: Adversarial machine learning
– start-page: 1189
  year: 2001
  ident: 10.1016/j.knosys.2023.110273_b98
  article-title: Greedy function approximation: a gradient boosting machine
  publication-title: Ann. Statist.
– volume: 22
  start-page: 18
  issue: 1
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b74
  article-title: Causal interpretability for machine learning - problems, methods and evaluation
  publication-title: SIGKDD Explor. Newsl.
  doi: 10.1145/3400051.3400058
– volume: 25
  start-page: 51
  issue: 1
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b120
  article-title: Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable?
  publication-title: IEEE Internet Comput.
  doi: 10.1109/MIC.2020.3031769
– start-page: 1078
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b115
  article-title: Explainable agents and robots: Results from a systematic literature review
– start-page: 5
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b8
  article-title: Towards explainable artificial intelligence
– volume: 16
  start-page: 31
  issue: 3
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b19
  article-title: The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery
  publication-title: Queue
  doi: 10.1145/3236386.3241340
– volume: 73
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b50
  article-title: Explainable deep learning: A field guide for the uninitiated
  publication-title: J. Artif. Int. Res.
– volume: 4
  issue: 37
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b34
  article-title: XAI—Explainable artificial intelligence
  publication-title: Science Robotics
  doi: 10.1126/scirobotics.aay7120
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b47
  article-title: A survey of data-driven and knowledge-aware explainable AI
  publication-title: IEEE Trans. Knowl. Data Eng.
  doi: 10.1109/TKDE.2020.2983930
– volume: 113
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b20
  article-title: The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies
  publication-title: J. Biomed. Inform.
  doi: 10.1016/j.jbi.2020.103655
– volume: 80
  start-page: 78
  year: 2015
  ident: 10.1016/j.knosys.2023.110273_b29
  article-title: Systematic mapping study on granular computing
  publication-title: Knowl.-Based Syst.
  doi: 10.1016/j.knosys.2015.02.018
– year: 2022
  ident: 10.1016/j.knosys.2023.110273_b85
  article-title: LoMEF: A framework to produce local explanations for global model time series forecasts
  publication-title: Int. J. Forecast.
– start-page: 23
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b112
  article-title: Transparency: Motivations and challenges
– ident: 10.1016/j.knosys.2023.110273_b138
– volume: 19
  start-page: 207
  issue: 3
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b24
  article-title: A survey of surveys on the use of visualization for interpreting machine learning models
  publication-title: Inf. Vis.
  doi: 10.1177/1473871620904671
– start-page: 1
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b69
  article-title: Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda
– year: 2019
  ident: 10.1016/j.knosys.2023.110273_b116
  article-title: The challenge of imputation in explainable artificial intelligence models
– volume: 8
  start-page: 373
  issue: 6
  year: 1995
  ident: 10.1016/j.knosys.2023.110273_b123
  article-title: Survey and critique of techniques for extracting rules from trained artificial neural networks
  publication-title: Knowl.-Based Syst.
  doi: 10.1016/0950-7051(96)81920-4
– start-page: 457
  end-page: 473
  year: 2016
  ident: 10.1016/j.knosys.2023.110273_b124
  article-title: DeepRED – rule extraction from deep neural networks
  publication-title: Discovery Science
– volume: 8
  issue: 8
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b26
  article-title: Machine learning interpretability: A survey on methods and metrics
  publication-title: Electronics
  doi: 10.3390/electronics8080832
– ident: 10.1016/j.knosys.2023.110273_b88
  doi: 10.1145/2939672.2939778
– ident: 10.1016/j.knosys.2023.110273_b3
– year: 2021
  ident: 10.1016/j.knosys.2023.110273_b66
– volume: 387
  start-page: 346
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b104
  article-title: Extract interpretability-accuracy balanced rules from artificial neural networks: A review
  publication-title: Neurocomputing
  doi: 10.1016/j.neucom.2020.01.036
– volume: 267
  start-page: 1
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b67
  article-title: Explanation in artificial intelligence: Insights from the social sciences
  publication-title: Artificial Intelligence
  doi: 10.1016/j.artint.2018.07.007
– volume: 3
  start-page: 173
  issue: 3
  year: 2019
  ident: 10.1016/j.knosys.2023.110273_b129
  article-title: An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets
  publication-title: Nat. Biomed. Eng.
  doi: 10.1038/s41551-018-0324-9
– year: 2017
  ident: 10.1016/j.knosys.2023.110273_b33
– year: 2020
  ident: 10.1016/j.knosys.2023.110273_b59
– start-page: 539
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b77
  article-title: The state-of-the-art in predictive visual analytics
– volume: 11
  start-page: 5088
  issue: 11
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b55
  article-title: Current challenges and future opportunities for XAI in machine learning-based clinical decision support systems: a systematic review
  publication-title: Appl. Sci.
  doi: 10.3390/app11115088
– year: 2013
  ident: 10.1016/j.knosys.2023.110273_b16
– volume: 27
  start-page: 393
  issue: 3
  year: 2017
  ident: 10.1016/j.knosys.2023.110273_b48
  article-title: A systematic review and taxonomy of explanations in decision support and recommender systems
  publication-title: User Model. User Adapt. Interact.
  doi: 10.1007/s11257-017-9195-0
– volume: 1
  start-page: 1
  issue: 01
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b58
  article-title: Recent advances in trustworthy explainable artificial intelligence: Status, challenges and perspectives
  publication-title: IEEE Trans. Artif. Intell.
– volume: 41
  start-page: 647
  issue: 3
  year: 2014
  ident: 10.1016/j.knosys.2023.110273_b89
  article-title: Explaining prediction models and individual predictions with feature contributions
  publication-title: Knowl. Inf. Syst.
  doi: 10.1007/s10115-013-0679-x
– volume: 13
  start-page: 71
  issue: 1
  year: 1993
  ident: 10.1016/j.knosys.2023.110273_b121
  article-title: Extracting refined rules from knowledge-based neural networks
  publication-title: Mach. Learn.
  doi: 10.1007/BF00993103
– start-page: 217
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b68
  article-title: Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
– start-page: 1
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b113
  article-title: Current advances, trends and challenges of machine learning and knowledge extraction: From machine learning to explainable AI
– start-page: 1
  year: 2022
  ident: 10.1016/j.knosys.2023.110273_b72
  article-title: Explainability in graph neural networks: A taxonomic survey
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– year: 2022
  ident: 10.1016/j.knosys.2023.110273_b80
  article-title: Explainable AI for healthcare 5.0: Opportunities and challenges
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2022.3197671
– volume: 70
  start-page: 245
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b18
  article-title: A survey on the explainability of supervised machine learning
  publication-title: J. Artificial Intelligence Res.
  doi: 10.1613/jair.1.12228
– start-page: 204
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b151
  article-title: Using perceptual and cognitive explanations for enhanced human-agent team performance
– year: 2019
  ident: 10.1016/j.knosys.2023.110273_b60
– volume: 11
  start-page: 125
  issue: 1
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b158
  article-title: Ontology engineering: Current state, challenges, and future directions
  publication-title: Semant. Web
  doi: 10.3233/SW-190382
– ident: 10.1016/j.knosys.2023.110273_b83
  doi: 10.1109/CVPR.2018.00915
– start-page: 1134
  year: 2003
  ident: 10.1016/j.knosys.2023.110273_b119
  article-title: A Bayesian approach to unsupervised one-shot learning of object categories
– volume: 112
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b39
  article-title: Interpretable visual reasoning: A survey
  publication-title: Image Vis. Comput.
  doi: 10.1016/j.imavis.2021.104194
– volume: 24
  start-page: 77
  issue: 1
  year: 2018
  ident: 10.1016/j.knosys.2023.110273_b126
  article-title: Analyzing the training processes of deep generative models
  publication-title: IEEE Trans. Vis. Comput. Graphics
  doi: 10.1109/TVCG.2017.2744938
– year: 2022
  ident: 10.1016/j.knosys.2023.110273_b86
– volume: 10
  issue: 3
  year: 2021
  ident: 10.1016/j.knosys.2023.110273_b57
  article-title: Explainable embodied agents through social cues: A review
  publication-title: J. Hum.-Robot Interact.
  doi: 10.1145/3457188
– volume: 28
  start-page: 3395
  issue: 12
  year: 2016
  ident: 10.1016/j.knosys.2023.110273_b135
  article-title: Towards Bayesian deep learning: A framework and some existing methods
  publication-title: IEEE Trans. Knowl. Data Eng.
  doi: 10.1109/TKDE.2016.2606428
– volume: 14
  start-page: 1
  issue: 1
  year: 2020
  ident: 10.1016/j.knosys.2023.110273_b114
  article-title: Explainable recommendation: A survey and new perspectives
  publication-title: Found. Trends® Inform. Retr.
  doi: 10.1561/1500000066
StartPage 110273
SubjectTerms Black-box
Deep learning
Explainable AI (XAI)
Interpretable AI
Machine learning
Meta-survey
Responsible AI
Title Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities
URI https://dx.doi.org/10.1016/j.knosys.2023.110273
Volume 263