Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence
Published in | Information Fusion, Vol. 99, Article 101805 |
---|---|
Main Authors | Sajid Ali, Tamer Abuhmed, Shaker El-Sappagh, Khan Muhammad, Jose M. Alonso-Moral, Roberto Confalonieri, Riccardo Guidotti, Javier Del Ser, Natalia Díaz-Rodríguez, Francisco Herrera |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 01.11.2023 |
Subjects | Explainable Artificial Intelligence; XAI assessment; Data Fusion; Post-hoc explainability; Interpretable machine learning; Trustworthy AI; AI principles; Deep Learning |
Online Access | https://www.sciencedirect.com/science/article/pii/S1566253523001148 |
Abstract | Artificial intelligence (AI) is currently being utilized in a wide range of sophisticated applications, but the outcomes of many AI models are challenging to comprehend and trust due to their black-box nature. In most cases, it is essential to understand the reasoning behind an AI model’s decision-making, which has given rise to eXplainable AI (XAI) methods for improving trust in AI models. XAI has become a popular research subject within the AI field in recent years. Existing survey papers have tackled the concepts of XAI, its general terms, and post-hoc explainability methods, but none has reviewed assessment methods, available tools, XAI datasets, and other related aspects. Therefore, in this comprehensive study, we provide readers with an overview of current research and trends in this rapidly emerging area, together with a case-study example. The study begins with the background of XAI and common definitions, and then summarizes recently proposed XAI techniques for supervised machine learning. The review organizes XAI techniques along four axes using a hierarchical categorization system: (i) data explainability, (ii) model explainability, (iii) post-hoc explainability, and (iv) assessment of explanations. We also introduce available evaluation metrics, open-source packages, and datasets, along with future research directions. The significance of explainability in terms of legal demands, user viewpoints, and application orientation is then outlined, termed XAI concerns, and the paper advocates tailoring explanation content to specific user types. XAI techniques and their evaluation were examined through 410 critical articles, published in reputable journals between January 2016 and October 2022, and collected from a wide range of research databases. The article is aimed at XAI researchers who want to make their AI models more trustworthy, as well as at researchers from other disciplines who are looking for effective XAI methods to complete tasks with confidence while communicating meaning from data.
Highlights |
• A novel four-axis framework to examine a model for robustness and explainability.
• Formulation of research questions at each axis and its corresponding taxonomy.
• Discussion of different explainability assessment methods.
• A novel methodological workflow for determining the model and explainability criteria.
• Revisited discussion on challenges and future directions of XAI and Trustworthy AI.
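An illustrative sketch, not taken from the article: axis (iii), post-hoc explainability, covers methods that explain a model after training by querying it as a black box. The snippet below uses scikit-learn's model-agnostic permutation importance as a minimal stand-in for the open-source XAI packages the survey catalogs; the toy dataset, model, and parameter choices are assumptions made only for demonstration.

```python
# Minimal post-hoc explanation sketch (illustrative; not the authors' code).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then explain it post hoc without inspecting its internals.
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance only queries predictions: shuffling a feature the
# model relies on degrades held-out accuracy, and that drop is its importance.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:+.4f}")
```

Dedicated post-hoc packages such as LIME and SHAP follow the same black-box pattern: probe the trained model's predictions and attribute them to input features.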
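A second illustrative sketch, continuing from the variables defined above and again not taken from the article: axis (iv), assessment of explanations, asks whether an explanation is faithful to the model it describes. A common deletion-style check occludes the features an explanation ranks highest and compares the resulting accuracy drop against occluding randomly chosen features; a faithful ranking should hurt the model more. The occlusion-by-mean baseline and the values of k are assumptions.

```python
# Minimal deletion-style faithfulness check (illustrative; not the authors' code).
# Reuses `model`, `X_te`, `y_te`, and `result` from the previous sketch.
import numpy as np

def deletion_score(model, X, y, feature_order, k):
    """Accuracy after replacing the first k features in `feature_order`
    with their column means (a simple occlusion baseline)."""
    X_occ = X.copy()
    X_occ[:, feature_order[:k]] = X[:, feature_order[:k]].mean(axis=0)
    return model.score(X_occ, y)

# Rank features by the permutation importances computed above, and build
# a random ranking as the control condition.
ranked_idx = np.argsort(result.importances_mean)[::-1]
random_idx = np.random.default_rng(0).permutation(X_te.shape[1])

for k in (1, 3, 5):
    top = deletion_score(model, X_te, y_te, ranked_idx, k)
    rnd = deletion_score(model, X_te, y_te, random_idx, k)
    print(f"k={k}: top-k occluded acc={top:.3f}, random-k occluded acc={rnd:.3f}")
```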
ArticleNumber | 101805 |
Author Affiliations |
1. Sajid Ali: Information Laboratory (InfoLab), Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
2. Tamer Abuhmed (tamer@skku.edu): Information Laboratory (InfoLab), Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, South Korea
3. Shaker El-Sappagh: Information Laboratory (InfoLab), Department of Computer Science and Engineering, College of Computing and Informatics, Sungkyunkwan University, Suwon 16419, South Korea
4. Khan Muhammad (khan.muhammad@ieee.org, ORCID 0000-0003-4055-7412): Visual Analytics for Knowledge Laboratory (VIS2KNOW Lab), Department of Applied Artificial Intelligence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, South Korea
5. Jose M. Alonso-Moral: Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Rúa de Jenaro de la Fuente Domínguez, s/n, 15782 Santiago de Compostela, A Coruña, Spain
6. Roberto Confalonieri: Department of Mathematics ‘Tullio Levi-Civita’, University of Padua, Padova 35121, Italy
7. Riccardo Guidotti: Department of Computer Science, University of Pisa, Pisa 56127, Italy
8. Javier Del Ser: TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
9. Natalia Díaz-Rodríguez: Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada 18071, Spain
10. Francisco Herrera: Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada 18071, Spain
Copyright | 2023 The Author(s) |
DOI | 10.1016/j.inffus.2023.101805 |
EISSN | 1872-6305 |
ISSN | 1566-2535 |
Open Access | Yes |
Peer Reviewed | Yes |
Keywords | Explainable Artificial Intelligence; XAI assessment; Data Fusion; Post-hoc explainability; Interpretable machine learning; Trustworthy AI; AI principles; Deep Learning |
License | This is an open access article under the CC BY license. |
ORCID | 0000-0003-4055-7412 (Khan Muhammad) |
OpenAccessLink | https://www.sciencedirect.com/science/article/pii/S1566253523001148 |
PublicationDate | November 2023 (2023-11-01) |
PublicationTitle | Information Fusion |
PublicationYear | 2023 |
Publisher | Elsevier B.V |
Shadbolt, ’It’s Reducing a Human Being to a Percentage’ Perceptions of Justice in Algorithmic Decisions, in: Proceedings of the 2018 Chi Conference on Human Factors in Computing Systems, 2018, pp. 1–14. – year: 2021 ident: b514 article-title: OECD AI’s live repository of over 260 AI strategies & policies - OECD.AI – volume: 296 year: 2021 ident: b177 article-title: Using ontologies to enhance human understandability of global post-hoc explanations of Black-box models publication-title: Artificial Intelligence – start-page: 1 year: 2022 end-page: 55 ident: b69 article-title: Counterfactual explanations and how to find them: literature review and benchmarking publication-title: Data Min. Knowl. Discov. – volume: 36 start-page: 3336 year: 2009 end-page: 3341 ident: b263 article-title: A simple and fast algorithm for K-medoids clustering publication-title: Expert Syst. Appl. – volume: 79 start-page: 58 year: 2021 end-page: 83 ident: b219 article-title: Explainable neural-symbolic learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the monuMAI cultural heritage use case publication-title: Inf. Fusion – volume: 22 start-page: 55 year: 2021 end-page: 67 ident: b11 article-title: If deep learning is the answer, what is the question? publication-title: Nat. Rev. Neurosci. – reference: R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: Visual explanations from deep networks via gradient-based localization, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626. – volume: 202 year: 2022 ident: b487 article-title: Learning to select goals in Automated Planning with Deep-Q Learning publication-title: Expert Syst. Appl. – year: 2018 ident: b195 article-title: Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning – start-page: 39 year: 2022 end-page: 68 ident: b472 article-title: General pitfalls of model-agnostic interpretation methods for machine learning models publication-title: International Workshop on Extending Explainable AI beyond Deep Models and Classifiers – year: 2021 ident: b135 article-title: Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models – volume: 22 start-page: 2508 year: 2016 end-page: 2521 ident: b370 article-title: TopicPanorama: A full picture of relevant topics publication-title: IEEE Trans. Vis. Comput. Graphics – volume: 57 start-page: 227 year: 2006 end-page: 254 ident: b339 article-title: Explanation and understanding publication-title: Annu. Rev. Psychol. – year: 2021 ident: b102 article-title: Counterfactuals and causability in Explainable Artificial Intelligence: Theory, algorithms, and applications – volume: 10 year: 2015 ident: b236 article-title: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation publication-title: PLoS One – year: 2021 ident: b420 article-title: Landscape of R packages for eXplainable Artificial Intelligence – start-page: 900 year: 2004 end-page: 907 ident: b85 article-title: An Explainable Artificial Intelligence system for small-unit tactical behavior publication-title: Proceedings of the National Conference on Artificial Intelligence – volume: 11 start-page: 377 year: 2000 end-page: 389 ident: b296 article-title: Extracting rules from trained neural networks publication-title: IEEE Trans. Neural Netw. 
– year: 2021 ident: b474 article-title: Proposal for a regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – year: 2020 ident: b149 article-title: Interpretable Machine Learning – year: 2020 ident: b475 article-title: Toward trustworthy AI development: mechanisms for supporting verifiable claims – volume: 8 start-page: 832 year: 2019 ident: b77 article-title: Machine learning interpretability: A survey on methods and metrics publication-title: Electronics – reference: A. Rosenfeld, Better Metrics for Evaluating Explainable Artificial Intelligence, in: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, 2021, pp. 45–50. – volume: 220 start-page: 537 year: 2006 end-page: 552 ident: b279 article-title: SRI: a scalable rule induction algorithm publication-title: Proc. Inst. Mech. Eng. C – reference: O. Biran, C. Cotton, Explanation and justification in machine learning: A survey, in: IJCAI-17 Workshop on Explainable AI, Vol. 8, XAI, 2017, pp. 8–13. – year: 2018 ident: b226 article-title: CORELS: Learning certifiably optimal rule lists – year: 2014 ident: b248 article-title: Striving for simplicity: The all convolutional net – volume: 31 year: 2018 ident: b488 article-title: Deepproblog: Neural probabilistic logic programming publication-title: Adv. Neural Inf. Process. Syst. – volume: 1 start-page: 206 year: 2019 end-page: 215 ident: b505 article-title: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead publication-title: Nat. Mach. Intell. – start-page: 122 year: 2019 end-page: 125 ident: b211 article-title: An explainable hybrid model for bankruptcy prediction based on the decision tree and deep neural network publication-title: 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention – volume: 82 start-page: 1059 year: 2020 end-page: 1086 ident: b228 article-title: Visualizing the effects of predictor variables in black box supervised learning models publication-title: J. R. Stat. Soc. Ser. B Stat. Methodol. – volume: 24 start-page: 1114 year: 1994 end-page: 1124 ident: b295 article-title: Rule generation from neural networks publication-title: IEEE Trans. Syst. Man Cybern. – volume: 113 start-page: 1094 year: 2018 end-page: 1111 ident: b314 article-title: Distribution-free predictive inference for regression publication-title: J. Amer. Statist. Assoc. – volume: 1 start-page: 48 year: 2017 end-page: 56 ident: b55 article-title: Towards better analysis of machine learning models: A visual analytics perspective publication-title: Vis. Inform. – reference: C.-K. Yeh, B. Kim, S. Arik, C.-L. Li, P. Ravikumar, T. Pfister, On concept-based explanations in deep neural networks, in: ICLR 2020 Conference, 2019, pp. 1–17. – volume: 32 start-page: 10967 year: 2019 end-page: 10978 ident: b406 article-title: On the (in) fidelity and sensitivity of explanations publication-title: Adv. Neural Inf. Process. Syst. – reference: A. Ghorbani, J. Wexler, J. Zou, B. Kim, Towards automatic concept-based explanations, in: 33rd Conference on Neural Information Processing Systems, NeurIPS 2019, 2019. – volume: 71 start-page: 28 year: 2021 end-page: 37 ident: b509 article-title: Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI publication-title: Inf. 
Fusion – volume: 51 start-page: 1 year: 2018 end-page: 42 ident: b18 article-title: A survey of methods for explaining black box models publication-title: ACM Comput. Surv. – volume: 24 start-page: 1435 year: 2011 end-page: 1447 ident: b276 article-title: EDISC: a class-tailored discretization technique for rule-based classification publication-title: IEEE Trans. Knowl. Data Eng. – year: 2020 ident: b515 article-title: Recommendation on the ethics of Artificial Intelligence – volume: 314 year: 2023 ident: b216 article-title: Logic explained networks publication-title: Artificial Intelligence – volume: 104 start-page: 32 year: 2016 end-page: 33 ident: b495 article-title: Trust for the doctor-in-the-loop publication-title: ERCIM News – volume: 18 start-page: 1 year: 2016 end-page: 11 ident: b45 article-title: Comparable long-term efficacy, as assessed by patient-reported outcomes, safety and pharmacokinetics, of CT-P13 and reference infliximab in patients with ankylosing spondylitis: 54-week results from the randomized, parallel-group PLANETAS study publication-title: Arthritis Res. Ther. – year: 2017 ident: b205 article-title: Simple rules for complex decisions publication-title: Cogn. Soc. Sci. EJ. – reference: J. Matejka, G. Fitzmaurice, Same stats, different graphs: generating datasets with varied appearance and identical statistics through simulated annealing, in: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 1290–1294. – volume: 23 start-page: 1 year: 2021 end-page: 3 ident: b96 article-title: Introduction to the special section on bias and fairness in AI publication-title: ACM SIGKDD Explor. Newsl. – volume: 55 start-page: 520 year: 2013 end-page: 534 ident: b383 article-title: I trust it, but I don’t know why: Effects of implicit attitudes toward automation on trust in an automated system publication-title: Hum. Factors – start-page: 404 year: 2012 end-page: 408 ident: b306 article-title: Rule extraction from neural networks—A comparative study publication-title: International Conference on Pattern Recognition, Informatics and Medical Engineering – reference: S. Saisubramanian, S. Galhotra, S. Zilberstein, Balancing the tradeoff between clustering value and interpretability, in: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 351–357. – volume: 3 start-page: 28 year: 1973 end-page: 44 ident: b37 article-title: Outline of a new approach to the analysis of complex systems and decision processes publication-title: IEEE Trans. Syst. Man Cybern. – year: 2016 ident: b200 article-title: Rationalizing neural predictions publication-title: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing – year: 1988 ident: b242 article-title: The Shapley Value: Essays in Honor of Lloyd S. Shapley – volume: 66 start-page: 111 year: 2021 end-page: 137 ident: b12 article-title: A survey on deep learning in medicine: Why, how and when? publication-title: Inf. Fusion – volume: 2 start-page: 1 year: 2021 end-page: 21 ident: b10 article-title: Machine learning: Algorithms, real-world applications and research directions publication-title: SN Comput. Sci. 
– start-page: 179 year: 2022 end-page: 204 ident: b47 article-title: A unified framework for managing sex and gender bias in AI models for healthcare publication-title: Sex and Gender Bias in Technology and Artificial Intelligence – volume: 108 start-page: 379 year: 2018 end-page: 392 ident: b222 article-title: State representation learning for control: An overview publication-title: Neural Netw. – volume: 166 start-page: 195 year: 1996 ident: b382 article-title: Swift trust and temporary group. Trust in organisations publication-title: Front. Theory Res. – volume: 420 start-page: 16 year: 2017 end-page: 36 ident: b329 article-title: Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior publication-title: Inform. Sci. – start-page: 1 year: 2022 end-page: 9 ident: b460 article-title: Can post-hoc explanations effectively detect out-of-distribution samples? publication-title: 2022 IEEE International Conference on Fuzzy Systems – year: 2020 ident: b150 article-title: Umap: Uniform manifold approximation and projection for dimension reduction – reference: D. Wang, Q. Yang, A. Abdul, B.Y. Lim, Designing theory-driven user-centric explainable AI, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, pp. 1–15. – volume: 81 start-page: 59 year: 2022 end-page: 83 ident: b67 article-title: Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications publication-title: Information Fusion – volume: 29 year: 2016 ident: b158 article-title: Examples are not enough, learn to criticize! criticism for interpretability publication-title: Adv. Neural Inf. Process. Syst. – year: 2022 ident: b434 article-title: GitHub - Trusted-AI/adversarial-robustness-toolbox: adversarial robustness toolbox (ART) - Python library for machine learning security - evasion, poisoning, extraction, inference - Red and blue teams – volume: 10 start-page: 1392 year: 1999 end-page: 1401 ident: b203 article-title: ANN-DT: an algorithm for extraction of decision trees from artificial neural networks publication-title: IEEE Trans. Neural Netw. – reference: B. Nushi, E. Kamar, E. Horvitz, Towards accountable AI: Hybrid human-machine analyses for characterizing system failure, in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 6, 2018, pp. 126–135. – volume: 11 year: 2021 ident: b114 article-title: Explainable Artificial Intelligence: an analytical review publication-title: Wiley Interdiscip. Rev. Data Min. Knowl. Discov. – year: 2019 ident: b190 article-title: Attention is not explanation – volume: 26 start-page: 56 year: 2019 end-page: 65 ident: b412 article-title: The what-if tool: Interactive probing of machine learning models publication-title: IEEE Trans. Vis. Comput. Graphics – start-page: 702 year: 1985 ident: b35 article-title: Rule-Based Expert Systems: The Mycin Experiments of the Stanford Heuristic Programming Project: BG Buchanan and EH Shortliffe – start-page: 267 year: 2019 end-page: 280 ident: b249 article-title: The (un) reliability of saliency methods publication-title: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning – volume: 2 start-page: 476 year: 2020 end-page: 486 ident: b459 article-title: Making deep neural networks right for the right scientific reasons by interacting with their explanations publication-title: Nat. Mach. Intell. 
– year: 2018 ident: b312 article-title: Interpreting deep classifier by visual distillation of dark knowledge – year: 2022 ident: b494 article-title: Explainable AI for healthcare 5.0: opportunities and challenges publication-title: IEEE Access – volume: 22 start-page: 250 year: 2015 end-page: 259 ident: b358 article-title: An uncertainty-aware approach for exploratory microblog retrieval publication-title: IEEE Trans. Vis. Comput. Graphics – volume: 18 start-page: 455 year: 2008 ident: b41 article-title: The effects of transparency on trust in and acceptance of a content-based art recommender publication-title: User Model. User-Adapt. Interact. – start-page: 162 year: 2017 end-page: 172 ident: b374 article-title: A workflow for visual diagnostics of binary classifiers using instance-level explanations publication-title: 2017 IEEE Conference on Visual Analytics Science and Technology – start-page: 101 year: 2013 end-page: 121 ident: b240 article-title: The Taylor decomposition: A unified generalization of the Oaxaca method to nonlinear models publication-title: Archive Ouverte En Sciences de L’Homme Et de la Société – reference: H. Lakkaraju, S.H. Bach, J. Leskovec, Interpretable decision sets: A joint framework for description and prediction, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1675–1684. – year: 2021 ident: b508 article-title: A European approach to artificial intelligence | Shaping Europe’s digital future – volume: 11 start-page: 1 year: 2021 end-page: 45 ident: b87 article-title: A multidisciplinary survey and framework for design and evaluation of explainable AI systems publication-title: ACM Trans. Interact. Intell. Syst. (TiiS) – start-page: 457 year: 2016 end-page: 473 ident: b304 article-title: Deepred–rule extraction from deep neural networks publication-title: International Conference on Discovery Science – volume: 8 start-page: 338 year: 1965 end-page: 353 ident: b36 article-title: Fuzzy sets publication-title: Inf. Control – volume: 35 start-page: 131 year: 2012 end-page: 150 ident: b301 article-title: Reverse engineering the neural networks for rule extraction in classification problems publication-title: Neural Process. Lett. – volume: 23 start-page: 18 year: 2021 ident: b73 article-title: Explainable AI: A review of machine learning interpretability methods publication-title: Entropy – volume: 4 year: 2019 ident: b20 article-title: XAI: Explainable artificial intelligence publication-title: Science Robotics – year: 2019 ident: b381 article-title: Quantifying interpretability and trust in machine learning systems – year: 2022 ident: b431 article-title: GitHub - tensorflow/privacy: Library for training machine learning models with privacy for training data – volume: 7 start-page: 151 year: 1999 end-page: 159 ident: b287 article-title: Heuristic constraints enforcement for training of and rule extraction from a fuzzy/neural architecture publication-title: IEEE Trans. Fuzzy Syst. – volume: 76 start-page: 89 year: 2021 end-page: 106 ident: b14 article-title: Notions of explainability and evaluation approaches for explainable artificial intelligence publication-title: Inf. Fusion – volume: 225 start-page: 1 year: 2013 end-page: 17 ident: b322 article-title: Using sensitivity analysis and visualization techniques to open black box data mining models publication-title: Inform. Sci. 
– year: 2017 ident: b15 article-title: Towards a rigorous science of interpretable machine learning – volume: 6 start-page: 52138 year: 2018 end-page: 52160 ident: b21 article-title: Peeking inside the black-box: a survey on explainable artificial intelligence (XAI) publication-title: IEEE Access – reference: S.M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 4768–4777. – volume: 47 start-page: 1260 year: 2009 end-page: 1270 ident: b364 article-title: Does projection into use improve trust and exploration? An example with a cruise control system publication-title: Saf. Sci. – reference: E. Rader, R. Gray, Understanding user beliefs about algorithmic curation in the Facebook news feed, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015, pp. 173–182. – reference: D. Holliday, S. Wilson, S. Stumpf, User trust in intelligent systems: A journey over time, in: Proceedings of the 21st International Conference on Intelligent User Interfaces, 2016, pp. 164–168. – start-page: 1 year: 2007 end-page: 8 ident: b170 article-title: Scene summarization for online image collections publication-title: 2007 IEEE 11th International Conference on Computer Vision – year: 2018 ident: b225 article-title: Attention? Attention! – start-page: 844 year: 2017 end-page: 850 ident: b197 article-title: Attention-based extraction of structured information from street view imagery publication-title: 2017 14th IAPR International Conference on Document Analysis and Recognition, Vol. 1 – volume: 64 start-page: 86 year: 2021 end-page: 92 ident: b166 article-title: Datasheets for datasets publication-title: Commun. ACM – year: 2021 ident: b121 article-title: Ethical machines: the human-centric use of Artificial Intelligence publication-title: Iscience – start-page: 2668 year: 2018 end-page: 2677 ident: b230 article-title: Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav) publication-title: International Conference on Machine Learning – volume: 24 start-page: 1 year: 2023 end-page: 11 ident: b421 article-title: Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond publication-title: Journal of Machine Learning Research – year: 2018 ident: b194 article-title: Learning certifiably optimal rule lists for categorical data publication-title: J. Mach. Learn. Res. – start-page: 35 year: 2019 end-page: 46 ident: b247 article-title: A study on trust in black box models and post-hoc explanations publication-title: International Workshop on Soft Computing Models in Industrial and Environmental Applications – start-page: 80 year: 2018 end-page: 89 ident: b501 article-title: Explaining explanations: An overview of interpretability of machine learning publication-title: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics – volume: 3 start-page: 525 year: 2021 end-page: 541 ident: b146 article-title: Deterministic local interpretable model-agnostic explanations for stable explainability publication-title: Mach. Learn. Knowl. Extr. – volume: 64 start-page: 34 year: 2021 end-page: 36 ident: b502 article-title: Medical artificial intelligence: the European legal perspective publication-title: Commun. 
ACM – volume: 32 start-page: 661 year: 2019 end-page: 683 ident: b504 article-title: Transparency in algorithmic and human decision-making: is there a double standard? publication-title: Philos. Technol. – reference: R.M. Byrne, Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning, in: IJCAI, 2019, pp. 6276–6282. – year: 2006 ident: b90 article-title: How the Mind Explains Behavior: Folk Explanations, Meaning, and Social Interaction – reference: R. Masuoka, N. Watanabe, A. Kawamura, Y. Owada, K. Asakawa, Neurofuzzy system-fuzzy inference using a structured neural network, in: Proceedings of the International Conference on Fuzzy Logic & Neural Networks, 1990, pp. 173–177. – start-page: 505 year: 1995 end-page: 512 ident: b305 article-title: Extracting rules from artificial neural networks with distributed representations publication-title: Adv. Neural Inf. Process. Syst. – year: 2022 ident: b477 article-title: Hierarchical text-conditional image generation with clip latents – volume: 15 start-page: 405 year: 1998 end-page: 411 ident: b319 article-title: Ranking importance of input parameters of neural networks publication-title: Expert Syst. Appl. – year: 2019 ident: b489 article-title: Mediation challenges and socio-technical gaps for explainable deep learning applications – reference: C. Panigutti, A. Perotti, D. Pedreschi, Doctor XAI: an ontology-based approach to black-box sequential data classification explanations, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 629–639. – start-page: 1 year: 2022 ident: b213 article-title: Neural-symbolic learning and reasoning: A survey and interpretation publication-title: Neuro-Symbolic Artificial Intelligence: The State of the Art, Vol. 342 – volume: 3 start-page: 786 year: 2018 ident: b397 article-title: iml: An R package for interpretable machine learning publication-title: J. Open Source Softw. – volume: 68 year: 2018 ident: b315 article-title: Model class reliance: Variable importance measures for any machine learning model class, from the Rashomon publication-title: Perspective – year: 2022 ident: b456 article-title: Beyond explaining: Opportunities and challenges of XAI-based model improvement – reference: A. Abdul, J. Vermeulen, D. Wang, B.Y. Lim, M. Kankanhalli, Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018, pp. 1–18. – year: 2016 ident: b307 article-title: Rule extraction algorithm for deep neural networks: A review – volume: 32 start-page: 88 year: 2017 end-page: 91 ident: b499 article-title: Regulating autonomous systems: Beyond standards publication-title: IEEE Intell. Syst. – volume: 28 start-page: 2660 year: 2016 end-page: 2673 ident: b317 article-title: Evaluating the visualization of what a deep neural network has learned publication-title: IEEE Trans. Neural Netw. Learn. Syst. – volume: 70 start-page: 245 year: 2021 end-page: 317 ident: b61 article-title: A survey on the explainability of supervised machine learning publication-title: J. Artificial Intelligence Res. – reference: A. Ghorbani, A. Abid, J. Zou, Interpretation of neural networks is fragile, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 2019, pp. 3681–3688. 
– year: 2019 ident: b404 article-title: CeterisParibus: Ceteris paribus profiles – volume: 20 start-page: 97 year: 2019 end-page: 106 ident: b484 article-title: The EU approach to ethics guidelines for trustworthy artificial intelligence publication-title: Comput. Law Rev. Int. – volume: 77 start-page: 29 year: 2022 end-page: 52 ident: b71 article-title: Unbox the black-box for the medical Explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond publication-title: Inf. Fusion – reference: L.A. Gatys, A.S. Ecker, M. Bethge, Image style transfer using convolutional neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414–2423. – start-page: 13 year: 2022 end-page: 38 ident: b68 article-title: Explainable AI methods-a brief overview publication-title: International Workshop on Extending Explainable AI beyond Deep Models and Classifiers – year: 2022 ident: b512 article-title: Provisions on the management of algorithmic recommendations in internet information services — – volume: 34 start-page: 193 year: 2020 end-page: 198 ident: b331 article-title: Measuring the quality of explanations: the system causability scale (SCS) comparing human and machine explanations publication-title: KI-Künstliche Intelligenz – start-page: 160 year: 2015 end-page: 169 ident: b363 article-title: The role of explanations on trust and reliance in clinical decision support systems publication-title: 2015 International Conference on Healthcare Informatics – start-page: 245 year: 2021 end-page: 258 ident: b209 article-title: Stop ordering machine learning algorithms by their explainability! An empirical investigation of the tradeoff between performance and explainability publication-title: Conference on E-Business, E-Services and E-Society – year: 2014 ident: b108 article-title: Comprehensible Classification Models: A Position Paper – volume: 14 start-page: 7 year: 2020 end-page: 32 ident: b485 article-title: On the integration of symbolic and sub-symbolic techniques for XAI: A survey publication-title: Intell. Artif. – volume: 116 start-page: 22071 year: 2019 end-page: 22080 ident: b151 article-title: Definitions, methods, and applications in interpretable machine learning publication-title: Proc. Natl. Acad. Sci. – reference: P.-J. Kindermans, K.T. Schütt, M. Alber, K.-R. Müller, D. Erhan, B. Kim, S. Dähne, Learning how to explain neural networks: Patternnet and patternattribution, in: 6th International Conference on Learning Representations, ICLR 2018, 2018. – volume: 260 year: 2023 ident: b221 article-title: Towards a more efficient computation of individual attribute and policy contribution for post-hoc explanation of cooperative multi-agent systems using Myerson values publication-title: Knowl.-Based Syst. – volume: 13 start-page: 71 year: 1993 end-page: 101 ident: b286 article-title: Extracting refined rules from knowledge-based neural networks publication-title: Mach. Learn. – volume: 15 year: 2020 ident: b22 article-title: Demonstration of the potential of white-box machine learning approaches to gain insights from cardiovascular disease electrocardiograms publication-title: PLoS One – volume: 51 start-page: 782 year: 2011 end-page: 793 ident: b333 article-title: Performance of classification models from a user perspective publication-title: Decis. Support Syst. 
– volume: 54 start-page: 95 year: 2018 end-page: 122 ident: b426 article-title: Auditing black-box models for indirect influence publication-title: Knowl. Inf. Syst. – start-page: 658 year: 2004 end-page: 663 ident: b288 article-title: The truth is in there-rule extraction from opaque models using genetic programming. publication-title: FLAIRS Conference – start-page: 255 year: 2022 end-page: 269 ident: b468 article-title: Beyond the visual analysis of deep model saliency publication-title: International Workshop on Extending Explainable AI beyond Deep Models and Classifiers – start-page: 139 year: 2022 end-page: 166 ident: b467 article-title: Towards causal algorithmic recourse publication-title: International Workshop on Extending Explainable AI beyond Deep Models and Classifiers – year: 2018 ident: b438 article-title: Metrics for explainable AI: Challenges and prospects – year: 2023 ident: b82 article-title: From anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable AI publication-title: ACM Comput. Surv. – volume: 19 start-page: 3245 year: 2018 end-page: 3249 ident: b398 article-title: DALEX: explainers for complex predictive models in R publication-title: J. Mach. Learn. Res. – volume: 11 start-page: 448 year: 1999 end-page: 463 ident: b299 article-title: Symbolic interpretation of artificial neural networks publication-title: IEEE Trans. Knowl. Data Eng. – volume: 4 start-page: 53 year: 2000 end-page: 71 ident: b386 article-title: Foundations for an empirically determined scale of trust in automated systems publication-title: Int. J. Cogn. Ergon. – volume: 20 start-page: 7 year: 2020 end-page: 17 ident: b482 article-title: Identifying ethical considerations for machine learning healthcare applications publication-title: Am. J. Bioethics – year: 2021 ident: b51 article-title: Explainable Artificial Intelligence approaches: A survey – start-page: 1 year: 2014 end-page: 7 ident: b6 article-title: MDLFace: Memorability augmented deep learning for video face recognition publication-title: IEEE International Joint Conference on Biometrics – volume: 11 start-page: 1 year: 2021 end-page: 26 ident: b119 article-title: A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease publication-title: Sci. Rep. – volume: 40 start-page: 44 year: 2019 end-page: 58 ident: b496 article-title: DARPA’s Explainable Artificial Intelligence (XAI) program publication-title: AI Mag. – start-page: 1 year: 2022 end-page: 18 ident: b454 article-title: Explain to not forget: defending against catastrophic forgetting with xai publication-title: Machine Learning and Knowledge Extraction: 6th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2022, 2022, Proceedings – reference: J. Dodge, S. Penney, A. Anderson, M.M. Burnett, What Should Be in an XAI Explanation? What IFT Reveals, in: IUI Workshops, 2018, pp. 1–4. – reference: A.S. Ross, M.C. Hughes, F. Doshi-Velez, Right for the right reasons: Training differentiable models by constraining their explanations, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, 2017, pp. 2662–2670. – reference: B.Y. Lim, A.K. Dey, D. Avrahami, Why and why not explanations improve the intelligibility of context-aware intelligent systems, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2009, pp. 2119–2128. 
– volume: 5 start-page: e5 year: 2023 ident: b335 article-title: Explainable machine learning for public policy: Use cases, gaps, and research directions publication-title: Data & Policy – volume: 61 start-page: 36 year: 2018 end-page: 43 ident: b143 article-title: The mythos of model interpretability publication-title: Commun. ACM – volume: 8 start-page: 277 year: 1993 end-page: 282 ident: b270 article-title: An algorithm for automatic rule induction publication-title: Artif. Intell. Eng. – volume: 8 start-page: 59 year: 1995 end-page: 65 ident: b269 article-title: RULES: A simple rule extraction system publication-title: Expert Syst. Appl. – volume: 2 start-page: 56 year: 2020 end-page: 67 ident: b334 article-title: From local explanations to global understanding with explainable AI for trees publication-title: Nat. Mach. Intell. – start-page: 229 year: 2022 end-page: 254 ident: b471 article-title: Interpreting and improving deep-learning models with reality checks publication-title: International Workshop on Extending Explainable AI beyond Deep Models and Classifiers – volume: 58 start-page: 82 year: 2020 end-page: 115 ident: b63 article-title: Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI publication-title: Inf. Fusion – reference: M. Hind, D. Wei, M. Campbell, N.C. Codella, A. Dhurandhar, A. Mojsilović, K. Natesan Ramamurthy, K.R. Varshney, TED: Teaching AI to explain its decisions, in: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 123–129. – reference: D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, in: 3rd International Conference on Learning Representations, 2015. – volume: 3 start-page: 41 year: 1975 end-page: 58 ident: b91 article-title: Logic and conversation, syntax and semantics publication-title: Speech Acts – year: 2018 ident: b153 article-title: Towards a definition of disentangled representations – start-page: 1177 year: 2007 end-page: 1186 ident: b289 article-title: Comparing analytical decision support models through boolean rule extraction: A case study of ovarian tumour malignancy publication-title: International Symposium on Neural Networks – volume: 12 start-page: 15 year: 2000 end-page: 25 ident: b294 article-title: FERNN: An algorithm for fast extraction of rules from neural networks publication-title: Appl. Intell. – volume: 40 start-page: 307 year: 2013 end-page: 323 ident: b373 article-title: You are the only possible oracle: Effective test selection for end users of interactive machine learning systems publication-title: IEEE Trans. Softw. Eng. – volume: 38 start-page: 2354 year: 2011 end-page: 2364 ident: b332 article-title: Building comprehensible customer churn prediction models with advanced rule induction techniques publication-title: Expert Syst. Appl. – volume: 1 year: 2019 ident: b492 article-title: Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition publication-title: Harv. Data Sci. Rev. – volume: 25 start-page: 63 year: 2018 end-page: 72 ident: b137 article-title: Asking ‘Why’in AI: Explainability of intelligent systems–perspectives and challenges publication-title: Intell. Syst. Account. Finance Manag. 
– start-page: 3 year: 2016 end-page: 19 ident: b199 article-title: Generating visual explanations publication-title: European Conference on Computer Vision – volume: 34 start-page: 9391 year: 2021 end-page: 9404 ident: b455 article-title: Reliable post hoc explanations: Modeling uncertainty in explainability publication-title: Adv. Neural Inf. Process. Syst. – volume: 54 start-page: 1 year: 2021 end-page: 35 ident: b24 article-title: A survey on bias and fairness in machine learning publication-title: ACM Comput. Surv. – volume: 11 start-page: 85 year: 2019 end-page: 98 ident: b418 article-title: auditor: an R package for model-agnostic visual validation and diagnostics publication-title: R J. – volume: 17 start-page: 107 year: 2002 end-page: 127 ident: b40 article-title: A review of explanation methods for Bayesian networks publication-title: Knowl. Eng. Rev. – reference: C.J. Cai, J. Jongejan, J. Holbrook, The effects of example-based explanations in a machine learning interface, in: Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019, pp. 258–262. – volume: 163 start-page: 90 year: 2017 end-page: 100 ident: b379 article-title: Human attention in visual question answering: Do humans and deep networks look at the same regions? publication-title: Comput. Vis. Image Underst. – volume: 3 start-page: 128 year: 1999 end-page: 135 ident: b450 article-title: Catastrophic forgetting in connectionist networks publication-title: Trends in Cognitive Sciences – reference: J.M. Schoenborn, K.-D. Althoff, Recent Trends in XAI: A Broad Overview on current Approaches, Methodologies and Interactions, in: ICCBR Workshops, 2019, pp. 51–60. – start-page: 243 year: 2021 end-page: 256 ident: b112 article-title: Reliability of explainable artificial intelligence in adversarial perturbation scenarios publication-title: International Conference on Pattern Recognition – volume: 64 start-page: 3197 year: 2022 end-page: 3234 ident: b26 article-title: Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond publication-title: Knowledge and Information Systems – volume: 23 start-page: 217 year: 2007 end-page: 246 ident: b138 article-title: Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs publication-title: J. Manage. Inf. Syst. – volume: 7 start-page: 108 year: 1995 end-page: 116 ident: b251 article-title: Training with noise is equivalent to Tikhonov regularization publication-title: Neural Comput. – year: 2021 ident: b257 article-title: Interpretable machine learning – volume: 26 start-page: 2051 year: 2020 end-page: 2068 ident: b105 article-title: Artificial intelligence, responsibility attribution, and a relational justification of explainability publication-title: Sci. Eng. Ethics – reference: M.T. Ribeiro, S. Singh, C. Guestrin, Anchors: High-precision model-agnostic explanations, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2018, pp. 1–9. – start-page: 1870 year: 2001 end-page: 1875 ident: b293 article-title: Rule extraction from neural networks via decision tree induction publication-title: IJCNN’01. International Joint Conference on Neural Networks. Proceedings, Vol. 3 – volume: 4 start-page: 1798 year: 2019 ident: b403 article-title: modelStudio: Interactive studio with explanations for ML predictive models publication-title: J. Open Source Softw. 
– volume: 10 start-page: 1 year: 2019 end-page: 8 ident: b462 article-title: Unmasking Clever Hans predictors and assessing what machines really learn publication-title: Nature Commun. – volume: 30 start-page: 5875 year: 2021 end-page: 5888 ident: b408 article-title: Layercam: Exploring hierarchical class activation maps for localization publication-title: IEEE Trans. Image Process. – year: 2021 ident: b13 article-title: A survey of convolutional neural networks: analysis, applications, and prospects publication-title: IEEE Trans. Neural Netw. Learn. Syst. – volume: 129 year: 2021 ident: b101 article-title: An engineer’s guide to eXplainable Artificial Intelligence and Interpretable Machine Learning: Navigating causality, forced goodness, and the false perception of inference publication-title: Autom. Constr. – volume: 3 start-page: e745 year: 2021 end-page: e750 ident: b111 article-title: The false hope of current approaches to explainable Artificial Intelligence in health care publication-title: Lancet Digit. Health – volume: 16 start-page: 199 year: 2001 end-page: 231 ident: b142 article-title: Statistical modeling: The two cultures (with comments and a rejoinder by the author) publication-title: Statist. Sci. – reference: G. Bansal, B. Nushi, E. Kamar, W.S. Lasecki, D.S. Weld, E. Horvitz, Beyond accuracy: The role of mental models in human-AI team performance, in: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7, 2019, pp. 2–11. – year: 2018 ident: b411 article-title: GitHub - EthicalML/XAI: XAI - An eXplainability toolbox for machine learning – year: 2021 ident: b131 article-title: Explainable Artificial Intelligence (XAI) on timeseries data: A survey – reference: D. Schreiber-Gregory, Regulation techniques for multicollinearity: Lasso, ridge, and elastic nets, in: SAS Conference Proceedings: Western Users of SAS Software 2018, 2018, pp. 1–23. – volume: 11 start-page: 1803 year: 2010 end-page: 1831 ident: b260 article-title: How to explain individual classification decisions publication-title: J. Mach. Learn. Res. – start-page: 448 year: 2020 end-page: 469 ident: b265 article-title: Multi-objective counterfactual explanations publication-title: International Conference on Parallel Problem Solving from Nature – year: 2021 ident: b513 article-title: Brazil: Proposed AI regulation – volume: 24 start-page: 98 year: 2017 end-page: 108 ident: b371 article-title: Deepeyes: Progressive visual analytics for designing deep neural networks publication-title: IEEE Trans. Vis. Comput. Graphics – start-page: 269 year: 2008 end-page: 294 ident: b447 article-title: Measuring change in mental models of complex dynamic systems publication-title: Complex Decision Making – volume: 10 start-page: 464 year: 2006 end-page: 470 ident: b89 article-title: The structure and function of explanations publication-title: Trends in Cognitive Sciences – reference: N. Kokhlikyan, V. Miglani, M. Martin, E. Wang, B. Alsallakh, J. Reynolds, A. Melnikov, N. Kliushkina, C. Araya, S. Yan, et al., Captum: A unified and generic model interpretability library for pytorch, in: ICLR 2021 Workshop on Responsible AI:, 2021. – start-page: 1 year: 2021 end-page: 23 ident: b106 article-title: Toward explainable artificial intelligence through fuzzy systems publication-title: Explainable Fuzzy Systems – volume: 5 start-page: 2403 year: 2011 end-page: 2424 ident: b159 article-title: Prototype selection for interpretable classification publication-title: Ann. Appl. Stat. 
– volume: 267 start-page: 1 year: 2019 end-page: 38 ident: b52 article-title: Explanation in artificial intelligence: Insights from the social sciences publication-title: Artificial Intelligence – reference: M.W. Craven, J.W. Shavlik, Extracting tree-structured representations of trained networks, in: Proceedings of NIPS, 1995, pp. 24–30. – year: 2020 ident: b419 article-title: GitHub - mayer79/flashlight: Machine learning explanations – volume: 24 start-page: 44 year: 2015 end-page: 65 ident: b237 article-title: Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation publication-title: J. Comput. Graph. Statist. – reference: M. Kay, T. Kola, J.R. Hullman, S.A. Munson, When (ish) is my bus? user-centered visualizations of uncertainty in everyday, mobile predictive systems, in: Proceedings of the 2016 Chi Conference on Human Factors in Computing Systems, 2016, pp. 5092–5103. – reference: C. Chen, O. Li, A. Barnett, J.K. Su, C. Rudin, This looks like that: deep learning for interpretable image recognition, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 1–12. – start-page: 3 year: 2013 end-page: 10 ident: b352 article-title: Too much, too little, or just right? Ways explanations impact end users’ mental models publication-title: 2013 IEEE Symposium on Visual Languages and Human Centric Computing – volume: 34 start-page: 265 year: 2021 end-page: 288 ident: b125 article-title: Solving the black box problem: a normative framework for explainable Artificial Intelligence publication-title: Philos. Technol. – start-page: 6 year: 2005 end-page: pp ident: b275 article-title: Rules-6: a simple rule induction algorithm for supporting decision making publication-title: 31st Annual Conference of IEEE Industrial Electronics Society, 2005. IECON 2005 – reference: J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, H. Lipson, Understanding neural networks through deep visualization, in: ICML Deep Learning Workshop, 2015. – year: 2020 ident: b436 article-title: Technical report on the CleverHans v2. 1.0 adversarial examples library – start-page: 0210 year: 2018 end-page: 0215 ident: b64 article-title: Explainable artificial intelligence: A survey publication-title: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics – volume: 36 start-page: 535 year: 2021 end-page: 545 ident: b120 article-title: Clinical AI: opacity, accountability, responsibility and liability publication-title: AI Soc. – year: 1996 ident: b164 article-title: UCI machine learning repository: Adult data set – start-page: 46 year: 2019 end-page: 56 ident: b445 article-title: FairVis: Visual analytics for discovering intersectional bias in machine learning publication-title: 2019 IEEE Conference on Visual Analytics Science and Technology – start-page: 151 year: 2009 end-page: 175 ident: b241 article-title: Independent component analysis publication-title: Natural Image Statistics – volume: 24 start-page: 77 year: 2017 end-page: 87 ident: b391 article-title: Analyzing the training processes of deep generative models publication-title: IEEE Trans. Vis. Comput. Graphics – year: 2019 ident: b325 article-title: Explaining classifiers with causal concept effect (cace) – start-page: 3319 year: 2017 end-page: 3328 ident: b252 article-title: Axiomatic attribution for deep networks publication-title: International Conference on Machine Learning – reference: V. Petsiuk, A. Das, K. 
doi: 10.1038/s42256-020-0212-3 – start-page: 1177 year: 2007 ident: 10.1016/j.inffus.2023.101805_b289 article-title: Comparing analytical decision support models through boolean rule extraction: A case study of ovarian tumour malignancy – year: 2018 ident: 10.1016/j.inffus.2023.101805_b310 – volume: 166 start-page: 195 year: 1996 ident: 10.1016/j.inffus.2023.101805_b382 article-title: Swift trust and temporary group. Trust in organisations publication-title: Front. Theory Res. – start-page: 5863 year: 2022 ident: 10.1016/j.inffus.2023.101805_b486 article-title: Application of neurosymbolic AI to sequential decision making – volume: 260 year: 2023 ident: 10.1016/j.inffus.2023.101805_b221 article-title: Towards a more efficient computation of individual attribute and policy contribution for post-hoc explanation of cooperative multi-agent systems using Myerson values publication-title: Knowl.-Based Syst. doi: 10.1016/j.knosys.2022.110189 – volume: 8 start-page: 338 year: 1965 ident: 10.1016/j.inffus.2023.101805_b36 article-title: Fuzzy sets publication-title: Inf. Control doi: 10.1016/S0019-9958(65)90241-X – start-page: 151 year: 2009 ident: 10.1016/j.inffus.2023.101805_b241 article-title: Independent component analysis – year: 2017 ident: 10.1016/j.inffus.2023.101805_b205 article-title: Simple rules for complex decisions publication-title: Cogn. Soc. Sci. EJ. – volume: 10 start-page: 132564 year: 2022 ident: 10.1016/j.inffus.2023.101805_b217 article-title: OG-SGG: Ontology-guided scene graph generation. A case study in transfer learning for telepresence robotics publication-title: IEEE Access doi: 10.1109/ACCESS.2022.3230590 – ident: 10.1016/j.inffus.2023.101805_b448 doi: 10.1145/3290605.3300233 – start-page: 3 year: 2021 ident: 10.1016/j.inffus.2023.101805_b113 article-title: The methods and approaches of explainable Artificial Intelligence – volume: 27 start-page: 170 year: 2021 ident: 10.1016/j.inffus.2023.101805_b127 article-title: Artificial Intelligence, forward-looking governance and the future of security publication-title: Swiss Polit. Sci. Rev. doi: 10.1111/spsr.12439 – year: 2017 ident: 10.1016/j.inffus.2023.101805_b155 – year: 2023 ident: 10.1016/j.inffus.2023.101805_b224 article-title: Gender and sex bias in COVID-19 epidemiological data through the lenses of causality publication-title: Inf. Process. Manage. doi: 10.1016/j.ipm.2023.103276 – volume: 2 start-page: 56 issue: 1 year: 2020 ident: 10.1016/j.inffus.2023.101805_b334 article-title: From local explanations to global understanding with explainable AI for trees publication-title: Nat. Mach. Intell. doi: 10.1038/s42256-019-0138-9 – start-page: 11 year: 2019 ident: 10.1016/j.inffus.2023.101805_b483 article-title: The IEEE global initiative on ethics of autonomous and intelligent systems publication-title: Robot. Well-Being doi: 10.1007/978-3-030-12524-0_2 – ident: 10.1016/j.inffus.2023.101805_b250 doi: 10.1609/aaai.v33i01.33013681 – volume: 23 start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b96 article-title: Introduction to the special section on bias and fairness in AI publication-title: ACM SIGKDD Explor. Newsl. 
doi: 10.1145/3468507.3468509 – year: 2016 ident: 10.1016/j.inffus.2023.101805_b255 – year: 2022 ident: 10.1016/j.inffus.2023.101805_b476 – year: 2016 ident: 10.1016/j.inffus.2023.101805_b200 article-title: Rationalizing neural predictions – start-page: 6 year: 2005 ident: 10.1016/j.inffus.2023.101805_b275 article-title: Rules-6: a simple rule induction algorithm for supporting decision making – volume: 202 year: 2022 ident: 10.1016/j.inffus.2023.101805_b487 article-title: Learning to select goals in Automated Planning with Deep-Q Learning publication-title: Expert Syst. Appl. doi: 10.1016/j.eswa.2022.117265 – volume: 18 start-page: 455 issue: 5 year: 2008 ident: 10.1016/j.inffus.2023.101805_b41 article-title: The effects of transparency on trust in and acceptance of a content-based art recommender publication-title: User Model. User-Adapt. Interact. doi: 10.1007/s11257-008-9051-3 – start-page: 295 year: 2018 ident: 10.1016/j.inffus.2023.101805_b81 article-title: Explainable AI: the new 42? – start-page: 404 year: 2012 ident: 10.1016/j.inffus.2023.101805_b306 article-title: Rule extraction from neural networks—A comparative study – volume: 32 start-page: 10967 year: 2019 ident: 10.1016/j.inffus.2023.101805_b406 article-title: On the (in) fidelity and sensitivity of explanations publication-title: Adv. Neural Inf. Process. Syst. – year: 2023 ident: 10.1016/j.inffus.2023.101805_b82 article-title: From anecdotal evidence to quantitative evaluation methods: a systematic review on evaluating explainable AI publication-title: ACM Comput. Surv. doi: 10.1145/3583558 – ident: 10.1016/j.inffus.2023.101805_b191 doi: 10.1109/CVPR.2018.00915 – start-page: 1 year: 2014 ident: 10.1016/j.inffus.2023.101805_b239 article-title: Deep inside convolutional networks: Visualising image classification models and saliency maps – year: 2012 ident: 10.1016/j.inffus.2023.101805_b277 – volume: 110 start-page: 248 year: 2009 ident: 10.1016/j.inffus.2023.101805_b353 article-title: Explanation and categorization: How “why?” informs “what?” publication-title: Cognition doi: 10.1016/j.cognition.2008.10.007 – start-page: 80 year: 2018 ident: 10.1016/j.inffus.2023.101805_b501 article-title: Explaining explanations: An overview of interpretability of machine learning – volume: 11 year: 2021 ident: 10.1016/j.inffus.2023.101805_b114 article-title: Explainable Artificial Intelligence: an analytical review publication-title: Wiley Interdiscip. Rev. Data Min. Knowl. Discov. doi: 10.1002/widm.1424 – volume: 11 start-page: 1803 year: 2010 ident: 10.1016/j.inffus.2023.101805_b260 article-title: How to explain individual classification decisions publication-title: J. Mach. Learn. Res. – year: 2022 ident: 10.1016/j.inffus.2023.101805_b510 – volume: 24 start-page: 1 issue: 34 year: 2023 ident: 10.1016/j.inffus.2023.101805_b421 article-title: Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond publication-title: Journal of Machine Learning Research – year: 2020 ident: 10.1016/j.inffus.2023.101805_b515 – volume: 51 start-page: 1 issue: 5 year: 2018 ident: 10.1016/j.inffus.2023.101805_b18 article-title: A survey of methods for explaining black box models publication-title: ACM Comput. Surv. 
doi: 10.1145/3236009 – start-page: 702 year: 1985 ident: 10.1016/j.inffus.2023.101805_b35 – start-page: 1870 year: 2001 ident: 10.1016/j.inffus.2023.101805_b293 article-title: Rule extraction from neural networks via decision tree induction – year: 2019 ident: 10.1016/j.inffus.2023.101805_b32 – volume: 27 start-page: 2107 issue: 8 year: 2015 ident: 10.1016/j.inffus.2023.101805_b5 article-title: Disease inference from health-related questions via sparse deep learning publication-title: IEEE Trans. Knowl. Data Eng. doi: 10.1109/TKDE.2015.2399298 – volume: 8 start-page: 199 year: 1975 ident: 10.1016/j.inffus.2023.101805_b38 article-title: The concept of a linguistic variable and its application to approximate reasoning publication-title: Inform. Sci. doi: 10.1016/0020-0255(75)90036-5 – start-page: 3145 year: 2017 ident: 10.1016/j.inffus.2023.101805_b234 article-title: Learning important features through propagating activation differences – ident: 10.1016/j.inffus.2023.101805_b29 doi: 10.1145/2939672.2939778 – ident: 10.1016/j.inffus.2023.101805_b227 – start-page: 265 year: 2021 ident: 10.1016/j.inffus.2023.101805_b16 – ident: 10.1016/j.inffus.2023.101805_b366 doi: 10.3115/v1/W14-4307 – volume: 20 start-page: 78 issue: 1 year: 2007 ident: 10.1016/j.inffus.2023.101805_b298 article-title: Neural network explanation using inversion publication-title: Neural Netw. doi: 10.1016/j.neunet.2006.07.005 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b413 – year: 2022 ident: 10.1016/j.inffus.2023.101805_b407 – volume: 3 start-page: 28 year: 1973 ident: 10.1016/j.inffus.2023.101805_b37 article-title: Outline of a new approach to the analysis of complex systems and decision processes publication-title: IEEE Trans. Syst. Man Cybern. doi: 10.1109/TSMC.1973.5408575 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b396 – volume: 18 start-page: 1 year: 2016 ident: 10.1016/j.inffus.2023.101805_b45 article-title: Comparable long-term efficacy, as assessed by patient-reported outcomes, safety and pharmacokinetics, of CT-P13 and reference infliximab in patients with ankylosing spondylitis: 54-week results from the randomized, parallel-group PLANETAS study publication-title: Arthritis Res. Ther. doi: 10.1186/s13075-016-0930-4 – ident: 10.1016/j.inffus.2023.101805_b354 doi: 10.1145/3173574.3174098 – start-page: 46 year: 2019 ident: 10.1016/j.inffus.2023.101805_b445 article-title: FairVis: Visual analytics for discovering intersectional bias in machine learning – ident: 10.1016/j.inffus.2023.101805_b377 – volume: 116 year: 2021 ident: 10.1016/j.inffus.2023.101805_b92 article-title: Context-based image explanations for deep neural networks publication-title: Image Vis. Comput. doi: 10.1016/j.imavis.2021.104310 – start-page: 159 year: 2018 ident: 10.1016/j.inffus.2023.101805_b328 article-title: Perturbation-based explanations of prediction models – year: 2022 ident: 10.1016/j.inffus.2023.101805_b456 – volume: 115 year: 2021 ident: 10.1016/j.inffus.2023.101805_b464 article-title: Pruning by explaining: A novel criterion for deep neural network pruning publication-title: Pattern Recognit. doi: 10.1016/j.patcog.2021.107899 – volume: 28 start-page: 1222 issue: 5 year: 2014 ident: 10.1016/j.inffus.2023.101805_b176 article-title: Ontology of core data mining entities publication-title: Data Min. Knowl. Discov. 
doi: 10.1007/s10618-014-0363-0 – volume: 65 start-page: 211 year: 2017 ident: 10.1016/j.inffus.2023.101805_b245 article-title: Explaining nonlinear classification decisions with deep taylor decomposition publication-title: Pattern Recognit. doi: 10.1016/j.patcog.2016.11.008 – start-page: 113 year: 2003 ident: 10.1016/j.inffus.2023.101805_b174 article-title: A data mining ontology for grid programming – volume: 66 start-page: 111 year: 2021 ident: 10.1016/j.inffus.2023.101805_b12 article-title: A survey on deep learning in medicine: Why, how and when? publication-title: Inf. Fusion doi: 10.1016/j.inffus.2020.09.006 – start-page: 2154 year: 2021 ident: 10.1016/j.inffus.2023.101805_b491 article-title: Have you been properly notified? Automatic compliance analysis of privacy policy text with GDPR article 13 – volume: 61 start-page: 36 issue: 10 year: 2018 ident: 10.1016/j.inffus.2023.101805_b143 article-title: The mythos of model interpretability publication-title: Commun. ACM doi: 10.1145/3233231 – start-page: 7 year: 2020 ident: 10.1016/j.inffus.2023.101805_b481 article-title: What can crowd computing do for the next generation of AI systems? – volume: 34 start-page: 11196 year: 2021 ident: 10.1016/j.inffus.2023.101805_b181 article-title: Controlling neural networks with rule representations publication-title: Adv. Neural Inf. Process. Syst. – start-page: 35 year: 2019 ident: 10.1016/j.inffus.2023.101805_b247 article-title: A study on trust in black box models and post-hoc explanations – ident: 10.1016/j.inffus.2023.101805_b139 doi: 10.1145/2858036.2858529 – volume: 9 start-page: 11974 year: 2021 ident: 10.1016/j.inffus.2023.101805_b72 article-title: A survey of contrastive and counterfactual explanation generation methods for Explainable Artificial Intelligence publication-title: IEEE Access doi: 10.1109/ACCESS.2021.3051315 – ident: 10.1016/j.inffus.2023.101805_b107 – ident: 10.1016/j.inffus.2023.101805_b479 doi: 10.1109/CVPR.2016.265 – year: 2022 ident: 10.1016/j.inffus.2023.101805_b215 – ident: 10.1016/j.inffus.2023.101805_b405 – year: 2017 ident: 10.1016/j.inffus.2023.101805_b15 – volume: 14 start-page: 1 issue: 1 year: 2020 ident: 10.1016/j.inffus.2023.101805_b76 article-title: Explainable recommendation: A survey and new perspectives publication-title: Found. Trends Inform. Retr. doi: 10.1561/1500000066 – year: 2003 ident: 10.1016/j.inffus.2023.101805_b43 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b438 – start-page: 1 year: 2007 ident: 10.1016/j.inffus.2023.101805_b170 article-title: Scene summarization for online image collections – year: 2022 ident: 10.1016/j.inffus.2023.101805_b431 – volume: 214 year: 2021 ident: 10.1016/j.inffus.2023.101805_b59 article-title: Explainability in deep reinforcement learning publication-title: Knowl.-Based Syst. doi: 10.1016/j.knosys.2020.106685 – year: 1996 ident: 10.1016/j.inffus.2023.101805_b88 – volume: 314 year: 2023 ident: 10.1016/j.inffus.2023.101805_b216 article-title: Logic explained networks publication-title: Artificial Intelligence doi: 10.1016/j.artint.2022.103822 – volume: 12 start-page: 15 issue: 1 year: 2000 ident: 10.1016/j.inffus.2023.101805_b294 article-title: FERNN: An algorithm for fast extraction of rules from neural networks publication-title: Appl. Intell. doi: 10.1023/A:1008307919726 – volume: 21 start-page: 1 issue: 130 year: 2020 ident: 10.1016/j.inffus.2023.101805_b394 article-title: AI explainability 360: An extensible toolkit for understanding data and machine learning models publication-title: J. Mach. Learn. Res. 
– start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b122 article-title: Four responsibility gaps with artificial intelligence: Why they matter and how to address them publication-title: Philos. Technol. – volume: 47 start-page: 339 year: 2021 ident: 10.1016/j.inffus.2023.101805_b23 article-title: We might be afraid of black-box algorithms publication-title: J. Med. Ethics doi: 10.1136/medethics-2021-107462 – start-page: 2018 year: 2011 ident: 10.1016/j.inffus.2023.101805_b256 article-title: Adaptive deconvolutional networks for mid and high level feature learning – ident: 10.1016/j.inffus.2023.101805_b188 – volume: 51 start-page: 782 year: 2011 ident: 10.1016/j.inffus.2023.101805_b333 article-title: Performance of classification models from a user perspective publication-title: Decis. Support Syst. doi: 10.1016/j.dss.2011.01.013 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b194 article-title: Learning certifiably optimal rule lists for categorical data publication-title: J. Mach. Learn. Res. – volume: 4 start-page: 1798 issue: 43 year: 2019 ident: 10.1016/j.inffus.2023.101805_b403 article-title: modelStudio: Interactive studio with explanations for ML predictive models publication-title: J. Open Source Softw. doi: 10.21105/joss.01798 – year: 2017 ident: 10.1016/j.inffus.2023.101805_b429 – start-page: 243 year: 2021 ident: 10.1016/j.inffus.2023.101805_b112 article-title: Reliability of explainable artificial intelligence in adversarial perturbation scenarios – ident: 10.1016/j.inffus.2023.101805_b118 – start-page: 865 year: 2021 ident: 10.1016/j.inffus.2023.101805_b100 article-title: Robustness and scalability under heavy tails, without strong convexity – start-page: 54 year: 2020 ident: 10.1016/j.inffus.2023.101805_b49 article-title: Transparency and trust in human-AI-interaction: The role of model-agnostic explanations in computer vision-based decision support – volume: 4 start-page: 9 issue: 3 year: 2014 ident: 10.1016/j.inffus.2023.101805_b291 article-title: Evaluation of rule extraction algorithms publication-title: Int. J. Data Min. Knowl. Manag. Process doi: 10.5121/ijdkp.2014.4302 – volume: 20 start-page: 1 issue: 93 year: 2019 ident: 10.1016/j.inffus.2023.101805_b401 article-title: iNNvestigate neural networks! publication-title: J. Mach. Learn. Res. – volume: 3 start-page: 128 issue: 4 year: 1999 ident: 10.1016/j.inffus.2023.101805_b450 article-title: Catastrophic forgetting in connectionist networks publication-title: Trends in Cognitive Sciences doi: 10.1016/S1364-6613(99)01294-2 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b399 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b508 – volume: 16 start-page: 1 issue: none year: 2022 ident: 10.1016/j.inffus.2023.101805_b132 article-title: Interpretable machine learning: Fundamental principles and 10 grand challenges publication-title: Stat. Surv. doi: 10.1214/21-SS133 – volume: 32 start-page: 1621 year: 2021 ident: 10.1016/j.inffus.2023.101805_b98 article-title: Designing energy-efficient high-precision multi-pass turning processes via robust optimization and artificial intelligence publication-title: J. Intell. Manuf. doi: 10.1007/s10845-020-01648-0 – ident: 10.1016/j.inffus.2023.101805_b280 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b182 – volume: 1 start-page: 206 issue: 5 year: 2019 ident: 10.1016/j.inffus.2023.101805_b505 article-title: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead publication-title: Nat. Mach. Intell. 
doi: 10.1038/s42256-019-0048-x – year: 2022 ident: 10.1016/j.inffus.2023.101805_b512 – ident: 10.1016/j.inffus.2023.101805_b480 doi: 10.1109/ICCV.2017.244 – volume: 25 start-page: 63 issue: 2 year: 2018 ident: 10.1016/j.inffus.2023.101805_b137 article-title: Asking ‘Why’in AI: Explainability of intelligent systems–perspectives and challenges publication-title: Intell. Syst. Account. Finance Manag. doi: 10.1002/isaf.1422 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b13 article-title: A survey of convolutional neural networks: analysis, applications, and prospects publication-title: IEEE Trans. Neural Netw. Learn. Syst. – year: 2022 ident: 10.1016/j.inffus.2023.101805_b70 article-title: Explainable AI for Time Series Classification: A review, taxonomy and research directions publication-title: IEEE Access doi: 10.1109/ACCESS.2022.3207765 – volume: 31 start-page: 2524 year: 2020 ident: 10.1016/j.inffus.2023.101805_b97 article-title: Towards fair and privacy-preserving federated deep models publication-title: IEEE Trans. Parallel Distrib. Syst. doi: 10.1109/TPDS.2020.2996273 – start-page: 1 year: 2022 ident: 10.1016/j.inffus.2023.101805_b213 article-title: Neural-symbolic learning and reasoning: A survey and interpretation – volume: 7 start-page: 108 issue: 1 year: 1995 ident: 10.1016/j.inffus.2023.101805_b251 article-title: Training with noise is equivalent to Tikhonov regularization publication-title: Neural Comput. doi: 10.1162/neco.1995.7.1.108 – year: 2020 ident: 10.1016/j.inffus.2023.101805_b27 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b312 – start-page: 77 year: 2020 ident: 10.1016/j.inffus.2023.101805_b60 article-title: Explainable reinforcement learning: A survey – ident: 10.1016/j.inffus.2023.101805_b346 doi: 10.1145/1518701.1519023 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b474 – volume: 28 year: 2015 ident: 10.1016/j.inffus.2023.101805_b212 article-title: End-to-end memory networks publication-title: Adv. Neural Inf. Process. Syst. – volume: 20 start-page: 7 issue: 11 year: 2020 ident: 10.1016/j.inffus.2023.101805_b482 article-title: Identifying ethical considerations for machine learning healthcare applications publication-title: Am. J. Bioethics doi: 10.1080/15265161.2020.1819469 – start-page: 297 year: 2022 ident: 10.1016/j.inffus.2023.101805_b461 article-title: A whale’s tail-finding the right whale in an uncertain world – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b123 article-title: The European Commission report on ethics of connected and automated vehicles and the future of ethics of transportation publication-title: Ethics Inform. Technol. – volume: 16 start-page: 18 year: 2017 ident: 10.1016/j.inffus.2023.101805_b19 article-title: Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for publication-title: Duke L. Tech. Rev. – start-page: 1952 year: 2014 ident: 10.1016/j.inffus.2023.101805_b262 article-title: The bayesian case model: A generative approach for case-based reasoning and prototype classification – start-page: 382 year: 2021 ident: 10.1016/j.inffus.2023.101805_b130 article-title: Explainable Artificial Intelligence requirements for safe, intelligent robots – ident: 10.1016/j.inffus.2023.101805_b78 doi: 10.24963/ijcai.2019/876 – volume: 25 start-page: 51 issue: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b173 article-title: Semantics of the black-box: Can knowledge graphs help make deep learning systems more interpretable and explainable? publication-title: IEEE Internet Comput. 
doi: 10.1109/MIC.2020.3031769 – ident: 10.1016/j.inffus.2023.101805_b336 doi: 10.1145/2556288.2557167 – start-page: 269 year: 2008 ident: 10.1016/j.inffus.2023.101805_b447 article-title: Measuring change in mental models of complex dynamic systems – ident: 10.1016/j.inffus.2023.101805_b478 doi: 10.1109/CVPR52688.2022.01042 – start-page: 1766 year: 2006 ident: 10.1016/j.inffus.2023.101805_b84 article-title: Building Explainable Artificial Intelligence systems – volume: 30 start-page: 5875 year: 2021 ident: 10.1016/j.inffus.2023.101805_b408 article-title: Layercam: Exploring hierarchical class activation maps for localization publication-title: IEEE Trans. Image Process. doi: 10.1109/TIP.2021.3089943 – ident: 10.1016/j.inffus.2023.101805_b347 doi: 10.1145/2166966.2167019 – volume: 21 year: 2020 ident: 10.1016/j.inffus.2023.101805_b183 article-title: Contextual explanation networks publication-title: J. Mach. Learn. Res. – year: 2021 ident: 10.1016/j.inffus.2023.101805_b131 – volume: 34 start-page: 193 issue: 2 year: 2020 ident: 10.1016/j.inffus.2023.101805_b331 article-title: Measuring the quality of explanations: the system causability scale (SCS) comparing human and machine explanations publication-title: KI-Künstliche Intelligenz doi: 10.1007/s13218-020-00636-z – year: 2022 ident: 10.1016/j.inffus.2023.101805_b494 article-title: Explainable AI for healthcare 5.0: opportunities and challenges publication-title: IEEE Access doi: 10.1109/ACCESS.2022.3197671 – ident: 10.1016/j.inffus.2023.101805_b196 doi: 10.24963/ijcai.2017/371 – volume: 70 start-page: 384 issue: 1–3 year: 2006 ident: 10.1016/j.inffus.2023.101805_b302 article-title: Extracting rules from multilayer perceptrons in classification problems: A clustering-based approach publication-title: Neurocomputing doi: 10.1016/j.neucom.2005.12.127 – ident: 10.1016/j.inffus.2023.101805_b31 doi: 10.1109/ICCV.2017.74 – start-page: 6 year: 2000 ident: 10.1016/j.inffus.2023.101805_b385 article-title: Measuring human-computer trust – volume: 22 start-page: 55 year: 2021 ident: 10.1016/j.inffus.2023.101805_b11 article-title: If deep learning is the answer, what is the question? publication-title: Nat. Rev. Neurosci. doi: 10.1038/s41583-020-00395-8 – start-page: 543 year: 1993 ident: 10.1016/j.inffus.2023.101805_b34 article-title: Explanation in second generation expert systems – year: 2022 ident: 10.1016/j.inffus.2023.101805_b422 – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b124 article-title: Psychological consequences of legal responsibility misattribution associated with automated vehicles publication-title: Ethics Inform. Technol. – volume: 1 issue: 2 year: 2019 ident: 10.1016/j.inffus.2023.101805_b492 article-title: Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition publication-title: Harv. Data Sci. Rev. – volume: 40 start-page: 307 year: 2013 ident: 10.1016/j.inffus.2023.101805_b373 article-title: You are the only possible oracle: Effective test selection for end users of interactive machine learning systems publication-title: IEEE Trans. Softw. Eng. 
doi: 10.1109/TSE.2013.59 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b513 – year: 2015 ident: 10.1016/j.inffus.2023.101805_b309 – start-page: 401 year: 2017 ident: 10.1016/j.inffus.2023.101805_b428 article-title: Fairtest: Discovering unwarranted associations in data-driven applications – start-page: 212 year: 1999 ident: 10.1016/j.inffus.2023.101805_b169 article-title: Case-based explanation of non-case-based learning methods – volume: 129 year: 2021 ident: 10.1016/j.inffus.2023.101805_b101 article-title: An engineer’s guide to eXplainable Artificial Intelligence and Interpretable Machine Learning: Navigating causality, forced goodness, and the false perception of inference publication-title: Autom. Constr. doi: 10.1016/j.autcon.2021.103821 – ident: 10.1016/j.inffus.2023.101805_b198 – start-page: 658 year: 2004 ident: 10.1016/j.inffus.2023.101805_b288 article-title: The truth is in there-rule extraction from opaque models using genetic programming. – start-page: 900 year: 2004 ident: 10.1016/j.inffus.2023.101805_b85 article-title: An Explainable Artificial Intelligence system for small-unit tactical behavior – ident: 10.1016/j.inffus.2023.101805_b490 doi: 10.24251/HICSS.2021.281 – start-page: 844 year: 2017 ident: 10.1016/j.inffus.2023.101805_b197 article-title: Attention-based extraction of structured information from street view imagery – volume: 420 start-page: 16 year: 2017 ident: 10.1016/j.inffus.2023.101805_b329 article-title: Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior publication-title: Inform. Sci. doi: 10.1016/j.ins.2017.08.012 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b135 – year: 2006 ident: 10.1016/j.inffus.2023.101805_b90 – volume: 26 start-page: 2051 issue: 4 year: 2020 ident: 10.1016/j.inffus.2023.101805_b105 article-title: Artificial intelligence, responsibility attribution, and a relational justification of explainability publication-title: Sci. Eng. Ethics doi: 10.1007/s11948-019-00146-8 – volume: 24 start-page: 667 year: 2017 ident: 10.1016/j.inffus.2023.101805_b360 article-title: LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks publication-title: IEEE Trans. Vis. Comput. Graphics doi: 10.1109/TVCG.2017.2744158 – year: 2014 ident: 10.1016/j.inffus.2023.101805_b248 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b452 article-title: DisCoRL: Continual reinforcement learning via policy distillation – ident: 10.1016/j.inffus.2023.101805_b208 – volume: 5 year: 2020 ident: 10.1016/j.inffus.2023.101805_b244 article-title: Visualizing the impact of feature attribution baselines publication-title: Distill doi: 10.23915/distill.00022 – volume: 31 year: 2018 ident: 10.1016/j.inffus.2023.101805_b488 article-title: Deepproblog: Neural probabilistic logic programming publication-title: Adv. Neural Inf. Process. Syst. 
– start-page: 267 year: 2019 ident: 10.1016/j.inffus.2023.101805_b249 article-title: The (un) reliability of saliency methods – start-page: 141 year: 2021 ident: 10.1016/j.inffus.2023.101805_b466 article-title: Machine unlearning – ident: 10.1016/j.inffus.2023.101805_b443 doi: 10.1145/3290605.3300831 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b417 – start-page: 457 year: 2016 ident: 10.1016/j.inffus.2023.101805_b304 article-title: Deepred–rule extraction from deep neural networks – volume: 56 start-page: 489 year: 2014 ident: 10.1016/j.inffus.2023.101805_b384 article-title: The construct of state-level suspicion: A model and research agenda for automated and information technology (IT) contexts publication-title: Hum. Factors doi: 10.1177/0018720813497052 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b51 – ident: 10.1016/j.inffus.2023.101805_b109 – volume: 55 start-page: 520 year: 2013 ident: 10.1016/j.inffus.2023.101805_b383 article-title: I trust it, but I don’t know why: Effects of implicit attitudes toward automation on trust in an automated system publication-title: Hum. Factors doi: 10.1177/0018720812465081 – volume: 64 start-page: 3197 issue: 12 year: 2022 ident: 10.1016/j.inffus.2023.101805_b26 article-title: Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond publication-title: Knowledge and Information Systems doi: 10.1007/s10115-022-01756-8 – volume: 217 start-page: 1273 issue: 12 year: 2003 ident: 10.1016/j.inffus.2023.101805_b273 article-title: RULES-5: a rule induction algorithm for classification problems involving continuous attributes publication-title: Proc. Inst. Mech. Eng. C doi: 10.1243/095440603322769929 – ident: 10.1016/j.inffus.2023.101805_b337 doi: 10.1145/2858036.2858558 – start-page: 235 year: 2022 ident: 10.1016/j.inffus.2023.101805_b327 article-title: A survey on methods and metrics for the assessment of explainability under the proposed AI Act – year: 2018 ident: 10.1016/j.inffus.2023.101805_b424 – year: 2016 ident: 10.1016/j.inffus.2023.101805_b307 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b503 – volume: 23 start-page: 91 issue: 1 year: 2016 ident: 10.1016/j.inffus.2023.101805_b359 article-title: Towards better analysis of deep convolutional neural networks publication-title: IEEE Trans. Vis. Comput. Graphics doi: 10.1109/TVCG.2016.2598831 – start-page: 31 year: 2018 ident: 10.1016/j.inffus.2023.101805_b28 article-title: The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery publication-title: Commun. ACM (CACM) – ident: 10.1016/j.inffus.2023.101805_b324 – start-page: 447 year: 1994 ident: 10.1016/j.inffus.2023.101805_b318 article-title: Sensitivity analysis for minimization of input data dimension for feedforward neural network – volume: 64 start-page: 34 year: 2021 ident: 10.1016/j.inffus.2023.101805_b502 article-title: Medical artificial intelligence: the European legal perspective publication-title: Commun. ACM doi: 10.1145/3458652 – ident: 10.1016/j.inffus.2023.101805_b368 doi: 10.1145/3290605.3300509 – volume: 24 start-page: 44 issue: 1 year: 2015 ident: 10.1016/j.inffus.2023.101805_b237 article-title: Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation publication-title: J. Comput. Graph. Statist. 
doi: 10.1080/10618600.2014.907095 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b381 – volume: 807 start-page: 298 year: 2020 ident: 10.1016/j.inffus.2023.101805_b432 article-title: A game-based approximate verification of deep neural networks with provable guarantees publication-title: Theoret. Comput. Sci. doi: 10.1016/j.tcs.2019.05.046 – start-page: 3 year: 2018 ident: 10.1016/j.inffus.2023.101805_b344 article-title: Theory→ concepts→ measures but policies→ metrics – volume: 17 start-page: 107 issue: 2 year: 2002 ident: 10.1016/j.inffus.2023.101805_b40 article-title: A review of explanation methods for Bayesian networks publication-title: Knowl. Eng. Rev. doi: 10.1017/S026988890200019X – volume: 79 start-page: 58 year: 2021 ident: 10.1016/j.inffus.2023.101805_b219 article-title: Explainable neural-symbolic learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the monuMAI cultural heritage use case publication-title: Inf. Fusion doi: 10.1016/j.inffus.2021.09.022 – volume: 26 start-page: 1096 year: 2019 ident: 10.1016/j.inffus.2023.101805_b444 article-title: S ummit: Scaling deep learning interpretability by visualizing activation and attribution summarizations publication-title: IEEE Trans. Vis. Comput. Graphics doi: 10.1109/TVCG.2019.2934659 – ident: 10.1016/j.inffus.2023.101805_b192 doi: 10.1609/aaai.v32i1.11501 – volume: 108 start-page: 379 year: 2018 ident: 10.1016/j.inffus.2023.101805_b222 article-title: State representation learning for control: An overview publication-title: Neural Netw. doi: 10.1016/j.neunet.2018.07.006 – ident: 10.1016/j.inffus.2023.101805_b189 doi: 10.1145/3306618.3314273 – ident: 10.1016/j.inffus.2023.101805_b340 – start-page: 37 year: 1994 ident: 10.1016/j.inffus.2023.101805_b297 article-title: Using sampling and queries to extract rules from trained neural networks – volume: 225 start-page: 1 year: 2013 ident: 10.1016/j.inffus.2023.101805_b322 article-title: Using sensitivity analysis and visualization techniques to open black box data mining models publication-title: Inform. Sci. doi: 10.1016/j.ins.2012.10.039 – volume: 296 year: 2021 ident: 10.1016/j.inffus.2023.101805_b177 article-title: Using ontologies to enhance human understandability of global post-hoc explanations of Black-box models publication-title: Artificial Intelligence doi: 10.1016/j.artint.2021.103471 – year: 2014 ident: 10.1016/j.inffus.2023.101805_b214 – volume: 11 issue: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b83 article-title: A historical perspective of explainable Artificial Intelligence publication-title: WIREs Data Min. Knowl. Discov. – volume: 23 start-page: 18 issue: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b73 article-title: Explainable AI: A review of machine learning interpretability methods publication-title: Entropy doi: 10.3390/e23010018 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b116 – volume: 40 start-page: 44 issue: 2 year: 2019 ident: 10.1016/j.inffus.2023.101805_b496 article-title: DARPA’s Explainable Artificial Intelligence (XAI) program publication-title: AI Mag. 
– year: 2021 ident: 10.1016/j.inffus.2023.101805_b497 – ident: 10.1016/j.inffus.2023.101805_b341 doi: 10.1145/3172944.3172946 – volume: 3 start-page: 41 year: 1975 ident: 10.1016/j.inffus.2023.101805_b91 article-title: Logic and conversation, syntax and semantics publication-title: Speech Acts doi: 10.1163/9789004368811_003 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b380 – ident: 10.1016/j.inffus.2023.101805_b187 doi: 10.1145/3287560.3287595 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b62 article-title: Reviewing the need for Explainable Artificial Intelligence (XAI) – volume: 77 start-page: 29 year: 2022 ident: 10.1016/j.inffus.2023.101805_b71 article-title: Unbox the black-box for the medical Explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond publication-title: Inf. Fusion doi: 10.1016/j.inffus.2021.07.016 – ident: 10.1016/j.inffus.2023.101805_b79 doi: 10.1145/3173574.3174156 – volume: 54 start-page: 3849 year: 2021 ident: 10.1016/j.inffus.2023.101805_b126 article-title: Artificial Intelligence, cyber-threats and Industry 4.0: Challenges and opportunities publication-title: Artif. Intell. Rev. doi: 10.1007/s10462-020-09942-2 – volume: 3 start-page: e745 year: 2021 ident: 10.1016/j.inffus.2023.101805_b111 article-title: The false hope of current approaches to explainable Artificial Intelligence in health care publication-title: Lancet Digit. Health doi: 10.1016/S2589-7500(21)00208-9 – year: 2019 ident: 10.1016/j.inffus.2023.101805_b325 – volume: 220 start-page: 1433 issue: 9 year: 2006 ident: 10.1016/j.inffus.2023.101805_b278 article-title: RULES-F: A fuzzy inductive learning algorithm publication-title: Proc. Inst. Mech. Eng. C doi: 10.1243/0954406C20004 – start-page: 1 year: 2022 ident: 10.1016/j.inffus.2023.101805_b69 article-title: Counterfactual explanations and how to find them: literature review and benchmarking publication-title: Data Min. Knowl. Discov. – start-page: 245 year: 2021 ident: 10.1016/j.inffus.2023.101805_b209 article-title: Stop ordering machine learning algorithms by their explainability! An empirical investigation of the tradeoff between performance and explainability – volume: 3 start-page: 525 year: 2021 ident: 10.1016/j.inffus.2023.101805_b146 article-title: Deterministic local interpretable model-agnostic explanations for stable explainability publication-title: Mach. Learn. Knowl. Extr. doi: 10.3390/make3030027 – volume: 13 start-page: 71 issue: 1 year: 1993 ident: 10.1016/j.inffus.2023.101805_b286 article-title: Extracting refined rules from knowledge-based neural networks publication-title: Mach. Learn. doi: 10.1007/BF00993103 – volume: 1 start-page: 48 issue: 1 year: 2017 ident: 10.1016/j.inffus.2023.101805_b55 article-title: Towards better analysis of machine learning models: A visual analytics perspective publication-title: Vis. Inform. doi: 10.1016/j.visinf.2017.01.006 – start-page: 153 year: 2021 ident: 10.1016/j.inffus.2023.101805_b267 article-title: Factual and counterfactual explanation of fuzzy information granules – year: 2020 ident: 10.1016/j.inffus.2023.101805_b150 – volume: 1 start-page: 1 issue: 3 year: 2017 ident: 10.1016/j.inffus.2023.101805_b1 article-title: Low-resource multi-task audio sensing for mobile and embedded devices via shared deep neural network representations publication-title: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 
doi: 10.1145/3131895 – ident: 10.1016/j.inffus.2023.101805_b243 doi: 10.1007/978-3-030-28954-6_9 – start-page: 1 year: 2022 ident: 10.1016/j.inffus.2023.101805_b454 article-title: Explain to not forget: defending against catastrophic forgetting with xai – year: 2018 ident: 10.1016/j.inffus.2023.101805_b246 – ident: 10.1016/j.inffus.2023.101805_b30 doi: 10.1109/CVPRW50498.2020.00020 – volume: 11 year: 2021 ident: 10.1016/j.inffus.2023.101805_b115 article-title: A historical perspective of explainable Artificial Intelligence publication-title: Wiley Interdiscip. Rev. Data Min. Knowl. Discov. doi: 10.1002/widm.1391 – volume: 15 start-page: 318 issue: 3 year: 2010 ident: 10.1016/j.inffus.2023.101805_b281 article-title: RULES3-EXT improvements on rules-3 induction algorithm publication-title: Math. Comput. Appl. – volume: 15 issue: 12 year: 2020 ident: 10.1016/j.inffus.2023.101805_b22 article-title: Demonstration of the potential of white-box machine learning approaches to gain insights from cardiovascular disease electrocardiograms publication-title: PLoS One doi: 10.1371/journal.pone.0243615 – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b129 article-title: XAI-AV: Explainable Artificial Intelligence for trust management in autonomous vehicles – start-page: 3 year: 2013 ident: 10.1016/j.inffus.2023.101805_b352 article-title: Too much, too little, or just right? Ways explanations impact end users’ mental models – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b9 article-title: Artificial intelligence and business value: A literature review publication-title: Inform. Syst. Front. – year: 2014 ident: 10.1016/j.inffus.2023.101805_b44 – start-page: 0210 year: 2018 ident: 10.1016/j.inffus.2023.101805_b64 article-title: Explainable artificial intelligence: A survey – ident: 10.1016/j.inffus.2023.101805_b42 doi: 10.1145/358916.358995 – volume: 3 start-page: 786 issue: 26 year: 2018 ident: 10.1016/j.inffus.2023.101805_b397 article-title: iml: An R package for interpretable machine learning publication-title: J. Open Source Softw. doi: 10.21105/joss.00786 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b225 – ident: 10.1016/j.inffus.2023.101805_b516 doi: 10.1145/2601248.2601268 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b514 – volume: 29 year: 2016 ident: 10.1016/j.inffus.2023.101805_b158 article-title: Examples are not enough, learn to criticize! criticism for interpretability publication-title: Adv. Neural Inf. Process. Syst. – volume: 8 start-page: 373 issue: 6 year: 1995 ident: 10.1016/j.inffus.2023.101805_b39 article-title: Survey and critique of techniques for extracting rules from trained artificial neural networks publication-title: Knowl.-Based Syst. doi: 10.1016/0950-7051(96)81920-4 – start-page: 118 year: 2022 ident: 10.1016/j.inffus.2023.101805_b180 article-title: Physics guided neural networks for spatio-temporal super-resolution of turbulent flows – start-page: 39 year: 2018 ident: 10.1016/j.inffus.2023.101805_b80 article-title: Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models publication-title: ITU J. ICT Discoveries – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b95 article-title: Educating software and AI stakeholders about algorithmic fairness, accountability, transparency and ethics publication-title: Int. J. Artif. Intell. Educ. 
– ident: 10.1016/j.inffus.2023.101805_b232 – year: 2020 ident: 10.1016/j.inffus.2023.101805_b436 – year: 2022 ident: 10.1016/j.inffus.2023.101805_b434 – start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b106 article-title: Toward explainable artificial intelligence through fuzzy systems – ident: 10.1016/j.inffus.2023.101805_b231 doi: 10.1609/aaai.v32i1.11491 – volume: 28 start-page: 1503 issue: 5 year: 2014 ident: 10.1016/j.inffus.2023.101805_b254 article-title: A peek into the black box: exploring classifiers by randomization publication-title: Data Min. Knowl. Discov. doi: 10.1007/s10618-014-0368-8 – start-page: 579 year: 2002 ident: 10.1016/j.inffus.2023.101805_b163 article-title: Data squashing: constructing summary data sets – volume: 28 start-page: 2660 issue: 11 year: 2016 ident: 10.1016/j.inffus.2023.101805_b317 article-title: Evaluating the visualization of what a deep neural network has learned publication-title: IEEE Trans. Neural Netw. Learn. Syst. doi: 10.1109/TNNLS.2016.2599820 – volume: 70 start-page: 245 year: 2021 ident: 10.1016/j.inffus.2023.101805_b61 article-title: A survey on the explainability of supervised machine learning publication-title: J. Artificial Intelligence Res. doi: 10.1613/jair.1.12228 – ident: 10.1016/j.inffus.2023.101805_b201 – year: 2020 ident: 10.1016/j.inffus.2023.101805_b419 – volume: 267 start-page: 1 year: 2019 ident: 10.1016/j.inffus.2023.101805_b52 article-title: Explanation in artificial intelligence: Insights from the social sciences publication-title: Artificial Intelligence doi: 10.1016/j.artint.2018.07.007 – ident: 10.1016/j.inffus.2023.101805_b369 doi: 10.1145/2678025.2701399 – ident: 10.1016/j.inffus.2023.101805_b362 doi: 10.1145/3025171.3025209 – start-page: 2148 year: 2018 ident: 10.1016/j.inffus.2023.101805_b48 article-title: Graph theoretical properties of logic based argumentation frameworks – volume: 26 start-page: 56 issue: 1 year: 2019 ident: 10.1016/j.inffus.2023.101805_b412 article-title: The what-if tool: Interactive probing of machine learning models publication-title: IEEE Trans. Vis. Comput. Graphics – year: 2018 ident: 10.1016/j.inffus.2023.101805_b226 – volume: 220 start-page: 537 issue: 4 year: 2006 ident: 10.1016/j.inffus.2023.101805_b279 article-title: SRI: a scalable rule induction algorithm publication-title: Proc. Inst. Mech. Eng. C doi: 10.1243/09544062C18304 – year: 2020 ident: 10.1016/j.inffus.2023.101805_b449 – volume: 5 start-page: e5 year: 2023 ident: 10.1016/j.inffus.2023.101805_b335 article-title: Explainable machine learning for public policy: Use cases, gaps, and research directions publication-title: Data & Policy doi: 10.1017/dap.2023.2 – ident: 10.1016/j.inffus.2023.101805_b342 doi: 10.1145/2702123.2702174 – volume: 58 start-page: 82 year: 2020 ident: 10.1016/j.inffus.2023.101805_b63 article-title: Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI publication-title: Inf. Fusion doi: 10.1016/j.inffus.2019.12.012 – volume: 19 start-page: 388 issue: 6 year: 2006 ident: 10.1016/j.inffus.2023.101805_b284 article-title: A new algorithm for automatic knowledge acquisition in inductive learning publication-title: Knowl.-Based Syst. doi: 10.1016/j.knosys.2006.03.001 – volume: 4 issue: 35 year: 2019 ident: 10.1016/j.inffus.2023.101805_b425 article-title: Yellowbrick: Visualizing the scikit-learn model selection process publication-title: J. Open Source Softw. 
doi: 10.21105/joss.01075 – ident: 10.1016/j.inffus.2023.101805_b141 – volume: 258 year: 2022 ident: 10.1016/j.inffus.2023.101805_b218 article-title: Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification publication-title: Knowl.-Based Syst. doi: 10.1016/j.knosys.2022.109947 – ident: 10.1016/j.inffus.2023.101805_b204 doi: 10.1145/2939672.2939874 – ident: 10.1016/j.inffus.2023.101805_b348 doi: 10.1609/hcomp.v7i1.5280 – volume: 83 start-page: 187 year: 2017 ident: 10.1016/j.inffus.2023.101805_b3 article-title: Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies publication-title: Expert Syst. Appl. doi: 10.1016/j.eswa.2017.04.030 – year: 2018 ident: 10.1016/j.inffus.2023.101805_b411 – volume: 11 start-page: 448 issue: 3 year: 1999 ident: 10.1016/j.inffus.2023.101805_b299 article-title: Symbolic interpretation of artificial neural networks publication-title: IEEE Trans. Knowl. Data Eng. doi: 10.1109/69.774103 – ident: 10.1016/j.inffus.2023.101805_b326 – start-page: 93 year: 2006 ident: 10.1016/j.inffus.2023.101805_b367 article-title: Trust building with explanation interfaces – ident: 10.1016/j.inffus.2023.101805_b458 doi: 10.1145/3306618.3314293 – start-page: 13 year: 2009 ident: 10.1016/j.inffus.2023.101805_b175 article-title: Kddonto: An ontology for discovery and composition of kdd algorithms – volume: 8 start-page: 537 issue: 4 year: 2014 ident: 10.1016/j.inffus.2023.101805_b283 article-title: RULES-IT: incremental transfer learning with RULES family publication-title: Front. Comput. Sci. doi: 10.1007/s11704-014-3297-1 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b86 article-title: A survey of visual analytics for Explainable Artificial Intelligence methods publication-title: Comput. Graph. – year: 2018 ident: 10.1016/j.inffus.2023.101805_b395 – ident: 10.1016/j.inffus.2023.101805_b233 – ident: 10.1016/j.inffus.2023.101805_b184 doi: 10.18653/v1/N19-1404 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b257 – volume: 22 start-page: 250 year: 2015 ident: 10.1016/j.inffus.2023.101805_b358 article-title: An uncertainty-aware approach for exploratory microblog retrieval publication-title: IEEE Trans. Vis. Comput. Graphics doi: 10.1109/TVCG.2015.2467554 – ident: 10.1016/j.inffus.2023.101805_b316 doi: 10.1109/CVPR.2016.318 – start-page: 60 year: 2018 ident: 10.1016/j.inffus.2023.101805_b430 article-title: A reductions approach to fair classification – volume: 32 start-page: 88 issue: 1 year: 2017 ident: 10.1016/j.inffus.2023.101805_b499 article-title: Regulating autonomous systems: Beyond standards publication-title: IEEE Intell. Syst. doi: 10.1109/MIS.2017.1 – volume: 72 start-page: 367 issue: 4 year: 2014 ident: 10.1016/j.inffus.2023.101805_b345 article-title: How should I explain? A comparison of different explanation types for recommender systems publication-title: Int. J. Hum.-Comput. Stud. doi: 10.1016/j.ijhcs.2013.12.007 – volume: 34 start-page: 9391 year: 2021 ident: 10.1016/j.inffus.2023.101805_b455 article-title: Reliable post hoc explanations: Modeling uncertainty in explainability publication-title: Adv. Neural Inf. Process. Syst. – ident: 10.1016/j.inffus.2023.101805_b185 – volume: 10 start-page: 1 issue: 1 year: 2019 ident: 10.1016/j.inffus.2023.101805_b462 article-title: Unmasking Clever Hans predictors and assessing what machines really learn publication-title: Nature Commun. 
doi: 10.1038/s41467-019-08987-4 – volume: 7 start-page: 151 year: 1999 ident: 10.1016/j.inffus.2023.101805_b287 article-title: Heuristic constraints enforcement for training of and rule extraction from a fuzzy/neural architecture publication-title: IEEE Trans. Fuzzy Syst. doi: 10.1109/91.755397 – ident: 10.1016/j.inffus.2023.101805_b290 – start-page: 2668 year: 2018 ident: 10.1016/j.inffus.2023.101805_b230 article-title: Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav) – year: 2016 ident: 10.1016/j.inffus.2023.101805_b8 – year: 2020 ident: 10.1016/j.inffus.2023.101805_b103 – volume: 11 start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b119 article-title: A multilayer multimodal detection and prediction model based on explainable artificial intelligence for Alzheimer’s disease publication-title: Sci. Rep. doi: 10.1038/s41598-021-82098-3 – start-page: 39 year: 2022 ident: 10.1016/j.inffus.2023.101805_b472 article-title: General pitfalls of model-agnostic interpretation methods for machine learning models – volume: 31 start-page: 841 year: 2017 ident: 10.1016/j.inffus.2023.101805_b261 article-title: Counterfactual explanations without opening the black box: Automated decisions and the GDPR publication-title: Harv. JL Tech. – volume: 4 issue: 37 year: 2019 ident: 10.1016/j.inffus.2023.101805_b20 article-title: XAI: Explainable artificial intelligence publication-title: Science Robotics doi: 10.1126/scirobotics.aay7120 – volume: 3 year: 2018 ident: 10.1016/j.inffus.2023.101805_b375 article-title: The building blocks of interpretability publication-title: Distill doi: 10.23915/distill.00010 – ident: 10.1016/j.inffus.2023.101805_b349 doi: 10.1609/hcomp.v6i1.13337 – volume: 2 start-page: 1 year: 2021 ident: 10.1016/j.inffus.2023.101805_b10 article-title: Machine learning: Algorithms, real-world applications and research directions publication-title: SN Comput. Sci. doi: 10.1007/s42979-021-00592-x – volume: 10 start-page: 464 issue: 10 year: 2006 ident: 10.1016/j.inffus.2023.101805_b89 article-title: The structure and function of explanations publication-title: Trends in Cognitive Sciences doi: 10.1016/j.tics.2006.08.004 – ident: 10.1016/j.inffus.2023.101805_b140 – volume: 116 start-page: 22071 issue: 44 year: 2019 ident: 10.1016/j.inffus.2023.101805_b151 article-title: Definitions, methods, and applications in interpretable machine learning publication-title: Proc. Natl. Acad. Sci. doi: 10.1073/pnas.1900654116 – ident: 10.1016/j.inffus.2023.101805_b117 – ident: 10.1016/j.inffus.2023.101805_b465 – start-page: 229 year: 2022 ident: 10.1016/j.inffus.2023.101805_b471 article-title: Interpreting and improving deep-learning models with reality checks – year: 2020 ident: 10.1016/j.inffus.2023.101805_b149 – volume: 11 start-page: 3494 year: 2023 ident: 10.1016/j.inffus.2023.101805_b110 article-title: Deep learning for predictive analytics in reversible steganography publication-title: IEEE Access doi: 10.1109/ACCESS.2023.3233976 – year: 2021 ident: 10.1016/j.inffus.2023.101805_b102 |