Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system

Bibliographic Details
Published in: Ethics and Information Technology, Vol. 24, No. 1
Main Authors: Shulner-Tal, Avital; Kuflik, Tsvi; Kliger, Doron
Format: Journal Article
Language: English
Published: Springer Netherlands; Springer Nature B.V., Dordrecht, 01.03.2022
Subjects: see Subject Terms below
Online Access: Get full text

Abstract:
In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is increasing awareness of the need to explain their underlying decision-making processes and resulting outcomes. Because these systems are often regarded as black boxes, adding explanations to their outcomes may make them appear more transparent and, as a result, increase users' trust in the system and their perception of its fairness, regardless of its actual fairness, which can be measured using various fairness tests and measurements. Different explanation styles may affect users' perception of the system's fairness and their understanding of its outcome differently; hence, there is a need to understand how various explanation styles influence non-expert users' fairness perceptions and their understanding of the system's outcome. This study aimed to fill that need. We conducted a between-subjects user study to examine the effect of various explanation styles on users' fairness perception and understanding of the outcome. The experiment compared four known styles of textual explanation (case-based, demographic-based, input influence-based and sensitivity-based) with a new style (certification-based) that reflects the results of an auditing process applied to the system. The results suggest that providing some kind of explanation contributes to users' understanding of the outcome and that some explanation styles are more beneficial than others. Moreover, while explanations provided by the system can indeed enhance users' perception of fairness, that perception depends mainly on the system's outcome. These results shed light on one of the main problems in the explainability of algorithmic systems: choosing the explanation that best promotes users' fairness perception toward a particular system, with respect to that system's outcome. The contribution of this study lies in the new and realistic case study it examines, in the creation and evaluation of a new explanation style that can serve as the link between the actual (computational) fairness of the system and users' fairness perception, and in demonstrating the need to analyze and evaluate explanations while taking the system's outcome into account.
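The abstract distinguishes a system's actual (computational) fairness, which can be checked with standard fairness tests, from users' perceived fairness, and names five textual explanation styles. As a minimal, self-contained illustration (not the authors' study materials or protocol), the Python sketch below computes demographic parity difference, one widely used computational fairness test, and renders hypothetical templates in the spirit of the five styles; every function name, template, and data point here is invented for illustration.

```python
"""Illustrative sketch only. The metric and templates below are generic
stand-ins for the concepts named in the abstract, not the paper's
actual study materials."""

from collections import defaultdict


def demographic_parity_difference(outcomes):
    """One common computational fairness test: the gap between the
    highest and lowest favorable-outcome rates across groups.

    outcomes: iterable of (group, decision) pairs, where decision is
    1 for a favorable outcome and 0 otherwise.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        favorable[group] += decision
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates


# Hypothetical one-line templates echoing the five explanation styles
# named in the abstract (case-, demographic-, input influence-,
# sensitivity- and certification-based).
EXPLANATION_TEMPLATES = {
    "case-based": "An applicant similar to you ({similar_case}) "
                  "received the same outcome.",
    "demographic-based": "{pct}% of applicants in your demographic "
                         "group received this outcome.",
    "input influence-based": "The inputs that most influenced this "
                             "outcome were: {top_features}.",
    "sensitivity-based": "Had your {feature} been {alt_value}, the "
                         "outcome would have been different.",
    "certification-based": "This system was audited for fairness on "
                           "{audit_date} and certified by {auditor}.",
}

if __name__ == "__main__":
    # Toy decisions: (group label, favorable decision yes/no).
    toy = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_difference(toy)
    print(f"favorable rates by group: {rates}; parity gap: {gap:.2f}")

    print(EXPLANATION_TEMPLATES["sensitivity-based"].format(
        feature="years of experience", alt_value="5 or more"))
```

In the paper's terms, the certification-based style is the bridge between the two notions of fairness: an audit result such as the parity gap above, once certified, can be reported back to the end user as an explanation.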
Article Number: 2
Authors:
– Shulner-Tal, Avital (ORCID 0000-0003-2091-2966; avitalshulner@gmail.com), Department of Information Systems, University of Haifa
– Kuflik, Tsvi, Department of Information Systems, University of Haifa
– Kliger, Doron, Department of Economics, University of Haifa
Cited By (Crossref DOIs):
10.1145/3716394
10.1109/MTS.2023.3340238
10.1177/20539517221115189
10.14712/23366478.2024.24
10.3390/electronics12122594
10.1155/2024/4628855
10.1016/j.tele.2023.101954
10.1080/10447318.2024.2348843
10.1177/09636625241291192
10.1007/s11257-024-09400-6
10.1080/0960085X.2024.2395531
10.3390/bdcc8090105
10.1080/10447318.2023.2210890
10.1007/s10676-024-09746-w
10.1016/j.tourman.2022.104716
10.3389/fpsyg.2024.1221177
10.3389/frobt.2024.1375490
10.3389/frai.2022.879603
10.1057/s41599-024-02759-2
10.1080/10447318.2022.2095705
10.1016/j.im.2024.103969
Content Type: Journal Article
Copyright: © The Author(s), under exclusive licence to Springer Nature B.V. 2022
DOI: 10.1007/s10676-022-09623-4
Discipline: Library & Information Science; Philosophy; Computer Science
EISSN: 1572-8439
ISSN: 1388-1957
Peer Reviewed: Yes
Scholarly: Yes
Issue: 1
Keywords: Decision support systems; Users' perception; Explainability; Fairness; Algorithmic systems
Publication Date: 2022-03-01
Publication Place: Dordrecht
Publication Title: Ethics and Information Technology (abbrev. Ethics Inf Technol)
Publication Year: 2022
Publisher: Springer Netherlands; Springer Nature B.V.
Subject Terms:
Algorithms
Computer Science
Decision making
Ethics
Evaluation
Innovation/Technology Management
Library Science
Management of Computing and Information Systems
Original Paper
Perception
Perceptions
User Interfaces and Human Computer Interaction
URI: https://link.springer.com/article/10.1007/s10676-022-09623-4
     https://www.proquest.com/docview/2624037513
Volume: 24