Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning


Bibliographic Details
Main Authors: Zhang, Wencan; Dimiccoli, Mariella; Lim, Brian Y
Format: Journal Article
Language: English
Published: 2022-01-30
DOI: 10.48550/arxiv.2201.12835


Abstract: Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias). Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases.
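The abstract describes training a multi-input, multi-task model whose primary prediction task is combined with auxiliary tasks for explanation and bias level prediction. A minimal sketch of such a combined training objective is shown below; the function name, loss inputs, and weights are illustrative assumptions, not details taken from the record.

```python
# Hypothetical sketch of a multi-task objective like the one the abstract
# describes: a primary prediction loss plus auxiliary losses for explanation
# (CAM) faithfulness and bias-level prediction. Weights are illustrative.

def multitask_loss(pred_loss, expl_loss, bias_loss, w_expl=1.0, w_bias=0.5):
    """Combine the three per-batch scalar task losses into one objective."""
    return pred_loss + w_expl * expl_loss + w_bias * bias_loss

# Example: scalar losses from the three heads for one batch.
total = multitask_loss(pred_loss=0.7, expl_loss=0.2, bias_loss=0.1)
print(total)  # 0.7 + 1.0*0.2 + 0.5*0.1 = 0.95
```

In such setups the auxiliary weights are hyperparameters that trade off explanation faithfulness against raw prediction accuracy.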
Copyright: http://arxiv.org/licenses/nonexclusive-distrib/1.0
Database: arXiv Computer Science (arXiv.org)
Open Access: true
Peer Reviewed: false
Open Access Link: https://arxiv.org/abs/2201.12835
Resource Type: preprint
Source: arXiv (Open Access Repository)
Subjects: Computer Science - Artificial Intelligence; Computer Science - Human-Computer Interaction
Link Provider: Cornell University