On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis

Bibliographic Details
Published in arXiv.org
Main Authors Dalal, Abhilekha; Rayan, Rushrukh; Barua, Adrita; Vasserman, Eugene Y.; Sarker, Md Kamruzzaman; Hitzler, Pascal
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 21.04.2024

Abstract A major challenge in Explainable AI lies in correctly interpreting activations of hidden neurons: accurate interpretations would help answer the question of what a deep learning system internally detects as relevant in the input, demystifying the otherwise black-box nature of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans, but systematic automated methods that would be able to hypothesize and verify interpretations of hidden neuron activations are underexplored. This is particularly the case for approaches that can both draw explanations from substantial background knowledge, and that are based on inherently explainable (symbolic) methods. In this paper, we introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations. Our approach is based on using a Wikipedia-derived concept hierarchy with approximately 2 million classes as background knowledge, and utilizes OWL-reasoning-based Concept Induction for explanation generation. Additionally, we explore and compare the capabilities of off-the-shelf pre-trained multimodal-based explainable methods. Our results indicate that our approach can automatically attach meaningful class expressions as explanations to individual neurons in the dense layer of a Convolutional Neural Network. Evaluation through statistical analysis and degree of concept activation in the hidden layer shows that our method provides a competitive edge in both quantitative and qualitative aspects compared to prior work.
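The general idea sketched in the abstract — collect the inputs that strongly activate a neuron, then hypothesize the most specific background-knowledge concept covering all of them — can be illustrated with a toy stand-in. Everything below (the tiny hierarchy, the threshold, the function names) is invented for illustration; the paper's actual method uses OWL-reasoning-based Concept Induction over a Wikipedia-derived hierarchy of roughly 2 million classes, not this simplistic set intersection.

```python
# Toy concept hierarchy, child -> parent (a hypothetical stand-in for the
# ~2M-class Wikipedia-derived hierarchy used as background knowledge).
HIERARCHY = {
    "bulldog": "dog", "beagle": "dog", "tabby": "cat",
    "dog": "animal", "cat": "animal",
}

def ancestors(concept):
    """Return the concept plus all of its ancestors in the hierarchy."""
    out = [concept]
    while concept in HIERARCHY:
        concept = HIERARCHY[concept]
        out.append(concept)
    return out

def label_neuron(activations, image_concepts, threshold=0.5):
    """Hypothesize a label for one neuron: take the images whose activation
    exceeds `threshold` and return the most specific concept shared by all
    of them (a crude stand-in for OWL Concept Induction)."""
    active = [img for img, a in activations.items() if a > threshold]
    if not active:
        return None
    # Intersect the ancestor sets of all strongly activating images.
    common = set(ancestors(image_concepts[active[0]]))
    for img in active[1:]:
        common &= set(ancestors(image_concepts[img]))
    if not common:
        return None
    # Prefer the most specific (deepest) shared concept.
    return max(common, key=lambda c: len(ancestors(c)))

# Toy example: a neuron firing on dog images of two different breeds.
acts = {"img1": 0.9, "img2": 0.8, "img3": 0.1}
concepts = {"img1": "bulldog", "img2": "beagle", "img3": "tabby"}
print(label_neuron(acts, concepts))  # prints "dog"
```

The sketch captures only the "hypothesize" half; the paper additionally verifies candidate labels, e.g. via statistical analysis of concept activation in the hidden layer.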
ContentType Paper
Copyright 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DatabaseName ProQuest SciTech Collection
ProQuest Technology Collection
Materials Science & Engineering Collection
ProQuest Central (Alumni Edition)
ProQuest Central
ProQuest Central Essentials
AUTh Library subscriptions: ProQuest Central
Technology Collection
ProQuest One Community College
SciTech Premium Collection
ProQuest Engineering Collection
Engineering Database
Publicly Available Content Database
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
Engineering Collection
Discipline Physics
EISSN 2331-8422
Genre Working Paper/Pre-Print
IsOpenAccess true
IsPeerReviewed false
IsScholarly false
OpenAccessLink https://www.proquest.com/docview/3044071008?pq-origsite=%requestingapplication%
PQID 3044071008
PQPubID 2050157
PublicationCentury 2000
PublicationDate 20240421
PublicationDecade 2020
PublicationPlace Ithaca
PublicationTitle arXiv.org
PublicationYear 2024
Publisher Cornell University Library, arXiv.org
SecondaryResourceType preprint
SubjectTerms Activation analysis
Artificial neural networks
Deep learning
Explainable artificial intelligence
Machine learning
Neurons
Statistical analysis
URI https://www.proquest.com/docview/3044071008