Explainable Multi-View Deep Networks Methodology for Experimental Physics
Main Authors | Schneider, Nadav; Tzdaka, Muriel; Sturm, Galit; Lazovski, Guy; Bar, Galit; Oren, Gilad; Gvishi, Raz; Oren, Gal |
---|---|
Format | Journal Article |
Language | English |
Published | 16.08.2023 |
Subjects | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
Online Access | https://arxiv.org/abs/2308.08206 |
Abstract | Physical experiments often involve multiple imaging representations, such as
X-ray scans and microscopic images. Deep learning models have been widely used
for supervised analysis in these experiments. Combining different image
representations is frequently required to analyze and make a decision properly.
Consequently, multi-view data has emerged: datasets where each sample is
described by views from different angles, sources, or modalities. These
problems are addressed with the concept of multi-view learning. Understanding
the decision-making process of deep learning models is essential for reliable
and credible analysis. Hence, many explainability methods have been devised
recently. Nonetheless, multi-view models lack proper explainability, as their
architectures make them challenging to explain. In this paper, we suggest
different multi-view architectures for the vision domain, each suited to a
different problem, and we also present a methodology for explaining these
models. To demonstrate the effectiveness of our methodology, we focus on the
domain of High Energy Density Physics (HEDP) experiments, where multiple
imaging representations are used to assess the quality of foam samples. We
apply our methodology to classify the quality of foam samples using the
suggested multi-view architectures. Through experimental results, we showcase
the improvement gained by an accurate architecture choice on both accuracy
(from 78% to 84%) and AUC (from 83% to 93%), and we present a trade-off between
performance and explainability. Specifically, we demonstrate that our approach
enables the explanation of individual one-view models, providing insights into
the decision-making process of each view. This understanding enhances the
interpretability of the overall multi-view model. The sources of this work are
available at:
https://github.com/Scientific-Computing-Lab-NRCN/Multi-View-Explainability. |
---|---|
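The abstract describes multi-view architectures in which each view has its own model whose decisions can be explained individually before fusion. As a rough illustration only, the following is a minimal late-fusion sketch in plain NumPy: the class and function names (`ViewModel`, `fuse`) are ours, the linear scorers stand in for the paper's deep per-view networks, and none of this reflects the paper's actual implementation.

```python
# Hypothetical sketch of late-fusion multi-view classification with
# per-view outputs exposed for explanation. Not the paper's architecture.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ViewModel:
    """One-view scorer: a linear model standing in for a deep per-view network."""
    def __init__(self, weights, bias=0.0):
        self.w = np.asarray(weights, dtype=float)
        self.b = bias

    def predict_proba(self, x):
        # Probability that this single view labels the sample as good quality.
        return sigmoid(self.w @ np.asarray(x, dtype=float) + self.b)

def fuse(view_models, views):
    """Late fusion: average the per-view probabilities into one decision."""
    probs = [m.predict_proba(x) for m, x in zip(view_models, views)]
    return float(np.mean(probs)), probs

# Two hypothetical views of one foam sample (e.g. top-view and profile features).
models = [ViewModel([0.8, -0.2]), ViewModel([0.5, 0.5])]
sample = [np.array([1.0, 0.3]), np.array([0.2, 0.9])]
fused, per_view = fuse(models, sample)
# `per_view` exposes each one-view model's decision, which is where a
# per-view explainability method (e.g. a saliency map) would attach.
```

Because fusion happens after each view produces its own probability, every one-view model remains a self-contained unit that standard explainability tools can be applied to, which is the property the abstract highlights.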
Author | Schneider, Nadav; Tzdaka, Muriel; Sturm, Galit; Lazovski, Guy; Bar, Galit; Oren, Gilad; Gvishi, Raz; Oren, Gal |
ContentType | Journal Article |
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2308.08206 |
DatabaseName | arXiv Computer Science arXiv.org |
ExternalDocumentID | 2308_08206 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
OpenAccessLink | https://arxiv.org/abs/2308.08206 |
PublicationDate | 2023-08-16 |
PublicationYear | 2023 |
SecondaryResourceType | preprint |
SourceID | arxiv |
SourceType | Open Access Repository |
SubjectTerms | Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition |
Title | Explainable Multi-View Deep Networks Methodology for Experimental Physics |
URI | https://arxiv.org/abs/2308.08206 |
linkProvider | Cornell University |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=Explainable+Multi-View+Deep+Networks+Methodology+for+Experimental+Physics&rft.au=Schneider%2C+Nadav&rft.au=Tzdaka%2C+Muriel&rft.au=Sturm%2C+Galit&rft.au=Lazovski%2C+Guy&rft.date=2023-08-16&rft_id=info:doi/10.48550%2Farxiv.2308.08206&rft.externalDocID=2308_08206 |