DeepVID: Deep Visual Interpretation and Diagnosis for Image Classifiers via Knowledge Distillation
Published in | IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 6, pp. 2168-2180
Main Authors | Wang, Junpeng; Gou, Liang; Zhang, Wei; Yang, Hao; Shen, Han-Wei
Format | Journal Article |
Language | English |
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.06.2019
Abstract | Deep Neural Networks (DNNs) have been used extensively across disciplines due to their superior performance. In most cases, however, DNNs are treated as black boxes whose internal working mechanisms are difficult to interpret. Given that trust in a model is often built on an understanding of how it works, interpreting DNNs becomes all the more important, especially in safety-critical applications (e.g., medical diagnosis, autonomous driving). In this paper, we propose DeepVID, a Deep learning approach to Visually Interpret and Diagnose DNN models, especially image classifiers. Specifically, we train a small, locally faithful model to mimic the behavior of an original cumbersome DNN around a particular data instance of interest; the local model is simple enough to be visually interpreted (e.g., a linear model). Knowledge distillation transfers knowledge from the cumbersome DNN to the small model, and a deep generative model (a variational auto-encoder) generates neighbors around the instance of interest. These neighbors, which exhibit small feature variations and carry semantic meaning, effectively probe the DNN's behavior around the instance of interest and help the small model learn that behavior. Through comprehensive evaluations, as well as case studies conducted with deep learning experts, we validate the effectiveness of DeepVID.
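The distillation step named in the abstract is, in its standard form (Hinton et al., 2015), a temperature-softened matching of teacher and student output distributions. As a sketch, with student logits $s$, teacher logits $t$, temperature $T$, and softmax $\sigma$ (symbols ours, not the paper's notation):

$$\mathcal{L}_{\mathrm{KD}} = T^{2}\,\mathrm{KL}\big(\sigma(t/T)\,\|\,\sigma(s/T)\big), \qquad \sigma(u)_i = \frac{e^{u_i}}{\sum_j e^{u_j}},$$

where the $T^{2}$ factor keeps gradient magnitudes comparable across temperatures.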
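Putting the pieces of the abstract together, the pipeline is: sample neighbors of the instance in a VAE's latent space, query the teacher DNN for softened predictions on those neighbors, and distill them into a linear student whose weights can be read as a saliency map. A minimal PyTorch sketch follows; it assumes a pre-trained `teacher` and a `vae` with `encode`/`decode` methods, and all names and hyperparameters (`sigma`, `temperature`, `n_neighbors`) are illustrative rather than taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def explain_instance(x, teacher, vae, n_classes,
                     n_neighbors=512, sigma=0.5, temperature=4.0,
                     epochs=200, lr=1e-2):
    """Fit a linear student that mimics `teacher` in the neighborhood of x.

    x:        (1, d) flattened image tensor.
    teacher:  black-box DNN, (N, d) -> (N, n_classes) logits.
    vae:      generative model with encode(x) -> (mu, logvar), decode(z) -> x'.
    Returns the student weight matrix, shape (n_classes, d).
    """
    teacher.eval()
    vae.eval()
    with torch.no_grad():
        # Neighbors with small, semantically meaningful variations:
        # perturb x in the VAE latent space and decode back to image space.
        mu, _ = vae.encode(x)
        z = mu + sigma * torch.randn(n_neighbors, mu.size(-1))
        neighbors = vae.decode(z)                      # (n_neighbors, d)
        # Soft targets from the cumbersome teacher, softened by temperature.
        soft_targets = F.softmax(teacher(neighbors) / temperature, dim=1)

    # The student is linear so its weights can be visually interpreted.
    student = torch.nn.Linear(x.size(-1), n_classes)
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(student(neighbors) / temperature, dim=1)
        # Distillation loss over the generated neighborhood: KL divergence
        # between the student's and teacher's softened distributions.
        loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return student.weight.detach()  # row c ~ saliency map for class c
```

Reading row c of the returned weight matrix as an image gives per-pixel evidence for class c, which is the kind of visual interpretation the abstract describes for the linear student.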
Authors | Junpeng Wang (Department of Computer Science and Engineering, Ohio State University, Columbus, OH, USA; wang.7665@osu.edu; ORCID 0000-0002-1130-9914); Liang Gou (Visa Research, Palo Alto, CA, USA; ligou@visa.com); Wei Zhang (Visa Research, Palo Alto, CA, USA; wzhan@visa.com); Hao Yang (Visa Research, Palo Alto, CA, USA; haoyang@visa.com); Han-Wei Shen (Department of Computer Science and Engineering, Ohio State University, Columbus, OH, USA; shen.94@osu.edu)
CODEN | ITVGEA |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019 |
DOI | 10.1109/TVCG.2019.2903943 |
Discipline | Engineering |
EISSN | 1941-0506 |
EndPage | 2180 |
ExternalDocumentID | PubMed: 30892211; Crossref: 10.1109/TVCG.2019.2903943; IEEE Xplore: 8667661
Genre | Original Research; Journal Article
Funding | NSF (SBE-1738502); UT-Battelle LLC (4000159557); Los Alamos National Laboratory (471415)
ISSN | 1077-2626
Issue | 6 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
PMID | 30892211 |
PageCount | 13 |
PublicationDate | 2019-06-01 |
PublicationTitle | IEEE transactions on visualization and computer graphics |
PublicationTitleAbbrev | TVCG |
PublicationTitleAlternate | IEEE Trans Vis Comput Graph |
PublicationYear | 2019 |
Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
StartPage | 2168 |
SubjectTerms | Analytical models; Artificial neural networks; Classifiers; Data models; Deep learning; Deep neural networks; Diagnosis; Distillation; generative model; knowledge distillation; Knowledge management; Machine learning; Medical imaging; model interpretation; Neural networks; Safety critical; Semantics; Training; Visual analytics
URI | https://ieeexplore.ieee.org/document/8667661; https://www.ncbi.nlm.nih.gov/pubmed/30892211; https://www.proquest.com/docview/2220123310; https://www.proquest.com/docview/2194586025
Volume | 25 |