Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Published in: Machine Learning and Knowledge Extraction, Vol. 3, No. 4, pp. 966-989
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.12.2021
Summary: Deep Learning is a state-of-the-art technique for making inferences from extensive or complex data. As black box models, owing to their multilayer nonlinear structure, Deep Neural Networks are often criticized as non-transparent, and their predictions are not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. When decision-making algorithms are based on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to this lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between the input and the output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks applied to Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We identify drawbacks and gaps and summarize further research directions.
ISSN: 2504-4990
DOI: 10.3390/make3040048
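
The abstract describes explainers as analyzing the connection between a network's input and its output. One classic instance of that idea is occlusion sensitivity: perturb a region of the input image and watch how the class score changes. The sketch below is purely illustrative and not taken from the surveyed paper; `predict`, the patch size, and the fill value are assumptions, with `predict` standing in for any image classifier that maps a batch of images to class scores.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=16, stride=16, fill=0.0):
    """Occlusion-sensitivity sketch: slide a patch over the image and record
    how much the target-class score drops when that region is hidden.

    predict: assumed callable mapping a (N, H, W, C) batch to (N, num_classes) scores.
    Returns a coarse heat map; larger values mark regions the model relies on more.
    """
    h, w = image.shape[:2]
    base = predict(image[None])[0, target_class]  # score on the unmodified image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            heat[i, j] = base - predict(occluded[None])[0, target_class]
    return heat
```

Under these assumptions, the heat map exposes the input-output dependence without any access to the network's internals, which is exactly the black-box setting the survey's explainers operate in.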