Layer-Wise Relevance Propagation for Explainable Deep Learning Based Speech Recognition
Published in | 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 168 - 174 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.12.2018 |
DOI | 10.1109/ISSPIT.2018.8642691 |
Summary: | We develop a framework for incorporating explanations in a deep learning based speech recognition model. The most cited criticism against deep learning based methods across domains is the non-interpretability of the model: the model in itself provides little or no insight into which features of the input are most responsible for its predictions. Layer-wise relevance propagation is an emerging technique for explaining the predictions of deep neural networks. It has shown great success in computer vision applications, but to the best of our knowledge it has not been applied in a speech recognition setup. In this paper, we develop a bi-directional GRU based speech recognition model in such a way that layer-wise relevance propagation can be suitably applied to explain the recognition task. We show through simulation results that the benefit of explainability does not compromise the model's speech recognition accuracy. |
---|---|
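For context on the technique named in the summary: layer-wise relevance propagation (LRP) takes a trained network's prediction score and redistributes it backwards, layer by layer, onto the inputs in proportion to each neuron's contribution. The NumPy sketch below illustrates the commonly used epsilon rule for a single dense layer only; it is an illustrative assumption, not the paper's bi-directional GRU formulation, and the function name `lrp_epsilon_dense` and all shapes are hypothetical.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a dense layer's outputs back to its
    input activations a, using the LRP epsilon rule.

    a      : (d_in,)        input activations of the layer
    W      : (d_in, d_out)  weight matrix
    b      : (d_out,)       bias
    R_out  : (d_out,)       relevance assigned to the layer's outputs
    returns: (d_in,)        relevance assigned to the layer's inputs
    """
    z = a @ W + b                              # forward pre-activations z_k
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabilizer avoids division by zero
    s = R_out / z                              # relevance per unit of pre-activation
    return a * (W @ s)                         # R_j = sum_k a_j * W_jk * R_k / z_k

# Toy usage: relevance of a 3-unit output pushed back onto a 4-dimensional input.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
R_out = np.array([0.2, 0.5, 0.3])
R_in = lrp_epsilon_dense(a, W, b, R_out)
print(R_in, R_in.sum())  # relevance is approximately conserved across the layer
```

Applied recursively from the output layer back to the input, this kind of rule yields a relevance score per input feature (for speech, typically per time-frequency bin), which is what makes the model's prediction interpretable.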