Explaining Deep Neural Networks for Point Clouds using Gradient-based Visualisations
Format | Journal Article |
---|---|
Language | English |
Published | 26.07.2022 |
Summary: | Explaining decisions made by deep neural networks is a rapidly advancing
research topic. In recent years, several approaches have attempted to provide
visual explanations of decisions made by neural networks designed for
structured 2D image input data. In this paper, we propose a novel approach to
generate coarse visual explanations of networks designed to classify
unstructured 3D data, namely point clouds. Our method uses gradients flowing
back to the final feature-map layers and maps these values as contributions of
the corresponding points in the input point cloud. Due to the dimensionality
disagreement and lack of spatial consistency between input points and final
feature maps, our approach combines gradients with point dropping to compute
explanations of different parts of the point cloud iteratively. The generality
of our approach is tested on various point cloud classification networks,
including the 'single object' networks PointNet, PointNet++, and DGCNN, and a
'scene' network, VoteNet. Our method generates symmetric explanation maps that
highlight important regions and provide insight into the decision-making
process of network architectures. We perform an exhaustive evaluation of trust
and interpretability of our explanation method against comparative approaches
using quantitative, qualitative, and human studies. All our code is implemented
in PyTorch and will be made publicly available. |
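The gradient-plus-point-dropping idea the summary describes can be sketched in PyTorch along Grad-CAM lines: weight the final per-point feature map by the gradient of the target logit, collapse over channels to get a per-point score, then iteratively drop the most salient points and re-explain the remainder. Everything below is an illustrative stand-in, not the paper's implementation: `TinyPointNet`, `point_saliency`, and `iterative_explanation` are hypothetical names, and the toy network only mimics PointNet's shared per-point MLP with global max pooling.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Toy per-point classifier standing in for PointNet (illustrative only)."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 convolutions, PointNet-style.
        self.features = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, pts):                 # pts: (B, 3, N)
        fmap = self.features(pts)           # final per-point feature map (B, 128, N)
        logits = self.classifier(fmap.max(dim=2).values)  # global max pooling
        return logits, fmap

def point_saliency(model, pts, target_class):
    """Grad-CAM-style scores: weight the final feature map by the gradient of
    the target logit, sum over channels, ReLU, and return (B, N) point scores."""
    logits, fmap = model(pts)
    fmap.retain_grad()                      # fmap is non-leaf; keep its gradient
    logits[:, target_class].sum().backward()
    weights = fmap.grad.mean(dim=2, keepdim=True)      # per-channel importance
    return torch.relu((weights * fmap).sum(dim=1)).detach()

def iterative_explanation(model, pts, target_class, rounds=3, drop_frac=0.25):
    """Repeatedly drop the highest-saliency points and re-explain, so each
    round attributes importance to a different part of the cloud.
    Assumes batch size 1 for the index selection below."""
    remaining, per_round = pts, []
    for _ in range(rounds):
        sal = point_saliency(model, remaining, target_class)
        per_round.append(sal)
        keep = int(remaining.shape[2] * (1 - drop_frac))
        idx = sal[0].topk(keep, largest=False).indices  # keep least salient
        remaining = remaining[:, :, idx]
    return per_round
```

A minimal usage example: `iterative_explanation(TinyPointNet(), torch.randn(1, 3, 256), target_class=0, rounds=2)` returns one non-negative saliency map per round, each computed over the progressively smaller cloud left after dropping the most salient quarter of the points.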
DOI: | 10.48550/arxiv.2207.12984 |