Interpreting Multilayer Perceptrons Using 3-Valued Activation Function
Published in: 2017 3rd IEEE International Conference on Cybernetics (CYBCONF), pp. 1 - 6
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2017
Summary: Multilayer perceptrons (MLPs) have been successfully applied to many problems, but in most cases they are used as black boxes that are not interpretable. That is, even if an MLP provides correct answers, we cannot understand why it makes those decisions. In this study, we try to interpret a single-hidden-layer MLP by discretizing the hidden neuron outputs into three values (e.g. -1, 0, and 1), corresponding to false, unknown, and true, respectively. The basic process is: (1) train an MLP; (2) discretize the hidden neurons; (3) retrain the output layer of the MLP; (4) add more hidden neurons if needed; and (5) induce a decision tree based on the hidden neuron outputs. Experiments on several public datasets show that the proposed method is feasible for acquiring interpretable knowledge.
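The core of the five-step process is mapping each hidden activation to one of three truth values. A minimal sketch of that discretization step, assuming tanh-like activations in [-1, 1] and an illustrative threshold (the exact rule and threshold are not given in this record):

```python
def discretize(activations, threshold=0.5):
    """Map each hidden activation to -1 (false), 0 (unknown), or 1 (true).

    The 0.5 threshold is an assumption for illustration, not a value
    taken from the paper; activations near zero are treated as unknown.
    """
    return [1 if a > threshold else -1 if a < -threshold else 0
            for a in activations]

# Example: three hidden neurons after a forward pass
print(discretize([0.93, 0.12, -0.78]))  # → [1, 0, -1]
```

After this step, the discretized hidden vectors can serve directly as symbolic features for retraining the output layer and for inducing the decision tree described in steps (3) and (5).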
DOI: 10.1109/CYBConf.2017.7985786