Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 24.10.2020 |
Subjects | |
Online Access | Get full text |
Summary: Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
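The record does not include the paper's code, but the kind of "simple heuristic" perturbation the abstract mentions can be sketched as a relation-swapping loop that keeps only changes preserving downstream performance. This is a minimal illustrative sketch, not the paper's actual procedure (which also trains a reinforcement learning policy); the names `perturb_kg` and `eval_fn`, the triple representation, and the accept/revert strategy are all assumptions for illustration.

```python
import random

def perturb_kg(triples, num_swaps, eval_fn, tolerance=0.01):
    """Heuristically perturb a KG while preserving downstream performance.

    Hypothetical interface: `triples` is a list of (head, relation, tail)
    tuples, and `eval_fn` maps a triple list to a downstream task score
    (e.g., QA accuracy of a KG-augmented model).
    """
    kg = list(triples)
    baseline = eval_fn(kg)
    for _ in range(num_swaps):
        # Pick two distinct edges and swap their relation types.
        # This changes the KG's semantics while leaving node degrees intact.
        i, j = random.sample(range(len(kg)), 2)
        (h1, r1, t1), (h2, r2, t2) = kg[i], kg[j]
        kg[i], kg[j] = (h1, r2, t1), (h2, r1, t2)
        # Revert the swap if it hurts downstream performance beyond tolerance,
        # so the perturbed KG stays deceptively useful to the model.
        if eval_fn(kg) < baseline - tolerance:
            kg[i], kg[j] = (h1, r1, t1), (h2, r2, t2)
    return kg
```

Under this sketch, the returned KG scores within `tolerance` of the original on the downstream task even though many of its facts are now semantically wrong, which is the mismatch the paper uses to question attention-based KG explanations.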
DOI: 10.48550/arxiv.2010.12872