Can Edge Probing Tasks Reveal Linguistic Knowledge in QA Models?
Format | Journal Article |
Language | English |
Published | 15.09.2021 |
Summary: | There have been many efforts to understand what grammatical knowledge
(e.g., the ability to identify the part of speech of a token) is encoded in
large pre-trained language models (LMs). This is done through 'Edge Probing'
(EP) tests: supervised classification tasks that predict a grammatical property
of a span (e.g., whether it has a particular part of speech) using only the
token representations coming from the LM encoder. However, most NLP
applications fine-tune these LM encoders for specific tasks. Here, we ask: if
an LM is fine-tuned, does the encoding of linguistic information in it change,
as measured by EP tests? Specifically, we focus on the task of Question
Answering (QA) and conduct experiments on multiple datasets. We find that EP
test results do not change significantly when the fine-tuned model performs
well, or even in adversarial situations where the model is forced to learn
wrong correlations. From a similar finding, some recent papers conclude that
fine-tuning does not change linguistic knowledge in encoders, but they do not
provide an explanation. We find that EP models themselves are susceptible to
exploiting spurious correlations in the EP datasets. When this dataset bias is
corrected, we do see an improvement in EP test results, as expected. |
DOI: | 10.48550/arxiv.2109.07102 |
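The summary describes an EP test as a small supervised classifier trained on frozen token representations from the LM encoder. Below is a minimal sketch of such a probe in PyTorch/HuggingFace, assuming a POS-tagging EP task. The probe architecture (`SpanProbe` with mean-pooling over the span and a two-layer classifier) and names like `NUM_POS_TAGS` are illustrative assumptions, not the paper's exact setup.

```python
# Minimal edge-probing sketch, assuming PyTorch and HuggingFace transformers.
# Only the small probe on top of the encoder is trainable; the encoder itself
# stays frozen, which is what makes this a probe rather than fine-tuning.
# SpanProbe and NUM_POS_TAGS are hypothetical names for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_POS_TAGS = 17  # e.g., the Universal Dependencies coarse POS tag set

class SpanProbe(nn.Module):
    """Predict a grammatical label (here: POS) for a token span
    using only frozen LM token representations."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.Tanh(),
            nn.Linear(256, num_labels),
        )

    def forward(self, token_reprs: torch.Tensor, span: tuple) -> torch.Tensor:
        # Mean-pool the token representations inside the span, then classify.
        start, end = span
        pooled = token_reprs[:, start:end, :].mean(dim=1)
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # frozen encoder: EP probes it, it is never updated

probe = SpanProbe(encoder.config.hidden_size, NUM_POS_TAGS)

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():  # no gradients flow into the encoder
    token_reprs = encoder(**inputs).last_hidden_state

# Probe the span covering "cat" (token index 0 is [CLS], 1 is "the", 2 is "cat").
logits = probe(token_reprs, span=(2, 3))
print(logits.shape)  # torch.Size([1, 17])
```

The design point is that, because the encoder is frozen, probe accuracy reflects only what is already extractable from the representations. To compare pre-trained and fine-tuned models as the paper does, one would load a QA-fine-tuned encoder in place of `bert-base-uncased` and train an identical probe on the same EP dataset.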