Enhancing Few-shot NER with Prompt Ordering based Data Augmentation
Format | Journal Article |
---|---|
Language | English |
Published | 19.05.2023 |
Summary: Recently, data augmentation (DA) methods have proven effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER). However, conventional NER DA methods are mostly aimed at sequence labeling models, i.e., token-level classification, and few are compatible with unified autoregressive generation frameworks, which can handle a wider range of NER tasks, such as nested NER. Furthermore, these generation frameworks make the strong assumption that entities appear in the target sequence in the same left-to-right order as in the source sequence. In this paper, we argue that this strict order need not be kept, and that more diversified yet reasonable target entity sequences can be provided during training as a novel DA method. Nevertheless, a naive mixture of augmented data can confuse the model, since one source sequence would then be paired with several different target sequences. We therefore propose a simple but effective Prompt Ordering based Data Augmentation (PODA) method to improve the training of unified autoregressive generation frameworks in few-shot NER scenarios. Experimental results on three public NER datasets and further analyses demonstrate the effectiveness of our approach.
DOI | 10.48550/arxiv.2305.11791 |
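The core idea described in the abstract — pairing each permuted target entity sequence with an order-indicating prompt, so that a single source sentence never maps ambiguously to multiple targets — can be illustrated with a minimal Python sketch. The prompt format, the `span is type` target template, and the function name below are illustrative assumptions for exposition, not PODA's actual templates:

```python
from itertools import permutations

def augment_targets(source, entities):
    """Build one augmented (source, target) pair per entity ordering.

    An order-describing prompt is prepended to the source so each
    augmented source maps to exactly one target sequence (hypothetical
    format; the paper's actual prompt templates may differ).
    """
    pairs = []
    for order in permutations(entities):
        # The prompt names the entity types in the chosen order, which
        # disambiguates otherwise identical sources during training.
        prompt = "order: " + " ".join(etype for _, etype in order)
        target = " ; ".join(f"{span} is {etype}" for span, etype in order)
        pairs.append((f"{prompt} | {source}", target))
    return pairs

pairs = augment_targets(
    "Obama visited Paris",
    [("Obama", "person"), ("Paris", "location")],
)
```

With two entities this yields two training pairs, one per ordering, each with a distinct prompted source — a naive mixture without the prompt would instead pair the same source with conflicting targets.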