Adaptive Meta-learner via Gradient Similarity for Few-shot Text Classification
Format: Journal Article
Language: English
Published: 10.09.2022
Summary: Few-shot text classification aims to classify text when only a few labeled samples are available. Most previous methods adopt optimization-based meta-learning to capture the task distribution. However, because they neglect the mismatch between the small number of samples and the complexity of the models, as well as the distinction between useful and useless task features, these methods suffer from overfitting. To address this issue, we propose a novel Adaptive Meta-learner via Gradient Similarity (AMGS) method to improve the model's ability to generalize to new tasks. Specifically, AMGS alleviates overfitting in two ways: (i) it acquires potential semantic representations of samples and improves model generalization through a self-supervised auxiliary task in the inner loop, and (ii) it leverages the adaptive meta-learner via gradient similarity to constrain the gradients obtained by the base-learner in the outer loop. Moreover, we systematically analyze the influence of regularization on the entire framework. Experimental results on several benchmarks demonstrate that AMGS consistently improves few-shot text classification performance compared with state-of-the-art optimization-based meta-learning approaches.
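The abstract describes the outer-loop constraint only at a high level, so the following is a minimal first-order sketch of one plausible reading: per-task query gradients produced by the base-learner are gated by their cosine similarity to the mean gradient direction across tasks before the meta-update. The helper names (`flat_grads`, `meta_train_step`) and the clamped-cosine weighting rule are illustrative assumptions, not the paper's exact formulation, and the self-supervised auxiliary task is omitted for brevity.

```python
import copy

import torch
import torch.nn.functional as F


def flat_grads(params):
    """Concatenate per-parameter gradients into a single 1-D vector."""
    return torch.cat([p.grad.reshape(-1) for p in params])


def meta_train_step(model, tasks, inner_lr=1e-2, outer_lr=1e-3):
    """One first-order meta-update with a gradient-similarity gate.

    `tasks` is a list of (support_x, support_y, query_x, query_y) tuples.
    Hypothetical reading of the outer-loop constraint: each task's query
    gradient contributes in proportion to its cosine similarity with the
    mean gradient across tasks; negatively aligned gradients are dropped.
    """
    task_grads = []
    for sx, sy, qx, qy in tasks:
        learner = copy.deepcopy(model)  # per-task base-learner
        # Inner loop: one adaptation step on the support set.
        F.cross_entropy(learner(sx), sy).backward()
        with torch.no_grad():
            for p in learner.parameters():
                p -= inner_lr * p.grad
        # Outer-loop signal: query-set gradient of the adapted learner.
        learner.zero_grad()
        F.cross_entropy(learner(qx), qy).backward()
        task_grads.append(flat_grads(learner.parameters()))

    mean_grad = torch.stack(task_grads).mean(dim=0)
    gated = torch.zeros_like(mean_grad)
    for g in task_grads:
        # Clamped cosine similarity acts as a per-task weight in [0, 1].
        w = F.cosine_similarity(g, mean_grad, dim=0).clamp(min=0.0)
        gated += w * g
    gated /= len(task_grads)

    # Apply the gated meta-gradient to the meta-parameters.
    with torch.no_grad():
        offset = 0
        for p in model.parameters():
            n = p.numel()
            p -= outer_lr * gated[offset:offset + n].view_as(p)
            offset += n
```

In a fuller implementation the inner-loop loss would also include the self-supervised auxiliary objective mentioned in the summary; the gate above only illustrates how a gradient-similarity measure can filter out task-level updates that conflict with the shared direction.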
DOI: 10.48550/arxiv.2209.04702