Exploiting all samples in low-resource sentence classification: early stopping and initialization parameters

Bibliographic Details
Published in: IEEE Access, Vol. 11, p. 1
Main Authors: Choi, HongSeok; Lee, Hyunju
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023

Summary: To improve deep-learning performance in low-resource settings, many researchers have redesigned model architectures or applied additional data (e.g., external resources, unlabeled samples). However, there has been relatively little discussion of how to make good use of small amounts of labeled samples, although doing so is potentially beneficial and should come before applying additional data or redesigning models. In this study, we assume a low-resource setting in which only a few labeled samples (i.e., 30-100 per class) are available, and we discuss how to exploit them without additional data or model redesigns. We explore possible approaches along three aspects: training-validation splitting, early stopping, and weight initialization. Extensive experiments are conducted on six public sentence classification datasets. Performance on various evaluation metrics (e.g., accuracy, loss, and calibration error) varied significantly depending on how the approaches from the three aspects were combined. Based on the results, we propose an integrated method: initialize the model with a weight averaging method and use a non-validation stopping method to train on all samples. This simple integrated method consistently outperforms competitive methods; for example, its average accuracy across the six datasets was 1.8% higher than that of conventional validation-based methods. In addition, the integrated method further improves performance when adapted to several state-of-the-art models that use additional data or redesign the network architecture (e.g., self-training and enhanced structural models). Our results highlight the importance of the training strategy and suggest that the integrated method can serve as a first step in low-resource settings. This study provides empirical knowledge that will be helpful for future work on low-resource data. Our code is publicly available at https://github.com/DMCB-GIST/exploit_all_samples.
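
The abstract does not specify the exact weight averaging scheme or the non-validation stopping criterion; the authors' implementation is in the repository linked above. As a rough illustration only, the sketch below (plain PyTorch; every name and parameter is hypothetical) assumes "weight averaging" means element-wise averaging of parameters across differently seeded models, and "non-validation stopping" means halting when the training loss plateaus, so that no labeled samples are held out for validation:

    # Illustrative sketch only -- NOT the authors' implementation.
    import copy
    import torch
    import torch.nn as nn

    def average_state_dicts(state_dicts):
        # Element-wise mean of each parameter tensor across models,
        # used here as a weight-averaged initialization.
        avg = copy.deepcopy(state_dicts[0])
        for key in avg:
            avg[key] = torch.stack(
                [sd[key].float() for sd in state_dicts]).mean(dim=0)
        return avg

    def train_on_all_samples(model, loader, max_epochs=50,
                             patience=3, tol=1e-4, lr=2e-5):
        # No train/validation split: every labeled sample is used for
        # training, and stopping is driven by the training loss alone.
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        best, stale = float("inf"), 0
        for _ in range(max_epochs):
            total = 0.0
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                opt.step()
                total += loss.item() * y.size(0)
            epoch_loss = total / len(loader.dataset)
            if best - epoch_loss > tol:
                best, stale = epoch_loss, 0   # still improving
            else:
                stale += 1                    # plateau
                if stale >= patience:
                    break
        return model

    # Hypothetical usage: average five differently seeded models to
    # build the initialization, then train on the full labeled set.
    # init = average_state_dicts([make_model(seed=s).state_dict()
    #                             for s in range(5)])
    # model.load_state_dict(init)
    # train_on_all_samples(model, full_train_loader)

The paper may define both components differently; this sketch only conveys the shape of the training strategy (averaged initialization, then training on all samples without a validation set).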
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3261884