Optimizing Scarce Labeled Data Usage in Online Continual Learning via Enhanced Complementary Systems


Bibliographic Details
Published in: 2025 8th International Symposium on Big Data and Applied Statistics (ISBDAS), pp. 739-743
Main Authors: Zeng, Shiwu; Lin, Chenyu; Chen, Shuaibo; An, Puping; Wu, Zhiyu; Ke, Zunwang
Format: Conference Proceeding
Language: English
Published: IEEE, 28.02.2025

Summary: In recent years, although deep learning has made significant progress, existing models often degrade markedly when data annotation is limited. This problem is especially pronounced when trying to reduce labeling cost, and it has become an important bottleneck limiting the generalization ability and practical application of models. To address this challenge, we propose a series of methods to improve model performance with a small amount of labeled data. First, we design a pseudo-label generation mechanism based on the prediction consistency of a fast learner and a slow learner: a cross-entropy loss evaluates category consistency and an IoU threshold measures bounding-box consistency, so a fast learner's pseudo-label is accepted only when the two models agree closely on both category and location predictions, thereby improving pseudo-label quality. Second, to mitigate model forgetting and balance the learning of old and new knowledge, a Dynamic-EMA strategy is adopted: the KL divergence is used to dynamically adjust the EMA rate, allowing the model to adapt quickly to new information while retaining historical knowledge. Experimental results show that our method is significantly superior in both high and low annotation-cost scenarios, demonstrating its potential and value in reducing annotation dependency.
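The dual-consistency acceptance test described in the summary can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`accept_pseudo_label`, `cross_entropy`, `iou`) and the thresholds `ce_max` and `iou_min` are assumptions chosen for the example; the paper does not specify its threshold values.

```python
import math

def cross_entropy(p, q, eps=1e-12):
    # Cross-entropy of the slow learner's class distribution q
    # against the fast learner's distribution p (low = consistent).
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

def iou(box_a, box_b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def accept_pseudo_label(fast_probs, slow_probs, fast_box, slow_box,
                        ce_max=0.5, iou_min=0.7):
    # Accept the fast learner's pseudo-label only when BOTH the class
    # predictions and the box predictions of the two learners agree.
    return (cross_entropy(fast_probs, slow_probs) <= ce_max
            and iou(fast_box, slow_box) >= iou_min)
```

For example, near-identical class distributions with heavily overlapping boxes pass the test, while disjoint boxes are rejected even when the class predictions agree.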
DOI:10.1109/ISBDAS64762.2025.11117007
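The Dynamic-EMA idea can likewise be sketched. The summary only states that KL divergence adjusts the EMA rate; the exponential schedule below (larger fast/slow disagreement lowers the decay so the slow learner absorbs new information faster) is one plausible scheme, and `base_decay`, `sensitivity`, and all function names are illustrative assumptions.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dynamic_ema_decay(p_fast, p_slow, base_decay=0.999, sensitivity=1.0):
    # Assumed schedule: when the two learners disagree (large KL),
    # shrink the decay so the slow learner adapts more quickly;
    # when they agree (KL near 0), keep the decay near its base value
    # to preserve historical knowledge.
    kl = kl_divergence(p_fast, p_slow)
    return base_decay * math.exp(-sensitivity * kl)

def ema_update(slow_params, fast_params, decay):
    # Standard EMA step: slow <- decay * slow + (1 - decay) * fast.
    return [decay * s + (1.0 - decay) * f
            for s, f in zip(slow_params, fast_params)]
```

With matching predictions the decay stays at its base value (stability); with divergent predictions it drops, pulling the slow learner toward the fast one (plasticity).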