Enabling On-Device Self-Supervised Contrastive Learning with Selective Data Contrast


Bibliographic Details
Published in: 2021 58th ACM/IEEE Design Automation Conference (DAC), pp. 655 - 660
Main Authors: Wu, Yawen; Wang, Zhepeng; Zeng, Dewen; Shi, Yiyu; Hu, Jingtong
Format: Conference Proceeding
Language: English
Published: IEEE, 05.12.2021
DOI: 10.1109/DAC18074.2021.9586228


More Information
Summary: After a model is deployed on edge devices, it is desirable for these devices to learn from unlabeled data to continuously improve accuracy. Contrastive learning has demonstrated great potential for learning from unlabeled data. However, the online input data are usually non-independent and identically distributed (non-iid), and edge devices' storage is usually too limited to hold enough representative data from different data classes. We propose a framework that automatically selects the most representative data from the unlabeled input stream, requiring only a small data buffer for dynamic learning. Experiments show that accuracy and learning speed are greatly improved.