Exploring Public Data Vulnerabilities in Semi-Supervised Learning Models through Gray-box Adversarial Attack

Bibliographic Details
Published in: Electronics (Basel), Vol. 13, No. 5, p. 940
Main Authors: Jo, Junhyung; Kim, Joongsu; Suh, Young-Joo
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.03.2024

Summary: Semi-supervised learning (SSL) models, integrating labeled and unlabeled data, have gained prominence in vision-based tasks, yet their susceptibility to adversarial attacks remains underexplored. This paper unveils the vulnerability of SSL models to gray-box adversarial attacks—a scenario where the attacker has partial knowledge of the model. We introduce an efficient attack method, Gray-box Adversarial Attack on Semi-supervised learning (GAAS), which exploits the dependency of SSL models on publicly available labeled data. Our analysis demonstrates that even with limited knowledge, GAAS can significantly undermine the integrity of SSL models across various tasks, including image classification, object detection, and semantic segmentation, with minimal access to labeled data. Through extensive experiments, we exhibit the effectiveness of GAAS, comparing it to white-box attack scenarios and underscoring the critical need for robust defense mechanisms. Our findings highlight the potential risks of relying on public datasets for SSL model training and advocate for the integration of adversarial training and other defense strategies to safeguard against such vulnerabilities.
ISSN: 2079-9292
DOI: 10.3390/electronics13050940
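
The record above contains only the abstract, so the exact GAAS procedure is not reproduced here. As a rough, hypothetical illustration of the gray-box setting the summary describes, the sketch below assumes the attacker fits a surrogate classifier on the same public labeled data the SSL model consumed, crafts standard PGD perturbations against that surrogate, and measures how often they transfer to the separately trained victim SSL model. The pgd_attack and gray_box_attack helpers, the surrogate, the data loader, and the epsilon/step values are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the paper's GAAS algorithm): craft adversarial examples
# on an attacker-side surrogate trained on the public labeled subset, then
# test whether they transfer to a separately trained victim SSL model.
import torch
import torch.nn.functional as F


def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD against the surrogate model (assumed values)."""
    model.eval()
    adv = images.clone().detach() + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball around the clean images.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv.detach()


def gray_box_attack(surrogate, victim_ssl_model, public_loader, device="cpu"):
    """Transfer rate of surrogate-crafted examples against the victim SSL model."""
    fooled, total = 0, 0
    for images, labels in public_loader:
        images, labels = images.to(device), labels.to(device)
        adv = pgd_attack(surrogate, images, labels)
        with torch.no_grad():
            preds = victim_ssl_model(adv).argmax(dim=1)
        fooled += (preds != labels).sum().item()
        total += labels.size(0)
    return fooled / max(total, 1)
```

PGD is used here only as a familiar stand-in for whatever perturbation method the paper employs; the point of the sketch is the threat model, in which access to the public labeled data alone is enough to build a transferable attack without querying the victim model's weights.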