Adversarial Attack and Defense for Transductive Support Vector Machine

Bibliographic Details
Published in: Proceedings of ... International Joint Conference on Neural Networks, pp. 1 - 8
Main Authors: Liu, Li; Chen, Haiyan; Yin, Changchun; Fang, Liming
Format: Conference Proceeding
Language: English
Published: IEEE, 30.06.2024
ISSN: 2161-4407
DOI: 10.1109/IJCNN60899.2024.10650664

Summary: As a classic semi-supervised approach, the Transductive Support Vector Machine (TSVM) has exhibited remarkable accuracy by utilizing unlabeled data. However, the robustness of TSVM against adversarial attacks remains a subject of investigation, prompting concerns about its reliability in security-critical applications. To unveil the vulnerability of TSVM, we introduce a finite-attack model specifically tailored to its characteristics, effectively manipulating its outputs. Additionally, we present Adversarial Defense-based TSVM (AD-TSVM), the first dedicated defense scheme designed for TSVM. AD-TSVM incorporates adversarial information into the optimization process, enhancing robustness by rebuilding a customized loss function and decision margin to counteract attacks. Rigorous experiments conducted on benchmark datasets demonstrate the effectiveness of AD-TSVM in significantly improving both the accuracy and stability of TSVM when confronted with adversarial attacks. This pioneering research assesses the weaknesses of TSVM and, more importantly, offers valuable insights and solutions for developing secure and trustworthy TSVM systems in the face of emerging threats.
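
For context, the summary's references to TSVM and to rebuilding its loss function can be read against the standard transductive SVM objective, which jointly optimizes the separating hyperplane and the putative labels of the unlabeled points:

\[
\min_{\mathbf{w},\,b,\;\hat{y}_1,\dots,\hat{y}_u \in \{-1,+1\}}
\;\tfrac{1}{2}\lVert\mathbf{w}\rVert^2
\;+\; C\sum_{i=1}^{\ell} \max\!\bigl(0,\,1 - y_i(\mathbf{w}^{\top}\mathbf{x}_i + b)\bigr)
\;+\; C^{*}\sum_{j=1}^{u} \max\!\bigl(0,\,1 - \hat{y}_j(\mathbf{w}^{\top}\mathbf{x}^{*}_j + b)\bigr)
\]

where the \(\ell\) labeled examples \((\mathbf{x}_i, y_i)\) and the \(u\) unlabeled examples \(\mathbf{x}^{*}_j\) are given, \(\hat{y}_j\) are the labels assigned to the unlabeled points during optimization, and \(C\), \(C^{*}\) weight the two hinge-loss terms against the margin. As a sketch of the kind of loss modification the summary alludes to, and not the paper's actual AD-TSVM formulation (which is not reproduced in this record), each clean hinge term could be replaced by its worst case over an \(\ell_2\)-bounded perturbation \(\lVert\boldsymbol{\delta}\rVert_2 \le \varepsilon\), which for a linear model has the closed form

\[
\max_{\lVert\boldsymbol{\delta}\rVert_2 \le \varepsilon}
\max\!\bigl(0,\,1 - y\,(\mathbf{w}^{\top}(\mathbf{x}+\boldsymbol{\delta}) + b)\bigr)
= \max\!\bigl(0,\,1 - y\,(\mathbf{w}^{\top}\mathbf{x} + b) + \varepsilon\lVert\mathbf{w}\rVert_2\bigr),
\]

so that robustness enters the optimization as an \(\varepsilon\lVert\mathbf{w}\rVert_2\) penalty on the margin. The perturbation budget \(\varepsilon\) and the choice of norm are assumptions made here for illustration only.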