LRNAS: Differentiable Searching for Adversarially Robust Lightweight Neural Architecture

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 3, pp. 5629-5643
Main Authors: Feng, Yuqi; Lv, Zeqiong; Chen, Hongyang; Gao, Shangce; An, Fengping; Sun, Yanan
Format: Journal Article
Language: English
Published: United States: Institute of Electrical and Electronics Engineers (IEEE), 01.03.2025
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2024.3382724

Summary: Adversarial robustness is critical for deep neural networks (DNNs) in deployment, but improving it often requires compromising on network size. Existing approaches to this problem mainly combine model compression with adversarial training; their performance, however, relies heavily on the neural architecture, which is typically designed manually with extensive expertise. In this article, we propose a lightweight and robust neural architecture search (LRNAS) method to automatically search for adversarially robust lightweight neural architectures. Specifically, we propose a novel search strategy that quantifies the contributions of the components in the search space, from which the beneficial components can be determined. In addition, we propose an architecture selection method based on a greedy strategy, which keeps the model size small while retaining sufficient beneficial components. Owing to these designs, LRNAS jointly guarantees lightness, natural accuracy, and adversarial robustness in the searched architectures. We conduct extensive experiments on various benchmark datasets against state-of-the-art methods. The experimental results demonstrate that LRNAS is superior at finding lightweight neural architectures that are both accurate and adversarially robust under popular adversarial attacks. Moreover, ablation studies reveal the validity of the individual components designed in LRNAS and their positive effects on the overall performance.
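The greedy architecture-selection idea described in the summary can be sketched roughly as follows. This is an illustrative assumption, not the paper's actual procedure: the function name `greedy_select`, the contribution scores, the parameter costs, and the contribution-per-parameter ranking are all hypothetical stand-ins for whatever quantification LRNAS uses internally.

```python
# Hypothetical sketch: greedily keep the components (candidate operations)
# with the best contribution scores while staying under a parameter budget,
# in the spirit of "keeping the model size while deriving sufficient
# beneficial components". Scores, costs, and op names are made up.

def greedy_select(components, budget):
    """components: list of (name, contribution, param_cost); budget: max params."""
    chosen, used = [], 0
    # Rank candidates by contribution per parameter, most beneficial first;
    # max(cost, 1) avoids division by zero for parameter-free ops like skips.
    for name, contrib, cost in sorted(
        components, key=lambda c: c[1] / max(c[2], 1), reverse=True
    ):
        # Keep only components that help and still fit in the budget.
        if contrib > 0 and used + cost <= budget:
            chosen.append(name)
            used += cost
    return chosen, used

# Toy candidate pool (contribution score, parameter cost in arbitrary units).
ops = [
    ("sep_conv_3x3", 0.9, 40),
    ("dil_conv_5x5", 0.5, 60),
    ("skip_connect", 0.3, 0),   # parameter-free
    ("max_pool_3x3", -0.1, 1),  # harmful: negative contribution, never kept
]

chosen, used = greedy_select(ops, budget=80)
```

With a budget of 80, the zero-cost skip and the high-contribution 3x3 separable convolution are kept, the 5x5 dilated convolution no longer fits, and the negatively scored pooling op is rejected outright.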