Adversarial attacks on deep-learning-based radar range profile target recognition

Bibliographic Details
Published in: Information Sciences, Vol. 531, pp. 159–176
Main Authors: Huang, Teng; Chen, Yongfeng; Yao, Bingjian; Yang, Bifen; Wang, Xianmin; Li, Ya
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.08.2020
Summary:
• We generate nontargeted and targeted fine-grained adversarial perturbations based on the binary search algorithm and the multiple-iteration method, respectively.
• We generate nontargeted and targeted universal adversarial perturbations (UAPs) based on the aggregation method and the scaling method, respectively.
• We verify that the UAP has transferability and generalization ability.
• We verify that the adversarial perturbation degrades recognition of HRRP data more than random noise does.

Target recognition based on the high-resolution range profile (HRRP) has long been a research hotspot in radar signal interpretation, and deep learning has become an important method for HRRP target recognition. However, recent research has shown that deep-learning-based optical image recognition is vulnerable to adversarial samples; whether deep-learning-based HRRP target recognition can be attacked in the same way has remained an open question. This paper proposes four methods for generating adversarial perturbations. Algorithm 1 generates a nontargeted fine-grained perturbation via binary search. Algorithm 2 generates a targeted fine-grained perturbation via a multiple-iteration method. Algorithm 3 generates a nontargeted universal adversarial perturbation (UAP) by aggregating several fine-grained perturbations. Algorithm 4 generates a targeted UAP by scaling a single fine-grained perturbation. These perturbations are used to craft adversarial samples that attack deep-learning-based HRRP target recognition under both white-box and black-box settings. Experiments on actual radar data show that the HRRP adversarial samples mount effective attacks, so deep-learning-based HRRP target recognition carries potential security risks.
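The abstract does not give the algorithms themselves, but the core idea behind Algorithm 1 can be illustrated with a minimal sketch: binary-search the smallest perturbation scale that flips a classifier's prediction. Everything here is an assumption for illustration — `predict` is a toy linear stand-in for the paper's deep network, and the perturbation `direction` is taken as given, whereas the paper presumably derives it from the model itself.

```python
import numpy as np

def predict(weights, x):
    """Toy stand-in classifier: argmax of a linear score over an HRRP vector.
    (Hypothetical; the paper attacks a deep network, not a linear model.)"""
    return int(np.argmax(weights @ x))

def nontargeted_binary_search(weights, x, direction, lo=0.0, hi=1.0, iters=20):
    """Binary-search the smallest scale s in [lo, hi] such that
    x + s * direction changes the predicted label.

    Invariant maintained each iteration: a perturbation of scale `hi`
    flips the label, while scale `lo` does not."""
    orig = predict(weights, x)
    if predict(weights, x + hi * direction) == orig:
        return None  # even the largest allowed scale does not flip the label
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predict(weights, x + mid * direction) == orig:
            lo = mid  # too small: prediction unchanged, search upper half
        else:
            hi = mid  # label flipped: try an even smaller perturbation
    return hi  # smallest label-flipping scale, to within (hi - lo) / 2**iters
```

For example, with identity `weights`, input `x = [1, 0]`, and `direction = [-2, 2]`, the prediction flips exactly when the scale exceeds 0.25, and the search converges to that boundary. The paper's targeted variant (Algorithm 2) would instead iterate until a chosen target label is produced, which a single binary search over one direction cannot do in general.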
ISSN: 0020-0255
EISSN: 1872-6291
DOI: 10.1016/j.ins.2020.03.066