Robust Adversarial Perturbation on Deep Proposal-based Models

Bibliographic Details
Main Authors: Li, Yuezun; Tian, Daniel; Chang, Ming-Ching; Bian, Xiao; Lyu, Siwei
Format: Journal Article
Language: English
Published: 16.09.2018

Summary: Adversarial noises are useful tools to probe the weaknesses of deep-learning-based computer vision algorithms. In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms. Our method focuses on attacking the common component in these algorithms, the Region Proposal Network (RPN), to universally degrade their performance in a black-box fashion. To do so, we design a loss function that combines a label loss and a novel shape loss, and optimize it with respect to the image using a gradient-based iterative algorithm. Evaluations are performed on the MS COCO 2014 dataset, covering adversarial attacks on six state-of-the-art object detectors and two instance segmentation algorithms. Experimental results demonstrate the efficacy of the proposed method.
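The summary describes the attack only at a high level: a loss over RPN outputs is minimized with respect to the input image by a gradient-based iterative procedure, with the perturbation kept small. The sketch below illustrates that general scheme, not the paper's actual method: a toy linear "objectness" scorer stands in for the RPN, the shape loss is omitted, and the loss is a simplified label-loss placeholder (all names here are hypothetical, chosen for illustration).

```python
import numpy as np

def toy_rpn_scores(image, W):
    """Toy stand-in for RPN objectness logits: a linear map of the image.
    A real attack would backpropagate through the actual network."""
    return W @ image

def iterative_perturbation(image, W, steps=50, alpha=0.01, eps=0.1):
    """Gradient-based iterative attack in the spirit described in the
    summary: repeatedly step the image against the gradient of a
    simplified 'label loss' (the sum of objectness scores), projecting
    the perturbation into an L-infinity ball of radius eps."""
    x = image.copy()
    # For the linear toy scorer, d(sum of scores)/d(image) is constant:
    # the column sums of W. With a real RPN this would be autograd.
    grad = W.sum(axis=0)
    for _ in range(steps):
        x = x - alpha * np.sign(grad)               # signed gradient step
        x = image + np.clip(x - image, -eps, eps)   # project into eps-ball
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
img = rng.normal(size=16)
adv = iterative_perturbation(img, W)
# Total objectness drops while the perturbation stays bounded by eps.
print(toy_rpn_scores(adv, W).sum() < toy_rpn_scores(img, W).sum())
```

The projection step is what keeps the perturbation imperceptibly small, which is the usual constraint in this line of work; the paper's actual objective additionally includes the novel shape loss, for which the full text should be consulted.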
DOI: 10.48550/arxiv.1809.05962