Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising
Main Authors:
Format: Journal Article
Language: English
Published: 14.06.2022
Summary: Neural network policies trained using Deep Reinforcement Learning (DRL) are well-known to be susceptible to adversarial attacks. In this paper, we consider attacks manifesting as perturbations in the observation space managed by the external environment. These attacks have been shown to downgrade policy performance significantly. We focus our attention on well-trained deterministic and stochastic neural network policies in the context of continuous control benchmarks subject to four well-studied observation space adversarial attacks. To defend against these attacks, we propose a novel defense strategy using a detect-and-denoise schema. Unlike previous adversarial training approaches that sample data in adversarial scenarios, our solution does not require sampling data in an environment under attack, thereby greatly reducing risk during training. Detailed experimental results show that our technique is comparable with state-of-the-art adversarial training approaches.
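The abstract does not specify how the detect-and-denoise schema is implemented, so the following is only a minimal sketch of one plausible instantiation: an autoencoder-style denoiser trained on attack-free observations, with detection based on reconstruction error. All class names, the architecture, and the threshold (`ObservationDenoiser`, `DetectAndDenoisePolicy`, `threshold=0.05`) are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a detect-and-denoise observation filter for a DRL policy.
# Assumes a denoising autoencoder trained only on clean rollouts; nothing here is
# taken from the paper itself.
import torch
import torch.nn as nn


class ObservationDenoiser(nn.Module):
    """Autoencoder mapping a (possibly perturbed) observation back toward the
    clean-observation manifold learned from attack-free trajectories."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim // 2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim // 2, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(obs))


class DetectAndDenoisePolicy:
    """Wraps a trained policy: flag an observation as adversarial when the
    denoiser's reconstruction error exceeds a threshold calibrated on clean
    data, and feed the denoised observation to the policy in that case."""

    def __init__(self, policy: nn.Module, denoiser: ObservationDenoiser,
                 threshold: float):
        self.policy = policy
        self.denoiser = denoiser
        self.threshold = threshold  # e.g. a high percentile of clean-data error

    @torch.no_grad()
    def act(self, obs: torch.Tensor) -> torch.Tensor:
        reconstructed = self.denoiser(obs)
        error = torch.mean((reconstructed - obs) ** 2).item()
        # Detection step: large reconstruction error suggests a perturbed input.
        filtered_obs = reconstructed if error > self.threshold else obs
        return self.policy(filtered_obs)


if __name__ == "__main__":
    obs_dim, act_dim = 17, 6  # dimensions of a MuJoCo-style continuous-control task
    policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
    denoiser = ObservationDenoiser(obs_dim)
    agent = DetectAndDenoisePolicy(policy, denoiser, threshold=0.05)
    action = agent.act(torch.randn(obs_dim))
    print(action.shape)
```

Because both the denoiser and the detection threshold are fit on clean trajectories only, a wrapper of this form never requires rolling out the policy in an environment under attack, which is the property the abstract highlights relative to adversarial training.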
DOI: 10.48550/arxiv.2206.07188