Adaptive Normalized Attacks for Learning Adversarial Attacks and Defenses in Power Systems


Bibliographic Details
Published in: 2019 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), pp. 1 - 6
Main Authors: Tian, Jiwei; Li, Tengyao; Shang, Fute; Cao, Kunrui; Li, Jing; Ozay, Mete
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2019

Summary: The vulnerability of various machine learning methods to adversarial examples has recently been explored in the literature. Power systems that rely on these vulnerable methods face a serious threat from adversarial examples. To this end, we first propose a more accurate and computationally efficient method, called Adaptive Normalized Attack (ANA), to attack power systems by generating adversarial examples. We then adopt adversarial training to defend against such attacks. Experimental analyses demonstrate that our attack method requires smaller perturbations than the state-of-the-art FGSM (Fast Gradient Sign Method) and DeepFool, while achieving a higher misclassification rate of the learning methods used in power systems. In addition, the results show that the proposed adversarial training improves the robustness of power systems to adversarial examples compared to state-of-the-art methods.
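For context, the FGSM baseline cited in the summary perturbs an input in the direction of the sign of the loss gradient, and adversarial training retrains the model on such perturbed inputs. The sketch below is a minimal, generic PyTorch illustration of these two standard techniques, not the authors' ANA method; the classifier over power-system measurement features and the perturbation budget `epsilon` are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon):
    """Generate an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each feature by epsilon along the sign of the loss gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.05):
    """One standard adversarial-training step on FGSM-perturbed inputs.

    epsilon is a hypothetical perturbation budget, not a value from the paper.
    """
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```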
DOI:10.1109/SmartGridComm.2019.8909713