Adversarial Training Methods for Deep Learning: A Systematic Review

Deep neural networks are vulnerable to adversarial attacks such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against such attacks. It is a training s...
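
The abstract names FGSM as an attack and adversarial training as a defense. As a rough illustration only, the sketch below shows one FGSM attack and one adversarial training step in PyTorch; the model, optimizer, pixel range [0, 1], and epsilon value are illustrative assumptions, not details taken from the reviewed paper.

```python
# Minimal sketch, assuming a PyTorch image classifier with inputs in [0, 1].
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft x_adv = clip(x + epsilon * sign(grad_x CE(model(x), y)), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One training step that fits the model on FGSM adversarial examples."""
    model.eval()                      # craft the attack without updating batch-norm stats
    x_adv = fgsm_attack(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```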

Bibliographic Details
Published in: Algorithms, Vol. 15, No. 8, p. 283
Main Authors: Zhao, Weimin; Alwidian, Sanaa; Mahmoud, Qusay H.
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.08.2022