Evaluating Adversarial Robustness on Document Image Classification

Bibliographic Details
Main Authors: Fronteau, Timothée; Paran, Arnaud; Shabou, Aymen
Format: Journal Article
Language: English
Published: 24.04.2023

Summary: Adversarial attacks and defenses for computer vision systems have attracted increasing interest in recent years, but most investigations to date are limited to natural images. However, many artificial intelligence models actually handle document data, which differ markedly from real-world images. Hence, in this work we apply the adversarial-attack methodology to both documentary and natural data and protect models against such attacks. We focus on untargeted gradient-based, transfer-based, and score-based attacks, and evaluate the impact of adversarial training, JPEG input compression, and grey-scale input transformation on the robustness of the ResNet50 and EfficientNetB0 model architectures. To the best of our knowledge, no such work has been conducted by the community to study the impact of these attacks on the document image classification task.
DOI: 10.48550/arxiv.2304.12486
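The untargeted gradient-based attacks and grey-scale input defense mentioned in the summary can be illustrated with a minimal sketch. The snippet below is not the paper's setup: it applies a one-step FGSM-style perturbation to a toy logistic-regression "classifier" with an analytically computed gradient (the weights, epsilon, and input are illustrative assumptions), and shows a simple channel-averaging grey-scale transform of the kind used as input preprocessing.

```python
import numpy as np

def fgsm_untargeted(x, y, w, b, eps):
    """One-step untargeted FGSM on a binary logistic-regression model:
    move x by eps * sign(grad_x loss) to increase the loss on label y.
    (Toy stand-in for a gradient-based attack on a deep classifier.)"""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))       # sigmoid confidence for class 1
    grad_x = (p - y) * w               # d(cross-entropy)/dx, y in {0, 1}
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)    # keep "pixels" in a valid range

def to_greyscale(img):
    """Simple grey-scale input transform: average the channel axis.
    (Illustrative; real pipelines may use ITU-R luminance weights.)"""
    return img.mean(axis=-1, keepdims=True)

# Toy example: a 16-"pixel" input nudged toward class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = np.clip(0.5 + 0.1 * np.sign(w), 0.0, 1.0)
y = 1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_adv = fgsm_untargeted(x, y, w, b, eps=0.3)
print(predict(x), predict(x_adv))  # confidence on class 1 drops after the attack
```

The one-step sign update is the simplest member of the gradient-based family the paper evaluates; score-based and transfer-based attacks replace the exact gradient with query-based estimates or gradients of a surrogate model.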