Learning the Legibility of Visual Text Perturbations

Bibliographic Details
Published in arXiv.org
Main Authors Seth, Dev; Stureborg, Rickard; Pruthi, Danish; Dhingra, Bhuwan
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 10.03.2023

More Information
Summary: Many adversarial attacks in NLP perturb inputs to produce visually similar strings ('ergo' \(\rightarrow\) '\(\epsilon\)rgo') which are legible to humans but degrade model performance. Although preserving legibility is a necessary condition for text perturbation, little work has been done to systematically characterize it; instead, legibility is typically loosely enforced via intuitions around the nature and extent of perturbations. In particular, it is unclear to what extent inputs can be perturbed while preserving legibility, or how to quantify the legibility of a perturbed string. In this work, we address this gap by learning models that predict the legibility of a perturbed string, and rank candidate perturbations based on their legibility. To do so, we collect and release LEGIT, a human-annotated dataset comprising the legibility of visually perturbed text. Using this dataset, we build both text- and vision-based models which achieve up to \(0.91\) F1 score in predicting whether an input is legible, and an accuracy of \(0.86\) in predicting which of two given perturbations is more legible. Additionally, we discover that legible perturbations from the LEGIT dataset are more effective at lowering the performance of NLP models than best-known attack strategies, suggesting that current models may be vulnerable to a broad range of perturbations beyond what is captured by existing visual attacks. Data, code, and models are available at https://github.com/dvsth/learning-legibility-2023.
ISSN: 2331-8422
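
For readers unfamiliar with the two tasks the summary describes, the sketch below frames them in minimal Python: (1) binary legibility prediction for a perturbed string and (2) pairwise ranking of two candidate perturbations. This is an illustrative toy only, not the paper's released text- or vision-based models; the Unicode character-matching score and all function names here are hypothetical stand-ins for a learned legibility scorer such as those trained on LEGIT.

```python
# Toy sketch of the two evaluation tasks described in the abstract.
# The scoring heuristic is a crude stand-in for a learned legibility model.
import unicodedata


def toy_legibility_score(original: str, perturbed: str) -> float:
    """Fraction of positions whose NFKD-normalized perturbed character
    matches the original character (a crude proxy for visual similarity)."""
    if not original or len(original) != len(perturbed):
        return 0.0
    matches = sum(
        unicodedata.normalize("NFKD", p)[:1].lower() == o.lower()
        for o, p in zip(original, perturbed)
    )
    return matches / len(original)


def is_legible(original: str, perturbed: str, threshold: float = 0.5) -> bool:
    # Task 1: binary legibility prediction at a chosen score threshold.
    return toy_legibility_score(original, perturbed) >= threshold


def more_legible(original: str, cand_a: str, cand_b: str) -> str:
    # Task 2: pairwise comparison; return the candidate judged more legible.
    score_a = toy_legibility_score(original, cand_a)
    score_b = toy_legibility_score(original, cand_b)
    return cand_a if score_a >= score_b else cand_b


if __name__ == "__main__":
    # 'εrgo' keeps 3 of 4 characters intact, so the toy scorer accepts it.
    print(is_legible("ergo", "\u03b5rgo"))
    # Between 'εrgo' and '3rg0', the former preserves more characters.
    print(more_legible("ergo", "\u03b5rgo", "3rg0"))
```

The reported metrics in the summary correspond to these two settings: F1 for the binary legible/illegible decision, and accuracy for the pairwise choice between two perturbations of the same string.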