FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks
| Published in | Proceedings - International Test Conference, pp. 106-110 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 03.11.2024 |
Summary: Machine learning, and deep learning in particular, is used in a broad range of critical applications. Implementing such models in custom hardware can be highly beneficial thanks to lower power consumption and computation latency than GPUs. However, an error in their output can lead to disastrous outcomes. An adversary may force misclassification by inducing a small number of bit-flips at targeted locations, thereby degrading the model's accuracy. To mitigate such threats, this paper presents FAT-RABBIT, a cost-effective mechanism that trains the model so that few individual weights have an outsized impact on the outcome, reducing the model's sensitivity to fault-injection attacks. Moreover, to increase robustness against large bit-wise perturbations, we propose an optimization scheme called M-SAM. We then augment FAT-RABBIT with the M-SAM optimizer to further bolster model accuracy against bit-flipping fault attacks. Notably, these approaches incur no additional hardware overhead. Our experimental results demonstrate the robustness of FAT-RABBIT and its augmented version, Augmented FAT-RABBIT, against such attacks.
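The record does not reproduce the paper's method details. As background for the two ideas named in the summary, the sketch below first shows why a single bit-flip in a quantized weight is so damaging (flipping the sign bit of an int8 weight), then a minimal vanilla sharpness-aware minimization (SAM) step of the general kind that M-SAM presumably builds on. This is a hypothetical illustration: the function names `flip_bit_int8` and `sam_step`, the `rho` hyperparameter, and the use of plain SAM are assumptions, not the paper's actual M-SAM.

```python
import torch

# Hypothetical sketch: vanilla SAM (Foret et al., 2021) is used here
# because this record does not specify how M-SAM differs from it.

def flip_bit_int8(value: int, bit: int) -> int:
    """Flip one bit of an 8-bit two's-complement integer (a quantized weight)."""
    u = (value & 0xFF) ^ (1 << bit)      # view as unsigned byte, flip the bit
    return u - 256 if u >= 128 else u    # reinterpret as signed int8

# A single most-significant-bit flip turns a small weight into a large one:
assert flip_bit_int8(3, 7) == -125       # 0b00000011 -> 0b10000011

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One vanilla SAM update: descend on the locally worst-case loss."""
    # 1) Gradient at the current weights.
    base_opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # 2) Ascend to the local worst case: w <- w + rho * g / ||g||.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]))
        eps = []
        for p in model.parameters():
            e = None
            if p.grad is not None:
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)
            eps.append(e)
    # 3) Gradient of the loss at the perturbed point.
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    # 4) Undo the perturbation, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()
    return loss.item()
```

For example, with a hypothetical classifier (`model = torch.nn.Linear(784, 10)`, `base_opt = torch.optim.SGD(model.parameters(), lr=0.01)`), each training iteration would call `sam_step(model, torch.nn.functional.cross_entropy, x, y, base_opt)`. The flatter minima such training finds make the loss less sensitive to weight perturbations, which is the usual intuition for applying SAM-style optimization against bit-flip faults.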
ISSN: 2378-2250
DOI: 10.1109/ITC51657.2024.00029