FAT-RABBIT: Fault-Aware Training towards Robustness Against Bit-flip Based Attacks in Deep Neural Networks
Published in | Proceedings - International Test Conference, pp. 106 - 110 |
Main Authors | Pourmehrani, Hossein; Bahrami, Javad; Nooralinejad, Parsa; Pirsiavash, Hamed; Karimi, Naghmeh |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 03.11.2024 |
Subjects | Accuracy; Computational modeling; Deep learning; Fault-aware training; Fault injection attacks; Force; Hardware; Machine learning accelerator; Optimization; Perturbation methods; Robustness; Sensitivity; Training |
Online Access | https://ieeexplore.ieee.org/document/10766713 |
ISSN | 2378-2250 |
DOI | 10.1109/ITC51657.2024.00029 |
Abstract | Machine learning, and in particular deep learning, is used in a broad range of critical applications. Implementing such models in custom hardware can be highly beneficial thanks to its lower power consumption and computation latency compared to GPUs. However, an error in the model's output can lead to disastrous outcomes. An adversary may force misclassification by inducing a number of bit-flips at targeted locations, thereby degrading accuracy. To address this threat, this paper presents FAT-RABBIT, a cost-effective mechanism that trains the model so that few weights are highly impactful on the outcome, reducing the model's sensitivity to fault-injection attacks. Moreover, to increase robustness against large bit-wise perturbations, we propose an optimization scheme called M-SAM. We then augment FAT-RABBIT with the M-SAM optimizer to further bolster model accuracy under bit-flipping fault attacks. Notably, these approaches incur no additional hardware overhead. Our experimental results demonstrate the robustness of FAT-RABBIT and its augmented version, called Augmented FAT-RABBIT, against such attacks. |
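The attack model in the abstract operates on the stored binary representation of the weights: flipping a single bit of a quantized weight, especially its high-order bit, can change the value drastically. As a background sketch only, not code from the paper, a bit-flip on an int8 weight tensor can be modeled in PyTorch as follows (the tensor shape, index choice, and int8 quantization are illustrative assumptions):

```python
import torch

def flip_bit(weight_q: torch.Tensor, index: int, bit: int) -> torch.Tensor:
    """Flip one bit of an int8 weight tensor (illustrative threat model only)."""
    flat = weight_q.view(-1).clone()
    val = int(flat[index].item()) & 0xFF   # reinterpret signed int8 as unsigned
    val ^= (1 << bit)                      # flip the chosen bit (0 = LSB, 7 = sign)
    if val >= 128:                         # convert back to the signed 8-bit range
        val -= 256
    flat[index] = val
    return flat.view_as(weight_q)

# Flipping the high-order bit of the largest-magnitude weight illustrates the
# "few highly impactful weights" problem that FAT-RABBIT's training targets.
w = torch.randint(-128, 128, (4, 4), dtype=torch.int8)
idx = int(torch.argmax(w.to(torch.int32).abs()))
w_faulty = flip_bit(w, idx, bit=7)
```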
Author Affiliations | Hossein Pourmehrani, Javad Bahrami, Naghmeh Karimi: University of Maryland, Baltimore; Parsa Nooralinejad, Hamed Pirsiavash: University of California, Davis |
CODEN | IEEPAD |
Discipline | Engineering |
EISBN | 9798331520137 |
EISSN | 2378-2250 |
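The record does not spell out how FAT-RABBIT's training objective keeps any small set of weights from dominating the outcome. Purely to illustrate that goal, and not as the paper's actual loss, one generic option is a penalty on the largest weight magnitudes, spreading importance across many weights (the penalty form and the names `k` and `lam` are assumptions):

```python
import torch

def top_magnitude_penalty(model: torch.nn.Module, k: int = 100) -> torch.Tensor:
    """Illustrative regularizer, not FAT-RABBIT's published objective:
    penalize the k largest-magnitude weights so that no small subset of
    weights becomes disproportionately impactful under bit-flips."""
    all_w = torch.cat([p.reshape(-1) for p in model.parameters()])
    return torch.topk(all_w.abs(), k).values.pow(2).sum()

# Hypothetical usage inside a training loop:
#   loss = task_loss + lam * top_magnitude_penalty(model)
# where lam is a weighting hyperparameter chosen by validation.
```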
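The record likewise leaves M-SAM undefined beyond naming it an optimization scheme; the name suggests a variant of sharpness-aware minimization (SAM). For orientation only, a generic SAM training step in PyTorch looks like the sketch below; nothing here is specific to M-SAM, and `rho`, the model, the loss, and the base optimizer are placeholders:

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One generic sharpness-aware minimization step (background sketch;
    the paper's M-SAM variant is not specified in this record)."""
    # First pass: gradient at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()

    # Ascend to the worst-case nearby point w + e(w), with e(w) = rho * g / ||g||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # Second pass: the gradient at the perturbed weights drives the update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)   # return to the original weights before stepping
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```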