Empirical study on security verification and assessment of neural network accelerator

Bibliographic Details
Published in: Microprocessors and Microsystems, Vol. 99, p. 104845
Main Authors: Chen, Yean Ru; Wang, Tzu Fan; Chen, Si-Han; Kao, Yi-Chun
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.06.2023

Summary: With the significant success of machine learning, there are now many innovative neural network designs, and the related applications are becoming increasingly pervasive in daily life, even in life-critical domains such as autonomous driving and medical diagnosis. In these domains, whether an AI-based system is "secure" is a critical issue. In this work, we first present six Hardware Trojan attacks and demonstrate their impact on the hardware design of neural networks. When data leakage occurs, we encode the leaked data into the output, making it more difficult to detect. Most of our attacks can either achieve an attack success rate above 98% or leak confidential data without causing any functional violation, with less than 1.5% overhead. We also discuss how to detect these Hardware Trojans effectively and efficiently with formal verification methods, and we further propose a risk assessment process that provides priority guidance for the security verification tasks of neural network hardware. Based on our results, we strongly suggest that security specification and total verification are essential for neural network designs.
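The leakage attack mentioned in the summary, in which leaked data is encoded into the accelerator's output, can be illustrated with a small sketch. The Python snippet below is a hypothetical, simplified illustration only (not the authors' actual hardware design): it hides secret bits in the least-significant bits of quantized outputs, so the functional result is barely perturbed and the leakage is hard to detect by output inspection. All names and parameters here are assumptions for demonstration.

```
# Hypothetical illustration of output-encoded data leakage; not the paper's design.
import numpy as np

def embed_leak(outputs: np.ndarray, secret_bits: list[int]) -> np.ndarray:
    """Overwrite the least-significant bit of each int8 output with one secret bit."""
    leaked = outputs.copy()
    flat = leaked.reshape(-1)                      # view into the copy
    for i, bit in enumerate(secret_bits[: flat.size]):
        flat[i] = (flat[i] & ~1) | bit             # replace the LSB only
    return leaked

def extract_leak(outputs: np.ndarray, n_bits: int) -> list[int]:
    """Recover the hidden bits from the observed outputs."""
    return [int(v) & 1 for v in outputs.reshape(-1)[:n_bits]]

# Example: leak 8 secret bits through a vector of int8 activations.
rng = np.random.default_rng(0)
acts = rng.integers(-128, 127, size=16, dtype=np.int8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
tampered = embed_leak(acts, secret)
assert extract_leak(tampered, 8) == secret
# Each output value changes by at most 1, so functionality is barely affected.
assert np.max(np.abs(tampered.astype(int) - acts.astype(int))) <= 1
```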
ISSN: 0141-9331, 1872-9436
DOI: 10.1016/j.micpro.2023.104845