Preventing DNN Model IP Theft via Hardware Obfuscation

Bibliographic Details
Published in: IEEE Journal on Emerging and Selected Topics in Circuits and Systems, Vol. 11, no. 2, pp. 267-277
Main Authors: Goldstein, Brunno F.; Patil, Vinay C.; Ferreira, Victor C.; Nery, Alexandre S.; Franca, Felipe M. G.; Kundu, Sandip
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.06.2021

Summary: Training accurate deep learning (DL) models requires large amounts of training data, significant work in labeling the data, considerable computing resources, and substantial domain expertise. In short, they are expensive to develop. Hence, protecting these models, which are valuable storehouses of intellectual property (IP), against model stealing/cloning attacks is of paramount importance. Today's mobile processors feature Neural Processing Units (NPUs) to accelerate the execution of DL models. DL models executing on NPUs are vulnerable to hyperparameter extraction via side-channel attacks and model parameter theft via bus monitoring attacks. This paper presents a novel solution to defend against DL IP theft in NPUs during model distribution and deployment/execution via a lightweight, keyed model obfuscation scheme. Unauthorized use of such models results in inaccurate classification. In addition, we present an ideal end-to-end deep learning trusted system composed of: 1) model distribution via a hardware root of trust and public-key infrastructure (PKI) and 2) model execution via low-latency memory encryption. We demonstrate that our proposed obfuscation solution achieves IP protection objectives without requiring specialized training or sacrificing the model's accuracy. In addition, the proposed obfuscation mechanism preserves the output class distribution while degrading the model's accuracy for unauthorized parties, concealing any evidence of a hacked model.
ISSN: 2156-3357, 2156-3365
DOI: 10.1109/JETCAS.2021.3076151
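
Note: The abstract above does not spell out how the keyed obfuscation works. As a rough, illustrative sketch only (not the authors' actual scheme), the Python snippet below shows one way a keyed weight obfuscation could operate: a secret key deterministically derives a column permutation of a layer's weight matrix, so only a party holding the key can restore the weights for accurate inference, while a wrong key leaves them scrambled. All names (obfuscate_weights, deobfuscate_weights, the toy key and matrix) are hypothetical.

import hashlib
import numpy as np


def _rng_from_key(key: bytes) -> np.random.Generator:
    # Derive a deterministic RNG from the secret key (e.g., a key provisioned
    # through a hardware root of trust) so the permutation is reproducible.
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "little")
    return np.random.default_rng(seed)


def obfuscate_weights(w: np.ndarray, key: bytes) -> np.ndarray:
    # Scramble the layer by permuting its output columns with a
    # key-derived permutation.
    perm = _rng_from_key(key).permutation(w.shape[1])
    return w[:, perm]


def deobfuscate_weights(w_obf: np.ndarray, key: bytes) -> np.ndarray:
    # Recompute the same permutation from the key and invert it; without the
    # correct key the weights stay scrambled and accuracy collapses.
    perm = _rng_from_key(key).permutation(w_obf.shape[1])
    return w_obf[:, np.argsort(perm)]


if __name__ == "__main__":
    key = b"device-unique-secret"                      # hypothetical secret key
    w = np.arange(24, dtype=np.float32).reshape(3, 8)  # toy 3x8 weight matrix
    w_obf = obfuscate_weights(w, key)

    # The correct key restores the original weights exactly.
    assert np.array_equal(deobfuscate_weights(w_obf, key), w)

    # A wrong key almost certainly does not (weights remain scrambled).
    print(np.array_equal(deobfuscate_weights(w_obf, b"wrong-key"), w))

In the end-to-end system the abstract describes, the key material would presumably be delivered over the PKI-backed distribution channel and the restored weights would only ever reside in encrypted NPU memory; the sketch above ignores both of those aspects.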