AriaNN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing
Format | Journal Article |
Language | English |
Published | 08.06.2020 |
---|---|
Summary: | We propose AriaNN, a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data. Our semi-honest 2-party computation protocol (with a trusted dealer) leverages function secret sharing, a recent lightweight cryptographic protocol that allows us to achieve an efficient online phase. We design optimized primitives for the building blocks of neural networks such as ReLU, MaxPool, and BatchNorm. For instance, we perform private comparison for ReLU operations with a single message of the size of the input during the online phase, and with preprocessing keys close to 4× smaller than previous work. Finally, we propose an extension to support n-party private federated learning. We implement our framework as an extensible system on top of PyTorch that leverages CPU and GPU hardware acceleration for cryptographic and machine learning operations. We evaluate our end-to-end system for private inference between distant servers on standard neural networks such as AlexNet, VGG16, and ResNet18, and for private training on smaller networks like LeNet. We show that computation rather than communication is the main bottleneck, and that using GPUs together with reduced key size is a promising solution to overcome this barrier. |
---|---|
DOI: | 10.48550/arxiv.2006.04593 |
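
The summary's central primitive is a private comparison whose online phase reveals only a masked input and costs a single message per party. The minimal Python sketch below illustrates that flow under strong simplifications: the ring is 8 bits, and the FSS "keys" are plain additive shares of the comparison function's truth table (so they are exponentially large), whereas the paper compresses keys with actual function secret sharing. All names here (`dealer_keygen`, `share`, etc.) are ours for illustration, not AriaNN's API.

```python
# Toy sketch of an FSS-style private comparison (the ReLU sign test),
# mirroring the online flow described in the abstract. NOT the paper's
# construction: here the keys are truth-table shares, which is correct
# but exponentially large; AriaNN uses compact PRG-based FSS keys.
import random

N_BITS = 8            # tiny ring Z_{2^n}; real implementations use n = 32 or 64
MOD = 1 << N_BITS

def share(x):
    """Additively secret-share x over Z_{2^n}."""
    x0 = random.randrange(MOD)
    return x0, (x - x0) % MOD

def dealer_keygen():
    """Offline phase: the trusted dealer samples a mask r and gives each
    party a share of r plus a 'key' -- an additive share of the truth
    table of f_r(y) = 1{(y - r) mod 2^n < 2^(n-1)}, i.e. the sign test
    applied to the masked input."""
    r = random.randrange(MOD)
    k0 = [random.randrange(MOD) for _ in range(MOD)]
    k1 = [(int(((y - r) % MOD) < MOD // 2) - k0[y]) % MOD for y in range(MOD)]
    r0, r1 = share(r)
    return (r0, k0), (r1, k1)

# --- online phase: one round, one ring-element message per party ---
x = -3 % MOD                          # private input; signed value is -3
x0, x1 = share(x)                     # the two parties hold shares of x
(r0, k0), (r1, k1) = dealer_keygen()  # preprocessing material

# Each party publishes x_i + r_i; only the masked value y = x + r is revealed.
y = ((x0 + r0) + (x1 + r1)) % MOD

# Local, non-interactive key evaluation yields shares of the bit 1{x >= 0}.
b0, b1 = k0[y], k1[y]
assert (b0 + b1) % MOD == 0           # x = -3 < 0, so the shared bit is 0
```

The shared bit b = 1{x ≥ 0} is the ReLU derivative; obtaining ReLU(x) = b·x from shares then takes one private multiplication, which can likewise be served from the dealer's preprocessing, keeping the online interaction low.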