NoPeek-Infer: Preventing face reconstruction attacks in distributed inference after on-premise training
Published in | 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pp. 1–8 |
Main Authors | , , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 15.12.2021 |
Summary: | For models trained on-premise but deployed in a distributed fashion across multiple entities, we demonstrate that minimizing distance correlation between sensitive data such as faces and intermediary representations enables prediction while preventing reconstruction attacks. Leakage (measured using distance correlation between input and intermediate representations) is the risk associated with the reconstruction of raw face data from intermediary representations that are communicated in a distributed setting. We demonstrate on face datasets that our method is resilient to reconstruction attacks during distributed inference while maintaining information required to sustain good classification accuracy. We share modular code for performing NoPeek-Infer at http://tiny.cc/nopeek along with corresponding trained models for benchmarking attack techniques. |
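The summary above measures leakage as the distance correlation between raw inputs and the intermediate representations communicated at inference time. As an illustration only (this sketch is not taken from the paper's repository at http://tiny.cc/nopeek, and the function name is mine), the sample distance correlation between two batches can be computed in NumPy as:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two batches of samples.

    x, y: arrays of shape (n, ...) sharing the first dimension n.
    Returns a value in [0, 1]: near 0 for unrelated batches,
    1 for a perfect (e.g. linear) relationship.
    """
    def doubly_centered_distances(z):
        # Pairwise Euclidean distance matrix, double-centered
        # (row means, column means, and grand mean removed).
        z = np.asarray(z, dtype=float).reshape(len(z), -1)
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return (d - d.mean(axis=0, keepdims=True)
                  - d.mean(axis=1, keepdims=True) + d.mean())

    a = doubly_centered_distances(x)
    b = doubly_centered_distances(y)
    dcov2 = max((a * b).mean(), 0.0)   # squared sample distance covariance
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0
```

In a NoPeek-style setup, this quantity between a batch of face images and the layer activations sent across entities would be added, with a weighting factor, to the classification loss so that training suppresses reconstructable information while preserving task accuracy; that framing follows the summary, while the implementation details here are an assumption.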
DOI: | 10.1109/FG52635.2021.9667085 |