NoPeek-Infer: Preventing face reconstruction attacks in distributed inference after on-premise training

Bibliographic Details
Published in: 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pp. 1-8
Main Authors: Vepakomma, Praneeth; Singh, Abhishek; Zhang, Emily; Gupta, Otkrist; Raskar, Ramesh
Format: Conference Proceeding
Language: English
Published: IEEE, 15.12.2021
Summary: For models trained on-premise but deployed in a distributed fashion across multiple entities, we demonstrate that minimizing distance correlation between sensitive data such as faces and intermediary representations enables prediction while preventing reconstruction attacks. Leakage (measured using distance correlation between input and intermediate representations) is the risk associated with the reconstruction of raw face data from intermediary representations that are communicated in a distributed setting. We demonstrate on face datasets that our method is resilient to reconstruction attacks during distributed inference while maintaining information required to sustain good classification accuracy. We share modular code for performing NoPeek-Infer at http://tiny.cc/nopeek along with corresponding trained models for benchmarking attack techniques.
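The leakage measure the summary refers to is distance correlation (Székely et al.), which the paper minimizes between inputs and intermediate activations. As an illustrative sketch only (not the authors' released code at the URL above, and with hypothetical names), the empirical statistic can be computed from double-centered pairwise distance matrices:

```python
import numpy as np

def distance_correlation(X, Y):
    """Empirical distance correlation between paired samples X (n, d1) and Y (n, d2).

    Returns a value in [0, 1]; larger values indicate stronger statistical
    dependence between the two sets of representations.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)

    def centered_dists(Z):
        # Pairwise Euclidean distance matrix, then double-centering
        # (subtract row means and column means, add back the grand mean).
        D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

    A, B = centered_dists(X), centered_dists(Y)
    dcov2 = (A * B).mean()            # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0
```

In the paper's setting, a term of this form (between raw faces and the activations sent over the network) is added to the training loss so that the intermediate representations retain class information while becoming hard to invert back into faces.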
DOI: 10.1109/FG52635.2021.9667085