Comments on "Dropping Activation Outputs with Localized First-Layer Deep Network for Enhancing User Privacy and Data Security"

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 15, pp. 3938-3939
Main Authors: Tan, Xinrui; Li, Hongjia; Wang, Liming; Xu, Zhen
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
Summary: Inference based on deep learning models is usually implemented by exposing sensitive user data to outside models, which naturally gives rise to acute privacy concerns. To address these concerns, Dong et al. recently proposed an approach, the dropping-activation-outputs (DAO) first layer. This approach was claimed to be a non-invertible transformation, so that the privacy of user data could not be compromised. However, in this paper we prove that the DAO first layer can in fact generally be inverted, and hence fails to preserve privacy. We also provide a countermeasure against the privacy vulnerabilities that we examined.
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2020.2988156
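
As a rough illustration of the inversion argument described in the summary (not the authors' actual attack), the sketch below assumes a DAO-style first layer that applies a dense layer with ReLU on the user's side and randomly drops a subset of the activation outputs before transmission. The dimensions, the assumption that the attacker knows the first-layer weights and the surviving output positions, and the least-squares recovery step are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): d-dimensional input, m first-layer units.
d, m = 16, 256
W = rng.standard_normal((m, d))   # first-layer weights, assumed known to the attacker
b = rng.standard_normal(m)        # first-layer biases, assumed known to the attacker
x = rng.standard_normal(d)        # sensitive user input

# DAO-style first layer (sketch): ReLU activations, then drop a random
# subset of the outputs before sending the remainder to the remote model.
a = np.maximum(W @ x + b, 0.0)
keep = rng.random(m) > 0.5        # positions of the surviving outputs

# Attacker's view: the surviving activations and their positions.
W_k, b_k, a_k = W[keep], b[keep], a[keep]

# Wherever a surviving activation is strictly positive, ReLU acted as the
# identity, so W_i @ x + b_i = a_i is a *linear* equation in x. With at
# least d such equations and full-rank rows, least squares recovers x.
pos = a_k > 0.0
x_hat, *_ = np.linalg.lstsq(W_k[pos], a_k[pos] - b_k[pos], rcond=None)

print("recovery error:", np.linalg.norm(x - x_hat))  # ~1e-13 in this toy setup
```

In this toy setting the recovery is exact whenever at least d of the surviving activations are strictly positive, which illustrates why merely dropping activation outputs need not yield a non-invertible transformation. The countermeasure proposed in the paper is not reproduced here.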