From Gradient Leakage To Adversarial Attacks In Federated Learning

Bibliographic Details
Published in: 2021 IEEE International Conference on Image Processing (ICIP), pp. 3602 - 3606
Main Authors: Lim, Jia Qi; Chan, Chee Seng
Format: Conference Proceeding
Language: English
Published: IEEE, 19.09.2021

More Information
Summary: Deep neural networks (DNN) are widely used in real-life applications despite a lack of understanding of this technology and its challenges. Data privacy is one of the bottlenecks yet to be overcome, and more challenges arise as researchers pay closer attention to DNN vulnerabilities. In this work, we aim to cast doubt on the reliability of DNNs with solid evidence, particularly in the Federated Learning environment, by utilizing an existing privacy-breaking algorithm that inverts model gradients to reconstruct the input data. By performing this attack, we exemplify the data reconstructed by the inverting-gradients algorithm as a potential threat and further reveal the vulnerabilities of models in representation learning. A PyTorch implementation is provided at https://github.com/Jiaqi0602/adversarial-attack-from-leakage/
ISSN: 2381-8549
DOI: 10.1109/ICIP42928.2021.9506589
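
The summary above refers to inverting shared gradients to reconstruct a client's input data. The following is a minimal PyTorch sketch of that general idea, not the authors' exact method (which is available at the linked repository): a dummy input is optimized so that the gradients it induces on the model match the leaked gradients. The model, optimizer, distance measure, and shapes below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def invert_gradients(model, target_grads, input_shape, label, steps=300, lr=0.1):
    """Reconstruct an input whose parameter gradients match the leaked gradients."""
    dummy = torch.randn(1, *input_shape, requires_grad=True)  # random initial guess
    optimizer = torch.optim.Adam([dummy], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        # Gradients w.r.t. the model parameters for the current dummy input,
        # keeping the graph so we can differentiate the matching loss w.r.t. dummy.
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Squared-error distance between dummy gradients and the leaked gradients
        # (other choices, e.g. cosine distance, are common in the literature).
        grad_loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
        grad_loss.backward()
        optimizer.step()
    return dummy.detach()

# Hypothetical usage: a toy victim model and gradients "leaked" from one sample.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
leaked = torch.autograd.grad(F.cross_entropy(model(x_true), y_true), model.parameters())
x_rec = invert_gradients(model, leaked, (3, 32, 32), y_true)

In a federated setting the leaked gradients would come from a client's model update rather than being computed locally as in this toy example, which is what makes the reconstruction a privacy threat.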