Survey of Membership Inference Attacks for Machine Learning

Bibliographic Details
Published in: Ji suan ji ke xue, Vol. 50, No. 1, pp. 302-317
Main Authors: Chen, Depeng; Liu, Xiao; Cui, Jie; He, Daojing
Format: Journal Article
Language: Chinese
Published: Chongqing: Guojia Kexue Jishu Bu (Editorial office of Computer Science), 01.01.2023

Summary: Artificial intelligence has been integrated into all aspects of people's daily lives with the continuous development of machine learning, especially deep learning. Machine learning models are deployed in various applications, enhancing the intelligence of traditional applications. However, in recent years, research has pointed out that the personal data used to train machine learning models is vulnerable to privacy disclosure. Membership inference attacks (MIAs) are significant attacks against machine learning models that threaten users' privacy. An MIA aims to determine whether a given data sample was used to train the target model. When the data is closely tied to an individual, as in the medical, financial, and other fields, such an attack directly compromises the user's private information. This paper first introduces the background knowledge of membership inference attacks. Then, we classify the existing MIAs according to whether the attacker has a shadow model. We also summarize the threats of MIAs ...
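To make the shadow-model classification mentioned in the summary concrete, the following is a minimal sketch of a shadow-model-based membership inference attack (a single attack model trained on a shadow model's confidence vectors, a simplification of the approach commonly attributed to Shokri et al.). The synthetic dataset, scikit-learn models, splits, and variable names below are illustrative assumptions, not the survey's implementation.

# Minimal sketch of a shadow-model membership inference attack.
# Dataset, models, and splits are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the target model's data population.
X, y = make_classification(n_samples=4000, n_features=20, n_classes=2, random_state=0)
X_target, X_shadow, y_target, y_shadow = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model: the attacker can only query its prediction probabilities.
X_t_in, X_t_out, y_t_in, y_t_out = train_test_split(X_target, y_target, test_size=0.5, random_state=1)
target_model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_t_in, y_t_in)

# Shadow model: trained by the attacker on similar data, so the attacker
# knows which of its samples are members ("in") and non-members ("out").
X_s_in, X_s_out, y_s_in, y_s_out = train_test_split(X_shadow, y_shadow, test_size=0.5, random_state=2)
shadow_model = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_s_in, y_s_in)

# Attack training set: shadow confidence vectors labeled member / non-member.
attack_X = np.vstack([shadow_model.predict_proba(X_s_in), shadow_model.predict_proba(X_s_out)])
attack_y = np.concatenate([np.ones(len(X_s_in)), np.zeros(len(X_s_out))])

# Attack model: predicts membership from a confidence vector.
attack_model = LogisticRegression(max_iter=1000).fit(attack_X, attack_y)

# Evaluate the attack on the target model's known members and non-members.
pred_in = attack_model.predict(target_model.predict_proba(X_t_in))    # ideally mostly 1
pred_out = attack_model.predict(target_model.predict_proba(X_t_out))  # ideally mostly 0
acc = (pred_in.sum() + (1 - pred_out).sum()) / (len(pred_in) + len(pred_out))
print(f"membership inference accuracy: {acc:.2f}")

The attack exploits the gap between a model's confidence on its training members and on unseen samples; the full shadow-model approach typically trains multiple shadow models and one attack model per output class, whereas this sketch uses a single shadow and attack model for brevity.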
ISSN: 1002-137X
DOI: 10.11896/jsjkx.220800227