A Review on Machine Unlearning
Published in | SN computer science Vol. 4; no. 4; p. 337 |
Main Authors | Zhang, Haibo; Nakamura, Toru; Isohara, Takamasa; Sakurai, Kouichi |
Format | Journal Article |
Language | English |
Published | Singapore: Springer Nature Singapore, 19.04.2023; Springer Nature B.V |
Subjects | Machine unlearning; Data lineage; Privacy; Security; Machine learning |
ISSN | 2662-995X 2661-8907 |
DOI | 10.1007/s42979-023-01767-4 |
Abstract | Recently, an increasing number of laws have governed the use of users’ private data. For example, Article 17 of the General Data Protection Regulation (GDPR), the right to be forgotten, requires machine learning applications to remove a portion of data from a dataset and retrain the model if the user makes such a request. Furthermore, from the security perspective, training data for machine learning models, i.e., data that may contain user privacy, should be effectively protected, including appropriate erasure. Therefore, researchers propose various privacy-preserving methods, such as machine unlearning, to deal with these issues. This paper provides an in-depth review of the security and privacy concerns in machine learning models. First, we present how machine learning can use users’ private data in daily life and the role that the GDPR plays in this problem. Then, we introduce the concept of machine unlearning by describing the security threats to machine learning models and how to protect users’ privacy from being violated when using machine learning platforms. As the core content of the paper, we introduce and analyze current machine unlearning approaches and several representative results, and discuss them in the context of data lineage. Furthermore, we also discuss future research challenges in this field. |
ArticleNumber | 337 |
Author | Sakurai, Kouichi; Nakamura, Toru; Zhang, Haibo; Isohara, Takamasa |
Author_xml | – sequence: 1 givenname: Haibo orcidid: 0000-0002-4275-405X surname: Zhang fullname: Zhang, Haibo email: zhang.haibo.892@s.kyushu-u.ac.jp organization: Department of Information Science and Technology, Graduate School of Information Science and Electrical Engineering, Kyushu University – sequence: 2 givenname: Toru surname: Nakamura fullname: Nakamura, Toru organization: KDDI Research Inc – sequence: 3 givenname: Takamasa surname: Isohara fullname: Isohara, Takamasa organization: KDDI Research Inc – sequence: 4 givenname: Kouichi surname: Sakurai fullname: Sakurai, Kouichi organization: Department of Information Science and Technology, Faculty of Information Science and Electrical Engineering, Kyushu University |
CitedBy_id | crossref_primary_10_1038_d41586_024_02838_z crossref_primary_10_3390_s24103087 crossref_primary_10_1007_s13347_023_00644_5 crossref_primary_10_1007_s10115_024_02312_2 crossref_primary_10_1145_3704997 crossref_primary_10_1016_j_jisa_2025_104010 crossref_primary_10_1016_j_ijft_2025_101119 crossref_primary_10_1007_s40319_023_01419_3 crossref_primary_10_1016_j_neucom_2023_126629 crossref_primary_10_1177_02537176241302898 crossref_primary_10_1002_path_6168 |
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
DOI | 10.1007/s42979-023-01767-4 |
DatabaseName | CrossRef ProQuest SciTech Collection ProQuest Technology Collection ProQuest Central UK/Ireland Health Research Premium Collection ProQuest Central Essentials ProQuest Central Technology Collection ProQuest One Community College ProQuest Central Korea ProQuest Central Student SciTech Premium Collection ProQuest Computer Science Collection Computer Science Database Advanced Technologies & Aerospace Database ProQuest Advanced Technologies & Aerospace Collection ProQuest Central Premium ProQuest One Academic ProQuest One Academic Middle East (New) ProQuest One Academic Eastern Edition (DO NOT USE) ProQuest One Applied & Life Sciences ProQuest One Academic ProQuest One Academic UKI Edition |
DatabaseTitle | CrossRef Advanced Technologies & Aerospace Collection Computer Science Database ProQuest Central Student Technology Collection ProQuest One Academic Middle East (New) ProQuest Advanced Technologies & Aerospace Collection ProQuest Central Essentials ProQuest Computer Science Collection ProQuest One Academic Eastern Edition SciTech Premium Collection ProQuest One Community College ProQuest Technology Collection ProQuest SciTech Collection ProQuest Central Advanced Technologies & Aerospace Database ProQuest One Applied & Life Sciences ProQuest One Academic UKI Edition ProQuest Central Korea ProQuest Central (New) ProQuest One Academic ProQuest One Academic (New) |
DatabaseTitleList | Advanced Technologies & Aerospace Collection |
DeliveryMethod | fulltext_linktorsrc |
Discipline | Computer Science |
EISSN | 2661-8907 |
ExternalDocumentID | 10_1007_s42979_023_01767_4 |
GrantInformation_xml | – fundername: JST-Mirai Program grantid: JPMJSP2136 funderid: http://dx.doi.org/10.13039/501100020959 |
ISSN | 2661-8907 2662-995X |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Keywords | Machine unlearning; Data lineage; Privacy; Security; Machine learning |
Language | English |
LinkModel | DirectLink |
ORCID | 0000-0002-4275-405X |
OpenAccessLink | https://kyutech.repo.nii.ac.jp/records/2000802 |
PQID | 2921254343 |
PQPubID | 6623307 |
PublicationCentury | 2000 |
PublicationDate | 20230419 |
PublicationDateYYYYMMDD | 2023-04-19 |
PublicationDate_xml | – month: 4 year: 2023 text: 20230419 day: 19 |
PublicationDecade | 2020 |
PublicationPlace | Singapore |
PublicationPlace_xml | – name: Singapore – name: Kolkata |
PublicationTitle | SN computer science |
PublicationTitleAbbrev | SN COMPUT. SCI |
PublicationYear | 2023 |
Publisher | Springer Nature Singapore Springer Nature B.V |
Publisher_xml | – name: Springer Nature Singapore – name: Springer Nature B.V |
year: 2020 ident: 1767_CR55 publication-title: First EAGE Digit Conf Exhib – year: 2022 ident: 1767_CR56 publication-title: arXiv doi: 10.48550/arXiv.2203.11491 – ident: 1767_CR58 doi: 10.1109/CSF.2018.00027 – volume: 17 start-page: 49 issue: 2 year: 2019 ident: 1767_CR4 publication-title: IEEE Secur Priv doi: 10.1109/MSEC.2018.2888775 – year: 2021 ident: 1767_CR40 publication-title: arXiv doi: 10.48550/arXiv.2105.06209 – volume: 9 start-page: 211 issue: 3–4 year: 2014 ident: 1767_CR48 publication-title: Found Trends Theor Comput Sci – ident: 1767_CR23 – year: 2021 ident: 1767_CR41 publication-title: arXiv doi: 10.48550/arXiv.2111.12056 – ident: 1767_CR35 doi: 10.1109/CVPR46437.2021.00085 – year: 2021 ident: 1767_CR20 publication-title: arXiv doi: 10.48550/arXiv.2106.15093 – ident: 1767_CR49 – ident: 1767_CR51 – year: 2014 ident: 1767_CR47 publication-title: arXiv doi: 10.48550/arXiv.1412.1193 – start-page: 383 volume-title: Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations Europea conference on computer vision year: 2020 ident: 1767_CR50 – ident: 1767_CR34 – ident: 1767_CR12 doi: 10.1145/3351095.3372834 – ident: 1767_CR59 – year: 2020 ident: 1767_CR6 publication-title: arXiv doi: 10.48550/arXiv.2010.10981 – year: 2019 ident: 1767_CR21 publication-title: arXiv doi: 10.48550/arXiv.1911.03030 – ident: 1767_CR31 doi: 10.1145/3319535.3363226 |
SourceID | proquest crossref springer |
SourceType | Aggregation Database Enrichment Source Index Database Publisher |
StartPage | 337 |
SubjectTerms | Algorithms; Artificial intelligence; Big Data; Computer Imaging; Computer Science; Computer Systems Organization and Communication Networks; Data analysis; Data Structures and Information Theory; Datasets; General Data Protection Regulation; Information Systems and Communication Service; Internet; Machine learning; Pattern Recognition and Graphics; Privacy; Research methodology; Search engines; Security; Software Engineering/Programming and Operating Systems; Survey Article; Vision |
Title | A Review on Machine Unlearning |
URI | https://link.springer.com/article/10.1007/s42979-023-01767-4 https://www.proquest.com/docview/2921254343 |
Volume | 4 |
linkProvider | ProQuest |
openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=A+Review+on+Machine+Unlearning&rft.jtitle=SN+computer+science&rft.au=Zhang%2C+Haibo&rft.au=Nakamura%2C+Toru&rft.au=Isohara%2C+Takamasa&rft.au=Sakurai%2C+Kouichi&rft.date=2023-04-19&rft.pub=Springer+Nature+Singapore&rft.eissn=2661-8907&rft.volume=4&rft.issue=4&rft_id=info:doi/10.1007%2Fs42979-023-01767-4&rft.externalDocID=10_1007_s42979_023_01767_4 |