Data Poisoning Attack Aiming the Vulnerability of Continual Learning

Bibliographic Details
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 1905-1909
Main Authors: Han, Gyojin; Choi, Jaehyun; Hong, Hyeong Gwon; Kim, Junmo
Format: Conference Proceeding
Language: English
Published: IEEE, 08.10.2023
Subjects: catastrophic forgetting; continual learning; data poisoning; data privacy; degradation; image processing; learning systems; memory management; perturbation methods; training data
Online Access: https://ieeexplore.ieee.org/document/10222168
EISBN: 9781728198354 (1728198356)
DOI: 10.1109/ICIP49359.2023.10222168

Abstract Regularization-based continual learning models generally limit access to previous task data in order to imitate real-world memory and privacy constraints. However, this prevents these models from tracking their performance on each task; in essence, current continual learning methods are susceptible to attacks on previous tasks. We demonstrate the vulnerability of regularization-based continual learning methods by presenting a simple task-specific data poisoning attack that can be applied during the learning of a new task. Training data generated by the proposed attack degrades performance on the specific task targeted by the attacker. We evaluate the attack on two representative regularization-based continual learning methods, Elastic Weight Consolidation (EWC) and Synaptic Intelligence (SI), trained on variants of the MNIST dataset. The experimental results confirm the vulnerability identified in this paper and demonstrate the importance of developing continual learning models that are robust to adversarial attacks.
Authors and Affiliations:
Han, Gyojin (KAIST, School of Electrical Engineering, South Korea)
Choi, Jaehyun (KAIST, School of Electrical Engineering, South Korea)
Hong, Hyeong Gwon (KAIST, Kim Jaechul Graduate School of AI, South Korea)
Kim, Junmo (KAIST, School of Electrical Engineering, South Korea)