A Robust Adversary Detection-Deactivation Method for Metaverse-Oriented Collaborative Deep Learning

Bibliographic Details
Published in: IEEE Sensors Journal, Vol. 24, no. 14, pp. 22011-22022
Main Authors: Li, Pengfei; Zhang, Zhibo; Al-Sumaiti, Ameena Saad; Werghi, Naoufel; Yeun, Chan Yeob
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.07.2024
Summary: The metaverse aims to create a digital environment that maps the real world onto an online platform supported by large quantities of real-time interactions. Pretrained artificial intelligence (AI) models are demonstrating increasing capability in helping the metaverse achieve responsive behavior with negligible delay, and nowadays many large models are trained jointly by various participants in a manner named collaborative deep learning (CDL). However, several security weaknesses threaten the safety of the CDL training process and can enable fatal attacks against either the pretrained large model or the local sensitive datasets held by individual entities. In CDL, malicious participants can hide among the innocent majority and silently upload deceptive parameters to degrade model performance, or they can abuse the downloaded parameters to construct a generative adversarial network (GAN) that illegally acquires the private information of others. To address these vulnerabilities, this article proposes an adversary detection-deactivation method that limits and isolates the access of potentially malicious participants, and that quarantines and disables GAN attacks or harmful backpropagation (BP) of threatening received gradients. A detailed protection analysis has been conducted on a multiview (MV) CDL case, and results show that the protocol can effectively prevent harmful access through heuristic manner analysis and can protect the existing model by swiftly checking received gradients using only one low-cost branch with an embedded firewall.
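The abstract describes screening received gradients with a single low-cost branch before they are applied, quarantining suspicious participants. The following is a minimal illustrative sketch of that general idea, not the authors' actual protocol: the function names, the norm cap, and the cosine-similarity floor are all assumptions chosen for illustration, using a trusted reference direction to flag abnormal updates before aggregation.

```python
import numpy as np

def screen_gradient(grad, reference, norm_cap=10.0, cos_floor=0.0):
    """Hypothetical low-cost check on one participant's gradient.

    Rejects updates whose magnitude is abnormally large or that point
    against the aggregate direction of trusted updates. Thresholds are
    illustrative assumptions, not values from the paper.
    """
    g = np.asarray(grad, dtype=float).ravel()
    r = np.asarray(reference, dtype=float).ravel()
    if np.linalg.norm(g) > norm_cap:
        return False  # suspiciously large update: likely poisoning
    denom = np.linalg.norm(g) * np.linalg.norm(r)
    if denom > 0 and float(g @ r) / denom < cos_floor:
        return False  # opposes the trusted direction: quarantine
    return True

def aggregate(updates, reference):
    """Average only the updates that pass screening; count the quarantined."""
    accepted = [u for u in updates if screen_gradient(u, reference)]
    quarantined = len(updates) - len(accepted)
    if accepted:
        mean = np.mean(accepted, axis=0)
    else:
        mean = np.zeros_like(np.asarray(reference, dtype=float))
    return mean, quarantined
```

In this toy setup, an oversized adversarial gradient is rejected before backpropagation while the honest updates are averaged normally; a real deployment would derive the reference and thresholds from the protocol's own detection branch.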
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3325771