DePoL: Assuring training integrity in collaborative learning via decentralized verification

Bibliographic Details
Published in: Journal of Parallel and Distributed Computing, Vol. 199, p. 105056
Main Authors: Xu, Zhicheng; Zhang, Xiaoli; Yin, Xuanyu; Cheng, Hongbing
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2025

Summary: Collaborative learning enables multiple participants to jointly train complex models but is vulnerable to attacks such as model poisoning and backdoor attacks. Ensuring training integrity can prevent these threats by blocking any tampered contributions from affecting the model. However, traditional approaches often suffer from single points of failure or performance bottlenecks in decentralized environments. To address these issues, we propose DePoL, a secure, scalable, and efficient decentralized verification framework based on duplicated execution. DePoL leverages blockchain to distribute verification tasks across multiple participant-formed groups, eliminating single-point bottlenecks. Within each group, redundant verification and majority-based arbitration prevent single points of failure. To further enhance security, DePoL introduces a two-stage plagiarism-free commitment scheme that prevents untrusted verifiers from exploiting public on-chain data. Additionally, a hybrid verification method employs fuzzy matching to handle unpredictable reproduction errors, while a “slow path” ensures zero false positives for honest trainers. Our theoretical analysis demonstrates DePoL's security and termination properties. Extensive evaluations show that DePoL's overhead is comparable to that of common distributed machine learning algorithms, while it outperforms centralized verification schemes in scalability, reducing training latency by up to 46%. DePoL also handles reproduction errors effectively, with zero false positives.

Highlights:
• Decentralized verification framework to ensure training integrity.
• Two-stage plagiarism-free commitment method to prevent cheating.
• Hybrid verification method that tolerates reproduction errors.
• Superior scalability and reduced latency compared to centralized methods.
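The abstract names two mechanisms concretely enough to sketch: the two-stage plagiarism-free commitment (commit on-chain first, reveal only after all commitments are recorded) and majority-based arbitration with fuzzy matching over re-executed training steps. The Python sketch below is illustrative only, not the paper's protocol: the SHA-256 commit-reveal construction, the L2-norm tolerance test, and the function names (commit, verify_reveal, fuzzy_match, arbitrate) are assumptions for exposition, since the record does not specify these details.

```python
import hashlib
import secrets
from collections import Counter

import numpy as np

# Stage 1: each verifier posts H(digest || nonce) on-chain, hiding its result.
# Stage 2: after all commitments are recorded, verifiers reveal (digest, nonce),
# so no verifier can plagiarize another's already-public on-chain answer.

def commit(result_digest: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce) binding a verifier to its local result."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(result_digest + nonce).digest(), nonce

def verify_reveal(commitment: bytes, result_digest: bytes, nonce: bytes) -> bool:
    """Check a revealed (digest, nonce) pair against the posted commitment."""
    return hashlib.sha256(result_digest + nonce).digest() == commitment

def fuzzy_match(claimed: np.ndarray, reproduced: np.ndarray, tol: float) -> bool:
    """Tolerate small reproduction errors (e.g., nondeterministic GPU kernels)."""
    return float(np.linalg.norm(claimed - reproduced)) <= tol

def arbitrate(claimed: np.ndarray, reproductions: list[np.ndarray], tol: float) -> str:
    """Majority vote over a group's re-executions of one training step."""
    if not reproductions:
        return "slow_path"
    votes = ["accept" if fuzzy_match(claimed, r, tol) else "reject"
             for r in reproductions]
    verdict, count = Counter(votes).most_common(1)[0]
    if count <= len(votes) // 2:   # no strict majority: escalate to a
        return "slow_path"         # deterministic, bit-exact re-check
    return verdict

if __name__ == "__main__":
    # Hypothetical example: three verifiers reproduce the update within tolerance.
    rng = np.random.default_rng(0)
    w = rng.normal(size=1000).astype(np.float32)
    repro = [w + rng.normal(scale=1e-6, size=w.shape).astype(np.float32)
             for _ in range(3)]
    print(arbitrate(w, repro, tol=1e-3))  # -> "accept"
```

In this reading, the fuzzy tolerance absorbs the unpredictable reproduction errors the abstract mentions, while escalation to a slow path when no strict majority emerges mirrors the claimed guarantee of zero false positives for honest trainers.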
ISSN: 0743-7315
DOI: 10.1016/j.jpdc.2025.105056