SFML: A personalized, efficient, and privacy-preserving collaborative traffic classification architecture based on split learning and mutual learning


Bibliographic Details
Published in: Future Generation Computer Systems, Vol. 162, p. 107487
Main Authors: Xia, Jiaqi; Wu, Meng; Li, Pengyong
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2025

Summary: Traffic classification is essential for network management and optimization, enhancing user experience, network performance, and security. However, evolving technologies and complex network environments pose challenges. Recently, researchers have turned to machine learning for traffic classification due to its ability to automatically extract and distinguish traffic features, outperforming traditional methods in handling complex patterns and environmental changes while maintaining high accuracy. Federated learning, a distributed learning approach, enables model training without revealing original data, making it appealing for traffic classification to safeguard user privacy and data security. However, applying it to this task poses two challenges. Firstly, common client devices like routers and switches have limited computing resources, which can hinder efficient training and increase time costs. Secondly, real-world applications often demand personalized models and tasks for clients, posing further complexities. To address these issues, we propose Split Federated Mutual Learning (SFML), an innovative federated learning architecture designed for traffic classification that combines split learning and mutual learning. In SFML, each client maintains two models: a privacy model for the local task and a public model for the global task. These two models learn from each other through knowledge distillation. Furthermore, by leveraging split learning, we offload most of the computational tasks to the server, significantly reducing the computational burden on the client. Experimental results demonstrate that SFML outperforms typical training architectures in terms of convergence speed, model performance, and privacy protection. Not only does SFML improve training efficiency, but it also satisfies the personalized needs of clients and reduces their computational workload and communication overhead, providing users with a superior network experience.
Highlights:
• A federated learning framework that combines split learning with mutual learning.
• Applicable for traffic classification and compatible with model heterogeneity.
• Reduction of computational and storage overhead on the client side.
• Privacy protection for data and models at the architectural level.
• Higher model accuracy and faster convergence speed.
ISSN:0167-739X
DOI:10.1016/j.future.2024.107487