Safely Learning with Private Data: A Federated Learning Framework for Large Language Model
Format | Journal Article |
---|---|
Language | English |
Published | 21.06.2024 |
Summary: | Private data, being larger in scale and higher in quality than public data, can greatly
improve large language models (LLMs). However, due to privacy concerns, this
data is often dispersed across multiple silos, making its secure utilization for
LLM training a challenge. Federated learning (FL) is an ideal solution for
training models with distributed private data, but traditional frameworks like
FedAvg are unsuitable for LLMs due to their high computational demands on
clients. An alternative, split learning, offloads most training parameters to
the server while training the embedding and output layers locally, making it more
suitable for LLMs. Nonetheless, it faces significant challenges in security and
efficiency. First, the gradients of embeddings are prone to attacks, leading
to potential reverse engineering of private data. Furthermore, the server's
limitation of handling only one client's training request at a time hinders
parallel training, severely impacting training efficiency. In this paper, we
propose a federated learning framework for LLMs, named FL-GLM, which prevents
data leakage caused by both server-side and peer-client attacks while improving
training efficiency. Specifically, we first place the input block and output
block on the local client to prevent embedding-gradient attacks from the server.
Second, we employ key encryption during client-server communication to
prevent reverse-engineering attacks from peer clients. Lastly, we employ
optimization methods such as client-batching or server-hierarchical training,
adopting different acceleration strategies based on the actual computational
capabilities of the server. Experimental results on NLU and generation tasks
demonstrate that FL-GLM achieves metrics comparable to the centralized ChatGLM
model, validating the effectiveness of our federated learning framework. |
---|---|
DOI: | 10.48550/arxiv.2406.14898 |
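
The abstract describes a split-learning partition in which the input (embedding) block and the output block stay on the client while the bulk of the transformer runs on the server. The sketch below is only an illustration of that partition, not the paper's code: the class names, toy dimensions, and the plain tensor hand-off are assumptions, and the real framework additionally encrypts the client-server channel and applies client-batching or server-hierarchical scheduling.

```python
# Illustrative sketch of the split described in the abstract (assumed names/sizes).
import torch
import torch.nn as nn

VOCAB, D_MODEL, N_LAYERS = 1000, 64, 2  # toy sizes, not from the paper


class ClientBlocks(nn.Module):
    """Kept on the client: input embedding and output head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def encode(self, token_ids):   # runs locally on the private tokens
        return self.embed(token_ids)

    def decode(self, hidden):      # runs locally on the returned hidden states
        return self.head(hidden)


class ServerBody(nn.Module):
    """Offloaded to the server: the bulk of the transformer parameters."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=N_LAYERS)

    def forward(self, hidden):
        return self.blocks(hidden)


# One toy training step for a single client.
client, server = ClientBlocks(), ServerBody()
tokens = torch.randint(0, VOCAB, (2, 8))        # private client data
hidden = client.encode(tokens)                  # client-side embedding
hidden = server(hidden)                         # in FL-GLM this exchange would be encrypted
logits = client.decode(hidden)                  # client-side output block
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), tokens.reshape(-1))  # toy self-supervised loss
loss.backward()                                 # gradients flow back across the split
print(float(loss))
```

The point of this arrangement, per the abstract, is that raw tokens, the embedding table, and the embedding gradients never leave the client, while the server still carries the heavy transformer computation.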