Differentially Private CutMix for Split Learning with Vision Transformer
| Main Authors | , , , , , , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 28.10.2022 |
Summary: Recently, the vision transformer (ViT) has started to outpace conventional CNNs in computer vision tasks. For privacy-preserving distributed learning with ViT, federated learning (FL) communicates models, which becomes ill-suited due to ViT's large model size and computing costs. Split learning (SL) detours this by communicating smashed data at a cut-layer, yet it suffers from data privacy leakage and large communication costs caused by the high similarity between ViT's smashed data and its input data. Motivated by this problem, we propose DP-CutMixSL, a differentially private (DP) SL framework built on DP patch-level randomized CutMix (DP-CutMix), a novel privacy-preserving inter-client interpolation scheme that replaces randomly selected patches in smashed data. Experimentally, we show that DP-CutMixSL not only boosts privacy guarantees and communication efficiency, but also achieves higher accuracy than its Vanilla SL counterpart. Theoretically, we show that DP-CutMix amplifies Rényi DP (RDP), with a guarantee that is upper-bounded by that of its Vanilla Mixup counterpart.
DOI: 10.48550/arxiv.2210.15986
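
The abstract describes DP-CutMix as an inter-client interpolation that replaces randomly selected patches in the smashed (cut-layer) data. As a rough illustration only, a patch-level randomized CutMix between two clients' smashed ViT representations might look like the sketch below; the function name, mask distribution, and label-mixing ratio here are assumptions for illustration, not the paper's exact mechanism, and the RDP amplification analysis is omitted entirely.

```python
import numpy as np

def patch_cutmix(smashed_a, smashed_b, replace_prob=0.5, rng=None):
    """Hypothetical sketch of patch-level randomized CutMix.

    smashed_a, smashed_b: arrays of shape (num_patches, embed_dim),
    i.e., two clients' cut-layer ("smashed") ViT representations.
    Each patch of client A is independently replaced by the
    corresponding patch of client B with probability `replace_prob`.
    Returns the mixed representation and the fraction of patches
    kept from client A (a CutMix-style label-mixing ratio).
    """
    rng = np.random.default_rng(rng)
    num_patches = smashed_a.shape[0]
    # Bernoulli mask: True = keep client A's patch, False = take B's.
    keep_a = rng.random(num_patches) < (1.0 - replace_prob)
    mixed = np.where(keep_a[:, None], smashed_a, smashed_b)
    lam = keep_a.mean()  # fraction of patches from client A
    return mixed, lam

# Toy usage: 196 patches (14x14 grid) of 768-dim embeddings, as in ViT-Base.
rng = np.random.default_rng(0)
a = rng.normal(size=(196, 768))
b = rng.normal(size=(196, 768))
mixed, lam = patch_cutmix(a, b, replace_prob=0.5, rng=1)
print(mixed.shape, round(float(lam), 3))
```

Note that, unlike Mixup, which averages whole representations, this operation keeps each transmitted patch coming from exactly one client; the randomness of the patch-selection mask is what the paper analyzes through the lens of Rényi DP.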