Privacy-Preserving Federated Brain Tumour Segmentation


Bibliographic Details
Published in: Machine Learning in Medical Imaging, Vol. 11861, pp. 133–141
Main Authors: Li, Wenqi; Milletarì, Fausto; Xu, Daguang; Rieke, Nicola; Hancox, Jonny; Zhu, Wentao; Baust, Maximilian; Cheng, Yan; Ourselin, Sébastien; Cardoso, M. Jorge; Feng, Andrew
Format: Book Chapter; Journal Article
Language: English
Published: Cham: Springer International Publishing, 01.01.2019
Series: Lecture Notes in Computer Science

Summary: Due to medical data privacy regulations, it is often infeasible to collect and share patient data in a centralised data lake. This poses challenges for training machine learning algorithms, such as deep convolutional networks, which often require large numbers of diverse training examples. Federated learning sidesteps this difficulty by bringing code to the patient data owners and only sharing intermediate model training updates among them. Although a high-accuracy model could be achieved by appropriately aggregating these model updates, the shared model could indirectly leak the local training examples. In this paper, we investigate the feasibility of applying differential-privacy techniques to protect the patient data in a federated learning setup. We implement and evaluate practical federated learning systems for brain tumour segmentation on the BraTS dataset. The experimental results show that there is a trade-off between model performance and privacy protection costs.
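The aggregation-with-privacy idea in the summary can be illustrated with a minimal sketch: each client's model update is clipped to bound its sensitivity, the clipped updates are averaged, and Gaussian noise calibrated to the clipping bound is added before the result is shared. This is a generic differentially private federated-averaging sketch, not the paper's exact mechanism; the function names (`clip_update`, `dp_federated_average`) and parameters (`clip_norm`, `noise_multiplier`) are illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the update so its L2 norm is at most clip_norm,
    # bounding any single client's influence on the average.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.0, rng=None):
    # Average the clipped client updates, then add Gaussian noise
    # whose scale is tied to clip_norm (the per-client sensitivity).
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

A larger `noise_multiplier` gives stronger privacy but a noisier aggregated model, which is the performance/privacy trade-off the summary refers to.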
ISBN: 9783030326913; 3030326918
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-32692-0_16