Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing
Published in | IEEE Transactions on Parallel and Distributed Systems, Vol. 33, no. 3, pp. 630-641 |
---|---|
Main Authors | Mills, Jed; Hu, Jia; Min, Geyong |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.03.2022 |
ISSN | 1045-9219 (print); 1558-2183 (electronic) |
DOI | 10.1109/TPDS.2021.3098467 |
Abstract | Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to 5× when using existing FL optimisation strategies, and with a further 3× improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead. |
---|---|
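The core mechanism the abstract describes (federated averaging that deliberately excludes Batch-Normalization parameters, so each user keeps a model personalised to its own data) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter names, the `is_bn` name test, and the toy client states are all assumptions for demonstration.

```python
# Hypothetical sketch of the MTFL idea: during federated aggregation,
# BN-layer parameters stay private to each client, while all other
# parameters are averaged across clients as in plain FedAvg.

def is_bn(name):
    """Treat any parameter whose name marks it as BN as non-federated."""
    return "bn" in name

def fedavg_round(client_models, client_sizes):
    """Data-size-weighted FedAvg over the federated (non-BN) parameters."""
    total = sum(client_sizes)
    agg = {}
    for name in client_models[0]:
        if is_bn(name):
            continue  # BN layers are never averaged: they stay on-device
        agg[name] = sum(m[name] * (n / total)
                        for m, n in zip(client_models, client_sizes))
    return agg

def apply_global(client_model, global_params):
    """A client overwrites only its federated parameters; BN stays local."""
    updated = dict(client_model)
    updated.update(global_params)
    return updated

# Two toy clients: the shared conv weight is averaged, BN stays personal.
c1 = {"conv.w": 1.0, "bn.gamma": 0.5}
c2 = {"conv.w": 3.0, "bn.gamma": 2.0}
g = fedavg_round([c1, c2], [10, 30])  # weighted mean: 0.25*1 + 0.75*3 = 2.5
c1_new = apply_global(c1, g)
print(c1_new)  # {'conv.w': 2.5, 'bn.gamma': 0.5}
```

A full MTFL run would repeat this round with local training on each client between aggregations; the FedAvg-Adam variant mentioned in the abstract would further replace the plain weighted average with Adam-style adaptive server updates, which this sketch does not attempt to show.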
Author | Hu, Jia Mills, Jed Min, Geyong |
Author details | Jed Mills (ORCID 0000-0001-6344-9364, jm729@exeter.ac.uk); Jia Hu (ORCID 0000-0001-5406-8420, j.hu@exeter.ac.uk); Geyong Min (ORCID 0000-0003-1395-7314, g.min@exeter.ac.uk); all with the Department of Computer Science, University of Exeter, Exeter, U.K. |
CODEN | ITDSEO |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
Discipline | Engineering Computer Science |
EISSN | 1558-2183 |
EndPage | 641 |
Genre | orig-research |
GrantInformation | EU Horizon 2020 INITIATE (grant 101008297); EPSRC DTP Studentship |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-5406-8420 0000-0003-1395-7314 0000-0001-6344-9364 |
OpenAccessLink | http://hdl.handle.net/10871/126748 |
PageCount | 12 |
PublicationDate | 2022-03-01 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Parallel and Distributed Systems |
PublicationTitleAbbrev | TPDS |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 630 |
SubjectTerms | Adaptation models; adaptive optimization; Algorithms; Artificial neural networks; Computational modeling; Convergence; Customization; Data models; deep learning; Edge computing; Electronic devices; Federated learning; Iterative methods; Machine learning; Model accuracy; multi-task learning; Neural networks; Optimization; Servers; Training |
Title | Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing |
URI | https://ieeexplore.ieee.org/document/9492755 https://www.proquest.com/docview/2560908439 |
Volume | 33 |