A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks


Bibliographic Details
Published in IEEE Transactions on Parallel and Distributed Systems, Vol. 30, No. 5, pp. 965-976
Main Authors Chen, Jianguo; Li, Kenli; Bilal, Kashif; Zhou, Xu; Li, Keqin; Yu, Philip S.
Format Journal Article
Language English
Published New York: IEEE, 01.05.2019 (The Institute of Electrical and Electronics Engineers, Inc.)
Online Access Get full text

Abstract Benefiting from large-scale training datasets and complex training networks, Convolutional Neural Networks (CNNs) are widely applied in various fields with high accuracy. However, the training process of CNNs is very time-consuming: large numbers of training samples and iterative operations are required to obtain high-quality weight parameters. In this paper, we focus on the time-consuming training process of large-scale CNNs and propose a Bi-layered Parallel Training (BPT-CNN) architecture for distributed computing environments. BPT-CNN consists of two main components: (a) an outer-layer parallel training for multiple CNN subnetworks on separate data subsets, and (b) an inner-layer parallel training for each subnetwork. In the outer-layer parallelism, we address critical issues of distributed and parallel computing, including data communication, synchronization, and workload balance. A heterogeneity-aware Incremental Data Partitioning and Allocation (IDPA) strategy is proposed, in which large-scale training datasets are partitioned and allocated to the computing nodes in batches according to their computing power. To minimize synchronization waiting during the global weight update process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In the inner-layer parallelism, we further accelerate the training process for each CNN subnetwork on each computer, where the computation steps of the convolutional layer and the local weight training are parallelized based on task-parallelism. We introduce task decomposition and scheduling strategies with the objectives of thread-level load balancing and minimum waiting time for critical paths. Extensive experimental results indicate that the proposed BPT-CNN effectively improves the training performance of CNNs while maintaining accuracy.
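
To ground the IDPA description in the abstract, here is a minimal Python sketch of heterogeneity-aware proportional partitioning, assuming a single proportional split rather than the paper's batched incremental allocation; the names partition_by_power and node_speeds are illustrative assumptions, not identifiers from the paper.

from typing import Sequence

def partition_by_power(num_samples: int, node_speeds: Sequence[float]) -> list[int]:
    """Assign a share of the training samples to each node in proportion
    to its measured computing power (heterogeneity-aware partitioning)."""
    total_speed = sum(node_speeds)
    # Proportional shares, rounded down.
    shares = [int(num_samples * s / total_speed) for s in node_speeds]
    # Hand the few leftover samples to the fastest nodes.
    leftover = num_samples - sum(shares)
    for i in sorted(range(len(node_speeds)), key=lambda i: -node_speeds[i])[:leftover]:
        shares[i] += 1
    return shares

# Example: 10,000 samples over three heterogeneous nodes with relative speeds 1:2:4.
print(partition_by_power(10_000, [1.0, 2.0, 4.0]))  # [1428, 2857, 5715]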
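The AGWU strategy removes the synchronization barrier of a bulk-synchronous weight update: each node submits its result as soon as it finishes local training. The following is a sketch of that idea only, assuming a simple averaging merge rule and threads standing in for distributed workers; the paper defines its own update rule and communication layer.

import threading
import numpy as np

class AsyncWeightServer:
    """Global weight holder that merges worker updates as they arrive,
    so fast nodes never block waiting for stragglers."""

    def __init__(self, initial_weights: np.ndarray):
        self.weights = initial_weights.copy()
        self._lock = threading.Lock()  # protects only the global weights

    def push_and_pull(self, local_weights: np.ndarray) -> np.ndarray:
        # Merge immediately (here: plain averaging, an assumption for
        # illustration) and return the new global weights for the
        # worker's next local training round.
        with self._lock:
            self.weights = 0.5 * (self.weights + local_weights)
            return self.weights.copy()

# Each worker thread loops: train locally, then
# new_global = server.push_and_pull(local_weights), with no global barrier.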
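For the inner layer, the abstract's goal of thread-level load balancing can be illustrated with the standard longest-processing-time (LPT) heuristic; this is a stand-in with the same objective, not the paper's actual task decomposition and scheduling strategy.

import heapq

def lpt_schedule(task_costs: list[float], num_threads: int) -> list[list[int]]:
    """Greedily assign tasks (by index) to threads, costliest first,
    always onto the currently least-loaded thread."""
    heap = [(0.0, t) for t in range(num_threads)]  # (load, thread_id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_threads)]
    for task in sorted(range(len(task_costs)), key=lambda i: -task_costs[i]):
        load, t = heapq.heappop(heap)
        assignment[t].append(task)
        heapq.heappush(heap, (load + task_costs[task], t))
    return assignment

# Example: six decomposed tasks on two threads; final loads are 12 vs. 11.
print(lpt_schedule([7, 5, 4, 3, 2, 2], num_threads=2))  # [[0, 3, 5], [1, 2, 4]]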
Author Details
– Chen, Jianguo (ORCID: 0000-0001-5009-578X; email: cccjianguo@163.com), College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
– Li, Kenli (ORCID: 0000-0002-2635-7716; email: lkl@hnu.edu.cn), College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
– Bilal, Kashif (ORCID: 0000-0002-4381-8094; email: kashifbilal@ciit.net.pk), COMSATS University Islamabad, Abbottabad, Pakistan
– Zhou, Xu (ORCID: 0000-0002-1400-8375; email: happypanda2006@126.com), College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
– Li, Keqin (ORCID: 0000-0001-5224-4048; email: lik@newpaltz.edu), College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
– Yu, Philip S. (email: psyu@uic.edu), Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
CODEN ITDSEO
Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
DOI 10.1109/TPDS.2018.2877359
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Xplore
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Engineering
Computer Science
EISSN 1558-2183
EndPage 976
Genre orig-research
GrantInformation
– National Key R&D Program of China, grant 2016YFB0200201
– National Outstanding Youth Science Program of the National Natural Science Foundation of China, grant 61625202
– China Postdoctoral Science Foundation, grant 2018T110829
– International Postdoctoral Exchange Fellowship Program, grant 2018024
– National Science Foundation, grants IIS-1526499, IIS-1763325, and CNS-1626432
– NSFC, grant 61672313
ISSN 1045-9219
IsPeerReviewed true
IsScholarly true
Issue 5
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 12
PublicationDate 2019-05-01
PublicationPlace New York
PublicationTitle IEEE transactions on parallel and distributed systems
PublicationTitleAbbrev TPDS
PublicationYear 2019
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 965
SubjectTerms Acceleration
Artificial neural networks
bi-layered parallel computing
Big data
Computation
Computational modeling
Computer architecture
Computer networks
convolutional neural networks
Datasets
deep learning
Distributed computing
Distributed processing
Iterative methods
Neural networks
Parallel processing
Synchronism
Task analysis
Task scheduling
Training
Weight
Weightlifting
Workload
URI https://ieeexplore.ieee.org/document/8502141
https://www.proquest.com/docview/2210033863
Volume 30