Adaptive cache pre-forwarding policy for distributed deep learning

Bibliographic Details
Published in Computers & Electrical Engineering, Vol. 82, Article 106558, 20 pages
Main Authors Cheng, Sheng-Tzong; Hsu, Chih-Wei; Horng, Gwo-Jiun; Lin, Che-Hsuan
Format Journal Article
Language English
Published Amsterdam: Elsevier Ltd, 01.03.2020

Abstract With the rapid growth of deep learning algorithms, several high-accuracy models have been developed and applied in many real-world domains. Deep learning is inherently parallel and well suited to distributed computing, which can significantly improve system throughput. However, cross-machine training faces a bottleneck: network latency. Nodes frequently need to wait for synchronization, and each synchronization may transfer from several megabytes to hundreds of megabytes. Network communication therefore takes considerable time in the training process, which reduces system performance, and many computing architectures have been proposed to address this. This paper proposes a distributed computing system for deep learning. Our design reduces synchronization times and network blocking times with a new cache mechanism, called cache pre-forwarding, which exploits reinforcement learning to train a pre-forwarding policy that increases the cache hit rate. Because of the properties of reinforcement learning, the policy is adaptive and applicable to different computing environments. Finally, we experimentally demonstrate that our system is feasible.
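The mechanism the abstract describes, a reinforcement-learning policy rewarded by the cache hit rate of pre-forwarded data, can be sketched in miniature. Everything below (the four-block setup, the skewed request distribution, the function names) is an invented illustration and not the paper's implementation; it performs exact policy-gradient ascent on the expected hit rate of a softmax pre-forwarding policy.

```python
import math

# Toy setting (assumed, not from the paper): a worker requests one of four
# parameter blocks; block 0 is requested most often. The policy chooses one
# block to pre-forward into the worker's cache; a hit occurs when the
# pre-forwarded block matches the next request.
REQUEST_PROBS = [0.7, 0.1, 0.1, 0.1]
N = len(REQUEST_PROBS)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(steps=500, lr=1.0):
    theta = [0.0] * N  # one logit per block
    for _ in range(steps):
        pi_t = softmax(theta)
        # Expected hit rate J = sum_a pi[a] * P(request == a); its gradient
        # w.r.t. theta[i] is pi[i] * (P(request == i) - J), i.e. REINFORCE
        # taken in expectation over actions and requests.
        j = sum(p * w for p, w in zip(pi_t, REQUEST_PROBS))
        for i in range(N):
            theta[i] += lr * pi_t[i] * (REQUEST_PROBS[i] - j)
    return softmax(theta)

pi = train()
print(pi.index(max(pi)))  # → 0: the learned policy pre-forwards the hot block
```

With the skewed requests above, the learned policy concentrates on the hot block; a sampled REINFORCE variant, which is what actual online training would use, follows the same gradient in expectation.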
ArticleNumber 106558
Authors
1. Cheng, Sheng-Tzong (Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan)
2. Hsu, Chih-Wei (Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan; email: awei.hsu@seed.net.tw)
3. Horng, Gwo-Jiun (Department of Computer Science and Information Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan; email: grojium@gmail.com)
4. Lin, Che-Hsuan (Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan)
CitedBy 10.1109/ACCESS.2023.3234761; 10.1016/j.heliyon.2023.e23567
ContentType Journal Article
Copyright 2020 Elsevier Ltd
DOI 10.1016/j.compeleceng.2020.106558
Discipline Engineering
EISSN 1879-0755
EndPage 20
ExternalDocumentID 10_1016_j_compeleceng_2020_106558
S0045790619313266
ISSN 0045-7906
IsPeerReviewed true
IsScholarly true
Keywords Deep learning; Distributed computing; Cache; Reinforcement learning
Language English
PageCount 20
PublicationDate March 2020 (2020-03-01)
PublicationPlace Amsterdam
PublicationTitle Computers & Electrical Engineering
PublicationYear 2020
Publisher Elsevier Ltd (Elsevier BV)
StartPage 106558
SubjectTerms Algorithms
Cache
Reinforcement learning
Computer networks
Deep learning
Distributed computing
Distributed processing
Machine learning
Model accuracy
Network latency
Synchronism
Training
Title Adaptive cache pre-forwarding policy for distributed deep learning
URI https://dx.doi.org/10.1016/j.compeleceng.2020.106558
https://www.proquest.com/docview/2441588730
Volume 82