Deep Learning-Based Data Storage for Low Latency in Data Center Networks
Published in | IEEE Access, Vol. 7, pp. 26411-26417 |
---|---|
Main Authors | , , , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2019 |
ISSN | 2169-3536 |
DOI | 10.1109/ACCESS.2019.2901742 |
Summary: | Low-latency data access is becoming an increasingly important challenge. The proper placement of data blocks can reduce data travel among distributed storage systems, which contributes significantly to latency reduction. However, dominant data placement optimizations have primarily relied on previously known data requests or a static initial data distribution, ignoring the dynamics of clients' data access requests and of the network. Learning technology can help data center networks (DCNs) learn from historical access information and make optimal data storage decisions. Considering a practical DCN with a fat-tree topology, we utilize the learning technique k-means to help store data blocks and thereby improve the read and write latency of the DCN, where k is the number of cores in the fat-tree. The evaluation results demonstrate that the average write and read latency of the whole system can be lowered by 33% and 45%, respectively. The best value of the parameter k, which equals the number of cores in the DCN, is analyzed and recommended as guidance for real applications. |
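The summary describes clustering data blocks with k-means, with k set to the number of core switches in the fat-tree, so that blocks with similar access patterns land in the same server group. The record does not give the paper's exact features or assignment rule, so the following is only a minimal sketch, assuming each block is represented by a vector of per-client historical access counts; the function name `kmeans_placement` and all parameter choices are illustrative, not the authors' implementation.

```python
import numpy as np

def kmeans_placement(access_matrix, k, n_iters=100, seed=0):
    """Cluster data blocks by their historical client-access patterns.

    access_matrix: (n_blocks, n_clients) array of access counts.
    k: number of clusters; per the summary, set to the number of
       core switches in the fat-tree (an assumption of this sketch).
    Returns an array of length n_blocks mapping each block to a
    cluster, i.e. a candidate server group under one core.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(access_matrix, dtype=float)
    # Initialise centroids from k distinct blocks.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each block to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster emptied.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels

# Toy example: 12 blocks, 4 clients, k = 2 cores. Blocks 0-5 are
# accessed only by clients 0-1, blocks 6-11 only by clients 2-3.
rng = np.random.default_rng(1)
access = np.vstack([
    rng.poisson(10, size=(6, 4)) * np.array([1, 1, 0, 0]),
    rng.poisson(10, size=(6, 4)) * np.array([0, 0, 1, 1]),
])
labels = kmeans_placement(access, k=2)
```

Blocks in the same cluster would then be stored in the subtree closest to the clients that dominate their access vectors, reducing cross-core traffic on reads and writes.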