Adaptive DRAM Cache Division for Computational Solid-state Drives

Bibliographic Details
Published in: 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1 - 6
Main Authors: Yu, Shuaiwen; Sha, Zhibing; Tang, Chengyong; Cai, Zhigang; Tang, Peng; Huang, Min; Li, Jun; Liao, Jianwei
Format: Conference Proceeding
Language: English
Published: EDAA, 25.03.2024

Summary: High computational capabilities enable modern solid-state drives (SSDs) to act as computing nodes rather than just faster storage devices; an SSD with such capability is generally called a computational SSD (CompSSD). The DRAM data cache of a CompSSD must therefore hold not only the output data of tasks running at the host side but also the input data of tasks executed at the SSD side. To boost the utilization efficiency of the cache inside a CompSSD, this paper proposes an adaptive cache division scheme that dynamically splits the cache space to separately buffer the output data of host-side tasks and the input data of CompSSD-side tasks. Specifically, we construct a mathematical model running at the flash translation layer of the CompSSD that periodically determines the cache proportions for the workloads running at the host side and the CompSSD side, by considering the ratios of read/write data amounts, the cache hits, and the overhead of data transfer between the storage device and the host. Both the output data and the input data can then be buffered in their own private cache partitions, so that the overall I/O performance can be enhanced. Trace-driven simulation experiments show that our proposal reduces overall I/O latency by 27.5% on average compared with existing cache management schemes.
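The periodic cache-division decision described in the summary can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual model: the scoring formula, the `transfer_cost` weight, and all parameter names are assumptions introduced here to show how per-epoch workload counters (data amounts, cache hits, transfer overhead) could drive an adaptive split of the DRAM cache between host-side and CompSSD-side data.

```python
# Hypothetical sketch of an epoch-based adaptive cache division between
# host-side output data and CompSSD-side input data. The weighting scheme
# is an illustrative assumption, not the paper's mathematical model.

def split_cache(total_slots, host_stats, ssd_stats, transfer_cost=1.0):
    """Return (host_slots, ssd_slots) for the next epoch.

    Each stats dict holds per-epoch counters:
      'bytes' - read/write data amount observed for that side
      'hits'  - cache hits observed for that side
    transfer_cost weights the host side, since a host-side miss incurs an
    extra data transfer between the storage device and the host.
    """
    # Score each side by its data volume plus hit utility; weight the
    # host side by the cost of crossing the storage interface.
    host_score = transfer_cost * (host_stats['bytes'] + host_stats['hits'])
    ssd_score = ssd_stats['bytes'] + ssd_stats['hits']
    total = host_score + ssd_score
    if total == 0:
        # No activity this epoch: fall back to an even split.
        host_slots = total_slots // 2
    else:
        host_slots = round(total_slots * host_score / total)
    # Keep at least one slot per side so neither workload starves.
    host_slots = max(1, min(total_slots - 1, host_slots))
    return host_slots, total_slots - host_slots
```

Re-invoking `split_cache` at the end of each epoch with fresh counters would repartition the cache as the host-side and device-side workloads shift, which is the adaptive behavior the summary describes.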
ISSN: 1558-1101
DOI: 10.23919/DATE58400.2024.10546745