Adaptive Management With Request Granularity for DRAM Cache Inside NAND-Based SSDs

Bibliographic Details
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 42, No. 8, pp. 2475–2487
Main Authors: Lin, Haodong; Li, Jun; Sha, Zhibing; Cai, Zhigang; Shi, Yuanquan; Gerofi, Balazs; Liao, Jianwei
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2023
Summary: Most flash-based solid-state drives (SSDs) adopt an onboard dynamic random access memory (DRAM) to buffer hot write data. Provided there is sufficient locality in the application's I/O access pattern, write and overwrite operations can then be absorbed by the DRAM cache, avoiding flushes of write data onto the underlying SSD cells. After analyzing typical real-world workloads on SSDs, we observed that the buffered data of small write requests are more likely to be reaccessed than those of large write requests. To efficiently utilize the limited space of the DRAM cache, this article proposes an adaptive, request granularity-based cache management scheme for SSDs. First, we introduce the request block, corresponding to a write request, as the cache management granularity, and propose a dynamic method for classifying request blocks as small or large. Next, we design three-level linked lists that support different promotion routines for small and large request blocks once their data are hit in the cache. Finally, we present a replacement scheme that evicts the request blocks with the minimum cost, taking both access hotness and time discounting into account. Experimental results show that our proposal improves cache hits and overall I/O latency by 21.8% and 14.7% on average, respectively, compared to state-of-the-art cache management schemes inside SSDs.
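
The abstract describes the scheme only at a high level, so the following is a minimal sketch, not the paper's implementation: it shows what request-granularity caching with three-level lists and minimum-cost eviction might look like. The class names, the fixed small/large threshold, the one-level-per-hit promotion rule, and the exponential-decay cost formula are all illustrative assumptions; the paper instead classifies request blocks dynamically and defines its own cost model.

from collections import OrderedDict
import time

class RequestBlock:
    """Cache entry at the granularity of one write request (assumed layout)."""
    def __init__(self, lba, size):
        self.lba = lba                        # starting logical block address
        self.size = size                      # request size in sectors
        self.hits = 0                         # access-hotness counter
        self.last_access = time.monotonic()   # for time discounting

class GranularityCache:
    """Three linked (LRU) lists: level 0 probation, level 1 middle, level 2 protected."""
    def __init__(self, capacity, small_threshold=8):
        self.capacity = capacity              # total cache size in sectors
        self.used = 0
        self.small_threshold = small_threshold  # illustrative fixed cutoff
        self.levels = [OrderedDict(), OrderedDict(), OrderedDict()]

    def _find(self, lba):
        for i, lvl in enumerate(self.levels):
            if lba in lvl:
                return i
        return None

    def access(self, lba, size):
        """Handle one write request; returns True on a cache hit."""
        i = self._find(lba)
        if i is not None:                     # hit: update hotness, then promote
            blk = self.levels[i].pop(lba)
            blk.hits += 1
            blk.last_access = time.monotonic()
            # illustrative promotion: small blocks climb one level per hit,
            # large blocks only refresh their recency within the same level
            j = min(i + 1, 2) if blk.size <= self.small_threshold else i
            self.levels[j][lba] = blk
            return True
        # miss: small requests are admitted one level higher than large ones
        level = 1 if size <= self.small_threshold else 0
        while self.used + size > self.capacity and self.used > 0:
            self._evict_min_cost()
        self.levels[level][lba] = RequestBlock(lba, size)
        self.used += size
        return False

    def _cost(self, blk, now):
        # assumed stand-in for the paper's cost model: hotness
        # exponentially discounted by the time since the last access
        return (blk.hits + 1) * 0.5 ** (now - blk.last_access)

    def _evict_min_cost(self):
        """Evict the request block with the minimum replacement cost."""
        now = time.monotonic()
        vi, vlba, best = None, None, float("inf")
        for i, lvl in enumerate(self.levels):
            for lba, blk in lvl.items():
                c = self._cost(blk, now)
                if c < best:
                    vi, vlba, best = i, lba, c
        blk = self.levels[vi].pop(vlba)       # dirty data would be flushed to flash here
        self.used -= blk.size

Admitting small blocks higher and promoting them faster mirrors the abstract's observation that data buffered by small requests is more likely to be reaccessed, while cost-based eviction retains blocks that are both hot and recently touched.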
ISSN: 0278-0070, 1937-4151
DOI: 10.1109/TCAD.2022.3229293