Exploring DRAM cache architectures for CMP server platforms

Bibliographic Details
Published in: 2007 25th International Conference on Computer Design, pp. 55-62
Main Authors: Zhao, L., Iyer, R., Illikkal, R., Newell, D.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2007

Summary: As dual-core and quad-core processors arrive in the marketplace, the momentum behind CMP architectures continues to grow. As more and more cores/threads are placed on-die, the pressure on the memory subsystem increases rapidly. To address this issue, we explore DRAM cache architectures for CMP platforms. In this paper, we investigate the impact of introducing a low-latency, large-capacity, high-bandwidth DRAM-based cache between the last-level SRAM cache and the memory subsystem. We first show the potential benefits of large DRAM caches for key commercial server workloads. Since the primary hurdle to achieving these benefits is the tag space overhead associated with DRAM caches, we investigate various organization options to identify the most efficient one. Our results show that the combination of 8-bit partial tags and 2-way sectoring achieves the highest performance improvement (20% to 70%) with the lowest tag space overhead (less than 25%).
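
For context, the tag-space argument can be made concrete with a small back-of-the-envelope Python sketch. The cache size, line size, and full-tag width below are assumed figures chosen only for illustration and are not taken from the paper; only the 8-bit partial tags and 2-way sectoring come from the summary above.

# Rough sketch (not from the paper) of why tag storage is the hurdle for
# large DRAM caches, and how partial tags plus sectoring shrink it.
# Cache size, line size, and full-tag width are assumed illustrative values.

CACHE_SIZE    = 128 * 2**20   # assumed 128 MB DRAM cache
LINE_SIZE     = 64            # assumed 64 B cache lines
FULL_TAG_BITS = 28            # assumed per-line tag/state bits with conventional tags

num_lines = CACHE_SIZE // LINE_SIZE

# Conventional organization: one full tag per line, held in SRAM.
full_tag_bytes = num_lines * FULL_TAG_BITS // 8

# Organization named in the summary: 8-bit partial tags with 2-way sectoring,
# i.e. one partial tag covers a sector of two adjacent lines.
PARTIAL_TAG_BITS = 8
SECTOR_LINES     = 2
sectored_tag_bytes = (num_lines // SECTOR_LINES) * PARTIAL_TAG_BITS // 8

print(f"lines:                 {num_lines}")
print(f"full tags:             {full_tag_bytes / 2**20:.1f} MB of SRAM")
print(f"partial+sectored tags: {sectored_tag_bytes / 2**20:.1f} MB of SRAM "
      f"({100 * sectored_tag_bytes / full_tag_bytes:.0f}% of the full-tag cost)")

Even under these assumed parameters, the partial-tag, sectored organization keeps the SRAM tag array at a small fraction of the conventional full-tag cost, which is the effect the summary quantifies as a tag space overhead below 25%.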
ISBN: 9781424412570, 1424412579
ISSN: 1063-6404, 2576-6996
DOI: 10.1109/ICCD.2007.4601880