A space-efficient on-chip compressed cache organization for high performance computing

Bibliographic Details
Published in: Lecture Notes in Computer Science, pp. 952-964
Main Authors: Yim, Keun Soo; Lee, Jang-Soo; Kim, Jihong; Kim, Shin-Dug; Koh, Kern
Format: Conference Proceeding; Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer-Verlag, 01.01.2004
Series: ACM Conferences

Summary: In order to alleviate the ever-increasing processor-memory performance gap of high-end parallel computers, on-chip compressed caches have been developed that reduce the cache miss count and off-chip memory traffic by storing and transferring cache lines in a compressed form. However, we observed that their performance gain is often limited by their use of coarse-grained compressed cache line management, which incurs internally fragmented space. In this paper, we present a fine-grained compressed cache line management scheme that addresses the fragmentation problem while avoiding an increase in metadata size, such as the tag field and the VM page table. Based on the SimpleScalar simulator with the SPEC benchmark suite, we show that, over an existing compressed cache system, the proposed cache organization reduces memory traffic by 15%, as it delivers compressed cache lines in a fine-grained way, and reduces the cache miss count by 23%, as it stores up to three compressed cache lines in a physical cache line.
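The packing idea in the abstract can be illustrated with a small sketch. The C program below is a hedged illustration, not the authors' design: the 64-byte physical line, the 32-byte coarse slots, the 8-byte fine-grained segments, the three-line cap, and the example compressed sizes are all assumed parameters, chosen only to show why a finer allocation granularity lets up to three compressed lines share one physical line while coarse slots leave fragmented space unused.

```c
#include <stdio.h>

/* Illustrative sketch (assumed parameters, not the paper's exact design):
 * compare how many compressed cache lines fit into one 64-byte physical
 * line when space is allocated in coarse half-line slots versus
 * fine-grained 8-byte segments. */

#define PHYS_LINE_BYTES   64
#define COARSE_SLOT_BYTES 32   /* coarse-grained: each line occupies a half-line slot */
#define SEGMENT_BYTES      8   /* fine-grained: space allocated in 8-byte segments */
#define MAX_LINES_PER_SET  3   /* abstract: up to three compressed lines per physical line */

/* Round a compressed size up to the allocation granularity. */
static int round_up(int size, int granularity) {
    return ((size + granularity - 1) / granularity) * granularity;
}

/* Count how many of the given compressed lines can be packed into one
 * physical line under the given allocation granularity. */
static int pack(const int *sizes, int n, int granularity) {
    int used = 0, packed = 0;
    for (int i = 0; i < n && packed < MAX_LINES_PER_SET; i++) {
        int need = round_up(sizes[i], granularity);
        if (used + need > PHYS_LINE_BYTES)
            break;
        used += need;
        packed++;
    }
    return packed;
}

int main(void) {
    /* Hypothetical compressed sizes (bytes) of three 64-byte cache lines. */
    int sizes[] = { 16, 20, 22 };
    int n = 3;

    printf("coarse-grained (32B slots):  %d line(s) fit\n",
           pack(sizes, n, COARSE_SLOT_BYTES)); /* 32 + 32 = 64 -> 2 lines */
    printf("fine-grained (8B segments):  %d line(s) fit\n",
           pack(sizes, n, SEGMENT_BYTES));     /* 16 + 24 + 24 = 64 -> 3 lines */
    return 0;
}
```

With these assumed sizes, fine-grained allocation packs all three example lines (16 + 24 + 24 = 64 bytes), whereas the 32-byte slots hold only two; the difference is the internal fragmentation the abstract attributes to coarse-grained management.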
Bibliography: "This research was supported by the University IT Research Center project in Korea."
ISBN: 9783540241287; 3540241280
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-540-30566-8_109