Reducing cache access energy in array-intensive applications
| Published in | Design, Automation, and Test in Europe: Proceedings of the conference on Design, Automation and Test in Europe; 04-08 Mar. 2002, p. 1092 |
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 2002 |
| Subjects | |
| Summary: | Summary form only given. Cache memories are known to consume a large percentage of on-chip energy in current microprocessors. Direct-mapped caches are generally more energy efficient than set-associative caches: they are simpler and require no complex line-replacement mechanisms. This study goes beyond performance-centric techniques and proposes an energy-oriented optimization strategy that aims directly at reducing the per-access energy cost of direct-mapped data caches (rather than reducing it only as a side effect of a performance-oriented optimization). Specifically, we have developed a compiler algorithm that uses access-pattern analysis to determine which memory references are certain to result in cache hits in a virtually-addressed direct-mapped data cache. After detecting such references, the compiler substitutes the corresponding load operations with energy-efficient loads that access only the data array of the cache instead of both the tag and data arrays. This tag-access elimination, in turn, reduces the per-access energy consumption for data accesses. |
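To illustrate the idea behind the abstract (this is a sketch of the underlying cache behavior, not the paper's actual compiler algorithm), the following Python fragment classifies a memory-access trace for a direct-mapped cache: an access whose cache set still holds the same line tag as the previous access to that set is a guaranteed hit, so a tag-less, data-array-only load would suffice. The cache geometry (32-byte lines, 256 sets) is an assumed example, not taken from the paper.

```python
LINE_SIZE = 32   # bytes per cache line (illustrative assumption)
NUM_SETS = 256   # direct-mapped: one line per set (illustrative assumption)

def cache_set(addr):
    # Set index of a direct-mapped cache: line number modulo set count.
    return (addr // LINE_SIZE) % NUM_SETS

def line_tag(addr):
    # Tag: the address bits above the set index and line offset.
    return addr // (LINE_SIZE * NUM_SETS)

def classify(trace):
    """Label each access 'hit-no-tag' when the line in its set is
    provably still resident (same tag as the last access to that set),
    else 'tag-check' (a normal load that must compare tags)."""
    resident = {}   # set index -> tag of the line currently in that set
    labels = []
    for addr in trace:
        s, t = cache_set(addr), line_tag(addr)
        if resident.get(s) == t:
            labels.append("hit-no-tag")   # tag-array access can be skipped
        else:
            labels.append("tag-check")    # miss or first touch: check tag
            resident[s] = t               # line is (re)loaded into the set
    return labels

# A sequential array walk, word by word: after the first word of each
# 32-byte line, every access to that line is a guaranteed hit, so only
# one tag check per line is needed.
trace = list(range(0, 256, 4))            # 64 word accesses over 8 lines
labels = classify(trace)
print(labels.count("tag-check"), labels.count("hit-no-tag"))  # → 8 56
```

The compiler technique the abstract describes reaches the same conclusion statically, from access-pattern analysis of array references, so the tag-less loads can be emitted at compile time rather than detected at run time.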
| ISBN: | 0769514715; 9780769514710 |
| ISSN: | 1530-1591; 1558-1101 |
| DOI: | 10.1109/DATE.2002.998448 |