Memory-, Bandwidth-, and Power-Aware Multi-core for a Graph Database Workload
| Published in | *Architecture of Computing Systems – ARCS 2011*, pp. 171–182 |
|---|---|
| Main Authors | |
| Format | Book Chapter |
| Language | English |
| Published | Berlin, Heidelberg: Springer Berlin Heidelberg, 2011 |
| Series | Lecture Notes in Computer Science |
Summary: | Processors have evolved to the now de facto standard multi-core architecture. Continuous advances in process technology allow for increased component density, resulting in a larger number of cores per chip. This, in turn, places pressure on off-chip and pin bandwidth. Large Last-Level Caches (LLCs), shared among all cores, have been used as a way to control off-chip requests. In this work we analyze the memory behavior of a modern, demanding application — a graph-based database workload — which is representative of future workloads. We evaluate the performance of this application under different cache configurations in terms of memory access time, bandwidth requirements, and power consumption. The experimental results show that bandwidth requirements decrease as the number of clusters decreases and the LLC per cluster grows; this configuration is also the most power efficient. If, on the other hand, memory latency is the dominant factor and bandwidth is not a limitation, then the best configuration is the one with more clusters and smaller LLCs. |
---|---|
ISBN: | 3642191363; 9783642191367 |
ISSN: | 0302-9743; 1611-3349 |
DOI: | 10.1007/978-3-642-19137-4_15 |