Memory-, Bandwidth-, and Power-Aware Multi-core for a Graph Database Workload

Bibliographic Details
Published in: Architecture of Computing Systems - ARCS 2011, pp. 171-182
Main Authors: Trancoso, Pedro; Martinez, Norbert; Larriba-Pey, Josep-Lluis
Format: Book Chapter
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2011
Series: Lecture Notes in Computer Science

Summary: Processors have evolved to the now de facto standard multi-core architecture. Continuous advances in technology allow for increased component density, resulting in a larger number of cores on the chip. This, in turn, places pressure on the off-chip and pin bandwidth. Large Last-Level Caches (LLC), shared among all cores, have been used as a way to control off-chip requests. In this work we focus on analyzing the memory behavior of a modern, demanding application, a graph-based database workload, which is representative of future workloads. We analyze the performance of this application for different cache configurations in terms of memory access time, bandwidth requirements, and power consumption. The experimental results show that bandwidth requirements decrease as the number of clusters decreases and the LLC size per cluster increases; this configuration is also the most power efficient. If, on the other hand, memory latency is the dominant factor and bandwidth is not a limitation, then the best configuration is the one with more clusters and smaller LLCs.
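The trade-off reported in the summary can be read through the standard average-memory-access-time and off-chip-bandwidth relations (an illustrative sketch only, not taken from the chapter; the symbols below are generic placeholders, not the authors' notation):

    AMAT \approx t_{hit,LLC} + m_{LLC} \cdot t_{mem}
    BW_{off\text{-}chip} \approx m_{LLC} \cdot A \cdot L

where m_{LLC} is the LLC miss rate, t_{hit,LLC} the LLC hit latency, t_{mem} the off-chip memory latency, A the rate of accesses reaching the LLC, and L the cache-line size. A larger LLC per cluster tends to lower m_{LLC}, and with it the off-chip bandwidth demand, at the cost of a higher t_{hit,LLC}; smaller per-cluster LLCs do the opposite, which is consistent with the two configurations favored in the summary depending on whether bandwidth or latency dominates.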
ISBN: 3642191363; 9783642191367
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-642-19137-4_15