A coordinated multi-agent reinforcement learning approach to multi-level cache co-partitioning

Bibliographic Details
Published in: Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017, pp. 800-805
Main Authors: Jain, Rahul; Panda, Preeti Ranjan; Subramoney, Sreenivas
Format: Conference Proceeding
Language: English
Published: EDAA, 01.03.2017
More Information
Summary: The widening gap between processor and memory performance has led to the inclusion of multiple levels of caches in modern multi-core systems. Processors with simultaneous multithreading (SMT) support multiple hardware threads on the same physical core, so the nominally private caches are effectively shared among those threads. Any inefficiency in the cache hierarchy can degrade system performance, which motivates co-optimizing multiple cache levels by trading off individual application throughput for better system throughput and energy-delay product (EDP). We propose a novel coordinated multi-agent reinforcement learning technique for Dynamic Cache Co-partitioning, called Machine Learned Caches (MLC). MLC has low implementation overhead and does not require any special hardware data profilers. We validated our proposal with 15 eight-core workloads created from Spec2006 benchmarks and found it to be an effective co-partitioning technique. MLC exhibited system throughput and EDP improvements of up to 14% (gmean: 9.35%) and 19.2% (gmean: 13.5%), respectively. We believe this is the first attempt at addressing the problem of multi-level cache co-partitioning.
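
Illustration: the abstract describes coordinated learning agents that repartition shared cache capacity at run time. As a rough, hypothetical sketch only, the Python snippet below shows per-core tabular agents that request last-level-cache ways and learn from a single shared reward; the way counts, the toy throughput model, and every identifier are assumptions made for illustration and are not taken from the MLC design in the paper.

import random

N_CORES = 4                                # one learning agent per core (assumed)
TOTAL_WAYS = 16                            # associativity of the shared cache to partition (assumed)
ACTIONS = list(range(1, TOTAL_WAYS + 1))   # ways an agent may request
EPS, ALPHA = 0.1, 0.2                      # exploration rate and learning rate

# Assumed per-core cache demand: higher-demand cores gain more from extra ways.
DEMAND = [2.0, 4.0, 8.0, 1.0]

def system_throughput(ways):
    # Toy shared reward with diminishing returns in the ways given to each core.
    return sum(d * (1.0 - 0.5 ** w) for d, w in zip(DEMAND, ways))

def coordinate(requests):
    # Coordinator scales the agents' requests so the partition fits the cache.
    total = sum(requests)
    alloc = [max(1, round(r * TOTAL_WAYS / total)) for r in requests]
    while sum(alloc) > TOTAL_WAYS:         # trim any rounding overshoot
        alloc[alloc.index(max(alloc))] -= 1
    return alloc

# One stateless Q-table per agent, indexed by the requested way count.
q = [[0.0] * (TOTAL_WAYS + 1) for _ in range(N_CORES)]

for epoch in range(5000):
    requests = []
    for core in range(N_CORES):
        if random.random() < EPS:          # explore
            requests.append(random.choice(ACTIONS))
        else:                              # exploit the current value estimates
            requests.append(max(ACTIONS, key=lambda w: q[core][w]))
    allocation = coordinate(requests)
    reward = system_throughput(allocation)  # one shared, cooperative reward
    for core, a in enumerate(requests):
        q[core][a] += ALPHA * (reward - q[core][a])

best = [max(ACTIONS, key=lambda w: q[core][w]) for core in range(N_CORES)]
print("learned partition:", coordinate(best))

Using a single shared reward is what makes this toy cooperative rather than competitive; the paper's actual coordination mechanism, state features, reward definition, and treatment of multiple cache levels are described in the full text.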
ISSN:1558-1101
DOI:10.23919/DATE.2017.7927098